Generative AI: A Teenager Acting Out

Four years ago, tech journalist and old friend Lee Goldberg asked me to semi-/sort-of collaborate with him on his yearly April Fools' Day (April 1st) article. Basically, Lee had a really fun idea for writing about "artificial stupidity" instead of "artificial intelligence," and he asked me to give him some real-AI credibility to use in his article.


First, a Bit of Fun (Light-Hearted 2-Minute Read)

We had SO MUCH FUN, and Lee published that article in Electronic Design on March 31, 2020. Here's the link to "NOVIDIA Announces Artificial Stupidity Tech — CEO Claims It Will Make AI Apps Smarter." There's just enough "real" AI in this piece to make you double-check the date before believing the whole thing. (Fun, two-minute read – kick off April with a grimace and a chuckle.)


And Now, a Real-World Check-In

So now, four years later – where are we?

Three key points.

  • Artificial stupidity – oddly enough, it's been real. Everything from the ill-fated Gemini 1.5 launch to the pervasive hallucinations in LLMs to the fact that we can insert falsehoods into an LLM that can't be removed … it's all a mad kaleidoscope. (And that last point is more like deliberately adversarial behaviors that can't be eradicated – short of deconstructing an entire LLM and starting from scratch … )
  • The whole "bias" issue in AI … which has rightly garnered a lot of attention. Here's a novel twist, which is laboratory-study based and MAY OR MAY NOT contribute to REAL-WORLD BEHAVIOR … but we're getting awfully close to the 2024 elections (about six months from today – just close enough to start getting very concerned about this).
  • We just had a serious AI uptick. I'm not going to call it a "singularity" or even a "pre-singularity" event; but in the past three months we've had a significant increase in the slope (the rapidity of change) in the AI world, evidenced by the REAL NVIDIA CEO Jensen Huang's GTC 2024 keynote speech (see HERE for the full two-hour YouTube clip AS WELL AS a 16-min YouTube summary), and by Microsoft's launch of Copilot (link to Microsoft's Super Bowl commercial on this SAME PAGE as the NVIDIA links).

So yeah, it’s real.

And it's about as mature as a teenager with raging hormones, the car keys, and a bunch of buddies, out for a "good time" on Friday night.


How to Deal

So our very real, bottom-BOTTOM line is: how do we deal? How do we adapt?

  • We KNOW that society is going to change – very, VERY fast.
  • The LLM emergence over the past 1-1/2 years has been introductory. ALL generative AI is introductory, or transition-AI. (I still call it "pre-AI," because there's no real "intelligence" under the hood just yet.)
  • Generative AI is still an immature technology – capable of acting as though it knows what it’s doing, but making all sorts of crazy mistakes. (Teenage kid acting out.)

We, as a society, are just beginning to wrap our minds around interacting with AI and robotics EVERYWHERE … in EVERYTHING … and we don't (yet!) have a good reference frame for a world that seems to be spinning into a different future, so very, VERY fast.

So – bottom line – we need to track what's happening and develop awareness – and then develop some sort of plan.


Timeframe for AGI

It’s all about artificial GENERAL intelligence, or AGI.

  • Generative AI is a transition stage – a stepping stone to AGI.
  • Generative AI is your teenage kid with hormones and pimples and loud music.
  • Generative AI is also fifty years old.

That's the REALLY telling thing … it's not that generative AI appeared overnight.

Uh-uh.

Generative AI really started with Geoffrey Hinton’s invention (with colleagues) of the Boltzmann machine back in 1983, and THAT was based on John Hopfield’s 1982 invention which was based on William Little’s 1974 invention … so … fifty years ago.
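To make that fifty-year lineage concrete, here's a minimal sketch of the kind of network Hopfield described in 1982 (my own illustration, not from the article, with made-up patterns): binary patterns are stored in a Hebbian weight matrix, and a corrupted cue settles back onto the stored memory.

```python
import numpy as np

def train(patterns):
    """Hebbian learning: sum of outer products of the stored
    (+1/-1) patterns, with the diagonal zeroed out."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, max_steps=10):
    """Iterate sign updates until the state stops changing."""
    s = state.copy()
    for _ in range(max_steps):
        new = np.sign(W @ s)
        new[new == 0] = 1          # break ties toward +1
        if np.array_equal(new, s):
            break                  # reached a stable (stored) state
        s = new
    return s

# Store one pattern, then recover it from a one-bit-corrupted cue.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train(pattern[None, :])
cue = pattern.copy()
cue[0] = -cue[0]                   # flip one bit
print(recall(W, cue))              # settles back to the stored pattern
```

The Boltzmann machine that Hinton and colleagues built on top of this idea replaces the deterministic sign update with a stochastic (temperature-dependent) one – which is the thread that leads, decades later, to today's generative models.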

So … we’re ready for the next step – for REAL AI, or for artificial GENERAL intelligence.

I talked about this in last week's YouTube video – everything from the problems with LLMs to generative-AI history to what a real AGI architecture will look like.

I don’t just talk about AGI, I lay out the baseline model and discuss the crucial pieces that will connect the two well-known existing forms of AI (signal and symbol-based), and this is the essential step to creating AGI.

When we can lay out the particulars this clearly – even if there is a whole lot of “filling in the blanks” yet to be done – you know that real AGI is within real grasp.

NOT some “maybe in this timeframe.” (And various pundits and CEOs will pull out some SWAG number.)

It’s much more of a “how long it will take to complete very specific steps in a very specific research plan.” And then, of course, to take things from research to development, and from development to commercialization … but once the research stage is done, there are people who are very, VERY good at taking a concept and building a technology and then making products.

So … once the core research is done, time-to-market should be pretty fast.


Never in Human History

Never in the course of human history have we had to envision a future for which we could not do a simple linear extrapolation.

That is, we’ve never been able to think of actually LIVING in a world so different from our current world that trying to plan ahead means envisioning something VERY different, not just a little-bit-shifted from our current everyday normal.

And the trickiest thing is that we don’t have a reference frame; we don’t have MODELS for how to do this planning.

So … here’s a workable model for society.

What happens when a country goes on a conquest spree and becomes an empire?

What happens to the overall standard of living for members of the conquering country?

For the “conquered country” peoples – not so much fun. Being enslaved, forced to work under harsh conditions, etc. Having their “best and brightest” siphoned off to be administrators for the empire; helping them to rule their own countrymen.

But for people living in the “empire” country – life gets GOOD.

And AGIs – whether resident on some computational platform or embodied in robots – will NOT object to being tasked with work.

I could go on.

But I won't – because I'm not a historian, and right now, my focus is on developing core elements for workable AI – not investigating how imperialism raised the standard of living for the conquering "imperialist" countries.

And yes, this is a HUGELY loaded historical, political, social analogy.

I’m just trying to pivot the conversation away from “AI will take all our jobs and we won’t have work” to “AI will elevate the standard of living for ALL of us” – because if we DON’T have an elevated standard of living, then we (the people) won’t have money to buy products and services, and the major AI corporations won’t have clients.

And if they don’t have clients, then THEY will be the ones without jobs.

So there needs to be a set of workable solutions, up to and including government support, funded by the overall increase to GDP that AGI will provide.

SO much needs to be unpacked around this … and I'm not the one to do it, because I need to stay focused on the science and the tech development.

Suffice it to say, I believe that AGI can be a good thing.

Just like the invention of ANY tool, or ANY technology, we need to harness it right.


How to Wrap Your Head Around This (Hint: We Need Conversations)

Start thinking about this … and think about joining the Themesis AI Salon on the Sunday after this one – Sunday, April 14 – part of the ongoing “Second Sunday” Themesis AI Salons – and to do that, be sure that you’re on our mailing list. (“About” page, Opt-In form, that sort of thing.)

You ONLY receive a Salon invitation if you’re on our mailing list.

You'll get more details about our upcoming Salon IF YOU'VE OPTED IN.
