Pivoting to AGI: What to Read/Watch/Do This Weekend

We are moving from a generative-AI era to an AGI era.

What that means – in the simplest technical terms – is that we’re pivoting from “single-algorithm systems” to systems that must – intrinsically – involve multiple major subsystems, and multiple control structures.

We’re closing out a 50-year connectionist AI era. This era began with inventions by Paul Werbos (1974, backpropagation) and William Little (1974, with the first instance of what later became known as the Little-Hopfield neural network).

There is no single piece that we already have that will be the key to unlocking full AGI.

Even transformers are not the end-solution; they are a logical extension of generative AI methods.

There are, however, some very useful components.

If you want to set up a weekend of reading, watching, and even doing a (very small, very limited) experiment, here are some suggestions.


Start with Lex Fridman Interviewing Sam Altman (Yes, Again)

OK … I know … you’ve listened to Lex and Sam before. They always have a great conversation.

Fridman, Lex. 2024. "Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI." Lex Fridman Podcast #419. (March 2024.) (Accessed April 26, 2024; available online at Lex Fridman interviews Sam Altman, March 2024.)

I'm listening to this now, as I do Friday morning house-cleaning – doing a deep-clean on the office, where I can listen easily on my main computer's speakers … and I'll switch to my laptop and carry that with me around the house as I continue with cleaning and pre-Sabbath chores.

I’m liking this. Loving it, actually. It’s a perspective thing; it’s a grounding thing. And one of the things that I’m loving the most is that Sam is being very real, very open, and speaking so much from his heart – and so much about the love that he has for his colleagues, and how he felt that love (along with other emotions) during that whole difficult time.

For those who think that AI leaders are either strictly after the dollar, or are so completely in their heads that they can’t connect with people, this is a really good podcast.

This one is partly a post-mortem following the horrific board fight that OpenAI went through last fall, but – MORE IMPORTANTLY – a discussion of AGI. As we listen, we get the strong sense that concerns about the future of an AGI-based world, more than an LLM-based world, are what fueled at least some of those dynamics … and continue to foster a lot of reflection, soul-searching, conversations … almost like a vision-quest for not only OpenAI, but for all the leading AI companies.


Yann LeCun’s 2022 Paper on AGI Architectures

We all typically have a dozen or more papers open on browser tabs at any given time … the one that I’ve been reading the most (since this last Wednesday night) is this one by Yann LeCun, which was reposted on LinkedIn by Luis Riera. (Thank you, Luis!)

Luis is calling this “LeCun’s illusion of consciousness.”

OK, the paper is 62 pages long … and I am SO not done with it yet … but I'm not really seeing this as a paper about "consciousness and AGI." It's much more concerned with feasible architectures, with a lot of emphasis on reinforcement learning and on creating a world model that DOES NOT rely on probabilities! (!!! – LLMs, move over.)

Still reading. Don’t have thoughts yet. Will certainly share as I get more into this. Here’s the link:

  • LeCun, Yann. 2022. "A Path Towards Autonomous Machine Intelligence (Version 0.9.2, 2022-06-27)." OpenReview. (Accessed April 26, 2024; available online at LeCun on AGI.)

CORTECONs(R) and Why They (Might!) Matter

The reason that CORTECONs(R) ("COntent-Retentive, TEmporally-CONnected neural networks") could be important to you is that this method may well be one of the missing pieces that we need in building an AGI.

In simplest possible terms – generative AI is built on three things:

  • The reverse Kullback-Leibler divergence,
  • Bayesian conditional probabilities, and
  • Statistical mechanics.

I spent ALL LAST QUARTER teaching myself generative AI … which is where I figured all of this out. (As in … we need to start with the REVERSE Kullback-Leibler, etc.)

So … three “big puzzle piece” building blocks.

Together, these three “building blocks” comprise the foundations of all forms of generative AI, including transformers.
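If the "reverse" direction of the Kullback-Leibler divergence is fuzzy for you, here's a tiny sketch (plain NumPy, with made-up three-outcome distributions purely for illustration) showing that the forward divergence D(p||q) and the reverse divergence D(q||p) are genuinely different quantities. That asymmetry is the whole reason the choice of direction matters.

```python
import numpy as np

# Two small discrete distributions (illustrative values only).
p = np.array([0.7, 0.2, 0.1])   # "target" distribution
q = np.array([0.4, 0.4, 0.2])   # "model" distribution

def kl(a, b):
    """Kullback-Leibler divergence D_KL(a || b) for discrete distributions."""
    return np.sum(a * np.log(a / b))

forward_kl = kl(p, q)   # D(p||q): penalizes q for missing mass where p is large
reverse_kl = kl(q, p)   # D(q||p): penalizes q for putting mass where p is small

print(f"Forward KL, D(p||q): {forward_kl:.4f}")
print(f"Reverse KL, D(q||p): {reverse_kl:.4f}")
```

Swap in your own p and q, and notice how the two directions penalize different kinds of mismatch.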

Here’s why this is important to you – especially if you’re building your career around AI.

You can’t read ANY of the more foundational papers in AI & machine learning (ML) without some rudimentary knowledge of statistical mechanics. Any time someone says “energy-based,” they’re referring to something that has its roots in stat mech.

Transformers also use statistical mechanics – with one key difference. They use a more complex enthalpy term.

Here's the basic free energy equation, the quantity that gets minimized in statistical mechanics (and in energy-based ML systems): F = H - TS, where H is the enthalpy, S is the entropy, and T is the temperature.
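To make that minimization concrete, here's a minimal Python sketch for a toy two-state system (the energy values are made up, purely for illustration, and this is not any particular ML system). Scanning over the probability of the first state shows the free energy minimum landing on the familiar Boltzmann distribution.

```python
import numpy as np

# Toy two-state system: energies of the two states (illustrative values).
e = np.array([0.0, 1.0])
T = 1.0   # temperature (in units where k_B = 1)

def free_energy(p0):
    """F = H - T*S for a two-state system with P(state 0) = p0."""
    p = np.array([p0, 1.0 - p0])
    H = np.sum(p * e)              # enthalpy (average energy)
    S = -np.sum(p * np.log(p))     # entropy
    return H - T * S

# Scan p0 and find the minimum of F numerically.
p0_grid = np.linspace(0.001, 0.999, 999)
F_vals = [free_energy(p0) for p0 in p0_grid]
p0_min = p0_grid[np.argmin(F_vals)]

# The analytic minimum is the Boltzmann distribution: p0 = exp(-e0/T) / Z.
p0_boltzmann = np.exp(-e[0] / T) / np.sum(np.exp(-e / T))

print(f"Numerical minimum at p0 = {p0_min:.3f}")
print(f"Boltzmann value:         {p0_boltzmann:.3f}")
```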

If you’d like a fairly light, overview-level discussion, please check out THIS YouTube.

Maren, Alianna J. 2023. "Key Concepts for a New Class of Neural Networks." Themesis YouTube Channel. (June 30, 2023.) (Accessed April 26, 2024; available at Key Concepts.)

Now here’s the kicker.

Transformers use a more complex enthalpy term. That’s the H in the free energy (F) equation.

This more complex enthalpy in transformers is called “multihead attention.”
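For the curious, here's a bare-bones NumPy sketch of multihead (scaled dot-product) attention. It's stripped of everything a real transformer layer has (learned weights, masking, layer norm, residuals); the random matrices and dimensions are stand-ins, just enough to see the shape of that more complex H term.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multihead_attention(X, n_heads=4):
    """Scaled dot-product attention with n_heads, for a sequence X of shape (seq_len, d_model)."""
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    # Random projection matrices, stand-ins for learned parameters.
    Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model) for _ in range(3))
    Wo = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)

    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(n_heads):
        sl = slice(h * d_head, (h + 1) * d_head)
        scores = Q[:, sl] @ K[:, sl].T / np.sqrt(d_head)   # (seq_len, seq_len)
        heads.append(softmax(scores) @ V[:, sl])           # (seq_len, d_head)
    return np.concatenate(heads, axis=-1) @ Wo             # (seq_len, d_model)

X = rng.standard_normal((5, 32))      # toy "sequence" of 5 tokens, d_model = 32
print(multihead_attention(X).shape)   # -> (5, 32)
```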

If we’ve achieved such a HUGE BREAKTHROUGH with what is really – when you look at it – a fairly straightforward and logical evolution of the basic free energy equation (from a “simple” H to a more complex, “multihead” H), then the only term left for experimentation is the entropy term, or S.

And that’s what we do with CORTECONs(R).

Kind of logical, isn’t it?

In the same sense that playing chess is more interesting than checkers, and the game of Go is more complex and interesting than tic-tac-toe, playing with our entropy term is more fun than just having the simple (boring!) S = -Sum[p(x)*log(p(x))].

Besides … it’s sort of fun.


Simple Code and Worked Example

I spent all day Wednesday ducking responsibilities and chores … immersing myself in the next baby step of code development. And YES! The code works. And the results are … interesting.

And I want to talk with you about this most recent bit of code, and the worked example … as soon as I can settle down and write everything up, make a YouTube … all of that.

The predecessor blogpost, "Brigette Bardot Says It All," sets the stage; it talks about the specific EXAMPLE that is hard-coded into this code. (Just to make it EXCEEDINGLY easy to run – just plop it into any Python IDLE and hit the "Run" button.)

So … you’ll get hands-on with the code, complete with the worked example and instructions and discussion of the math/physics involved, AND how this is a baby-step illustration of what we need for AGI … soon.

But because this involves a bit of physics, and because that physics is a wee bit more complex than the usual, let me encourage you to not only watch the YouTube that I published as a Christmas present (Dec. 26, 2023), but to work through the whole CORTECON(R) YouTube series.

Here’s that most recent YouTube, with the PRIOR worked example.

Maren, Alianna J. 2023. “CORTECONs and AGI: Practical Equilibrium Solution for the 1D CVM.” Themesis YouTube Channel (Dec. 26, 2023.) (Accessed April 26, 2024; available online at CORTECONS.)

Of course, there’s a RESOURCE PAGE associated with this … OODLES of resources, if you want to get serious. (Includes the GitHub link for the code, as well!)

Here’s the whole CORTECON(R) playlist … if you have a bit more time.

CORTECONs(R) playlist

And … to put CORTECONs(R) in the context of AGI, here’s a more recent YouTube on AGI architectures.

Maren, Alianna J. 2024. “AGI: An Essential Architecture.” Themesis YouTube Channel (March 29, 2024). (Accessed April 26, 2024; available online at AGI Architecture.)

Summing It Up

This gives you a LOT of listening, reading, and watching options … a full binge-list of YouTube videos, both individually and in playlists.

A nice hour-long listen-in with Lex and Sam.

A MEATY paper by Yann … which will take me a month or more to digest, so you can expect to hear more on this one.

And … code.

If you want to experiment (and catch up on CORTECONs(R)) before I release the next “table-top toy example” (featuring Brigette Bardot, so why not?) — check out that RESOURCE PAGE on the 1D CVM.

But … ONE MORE STEP.

If you’re feeling a need to REALLY get on top of that whole “generative-AI” thing – not just twiddling the knobs, but UNDERSTANDING – then you need to teach yourself the fundamentals.

OR … take the short course with us.

"Top Ten Terms in Statistical Mechanics" (aka "Introduction to Generative AI"). New Cohort starting this week!

You can study on your own.

As I said, we offer OODLES of free resources … check out our RESOURCE PAGE for the “Generative AI” topic.

We have a WHOLE YOUTUBE PLAYLIST on generative AI.

Generative AI Playlist.

But … if you’re like most of us … it’s more fun (and feels a WHOLE LOT BETTER) to make this journey with an experienced trail guide.

Need to talk with us before making your decision?

Use our “gmail” address (which we check most often):

  • themesisinc1 (at) gmail (dot) com.

See you soon!
