The most exciting thing right now is to look at how AGI (artificial GENERAL intelligence) is emerging in 2025. But in order to understand AGI, we first need a firm, solid grasp on generative AI (“Gen-AI”), because AGI is going to be AT LEAST an order of magnitude more complex than Gen-AI.
And Gen-AI is, as we already know, more complex (by at least an order of magnitude) than straightforward, basic AI.
We can self-study Gen-AI. (Many of us do.)
But WHEN we do this Gen-AI study on our own, it is VERY likely that we’ll have some slips, stumbles, and falls.
For all my hard work, I had one of those – just a few weeks ago. I worked a whole “entropy” example on YouTube, totally FORGOT that the formula is S = k*ln(W), and instead worked the numbers for S = k*W. Absolutely embarrassing. As you can imagine, THAT video came down REAL fast!
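For the record, here’s the correct version, in a quick sketch. (The microstate count W below is made up purely for illustration; the point is just how wildly the two formulas diverge.)

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, in J/K

W = 1.0e20           # number of microstates -- an illustrative, made-up value

S_correct = k_B * math.log(W)   # S = k * ln(W): Boltzmann's entropy formula
S_blooper = k_B * W             # S = k * W: the blooper from my vid

print(f"S = k*ln(W): {S_correct:.3e} J/K")   # about 6.4e-22 J/K
print(f"S = k*W:     {S_blooper:.3e} J/K")   # about 1.4e-3 J/K -- off by ~18 orders of magnitude
```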
But in terms of making that trek over the “Sierra Nevada mountains of generative AI,” it was a lot like losing my footing, falling head-over-heels down the slope, banging my head against a tree and knocking myself silly.
Silly mistake – I’ll chalk it up to fatigue – and recovery was easy.
But along the way, I’ve made BIGGER mistakes – ones where it has taken me longer to realize that I WAS off course. And then, recovery TOOK LONGER.
With the fast pace of AI evolution, you cannot afford to get knocked silly, go off course, or do anything else that will delay you.
You Can’t Afford to Make the Same Bloopers: Introducing the “Twelve Days of Generative AI”
AGI is not here, not quite yet – but we are very much on the verge. And to understand AGI, we need to understand Gen-AI. Therefore, I’m introducing the Twelve Days of Generative AI – a series of emails (to those who have Opted In with Themesis: see https://themesis.com/themesis/), blogposts, and (as I can make them) some short vids.
The purpose is not to introduce new content.
Rather, it is to gather up content that I’ve already created and released over this past year, and make it organized and accessible to you, so that if you want to self-study generative AI (a GREAT idea for this coming holiday season, when you might claim time for yourself), you can do so in a calm, sustained, and methodical manner.
Therefore … (drum roll, please!) …
Day 1: Generative AI: Introduction and Overview
There are several different kinds of Gen-AI. Most people don’t have a solid “bird’s-eye view,” and this makes it harder to understand the similarities and differences – and, most especially, the common core elements that make ALL of these methods “generative.”
Key points:
- Two basic TYPES of Gen-AI: neural network-based, and variational learning-based. The latter use a model that may or may not involve a neural network.
- All energy-based methods (e.g., the Boltzmann machine, the restricted Boltzmann machine, and the various “deep” architectures building on them) use a common core of equations (see the equations just after this list); variational autoencoders (Kingma and Welling) are different – they’re an encoder/decoder architecture – and transformers are an evolution beyond both simple energy-based models and variational autoencoders.
- Variational methods have been around for a long time; they are cited in the “history of Boltzmann machine training” section in Salakhutdinov and Hinton (2012), and are foundational to a whole branch of machine learning. They are also the basis for Karl Friston’s active inference, and from there, renormalising generative models (Friston et al., 2024).
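To make that “common core of equations” concrete: all of the energy-based methods assign probabilities with the Boltzmann distribution, and differ mainly in the form of the energy function. Here’s the standard, textbook form for the restricted Boltzmann machine (nothing specific to my vids):

```latex
% The shared core: probabilities come from the Boltzmann distribution
p(\mathbf{v}, \mathbf{h}) = \frac{e^{-E(\mathbf{v}, \mathbf{h})}}{Z},
\qquad
Z = \sum_{\mathbf{v}, \mathbf{h}} e^{-E(\mathbf{v}, \mathbf{h})}

% What varies is the energy function; for the restricted Boltzmann machine:
% (v = visible units, h = hidden units, a and b = biases, W = weights)
E(\mathbf{v}, \mathbf{h}) = -\mathbf{a}^{\top}\mathbf{v}
                          - \mathbf{b}^{\top}\mathbf{h}
                          - \mathbf{v}^{\top}\mathbf{W}\mathbf{h}
```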
Generative AI Foundations
All generative methods share these common underlying and core foundations:
- The REVERSE Kullback-Leibler divergence as a starting point – comparing the data against a model, and varying the model parameters,
- Bayesian conditional probability – this is where we introduce the dependence of the latent variables on the observed ones; this is what makes these methods “generative,” and
- Statistical mechanics – ultimately, we’re minimizing a free energy equation, which we obtain by mathematical manipulation of the reverse KL divergence once the Bayesian conditional probability has been inserted (see the short derivation just after this list).
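Here’s how those three pieces fit together, in the standard variational-inference identity (nothing new here – this is the same manipulation that shows up in all of the classic papers):

```latex
% Start with the reverse KL between the model q(z) and the true posterior p(z|x):
\begin{aligned}
D_{\mathrm{KL}}\bigl(q(\mathbf{z}) \,\|\, p(\mathbf{z} \mid \mathbf{x})\bigr)
  &= \mathbb{E}_{q}\bigl[\ln q(\mathbf{z}) - \ln p(\mathbf{z} \mid \mathbf{x})\bigr] \\
  % Insert Bayesian conditional probability: p(z | x) = p(x, z) / p(x)
  &= \underbrace{\mathbb{E}_{q}\bigl[\ln q(\mathbf{z}) - \ln p(\mathbf{x}, \mathbf{z})\bigr]}_{\text{variational free energy } F[q]}
     + \ln p(\mathbf{x})
\end{aligned}
```

Since ln p(x) is fixed by the data, minimizing the free energy F[q] over the model parameters minimizes the reverse KL divergence – and F[q] has exactly the “energy minus entropy” structure that gives statistical mechanics its foothold in Gen-AI.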
Here Are Some Useful Overview Resources
Five-Minute Intro/Overview Vid (available on YouTube)
This is the easiest entry point. It covers:
- Difference between how generative AI and discriminative methods work,
- Key elements of Gen-AI (see figure above), and
- Limitations of working with Gen-AI systems (e.g., ChatGPT).
Fourteen-Minute Overview Vid (available on YouTube)
This builds on the former vid, and details:
- How transformers “evolve” from basic Gen-AI,
- The great “missing piece” in creating functional AGI systems (symbolic AI), and
- Moving beyond simple Gen-AI into AGI architectures (what would be needed).
Each video is backed by a blogpost that details all the papers mentioned (with references formatted in Chicago Author/Date style, as that is now the reference style that we use in the Northwestern Master of Science in Data Science program). Access to the blogpost is from the Description Box associated with each vid.
These two vids will give you a solid overview for Week 1.
One More Vid – the “Coming AGI Wars”
If we wanted to go beyond this, THIS VID provides an overview of the Gen-AI playing field as it stood just BEFORE Friston et al. introduced Renormalising Generative Models (RGMs) in July of this year. This vid is a bit longer, but it has gotten a LOT of views!
I’ll follow up more tomorrow with some resources and suggestions for “autoencoders.”
All my best – AJM
P.S. The Gen-AI Equivalent of the Cloud of Unknowing
The Cloud of Unknowing is a mystical work, written by an unknown Christian author in the late 1300s. It describes a contemplative practice. (NOT A BAD IDEA as we approach the holiday season!)
Similarly, Gen-AI can be – if not mystical, then at least mysterious.
The constant juxtaposition between statistical mechanics (free energy as a metaphor) and Bayesian theory (including creation of latent variables) creates a dynamic tension.
One way to hold this “tension” in balance is to get a handle on BOTH elements – the stat mech and the Bayesian.
You can read the original, classic papers (as I did).
But then – it is very likely that you’ll get lost in your own “Cloud of Unknowing” as you deal with this juxtaposition. (As I did. Many, MANY times.)
THEREFORE, we developed the short course: Top Ten Terms in Statistical Mechanics – a “vocabulary-based” stat mech introduction.
If you haven’t checked out the Top Ten Terms in Statistical Mechanics Themesis Short Course yet – the price is STILL less than half of the regular price. And the price is going up every day – so check it out now, and consider enrolling if REALLY being on top of Gen-AI is part of your 2025 Action Plan!
Check out the Top Ten Terms course HERE: