“That’s your lede,” said Claude.
“No, no,” I typed back. “I have something different in mind.” And I laid out my five points planned for this post, which Claude obediently organized into bullet points with supporting phrases.
But I could hear Claude grumbling in the background.
And this morning – twenty minutes ago, at 4:50 AM – I woke up and realized: “Damn, Claude was right!”
What Claude was right about – not to “bury the lede” any further – was this: Denis O. didn’t need a supercomputer. He didn’t need a server farm. He didn’t even need a particularly good excuse to sit at his desk.
He used … (drum roll, wait for it …) a MacBook Pro.
Yeah. Just that. And SeedIQ just totally stomped the ARC-AGI 3 challenge.
Not just the one that the ARC-AGI 3 folks had originally laid out – but the second, more complicated one.
Yup. That one, too.
I’ll Give This One to Claude
Claude was right.
I lean towards the equations.
Claude, ever-so-supportive, will lean with me.
But occasionally, Claude – being much better trained on human behavior than I am – will pick up on what will really get people’s attention. In this case, it was the power issue. Or the lack-of-power issue. Or really, the very minimal amount of compute (and corresponding power draw) that enabled this very newfangled approach to AI computation to succeed so brilliantly – where others were barely lifting their noses above the 1% performance bar.
So let’s have Denis O. (Denis Ovseyenko) tell this story, as extracted from his LinkedIn post, written just two days ago (April 22nd): “… people are now seriously discussing and investing into space based data centers, orbital solar power, and radiation hardened electronics to keep feeding the compute demands of the current paradigm, instead of asking the more basic question: What if this is the wrong AI paradigm to scale in the first place?”
Denis O. goes on to explain: “Seed IQ completes the tasks fully. LS20 [the first ARC-AGI 3 game], with all of its added complexity, moving pieces, traps, and tighter constraints, solves in about 9 seconds (!!) on my lame MacBook M1 Pro. No GPUs, no space rockets, no sharks with friggin’ laser beams…“
At this point, please go read Denis O.’s post in its entirety (it’s not that long), and then come back here. His post leads with: “There is something deeply absurd about the direction of AI right now.”
From a different post, Denis compares the SeedIQ(TM) performance against foundation models:
- Foundation models: “more than 250,000 actions” and “roughly $5,200 in compute” and “on the order of 50 to 200+ GPU hours”
- SeedIQ on LS20: “solves in about 9 seconds (!!) on my lame MacBook M1 Pro”
- Power draw: “around 20W”
So let’s take this as a reset moment for AI. A substantial reset – on the order of the deep learning breakthrough and the invention of transformers.
What this means for us is that we need to rethink not just how we do AI, but also how we educate ourselves (and the next generation of AI specialists).
This is a structural shift in our AI curriculum, not just adding in a few more courses.
But before we do that, let’s look at some elements that we’ll need to include – those that Denis O. identified as SeedIQ components, and also some elements from Friston.
What We Need to Understand SeedIQ
SeedIQ rests on three core elements:
- Active inference – the baseline,
- ΑΩ (Alpha-Omega) Field-of-Belief, coupled with a
- Hamiltonian Monte Carlo approach to modeling.
We’ll take these in reverse order.
Hamiltonian Monte Carlo
It’s been forever. As in … decades … since I’ve last seen a Hamiltonian equation. And “Hamiltonian Monte Carlo”? That threw me, at first.
But I’m indebted to “Bayesian Brad” (Bradley Gram-Hansen, Ph.D.) for his GitHub post on the Hamiltonian Monte Carlo method. His brief summary is: “The Hamiltonian of a physical system is defined completely with respect to the position q and p momentum variables and defines the total energy of the system.”
So what we’re doing here, in a nutshell: we’re using classical physics (“Hamiltonians”) to describe the energy of a particle (in this case, an agent). The agent can move around in its environment by moving from one state-space to another. That is, it is not constrained to a linear path; it can move in a more complex, nonlinear manner. Using chess as an analogy: the agent doesn’t have to move in a straight line (like a bishop, rook, or queen). It can move in an irregular manner (like a knight).
Alpha-Omega Field-of-Belief
Denise Holt, AIX Global Innovations Co-Founder, describes the agent communications protocols in SeedIQ (Substack, Jan. 5, 2026): “Crucially, no single signal determines action. Coherence is maintained through belief propagation across agents rather than centralized command or symbolic control loops. This allows weak signals that are individually ambiguous to become meaningful when they align.” [Bold mine.]
Denis O. further explains this in his LinkedIn post (April 22, 2026): “Each agent maintains its own internal world model… Global coherence … emerges because those per agent world models evolve under the same viability bounds, the same governing structure, and the same constraint space… Through resonance scaling and shared structural alignment, the interaction surface compresses to O(1). Agents start by exchanging only errors, constraints, and future belief projections, then that exchange drops away as coherence rises. In the dominant regime, this becomes effectively ZERO comms. Consensus is silent.“
That last sentence – “Consensus is silent” – is integral. It means that as a collection of agents resolves its understanding of an environment, the work needed to maintain coherence diminishes to near-zero, rather than remaining computationally huge. This becomes ESSENTIAL – not optional – as we move toward genuinely complex, real-world multi-agent systems.
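A toy simulation can illustrate the flavor of “consensus is silent.” This is my own hypothetical sketch, not SeedIQ’s actual protocol (which is not public): agents hold scalar beliefs about a shared environment value, and each round an agent transmits only if its prediction error against the group exceeds a tolerance. As the beliefs converge, message traffic falls toward zero.

```python
import numpy as np

# Hypothetical illustration of "silent consensus": agents broadcast only
# their prediction errors, and only when those errors exceed a tolerance.
# As coherence rises, communication drops away.

def silent_consensus(n_agents=8, env_value=1.0, tol=0.05, rounds=30, seed=1):
    rng = np.random.default_rng(seed)
    beliefs = rng.normal(0.0, 1.0, n_agents)   # initially incoherent beliefs
    traffic = []                               # messages sent per round
    for _ in range(rounds):
        consensus = beliefs.mean()
        errors = np.abs(beliefs - consensus)
        talkers = errors > tol                 # only out-of-coherence agents transmit
        traffic.append(int(talkers.sum()))
        # each agent nudges its belief toward the observation and the group
        beliefs += 0.3 * (env_value - beliefs) + 0.3 * (consensus - beliefs)
    return traffic

msgs = silent_consensus()
```

In this toy version, early rounds are full of chatter while late rounds are silent – the per-round message count decays to zero even though the agents never stop acting.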
Active Inference
And now we come to the crux of the matter – active inference.
Active inference is built on variational inference, which is a core concept in machine learning. The problem is – it’s not taught as widely as we’d like.
There are reasons for this. As with any emerging discipline, we early adopters have had to teach ourselves from the original research papers – mostly those by Friston, or Friston and colleagues. But it’s more than that.
Let me identify these barriers briefly, because we’ll address them more completely in a later post. But for now, for this moment, let’s step back and survey – because the landscape is changing rapidly.
Dearth of Teaching Materials
Teaching ourselves ANYTHING from raw source materials is hard. Teaching ourselves active inference via direct encounter with Friston’s papers is PARTICULARLY HARD.
I taught myself “early Fristonese” – that is, in 2024, I completed decoding Friston (2013, 2015) – just one equation – using Friston’s original works, a predecessor dissertation by Beal (2003) that established notation and the fundamental derivations, and a later tutorial on variational inference (Blei, Kucukelbir, and McAuliffe, 2017, 2018).
This took me eight years.
You don’t have eight years to invest in understanding early Friston – particularly because later Friston changed notation substantially, and then subsequent papers had lots of little changes in notation, along with substantively different ways of breaking down the primary equations and interpreting the component parts.
Trying to trace Friston’s notation and interpretations through multiple papers, with different co-authors, is a direct route to madness. (I know. I’ve tried.)
Fortunately, there are now books.
Specifically, two books:
- Active Inference: The Free Energy Principle in Mind, Brain, and Behavior, by Parr, Pezzulo, Friston (re-released in paperback in 2025), and
- Fundamentals of Active Inference: Principles, Algorithms, and Applications of the Free Energy Principle for Engineers, by S.V. Namjoshi (2026).
The latter book – the “Fundamentals” – has been out for just a month. The Active Inference Institute is sponsoring a study group on this book – your choice of three different meeting times for online groups, each led by a different facilitator. At 536 pages, with a VERY small font, it is well-written but difficult (in a purely physical sense) to read. Worth the effort. It should possibly be supplemented with the Kindle version, to allow zooming in on specific words and equations.
Speaking of which, it is VERY equation-dense and written at a true graduate level; I would think that a full-semester (or even year-long) course would be appropriate.
The other thing – just as important in an academic setting – is the need for a laboratory component. I’ve just become aware of a candidate, and will describe it in a future blogpost.
Dearth of Qualified Faculty
In order to study active inference within a structured group setting, there needs to be someone who knows the topic well enough to teach it – or at least lead a group study session. I’m aware of some options, and will describe them in more detail in a later post. Briefly, these are:
- AIX Global Innovations Active Inference “Learning Lab,”
- Active Inference Institute – a wealth of resources, some organic to AII,
- A GitHub repository of active inference educational materials, maintained by Dr. Daniel Friedman (President of the Active Inference Institute Board of Directors) – ranging from kindergarten to graduate school (I haven’t vetted this yet; it’s now on my to-do list), and
- Themesis, Inc. – a collection of YouTube videos, blogposts (like this one), and a foundational course called “Top Ten Terms in Statistical Mechanics.”
Dearth of Fundamental Understanding – It’s Not Traditional AI!
Bottom, bottom-line. The real reason why it’s so damn hard to learn this stuff.
Active inference rests on variational inference. Variational inference rests on the free energy concept, drawn from statistical mechanics. When we carry free energy over into variational or active inference, it becomes less of a model and more of a metaphor. But the bottom line still holds – the entry point to understanding variational (and then active) inference is rooted in the notion of free energy.
This is a totally different approach to learning variational inference – or any form of generative AI – than what is currently being taught, simply because the existing educational framework has been built around a historical approach to AI and neural networks: one that starts with backpropagation and goes on from there.
Being ready to understand active inference means a different starting point – one which most AI teachers and program directors/administrators/deans just don’t want to engage with, because it’s more conceptually challenging than backpropagation.
But now, universities will have to step back and reconsider. Will they continue to teach AI as a set of trade school skills? Or will they recognize that there was a good reason why Hinton & Hopfield got the 2024 Nobel Prize in Physics, and that AI now requires a complete rethinking of curriculum?
That rethinking is already underway. And it starts with understanding what active inference actually is, and where it comes from.
What We’d Need to Learn – In Order to Understand
If this has sparked your interest — and I hope it has — here are three entry points, in order of accessibility:
- Start here: Denise Holt’s Substack. Her firsthand account of the ARC-AGI 3 story is the best single piece of writing on what SeedIQ actually did and why it matters. Read it first.
- Go deeper: Denis O.’s LinkedIn. His April 21st post on the compute comparison is essential. Follow him — he’s posting actively right now and the material is extraordinary.
- For the serious student: The Active Inference Institute’s textbook study group on Namjoshi’s Fundamentals of Active Inference is open and running. 140 participants worldwide. Three meeting times. Free to join; book suggested but not required.
Also – perhaps most essential to building your own core understanding:
- Build the foundation: If you want to understand where active inference comes from — the statistical mechanics substrate, the free energy concept, the generative model framework — Top Ten Terms in Statistical Mechanics at Themesis Academy is where to start. Designed specifically for AI practitioners who need the conceptual grounding that most programs skip.
More is coming. This conversation has barely started.
The Themesis quarterly Open House Salon meets on a Sunday near the beginning of each quarter. To get on the invite list, opt in at themesis.com/themesis/.