The AGI Landscape Just Changed: SeedIQ, Active Inference, and What ARC-AGI 3 Actually Tells Us

An incredible roller-coaster of a couple of weeks!

Every time I think I’ve wrapped my head around the big storyline – the big breakthroughs, how they all connect into a meaningful arc – every time I feel organized enough to write something – there’s a massive new breakthrough. These last two weeks topped everything.

What Just Happened

In a nutshell, SeedIQ – up until now a totally stealth company – leaped out from the darkness to totally stomp the ARC-AGI 3 benchmark. But did they get the recognition that they deserve?

Hell, no.

SeedIQ refused to release its code. They were totally willing to give up prize money. But they sure weren’t willing to give up code.

BUT … they were scoring 100% – right up there with the HUMANS on the ARC-AGI 3 leaderboard.

So what happened next … not saying anything was causative, but … the ARC-AGI 3 people made the challenge more difficult.

And the SeedIQ folks – once again – totally nailed it.

Just read the story: ARC AGI 3: We Didn’t Expect This to Happen

Better than any entertainment that you’ll find this weekend.


A Paradigm Shift — On the Level of the Nobel Prize

Here’s our bottom line. What just happened was profoundly significant. This is innovation on the level of:

  • Hopfield NN (a basic statistical-mechanics model applied to a neural network) => Boltzmann machine (introducing latent variables, which resolved the Hopfield model’s limitations). (This was a Hopfield => Hinton breakthrough combination, and yes, the 2024 Nobel Prize in Physics.)
  • Basic (restricted) Boltzmann machine => (via contrastive divergence) deep learning – a major innovation in how RBMs learned that enabled scaling, allowing deep structures to be built where, prior to this, only shallow structures worked. (This was Hinton, alone. Creative genius in isolation for more than a decade.)
  • TF*IDF (term frequency * inverse document frequency) => Latent Dirichlet Allocation – a breakthrough in NLP (natural language processing), the first significant introduction of latent variables into topic modeling (Blei, Ng, & Jordan, 2003).

What I’m saying here is — MAJOR breakthrough.

Now over the past several years, our AI world has been dominated by transformer-based generative models (LLMs) coupled with RLHF (reinforcement learning from human feedback). And yes, huge progress. Monstrous money. Mind-boggling energy expenditure.

But these LLM + RLHF advances (and yes, let’s include chain-of-thought, other inference methods, and the recent introduction of agents) have taken us down a certain path – and we’ve been largely too blindered to look at other avenues.

The reason that we’ve been hugely focused on RL as a reasoning baseline has been that the potential competitor method – active inference – has not scaled well.

What SeedIQ just did was break that scaling barrier.
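For readers new to active inference, here is a toy sketch of its core action-selection idea in the simplest discrete form: pick the action whose predicted outcomes diverge least from preferred outcomes. This is my own illustration – the action names, probabilities, and the reduction of expected free energy to the "risk" (KL) term alone are all assumptions for brevity – not anything from SeedIQ:

```python
import math

# Toy discrete active inference (illustrative only, not SeedIQ's code).
# Expected free energy is reduced here to the "risk" term:
#   G(a) = KL[ Q(o|a) || P(o) ]   (the ambiguity term is omitted for brevity)

preferred = {"success": 0.9, "failure": 0.1}    # P(o): prior preferences over outcomes

predicted = {                                   # Q(o|a): predicted outcomes per action
    "explore": {"success": 0.5, "failure": 0.5},
    "exploit": {"success": 0.8, "failure": 0.2},
}

def expected_free_energy(q, p):
    """Risk term: KL divergence from predicted to preferred outcomes."""
    return sum(q[o] * math.log(q[o] / p[o]) for o in q)

scores = {a: expected_free_energy(q, preferred) for a, q in predicted.items()}
best = min(scores, key=scores.get)              # action minimizing expected free energy
print(best)  # prints: exploit
```

Full active-inference agents add an ambiguity (expected-uncertainty) term and evaluate G over deep policy trees rather than single actions – roughly where the scaling pain comes from, and what RGMs’ hierarchical structure addresses.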


The Back-Chain: Where Did SeedIQ Come From?

To understand why this matters, you need to trace the lineage.

Please check out Seed IQ: From Simulating Outcomes to Managing Reality in Real Time.

Then read Denis O.’s story about how he moved from Verses.ai to AIX Global Innovations, founded by Denise Holt, to find the right environment for creating SeedIQ:

As Denise Holt (Founder & CEO) describes it: “Seed IQ focuses on maintaining system viability over time, balancing constraints, adapting to change, and ensuring stability. It operates within live environments, continuously updating its internal structure in response to ever-evolving conditions.”

There’s a YouTube. I’ve watched it once; I need to rewatch it multiple times.

The lineage runs like this:

Active Inference (Friston) → Renormalising Generative Models (Friston + VERSES team, 2024) → SeedIQ™ (Denis O., AIX Global Innovations)

Karl Friston’s active inference framework has been theoretically compelling for over a decade — but it didn’t scale. The 2024 breakthrough, captured in the “From Pixels to Planning” paper (Friston et al., 2024), introduced Renormalising Generative Models (RGMs): a hierarchical form of active inference that finally made scaling possible. [See my YouTube and blogpost comparing RGMs with Action Perception Divergence for a full treatment. Blogpost: Contrast-and-Compare: Friston et al. (2024) and Hafner et al. ]

Here’s the YouTube that I did on RGMs: “AGI: Comparing Renormalising Generative Models (RGMs) with Action Perception Divergence (APD).”

SeedIQ took that foundation — RGMs, multi-agent architecture, and what Holt calls “secret sauce” — and built something that can operate in live environments, continuously updating, continuously adapting.

There’s more to the SeedIQ conceptual base – I’m limiting my description to just the thread that connects through RGMs to active inference, which is a key component. But there’s much more, which I’ll address in subsequent blogposts.

The structural parallel to deep learning’s history is not accidental. Consider:

  • Hopfield networks → Boltzmann machines (introducing latent variables) — Nobel Prize 2024
  • Restricted Boltzmann machines → Deep Learning (via contrastive divergence, then Hinton + Salakhutdinov 2006/2012)
  • Active Inference → RGMs → SeedIQ

Each transition followed the same pattern: a theoretically powerful but limited baseline architecture, a structural breakthrough that enabled scaling, and then an explosion of capability.

We are at that third moment. Right now.

Only this time, the leap runs from baseline active inference (Friston) to the SeedIQ capability – hierarchical, multi-agent, and more (Denis O.). Conceptually, it is the same level of leap.


The Quantum Signal: Why This Goes Further Than You Think

Here’s where it gets even bigger.

Denise Holt and her colleague Denis O. didn’t build SeedIQ for ARC-AGI 3. They built it for something else entirely: quantum computing execution governance.

Quantum systems have a problem that nobody has solved. They don’t fail cleanly. They drift. They slide gradually into states where results look plausible — numeric, structured, well-formed — but are fundamentally meaningless. And by the time you notice, you’ve burned tens of thousands of dollars in QPU time and gotten nothing.

This is not a hardware problem. It’s not a physics problem. It’s a control engineering problem.

As Holt puts it: current AI tools operating in quantum systems are “reactive. They respond after problems appear. They do not maintain an internal representation of execution health, constraint boundaries, or irreversible failure regions over time.”

SeedIQ does.

Read this Substack post by Holt: “Quantum Now Has a Path to Scale. Seed IQ Just Proved It.”

SeedIQ monitors execution trajectories continuously. It detects when stability is degrading. And crucially — it knows when to stop.

“Halting is not failure. It is governance.”

That phrase deserves to sit with you for a moment.

What SeedIQ brings to quantum computing is the same thing it brought to ARC-AGI 3: not prediction, not optimization — governance. The ability to maintain viability over time under uncertainty, and to know when continuing is no longer worth it.
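The governance loop described above can be sketched in a few lines. To be clear, this is a generic illustration of the pattern – the window size, drift limit, and trajectory numbers are all invented – not SeedIQ’s architecture:

```python
from collections import deque

# Illustrative sketch only (not SeedIQ's implementation): track a rolling
# instability metric for an execution trajectory and halt *before* the run
# drifts into an irreversible failure region.

class ViabilityMonitor:
    """Maintains a rolling drift estimate and signals when to stop."""

    def __init__(self, window=5, drift_limit=0.15):
        self.errors = deque(maxlen=window)   # recent instability readings
        self.drift_limit = drift_limit       # assumed constraint boundary

    def still_viable(self, reading):
        """Record a reading; return False once rolling drift exceeds the limit."""
        self.errors.append(reading)
        drift = sum(self.errors) / len(self.errors)
        return drift <= self.drift_limit

monitor = ViabilityMonitor()
trajectory = [0.05, 0.06, 0.08, 0.12, 0.20, 0.30]  # a gradually degrading run
halted_at = None
for step, reading in enumerate(trajectory):
    if not monitor.still_viable(reading):
        halted_at = step                     # halt before burning more QPU time
        break
print(halted_at)  # prints: 5 (first step where rolling drift exceeds 0.15)
```

The design point is the return value: the monitor does not predict or optimize the run. Its only job is to decide, at every step, whether continuing is still viable.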

This is active inference doing what it was always theoretically designed to do. Managing reality in real time.

And if it works in quantum — the most unforgiving operational environment that exists — it works anywhere.


How to Go Deeper

SeedIQ draws on multiple converging concepts — active inference, geometric metacognition, differential geometry, and what Denis O. calls the Alpha Omega Field of Belief with Hamiltonian Monte Carlo (ΑΩ FoB HMC). This is not a simple architecture to unpack in a single blogpost.

For context on how Denis O. arrived here — and why he chose AIX Global Innovations over VERSES — his own LinkedIn post says it better than I can. Worth reading in full. I gave that link above; here it is again: Denis O. – in his own words.

For the theoretical foundation, start with Renormalising Generative Models. My YouTube from August 2024 walks through the evolution from basic active inference to RGMs and why it matters. Here’s that YouTube again.

The supporting blogpost contains the full reference list including the “Pixels to Planning” paper (Friston et al., 2024).

For the quantum execution governance argument, read Denise Holt’s full article: Quantum Now Has a Path to Scale. Seed IQ Just Proved It. And for the ARC-AGI 3 story itself, start here: ARC AGI 3: We Didn’t Expect This to Happen

More is coming. This is week one of a much longer conversation.


What This Means

We are not watching an incremental improvement in AI capability.

We are watching the moment when a theoretically powerful but practically limited framework — active inference — broke through its scaling barrier and demonstrated real-world performance that leaves every LLM in the dust on the tasks that matter most.

The ARC-AGI 3 benchmark was supposed to be the wall. SeedIQ walked through it.

The quantum execution governance problem was supposed to require hardware solutions. SeedIQ reframed it as a control engineering problem — and solved it.

This is week one. The conversation has barely started.

If you want to be part of it — you know where to find us.

The Themesis quarterly “Open House” Salon meets on the second Sunday of each month. Details at themesis.com. You MUST be “Opted-In” to receive an invitation. To Opt-In – to be on our invite list – go to: https://themesis.com/themesis. Use the Opt-In form. We’ll see you soon!
