AGI Architecture Stability with Multiple Timeframes

The single most important design principle underlying AGI architectures is system stability: bringing the AGI's world model back to a stable state after stimulus infusions due to external observations, internal feedback, or even just small perturbations.
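One minimal way to picture this notion of stability (a toy sketch, not any specific AGI implementation; the state vector, attractor, and relaxation rate here are all invented for illustration) is a damped dynamical system that pulls a perturbed state back toward a stable attractor:

```python
import numpy as np

def relax(state, attractor, rate=0.2, steps=50):
    """Damped relaxation: each step moves the state a fraction of the
    way back toward a stable attractor, so perturbations die out."""
    for _ in range(steps):
        state = state + rate * (attractor - state)
    return state

attractor = np.zeros(4)                                  # hypothetical stable world-model state
perturbed = attractor + np.array([0.5, -0.3, 0.8, 0.1])  # "stimulus infusion"
settled = relax(perturbed.copy(), attractor)
print(np.allclose(settled, attractor, atol=1e-2))        # True: back near equilibrium
```

The residual perturbation shrinks by a constant factor each step, so the system returns to its resting state regardless of the direction of the disturbance. That is the property an AGI's interacting components must preserve.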

Figure 1: Elements of a basic AGI architecture, focusing on mediation between a signal-level input layer and a symbolic-level “world model” layer, with a CORTECON(R) (COntent-Retentive, TEmporally-CONnected mediation system), as well as an external control loop setting the CORTECON(R) parameters. For the full description, see the YouTube video below.

This YouTube video discusses AGI architectures, circa March 2024.


The Most Important Element in Creating an AGI

The most important thing – as we create an AGI – is to ensure system stability.

The reason we don’t need to think about “system stability” for current transformer-based systems – even those with some “reasoning” ability (reinforcement learning on top) – is that those systems are inherently stable.

  • The transformer architecture is a simple generative process – an encoder/decoder architecture. It’s pretrained before you interact with it. It simply generates the “next,” and then the “next,” and then the “next” – so it’s not going to have stability issues.
  • Reinforcement learning drives a system to a specific goal. It does not “explore” a landscape – it is very goal-oriented. Thus, it is not very susceptible to stability issues.
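To make the first point concrete, here is a toy autoregressive generator (the vocabulary and transition table are invented stand-ins; a real transformer replaces the lookup with a learned network). Each step is a pure feed-forward function of fixed, pretrained parameters and the sequence so far, so there is no internal dynamical state that could drift away from equilibrium:

```python
# Toy autoregressive generator. The transition table plays the role of a
# pretrained model's frozen weights -- nothing in it changes during generation.
NEXT = {"the": "cat", "cat": "sat", "sat": "down", "down": ".", ".": "the"}

def generate(prompt, n_tokens=4):
    """Produce the "next," then the "next": each step reads only the
    frozen table and the sequence so far -- an open-loop process."""
    tokens = list(prompt)
    for _ in range(n_tokens):
        tokens.append(NEXT[tokens[-1]])
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat', 'down', '.']
```

Because generation never feeds back into the parameters, the process cannot destabilize itself; it simply terminates when asked to stop.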

In contrast, AGI systems will involve interacting major components.

Specifically, AGI will involve creating, updating, and maintaining a world model.

Leading AGI researchers – e.g., Yann LeCun and Karl Friston (as well as myself) – think of the world model as the AGI’s central core.

Quick look: here’s how Yann LeCun envisions an AGI architecture.

Figure 2. Yann LeCun’s conceptual approach to AGI architecture.

Note that in LeCun’s conceptual AGI architecture, the world model interacts with:

  • The “configurator,”
  • Short-term memory,
  • The “actor,” and
  • The “cost (minimization)” component.
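As a rough illustration of how such components might interact (a toy sketch under invented dynamics – the class names follow LeCun's labels, but the update rule, cost, and candidate actions are made up for this example, and the configurator is omitted for brevity): the world model predicts outcomes, the actor picks the candidate action whose predicted outcome minimizes the cost, and short-term memory records the trajectory.

```python
import numpy as np

class WorldModel:
    """Predicts the next state from the current state and a candidate action."""
    def predict(self, state, action):
        return state + 0.5 * (action - state)   # invented toy dynamics

class Cost:
    """Scalar cost to minimize: squared distance from a goal state."""
    def __init__(self, goal):
        self.goal = goal
    def __call__(self, state):
        return float(np.sum((state - self.goal) ** 2))

class Actor:
    """Chooses the candidate action whose predicted outcome has lowest cost."""
    def act(self, model, cost, state, candidates):
        return min(candidates, key=lambda a: cost(model.predict(state, a)))

goal = np.ones(2)
model, cost, actor = WorldModel(), Cost(goal), Actor()
state = np.zeros(2)
short_term_memory = []                          # trajectory of visited states
for _ in range(20):
    action = actor.act(model, cost, state, [np.zeros(2), np.ones(2)])
    state = model.predict(state, action)
    short_term_memory.append(state.copy())

print(cost(state) < 1e-6)  # True: the loop has settled near the goal
```

Even in this toy, the stability question is visible: the loop converges only because the invented dynamics are contracting. Change the update rule so predictions overshoot, and the same component wiring diverges – which is exactly the concern raised above.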

Two Essential Papers Establishing AGI Framework


References and Resources

  • Friston, Karl, Rosalyn J. Moran, Yukie Nagai, Tadahiro Taniguchi, Hiroaki Gomi, and Josh Tenenbaum. 2021. “World Model Learning and Inference.” Neural Networks 144 (Dec. 2021): 573-590. (Full-text PDF available from ResearchGate; accessed Mar. 29, 2025, at PDF link.)
  • Gumbsch, C. 2024. “Learning Hierarchical World Models … ” OpenReview (Accessed Mar. 29, 2025; available online at OpenReview.)
