This blogpost accompanies a YouTube video (thumbnail below) and is still under development.
Blogpost STILL In Progress!
Dear All –
It’s 8 AM on Tuesday, Oct. 22nd.
Sorry, I was just a little wiped out after getting this YouTube up on Sunday.
I’ll slowly be filling in these extra pieces.
Please check back soon. And thank you! – AJM
Quick Summary
We can now identify three well-known base-types of AGI likely to emerge within the next few years, as well as a potential fourth.
By “base-type,” we mean the foundational method on which the AGI is built. Three such methods are well-known; two major companies (OpenAI and Google’s DeepMind) each use a similar method – likely with small but significant differences.
In addition, there is a fourth method that is not yet well-known. This method provides a necessary bridge for neuro-symbolic computing, and is thus included here.
These four base-types are:
- LLM+DeepRL: Large Language Model (LLM) or similar transformer-based method plus deep reinforcement learning (RL) – OpenAI and Google’s DeepMind,
- H-JEPA: Hierarchical Joint Embedding Predictive Architecture, as proposed by Yann LeCun, and championed by Meta,
- RGM: Renormalising Generative Models, as proposed by Karl Friston and colleagues, and championed by Verses.ai, and
- CORTECONs(R): COntent-Retentive, TEmporally-CONnected neural networks, as proposed by Alianna Maren, and championed by Themesis.
Each base-type offers distinct strengths and advantages, which we discuss in the following sections.
Google and OpenAI: LLMs + (Deep) Reinforcement Learning
Both Google and OpenAI are building “inference” capabilities that layer (deep) reinforcement learning on their existing LLMs.
The most notable instances of “inference” have to do with planning, typically as used for solving a mathematical problem.
Limitations of Reinforcement Learning
Reinforcement learning is known to be a form of “narrow intelligence,” and it is not clear yet whether such RL-based approaches will be sufficient to mature into full-scale AGIs.
See LeCun (2022), in his overview architecture paper, for a discussion of reinforcement learning as a form of “narrow” AI. Similarly, Sajid et al. (2020), in a detailed contrast-and-compare of active inference with reinforcement learning, come to the same conclusion – that RL is useful, but limited and narrow, despite its recent successes in some blindingly awesome, show-stopping applications.
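To make the “narrow” point concrete, here is a minimal tabular Q-learning sketch – a textbook toy of my own, not any lab’s system. The agent optimizes one fixed reward in one fixed environment, and the table it learns transfers to nothing else:

```python
import random

# Minimal tabular Q-learning on a 1-D corridor: states 0..4, reward only
# at state 4. Illustrates why plain RL is "narrow": the learned table is
# tied to this one task, this one reward, this one state space.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]            # step left, step right
alpha, gamma, eps = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

random.seed(0)
for _ in range(500):          # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action choice
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The greedy policy now walks right -- but only for THIS corridor and
# THIS reward signal.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)  # expected: [1, 1, 1, 1]
```

Everything the agent “knows” lives in a lookup table keyed to one environment; that brittleness is exactly what LeCun and Sajid et al. flag when they call RL narrow.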
RGMs and H-JEPA (Verses.ai and Meta)
Two other base-types – Friston et al.’s Renormalising Generative Models (RGMs) and LeCun’s Hierarchical Joint Embedding Predictive Architecture (H-JEPA) – are both means for creating hierarchies of planning elements, with successively greater abstractions in the upper levels of the hierarchy.
Key differences between the two are:
- RGMs (and their foundation method, active inference) are generative, and work well with state-space models (i.e., models of discretely different states, such as the distinct positions in a board game or video game).
- H-JEPA (as well as JEPA) is NOT generative, and works with “differentiable” world models – i.e., those that are (preferably) continuous, allowing for smooth differentiation.
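A toy contrast may help. The sketch below is my own illustration (not RGM or JEPA code): it pairs an explicit, sampleable transition model (the generative, state-space style) with a smooth, gradient-trained embedding predictor (the non-generative, differentiable style):

```python
import random

# (1) Generative, discrete state-space style: an explicit transition
#     distribution p(next_state | state) that we can SAMPLE from to
#     imagine futures -- the state space is a finite set of labels.
P = {
    "A": {"A": 0.1, "B": 0.8, "C": 0.1},
    "B": {"A": 0.2, "B": 0.2, "C": 0.6},
    "C": {"A": 0.7, "B": 0.2, "C": 0.1},
}

def rollout(s, steps, rng):
    """Sample an imagined trajectory from the generative model."""
    traj = [s]
    for _ in range(steps):
        states, probs = zip(*P[s].items())
        s = rng.choices(states, probs)[0]
        traj.append(s)
    return traj

# (2) Non-generative, differentiable style: predict the EMBEDDING of the
#     next observation, training by gradient descent on a smooth loss --
#     no sampling, no explicit probability over observations.
pairs = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (embed(o_t), embed(o_t+1))
w = 0.0                                        # one scalar "predictor"

for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in pairs) / len(pairs)
    w -= 0.05 * grad          # smooth differentiation, smooth update

print(rollout("A", 4, random.Random(0)))  # a sampled imagined trajectory
print(round(w, 2))                        # least-squares fit, ~2.04
```

The first model can generate futures; the second can only score and regress them in embedding space. That is the generative / non-generative divide in miniature.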
Currently, Verses.ai (where Prof. Friston is Chief Scientist) is addressing the “spatial web,” or AGI at edge devices.
Prof. LeCun (Chief AI Scientist at Meta) points out how JEPA can be used for correlating information from two different sensors, as might be done in an autonomous vehicle.
Neither Friston nor LeCun is explicitly addressing how signal-level information can be correlated with symbolic representations.
CORTECONs(R): Temporal Connectivity for Enabling Neuro-Symbolic Connections
Of the various methods discussed for AGI, CORTECONs(R) (COntent-Retentive, TEmporally-CONnected neural networks, Maren at Themesis) are in the earliest development stage. They use a different form of a free energy equation that allows control over both the total number of active latent nodes, as well as “clustering” of latent nodes, so that latent node “clusters” can provide long-term activation of ontology-layer nodes. This enables sustained activation within the symbolic representation layer or “world model” of an AGI system, allowing symbolically-based reasoning to occur with access to information stored at that layer (e.g., symbolic node attributes, relationships, and beliefs).
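Themesis’s specific equations are not reproduced here. As a schematic orientation only (my notation, anchored to the Maren 2021 Entropy paper cited below, where the \(x_i\) are single-unit variables and \(y_i, w_i, z_i\) are nearest-neighbor pair, next-nearest-neighbor pair, and triplet cluster variables), a cluster-variation-style free energy balances activation and clustering enthalpies against an entropy computed over cluster variables:

```latex
% Schematic only -- not the exact Themesis equation.
% x_i : single-unit (activation) variables
% y_i, w_i, z_i : pair and triplet cluster variables
\bar{F}
  \;=\; \underbrace{\varepsilon_0 \,(2x_1 - 1)}_{\text{activation enthalpy}}
  \;+\; \underbrace{\varepsilon_1 \, g(y_i, w_i, z_i)}_{\text{clustering enthalpy}}
  \;-\; \underbrace{\bar{S}(x_i, y_i, w_i, z_i)}_{\text{cluster entropy}}
```

The two enthalpy terms are what give the stated two control knobs: \(\varepsilon_0\) governs how many latent nodes are active overall, while \(\varepsilon_1\) governs how those active nodes cluster.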
Sun Tzu’s The Art of War
We’ll be using Sun Tzu’s The Art of War consistently throughout this and upcoming YouTubes (and their associated blogposts) as a framework for evaluating the players and positions in the evolving AGI landscape.
Here’s a good online version of The Art of War.
If you want to invest in a book, we’ve found that Michael Nylan’s recent translation is excellent – nuanced and well-balanced.
The “Five Key Reads”
In the YouTube, I mentioned that I’d suggest five key papers to read – ones that would serve as “building blocks” as we establish our basis for emerging AGI.
Here they are, in order:
- LeCun’s classic (somewhat notorious) H-JEPA paper,
- Sajid et al. – important contrast-and-compare discussion on active inference and reinforcement learning,
- Friston et al.’s recent (July 22, 2024) “Renormalising Generative Models” (RGMs) paper,
- Hafner et al. (2022) on “Deep Hierarchical Planning from Pixels,” and
- Maren (2021) on CORTECONs(R).
LeCun (2022) on H-JEPA
- LeCun, Yann. 2022. “A Path Towards Autonomous Machine Intelligence. Version 0.9.2, 2022-06-27” OpenReview (June 27, 2022). (Accessed May 20, 2024. Available online at OpenReview.)
Sajid et al. (2020) on Active Inference and Reinforcement Learning
- Sajid, Noor, Philip J. Ball, Thomas Parr, and Karl J. Friston. 2020. “Active Inference: Demystified and Compared.” arXiv:1909.10863v3 [cs.AI] 30 Oct 2020. (Accessed Aug. 10, 2024; available online at https://arxiv.org/pdf/1909.10863.)
Friston et al. (2024) on RGMs (Renormalising Generative Models)
- Friston, Karl, Conor Heins, Tim Verbelen, Lancelot Da Costa, Tommaso Salvatori, Dimitrije Markovic, Alexander Tschantz, Magnus Koudahl, Christopher Buckley and Thomas Parr. 2024. “From Pixels to Planning: Scale-Free Active Inference.” arXiv:2407.20292v1 [cs.LG] 27 Jul 2024. doi:10.48550/arXiv.2407.20292. (Accessed Aug. 10, 2024; available online at https://arxiv.org/pdf/2407.20292.)
Hafner et al. (2022) on Reinforcement Learning
I mention two Hafner papers in the vid; here’s a good one for getting started, on the role of reinforcement learning in constructing hierarchical plans.
- Hafner, Danijar, Kuang-Huei Lee, Ian Fischer, and Pieter Abbeel. 2022. “Deep Hierarchical Planning from Pixels.” Proc. 36th Conf. on NIPS (Neural Information Processing Systems). (Accessed Oct. 22, 2024; available online at Proc. NIPS 2022.)
Maren (2021) on CORTECONs(R)
This paper addresses the 2-D Cluster Variation Method, which is integral to the CORTECON (COntent-Retentive, TEmporally-CONnected neural network) approach.
- Maren, Alianna J. 2021. “The 2-D Cluster Variation Method: Topography Illustrations and Their Enthalpy Parameter Correlations.” Entropy 23(3):319; doi:10.3390/e23030319. (Accessed Oct. 22, 2024; available online at Entropy.)
Planning: A Key Test for AGI
One of the great AGI focal efforts right now is on getting AI systems to do planning. The status as of early spring 2024, as reported in a recent article by Cal Newport in The New Yorker (also reprinted on his blog at CalNewport.com), is that GPT-4 can do some planning well, while in other cases it falls apart.
More recently, an Oct. 15, 2024 study by Apple researchers shows deficiencies in the planning capabilities of three leading LLMs, including OpenAI’s Strawberry-based o1. They note: “We hypothesize that this decline is due to the fact that current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data. When we add a single clause that appears relevant to the question, we observe significant performance drops (up to 65%) across all state-of-the-art models, even though the added clause does not contribute to the reasoning chain needed to reach the final answer.”
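The perturbation the Apple team describes (their “GSM-NoOp” variant) can be sketched with a toy template. The wording below is adapted from their kiwi example; see Mirzadeh et al. (2024), cited below, for the real benchmark:

```python
# Toy illustration of the GSM-NoOp idea: append a clause that SOUNDS
# relevant but does not change the arithmetic. (Paraphrased from the
# paper's kiwi example; this is not the benchmark's actual code.)
base = ("Oliver picks {k} kiwis on Friday and {m} kiwis on Saturday. "
        "How many kiwis does Oliver have?")
noop = ("Oliver picks {k} kiwis on Friday and {m} kiwis on Saturday, "
        "but {d} of them were a bit smaller than average. "
        "How many kiwis does Oliver have?")

def answer(k, m):
    # The "smaller than average" clause contributes nothing to the sum.
    return k + m

k, m, d = 44, 58, 5
print(noop.format(k=k, m=m, d=d))
print(answer(k, m))  # 102 either way; the study reports models often subtract d
```

A system doing genuine logical reasoning treats the no-op clause as noise; a pattern-matcher trained on “numbers in the problem get used” tends to subtract it.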
Google’s DeepMind: LLMs + Deep Reinforcement Learning
Google’s DeepMind continues to build on its known strength in reinforcement learning as its primary approach to creating AGI.
As expressed by Rhiannon Williams in a 2024 MIT Technology Review article, “Google DeepMind says it has trained two specialized AI systems to solve complex math problems involving advanced reasoning.”
Williams notes that “[solving math problems] often require[s] forming and drawing on abstractions. They also involve complex hierarchical planning, as well as setting subgoals, backtracking, and trying new paths. All these are challenging for AI.”
Google’s approach combines its Gemini LLM with reinforcement learning; “The key is a version of DeepMind’s Gemini AI that’s fine-tuned to automatically translate math problems phrased in natural, informal language into formal statements, which are easier for the AI to process. This created a large library of formal math problems with varying degrees of difficulty.”
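DeepMind’s formal target language for this work is the Lean proof assistant. As a hedged illustration of what “translating informal language into formal statements” produces (my own toy example, not DeepMind’s output), the informal problem “show that the sum of two even natural numbers is even” becomes:

```lean
-- Informal: "Show that the sum of two even natural numbers is even."
-- A formal Lean 4 / Mathlib rendering of that STATEMENT; the proof is
-- left as `sorry` -- producing it is exactly the prover's job.
import Mathlib.Algebra.Group.Even

theorem sum_of_evens_is_even (a b : ℕ) (ha : Even a) (hb : Even b) :
    Even (a + b) := by
  sorry
```

The payoff of formalization is that a candidate proof can be checked mechanically, which is what makes the “large library of formal math problems” usable as training signal.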
In a different (but related) vein, Demis Hassabis has guided DeepMind (which he founded in 2010 with Shane Legg and Mustafa Suleyman; David Silver joined soon after) towards developing reinforcement learning capabilities, often applied to various games. As reported by Will Douglas Heaven (2024), DeepMind’s Genie can now create video games from scratch.
What we’ve found interesting is DeepMind’s progressive development of reinforcement learning-based capabilities over time.
OpenAI: LLMs + Reinforcement Learning
OpenAI appears to be using reinforcement learning as a primary means for achieving AGI, or at least next-gen AI, in its products – most recently o1.
Kylie Robison, writing for The Verge, summarizes: “For OpenAI, o1 represents a step toward its broader goal of human-like artificial intelligence. More practically, it does a better job at writing code and solving multistep problems than previous models. But it’s also more expensive and slower to use than GPT-4o.”
Robison goes on to describe o1 training: “The training behind o1 is fundamentally different from its predecessors, OpenAI’s research lead, Jerry Tworek … says o1 ‘has been trained using a completely new optimization algorithm and a new training dataset specifically tailored for it.’”
With more detail on training, Robison notes that “OpenAI taught previous GPT models to mimic patterns from its training data. With o1, it trained the model to solve problems on its own using a technique known as reinforcement learning, which teaches the system through rewards and penalties. It then uses a ‘chain of thought’ to process queries, similarly to how humans process problems by going through them step-by-step.”
As expressed by Nathan Lambert, “o1 is a system designed by training new models on long reasoning chains, with lots of reinforcement learning 🍒, and deploying them at scale. Unlike traditional autoregressive language models, it is doing an online search for the user. It is spending more on inference, which confirms the existence of new scaling laws — inference scaling laws.”
Again, quoting from N. Lambert, “This looks far closer to closed-loop control than anything we have seen in the language modeling space ever.”
I think that the key point is that identification of closed-loop control. AGI will require internal feedback and control systems, not just a single push-through approach, which has been dominant among many prior systems.
One more useful quote from Dr. Lambert: “OpenAI’s o1 is the first product that is searching over text in a large-scale deployment. This is a major breakthrough and will transform the deployment stacks and expectations for AI products. o1 is a useful demo to bridge the gap from existing language models to a world of different agentic products.” (Bold and italics mine.)
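Lambert’s “searching over text” can be made concrete with a best-of-N sketch: sample several reasoning chains, score each with a verifier, and return the best. The sampler and scorer below are stubs of my own devising (o1’s actual machinery is not public); the point is the shape of inference-time search, not the internals:

```python
import random

# Best-of-N "search over text": a minimal sketch of inference-time
# scaling. sample_chain() and score_chain() are STUBS standing in for a
# language model and a learned verifier.
def sample_chain(problem, rng):
    """Stub: pretend to sample one chain-of-thought for the problem."""
    steps = rng.randint(2, 5)
    return [f"step {i + 1} toward '{problem}'" for i in range(steps)]

def score_chain(chain, rng):
    """Stub verifier: score a chain (here, random plus a length bonus)."""
    return rng.random() + 0.1 * len(chain)

def best_of_n(problem, n, seed=0):
    rng = random.Random(seed)
    chains = [sample_chain(problem, rng) for _ in range(n)]
    scored = [(score_chain(c, rng), c) for c in chains]
    return max(scored, key=lambda t: t[0])[1]   # keep the best-scoring chain

best = best_of_n("prove the lemma", n=8)
print(len(best))  # larger n = more inference-time compute = better answers
```

The closed-loop flavor comes from the verifier: generation is no longer a single push-through pass but a propose-score-select cycle, and spending more on `n` is precisely the inference scaling law Lambert describes.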
Prognostications
OpenAI’s approach to developing reasoning capabilities – and moving into AGI – seems as strong as possible, playing off the well-known reinforcement learning approach and coupling it with step-by-step inference.
The problem, despite OpenAI’s recent fundraising (bringing its market valuation to $157B), is its continued brain drain. Throughout the past year, OpenAI has lost several of its key tech players. It’s not apparent if this has hit the “Strawberry” team or not, but this certainly cannot be good for morale.
Further, there are substantive weaknesses with the LLM + DeepRL approach, evidenced in a recent study by Apple.
Meta: H-JEPA (Hierarchical Joint Embedding Predictive Architecture)
{* Details forthcoming – papers, prior YouTubes, prior blogposts, etc. *}
Verses.ai: RGM (Renormalising Generative Models)
{* Details forthcoming – papers, prior YouTubes, prior blogposts, etc. *}
Themesis: CORTECONs(R)
The final, and least well-known AGI building block is the CORTECON(R) (COntent-Retentive, TEmporally-CONnected neural network), being developed by Themesis, Inc.
{* Details to follow. *}
Comparing the Five AGI Approaches
AGI Architecture
The one thing that is hugely clear about any attempts to create an AGI architecture – especially one that involves a neural-symbolic integration – is that existing computational platforms are not enough.
This is, of course, a problem that is also being addressed in all attempts to do inference; e.g., to integrate reinforcement learning with an LLM, or any similar approach.
A recent study by Wan et al. (2024) addresses this issue. In “Towards Efficient Neuro-Symbolic AI: From Workload Characterization to Hardware Architecture,” they note that “[o]ur studies reveal that neuro-symbolic models suffer from inefficiencies on off-the-shelf hardware, due to the memory-bound nature of vector-symbolic and logical operations, complex flow control, data dependencies, sparsity variations, and limited scalability. Based on profiling insights, we suggest cross-layer optimization solutions and present a hardware acceleration case study for vector symbolic architecture to improve the performance, efficiency, and scalability of neuro-symbolic computing.”
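“Vector-symbolic” here refers to hyperdimensional computing, where symbols are high-dimensional vectors and binding is a cheap elementwise operation – lots of memory traffic, little arithmetic, which is exactly the memory-bound profile Wan et al. describe. A stdlib-only sketch (my own toy, not their benchmark code):

```python
import random

# Toy vector-symbolic sketch: symbols are high-dimensional random
# bit-vectors; binding and unbinding are elementwise XORs. Cheap compute,
# heavy memory traffic -- the "memory-bound" pattern Wan et al. profile.
DIM = 10_000

def hv(rng):
    """A random dense binary hypervector, stored as a Python int."""
    return rng.getrandbits(DIM)

def bind(a, b):
    """XOR binding; it is its own inverse: bind(bind(a, b), b) == a."""
    return a ^ b

def hamming_sim(a, b):
    """1.0 for identical vectors, ~0.5 for unrelated random vectors."""
    return 1 - bin(a ^ b).count("1") / DIM

rng = random.Random(42)
ROLE_COLOR, RED, SQUARE = hv(rng), hv(rng), hv(rng)

record = bind(ROLE_COLOR, RED)    # encode "color = red" as one vector
probe = bind(record, ROLE_COLOR)  # unbind the role to query the filler

print(hamming_sim(probe, RED))     # 1.0: exact recovery
print(hamming_sim(probe, SQUARE))  # ~0.5: unrelated symbol
```

Every operation above streams over 10,000-bit vectors while doing trivial logic per bit; at scale, that ratio is why off-the-shelf hardware bottlenecks on memory bandwidth rather than compute.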
Creating chips that can support AGI architectures will need high-level investment, the sort that conceivably could come from major investors in across-the-board AGI, such as Masayoshi Son, in line with other major investments that Son has recently made across multiple levels of AGI development, as reported by Chowdhury (2024).
Resources and References
Sun Tzu’s The Art of War
- Sun Tzu (Author) and Michael Nylan (Translator). 2020. The Art of War: A New Translation by Michael Nylan. (New York: W. W. Norton & Company.)
Planning in AI Systems
- Mirzadeh, Iman, Keivan Alizadeh, Hooman Shahrokhi, Oncel Tuzel, Samy Bengio, and Mehrdad Farajtabar. 2024. “GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models.” arXiv:2410.05229v1 [cs.LG] 7 Oct 2024. (Accessed Oct. 15, 2024; available at arXiv.)
- Newport, Cal. 2024. “ChatGPT Can’t Plan. This Matters.” CalNewport.com (March 21, 2024). (Accessed Oct. 15, 2024; available at CalNewport.com.)
Assessing Reinforcement Learning for AGI Potential
- Sajid, Noor, Philip J. Ball, Thomas Parr, and Karl J. Friston. 2020. “Active Inference: Demystified and Compared.” arXiv:1909.10863v3 [cs.AI] 30 Oct 2020. (Accessed Aug. 10, 2024; available online at https://arxiv.org/pdf/1909.10863.)
- LeCun, Yann. 2022. “A Path Towards Autonomous Machine Intelligence. Version 0.9.2, 2022-06-27” OpenReview (June 27, 2022). (Accessed May 20, 2024. Available online at OpenReview.)
AGI-RL-LLM: AGI Based on a Combination of Reinforcement Learning with a Large Language Model
OpenAI
- Lambert, Nathan. 2024. “Reverse Engineering OpenAI’s o1.” Interconnects.ai Tech Blog (24 Sept. 2024). (Accessed Oct. 7, 2024; available at N. Lambert’s Interconnects.ai.)
- Mirzadeh, Iman, et al. 2024. “GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models.” arXiv:2410.05229v1 [cs.LG] 7 Oct 2024. (Accessed Oct. 15, 2024; available online at arXiv.)
- Robison, Kylie. 2024. “OpenAI Releases o1, Its First Model with ‘Reasoning’ Abilities.” The Verge (Sept. 12, 2024). (Accessed Oct. 7, 2024; available at The Verge.)
- Zeitchik, Steven. 2024. “What the Heck Is Going On At OpenAI?” The Hollywood Reporter (Oct. 4, 2024). (Accessed Oct. 7, 2024; available at The Hollywood Reporter.)
- Newport, Cal. 2024. “Can an AI Make Plans?” The New Yorker (March 15, 2024). (Accessed Oct. 15, 2024; available at The New Yorker.)
Google DeepMind
- Google SIMA Team. 2024. “A Generalist AI Agent for 3D Virtual Environments.” Google DeepMind Tech Blog series (2024). (Accessed Oct. 7, 2024; available at Google DeepMind.)
- Heaven, Will Douglas. 2024. “Google DeepMind’s New Generative Model Makes Super Mario–like Games from Scratch.” MIT Technology Review (Feb. 29, 2024). (Accessed Oct. 15, 2024; available at MIT Tech Review.)
- Williams, Rhiannon. 2024. “Google DeepMind’s New AI Systems Can Now Solve Complex Math Problems.” MIT Technology Review (July 25, 2024). (Accessed Oct. 7, 2024; available at MIT Tech Review)
AGI-RGM: AGI Based on Renormalisation Generative Models
- Friston, Karl, Conor Heins, Tim Verbelen, Lancelot Da Costa, Tommaso Salvatori, Dimitrije Markovic, Alexander Tschantz, Magnus Koudahl, Christopher Buckley and Thomas Parr. 2024. “From Pixels to Planning: Scale-Free Active Inference.” arXiv:2407.20292v1 [cs.LG] 27 Jul 2024. doi:10.48550/arXiv.2407.20292. (Accessed Aug. 10, 2024; available online at https://arxiv.org/pdf/2407.20292.)
AGI-H-JEPA: AGI Based on Hierarchical Joint Embedding Predictive Architecture
It’s a pay-to-read piece; if you can’t get it, not the worst thing, but if you CAN get it – it’s nice.
- Heikkilä, Melissa and Will Douglas Heaven. 2022. “Yann LeCun has a Bold New Vision of AI.” MIT Technology Review (Available to those who pay HERE.)
And there’s a brief summary of this already brief article that IS available, for the many-MANY of us who’ve subscribed to Medium.com. Useful if you want this in a very small nutshell.
- Phair, Meryl. 2022. “MIT Technology Review Features Yann LeCun’s Vision for the Future of Machine Learning.” Medium.com. (Available HERE.)
These are Yann LeCun’s works on JEPA and H-JEPA, together with commentary and recent works in concert with others at Meta-FAIR.
- Garrido, Quentin, Mahmoud Assran, Nicolas Ballas, Adrien Bardes, Laurent Najman and Yann LeCun. 2024. “Learning and Leveraging World Models in Visual Representation Learning.” arXiv: 2403.00504v1 [cs.CV] 1 Mar 2024. (Accessed May 27, 2024; available online at arXiv.)
- LeCun, Yann. 2022. “A Path Towards Autonomous Machine Intelligence. Version 0.9.2, 2022-06-27” OpenReview (June 27, 2022). (Accessed May 20, 2024. Available online at OpenReview.)
- OpenReview Comments on LeCun’s “A Path Towards Autonomous Machine Intelligence” (2022). (OpenReview Comments on LeCun (2022).)
Also, THIS PRIOR BLOGPOST presented Yann LeCun’s 2020 Plenary at the ICRA conference, as well as a couple of works on self-supervised learning – which LeCun discusses at some length.
- Maren, Alianna J. 2023. “Latent Variables.” Themesis Blogpost Series (Dec., 2023). (Accessed May 28, 2024; available HERE.)
LeCun’s very similar Keynote address at the 2020 ICLR conference is HERE.
AGI-CORTECON: AGI Based on COntent-Retentive, TEmporally-CONnected Neural Networks
{* Details forthcoming – papers, prior YouTubes, prior blogposts, etc. *}
AGI Chips and Architectures
- Chowdhury, Hasan. 2024. “SoftBank Just Handed OpenAI $500 Million. For Masayoshi Son, It’s Just the Beginning.” Business Insider Africa (2 Oct. 2024). (Accessed Oct. 7, 2024, available at Business Insider Africa.)
- Wan, Zishen, et al. 2024. “Towards Efficient Neuro-Symbolic AI: From Workload Characterization to Hardware Architecture.” IEEE Trans. on Circuits and Systems for Artificial Intelligence (September, 2024). Available also on arXiv; arXiv:2409.13153v2 [cs.AR] (23 Sep 2024). (Accessed Oct. 7, 2024; available at arXiv.)
LLM Challenges: Energy + Cost
- Lockett, Will. 2024. “OpenAI Is About To Make The AI Bubble A You Problem.” Medium.com (member-only story; Sept. 15, 2024). (Accessed Oct. 15, 2024; available at Medium.com – members only.)
Some Challenges Facing REAL AGI (Symbolic Reasoning)
- Ebtekar, Aram, and Marcus Hutter. 2024. “Modeling the Arrows of Time with Causal Multibaker Maps.” Entropy 26(9):776; doi:10.3390/e26090776. (Accessed Oct. 15, 2024; available at Entropy.)
Industry Notes
OpenAI moving towards edge computing:
- Coldewey, Devin. 2024. “OpenAI Snatches Up Microsoft Generative AI Research Lead.” TechCrunch (Oct. 14, 2024). (Accessed Oct. 15, 2024; available at TechCrunch.)