Quick Look at AGI Evolution: Current Status

Quick Highlight Summary

AGI isn’t here yet, but we’ve got a pretty good sense of where and how it will evolve.

We identify where narrow AI is pushing forward – FAR FASTER than we’d projected in last week’s YouTube + blogpost.

This means that we need to focus – faster than we expected – on the AGI underpinnings, so that we can be on top of AGI as it emerges – likely MUCH FASTER than we’d thought possible.

Specific strategies identified, discussed, and recommended. These include a YouTube generative AI playlist – see link below – a GREAT way to spend a weekend of binge-watching. You WILL get a decent gen-AI framing with this!

Themesis, Inc. “Generative AI Playlist.” Revised 2024.

AGI as We Know It Today

In the previous blogpost and YouTube combination, we suggested a framework for the emerging AGI landscape.

To be clear – AGI is NOT HERE YET.

But, we have a reasonable sense of the landscape from which AGI will evolve.

Figure 1. Three distinct platforms or methods support distinctly different kinds of AGI evolution.

There are just three fundamental methods – methods currently in use – that provide the jumping-off points for AGI evolution:

  • RGMs (Renormalising Generative Models) – from Friston & Colleagues, out of Verses.ai,
  • H-JEPA (Hierarchical Joint Embedding Predictive Architecture) – from LeCun & Colleagues, out of Meta, and
  • LLMs (and variations on that theme) + (Deep) Reinforcement Learning (and variations on that theme) – from EVERYONE ELSE (Microsoft, OpenAI, Anthropic/Amazon, presumably from Salesforce … the world-at-large).

For details, see last week’s blogpost as well as our most recent YouTube:

YouTube video on “AGI Wars: Emerging Landscapes,” illustrating three known emerging AGIs; AGI-H-JEPA (lower-left) emerging from LeCun’s H-JEPA evolution at Meta; AGI-RGM (upper-left), emerging from Friston’s work at Verses.ai; and AGI-RL-LLM (center left), an AGI architecture using reinforcement learning (RL) built on top of a large language model (LLM), with separate versions being developed by both OpenAI and Google.

Recent NARROW AI Evolutions – “Agents”

It was a bit of a shocker, last week.

We had THOUGHT that there would be some time between producing the vid in which we predicted “narrow AI” as a rapid outpouring from the LLM giants – that is, a cobbling-together of (deep) reinforcement learning and each giant’s particular LLM (or variation on that theme – something involving vision, code, math, etc.) – and the actual emergence of those same “narrow AI” agents.

We were wrong.

Last week alone, various companies announced their “AI Agents” – in a rapid flurry:

  • Salesforce: “Agentforce” AI agents trained using reinforcement learning coupled with company-specific information on the Data Cloud. (Indirect snark against agents trained on LLMs.) (They’ve been at this for a while, but had a new announcement last week.)
  • Microsoft: Copilot agents (again, reinforcement learning). Satya Nadella announces the release of ten different Copilot agents. (Direct competition with Salesforce.)
  • Anthropic: Also in the mix; recent agent releases.

Here are a few quick links.

I am SO NOT SUGGESTING that you click-and-open all the links, or do more than a fast-scan of the YouTubes – all this does is set the context.

Quick Summary: Agents are here. They are typically developed using reinforcement learning. This is what establishes specifically-tuned performance for different customer applications, and allows these agents to stay within “guardrails.” Not every company blurb or tech piece mentions reinforcement learning, but this is what underlies all of them. These agents work with either an LLM or with a (customer-)specific data repository.
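
To make that “RL coupled with something” pattern concrete, here is a toy sketch of my own (NOT any vendor’s actual implementation) showing the shape of the loop: an agent picks among a few response strategies with a simple epsilon-greedy bandit, learns from a simulated reward signal, and enforces a hard guardrail check on its output. The fake_llm, guardrail_ok, and simulated_reward functions are stand-ins of my own invention.

```python
# Toy sketch: an "agent" that picks among response strategies with an
# epsilon-greedy bandit, scores each choice with a simulated reward, and
# enforces a hard guardrail check on its output. The LLM call is a stub.

import random
from collections import defaultdict

STRATEGIES = ["concise_answer", "step_by_step", "escalate_to_human"]
BLOCKED_TERMS = {"ssn", "password"}          # stand-in "guardrail" list

def fake_llm(prompt: str, strategy: str) -> str:
    """Stub for an LLM call or a customer data-repository lookup."""
    return f"[{strategy}] response to: {prompt}"

def guardrail_ok(text: str) -> bool:
    """Hard check: block any output containing a forbidden term."""
    return not any(term in text.lower() for term in BLOCKED_TERMS)

def simulated_reward(strategy: str) -> float:
    """Pretend customer feedback: some strategies suit this customer better."""
    base = {"concise_answer": 0.7, "step_by_step": 0.5, "escalate_to_human": 0.2}
    return base[strategy] + random.uniform(-0.1, 0.1)

# Simple value estimate per strategy (a one-state Q-table).
q = defaultdict(float)
counts = defaultdict(int)
epsilon = 0.1

for step in range(500):
    # Explore occasionally; otherwise exploit the best-known strategy.
    if random.random() < epsilon:
        strategy = random.choice(STRATEGIES)
    else:
        strategy = max(STRATEGIES, key=lambda s: q[s])

    reply = fake_llm("How do I reset my account?", strategy)
    if not guardrail_ok(reply):
        reward = -1.0                        # hard penalty for a guardrail breach
    else:
        reward = simulated_reward(strategy)

    # Incremental average update of the action-value estimate.
    counts[strategy] += 1
    q[strategy] += (reward - q[strategy]) / counts[strategy]

print({s: round(q[s], 2) for s in STRATEGIES})
```

The value of the sketch is the loop itself (choose, act, score, update); that is the part reinforcement learning contributes, whatever LLM or data repository sits underneath.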


What This Means to You

Whether you’re a CXO in a major “agentic” company, or are developing your own AI career – the quick word is: reinforcement learning is the dominant “tool in the toolbox” right now. It’s typically coupled with SOMETHING – an LLM or a (customer-specific) data repository.

If you’re early in your career, this is a good time to pick up on reinforcement learning skills.

But for the long haul – for REAL AGI – my money is still on active inference, and its recent evolution into RGMs (Renormalising Generative Models).

I’ve never been all that interested or invested in reinforcement learning – it’s never “tugged” at me the way that active inference has.

But active inference (and see the WEALTH of new information – tech blogs, vids, all sorts of new things added to Verses.ai) is a long-term play.

Reinforcement learning is the “here-and-now.” There are probably jobs available right now. (Especially if you can demo a project coupling RL with ANYTHING.)

And … I don’t mean to diminish reinforcement learning … but it’s fundamentally an OLDER method. (Originally conceived in 1979, if you read Barto and Sutton – see last week’s blogpost.)

This doesn’t diminish it – rather, it points to WHY there is such a strong support for RL – there’s been DECADES for researchers to develop and refine methods.

In contrast, active inference is:

  • Much newer … 2010 origin point vs. 1979 for RL,
  • MUCH more mathematically / conceptually complex, and
  • JUST NOW being brought into industry applications. (At this point, only one company – Verses.ai – is significantly invested into active inference, and they have a much smaller funding base than the megaliths using RL.)

Immediate Impact on Your Personal Action Plan

You can learn reinforcement learning with NO reference – AT ALL – to generative methods. (That is, until you get to “deep” RL – which DOES involve some generative elements, because it works with conditional probabilities.)
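
If you want to see just how generative-free basic RL is, here is a minimal tabular Q-learning sketch (a toy example of my own, not drawn from any particular course): the agent never builds a model of the environment’s transition probabilities; it simply samples transitions and updates a table of action values.

```python
# Minimal tabular Q-learning on a 1-D corridor. Note that there is no
# generative model of the environment anywhere -- the agent only samples
# transitions and updates a value table from observed rewards.

import random

N_STATES = 6          # states 0..5; reaching state 5 gives reward +1
ACTIONS = [-1, +1]    # move left or right
alpha, gamma, epsilon = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics, observed only through sampling."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def greedy_action(state):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(300):
    state = 0
    for t in range(200):                      # per-episode step cap, as a safety net
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = greedy_action(state)
        next_state, reward, done = step(state, action)
        # Q-learning update: bootstrap from sampled experience only.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state
        if done:
            break

# State values rise toward the rewarding end of the corridor.
print({s: round(max(Q[(s, a)] for a in ACTIONS), 2) for s in range(N_STATES)})
```

Everything here is sampled experience plus an incremental table update; the conditional-probability machinery only enters once you move to stochastic policies and “deep” RL.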

But for simplicity, say that you can learn reinforcement learning pretty fast. You can take a one-quarter course at Northwestern on RL, or take a bunch of Coursera courses, or read a lot of articles and watch YouTubes, and put together a decent project, and claim some expertise with RL – all within a single quarter.

In contrast, learning active inference … my best guess and guidance … would be to plan on about three quarters, or nearly a year. (Or more. I STILL don’t claim to understand it that well – despite having spent YEARS on the subject.)

MOST IMPORTANT: You can do RL without generative methods. (Mostly.)

In contrast, active inference is absolutely DEPENDENT on generative methods.

You HAVE to get your “generative AI” knowledge SOLID before you can progress to active inference (via the variational inference “gateway”).
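
For orientation, here is the generic textbook form of the variational free energy that sits at that gateway (the standard expression, not a quote from any one paper or from the course):

```latex
% Variational free energy: an expectation under the approximate posterior q(s)
% of the log-ratio between q(s) and the generative model p(o, s) = p(o|s) p(s).
F = \mathbb{E}_{q(s)}\!\left[ \ln q(s) - \ln p(o, s) \right]
  = \underbrace{D_{\mathrm{KL}}\!\left[ q(s) \,\|\, p(s) \right]}_{\text{complexity}}
    \; - \;
    \underbrace{\mathbb{E}_{q(s)}\!\left[ \ln p(o \mid s) \right]}_{\text{accuracy}}
```

The generative model, p(o, s), shows up explicitly in both terms, which is why there is no shortcut around the generative-AI groundwork.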

We offer a short course. It’s in revision status now. Release likely within two weeks – just ahead of November’s Black Friday / Cyber Monday sales. Look for this.

Themesis Short Course: “Top Ten Terms in Statistical Mechanics.” Available for new release mid-November, 2024.

You could have a handle on generative AI before Christmas, and spend a little of your Christmas vacation pursuing deeper topics (such as variational/active inference), and then feel pretty solid by the beginning of 2025 – which is when I think that active inference will REALLY START TO EMERGE – and AGI WILL ALSO APPEAR.

Limited, but real.


Quick Peek Inside the Course: “Day 5: Entropy” Video

Generative AI uses statistical mechanics.

What’s important to know is that the essential, core stat-mech equation – the free energy equation – is interpreted and used differently in energy-based neural networks (one kind of generative AI; this includes transformers) versus the variational inference kind of generative AI. HUGELY important distinction!
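
Here is a quick way to see that distinction on paper (standard forms, summarized by me rather than quoted from the course):

```latex
% Energy-based reading (Boltzmann-machine-style networks, with T = 1):
% the free energy is a property of a single model distribution p(x).
p(x) = \frac{e^{-E(x)}}{Z}, \qquad F = -\ln Z = \langle E \rangle - S

% Variational reading (variational / active inference):
% the free energy is a functional of TWO distributions, the approximate
% posterior q(s) and the generative model p(o, s), and it bounds surprise.
F = \mathbb{E}_{q(s)}\!\left[ \ln q(s) - \ln p(o, s) \right]
  = D_{\mathrm{KL}}\!\left[ q(s) \,\|\, p(s \mid o) \right] - \ln p(o)
  \;\ge\; -\ln p(o)
```

Same symbol, two different objects: one characterizes a single equilibrium distribution; the other measures how well an approximating distribution matches a generative model. That is the distinction being flagged here.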

Here’s the latest vid:

Maren, Alianna J. 2024. “Day 5: Entropy – Available on YouTube through Sunday, Nov. 3 Only!” Themesis, Inc. YouTube Channel (Nov. 1, 2024). (Available online at the Themesis YouTube Channel.)

The Tech Wars – Quick Summary on Agents

Microsoft vs. Salesforce

This last week, Yahoo Finance put out THIS YouTube on the Microsoft-vs-Salesforce dynamic in fielding their AI agents.

Yahoo Finance. 2024. “Microsoft, Salesforce lead autonomous AI agent race.”

Predictably, this vid is focused on market size – anticipated by Yahoo Finance people to be $50B by 2030.

Salesforce

Salesforce is busy promoting their Agentforce with a series of YouTube vids. One of the most recent is here.

YouTube video: Salesforce. 2024. “Discover Agentforce: What AI Was Meant to Be.” Salesforce YouTube Channel (Oct. 30, 2024). (Accessed Nov. 1, 2024; available at Salesforce YouTube.)

This is actually a pretty good (42-min) vid. I like the discussion (min 8:36) on the difference between “agents” and “bots” or “copilots.” (Direct snark at Microsoft, with that last one.) (Quick summary: bots are structured and rigid, copilots are reactive, agents are proactive.)

One of the presenters touches briefly on reinforcement learning, distinguishing it from a large set of structured if-thens.

They emphasize that their user interface for agent-building is “low-code/no-code.”

Salesforce has been at this for a while – they are NOT newcomers! Two years ago, they published THIS tech blog:

  • Wang, Huan and Stephan Zheng. 2022. “Turbocharge Multi-Agent Reinforcement Learning with WarpDrive and PyTorch Lightning.” The 360 Blog (produced by Salesforce) (May 20, 2022). (Accessed Nov. 1, 2024; available online at The 360 Blog.)

Microsoft

Microsoft is now emphasizing Copilot agents. They are promising that these “New Autonomous Agents Scale Your Team Like Never Before.”

Here, Microsoft’s Satya Nadella introduces Copilot agents in a YouTube on “AI Tour Demo: Copilot Studio and Pre-built Agents.”

YouTube video: Microsoft. 2024. “AI Tour Demo: Copilot Studio and Pre-built Agents.” Microsoft YouTube Channel.

In another Microsoft YouTube, the presenter goes into some detail about reinforcement learning policies for agents. It’s a bit older (from “Reinforcement Learning Day 2019,” held Oct. 3, 2019), but it shows that Microsoft is also not new to this game.

Amato, Christopher. 2019. “Scalable and Robust Multi-Agent Reinforcement Learning.” Microsoft YouTube Channel, Reinforcement Learning Day 2019. (Accessed Nov. 1, 2024; available at the Microsoft YouTube Channel.)

Anthropic

Anthropic is also heavily invested in agents.

The AI Daily Brief: Artificial Intelligence News. 2024. “The Next Step in Our Journey to AI Agents: Anthropic’s Computer Use.” The AI Daily Brief YouTube Channel (Oct. 23, 2024).
