Emerging AGIs: Early 2024 Playing Field

AGI is not here … just yet.

[*BLOGPOST IN PROGRESS – CHECK BACK FOR DAILY UPDATES!* (AJM, Tuesday, May 14, 2024; 0200 Hawai’i Time)]

But – it’s getting close.

Very, VERY close.

What’s interesting – and important to all of us – is that there will be no “singular” AGI, and it’s important to make this distinction.

There are going to be multiple AGIs – actually, multiple KINDS of AGIs.

In the immediate future, we’ll see multiple kinds of proto-AGIs: systems that attempt AGI capabilities (and perhaps come close). Within a few years, we’ll view them as “prototypes” – and early ones at that.

There will be some things in common, and some key differences.

This blogpost supports a YouTube video created for the second week of May, 2024. (Actual release date still TBD.) The theme of this video is “Emerging AGIs: The Early 2024 Playing Field.”


LLM-Based AGIs

The first round of AGIs and proto-AGIs will, by and large, be based on LLMs and other transformer-based systems.

There are six distinct LLM “families” on the market.

Figure 1. We have six “LLM Players” that are candidates for “emerging AGIs;” the larger of these will certainly attempt proto-AGIs; some may succeed in giving early AGI capabilities.

AGI Creation Needs Money + Infrastructure + People

Just to be clear: no matter what any of these companies announce, and no matter what pundits have to say – NONE of these companies is at (or anywhere near) AGI capability right now.

What each team already has:

  • Solid, in-place funding and infrastructure (labs, access to computing power),
  • At least a baseline LLM capability, and
  • People who have at least some sense of what they want (and need) to do.

What’s needed: It takes time for a research team to create a common language and common vision. There is no real way to hurry this process – we’re talking about human beings exchanging ideas – and that means the first thing that EACH team needs to do is establish a common working language.

Each person needs to roughly understand what the other team members know and bring to the table. This also takes time.

AGI is more complex – more subtle, more far-ranging – than building an LLM. More scientific disciplines are involved, and the background study requires more years of effort. So this is an area where the more established teams (Google DeepMind, Google Brain, and Meta’s Fundamental AI Research lab) will all have the advantage.

Even teams that have not had an AGI emphasis in the past (e.g., OpenAI’s team) have a sufficiently strong cadre of very bright minds who have interacted with each other enough (* barring certain attempted palace coups *) that there’s a solid core of ideas.

On the other hand, bringing together a new group of people, with new leadership, stirs the pot – creative ideas that may not have been supported in one environment may find opportunity in a new one.

We’ll talk more about that in the future.

For today … let’s just look at the money trail.


LLMs: Following the Money

We’ll follow the money with the six companies who are already strong LLM players.


OpenAI + Microsoft (Copilot)

We start with OpenAI, where it all began.

Recent stats:

  • OpenAI has received about $11B in investment funding over the past several years, most notably $10B from Microsoft in Jan., 2023, and an undisclosed amount in April, 2024.
  • Microsoft, the largest investor in OpenAI, wraps Copilot, its next-gen moonshot, firmly around an OpenAI base. Microsoft is the world’s most-valued corporation, with a market cap of $3.1T.

Google and Gemini

Despite its ill-fated launch (those overly-woke gen-AI “historical images” were just too much), Gemini remains the new solid core of Google’s generative AI capability.


Meta

Meta’s Llama 3 has been lauded as a front-runner LLM. Brief stats:


X.ai

Elon Musk just announced a $6B investment round into his AI company, X.ai, which he formed in March, 2023. He’s assembled a considerable tech team. His intention is to build the next “super-app.” The current X.ai product is Grok, a multimodal chatbot that some say rivals ChatGPT.


Anthropic

Anthropic, which formed in 2021, has emphasized its commitment to “ethical AI,” with a focus on alignment and AI safety. Anthropic takes a somewhat different approach to training its LLMs.

Instead of laboriously providing a model with human feedback, the researchers give a model a list of general principles (a “constitution”) to follow when creating its responses. A second AI then monitors the first, continually critiquing how well it follows the principles.

Sullivan, Mark. 2024. “How Anthropic Has Doubled Down on AI Safety.” Fast Company (March 19, 2024).
  • Anthropic’s valuation is currently pegged at $15B, down from an earlier valuation of $18B. Anthropic reportedly sought the lower figure to mitigate provisions under which prior financial backers would gain additional shares once Anthropic’s valuation reached a higher level.
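As a rough illustration of that critique-and-revise training idea, here is a minimal sketch in Python. Everything in it is a toy stand-in – in practice both the “assistant” and the “critic” are LLMs, and the constitution is a much longer list of principles; all names and logic here are hypothetical, for illustration only:

```python
# Toy sketch of a constitutional-AI-style critique-and-revise loop.
# Both "models" below are simple stand-ins for what would be LLM calls.

CONSTITUTION = [
    "Do not reveal personal data such as home addresses.",
    "Avoid insulting or demeaning language.",
]

def assistant_model(prompt, critique=None):
    """Toy 'assistant': drafts a response; revises it when given a critique."""
    if critique:
        return "I can't share that address, but I can point you to public records."
    return "Sure - the address is 12 Oak St."

def critic_model(response, constitution):
    """Toy 'critic': returns the principles the response appears to violate."""
    violations = []
    if "address is" in response:  # crude proxy for leaking personal data
        violations.append(constitution[0])
    return violations

def constitutional_loop(prompt, max_rounds=3):
    """Draft, critique against the constitution, revise - until clean."""
    critique = None
    for _ in range(max_rounds):
        response = assistant_model(prompt, critique)
        critique = critic_model(response, CONSTITUTION)
        if not critique:
            return response
    return response

print(constitutional_loop("What is Bob's home address?"))
```

The key design point is that the feedback signal comes from the second model’s critique against the written principles, rather than from per-example human labels.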

Mistral

Mistral, a French AI company, has released several versions of its flagship Mixtral model (a play on the “mixture of experts” notion) since its April, 2023 formation.

{* Blogpost in progress *}


Reinforcement Learning

The notion of reinforcement learning (RL), based on work begun by Richard Sutton and Andrew Barto in 1979, is nearly as old as our current neural networks. (We trace that origin point to Werbos’s 1974 invention of backpropagation.) The earliest forerunner publication was Sutton & Barto’s 1981 paper.

This deep evolutionary history is comparable to that of generative AI, where the originating paper was by William Little in 1974, with Hopfield’s expansion and commentary in 1982, and Hinton’s synthesis of latent variables with energy-based neural networks into the Boltzmann machine in 1983.

The notion of reinforcement learning builds strongly on ideas of optimal adaptive control advanced by Bellman. (See a good tutorial by Torres, 2020.)
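Since the Torres tutorial cited above centers on the V-function and Q-function, here are the two Bellman optimality equations in standard MDP notation (these are the standard textbook forms, not quoted from the tutorial itself):

```latex
% V*(s): value of state s under an optimal policy
% Q*(s,a): value of taking action a in state s, then acting optimally
% P(s'|s,a): transition probability; R(s,a,s'): reward; \gamma: discount factor
V^*(s) = \max_{a} \sum_{s'} P(s' \mid s, a)\,\bigl[ R(s, a, s') + \gamma \, V^*(s') \bigr]

Q^*(s, a) = \sum_{s'} P(s' \mid s, a)\,\bigl[ R(s, a, s') + \gamma \max_{a'} Q^*(s', a') \bigr]
```

The recursive structure – today’s value defined in terms of the discounted value of successor states – is the optimal-adaptive-control idea from Bellman that RL builds on.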


Future of AI – as Envisioned by Sam Altman

Sam Altman, in his recent interview with Palantir, describes his envisioned future for AI (really, AGI).

Sam discusses (see Min. 4:57) two alternative futures that people envision for AGI. One is “AGI as alter ego” – that is, over time, the AI “becomes me,” handling all sorts of (routine) tasks the way that we each would envision. The other role that people discuss could be described as “I want a great senior employee.”

Also, at Min. 5:38: “There’s a difference between a senior employee and an agent.”

In a recent interview with MIT Technology Review, Altman describes his envisioned “killer app” for AI as a “super-competent colleague that knows absolutely everything about my whole life, every email, every conversation [that] I’ve ever had, but doesn’t feel like an extension.” The MIT Technology Review goes on to describe Altman’s vision: “It could tackle some tasks instantly, he said, and for more complex ones it could go off and make an attempt, but come back with questions for you if it needs to.”

AJM’s Note: Personally, I think that this is a likely evolutionary path for certain kinds of AGIs. Let’s keep in mind that emerging AGIs will not be all the same; we’re going to have an evolutionary explosion – much like when birds evolved from dinosaurs. There were a LOT of different birds that emerged!

And one more personal note: AGIs – because they’ll need to fuse signal-based processing (no matter how large the context window) with symbolic reasoning – will be more like hippogriffs than birds. They’ll be a strange, semi-mystical fusion of bird (a dinosaur evolution; the signal-processing baseline) with mammal (the ontology/knowledge-graph/symbolic-world-model representation).

Figure. AGIs will be more like hippogriffs than either dinosaurs or mammals. That is, they will necessarily be hybrids of signal-level processing (text, speech, or video/image, or even code) with symbolic representations.

Resources and References

LLMs: Investment and Parent Company Market Cap/Valuation

OpenAI and Microsoft’s Copilot

  • OpenAI has received about $11B in investment funding over the past several years, most notably $10B from Microsoft in Jan., 2023, and an undisclosed amount in April, 2024. While the venerable Wikipedia is not the best source for academic citations, their OpenAI history and funding description is reasonably up-to-date and accurate.
  • Microsoft, the largest investor in OpenAI, wraps Copilot, its next-gen moonshot, firmly around an OpenAI base. Microsoft is the world’s most-valued corporation, with a market cap of $3.1T.

Google and Gemini


Meta


X.ai


Anthropic


Mistral


Generative AI – Early Works

{* References to be added. *}


Reinforcement Learning

  • Sutton, Richard S. and Andrew G. Barto. 1981. “Toward a Modern Theory of Adaptive Networks: Expectation and Prediction.”  Psychological Review. 88(2): 135–170. doi:10.1037/0033-295X.88.2.135. (Accessed May 14, 2024; article link.)
  • Sutton, Richard S. and Andrew G. Barto. 2015. Reinforcement Learning: An Introduction (2nd Ed.; in progress). (Accessed May 14, 2024; available as a PDF.)
  • Torres, Jordi. 2020. “The Bellman Equation: V-function and Q-function Explained.” Towards Data Science (June 11, 2020.) (Accessed May 14, 2024; available at TowardsDataScience.com.)

Verses.ai

  • Verses has received approximately $21M in funding since its incorporation in 2018; the most recent investment was for $6.81M on April 17, 2024.

Sam Altman Interviews

  • O’Donnell, James. 2024. “Sam Altman Says Helpful Agents Are Poised to Become AI’s Killer Function.” MIT Technology Review (May 1, 2024). (Accessed May 15, 2024; available online at MIT Tech Review.)