AGI: Generative AI, AGI, the Future of AI, and You

It’s been a hugely eventful time in AI lately. Just … a lot of shifts. (Sidebar: We’ve collected a few YouTubes – theirs, not ours – in support; click on Salon 5 for some good watching.)

Every so often, each of us (as an AI student or professional) needs to take a moment and recalibrate. Ask the basics. Who are we, in the AI realm? What do we desire to learn, master, do? Why do we want this – what’s our personal dream, vision, passion?

And then (this is the recalibration part) – how does this clarified vision of ourselves and our desires mesh with what is going on in AI, right now? (And also, with the best SET of projections that we can each come up with … from near-term to reasonably out-there.)


Establishing a Baseline

No matter who we are, where we are in our studies or endeavors, and what we hope to do – and no matter where the shifting, ever-(rapidly-)evolving AI landscape is at any given moment – our first step is to establish a baseline.

We all know that, over the past year and a half, the AI world has gone through a rapid and major evolution with the emergence of generative AI as “something that everyone can use.” (Same sidenote: check out Salon 5, which will happen on Sunday, April 14, 2024, for a good backgrounder.)

A few points to clarify.


Generative AI – A VERY Brief History and Overview

First, generative AI is not new.

Generative AI, in fact, is about fifty years old. It’s based on work done by William Little (1974), and then John Hopfield (1982) and Geoffrey Hinton and colleagues (1983).

Generative AI is based on the energy-based neural network known as the Boltzmann machine (later the restricted Boltzmann machine, or RBM), which had its predecessor in a 1974 invention by William Little, which John Hopfield expanded in his 1982 study.

The transformer architecture, on which the MOST RECENT evolutions in generative AI are based, is also not new. It’s a 2017 invention, by a group of Google Brain researchers (Vaswani et al., 2017).

Transformers led to the evolution of Large Language Models (LLMs) as well as other methods, e.g. generative image creation.

All of our LLMs and other transformer-based methods come from that one moment.


Generative AI is More than Transformer-Based Architectures

The important thing to realize is that generative AI is not just transformer-based methods, such as LLMs and image generation.

There are actually several KINDS of generative AIs. Transformer-based generative AIs are just the most recent version.

Generative AI methods include:

  • Energy-based neural networks, including the Hopfield neural network, the Boltzmann machine, the restricted Boltzmann machine (RBM), and all descendants – such as deep learning networks and generative adversarial networks (GANs). (A minimal energy-function sketch follows just after this list.)
  • Variational inference, which is the means by which we try to approximate a model that describes a (typically) large and complex set of possibilities; variational inference methods are used (for example) in modeling the potential visual landscape for autonomous vehicles. Karl Friston’s active inference is an advance on basic variational inference.
  • Variational autoencoders, which have something in common with Boltzmann machines AND with variational inference, but are neither; they also have something in common with transformers, but are not a direct precursor; these are their own distinctive AI-eco-niche, and are used to “encode” and then “decode” representations of (typically) complex systems.
  • Transformers, the newest kid on the block – used for generating sequences of text, images, and code, all based on input “prompts.”
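
To make the energy-based family a little more concrete, here’s a minimal sketch (in Python) of the restricted Boltzmann machine’s energy function and its hidden-given-visible conditional probability. The weights here are random and illustrative, not trained, and the function names and tiny network sizes are ours, purely for illustration.

```python
import numpy as np

# Minimal sketch of a restricted Boltzmann machine (RBM).
# W, b, and c below are illustrative (random) parameters, not trained ones;
# the function names are ours, for illustration only.

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # visible-to-hidden weights
b = np.zeros(n_visible)                                # visible-unit biases
c = np.zeros(n_hidden)                                 # hidden-unit biases

def rbm_energy(v, h):
    """Standard binary-RBM energy: E(v, h) = -v.b - h.c - v.W.h"""
    return -(v @ b) - (h @ c) - (v @ W @ h)

def hidden_given_visible(v):
    """Bayesian conditional P(h_j = 1 | v), derived from the energy."""
    return 1.0 / (1.0 + np.exp(-(c + v @ W)))

# One illustrative configuration: a random binary visible vector,
# and a hidden vector sampled from P(h | v).
v = rng.integers(0, 2, size=n_visible).astype(float)
p_h = hidden_given_visible(v)
h = (rng.random(n_hidden) < p_h).astype(float)

print("energy E(v, h):", rbm_energy(v, h))
print("P(h = 1 | v):  ", np.round(p_h, 3))
```

Alternating between sampling from P(h | v) and P(v | h) – Gibbs sampling – is how a Boltzmann machine generates new configurations, which is why this family counts as generative.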

Here’s the overall picture of generative AI methods.

There are three original kinds of generative AI (energy-based neural networks, variational inference, and variational autoencoders). Transformer-based methods, added in 2017, are the newest form; they bring in architectural elements that go beyond the basic generative methods.

The Math, Information Theory, and Physics behind Generative AI

There are three essential methods used in generative AI:

  • The reverse Kullback-Leibler divergence,
  • Bayesian conditional probabilities, and
  • Statistical mechanics.

All generative AI methods share this common core in their fundamentals.
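
For those who like to see the equations, here is one standard way of writing those three pieces. The notation (Q for the approximating model, P for the target distribution, E for an energy function, T for a temperature parameter, Z for the partition function) is ours, chosen for illustration; your favorite textbook may write these a bit differently.

```latex
% Reverse Kullback-Leibler divergence: the approximating model Q is scored
% against the target distribution P (notation ours, for illustration).
D_{KL}(Q \,\|\, P) \;=\; \sum_{x} Q(x) \,\ln \frac{Q(x)}{P(x)}

% Bayesian conditional probability (Bayes' rule), relating hidden causes h
% to visible (observed) data v.
P(h \mid v) \;=\; \frac{P(v \mid h)\, P(h)}{P(v)}

% Statistical mechanics: the Boltzmann (Gibbs) distribution over states x,
% with energy E(x), temperature T, and partition function Z.
p(x) \;=\; \frac{e^{-E(x)/T}}{Z}, \qquad Z \;=\; \sum_{x} e^{-E(x)/T}
```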

When we move on to transformers, we still keep these three core fundamentals, but also add in two new methods:

  • Multihead attention, and
  • Positional encoding (sinusoidal signal representation).
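
As a concrete taste of the second of these, here’s a minimal Python sketch of the sinusoidal positional encoding from Vaswani et al. (2017). The sequence length and model dimension are toy values – our choice, purely for illustration – and a full multihead-attention implementation is beyond the scope of this post.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding (Vaswani et al., 2017).

    PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    Assumes d_model is even.
    """
    positions = np.arange(seq_len)[:, np.newaxis]            # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]            # the "2i" values
    angle_rates = 1.0 / np.power(10000.0, dims / d_model)     # per-dimension frequencies
    angles = positions * angle_rates                          # (seq_len, d_model/2)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)   # odd dimensions: cosine
    return pe

# Toy example: 8 positions, model dimension 16 (sizes are illustrative only).
pe = sinusoidal_positional_encoding(seq_len=8, d_model=16)
print(pe.shape)                  # (8, 16)
print(np.round(pe[1, :4], 3))    # encoding of position 1, first four dimensions
```

The reason the positional signal matters: attention by itself is order-free, so without it a transformer would treat its input as an unordered bag of tokens.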

Generative AI is NOT Artificial General Intelligence (AGI)

Generative AI is a significant step.

However, current instantiations tell us about the limits of generative AI (or gen-AI). Gen-AI takes a LOT of computational units, parameters, and training, and is VERY energy-expensive.

And it doesn’t always work. Hallucinations, that sort of thing. We discuss these limitations in this analysis of Google’s lamented early Gemini 1.5 release.

What we CAN’T get out of LLMs – presenting the case for AGI.

In this vid, we tease what it is that WILL solve the problems inherent not only in Google’s Gemini 1.5, but in all LLMs – their ultimate reliance on an (extensive, exhaustive) past history of probabilistic associations, with no “knowledge” of what these probabilistic likelihoods really mean.

What we really want is artificial GENERAL intelligence, or AGI.

We introduce a basic AGI architecture in THIS YouTube video.


How to Self-Study Generative AI

In the FOLLOWING blogpost, we propose a self-study plan.

We have lots of resources.

We also offer a short course.

More on this tomorrow.

Be well – AJM



What’s Next with Themesis (and How to Stay Informed)

With CORTECONs now introduced (in multiple previous YouTubes and blogposts), and with a decent coverage of generative AI, we’re moving back to the realm of AGI, and building out components of a very basic, very rudimentary, yet conceptually-complete AGI.

We’ll be reporting on this as time goes on.

To be sure that you get updates, go to our Opt-In page, which is under the “About” menu button.

Scroll down, find the Opt-In form, fill it in, check your email for the confirmation, and confirm.

Be certain to MOVE THE THEMESIS EMAILS to your preferred folder. We’re pretty strict about our email subscribers these days; those who don’t open their emails find themselves dropped. Nemesis-style justice; the milder version.

If you REALLY want to stay on our email list (which is a privilege, not an obligation), then CLICK on some buttons occasionally. That tells us:

  • That you’re alive and you’re paying attention, and
  • What interests you – so we can respond to those interests.

The real fun – with AGI – is just ahead. Join us for an uproarious, wild, fun journey.
