AGI: Generative AI, AGI, the Future of AI, and You

Generative AI is about fifty years old. There are four main kinds of generative AI: energy-based neural networks, variational inference, variational autoencoders, and transformers. Three fundamental methods underlie all forms of generative AI: the reverse Kullback-Leibler divergence, Bayesian conditional probabilities, and statistical mechanics. Transformer-based methods add multi-head attention and positional encoding. Generative AI is not, and never can be, artificial general intelligence (AGI). AGI requires additional architectural components, such as ontologies (e.g., knowledge graphs), together with a linking mechanism. Themesis has developed this linking mechanism: CORTECONs(R), or COntent-Retentive, TEmporally-CONnected neural networks. CORTECONs(R) will enable near-term AGI development. Preliminary CORTECON work, based on the cluster variation method in statistical mechanics, includes theory, architecture, code, and worked examples, all available for public access. Community participation is encouraged.
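For reference, here is the first of those three methods written out in a standard notation (ours, not necessarily that used in the posts below): the reverse Kullback-Leibler divergence of an approximating or model distribution q from a target distribution p is

D_{KL}(q \,\|\, p) = \sum_x q(x)\,\ln \frac{q(x)}{p(x)},

which variational methods minimize over q; the "forward" divergence D_{KL}(p \,\|\, q), by contrast, is what maximum-likelihood fitting minimizes.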

Evolution of NLP Algorithms through Latent Variables: Future of AI (Part 3 of 3)

AJM Note: ALTHOUGH NEARLY COMPLETE, references and discussion are still being added to this blogpost. This note will be removed once the blogpost is completed – anticipated over the Memorial Day weekend, 2023. Note updated 12:30 AM, Hawai’i Time, Tuesday, May 29, 2023. This blogpost accompanies a YouTube video on the same topic of the Evolution…

New Neural Network Class: Framework: The Future of AI (Part 2 of 3)

We want to identify how and where the next “big breakthrough” in AI will occur, and we use three tools or approaches to do so. The post opens with a quick overview in a YouTube #short, followed by the full YouTube video: Maren, Alianna J. 2023. “A New Neural Network Class: Creating the…

Kuhnian Normal and Breakthrough Moments: The Future of AI (Part 1 of 3)

Over the past fifty years, there have only been a few Kuhnian “paradigm shift” moments in neural networks. We’re ready for something new!

The Kullback-Leibler Divergence, Free Energy, and All Things Variational – Part 2 of 3

Free energy is the universal solvent of AI (artificial intelligence). It is the single underlying rule or principle that makes AI possible. Actually, that’s a simplification. There are THREE key things that underlie AI – whether we’re talking deep learning or variational methods. These are: Free energy – which we’ll discuss in this post, Latent…
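As a pointer to the formula at the heart of that connection (our notation; the post develops its own): for a system with energy E(x) at temperature T, the variational free energy of a trial distribution q is

F(q) = \mathbb{E}_{q}[E(x)] - T\,S(q), \qquad S(q) = -\sum_x q(x)\,\ln q(x),

and minimizing F(q) over q is equivalent to minimizing the reverse Kullback-Leibler divergence D_{KL}(q \,\|\, p_{\mathrm{eq}}), where p_{\mathrm{eq}}(x) \propto e^{-E(x)/T} is the equilibrium (Boltzmann) distribution.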

When a Classifier Acts as an Autoencoder, and an Autoencoder Acts as a Classifier (Part 1 of 3)

One of the biggest mental sinkholes into which AI students can fall is not quite understanding the fundamental difference between how our two basic “building block” networks operate: the Multilayer Perceptron (MLP), trained with backpropagation (or any other form of gradient-descent learning), and the (restricted) Boltzmann machine (RBM), trained with contrastive divergence. It’s easy…
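To make that difference concrete, here is a minimal NumPy sketch (our own toy illustration, not code from the post): the MLP maps an input forward to class scores, while the RBM carries the same input up to its hidden layer and back down, producing a reconstruction of that input.

import numpy as np

rng = np.random.default_rng(0)
def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))

# Multilayer Perceptron: a purely feed-forward pass, input -> hidden -> class scores.
x = rng.random(4)                                  # toy input vector
W1, W2 = rng.normal(size=(5, 4)), rng.normal(size=(3, 5))
scores = W2 @ sigmoid(W1 @ x)                      # backpropagation would adjust W1 and W2
print("MLP class scores:", scores)

# Restricted Boltzmann Machine: one up-and-down Gibbs step, input -> hidden -> reconstruction.
W = rng.normal(size=(5, 4))                        # one weight matrix, used in both directions
h_prob = sigmoid(W @ x)                            # visible -> hidden
h_sample = (rng.random(5) < h_prob).astype(float)  # stochastic hidden units
x_recon = sigmoid(W.T @ h_sample)                  # hidden -> visible: a reconstruction of x
print("RBM reconstruction of x:", x_recon)

The point of the contrast: the MLP produces something other than its input (a classification), while the RBM is judged by how well it reproduces its input – which is exactly the autoencoder-like behavior discussed in the post.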

Entropy in Energy-Based Neural Networks – Seven Key Papers (Part 3 of 3)

If you’ve looked at some classic papers on energy-based neural networks (e.g., the Hopfield neural network, the Boltzmann machine, the restricted Boltzmann machine, and all forms of deep learning), you’ll see that they don’t use the word “entropy.” At the same time, we’ve stated that entropy is a fundamental concept in these energy-based neural networks…
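As a hint at where the entropy is hiding (again in our notation, not necessarily that of the papers themselves): whenever a Boltzmann distribution p(x) \propto e^{-E(x)/T} appears, the entropy S = -\sum_x p(x)\,\ln p(x) enters implicitly through the free energy,

F = \langle E \rangle - T\,S = -T \ln Z, \qquad Z = \sum_x e^{-E(x)/T},

so minimizing free energy always trades expected energy off against entropy, even when the word “entropy” never appears on the page.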

How Backpropagation and (Restricted) Boltzmann Machine Learning Combine in Deep Architectures

One of the most important things for us to understand, as we come into the “deep learning” aspect of AI for the first time, is the relationship between backpropagation and (restricted) Boltzmann machines, which we know comprise the essential core of various “deep learning” architectures. The essential idea in deep architectures is this: Each…
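To make the combination concrete, here is a small NumPy sketch of the classic recipe – greedy, layer-wise RBM pretraining with contrastive divergence, followed by backpropagation fine-tuning of the unrolled stack. This is our own toy illustration under simplifying assumptions (no bias terms, a squared-error loss, and an illustrative helper train_rbm_cd1 of our own), not code from the post or from any library.

import numpy as np

rng = np.random.default_rng(1)
def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))

def train_rbm_cd1(v, n_hidden, epochs=100, lr=0.1):
    # One restricted Boltzmann machine, trained with single-step contrastive divergence (CD-1).
    W = 0.01 * rng.normal(size=(v.shape[1], n_hidden))
    for _ in range(epochs):
        h = sigmoid(v @ W)                                   # positive phase: data-driven hidden probabilities
        h_sample = (rng.random(h.shape) < h).astype(float)
        v_recon = sigmoid(h_sample @ W.T)                    # one Gibbs step back to the visible layer
        h_recon = sigmoid(v_recon @ W)                       # negative phase
        W += lr * (v.T @ h - v_recon.T @ h_recon) / len(v)
    return W

# Greedy, layer-wise pretraining: each RBM learns features of the layer below it.
x = (rng.random((32, 8)) > 0.5).astype(float)    # toy binary data
y = x[:, :1]                                     # toy target: predict the first bit
W1 = train_rbm_cd1(x, 6)
W2 = train_rbm_cd1(sigmoid(x @ W1), 4)
W3 = 0.01 * rng.normal(size=(4, 1))              # output layer, not pretrained

# Fine-tuning: unroll the stack into a feed-forward network and run backpropagation.
for _ in range(500):
    h1 = sigmoid(x @ W1)
    h2 = sigmoid(h1 @ W2)
    out = sigmoid(h2 @ W3)
    d3 = (out - y) * out * (1 - out)             # output-layer delta (squared-error loss)
    d2 = (d3 @ W3.T) * h2 * (1 - h2)             # backpropagate through the second hidden layer
    d1 = (d2 @ W2.T) * h1 * (1 - h1)             # ... and the first
    W3 -= 0.5 * h2.T @ d3 / len(x)
    W2 -= 0.5 * h1.T @ d2 / len(x)
    W1 -= 0.5 * x.T @ d1 / len(x)
print("mean squared error after fine-tuning:", float(np.mean((out - y) ** 2)))

In a full-scale deep architecture the pretrained weights simply give backpropagation a far better starting point than random initialization – that handoff is the relationship the post explores.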

Latent Variables Enabled Effective Energy-Based Neural Networks: Seven Key Papers (Part 2 of 3)

Latent variables enabled effective energy-based neural networks. The key problem with the Little/Hopfield neural network was its limited memory capacity. This problem was resolved when Hinton, Ackley, and Sejnowski introduced the notion of latent variables, creating the Boltzmann machine. Seven key papers define the evolution of energy-based neural networks. Previously, we examined the first two…
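A compact way to see what the latent (hidden) variables buy us, written in our notation rather than that of the original papers: the Little/Hopfield network assigns an energy to the visible units alone,

E(\mathbf{v}) = -\tfrac{1}{2} \sum_{i \neq j} W_{ij}\, v_i v_j,

whereas the (restricted) Boltzmann machine adds a layer of hidden units \mathbf{h},

E(\mathbf{v}, \mathbf{h}) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_{i,j} W_{ij}\, v_i h_j,

which greatly enlarges the family of distributions over \mathbf{v} that the network can represent – and, with it, the effective storage capacity.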