Brigitte Bardot Says It All (An Exercise in the 1D Cluster Variation Method)

Probably best to get the kids out of the room before you play this one. Lots of heavy breathing by Brigitte. And you can read the backstory here. (And a bit more here, if you’re so inclined.) But to business … The Starting Point … and the FIRST Illustrative Text String I had previously worked…

AGI: Generative AI, AGI, the Future of AI, and You

Generative AI is about fifty years old. There are four main kinds of generative AI (energy-based neural networks, variational inference, variational autoencoders, and transformers). There are three fundamental methods underlying all forms of generative AI: the reverse Kullback-Leibler divergence, Bayesian conditional probabilities, and statistical mechanics. Transformer-based methods add in multi-head attention and positional encoding. Generative AI is not, and never can be, artificial general intelligence, or AGI. AGI requires bringing in more architectural components, such as ontologies (e.g., knowledge graphs), and a linking mechanism. Themesis has developed this linking mechanism, CORTECONs(R), for COntent-Retentive, TEmporally-CONnected neural networks. CORTECONs(R) will enable near-term AGI development. Preliminary CORTECON work, based on the cluster variation method in statistical mechanics, includes theory, architecture, code, and worked examples, all available for public access. Community participation is encouraged.
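For readers new to the first of those three methods: the reverse Kullback-Leibler divergence measures how far a model distribution Q sits from a target distribution P, with the expectation taken under the model Q. This is the standard definition, stated here just for orientation:

    D_{\mathrm{KL}}(Q \,\|\, P) \;=\; \sum_{x} Q(x) \, \ln \frac{Q(x)}{P(x)}

Minimizing this quantity over Q, with P held fixed as the target (for example, a posterior), is the optimization at the heart of variational inference and variational autoencoders; the "reverse" direction matters because it puts the expectation under Q, which is the distribution we can actually sample and adjust.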

CORTECONs: AGI in 2024-2025 – R&D Plan Overview

By the end of 2024, we anticipate having a fully-functional CORTECON (COntent-Retentive, TEmporally-CONnected) framework in place. This will be the core AGI (artificial general intelligence) engine. This is all very straightforward. It’s a calm, steady development – we expect it will all unfold rather smoothly. The essential AGI engine is a CORTECON. The main internal…

CORTECONs and AGI: Reaching Latent Layer Equilibrium

The most important thing in building an AGI is the ability to repeatedly bring the latent layer to equilibrium. This is the fundamental capability that has been missing in previous neural networks. The lack of a dynamic process to continuously reach a free energy minimum is why we have not had, until now, a robust…
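As a toy illustration only of what "relaxing a latent layer to a free energy minimum" can look like (this is a generic mean-field sketch, not the CORTECON mechanism; the EPS and T values are assumptions made for the sketch):

    import numpy as np

    EPS = 1.0  # energy cost per active node (illustrative value)
    T = 1.0    # temperature-like parameter (illustrative value)

    def free_energy(x):
        # Mean-field free energy per node, F = E - T*S,
        # for a layer in which a fraction x of nodes is active.
        entropy = -(x * np.log(x) + (1.0 - x) * np.log(1.0 - x))
        return EPS * x - T * entropy

    def grad_free_energy(x):
        # dF/dx = EPS + T * ln(x / (1 - x))
        return EPS + T * np.log(x / (1.0 - x))

    x = 0.9  # start the layer far from equilibrium
    for _ in range(500):
        x -= 0.01 * grad_free_energy(x)          # relax toward the minimum
        x = float(np.clip(x, 1e-6, 1.0 - 1e-6))  # stay inside the open interval

    # Analytic minimum for comparison: x* = 1 / (1 + exp(EPS / T)) ~= 0.269
    print(round(x, 4), round(1.0 / (1.0 + np.exp(EPS / T)), 4))

The analytic check on the last line confirms that the iterative relaxation lands on the same activation fraction that setting dF/dx = 0 predicts; the point of the sketch is that equilibration is a dynamic, repeatable process, not a one-shot feedforward pass.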

CORTECONs: A New Class of Neural Networks

In the classic science fiction novel, Do Androids Dream of Electric Sheep?, author Philip K. Dick gives us a futuristic plotline that would – even today – be more exciting and thought-provoking than many of the newly-released “AI/robot as monster” movies. The key question today is: Can androids dream? This is not as far-fetched as…

1D CVM Object Instance Attributes: wLeft Details

We illustrate how the specific values for a single object-oriented instance attribute are determined. We do this for the specific case of the “wLeft” instance attribute for the Node object in the 1-D cluster variation method (1D CVM) code. The same considerations will apply when we progress to the 2D CVM code. The specific values…
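As a rough sketch of the kind of bookkeeping involved (the class shape, the wraparound boundary, and the four-state encoding here are illustrative assumptions, not the actual 1D CVM code), a Node object might carry a wLeft attribute derived from its next-nearest neighbor to the left:

    class Node:
        # One unit in a 1D grid; attribute names are illustrative,
        # not the blog's actual 1D CVM definitions.
        def __init__(self, index, activation):
            self.index = index            # position in the 1D chain
            self.activation = activation  # unit state: 0 or 1
            self.wLeft = None             # next-nearest-neighbor pair code, set below

    def assign_w_left(nodes):
        # Give each node a wLeft value encoding the (next-nearest-left, self)
        # activation pair as one of four states: (0,0)->0, (0,1)->1, (1,0)->2, (1,1)->3.
        n = len(nodes)
        for node in nodes:
            left2 = nodes[(node.index - 2) % n]  # wraparound boundary assumed
            node.wLeft = 2 * left2.activation + node.activation

    # Usage: a small wraparound chain
    nodes = [Node(i, a) for i, a in enumerate([1, 0, 1, 1, 0, 0])]
    assign_w_left(nodes)
    print([node.wLeft for node in nodes])

In the cluster variation method, the w variables count next-nearest-neighbor pair configurations, so per-node attributes of this kind are the raw material from which the configuration-variable tallies are built.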

Evolution of NLP Algorithms through Latent Variables: Future of AI (Part 3 of 3)

AJM Note: Although nearly complete, references and discussion are still being added to this blogpost. This note will be removed once the blogpost is completed – anticipated over the Memorial Day weekend, 2023. Note updated 12:30 AM, Hawai’i Time, Tuesday, May 29, 2023. This blogpost accompanies a YouTube vid on the same topic of the Evolution…

Kuhnian Normal and Breakthrough Moments: The Future of AI (Part 1 of 3)

Over the past fifty years, there have only been a few Kuhnian “paradigm shift” moments in neural networks. We’re ready for something new!

Variational Free Energy: Getting-Started Guide and Resource Compendium

Many of you who have followed the evolution of this variational inference discussion (over the past ten blogposts) may be wondering where to start. This would be particularly true for readers who are not necessarily familiar with the variational-anything literature, and would like to begin with the easiest, most intuitive and explanatory articles possible, and then gently…

Variational Free Energy and Active Inference: Pt 5

The End of This Story. This blogpost brings us to the end of a five-part series on variational free energy and active inference. Essentially, we’ve focused only on that first part – on variational free energy. Specifically, we’ve been after Karl Friston’s Eqn. 2.7 in his 2013 paper, “Life as We Know It,” and similarly…
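For orientation, the standard form of the variational free energy that the series works toward can be written as follows (generic notation, not necessarily the exact symbols of Friston’s Eqn. 2.7; check the paper for his conventions):

    F(q) = \mathbb{E}_{q(\psi)}\!\left[ \ln q(\psi) - \ln p(\psi, s) \right]
         = D_{\mathrm{KL}}\!\left( q(\psi) \,\|\, p(\psi \mid s) \right) - \ln p(s)

Here q(\psi) is the variational density over hidden states and s is the observed data. Because the KL term is non-negative, minimizing F over q simultaneously tightens the bound on the surprisal -\ln p(s) and pulls q toward the true posterior p(\psi \mid s).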