Generative AI is about fifty years old. There are four main kinds of generative AI: energy-based neural networks, variational inference, variational autoencoders, and transformers. Three fundamental methods underlie all forms of generative AI: the reverse Kullback-Leibler divergence, Bayesian conditional probabilities, and statistical mechanics. Transformer-based methods add multi-head attention and positional encoding. Generative AI is not, and never can be, artificial general intelligence (AGI). AGI requires additional architectural components, such as ontologies (e.g., knowledge graphs), together with a linking mechanism. Themesis has developed this linking mechanism: CORTECONs(R), for COntent-Retentive, TEmporally-CONnected neural networks. CORTECONs(R) will enable near-term AGI development. Preliminary CORTECON work, based on the cluster variation method in statistical mechanics, includes theory, architecture, code, and worked examples, all publicly available. Community participation is encouraged.
Category: K-L Divergence
Variational Free Energy: Getting-Started Guide and Resource Compendium
Many of you who have followed the evolution of this variational inference discussion (over the past ten blogposts) may be wondering where to start. This is particularly true for readers who are not yet familiar with the variational-anything literature and would like to begin with the easiest, most intuitive and explanatory articles possible, and then gently… Continue reading Variational Free Energy: Getting-Started Guide and Resource Compendium
Variational Free Energy and Active Inference: Pt 5
The End of This Story This blogpost brings us to the end of a five-part series on variational free energy and active inference. Essentially, we’ve focused only on that first part – on variational free energy. Specifically, we’ve been after Karl Friston’s Eqn. 2.7 in his 2013 paper, “Life as We Know It,” and similarly… Continue reading Variational Free Energy and Active Inference: Pt 5
Variational Free Energy and Active Inference: Pt 4
Today, we interpret the q(Psi | r) and p(Psi, s, a, r | m) in Friston’s (2013) “Life as We Know It” (Eqn. 2.7) and Friston et al. (2015) “Knowing One’s Place” (Eqn. 3.2). This discussion moves forward from where we left off in the previous post, identifying how Friston’s notation builds on Beal’s (2003)… Continue reading Variational Free Energy and Active Inference: Pt 4
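For readers skimming this archive, it may help to see where those two densities sit. The following is a sketch written from the general variational free-energy definition used throughout this series, with Psi the external (hidden) states, s the sensory states, a the active states, r the internal states, and m the model; it paraphrases the form under discussion and is not a verbatim transcription of Friston's Eqn. 2.7 or Eqn. 3.2.

```latex
% Sketch of the variational free energy in the notation discussed above
% (paraphrased from the general definition, not quoted from Friston 2013):
%   q(\psi \mid r)          -- variational (recognition) density over external
%                              states, parameterized by the internal states r
%   p(\psi, s, a, r \mid m) -- generative density over external, sensory,
%                              active, and internal states, given the model m
F(s, a, r)
  = \mathbb{E}_{q(\psi \mid r)}\big[ \ln q(\psi \mid r) - \ln p(\psi, s, a, r \mid m) \big]
  = D_{\mathrm{KL}}\big[ q(\psi \mid r) \,\big\|\, p(\psi \mid s, a, r, m) \big]
    - \ln p(s, a, r \mid m).
```

Because the K-L divergence is non-negative, F upper-bounds the negative log evidence, and minimizing it with respect to the internal states is what the remaining posts in this series unpack.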
Variational Free Energy and Active Inference: Pt 3
When we left off in our last post, we’d determined that Friston (2013) and Friston et al. (2015) reversed the typical P and Q notation that was commonly used for the Kullback-Leibler divergence. Just as a refresher, we’re posting those last two images again. The following Figure 1 was originally Figure 5 in last week’s… Continue reading Variational Free Energy and Active Inference: Pt 3
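Since the entire point of that reversal is that swapping P and Q changes the quantity being computed, here is a minimal numeric illustration in plain NumPy (a sketch with toy distributions, not code taken from the blogposts) showing that the K-L divergence is not symmetric.

```python
import numpy as np

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence D_KL(p || q), in nats."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# Two toy discrete distributions over three outcomes.
P = [0.7, 0.2, 0.1]
Q = [0.4, 0.4, 0.2]

print("D_KL(P || Q) =", round(kl_divergence(P, Q), 4))
print("D_KL(Q || P) =", round(kl_divergence(Q, P), 4))
# The two values differ, which is why it matters so much which density
# plays the "P" role and which plays the "Q" role when reading Friston's
# equations against the textbook convention.
```

In variational inference it is the reverse divergence, with the variational density in the first slot, that gets minimized; that is the "reverse Kullback-Leibler divergence" named in the introduction above.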
Variational Free Energy and Active Inference: Pt 2
Our intention with this post is to cover not only the notion, but the notation, used by Karl Friston in his 2013 paper, “Life as We Know It.” (Actually, we’re addressing a very small notational subset – albeit one that needs to be treated with great care and caution.) To do this, we’re also discussing… Continue reading Variational Free Energy and Active Inference: Pt 2
Variational Free Energy and Active Inference: Pt 1
Overarching Story Line This new blogpost series, on variational free energy and active inference, presents tutorial-level studies centered on the free energy equation (Eq. 2.7) of Karl Friston’s 2013 paper, “Life as We Know It.” Specifically, we’re focused on the free energy equation shown in Figure 1 below. Over this blogpost series, we will reinforce… Continue reading Variational Free Energy and Active Inference: Pt 1
Kullback-Leibler, Etc. – Part 3 of 3: The Annotated Resources List
I thought it would be (relatively) straightforward to wrap this up. Over the past several posts in this series, we’ve discussed the Kullback-Leibler (K-L) divergence and free energy. In particular, we’ve described free energy as the “universal solvent” for artificial intelligence and machine learning methods. This next (and last) post in this series was intended… Continue reading Kullback-Leibler, Etc. – Part 3 of 3: The Annotated Resources List
Kullback-Leibler, Etc. – Part 2.5 of 3: Black Diamonds
We need a “black diamond” rating system to mark the tutorials, YouTubes, and other resources that help us learn the AI fundamentals. Case in point: Last week, I added a blogpost by Damian Ejlli to the References list. It is “Three Statistical Physics Concepts and Methods Used in Machine Learning.” (You’ll see it again in… Continue reading Kullback-Leibler, Etc. – Part 2.5 of 3: Black Diamonds
The Kullback-Leibler Divergence, Free Energy, and All Things Variational – Part 2 of 3
Free energy is the universal solvent of AI (artificial intelligence). It is the single underlying rule or principle that makes AI possible. Actually, that’s a simplification. There are THREE key things that underlie AI – whether we’re talking deep learning or variational methods. These are: Free energy – which we’ll discuss in this post, Latent… Continue reading The Kullback-Leibler Divergence, Free Energy, and All Things Variational – Part 2 of 3
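As a compact companion to that "universal solvent" claim, here is a small NumPy sketch (toy numbers and generic q/p notation, assumed for illustration only; not code from the post) that checks the standard identity tying free energy to the K-L divergence and the model evidence: F = E_q[ln q(z) - ln p(x, z)] = D_KL(q || p(z|x)) - ln p(x).

```python
import numpy as np

# A tiny discrete model: one observed variable x with 2 states and one
# latent variable z with 3 states.  p_joint[z, x] is the generative model
# p(x, z); q is a variational density over z.  (Toy numbers, chosen only
# to illustrate the identity, not drawn from any of the posts.)
p_joint = np.array([[0.20, 0.10],
                    [0.15, 0.25],
                    [0.05, 0.25]])   # rows: z, columns: x; entries sum to 1
q = np.array([0.5, 0.3, 0.2])        # variational density over z

x_observed = 0                            # condition on x = 0
p_x = p_joint[:, x_observed].sum()        # model evidence p(x)
p_z_given_x = p_joint[:, x_observed] / p_x  # true posterior p(z | x)

# Variational free energy, direct form: F = E_q[ln q(z) - ln p(x, z)]
F_direct = np.sum(q * (np.log(q) - np.log(p_joint[:, x_observed])))

# Equivalent decomposition: F = D_KL(q || p(z|x)) - ln p(x)
kl = np.sum(q * np.log(q / p_z_given_x))
F_decomposed = kl - np.log(p_x)

print("F (direct)         =", round(F_direct, 6))
print("F (KL - ln p(x))   =", round(F_decomposed, 6))  # matches the direct form
```

Minimizing F over q therefore does two things at once: it tightens a bound on the negative log evidence and pulls q toward the true posterior, which is the thread running through every post in this category.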