Probably best to get the kids out of the room before you play this one. Lots of heavy breathing by Brigitte. And you can read the backstory here. (And a bit more here, if you’re so inclined.) But to business … The Starting Point … and the FIRST Illustrative Text String I had previously worked… Continue reading Brigitte Bardot Says It All (An Exercise in the 1D Cluster Variation Method)
Category: Stat Mech – Entropy
CORTECONs and AGI: Reaching Latent Layer Equilibrium
The most important thing in building an AGI is the ability to repeatedly bring the latent layer to equilibrium. This is the fundamental capability that has been missing in previous neural networks. The lack of a dynamic process to continuously reach a free energy minimum is why we have not had, until now, a robust… Continue reading CORTECONs and AGI: Reaching Latent Layer Equilibrium
CORTECONS: A New Class of Neural Networks
In the classic science fiction novel Do Androids Dream of Electric Sheep?, author Philip K. Dick gives us a futuristic plotline that would – even today – be more exciting and thought-provoking than many of the newly-released “AI/robot as monster” movies. The key question today is: Can androids dream? This is not as far-fetched as… Continue reading CORTECONS: A New Class of Neural Networks
Variational Free Energy: Getting-Started Guide and Resource Compendium
Many of you who have followed the evolution of this variational inference discussion (over the past ten blogposts) may be wondering where to start. This would be particularly true for readers who are not necessarily familiar with the variational-anything literature, and would like to begin with the easiest, most intuitive and explanatory articles possible, and then gently… Continue reading Variational Free Energy: Getting-Started Guide and Resource Compendium
Variational Free Energy and Active Inference: Pt 5
The End of This Story This blogpost brings us to the end of a five-part series on variational free energy and active inference. Essentially, we’ve focused only on that first part – on variational free energy. Specifically, we’ve been after Karl Friston’s Eqn. 2.7 in his 2013 paper, “Life as We Know It,” and similarly… Continue reading Variational Free Energy and Active Inference: Pt 5
Variational Free Energy and Active Inference: Pt 4
Today, we interpret the q(Psi | r) and p(Psi, s, a, r | m) in Friston’s (2013) “Life as We Know It” (Eqn. 2.7) and Friston et al. (2015) “Knowing One’s Place” (Eqn. 3.2). This discussion moves forward from where we left off in the previous post, identifying how Friston’s notation builds on Beal’s (2003)… Continue reading Variational Free Energy and Active Inference: Pt 4
Variational Free Energy and Active Inference: Pt 3
When we left off in our last post, we’d determined that Friston (2013) and Friston et al. (2015) reversed the typical P and Q notation that was commonly used for the Kullback-Leibler divergence. Just as a refresher, we’re posting those last two images again. The following Figure 1 was originally Figure 5 in last week’s… Continue reading Variational Free Energy and Active Inference: Pt 3
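A quick aside on why that P/Q notation reversal matters: the Kullback-Leibler divergence is not symmetric, so swapping which distribution plays the “P” role and which plays the “Q” role changes the value. A minimal sketch (the two distributions here are made-up toy values, purely for illustration):

```python
import numpy as np

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence D_KL(p || q), in nats.

    Assumes p and q are probability vectors over the same support,
    with no zero entries in q where p is nonzero.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# Two toy distributions over three outcomes (illustrative values only).
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])

# The divergence depends on argument order -- D_KL(p||q) != D_KL(q||p) --
# which is exactly why a reversed P/Q convention can quietly change
# the meaning of an equation.
print(kl_divergence(p, q))  # ~0.0851
print(kl_divergence(q, p))  # ~0.0920
```

So when reading Friston’s equations against the more common convention, it pays to check which distribution sits in each slot before comparing terms.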
Kullback-Leibler, Etc. – Part 3 of 3: The Annotated Resources List
I thought it would be (relatively) straightforward to wrap this up. Over the past several posts in this series, we’ve discussed the Kullback-Leibler (K-L) divergence and free energy. In particular, we’ve described free energy as the “universal solvent” for artificial intelligence and machine learning methods. This next (and last) post in this series was intended… Continue reading Kullback-Leibler, Etc. – Part 3 of 3: The Annotated Resources List
Kullback-Leibler, Etc. – Part 2.5 of 3: Black Diamonds
We need a “black diamond” rating system to mark the tutorials, YouTubes, and other resources that help us learn the AI fundamentals. Case in point: Last week, I added a blogpost by Damian Ejlli to the References list. It is “Three Statistical Physics Concepts and Methods Used in Machine Learning.” (You’ll see it again in… Continue reading Kullback-Leibler, Etc. – Part 2.5 of 3: Black Diamonds
The Kullback-Leibler Divergence, Free Energy, and All Things Variational – Part 2 of 3
Free energy is the universal solvent of AI (artificial intelligence). It is the single underlying rule or principle that makes AI possible. Actually, that’s a simplification. There are THREE key things that underlie AI – whether we’re talking deep learning or variational methods. These are: Free energy – which we’ll discuss in this post, Latent… Continue reading The Kullback-Leibler Divergence, Free Energy, and All Things Variational – Part 2 of 3
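To make the “free energy” idea above concrete, here is a minimal numerical sketch on a toy two-state model (all probability values are invented for illustration). It shows the standard decomposition: the variational free energy F[q] equals the K-L divergence from q to the true posterior, minus the log-evidence, so minimizing F drives q toward the posterior:

```python
import numpy as np

# Toy generative model: a binary latent state z, with one fixed observation x.
# All numbers below are made-up illustrative values.
prior = np.array([0.6, 0.4])      # p(z)
lik = np.array([0.2, 0.9])        # p(x | z), for the observed x
joint = prior * lik               # p(x, z)
evidence = joint.sum()            # p(x)
posterior = joint / evidence      # p(z | x)

def free_energy(q):
    """Variational free energy F[q] = E_q[ log q(z) - log p(x, z) ]."""
    q = np.asarray(q, dtype=float)
    return float(np.sum(q * (np.log(q) - np.log(joint))))

# An arbitrary approximate posterior.
q = np.array([0.5, 0.5])

# F decomposes as KL(q || posterior) - log p(x), so F always
# upper-bounds the negative log-evidence.
kl = float(np.sum(q * np.log(q / posterior)))
print(free_energy(q))             # equals kl - log(evidence)
print(free_energy(posterior))     # the minimum: -log p(x)
```

The minimum of F is reached exactly when q equals the true posterior, at which point F collapses to the negative log-evidence. That single identity is a large part of why free energy can serve as a common currency across deep learning and variational methods.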