We want to create an AGI (artificial general intelligence). If you’re reading this post, we trust that is your intention as well. We already know that AGI won’t come out of transformers. They are, in their essence, content-addressable memories. That’s what they can do; that’s ALL that they can do. Our core equation comes… Continue reading AGI Notation: Friston’s Use of “Psi”
Category: Info Theory – Kullback-Leibler Divergence
Variational Free Energy: Getting-Started Guide and Resource Compendium
Many of you who have followed the evolution of this variational inference discussion (over the past ten blogposts) may be wondering where to start. This would be particularly true for readers who are not necessarily familiar with the variational-anything literature, and would like to begin with the easiest, most intuitive and explanatory articles possible, and then gently… Continue reading Variational Free Energy: Getting-Started Guide and Resource Compendium
Variational Free Energy and Active Inference: Pt 5
The End of This Story
This blogpost brings us to the end of a five-part series on variational free energy and active inference. Essentially, we’ve focused only on the first of those two topics – on variational free energy. Specifically, we’ve been after Karl Friston’s Eqn. 2.7 in his 2013 paper, “Life as We Know It,” and similarly… Continue reading Variational Free Energy and Active Inference: Pt 5
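As a quick companion to the excerpt above: the decomposition that a series like this builds toward can be written in the standard textbook form below, with Q the approximating density over hidden states ψ and s the sensory data. This is a generic statement of the identity, not a verbatim transcription of Friston’s Eqn. 2.7:

F \;=\; \mathbb{E}_{Q(\psi)}\!\left[\ln Q(\psi) - \ln P(s,\psi)\right] \;=\; D_{\mathrm{KL}}\!\left(Q(\psi)\,\|\,P(\psi \mid s)\right) \;-\; \ln P(s)

Because the KL term is non-negative, F upper-bounds the negative log evidence −ln P(s), which is why minimizing F pulls the approximating Q toward the true posterior.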
Variational Free Energy and Active Inference: Pt 3
When we left off in our last post, we’d determined that Friston (2013) and Friston et al. (2015) reversed the P and Q notation commonly used for the Kullback-Leibler divergence. Just as a refresher, we’re posting those last two images again. The following Figure 1 was originally Figure 5 in last week’s… Continue reading Variational Free Energy and Active Inference: Pt 3
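Since the notational issue above hinges on the Kullback-Leibler divergence being asymmetric, here is a minimal numerical sketch (a made-up two-state example, not taken from the post) showing that swapping which distribution plays P and which plays Q changes the value:

import numpy as np

# Hypothetical two-state distributions, chosen only for illustration.
p = np.array([0.9, 0.1])
q = np.array([0.5, 0.5])

def kl(a, b):
    """Discrete D_KL(a || b) = sum_x a(x) * ln(a(x) / b(x))."""
    return np.sum(a * np.log(a / b))

print(kl(p, q))  # D_KL(P || Q) ≈ 0.368
print(kl(q, p))  # D_KL(Q || P) ≈ 0.511

This is why it matters which density sits in the first slot when reading Friston’s papers against the more common convention.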
The Kullback-Leibler Divergence, Free Energy, and All Things Variational – Part 1.5 of 3
Let’s talk about the Kullback-Leibler divergence. (Sometimes, we call this the “K-L divergence.”) It’s the foundation, the building block, for variational methods. The Kullback-Leibler divergence is a made-up measure. It’s not one of those “fundamental laws of the universe”; it’s strictly a human invention. Nevertheless, it’s become very useful – and is worth our… Continue reading The Kullback-Leibler Divergence, Free Energy, and All Things Variational – Part 1.5 of 3
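For reference while reading the post above, the standard discrete-case definition (the generic textbook form, independent of the post’s particular notation) is:

D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \sum_{x} P(x)\,\ln\frac{P(x)}{Q(x)}

It is zero exactly when P and Q agree, and grows as Q assigns low probability where P assigns high probability.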
The Kullback-Leibler Divergence, Free Energy, and All Things Variational (Part 1 of 3)
Variational Methods: Where They Are in the AI/ML World
The bleeding-leading edge of AI and machine learning (ML) deals with variational methods. Variational inference, in particular, is needed because we can’t envision every possible instance that would comprise a good training and testing data set. There will ALWAYS be some sort of oddball thing that… Continue reading The Kullback-Leibler Divergence, Free Energy, and All Things Variational (Part 1 of 3)
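To make “variational” concrete alongside the excerpt above, here is a toy sketch (hypothetical numbers in plain NumPy, not code from the post) of the basic move: pick a tractable family Q and choose its parameter to minimize the KL divergence to a target posterior. In practice the posterior is intractable and one minimizes a free-energy bound instead, but the toy keeps the answer checkable:

import numpy as np

# Toy "true" posterior over a binary latent variable; known here only so we
# can verify the result. In real problems it is intractable.
p_posterior = np.array([0.73, 0.27])

def kl(q, p):
    """Discrete Kullback-Leibler divergence D_KL(q || p)."""
    return np.sum(q * np.log(q / p))

# Variational family: Bernoulli(theta), i.e., q = [theta, 1 - theta].
thetas = np.linspace(0.01, 0.99, 99)
kls = [kl(np.array([t, 1.0 - t]), p_posterior) for t in thetas]
best = thetas[int(np.argmin(kls))]
print(f"best variational parameter: {best:.2f}")  # lands near 0.73

With the grid search replaced by gradient descent on a free-energy objective, this is, in spirit, the same skeleton that scales up to the variational methods discussed in this series.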