Variational Free Energy and Active Inference: Pt 1
Overarching Story Line: This new blogpost series, on variational free energy and active inference, presents tutorial-level studies centered on the free energy equation (Eq. 2.7) of Karl Friston’s 2013 paper, “Life as We Know It.” Specifically, we’re focused on the free energy equation shown in Figure 1 below. Over this blogpost series, we will reinforce…
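(For orientation before clicking through: the equation the series centers on is the variational free energy. Written here in generic notation – the symbols below are ours, and the exact notation of Eq. 2.7 in the paper may differ – it decomposes as follows.)

```latex
% Variational free energy, generic notation (symbols ours, not necessarily Eq. 2.7's).
% q(\psi): approximating (recognition) density over hidden causes \psi
% p(\psi, s): generative model over hidden causes \psi and sensory states s
F \;=\; \mathbb{E}_{q(\psi)}\big[\ln q(\psi) - \ln p(\psi, s)\big]
  \;=\; D_{\mathrm{KL}}\big[\, q(\psi) \,\|\, p(\psi \mid s) \,\big] \;-\; \ln p(s)
```

Minimizing F with respect to q(ψ) drives q toward the true posterior while bounding the surprisal (the negative log model evidence).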
Kullback-Leibler, Etc. – Part 3 of 3: The Annotated Resources List
I thought it would be (relatively) straightforward to wrap this up. Over the past several posts in this series, we’ve discussed the Kullback-Leibler (K-L) divergence and free energy. In particular, we’ve described free energy as the “universal solvent” for artificial intelligence and machine learning methods. This next (and last) post in this series was intended…
Kullback-Leibler, Etc. – Part 2.5 of 3: Black Diamonds
We need a “black diamond” rating system to mark the tutorials, YouTubes, and other resources that help us learn the AI fundamentals. Case in point: Last week, I added a blogpost by Damian Ejlli to the References list. It is “Three Statistical Physics Concepts and Methods Used in Machine Learning.” (You’ll see it again in…
The Kullback-Leibler Divergence, Free Energy, and All Things Variational – Part 2 of 3
Free energy is the universal solvent of AI (artificial intelligence). It is the single underlying rule or principle that makes AI possible. Actually, that’s a simplification. There are THREE key things that underlie AI – whether we’re talking deep learning or variational methods. These are: Free energy – which we’ll discuss in this post, Latent…
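(As a concrete, numerical illustration of that first item – this is our own minimal sketch, not code from the post; the model, names, and numbers are made up purely for illustration.)

```python
# Minimal sketch of variational free energy for a toy discrete model
# with one latent variable z (3 states) and one observed outcome y.
import numpy as np

p_z = np.array([0.5, 0.3, 0.2])          # prior p(z)
p_y_given_z = np.array([0.7, 0.2, 0.1])  # likelihood p(y = observed | z)
q_z = np.array([0.6, 0.3, 0.1])          # approximate posterior q(z), our "guess"

# Variational free energy: F = E_q[ log q(z) - log p(y, z) ]
p_joint = p_z * p_y_given_z              # p(y, z) for the observed y
F = np.sum(q_z * (np.log(q_z) - np.log(p_joint)))

# Same quantity, written as KL(q || exact posterior) plus surprisal (-log p(y)).
evidence = p_joint.sum()                 # p(y) = sum_z p(y, z)
posterior = p_joint / evidence           # exact p(z | y); available because the model is tiny
kl = np.sum(q_z * (np.log(q_z) - np.log(posterior)))

print(F)                      # free energy
print(kl - np.log(evidence))  # identical value: divergence plus surprisal
```

Both print statements produce the same number – free energy is the K-L divergence from the approximate to the true posterior, plus the surprisal (negative log evidence).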
The Kullback-Leibler Divergence, Free Energy, and All Things Variational – Part 1.5 of 3
Let’s talk about the Kullback-Leibler divergence. (Sometimes, we call this the “K-L divergence.”) It’s the foundation, the building block, for variational methods. The Kullback-Leibler divergence is a made-up measure. It’s not one of those “fundamental laws of the universe.” It’s strictly a made-up human thing. Nevertheless, it’s become very useful – and is worth our…
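(For quick reference while reading – our notation, discrete case:)

```latex
% Kullback-Leibler divergence of Q from P, discrete case (notation ours).
D_{\mathrm{KL}}(Q \,\|\, P) \;=\; \sum_{x} Q(x) \,\ln\frac{Q(x)}{P(x)}
```

It is non-negative, equals zero only when Q and P agree, and is not symmetric in its two arguments – so it is not a true distance metric.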
The Kullback-Leibler Divergence, Free Energy, and All Things Variational (Part 1 of 3)
Variational Methods: Where They Are in the AI/ML World. The bleeding-leading edge of AI and machine learning (ML) deals with variational methods. Variational inference, in particular, is needed because we can’t envision every possible instance that would comprise a good training and testing data set. There will ALWAYS be some sort of oddball thing that…
Major Blooper – Coffee Reward to First Three Finders!
To err is human. REALLY screwing things up … makes for interesting stories, and is the basis for the first-ever Themesis “coffee reward” offer. So here’s what’s at stake: there is a blooper, it involves notation (more on that shortly), and we (that is, Themesis, Inc.) will send an 8-oz package of Kona coffee to…
Peer Reviews, Acknowledgments, and Working in Pairs
We’ve previously addressed many aspects of “how to write your research paper.” (In fact, there is a full YouTube playlist on this; see the link at the end of this post.) Now, we’re shifting focus to how you wrap things up at the end. This is a “little things that count” move. We want to…
When a Classifier Acts as an Autoencoder, and an Autoencoder Acts as a Classifier (Part 1 of 3)
One of the biggest mental sinkholes that AI students can fall into is not quite understanding the fundamental difference between how our two basic “building block” networks operate: the Multilayer Perceptron (MLP), trained with backpropagation (or any form of gradient descent learning), and the (restricted) Boltzmann machine (RBM), trained with contrastive divergence. It’s easy…
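(To make the contrast concrete – this is our own minimal sketch, not code from the post; biases are omitted and all shapes, names, and numbers are illustrative.)

```python
# Contrasting the two "building block" networks: a deterministic MLP forward pass
# versus a stochastic, energy-based RBM updated with contrastive divergence (CD-1).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# --- MLP: deterministic feed-forward mapping, trained by gradient descent ---
# A full implementation would backpropagate an error signal from labeled targets.
W1 = rng.normal(scale=0.1, size=(4, 3))   # input (4) -> hidden (3)
W2 = rng.normal(scale=0.1, size=(3, 2))   # hidden (3) -> output (2)

def mlp_forward(x):
    hidden = sigmoid(x @ W1)              # deterministic hidden activations
    return sigmoid(hidden @ W2)           # output scores (classifier behavior)

# --- RBM: stochastic, energy-based, trained with contrastive divergence ---
# Learning compares "data" statistics against "reconstruction" statistics,
# rather than backpropagating an error signal.
W = rng.normal(scale=0.1, size=(4, 3))    # visible (4) <-> hidden (3)

def rbm_cd1_update(v_data, lr=0.1):
    p_h_data = sigmoid(v_data @ W)                          # p(h = 1 | v_data)
    h_sample = (rng.random(p_h_data.shape) < p_h_data) * 1.0
    v_recon = sigmoid(h_sample @ W.T)                       # reconstruction of v
    p_h_recon = sigmoid(v_recon @ W)
    # Positive phase minus negative phase: the CD-1 weight update.
    return lr * (np.outer(v_data, p_h_data) - np.outer(v_recon, p_h_recon))

x = np.array([1.0, 0.0, 1.0, 0.0])
print(mlp_forward(x))       # MLP: maps an input to an output
print(rbm_cd1_update(x))    # RBM: nudges weights to better reconstruct the input
```

The MLP pushes an error signal back through a deterministic forward pass; the RBM instead compares statistics of the data against statistics of its own reconstructions – the distinction the post goes on to unpack.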
Strategizing Your Research Project: Developing Your Portfolio Elements
When life gets hard, and you still have to get your research project done, sometimes it’s hard to focus and figure out the “most important thing.” Blog Update: How to Get More Focus and More Time. Dear All – this is an update, added Sept. 22, 2022. Building your Portfolio is a BIG THING. Just…