Tag: Karl Friston

Kullback-Leibler, Etc. – Part 3 of 3: The Annotated Resources List
I thought it would be (relatively) straightforward to wrap this up. Over the past several posts in this series, we’ve discussed the Kullback-Leibler (K-L) divergence and free energy. In particular, we’ve described free energy as the “universal solvent” for artificial intelligence and machine learning methods. This next (and last) post in the series was intended…
The Kullback-Leibler Divergence, Free Energy, and All Things Variational – Part 1.5 of 3
Let’s talk about the Kullback-Leibler divergence. (Sometimes, we call this the “K-L divergence.”) It’s the foundation, the building block, for variational methods. The Kullback-Leibler divergence is a made-up measure; it’s not one of those “fundamental laws of the universe,” but strictly a human invention. Nevertheless, it has become very useful, and is worth our…
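For concreteness, here is a minimal sketch (not from the post itself) of the K-L divergence for two discrete distributions, D(P || Q) = sum over x of p(x) log(p(x)/q(x)), assuming NumPy is available:

```python
import numpy as np

def kl_divergence(p, q):
    """K-L divergence D(P || Q) between two discrete distributions.

    p and q are arrays of probabilities over the same outcomes;
    q must be nonzero wherever p is nonzero.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    # Only terms with p > 0 contribute (0 * log 0 is taken as 0).
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Example: how far a skewed coin is from a fair coin.
print(kl_divergence([0.9, 0.1], [0.5, 0.5]))  # about 0.368 nats
```

Note the asymmetry: D(P || Q) is generally not equal to D(Q || P), which is part of why it is a “divergence” rather than a true distance.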
The Kullback-Leibler Divergence, Free Energy, and All Things Variational (Part 1 of 3)
Variational Methods: Where They Are in the AI/ML World
The bleeding edge of AI and machine learning (ML) deals with variational methods. Variational inference, in particular, is needed because we can’t envision every possible instance that would comprise a good training and testing data set. There will ALWAYS be some sort of oddball thing that…