We need a “black diamond” rating system to mark the tutorials, YouTube videos, and other resources that help us learn the AI fundamentals. Case in point: Last week, I added a blog post by Damian Ejlli to the References list. It is “Three Statistical Physics Concepts and Methods Used in Machine Learning.” (You’ll see it again in… Continue reading Kullback-Leibler, Etc. – Part 2.5 of 3: Black Diamonds
Category: A Resource
The Kullback-Leibler Divergence, Free Energy, and All Things Variational – Part 2 of 3
Free energy is the universal solvent of AI (artificial intelligence). It is the single underlying rule or principle that makes AI possible. Actually, that’s a simplification. There are THREE key things that underlie AI – whether we’re talking deep learning or variational methods. These are: Free energy – which we’ll discuss in this post, Latent… Continue reading The Kullback-Leibler Divergence, Free Energy, and All Things Variational – Part 2 of 3
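As a rough numerical sketch of the free-energy idea (the example system, energies, and function names here are illustrative, not taken from the post), the Helmholtz free energy of a discrete system can be computed directly from the partition function, and the classic decomposition F = U − T·S checked numerically:

```python
import numpy as np

def free_energy(energies, T=1.0):
    """Helmholtz free energy F = -T ln Z for a discrete system,
    where Z = sum_i exp(-E_i / T) is the partition function."""
    E = np.asarray(energies, dtype=float)
    Z = np.sum(np.exp(-E / T))
    return -T * np.log(Z)

# The classic decomposition F = U - T*S (mean energy minus
# temperature times entropy) can be checked numerically:
E = np.array([0.0, 1.0, 2.0])          # illustrative energy levels
T = 1.0
p = np.exp(-E / T) / np.sum(np.exp(-E / T))  # Boltzmann distribution
U = np.sum(p * E)                      # mean energy
S = -np.sum(p * np.log(p))             # entropy (in nats)
print(free_energy(E, T), U - T * S)    # the two should agree
```

This tiny identity check is the whole story in miniature: minimizing free energy trades off keeping energy low against keeping entropy high, which is the balance the post goes on to develop.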
The Kullback-Leibler Divergence, Free Energy, and All Things Variational – Part 1.5 of 3
Let’s talk about the Kullback-Leibler divergence. (Sometimes, we call this the “K-L divergence.”) It’s the foundation, the building block, for variational methods. The Kullback-Leibler divergence is a made-up measure. It’s not one of those “fundamental laws of the universe” – it’s strictly a human construct. Nevertheless, it’s become very useful – and is worth our… Continue reading The Kullback-Leibler Divergence, Free Energy, and All Things Variational – Part 1.5 of 3
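As a quick numerical sketch of the measure itself (the distributions below are illustrative, not from the post), the K-L divergence between two discrete distributions P and Q is just a weighted log-ratio sum – and, notably, it is not symmetric:

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(P || Q) = sum_i p_i * ln(p_i / q_i) for discrete distributions.
    Terms with p_i = 0 are dropped, per the convention 0 * ln 0 = 0."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

p = np.array([0.5, 0.5])   # a fair coin
q = np.array([0.9, 0.1])   # a heavily biased coin
print(kl_divergence(p, q))  # divergence of P from Q
print(kl_divergence(q, p))  # a different number: KL is not symmetric
```

The asymmetry is exactly why it is called a “divergence” rather than a distance – a point that matters once variational methods start minimizing one direction of it.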
The Kullback-Leibler Divergence, Free Energy, and All Things Variational (Part 1 of 3)
Variational Methods: Where They Are in the AI/ML World The bleeding edge of AI and machine learning (ML) deals with variational methods. Variational inference, in particular, is needed because we can’t envision every possible instance that would comprise a good training and testing data set. There will ALWAYS be some sort of oddball thing that… Continue reading The Kullback-Leibler Divergence, Free Energy, and All Things Variational (Part 1 of 3)
Major Blooper – Coffee Reward to First Three Finders!
To err is human. REALLY screwing things up … makes for interesting stories, and forms the basis for the first-ever Themesis “coffee reward” offer. So here’s what’s at stake: there is a blooper, and it involves notation (more on that shortly), and we (that is, Themesis, Inc.) will send an 8-oz package of Kona coffee to… Continue reading Major Blooper – Coffee Reward to First Three Finders!
Peer Reviews, Acknowledgments, and Working in Pairs
We’ve previously addressed many aspects of “how to write your research paper.” (In fact, there is a full YouTube playlist on this; see the link at the end of this post.) Now, we’re shifting focus to how you wrap things up at the end. This is a “little things that count” move. We want to… Continue reading Peer Reviews, Acknowledgments, and Working in Pairs
When a Classifier Acts as an Autoencoder, and an Autoencoder Acts as a Classifier (Part 1 of 3)
One of the biggest mental sinkholes into which AI students can fall is not quite understanding the fundamental difference between how our two basic “building block” networks operate: the Multilayer Perceptron (MLP), trained with backpropagation (or any form of gradient descent learning), and the (restricted) Boltzmann machine (RBM), trained with contrastive divergence. It’s easy… Continue reading When a Classifier Acts as an Autoencoder, and an Autoencoder Acts as a Classifier (Part 1 of 3)
Strategizing Your Research Project: Developing Your Portfolio Elements
When life gets difficult, and you still have to get your research project done, it can be hard to focus and figure out the “most important thing.” Blog Update: How to Get More Focus and More Time Dear All – this is an update, added Sept. 22, 2022. Building your Portfolio is a BIG THING. Just… Continue reading Strategizing Your Research Project: Developing Your Portfolio Elements
Entropy in Energy-Based Neural Networks – Seven Key Papers (Part 3 of 3)
If you’ve looked at some classic papers in energy-based neural networks (e.g., the Hopfield neural network, the Boltzmann machine, the restricted Boltzmann machine, and all forms of deep learning), you’ll see that they don’t use the word “entropy.” At the same time, we’ve stated that entropy is a fundamental concept in these energy-based neural networks.… Continue reading Entropy in Energy-Based Neural Networks – Seven Key Papers (Part 3 of 3)
Understanding Entropy: Essential to Mastering Advanced AI
Have you been stumped when trying to read the classic AI papers? Are notions such as free energy and entropy confusing? This is probably because ALL areas of advanced AI are based, to some extent, on statistical mechanics. That means that you need to understand some “stat mech” rudiments to get through those papers. One… Continue reading Understanding Entropy: Essential to Mastering Advanced AI
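As a minimal sketch of the “stat mech rudiment” in question (the example distributions are illustrative, not from the post), Shannon entropy measures how spread-out a probability distribution is – maximal for a uniform distribution, near zero for a nearly deterministic one:

```python
import numpy as np

def shannon_entropy(p):
    """H(P) = -sum_i p_i ln p_i (in nats); 0 * ln 0 is taken as 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

uniform = np.ones(4) / 4                      # maximum uncertainty
peaked = np.array([0.97, 0.01, 0.01, 0.01])   # nearly deterministic
print(shannon_entropy(uniform))  # ln 4, the maximum for 4 outcomes
print(shannon_entropy(peaked))   # much smaller
```

Holding this picture in mind – entropy as a number that shrinks as a distribution sharpens – makes the entropy terms in the classic energy-based papers much easier to read.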