One of the two “world model” methods that we’re considering for AGI is APD (Action Perception Divergence), an evolution of active inference, introduced by Hafner et al. in 2022.
This blogpost accompanies the fourth vid in the AGI YouTube series.
Previously in This Series …
This blogpost is the fourth in a series, as is the corresponding YouTube video. For prior related YouTubes and blogposts, please visit the prior blogpost: Comparing Three AGI Contenders (Part 1 of Many).
The Essential Reading List – Three Papers from Our “Key Reads”
Three of the key papers that we discussed in the YouTube are presented again here; they are also among the FIVE papers presented in the SECOND blogpost in this series: AGI Basics: Five Key Reads.
Here they are again, for your convenience – this time, we present the Hafner paper first.
1. Hafner et al., 2022, “Action Perception Divergence”
This paper, led by Danijar Hafner, takes Friston’s earlier notion of active inference one step further.
AJM’s Note: Mention of this paper received multiple “thumbs up” in the comments for the YouTube associated with this post! Clearly an important paper!
- Hafner, Danijar, Pedro A. Ortega, Jimmy Ba, Thomas Parr, Karl Friston, and Nicolas Heess. “Action and Perception as Divergence Minimization.” arXiv:2009.01791v3 [cs.AI] 13 Feb 2022. (Accessed May 20, 2024. Available online at arXiv.)
2. Friston, 2013, “Life as We Know It”
AJM’s Note: Friston’s 2013 paper is the central point for the theoretical (and mathematical) development of his notions on free energy in the brain, and in any living system. He starts with the notion of a system separated from its external environment by a Markov blanket (boundary), and develops the argument from there.
- Friston, Karl. 2013. “Life as We Know It.” Journal of The Royal Society Interface. 10. doi:10.1098/rsif.2013.0475. (Accessed Oct. 13, 2022; pdf.)
3. Friston et al., 2015, “Knowing One’s Place”
AJM’s Note: Friston and colleagues, in their 2015 paper “Knowing One’s Place,” show how self-assembly (or self-organization) can arise out of variational free energy minimization. Very interesting read!
- Friston, K.; Levin, M.; Sengupta, B.; Pezzulo, G. 2015. “Knowing One’s Place: A Free-Energy Approach to Pattern Regulation.” J. R. Soc. Interface. 12:20141383. doi:10.1098/rsif.2014.1383. (Accessed Oct. 3, 2022; pdf.)
Important: Read Side-by-Side with Hafner et al.
True story: I got snarled up, ONCE AGAIN, interpreting the notation in the Hafner et al. (2022) “Action Perception Divergence” paper.
In fact, snarled up to where I had already shot some vid, EDITED that vid, and was well on my way to wrapping up a YouTube when I realized that I wasn’t quite getting it … and went back and started re-reading Hafner et al.
Then I pulled out the paper by Noor Sajid and colleagues (2020).
Now … interesting note … the first version of the Hafner et al. paper came out (on arXiv) in 2020, around the same time that the Sajid et al. paper was published.
There are a LOT OF SIMILARITIES in thought, in the language, between these two papers … and sometimes, reading the same thing but from two slightly different perspectives can be ever so helpful. (Kind of like reading about the same portion of Jesus’ life in two different Gospels.)
So … I would love to put down some of those key insights, and I will … but not right now. (Future blogpost theme … )
Instead, we’ll keep this blogpost as the repository of “important papers.”
And with that in mind, here’s that paper by Sajid et al. (2020).
- Sajid, Noor, Philip J. Ball, Thomas Parr, and Karl J. Friston. 2020. “Active Inference: Demystified and Compared.” arXiv:1909.10863v3 [cs.AI] 30 Oct 2020. (Accessed 17 June 2022; https://arxiv.org/abs/1909.10863 )
Predecessor Works
BOTH papers – Hafner et al. (2022) and Sajid et al. (2020) – reference Friston et al. (2017), which I’m just now starting to read. I’d spent time with two of Friston’s earlier works (2013 and 2015). But since Friston et al. (2017) seems to be more central to both Hafner et al. (2022) and Sajid et al. (2020), here it is – with a link to an online (free) pdf.
- Friston, K., T. FitzGerald, F. Rigoli, P. Schwartenbeck, G. Pezzulo. 2017. “Active Inference: a Process Theory.” Neural Computation, 29(1). (Accessed July 11, 2024; available online at Friston et al. (2017). Active Inference.)
This 2019 paper by Schwartenbeck et al. (including Friston) seems similar, and may build on the concepts in Friston et al. (2017). (AJM’s note: still reading it … )
- Schwartenbeck, Philipp, Johannes Passecker, Tobias U. Hauser, Thomas H.B. FitzGerald, Martin Kronbichler, and Karl J. Friston. 2019. “Computational Mechanisms of Curiosity and Goal-Directed Exploration.” eLife (Neuroscience) (May 10, 2019). doi:10.7554/eLife.41703. (Accessed July 11, 2024; available online at Schwartenbeck et al. Computational Mechanisms.)
The Schwartenbeck et al. paper (see also the abstract of the Friston et al. (2017) paper) seems to be a way-station on the road to Action Perception Divergence, which is broader and more comprehensive. Sajid et al. (2020), together with Hafner et al. (2022), discuss active inference as a strategy for directing agents.
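To make “active inference as a strategy for directing agents” a little more concrete, here is a deliberately minimal sketch of action selection by divergence minimization. All of the numbers, outcome categories, and action names are made up for illustration; a full active inference agent would score actions with expected free energy (as in Friston et al., 2017), not with this bare KL comparison.

```python
import math

def kl(p, q):
    """Discrete Kullback-Leibler divergence D_KL(p || q)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical "preferred" (goal) distribution over three possible outcomes.
preferred = [0.8, 0.15, 0.05]

# Predicted outcome distributions under two candidate actions.
# (Made-up numbers; a real agent derives these from its generative model.)
predicted = {
    "action_a": [0.7, 0.2, 0.1],
    "action_b": [0.3, 0.3, 0.4],
}

# Score each action by the divergence between its predicted outcomes and the
# preferred outcomes, then pick the action with the smallest divergence.
scores = {a: kl(pred, preferred) for a, pred in predicted.items()}
best = min(scores, key=scores.get)
print(best)  # the action whose predicted outcomes best match the preferences
```

Even this toy version shows the core move that both Sajid et al. and Hafner et al. discuss: the agent acts so as to pull its (predicted) observations toward a preferred distribution, rather than maximizing an external reward signal.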
Important Tutorial Papers
Two of these tutorials are my own; the others are among the most important (and most-cited) papers in this area.
The Kullback-Leibler Divergence – and the REVERSE Kullback-Leibler (AJM Tutorial)
All generative AI methods begin with the reverse Kullback-Leibler divergence.
This is true of the Hafner et al. paper as well as of other generative AI works, including those by Friston, who states very bluntly that active inference is a generative method, in that it invokes Bayesian inference.
As part of studying the Kullback-Leibler divergence (and the reverse KL), as well as clarifying notation, I wrote – then REWROTE this little tutorial. (It’s still privately published; will add a bit more to it from the APD perspective, get some reviews from colleagues, and then upload to arXiv.)
- Maren, Alianna J. 2024. “Minding Your P’s and Q’s: Notational Variations Expressing the Kullback-Leibler Divergence.” Themesis, Inc. Technical Note THM TN2024-001 (ajm). (PDF last accessed Feb. 02, 2024.)
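Since the forward/reverse distinction is exactly what snarled my notation reading, a small numerical sketch may help. The two distributions below are arbitrary illustrative numbers; the point is only that swapping the arguments of the KL divergence gives different values.

```python
import math

def kl(p, q):
    """Discrete KL divergence: D_KL(p || q) = sum_i p_i * log(p_i / q_i)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A two-mode "true" distribution p and a single-mode approximation q
# (arbitrary numbers, chosen only to make the asymmetry visible).
p = [0.45, 0.10, 0.45]
q = [0.80, 0.15, 0.05]

forward = kl(p, q)   # D_KL(p || q): penalizes q for missing mass where p has it
reverse = kl(q, p)   # D_KL(q || p): the "reverse" KL used in variational methods

print(f"forward KL(p||q) = {forward:.4f}")
print(f"reverse KL(q||p) = {reverse:.4f}")
# The two values differ, illustrating that the KL divergence is not symmetric.
```

The asymmetry is why it matters so much, when reading Hafner et al. or Friston, to track which distribution sits in which slot of the divergence.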
Variational Inference / Active Inference Tutorial (AJM’s Work)
This is the “Rosetta Stone” tutorial on variational Bayes; it traces Friston’s notation back into Matthew Beal’s dissertation (his primary notation source), and also cross-correlates with David Blei et al.
(NOTE: it needs a minor rewording around the “P’s and Q’s” – will get to that soon.)
- Maren, Alianna J. 2019. (Revised 2022.) “Derivation of Variational Bayes Equations.” Themesis Technical Report TR-2019-01v5 (ajm). arXiv:1906.08804v5 [cs.NE]. doi:10.48550/arXiv.1906.08804. (Accessed June 8, 2023; available online at Deriv Var. Bayes (v5).)
Blei et al. on Variational Inference
AJM’s Note: I refer to the Blei et al. tutorial because it is very solid and lucid. If we’re trying to understand variational ANYTHING (variational Bayes, variational inference, variational autoencoders, etc.), Blei et al. make some comments and offer a perspective that is very complementary to that given by Beal in his 2003 dissertation.
- Blei, D.M., A. Kucukelbir, and J.D. McAuliffe. 2016. “Variational Inference: A Review for Statisticians.” arXiv:1601.00670v9. doi:10.48550/arXiv.1601.00670. (Accessed June 28, 2022; pdf.)
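The identity that all of these tutorials revolve around, log p(x) = ELBO + KL(q || p(z|x)), can be checked numerically on a toy model. The prior and likelihood values below are hypothetical numbers chosen purely for illustration.

```python
import math

# Toy generative model: a binary latent z and one fixed observation x.
# Prior p(z) and likelihood p(x|z) are hypothetical numbers for illustration.
prior = [0.6, 0.4]          # p(z=0), p(z=1)
lik   = [0.2, 0.7]          # p(x | z=0), p(x | z=1)

evidence = sum(pz * lx for pz, lx in zip(prior, lik))            # p(x)
posterior = [pz * lx / evidence for pz, lx in zip(prior, lik)]   # p(z | x)

q = [0.5, 0.5]              # an arbitrary variational distribution over z

# ELBO = E_q[log p(x, z)] - E_q[log q(z)]
elbo = sum(qz * (math.log(pz * lx) - math.log(qz))
           for qz, pz, lx in zip(q, prior, lik))

# Reverse KL from q to the true posterior p(z | x).
kl_q_post = sum(qz * math.log(qz / pos) for qz, pos in zip(q, posterior))

# The fundamental identity: log p(x) = ELBO + KL(q || p(z|x)).
print(math.log(evidence), elbo + kl_q_post)
```

Because the KL term is non-negative, the ELBO is a lower bound on the log evidence, and maximizing the ELBO over q is equivalent to minimizing the reverse KL to the true posterior. That equivalence is the shared backbone of Beal’s dissertation, the Blei et al. review, and my own tutorial above.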
Other Cited Papers
In the YouTube, I mentioned this paper on the (reverse) Kullback-Leibler divergence that was offered as a reference by Hafner et al.; here’s the citation.
The introductory paragraph of that paper uses an atypical representation of the Kullback-Leibler divergence, and the entire discussion is very abstract.
- Csiszár, I., and F. Matus. 2003. “Information Projections Revisited.” IEEE Transactions on Information Theory, 49(6): 1474-1490. (Accessed July 16, 2024; available online at Csiszar and Matus pdf.)
Understanding Friston: AJM’s YouTube and Blogpost Series
One of the trickiest things in understanding Friston’s original active inference formulation is getting insight into his notation.
An earlier YouTube was an attempt to interpret how Friston used “Psi” (to represent the external world) in his papers.
Prior to that YouTube, I spent both the summer and fall quarters of 2022 re-teaching myself variational inference (after I’d found a notational mistake in an early draft of my own variational inference tutorial). During that time, I put out a baker’s dozen blogposts on the Kullback-Leibler divergence, free energy, and variational everything.
Here’s the LAST blogpost from that series, which has an EXTREMELY COMPREHENSIVE resource compendium.
- Maren, Alianna J. 2022. “Variational Free Energy: Getting-Started Guide and Resource Compendium.” Themesis Blogpost Series (Nov. 16, 2022). (Accessed July 11, 2024; available at Resource Compendium.)
Here’s the FIRST blogpost in that series, which starts with the Kullback-Leibler divergence.
- Maren, Alianna J. 2022. “The Kullback-Leibler Divergence, Free Energy, and All Things Variational (Part 1 of 3).” Themesis Blogpost Series (June 28, 2022). (Accessed July 11, 2024; available at Kullback-Leibler Divergence, Part 1.)
To Be Continued …
{* Blogpost still in progress. AJM, Tuesday, July 16, 2024; 8:20 AM. Hawai’i time. *}