Twelve Days of Generative AI: Day 3 – Transformers and Beyond

In this blogpost, we investigate how transformer-based architectures actually implement the three key elements of generative AI. The Easiest Entry Point: I love Matthias Bal’s blogposts, which put transformers into a statistical-mechanics context – here’s one of my favorites, and possibly the EASIEST to read. And truly, you don’t have to read the whole… Continue reading Twelve Days of Generative AI: Day 3 – Transformers and Beyond

Twelve Days of Generative AI: Day 2 – Autoencoders

Today we address one of the most basic components of generative AI – a good, old-fashioned autoencoder. We have TWO KEY RESOURCES for you, and for some quick YouTubes, you might consider the ones listed in the post. References and Resources: this section gives you the formal Chicago (Author-Date) style references for the two documents identified earlier.
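Before diving into those resources, it may help to see the core idea in miniature. Below is a hypothetical sketch (not code from the post) of the simplest possible autoencoder: a linear encoder squeezes 4-D inputs through a 2-D bottleneck, and a linear decoder reconstructs them, trained by gradient descent on reconstruction error.

```python
import numpy as np

# Minimal linear autoencoder sketch (toy example, assumed for illustration).
# The data secretly lives on a 2-D plane inside R^4, so a 2-unit bottleneck
# can, in principle, reconstruct it almost perfectly.
rng = np.random.default_rng(0)

Z = rng.normal(size=(200, 2))        # hidden 2-D coordinates
M = rng.normal(size=(2, 4))          # embedding of the plane into R^4
X = Z @ M                            # observed 4-D data

d_in, d_hidden = 4, 2
W_enc = rng.normal(scale=0.5, size=(d_in, d_hidden))   # encoder weights
W_dec = rng.normal(scale=0.5, size=(d_hidden, d_in))   # decoder weights

lr = 0.02
for _ in range(5000):
    H = X @ W_enc                    # encode: project into the bottleneck
    X_hat = H @ W_dec                # decode: reconstruct the input
    err = X_hat - X                  # reconstruction error
    # Gradients of the mean squared reconstruction error
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
```

Real autoencoders add nonlinearities and deeper stacks, but the essential loop – encode, decode, penalize reconstruction error – is exactly this.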

Twelve Days of Generative AI: Day 1 – Introduction and Overview

The most exciting thing right now is to look at how AGI (artificial GENERAL intelligence) is emerging in 2025. But in order to understand AGI, we first need a firm grasp on generative AI (“Gen-AI”), because AGI is going to be AT LEAST an order of magnitude more complex than Gen-AI. And Gen-AI is,… Continue reading Twelve Days of Generative AI: Day 1 – Introduction and Overview

AGI Wars: Emerging Landscape

This blogpost accompanies a YouTube (thumbnail below), which is still under development. Blogpost STILL In Progress! Dear All – It’s 8 AM on Tuesday, Oct. 22nd. Sorry, I was just a little “post” after getting this YouTube up on Sunday. I’ll slowly be filling in these extra pieces. Please check back soon. And thank you!… Continue reading AGI Wars: Emerging Landscape

AGI Wars: 2024 Nobel Prize Award: Part 1- Physics

Of the five 2024 Nobel Prize awards made in physics and chemistry, we can reasonably say that three of the five went to Google employees (Demis Hassabis & John Jumper) or a former employee (Geoffrey Hinton). Of the two remaining awards, one went to an academic researcher (David Baker) who collaborated with a Google employee,… Continue reading AGI Wars: 2024 Nobel Prize Award: Part 1- Physics

Free Energy Principle Plus Semantics

This post accompanies a YouTube published on Sept. 15, 2024, which follows up on a suggestion made by Sharif that we consider a paper by Ramstead, Friston, and Hipólito (2020). This paper is titled “Is the Free-Energy Principle a Formal Theory of Semantics? From Variational Density Dynamics to Neural and Phenotypic Representations.” It is, indeed,… Continue reading Free Energy Principle Plus Semantics

AGI: RGMs, JEPA, and CORTECONs(R): Three AGI Building Blocks

This blogpost accompanies the YouTube “AGI: Three Foundation Methods – RGM, JEPA, and CORTECONs(R),” published Sept. 12, 2024. In this YouTube, we identify three ways to follow up with us. Here are the details for each. Level 1. Self-Study on CORTECONs(R): We identify several steps for a gentle, hands-on introduction to CORTECONs(R) – more… Continue reading AGI: RGMs, JEPA, and CORTECONs(R): Three AGI Building Blocks

Contrast-and-Compare: Friston et al. (2024) and Hafner et al. (2022)

This blog is in progress. (AJM, Friday, Aug. 30, 2024; 09:00 AM HI Time) This blogpost accompanies the YouTube on “AGI: Action Perception Divergence (APD) vs. Renormalizing Generative Models (RGMs)” This blogpost – and the accompanying YouTube – is in response to a question asked in the Comments section of the prior YouTube. Here’s that… Continue reading Contrast-and-Compare: Friston et al. (2024) and Hafner et al. (2022)

Big AGI Breakthrough: Leveling the Playing Field

Three weeks ago, the AGI world tilted on its axis. More specifically, Friston et al. (2024) introduced an evolutionary advance in active inference, which they call renormalising generative models (RGMs). This blogpost addresses three key questions. Here’s the YouTube that accompanies this blogpost. Background: Friston’s evolution of active inference is one of the key methods… Continue reading Big AGI Breakthrough: Leveling the Playing Field