OK, I’ll admit – I tend to click on some AI articles offered by my MSN.com feed – enough to ensure that about 30% of all the MSN-offered articles are AI-related. And my Themesis email inbox? (The one that I use as a grab-all for everybody’s EVERYTHING?) That’s full to the point of toppling over… Continue reading The Biggest Problem for AI Workers Is “Focus”
Neuro-Symbolic AGI: Emerging Groundswell Shift
Three key indicators that the shift from a “simple” AI architecture (transformers + reinforcement learning) to a “real” (and more complex) neuro-symbolic AI is FINALLY happening: YouTube: “Neuro-Symbolic AI Manager: Engineer or Magician?” Jobs First There has been no point in talking about neuro-symbolic AI (or any form of AGI beyond transformers + RL) until… Continue reading Neuro-Symbolic AGI: Emerging Groundswell Shift
LLMs Don’t Work; Why They Don’t and What’s Next
Two and a half years since the first ChatGPT release, and researchers, developers, and business leaders are reluctantly coming to the same conclusions: So the real and compelling question is: what’s next? Actually, there are two questions: This last question leads us in all KINDS of directions, ranging from: {* Work in progress *} References… Continue reading LLMs Don’t Work; Why They Don’t and What’s Next
AGI Architecture Stability with Multiple Timeframes
The single most important design principle that absolutely must underlie AGI architectures is system stability; that is, bringing the AGI world model back to a stable state when there have been stimulus infusions due to external observations, internal feedback, or even just small perturbations. This YouTube vid discusses AGI architectures, circa March 2024. The Most… Continue reading AGI Architecture Stability with Multiple Timeframes
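The “return to a stable state after perturbation” idea in that teaser can be illustrated with a toy attractor network. This is my own minimal NumPy sketch (not code from the post): a Hopfield-style network stores one “world-model” state, and its update dynamics pull a perturbed state back to that stored stable point.

```python
import numpy as np

rng = np.random.default_rng(0)

# The stored "stable world-model state": a +/-1 vector of 32 units.
pattern = rng.choice([-1.0, 1.0], size=32)

# Hebbian weight matrix; zero diagonal so units don't self-excite.
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0.0)

def relax(state, steps=10):
    """Synchronous sign updates; each step moves the state downhill
    in the network's energy, toward the stored attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

# A "stimulus infusion" / perturbation: flip 5 of the 32 units.
noisy = pattern.copy()
noisy[rng.choice(32, size=5, replace=False)] *= -1.0

recovered = relax(noisy)
print(np.array_equal(recovered, pattern))  # True: dynamics restore the stable state
```

This is only a cartoon of the stability principle, of course; the post’s point is about much richer world models, but the attractor picture is the simplest instance of “perturb, then relax back.”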
Steps Towards an AGI Architecture (“Aw, Piffle!”)
In the last blogpost and accompanying YouTube, we ventured the opinion that Google was using two “representation levels” to create its lovely Veo-2 generative AI video capabilities. It now appears that we (that’s the “editorial” we) may have been wrong. Aw, piffle! (And damn, and overall disappointment at our end.) BUT … the good that… Continue reading Steps Towards an AGI Architecture (“Aw, Piffle!”)
Veo-2 vs Sora: What Google Has (That OpenAI Doesn’t)
Summary-at-a-Glance Google’s use of physically-realistic object models in its Veo-2 video generation system takes us a step closer to artificial general intelligence (AGI). No, it is NOT – in and of itself – an AGI. It is still a generative capability, specifically good at text-to-video generation. However, it differs sharply from simpler, “pure-play” video generative… Continue reading Veo-2 vs Sora: What Google Has (That OpenAI Doesn’t)
What Will REALLY Give Us Some AGI
Recently (as of late December 2024), there’s been a huge hoopla about OpenAI’s o3, and also about Gemini 2 – with the question percolating (once again): is this AGI? Are we heading towards “superintelligence”? Let’s pull back a moment. OpenAI’s o3 uses strategic chain-of-thought reasoning. (See Wang et al., Sept. 2024, for a good exposition.) This is… Continue reading What Will REALLY Give Us Some AGI
Twelve Days of Generative AI: Day 3 – Transformers and Beyond
In this blogpost, we investigate how transformer-based architectures actually implement the three key elements of generative AI: The Easiest Entry Point I love Matthias Bal’s blogposts that put transformers into a statistical mechanics context – here’s one of my favorites, and possibly the EASIEST to read. And truly, you don’t have to read the whole… Continue reading Twelve Days of Generative AI: Day 3 – Transformers and Beyond
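For readers who want the mechanical core behind that teaser, here is my own minimal sketch (not code from the post or from Matthias Bal’s writeups) of scaled dot-product attention, the central operation of a transformer. The softmax here is exactly a Boltzmann distribution over keys, which is the hook into the statistical-mechanics framing the post recommends.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # Boltzmann-like distribution over keys
    return weights @ V                   # weighted mixture of value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 tokens, 8-dimensional embeddings
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = attention(Q, K, V)
print(out.shape)  # (4, 8): one mixed value vector per token
```

In Bal’s statistical-mechanics language, the 1/sqrt(d) scaling plays the role of an inverse temperature: larger scaling sharpens the distribution over keys, smaller scaling flattens it.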
Twelve Days of Generative AI: Day 2 – Autoencoders
Today we address one of the most basic components of generative AI – a good, old-fashioned autoencoder. We have TWO KEY RESOURCES for you: For some quick YouTubes, you might consider: References and Resources This gives you the formal Chicago (Author-Date) style references for the two documents identified earlier.
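As a companion to that teaser, here is my own toy sketch of a “good, old-fashioned autoencoder” (it is not one of the post’s two key resources): a linear encoder squeezes 8-dimensional data through a 2-dimensional bottleneck, a linear decoder reconstructs it, and plain gradient descent on the squared reconstruction error trains both maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data that genuinely lives on a 2-dim subspace of 8-dim space,
# so a 2-unit bottleneck can, in principle, reconstruct it perfectly.
latent = rng.normal(size=(200, 2))
mix = rng.normal(size=(2, 8))
X = latent @ mix

d_in, d_hid = 8, 2
W_enc = rng.normal(scale=0.1, size=(d_in, d_hid))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(d_hid, d_in))  # decoder weights

mse0 = float(np.mean(((X @ W_enc) @ W_dec - X) ** 2))  # error before training

lr = 0.01
for _ in range(500):
    Z = X @ W_enc                    # encode: compress to the bottleneck
    X_hat = Z @ W_dec                # decode: reconstruct the input
    err = X_hat - X                  # reconstruction error
    # Gradient descent on mean squared reconstruction error.
    W_dec -= lr * (Z.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

mse = float(np.mean(((X @ W_enc) @ W_dec - X) ** 2))
print(mse < mse0)  # True: reconstruction error has dropped
```

This is the linear special case (no nonlinearity in the bottleneck), which is equivalent to learning a PCA-style projection; real autoencoders add nonlinear activations, but the compress-then-reconstruct training loop is the same.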
Twelve Days of Generative AI: Day 1 – Introduction and Overview
The most exciting thing right now is to look at how AGI (artificial GENERAL intelligence) is emerging in 2025. But in order to understand AGI, we first need a firm, solid grasp on generative AI (“Gen-AI”), because AGI is going to be AT LEAST an order of magnitude more complex than Gen-AI. And Gen-AI is,… Continue reading Twelve Days of Generative AI: Day 1 – Introduction and Overview