This blogpost accompanies a YouTube video {* still in edit mode, check back for YouTube link coming soon *} in which I review three papers that I’d stashed in my “super-secret-special-storage tote” when I was moving from the mainland to Hawai’i – over ten years ago! For ten years, the “special tote” languished – until… Continue reading Three “Golden Oldies” Point to AGI
Author: AJ Maren
How to Prepare for the AGI World: Tools, Pseudo-Persons, and Slaves
The smart survival strategy for each of us is to start thinking and planning now for a world in which there are not just AIs, but AGIs. This will be a very different world. In this blogpost, and in the accompanying YouTube vid (see below), we look at the most essential distinction that we can… Continue reading How to Prepare for the AGI World: Tools, Pseudo-Persons, and Slaves
Significant Step Towards AGI: Verses’ Robotic Planning Horizons
The biggest challenge that we face now is to get beyond the simple combination of LLMs (large language models, including all forms of multimodal models) with RLHF (reinforcement learning with human feedback, or any other form of feedback). Our current methods are pushing the limit of what we can do with some (fairly substantial and… Continue reading Significant Step Towards AGI: Verses’ Robotic Planning Horizons
The Biggest Problem for AI Workers Is “Focus”
OK, I’ll admit – I tend to click on some AI articles offered by my MSN.com feed – enough to ensure that about 30% of all the MSN-offered articles are AI-related. And my Themesis email inbox? (The one that I use as a grab-all for everybody’s EVERYTHING?) That’s full to the point of toppling over,… Continue reading The Biggest Problem for AI Workers Is “Focus”
Neuro-Symbolic AGI: Emerging Groundswell Shift
Three key indicators that the shift from a “simple” AI architecture (transformers + reinforcement learning) to a “real” (and more complex) neuro-symbolic AI is FINALLY happening: YouTube: “Neuro-Symbolic AI Manager: Engineer or Magician?” Jobs First: There has been no point in talking about neuro-symbolic AI (or any form of AGI beyond transformers + RL) until… Continue reading Neuro-Symbolic AGI: Emerging Groundswell Shift
LLMs Don’t Work; Why They Don’t and What’s Next
Two and a half years after the first ChatGPT release, researchers, developers, and business leaders are reluctantly coming to the same conclusions: So the real and compelling question is: what’s next? Actually, there are two questions: This last question leads us in all KINDS of directions, ranging from: {* Work in progress *} References… Continue reading LLMs Don’t Work; Why They Don’t and What’s Next
AGI Architecture Stability with Multiple Timeframes
The single most important design principle that absolutely must underlie AGI architectures is system stability; that is, bringing the AGI world model back to a stable state when there have been stimulus infusions due to external observations, internal feedback, or even just small perturbations. This YouTube vid discusses AGI architectures, circa March 2024. The Most… Continue reading AGI Architecture Stability with Multiple Timeframes
Steps Towards an AGI Architecture (“Aw, Piffle!”)
In the last blogpost and accompanying YouTube, we ventured the opinion that Google was using two “representation levels” to create its lovely Veo-2 generative AI video capabilities. It now appears that we (that’s the “editorial” we) may have been wrong. Ah, piffle! (And damn, and overall disappointment at our end.) BUT … the good that… Continue reading Steps Towards an AGI Architecture (“Aw, Piffle!”)
Veo-2 vs Sora: What Google Has (That OpenAI Doesn’t)
Summary-at-a-Glance: Google’s use of physically-realistic object models in its Veo-2 video generation system takes us a step closer to artificial general intelligence (AGI). No, it is NOT – in and of itself – an AGI. It is still a generative capability, specifically good at text-to-video generation. However, it differs sharply from simpler, “pure-play” video generative… Continue reading Veo-2 vs Sora: What Google Has (That OpenAI Doesn’t)
What Will REALLY Give Us Some AGI
Recently (as of late December 2024), there has been a huge hoopla about OpenAI’s o3, and also about Gemini 2 – with the question percolating (once again): is this AGI? Are we heading towards “superintelligence”? Let’s pull back a moment. OpenAI’s o3 uses strategic chain-of-thought reasoning. (See Wang et al., Sept. 2024, for a good exposition.) This is… Continue reading What Will REALLY Give Us Some AGI