AI Misalignment: Anthropic’s Studies and More

Anthropic just released a study on AI misalignment that is even more chilling than what they found last week, when it became clear that a hacker group had used Claude Code to attack at least thirty different organizations. In today’s study, Anthropic researchers M. MacDiarmid and others revealed the results of a study where… Continue reading AI Misalignment: Anthropic’s Studies and More

World Models: Five Competing Approaches

As of late 2025, there are five different approaches to creating world models. Here, we do a compare-and-contrast, identifying not only the specific areas in which each is (or can be) useful and the companies and “Chief Magicians” (leading scientists) behind each, but also – most importantly – the money. This blogpost is still under development; we’ll… Continue reading World Models: Five Competing Approaches

How to Prepare for the AGI World: Tools, Pseudo-Persons, and Slaves

The smart survival strategy for each of us is to start thinking and planning now for a world in which there are not just AIs, but AGIs. This will be a very different world. In this blogpost, and in the accompanying YouTube vid (see below), we look at the most essential distinction that we can… Continue reading How to Prepare for the AGI World: Tools, Pseudo-Persons, and Slaves

Significant Step Towards AGI: Verses’ Robotic Planning Horizons

The biggest challenge that we face now is to get beyond the simple combination of LLMs (large language models, including all forms of multimedia models) with RLHF (reinforcement learning with human feedback, or any other form of feedback). Our current methods are pushing the limit of what we can do with some (fairly substantial and… Continue reading Significant Step Towards AGI: Verses’ Robotic Planning Horizons

Neuro-Symbolic AGI: Emerging Groundswell Shift

Three key indicators that the shift from a “simple” AI architecture (transformers + reinforcement learning) to a “real” (and more complex) neuro-symbolic AI is FINALLY happening: YouTube: “Neuro-Symbolic AI Manager: Engineer or Magician?” Jobs First There has been no point in talking about neuro-symbolic AI (or any form of AGI beyond transformers + RL) until… Continue reading Neuro-Symbolic AGI: Emerging Groundswell Shift

LLMs Don’t Work; Why They Don’t and What’s Next

Two and a half years after the first ChatGPT release, researchers, developers, and business leaders are reluctantly coming to the same conclusions: So the real and compelling question is: what’s next? Actually, there are two questions: This last question leads us in all KINDS of directions, ranging from: {* Work in progress *} References… Continue reading LLMs Don’t Work; Why They Don’t and What’s Next

AGI Architecture Stability with Multiple Timeframes

The single most important design principle that absolutely must underlie AGI architectures is system stability; that is, bringing the AGI world model back to a stable state when there have been stimulus infusions due to external observations, internal feedback, or even just small perturbations. This YouTube vid discusses AGI architectures, circa March 2024. The Most… Continue reading AGI Architecture Stability with Multiple Timeframes

Steps Towards an AGI Architecture (“Aw, Piffle!”)

In the last blogpost and accompanying YouTube, we ventured the opinion that Google was using two “representation levels” to create its lovely Veo-2 generative AI video capabilities. It now appears that we (that’s the “editorial” we) may have been wrong. Aw, piffle! (And damn, and overall disappointment at our end.) BUT … the good that… Continue reading Steps Towards an AGI Architecture (“Aw, Piffle!”)