{* Note from AJM: This blogpost is largely complete – but I’ll be adding more links to prior YouTubes and blogposts within the next 24 hours, connecting this to points made in the accompanying YouTube. AJM – Wednesday, Oct. 29, 2025; 4:20 PM Hawai’i time (10:20 PM East Coast). *} If you’ve read Fever Dream… Continue reading Loyalty, Core Values, and AGI
Category: artificial general intelligence
Three “Golden Oldies” Point to AGI
This blogpost accompanies a YouTube video {* still in edit mode, check back for YouTube link coming soon *} in which I review three papers that I’d stashed in my “super-secret-special-storage tote” when I was moving from the mainland to Hawai’i – over ten years ago! For ten years, the “special tote” languished – until… Continue reading Three “Golden Oldies” Point to AGI
Pivoting to AGI: What to Read/Watch/Do This Weekend
We are moving from a generative-AI era to an AGI era. What that means – in the simplest technical terms – is that we’re pivoting from “single-algorithm systems” to systems that must – intrinsically – involve multiple major subsystems and multiple control structures. We’re closing out a 50-year connectionist AI era. This era began with… Continue reading Pivoting to AGI: What to Read/Watch/Do This Weekend
AGI: Google’s Mixture of Depths is a Baby Step Towards Artificial General Intelligence (AGI)
LLMs, mixture-of-depths, mixture-of-experts, MoD, MoE
AGI: Generative AI, AGI, the Future of AI, and You
Generative AI is about fifty years old. There are four main kinds of generative AI (energy-based neural networks, variational inference, variational autoencoders, and transformers). There are three fundamental methods underlying all forms of generative AI: the reverse Kullback-Leibler divergence, Bayesian conditional probabilities, and statistical mechanics. Transformer-based methods add in multi-head attention and positional encoding. Generative AI is not, and never can be, artificial general intelligence, or AGI. AGI requires bringing in more architectural components, such as ontologies (e.g., knowledge graphs), and a linking mechanism. Themesis has developed this linking mechanism, CORTECONs(R), for COntent-Retentive, TEmporally-CONnected neural networks. CORTECONs(R) will enable near-term AGI development. Preliminary CORTECON work, based on the cluster variation method in statistical mechanics, includes theory, architecture, code, and worked examples, all available for public access. Community participation is encouraged.
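For readers new to the first of the three methods named above: the reverse Kullback-Leibler divergence measures how a model distribution Q diverges from a target distribution P, with the model Q in the leading position. (This is the standard textbook definition, given here as general background; it is not specific to the Themesis formulation.)

```latex
D_{KL}(Q \parallel P) \;=\; \sum_{x} Q(x)\,\ln\!\frac{Q(x)}{P(x)}
```

Minimizing this quantity over Q – rather than the forward divergence D_KL(P || Q) – is what variational-inference methods do; the reverse form tends to be “mode-seeking,” concentrating Q on regions where P has high probability.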
CORTECONs: AGI in 2024-2025 – R&D Plan Overview
By the end of 2024, we anticipate having a fully-functional CORTECON (COntent-Retentive, TEmporally-CONnected) framework in place. This will be the core AGI (artificial general intelligence) engine. This is all very straightforward: a calm, steady development that we expect will unfold rather smoothly. The essential AGI engine is a CORTECON. The main internal… Continue reading CORTECONs: AGI in 2024-2025 – R&D Plan Overview