Pivoting to AGI: What to Read/Watch/Do This Weekend

We are moving from a generative-AI era to an AGI era. What that means, in the simplest technical terms, is that we’re pivoting from “single-algorithm systems” to systems that intrinsically involve multiple major subsystems and multiple control structures. We’re closing out a 50-year connectionist AI era. This era began with… Continue reading Pivoting to AGI: What to Read/Watch/Do This Weekend

Building Your Online Portfolio (A Collection of Useful Links)

One of the strongest things that we can do to position ourselves – for the next career move, and also for creating a new tier of powerful professional relationships – is to build our online Portfolio. This post provides links to good Portfolio examples for three different cases, along with a collection of useful… Continue reading Building Your Online Portfolio (A Collection of Useful Links)

It Might All Come Down to Rare Earths

Jensen Huang’s keynote talk at NVIDIA GTC last week was very likely the tip of the iceberg. Demand for processing units is going up. Going CRAZY up. NVIDIA’s new product releases and recent stock price upsurges reflect that. But NVIDIA is not the only chip-maker in the US. The Biden Administration has been investing… Continue reading It Might All Come Down to Rare Earths

AGI: Generative AI, AGI, the Future of AI, and You

Generative AI is about fifty years old. There are four main kinds of generative AI (energy-based neural networks, variational inference, variational autoencoders, and transformers). There are three fundamental methods underlying all forms of generative AI: the reverse Kullback-Leibler divergence, Bayesian conditional probabilities, and statistical mechanics. Transformer-based methods add in multi-head attention and positional encoding. Generative AI is not, and never can be, artificial general intelligence, or AGI. AGI requires bringing in more architectural components, such as ontologies (e.g., knowledge graphs), and a linking mechanism. Themesis has developed this linking mechanism, CORTECONs(R), for COntent-Retentive, TEmporally-CONnected neural networks. CORTECONs(R) will enable near-term AGI development. Preliminary CORTECON work, based on the cluster variation method in statistical mechanics, includes theory, architecture, code, and worked examples, all available for public access. Community participation is encouraged.
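
For reference, here is the reverse Kullback-Leibler divergence in its standard textbook form, added for readers’ convenience; the notation q for the approximating distribution and p for the target is our choice, not necessarily the post’s:

```latex
% Reverse KL divergence: the approximating distribution q sits in the
% averaging (first) slot, in contrast to the forward form D_KL(p || q).
D_{\mathrm{KL}}(q \,\|\, p) \;=\; \sum_{x} q(x)\,\ln\frac{q(x)}{p(x)}
```

In variational inference, and hence in variational autoencoders, minimizing this quantity over q, with p the true posterior, is exactly what maximizing the evidence lower bound (ELBO) accomplishes.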

CORTECONs: AGI in 2024-2025 – R&D Plan Overview

By the end of 2024, we anticipate having a fully functional CORTECON (COntent-Retentive, TEmporally-CONnected) framework in place. This will be the core AGI (artificial general intelligence) engine. This is all very straightforward. It’s a calm, steady development – we expect it will all unfold rather smoothly. The essential AGI engine is a CORTECON. The main internal… Continue reading CORTECONs: AGI in 2024-2025 – R&D Plan Overview

CORTECONs and AGI: Reaching Latent Layer Equilibrium

The most important thing in building an AGI is the ability to repeatedly bring the latent layer to equilibrium. This is the fundamental capability that has been missing from previous neural networks. The lack of a dynamic process for continuously reaching a free energy minimum is why we have not had, until now, a robust… Continue reading CORTECONs and AGI: Reaching Latent Layer Equilibrium
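
The excerpt does not spell out the CORTECON update equations, but as a generic illustration of what “repeatedly bringing a latent layer to a free energy minimum” can look like, here is a minimal sketch: mean-field relaxation of an Ising-style latent layer, iterated until the free energy stops decreasing. The energy function, update rule, and all names here are our assumptions for illustration, not the CORTECON mechanism itself.

```python
import numpy as np

def free_energy(h, W, b, T=1.0):
    """Ising-style free energy of a latent layer: energy minus T times
    the binary-unit entropy. h: activation probabilities in (0, 1);
    W: symmetric couplings; b: biases. (Illustrative assumption.)"""
    energy = -0.5 * h @ W @ h - b @ h
    entropy = -np.sum(h * np.log(h) + (1 - h) * np.log(1 - h))
    return energy - T * entropy

def relax_to_equilibrium(W, b, T=1.0, steps=500, lr=0.1, tol=1e-8):
    """Damped mean-field relaxation: nudge each unit toward the
    stationarity condition h = sigmoid((W h + b) / T), stopping when
    the free energy has plateaued."""
    rng = np.random.default_rng(0)
    h = rng.uniform(0.4, 0.6, size=b.shape)   # start near maximum entropy
    f_prev = free_energy(h, W, b, T)
    for _ in range(steps):
        h = (1 - lr) * h + lr / (1 + np.exp(-(W @ h + b) / T))
        f = free_energy(h, W, b, T)
        if abs(f_prev - f) < tol:              # free energy has converged
            break
        f_prev = f
    return h, f

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    W = rng.normal(scale=0.5, size=(8, 8)); W = (W + W.T) / 2
    np.fill_diagonal(W, 0.0)                   # no self-coupling
    b = rng.normal(scale=0.5, size=8)
    h_eq, f_eq = relax_to_equilibrium(W, b)
    print("equilibrium activations:", np.round(h_eq, 3))
    print("free energy at equilibrium:", round(f_eq, 4))
```

The stationarity condition of this free energy is h = sigmoid((W h + b) / T), so the damped fixed-point iteration above settles exactly where the free-energy gradient vanishes.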

AGI Notation: Friston’s Use of “Psi”

We want to create an AGI (artificial general intelligence). If you’re reading this post, we trust that is your intention as well. We already know that AGI won’t come out of transformers. They are, in their essence, content-addressable memories. That’s what they can do; that’s ALL that they can do. Our core equation comes… Continue reading AGI Notation: Friston’s Use of “Psi”
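
For readers who want the notation in front of them: in Friston’s free energy formulation, ψ (psi) conventionally denotes the external, hidden states of the world that the agent must infer. A standard textbook statement of the variational free energy follows, with s the sensory states, μ the internal states parameterizing the recognition density q, and m the generative model; this is the conventional form, not a claim about the exact “core equation” the post goes on to develop.

```latex
% Variational free energy under the recognition density q(psi | mu):
% expected log q minus expected log of the generative density p(s, psi | m).
F(s, \mu) \;=\; \mathbb{E}_{q(\psi \mid \mu)}
    \big[ \ln q(\psi \mid \mu) \;-\; \ln p(s, \psi \mid m) \big]
```

Because F equals the KL divergence between q(ψ | μ) and the posterior p(ψ | s, m), plus the surprise −ln p(s | m), minimizing F both bounds surprise and drives q toward the true posterior.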

1-D Cluster Variation Method: Simple Text String Worked Example (Part 1)

Today, we focus on getting the entropy term in the 1-D cluster variation method (the 1D CVM), using a simple text string as the basis for our worked example. This blog post is in progress. Please check back tomorrow for the updated version. Thank you! – AJM. Our end goal – the reason that… Continue reading 1-D Cluster Variation Method: Simple Text String Worked Example (Part 1)
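
As a warm-up while the full post is completed, here is a minimal sketch of the kind of computation involved: encode a text string as a binary sequence, tally the single-unit fractions x and the nearest-neighbor pair fractions y, and compute the pair-approximation entropy per site, S/N = −Σ y ln y + Σ x ln x. Both the vowel/consonant encoding and the restriction to the simple pair approximation (rather than the full 1D CVM configuration-variable set) are our simplifying assumptions for illustration.

```python
import numpy as np
from collections import Counter

def binarize(text):
    """Encode a text string as a binary sequence: 1 for vowels,
    0 for other letters (an illustrative encoding, not the post's)."""
    return [1 if ch in "aeiou" else 0 for ch in text.lower() if ch.isalpha()]

def pair_entropy_per_site(seq):
    """Pair-approximation (Bethe) entropy per site for a 1-D chain:
    S/N = -sum_y y ln y + sum_x x ln x,
    with x the single-unit fractions and y the nearest-neighbor pair
    fractions. The pair approximation is exact for a 1-D
    nearest-neighbor system."""
    n = len(seq)
    x = Counter(seq)                        # single-unit counts
    y = Counter(zip(seq, seq[1:]))          # nearest-neighbor pair counts
    h_pair = -sum((c / (n - 1)) * np.log(c / (n - 1)) for c in y.values())
    h_site = -sum((c / n) * np.log(c / n) for c in x.values())
    return h_pair - h_site

if __name__ == "__main__":
    seq = binarize("the quick brown fox jumps over the lazy dog")
    print("sequence:", seq)
    print("entropy per site (nats):", round(pair_entropy_per_site(seq), 4))
```

For an uncorrelated sequence the pair term contributes exactly twice the site entropy, so the expression collapses to the ordinary Shannon entropy per symbol; correlations between neighboring symbols pull the value below that.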

CORTECONs: A New Class of Neural Networks

In the classic science fiction novel Do Androids Dream of Electric Sheep?, author Philip K. Dick gives us a futuristic plotline that would – even today – be more exciting and thought-provoking than many of the newly released “AI/robot as monster” movies. The key question today is: Can androids dream? This is not as far-fetched as… Continue reading CORTECONs: A New Class of Neural Networks