Generative AI is about fifty years old. There are four main kinds of generative AI (energy-based neural networks, variational inference, variational autoencoders, and transformers). There are three fundamental methods underlying all forms of generative AI: the reverse Kullback-Leibler divergence, Bayesian conditional probabilities, and statistical mechanics. Transformer-based methods add multi-head attention and positional encoding. Generative AI is not, and never can be, artificial general intelligence, or AGI. AGI requires bringing in more architectural components, such as ontologies (e.g., knowledge graphs), and a linking mechanism. Themesis has developed this linking mechanism, CORTECONs(R), for COntent-Retentive, TEmporally-CONnected neural networks. CORTECONs(R) will enable near-term AGI development. Preliminary CORTECON work, based on the cluster variation method in statistical mechanics, includes theory, architecture, code, and worked examples, all available for public access. Community participation is encouraged.
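For readers new to the first of these three methods: the reverse Kullback-Leibler divergence takes its expectation under the approximating distribution q rather than the target p, which is what makes it "reverse." A standard statement of the definition (not specific to any one architecture):

```latex
% Reverse KL divergence: the expectation is taken under the approximating
% distribution q, rather than under the target distribution p.
D_{\mathrm{KL}}(q \,\|\, p) \;=\; \sum_{x} q(x)\,\log\frac{q(x)}{p(x)}
```

Minimizing this quantity over q is the step shared by variational inference and variational autoencoders, and it connects directly to the free energy formulations discussed throughout this site.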
If You’re Teaching Yourself Generative AI: Some Resources (Book Chapters)
Teaching yourself generative AI (“gen-AI”) has to be one of the hardest things in the world. The really important, classic papers – the ones that you WISH THAT YOU COULD READ – all presuppose that you have a lot of knowledge coming in, about all SORTS of things. The situation is the same as it… Continue reading If You’re Teaching Yourself Generative AI: Some Resources (Book Chapters)
CORTECONs: AGI in 2024-2025 – R&D Plan Overview
By the end of 2024, we anticipate having a fully functional CORTECON (COntent-Retentive, TEmporally-CONnected) framework in place. This will be the core AGI (artificial general intelligence) engine. This is all very straightforward. It’s a calm, steady development – we expect it will all unfold rather smoothly. The essential AGI engine is a CORTECON. The main internal… Continue reading CORTECONs: AGI in 2024-2025 – R&D Plan Overview
CORTECONs and AGI: Reaching Latent Layer Equilibrium
The most important thing in building an AGI is the ability to repeatedly bring the latent layer to equilibrium. This is the fundamental capability that has been missing from previous neural networks. The lack of a dynamic process for continuously reaching a free energy minimum is why we have not had, until now, a robust… Continue reading CORTECONs and AGI: Reaching Latent Layer Equilibrium
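As a rough illustration of what "bringing a latent layer to equilibrium" can look like computationally, here is a minimal sketch. This is not CORTECON code; the energy model, parameter names, and update rule are all illustrative assumptions. It relaxes a ring of binary nodes toward a free energy minimum via mean-field updates:

```python
import numpy as np

def relax_latent_layer(n_nodes=64, temperature=1.0, coupling=0.5,
                       tol=1e-6, max_iters=500, seed=0):
    """Mean-field relaxation of a ring of binary nodes (illustrative only)."""
    rng = np.random.default_rng(seed)
    m = rng.uniform(0.0, 1.0, n_nodes)  # node activation probabilities
    for _ in range(max_iters):
        # local field on each node from its two ring-lattice neighbors
        field = coupling * (np.roll(m, 1) + np.roll(m, -1))
        m_new = 1.0 / (1.0 + np.exp(-field / temperature))  # mean-field update
        if np.max(np.abs(m_new - m)) < tol:  # settled: (local) free energy minimum
            return m_new
        m = m_new
    return m

activations = relax_latent_layer()
print(f"mean activation at equilibrium: {activations.mean():.3f}")
```

The loop stops when successive updates change the activations by less than a tolerance, i.e., when the layer has settled into a (local) free energy minimum; the point is the repeated relax-to-equilibrium dynamic, not this particular energy model.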
Learning Energy-Based Neural Networks
In order to read any of the classic (and important) papers on energy-based neural networks, we need to know the vocabulary and essential concepts from several disciplines. In today’s associated YouTube video, we illustrate how these different terms – and their respective disciplines – are blended together, using the Salakhutdinov and Hinton (2012) paper as a reference… Continue reading Learning Energy-Based Neural Networks
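For concreteness, one piece of that shared vocabulary is the energy function itself. The restricted Boltzmann machine (the building block of the deep Boltzmann machines in the Salakhutdinov and Hinton line of work) assigns every joint configuration of visible units v and hidden units h an energy, and the probability of a configuration follows the Boltzmann form:

```latex
% RBM energy: b and c are the visible and hidden biases, W the weight
% matrix; Z is the partition function (the sum of e^{-E} over all
% possible configurations).
E(\mathbf{v}, \mathbf{h}) = -\mathbf{b}^{\top}\mathbf{v}
  - \mathbf{c}^{\top}\mathbf{h}
  - \mathbf{v}^{\top}\mathbf{W}\mathbf{h},
\qquad
p(\mathbf{v}, \mathbf{h}) = \frac{e^{-E(\mathbf{v}, \mathbf{h})}}{Z}
```

Terms such as "partition function," "free energy," and "equilibrium" all enter the neural-network literature through this statistical-mechanics form.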
AGI Notation: Friston’s Use of “Psi”
We want to create an AGI (artificial general intelligence). If you’re reading this post, we trust that is your intention as well. We already know that AGI won’t come out of transformers. They are, in their essence, content-addressable memories. That’s what they can do; that’s ALL that they can do. Our core equation comes… Continue reading AGI Notation: Friston’s Use of “Psi”
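For orientation before reading the Friston papers: in his notation (as we read it), ψ (Psi) stands for the hidden, external states that the system models, with s denoting the sensory states. Written that way, the variational free energy takes the standard form below; this is our gloss on the notation, not a quotation from the post:

```latex
% Variational free energy: q(psi) is the recognition density over hidden
% states psi; s denotes sensory states. The second form shows why
% minimizing F drives q toward the true posterior p(psi | s).
F \;=\; \mathbb{E}_{q(\psi)}\!\left[\ln q(\psi) - \ln p(s, \psi)\right]
  \;=\; D_{\mathrm{KL}}\!\left(q(\psi)\,\|\,p(\psi \mid s)\right) \;-\; \ln p(s)
```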
Socrates, Pericles, and Aspasia: How A Muse Helped the Genesis of Western Thought
Socrates is a name that most of us know, even if only tangentially. We’ve probably all heard of the “Socratic method” of teaching. Many of us also know of Pericles, the great Athenian general who led Athens’ armies in its various wars – including those against Sparta. But Aspasia? Not as many of us have… Continue reading Socrates, Pericles, and Aspasia: How A Muse Helped the Genesis of Western Thought
Neurophysiology-Based AGI Framework in 2024
This is the blogpost associated with THIS YOUTUBE. The BEST “Readings” resources to go with this YouTube are TWO prior blogposts. Also, the prior neurophysiology-oriented YouTube has a LOT of useful content – it’s all about how the neurophysiology that we use as inspiration has evolved over the past 50+ years. So the… Continue reading Neurophysiology-Based AGI Framework in 2024
The AI Salon: AGI and Latent Variables
One of the most important things that we can do in creating AGI (artificial general intelligence) is to work through the latent variable issues that are foremost in AI and machine learning (ML) research today. We identified these in our July 10, 2023 blogpost on Latent Variables in Neural Networks and Machine Learning. We wrote… Continue reading The AI Salon: AGI and Latent Variables
The 1D CVM (Cluster Variation Method): Complete Interactive Code (Part 2)
The most important element in creating an AGI (artificial general intelligence) is a latent node layer that allows a range of neural dynamics. The most important of these dynamics will be the ability of the system to rapidly undergo a state change, from mostly “off” nodes to mostly “on.” Neurophysiologists have observed this… Continue reading The 1D CVM (Cluster Variation Method): Complete Interactive Code (Part 2)
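As a taste of what the interactive code works with, here is a minimal sketch, simplified from the full 1D CVM treatment; the function and variable names are illustrative. It measures the most basic configuration variables on a binary chain – the fraction x1 of “on” units and the nearest-neighbor pair fractions – so the mostly-off-to-mostly-on state change shows up directly as a shift in x1:

```python
import numpy as np

def cvm_fractions(chain):
    """Single-unit and nearest-neighbor pair fractions for a binary chain."""
    chain = np.asarray(chain)
    x1 = chain.mean()                              # fraction of "on" (1) units
    pairs = np.stack([chain[:-1], chain[1:]], axis=1)
    y1 = np.mean((pairs == [1, 1]).all(axis=1))    # on-on pair fraction
    y3 = np.mean((pairs == [0, 0]).all(axis=1))    # off-off pair fraction
    y2 = 1.0 - y1 - y3                             # mixed (on-off) pairs
    return x1, (y1, y2, y3)

# A chain that is mostly "off" (each unit on with probability 0.2):
mostly_off = np.random.default_rng(0).binomial(1, 0.2, 256)
x1, y = cvm_fractions(mostly_off)
print(f"x1 = {x1:.2f}, y = ({y[0]:.2f}, {y[1]:.2f}, {y[2]:.2f})")
```

The full 1D CVM treatment adds next-nearest-neighbor and triplet configuration variables, which is where the cluster-variation entropy term of the free energy comes from.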