The next era of artificial intelligence (AI), or artificial general intelligence (AGI), will rest on neurophysiology models that emphasize neuronal group dynamics rather than the behavior of single neurons. This post addresses three questions underlying the neurophysiological basis for the NEXT generation of neural network models: Neurophysiology: Important Early Works… Continue reading Next-Era AGI: Neurophysiology Basis
1-D Cluster Variation Method: Simple Text String Worked Example (Part 1)
Today, we focus on obtaining the entropy term in the 1-D cluster variation method (the 1D CVM), using a simple text string as the basis for our worked example. This blog post is in progress. Please check back tomorrow for the updated version. Thank you! – AJM Our End Goal Our end goal – the reason that… Continue reading 1-D Cluster Variation Method: Simple Text String Worked Example (Part 1)
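As a sketch of the kind of first step such a worked example might take — counting the cluster-variable fractions from which the 1D CVM entropy term is built — one could tally unit and nearest-neighbor pair fractions in a binary text string. The function names, the A/B encoding, and the wrap-around convention below are illustrative assumptions, not taken from the post:

```python
# Sketch: count unit fractions (x1, x2) and nearest-neighbor pair
# fractions (y1, y2, y3) for a binary string of A/B "units."
# These cluster-variable counts are the raw material for the
# 1D CVM entropy term. All names here are illustrative assumptions.

def unit_fractions(s):
    """Fraction of 'A' units (x1) and 'B' units (x2)."""
    n = len(s)
    x1 = s.count('A') / n
    return x1, 1.0 - x1

def pair_fractions(s):
    """Nearest-neighbor pair fractions on a wrapped (circular) string:
    y1 = A-A pairs, y3 = B-B pairs, y2 = the mixed A-B / B-A pairs."""
    n = len(s)
    pairs = [(s[i], s[(i + 1) % n]) for i in range(n)]
    y1 = sum(1 for a, b in pairs if a == b == 'A') / n
    y3 = sum(1 for a, b in pairs if a == b == 'B') / n
    return y1, 1.0 - y1 - y3, y3

x1, x2 = unit_fractions("AABABBAB")
y1, y2, y3 = pair_fractions("AABABBAB")
```

For an equiprobable string like this one, x1 = x2 = 0.5, while the pair fractions show how much the units cluster versus alternate.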
CORTECONs: Executive / Board / Investor Briefing
This briefing is the first step for those considering research and development (R&D) involving CORTECONs (COntent-Retentive, TEmporally-CONnected neural networks), which have been developed by Themesis Principal Alianna J. Maren, Ph.D. We organize this briefing using the five well-known questions for reporting: This briefing accompanies a YouTube presentation, and the link to that presentation will be… Continue reading CORTECONs: Executive / Board / Investor Briefing
Your AI Career: Positioning Yourself for Maximal Win
Two weeks ago, for the first time, I whipped out my credit card and signed up with Medium.com – all to access just a single article. For years, I’d successfully resisted that siren-call from Medium, keeping my access to the monthly minimum. But this one … was a must-read. A Bit of Backstory For those… Continue reading Your AI Career: Positioning Yourself for Maximal Win
CORTECONS: A New Class of Neural Networks
In the classic science fiction novel, Do Androids Dream of Electric Sheep?, author Philip K. Dick gives us a futuristic plotline that would – even today – be more exciting and thought-provoking than many of the newly-released “AI/robot as monster” movies. The key question today is: Can androids dream? This is not as far-fetched as… Continue reading CORTECONS: A New Class of Neural Networks
1D CVM Object Instance Attributes: wLeft Details
We illustrate how the specific values for a single object-oriented instance attribute are determined. We do this for the specific case of the “wLeft” instance attribute for the Node object in the 1-D cluster variation method (1D CVM) code. The same considerations will apply when we progress to the 2D CVM code. The specific values… Continue reading 1D CVM Object Instance Attributes: wLeft Details
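As a purely illustrative sketch of how a per-node instance attribute such as wLeft might be assigned — only the attribute name comes from the post; the class layout, helper names, and the pair-encoding convention are assumptions — a Node could record the next-nearest-neighbor (w-cluster) pair it forms with the node two positions to its left on a wrapped 1-D grid:

```python
# Illustrative sketch only: assigning a wLeft instance attribute.
# In the 1D CVM, "w" clusters are next-nearest-neighbor pairs; here
# wLeft records the pair type formed by this node and the node two
# positions to its left, wrapping around the array ends. Everything
# except the wLeft name is an assumption for illustration.

class Node:
    def __init__(self, index, state):
        self.index = index      # position in the 1-D array
        self.state = state      # unit state, e.g. 0 ('A') or 1 ('B')
        self.wLeft = None       # filled in once neighbors are known

def assign_w_left(nodes):
    """Set each node's wLeft from the node two steps to its left."""
    n = len(nodes)
    for node in nodes:
        left2 = nodes[(node.index - 2) % n]
        # Encode the pair type as (left-neighbor state, own state).
        node.wLeft = (left2.state, node.state)
    return nodes

nodes = [Node(i, s) for i, s in enumerate([0, 0, 1, 0, 1, 1])]
assign_w_left(nodes)
```

With the wrap-around convention, every node gets a well-defined wLeft, so the w-pair fractions can be tallied by a simple pass over the node list.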
1-D Cluster Variation Method (1D CVM): Code Soon!
Unboxing Grossberg’s “Conscious Mind, Resonant Brain”
Steve Grossberg’s book, Conscious Mind, Resonant Brain, summarizes fifty years of neurophysiology-inspired research and inventions in neural network architectures, most notably the Adaptive Resonance Theory (ART) network, where his colleague Gail Carpenter was first author on two very important papers. We’ve done something a little different this time … a literal “unboxing” of the physical… Continue reading Unboxing Grossberg’s “Conscious Mind, Resonant Brain”
Latent Variables in Neural Networks and Machine Learning
Latent variables are one of the most important concepts both in energy-based neural networks (the restricted Boltzmann machine and everything that descends from it) and in key natural language processing (NLP) algorithms such as LDA (latent Dirichlet allocation), all forms of transformers, and machine learning methods such as variational inference. The notion of finding… Continue reading Latent Variables in Neural Networks and Machine Learning
Key Features for a New Class of Neural Networks
A new class of neural networks will use a laterally-connected neuron layer (hidden or “latent” nodes) to enable three new kinds of temporal behavior: Memory persistence (“Holding that thought”) – neural clusters with variable slow activation degradation, allowing persistent activation after stimulus presentation, Learned temporal associations (“That reminds me …”) – neural clusters with slowly… Continue reading Key Features for a New Class of Neural Networks
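A toy sketch of the first feature, memory persistence via slow activation degradation, may help fix the idea. The exponential-decay model and the rate values below are illustrative assumptions, not the actual mechanism of the networks described in the post:

```python
# Toy sketch of "holding that thought": a node whose activation decays
# slowly after the stimulus is removed, so its cluster stays partially
# active well past stimulus offset. The decay model and rate values
# are illustrative assumptions, not the actual design.

def decay_trace(a0, rate, steps):
    """Activation over time with no further input: a[t+1] = rate * a[t]."""
    trace = [a0]
    for _ in range(steps):
        trace.append(trace[-1] * rate)
    return trace

slow = decay_trace(1.0, 0.95, 20)   # persistent: still ~0.36 after 20 steps
fast = decay_trace(1.0, 0.50, 20)   # ordinary: effectively gone
```

The contrast between the two traces is the point: a slowly degrading cluster can still influence processing many time steps after the stimulus, which is what makes learned temporal associations possible.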