CORTECONs: Executive / Board / Investor Briefing

This briefing is the first step for those considering research and development (R&D) involving CORTECONs (COntent-Retentive, TEmporally-CONnected neural networks), developed by Themesis Principal Alianna J. Maren, Ph.D. We organize this briefing using the five well-known questions for reporting. This briefing accompanies a YouTube presentation, and the link to that presentation will be… Continue reading CORTECONs: Executive / Board / Investor Briefing

CORTECONs: A New Class of Neural Networks

In the classic science fiction novel Do Androids Dream of Electric Sheep?, author Philip K. Dick gives us a futuristic plotline that would, even today, be more exciting and thought-provoking than many of the newly released “AI/robot as monster” movies. The key question today is: can androids dream? This is not as far-fetched as… Continue reading CORTECONs: A New Class of Neural Networks

1D CVM Object Instance Attributes: wLeft Details

We illustrate how the values for a single object-oriented instance attribute are determined. We do this for the specific case of the “wLeft” instance attribute of the Node object in the one-dimensional cluster variation method (1D CVM) code. The same considerations will apply when we progress to the 2D CVM code. The specific values… Continue reading 1D CVM Object Instance Attributes: wLeft Details
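To make the idea concrete, here is a minimal sketch; this is not the actual Themesis code, and the class layout, helper function, and binary-unit convention are all assumptions. It shows one way a Node object’s wLeft attribute could be assigned in a wraparound 1D grid, taking the usual CVM convention that w denotes next-nearest-neighbor pairs:

```python
# Minimal sketch (hypothetical; not the Themesis 1D CVM code itself).
# Assumes: units are binary (0 or 1), the 1D grid wraps around, and
# wLeft records the next-nearest-neighbor pair that a node forms with
# the unit two positions to its left.

class Node:
    def __init__(self, index, activation):
        self.index = index            # position in the 1D grid
        self.activation = activation  # binary unit state: 0 or 1
        self.wLeft = None             # pair with the left next-nearest neighbor

def assign_wleft(nodes):
    """Set each node's wLeft from the unit two positions to its left."""
    n = len(nodes)
    for node in nodes:
        left_nnn = nodes[(node.index - 2) % n]  # wraparound indexing
        node.wLeft = (left_nnn.activation, node.activation)

# Usage: a toy 8-unit grid
grid = [Node(i, a) for i, a in enumerate([1, 0, 1, 1, 0, 0, 1, 0])]
assign_wleft(grid)
print(grid[0].wLeft)  # pairs with grid[6] under wraparound -> (1, 1)
```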

Unboxing Grossberg’s “Conscious Mind, Resonant Brain”

Steve Grossberg’s book, Conscious Mind, Resonant Brain, summarizes fifty years of neurophysiology-inspired research and inventions in neural network architectures, most notably the Adaptive Resonance Theory (ART) network, for which his colleague Gail Carpenter was first author on two very important papers. We’ve done something a little different this time … a literal “unboxing” of the physical… Continue reading Unboxing Grossberg’s “Conscious Mind, Resonant Brain”

Latent Variables in Neural Networks and Machine Learning

Latent variables are one of the most important concepts in both energy-based neural networks (the restricted Boltzmann machine and everything that descends from it) and key natural language processing (NLP) algorithms such as LDA (latent Dirichlet allocation), all forms of transformers, and machine learning methods such as variational inference. The notion of finding… Continue reading Latent Variables in Neural Networks and Machine Learning
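As a concrete illustration, the sketch below shows the standard way a restricted Boltzmann machine infers its hidden-unit (that is, latent) activation probabilities from an observed visible vector; the toy shapes and parameter values here are illustrative assumptions:

```python
# Minimal sketch of latent-variable inference in a restricted
# Boltzmann machine (RBM). Weight matrix W and biases c are toy values.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_probabilities(v, W, c):
    """P(h_j = 1 | v) = sigmoid(c_j + sum_i v_i * W_ij).

    v : visible (observed) binary vector, shape (n_visible,)
    W : weights, shape (n_visible, n_hidden)
    c : hidden biases, shape (n_hidden,)
    The hidden units h are the latent variables: never observed
    directly, only inferred from the visible data.
    """
    return sigmoid(c + v @ W)

# Usage: 6 visible units, 3 latent (hidden) units, random toy weights
rng = np.random.default_rng(0)
v = np.array([1, 0, 1, 1, 0, 1])
W = rng.normal(scale=0.1, size=(6, 3))
c = np.zeros(3)
print(hidden_probabilities(v, W, c))  # three probabilities, one per latent unit
```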

Key Features for a New Class of Neural Networks

A new class of neural networks will use a laterally-connected neuron layer (hidden or “latent” nodes) to enable three new kinds of temporal behavior: memory persistence (“Holding that thought”) – neural clusters with variable slow activation degradation, allowing persistent activation after stimulus presentation; learned temporal associations (“That reminds me …”) – neural clusters with slowly… Continue reading Key Features for a New Class of Neural Networks
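The first of these behaviors can be illustrated with a toy decay model. The sketch below is an assumption-laden illustration, not the CORTECON update equations: a node driven briefly by a stimulus retains a slowly fading activation after the stimulus is withdrawn.

```python
# Minimal sketch of "memory persistence" via slow activation decay.
# The update rule a(t+1) = decay * a(t) + stimulus(t) and the decay
# rate below are illustrative assumptions, not the CORTECON formulation.

def simulate(decay=0.95, steps=20, stimulus_on=5):
    """Drive a node for `stimulus_on` steps, then let activation decay."""
    activation = 0.0
    trace = []
    for t in range(steps):
        stimulus = 1.0 if t < stimulus_on else 0.0
        activation = decay * activation + stimulus
        trace.append(activation)
    return trace

trace = simulate()
print(f"at stimulus offset: {trace[4]:.2f}")   # peak drive
print(f"10 steps later:     {trace[14]:.2f}")  # still well above zero
```

In a model like this, the decay rate sets the persistence timescale: values near 1.0 “hold the thought” long after the stimulus ends, while smaller values forget quickly.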

Evolution of NLP Algorithms through Latent Variables: Future of AI (Part 3 of 3)

AJM Note: ALTHOUGH NEARLY COMPLETE, references and discussion are still being added to this blogpost. This note will be removed once the blogpost is completed – anticipated over the Memorial Day weekend, 2023. Note updated 12:30 AM, Hawai’i Time, Monday, May 29, 2023. This blogpost accompanies a YouTube video on the same topic of the Evolution… Continue reading Evolution of NLP Algorithms through Latent Variables: Future of AI (Part 3 of 3)