Key Features for a New Class of Neural Networks

A new class of neural networks will use a laterally-connected neuron layer (hidden or “latent” nodes) to enable three new kinds of temporal behavior: 

  1. Memory persistence (“Holding that thought”) – neural clusters whose activation degrades slowly, at a variable rate, so that activation persists after the stimulus is removed (sketched in code after this list),
  2. Learned temporal associations (“That reminds me …”) – clusters whose activation is still slowly degrading can form links with newly-activated node clusters, creating associations grounded in temporal overlap, and
  3. Random activation foraging (“California Dreamin’ …”) – when only “noise” or partial stimulus elements are presented, the network can free-associate, moving among previously learned states and potentially creating new, stabilized activation patterns.
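
To make the first of these behaviors concrete, here is a minimal sketch (in Python/NumPy) of a laterally-connected layer whose activation degrades slowly after a brief stimulus. The layer size, decay rate, and coupling strength are all illustrative assumptions, not the actual architecture being proposed.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes = 50                # size of the latent, laterally-connected layer (assumed)
decay = 0.95                # per-step retention; closer to 1.0 means slower degradation
lateral = 0.05 * rng.standard_normal((n_nodes, n_nodes))   # weak random lateral coupling
np.fill_diagonal(lateral, 0.0)

activation = np.zeros(n_nodes)
stimulus = np.zeros(n_nodes)
stimulus[:10] = 1.0         # briefly activate one cluster of nodes

for t in range(40):
    drive = stimulus if t < 5 else np.zeros(n_nodes)        # stimulus removed after t = 5
    activation = np.clip(decay * activation + lateral @ activation + drive, 0.0, 1.0)
    if t % 10 == 0:
        print(f"t={t:2d}  mean activation of the stimulated cluster: {activation[:10].mean():.3f}")
```

With a retention factor near 1.0, the stimulated cluster stays partially active long after the stimulus is withdrawn – the “holding that thought” behavior in item 1.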

Key features for this new class of neural networks build on those identified previously, and include: 

  • Latent variables and
  • Free energy minimization (see the sketch below).
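
As one concrete reading of “free energy minimization,” the sketch below evaluates the free energy of a visible pattern in a restricted Boltzmann machine – the family of energy-based models behind the Hinton et al. reference below. The sizes and weights here are made-up illustrative values; training such a model lowers this quantity on observed data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_visible, n_hidden = 6, 4                              # toy sizes, chosen for illustration
W = 0.1 * rng.standard_normal((n_visible, n_hidden))    # visible-to-hidden weights
b = np.zeros(n_visible)                                 # visible biases
c = np.zeros(n_hidden)                                  # hidden biases

def free_energy(v):
    """F(v) = -v.b - sum_j log(1 + exp(c_j + v.W_j)); training lowers F on observed data."""
    return -v @ b - np.sum(np.logaddexp(0.0, c + v @ W))

v = rng.integers(0, 2, size=n_visible).astype(float)    # a random binary visible pattern
print("free energy of the pattern:", free_energy(v))
```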

 


References and Resources

Boltzmann Machines & Deep Learning

Hinton, Geoffrey, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath, and Brian Kingsbury. 2012. “Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups.” IEEE Signal Processing Magazine 29 (6) (Nov. 2012): 82-97. doi: 10.1109/MSP.2012.2205597. (Accessed Jan. 31, 2023; available at https://www.cs.toronto.edu/~hinton/absps/DNN-2012-proof.pdf.)


Parallel Pathways in the Brain

We mention parallel pathways in the brain in this week’s accompanying YouTube vid.

Parallel pathways have been known for some time, but work published just in the last few weeks has revealed a new pathway in the basal ganglia.

This is breaking research by Aryn Gittis of Carnegie Mellon.

We are highlighting this work because the new neural network class that we will be introducing soon is best seen as a parallel or indirect pathway, such as the kind referenced here in the discussion of basal ganglia neurons.

“Our results suggest that negative reinforcement, not motor suppression, is the primary behavioral correlate of signaling along the indirect pathway, and that the motor suppressing effects of A2A-SPN stimulation are driven through inhibitory collaterals within the striatum.”

Isett et al. 2023. See the full citation below.

Here’s a layman’s overview:

Here’s the original research article:

  • Isett, Brian R., Katrina P. Nguyen, Jenna C. Schwenk, Jeff R. Yurek, Christen N. Snyder, Maxime V. Vounnatsos, Kendra A. Adegbesan, Ugne Ziausyte, and Aryn H. Gittis. 2023. “The indirect pathway of the basal ganglia promotes transient punishment but not motor suppression.” Neuron (May 16, 2023): S0896-6273(23)00302-1. doi: 10.1016/j.neuron.2023.04.017. (Accessed May 17, 2023; available online at https://www.biorxiv.org/content/10.1101/2022.05.18.492478v1.full.)

Avalanches in Neural Systems

AJM’s Note: This is a good, recent experimental study.

  • Heiney, Kristine, Ola Huse Ramstad, Vegard Fiskum, Axel Sandvig, Ioanna Sandvig, and Stefano Nichele. 2022. “Neuronal Avalanche Dynamics and Functional Connectivity Elucidate Information Propagation in vitro.” Front. Neural Circuits 16 (15 September 2022). doi: 10.3389/fncir.2022.980631. (Accessed June 21, 2023; available online at https://doi.org/10.3389/fncir.2022.980631.)

AJM’s Note: A little older, still very good.

  • Tagliazucchi, Enzo, Pablo Balenzuela, Daniel Fraiman, and Dante R. Chialvo. 2012. “Criticality in Large-Scale Brain fMRI Dynamics Unveiled by a Novel Point Process Analysis.” Front. Physiol. 3 (08 February 2012), Sec. Fractal Physiology. doi: 10.3389/fphys.2012.00015. (Accessed June 21, 2023; available online at https://doi.org/10.3389/fphys.2012.00015.)

Avalanches in Physical Systems

AJM’s Note: These references were included in the previous blogpost; they’re included again here for convenience and completeness.

Large-scale cortical dynamics are how the brain works. It’s not just a set of “input stimulus” neurons firing and exciting others in a layered sequence. Instead, it is groups of neurons (ranging from small to large) firing as a collective whole.

This means that if we’re going to take inspiration from the brain, we need neural architectures with sufficiently large numbers of neurons (in a responsive layer), so that groups of these neurons can become active.

We should be able to control the numbers of neurons in these groups – as well as how long they are active – with a few simple parameters.

The first step is to create a model for this kind of neural collection.
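
Purely as an illustrative starting point (a generic branching-process avalanche model, not the specific architecture this series is building toward), the sketch below shows how one parameter – the branching ratio – controls both how many neurons join a co-active group and how long the group stays active:

```python
import numpy as np

rng = np.random.default_rng(2)

def avalanche(branching_ratio, max_steps=200):
    """Simulate one avalanche; return (total neurons recruited, time steps active)."""
    active, total, steps = 1, 1, 0          # one seed neuron fires
    while active > 0 and steps < max_steps:
        # each currently active neuron recruits a Poisson number of new neurons
        active = rng.poisson(branching_ratio * active)
        total += active
        steps += 1
    return total, steps

for ratio in (0.8, 1.0):                    # subcritical vs. near-critical group dynamics
    sizes, durations = zip(*(avalanche(ratio) for _ in range(2000)))
    print(f"branching ratio {ratio}: mean size {np.mean(sizes):7.1f}, "
          f"mean duration {np.mean(durations):5.1f} steps")
```

As the branching ratio approaches 1.0, group sizes and durations grow dramatically – the critical, avalanche-like regime that the references below describe.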

This very nice technical blogpost by James Sethna (one of my favorite authors) includes a lovely dynamic depiction of avalanches in a 3-D system. Fun watching!

AJM’s Note: A full-scale article on the same, with Sethna as the last author.

  • Kuntz, Matthew C., Olga Perkovic, Karin A. Dahmen, Bruce W. Roberts, and James P. Sethna. 1998. “Hysteresis, Avalanches, and Noise: Numerical Methods.” arXiv:cond-mat/9809122v2 [cond-mat.dis-nn] (23 April 1998). doi:10.48550/arxiv.cond-mat/9809122v2. (Accessed June 21, 2023; available online at https://arxiv.org/pdf/cond-mat/9809122.pdf.)

AJM’s Note: This article describes avalanches and hysteresis as part of a first-order phase transition in a model of a neural system. Abstract only here.

  • Scarpetta, Silvia, Ilenia Apicella, Ludovico Minati, and Antonio de Candia. 2018. “Hysteresis, Neural Avalanches, and Critical Behavior Near a First-Order Transition of a Spiking Neural Network.” Phys. Rev. E. 97(6-1):062305 (June 2018). PMID: 30011436. doi: 10.1103/PhysRevE.97.062305. (Accessed June 21, 2023; Abstract available online at https://pubmed.ncbi.nlm.nih.gov/30011436/.)


California Dreamin’ – The Mamas and the Papas
