1-D Cluster Variation Method (1D CVM): Code Soon!

The basis for the new class of neural networks, or CORTECONs, is a latent variable layer where:

  • There are a (relatively speaking) large number of nodes, and
  • The activations of the nodes in this layer can be governed by BOTH inputs from an input (or visible) layer AND ALSO a free energy minimization process across this lateral layer.

We will say that free energy minimization across this lateral latent layer makes this a dynamic neural network, as compared with a generative neural network. (Yes, we know that the term “dynamic” has been co-opted for a form of learning over time, but we believe it is more appropriately used here.)

What makes free energy minimization across the lateral layer possible is that we use an Ising equation with a somewhat more complex entropy term. This entropy formulation was first introduced by Kikuchi (1951) and further advanced by Kikuchi and Brush (1967). This method is called the cluster variation method.
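For the simplest case – a 1-D chain with nearest-neighbor interactions only – Kikuchi’s cluster expansion reduces to the familiar pair (Bethe) entropy approximation, and the reduced free energy can be sketched in a few lines of Python. This is purely our illustrative sketch (the function names and the choice of an “unlike-pair” interaction energy are our assumptions, not the Themesis code); the actual 1-D CVM zigzag chain also tracks next-nearest-neighbor and triplet configuration variables.

```python
from collections import Counter
from math import log

def lf(v):
    # Lf(v) = v ln v, with the convention Lf(0) = 0
    return v * log(v) if v > 0 else 0.0

def free_energy(chain, eps=0.5):
    """Reduced free energy F/(N kT) for a binary chain, using the
    pair (Bethe) entropy approximation: -S/(N k) = sum_y Lf(y) - sum_x Lf(x),
    where x are single-node fractions and y are nearest-neighbor
    pair fractions.  The enthalpy term charges eps per unlike pair."""
    n, npairs = len(chain), len(chain) - 1
    x = Counter(chain)                      # single-node counts
    y = Counter(zip(chain, chain[1:]))      # nearest-neighbor pair counts
    h = eps * sum(c for (a, b), c in y.items() if a != b) / npairs
    neg_s = sum(lf(c / npairs) for c in y.values()) \
          - sum(lf(c / n) for c in x.values())
    return h + neg_s                        # F = H - TS, in reduced units

# A fully uniform chain sits at zero reduced free energy here,
# while an alternating chain pays an enthalpy cost per unlike pair:
print(free_energy([0] * 12))       # 0.0
print(free_energy([0, 1] * 6))     # > 0
```

The point of the sketch is only that the free energy is a function of the grid configuration, so different arrangements of the same nodes have different free energies.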

Figure 1. Illustration of a 1-D (single zigzag chain) CVM (cluster variation method) grid. Bottom rows: before two nodes were swapped (the user chooses two nodes with different activations). Top rows: after the two nodes are swapped. Interactive code will be available soon. (Currently being debugged.)

The current code in the GitHub repository (“simple-1D-CVM-w-Python-turtle-1pt3pt6-…”; see GitHub link in Resources and References, below) does not allow the interactive swap – it just shows the bottom two rows (a single zigzag chain).

ALSO: IMPORTANT DISCLAIMER! – We found a bug. Just last night. We’re not pulling the GitHub link, just letting you know – this code, even though it’s the simplest-of-the-simple (a single zigzag chain), is not yet bug-free. We hope to get that bug fixed before the weekend. When we do, we’ll delete the old code from the repository, upload the new code, and put an update note into this blogpost, so you’ll know that the current version is there.

BUT – we probably won’t send out another email blast until NEXT Thursday, so you’ll find out then what the current code status is – we hope more evolved – but you’ll know for sure next week.

Work on the interactive code, allowing you (the user) to choose two nodes to swap, is underway. (This code uses Python turtle graphics to draw the rows of boxes, and the graphics results are inserted into the MS PPTX™ slide shown above.)

The real goal, of course, is not just to swap two nodes and see a different pretty picture.

The REAL goal is to see what the 1D CVM grid (the single zigzag chain) has as its free energy, both before and after the node-swap.

This will be where the fun starts!

And presumably, after all the code has been worked out, you (the user) will be able to conduct little one-step experiments, and see that the existing grid is already at free energy equilibrium, meaning that the free energy for this grid is at its lowest level. Any change to the grid (swapping nodes) results in increased free energy, meaning that the grid is no longer at its equilibrium point.
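That before-and-after comparison can be caricatured in a few lines of Python. Here we use a deliberately crude stand-in for the free energy – just the count of unlike nearest-neighbor pairs – so `unlike_pairs` and `swap_and_compare` are hypothetical names, and the real experiment would use the full Kikuchi expression:

```python
def unlike_pairs(chain):
    # Crude stand-in for the CVM free energy: the number of unlike
    # nearest-neighbor pairs.  More unlike pairs = "higher energy."
    return sum(a != b for a, b in zip(chain, chain[1:]))

def swap_and_compare(chain, i, j):
    """Swap the nodes at positions i and j and report the proxy
    free energy before and after the swap."""
    before = unlike_pairs(chain)
    swapped = list(chain)
    swapped[i], swapped[j] = swapped[j], swapped[i]
    return before, unlike_pairs(swapped)

# Swapping a node out of a well-ordered chain raises the proxy energy:
print(swap_and_compare([1, 1, 1, 0, 0, 0], 0, 5))   # (1, 3)
```

If the original grid really is at its free energy minimum, every such swap should report a higher value after than before – which is exactly the one-step experiment described above.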

From here, we can move on to the real questions, which are:

  • How do we create grids that are at free energy minima, when we have defined parameter values? And conversely,
  • When we have a pre-existing grid, what are the parameter values that most closely express the free energy for that grid, and can we modify that grid to bring it to a lower free energy, once we’ve selected our candidate parameters?
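The second question – nudging a pre-existing grid toward lower free energy – can be sketched as a greedy swap descent: propose random node swaps, and keep only those that do not raise the (proxy) free energy. Everything here, including the `energy` proxy (a simple unlike-pair count) and the acceptance rule, is an illustrative assumption on our part, not the Themesis algorithm:

```python
import random

def energy(chain):
    # Proxy for the CVM free energy: unlike nearest-neighbor pairs
    # (assumes "like" neighbors are energetically favored).
    return sum(a != b for a, b in zip(chain, chain[1:]))

def greedy_minimize(chain, n_trials=500, seed=0):
    """Propose random node swaps; accept a swap only if it does not
    increase the proxy energy.  Swaps conserve the number of active
    nodes, mirroring the node-swap experiments described above."""
    rng = random.Random(seed)
    chain, e = list(chain), energy(chain)
    for _ in range(n_trials):
        i, j = rng.randrange(len(chain)), rng.randrange(len(chain))
        chain[i], chain[j] = chain[j], chain[i]
        e_new = energy(chain)
        if e_new <= e:
            e = e_new                                  # keep the swap
        else:
            chain[i], chain[j] = chain[j], chain[i]    # undo it
    return chain, e

start = [0, 1] * 5          # maximally disordered: proxy energy 9
final, e = greedy_minimize(start)
print(final, e)             # ends at a lower proxy energy than 9
```

Adding an Ising-style temperature to the acceptance rule would turn this greedy descent into Metropolis-style sampling; the version here only ever moves downhill (or sideways).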

These are all questions that pertain to creating dynamic behavior in the lateral CVM grid.

More yet to come.

We will refer back to this single blogpost for the next few weeks, updating it over time, as we complete this code iteration and publish a corresponding YouTube video.


“Live free or die,” my friend!*

* “Live free or die” – attributed to U.S. Revolutionary War General John Stark. https://en.wikipedia.org/wiki/Live_Free_or_Die

Alianna J. Maren, Ph.D.

Founder and Chief Scientist

Themesis, Inc.


PS – Why/How This Is Important

It’s not just that we’re creating a new neural network class (psst! – the name is “CORTECONs” – for COntext-REtentive TEmporally-CONnected network) – BUT … we ALSO want to create a substrate that can potentially exhibit the same kind of neuronal avalanche behavior, and other critical-state behavior, that we see in real brains. Super-exciting!

See some really good neurophys references at the end of the blogpost, beyond the basic Resources and References section.



Resources and References

Themesis GitHub Repository

AJM’s Note: This is new. This is the first time that we are pointing you to a code repository – with the intention that this will not just be code, but also PPTs (MS-PPTX™) of how the code for computing local variables is based on the CVM (cluster variation method) grid architecture, code walkthroughs, worked examples, etc.

AJM’s Further Note: BUGGY CODE! Examine/use at your own risk! (To be fixed ASAP.) This code is the baseline for the interactive code, which is still (as of this writing) being extended and debugged. HOWEVER, it’s got a fun little Python turtle graphics illustration of the 1-D CVM baseline that we’ll be using. The specific code is: simple-1D-CVM-w-Python-turtle-1pt3pt6-…

Prior Related Blogposts

AJM’s Note: These blogposts were written ages ago – under AJM’s personal blogsite. We’ll be creating more materials soon.

The Single Most Useful Research Paper

1-D Cluster Variation Method: Computational Result

  • Maren, Alianna J. 2016. “The Cluster Variation Method: A Primer for Neuroscientists.” Brain Sci. 6(4), 44, https://doi.org/10.3390/brainsci6040044; online access, pdf; accessed 2018/09/19.

Two Important CVM Origination Papers

  • Kikuchi, Ryoichi. 1951. “A Theory of Cooperative Phenomena.” Phys. Rev. 81, 988-1003; pdf, accessed 2018/09/17.
  • Kikuchi, Ryoichi, and Stephen G. Brush. 1967. “Improvement of the Cluster-Variation Method.” J. Chem. Phys. 47(1), 195; online – for purchase through the American Inst. of Physics ($30.00 for non-members).

The Weekly Sidebar: Neuronal Avalanches

A primary impetus for CORTECONs is that we need to create the next tier of computational neural-based models. The fifty years of early, or first-generation, neural networks (1974-2023) rely on the behavior of individual “computational neurons,” loosely based on ideas put forth by McCulloch and Pitts (the single-neuron model, 1943) and Rosenblatt (the Perceptron, 1957).

Now, the interesting behavior is not based on whether or not a single neuron is active.

Instead, the much more interesting behavior is found in groups of neurons that respond dynamically, as neuronal avalanches.

Much of the recent work suggests that neuronal groups operate in a near-critical region in the brain – meaning that they are poised to engage in “avalanche”-type behavior.

One recent (and very interesting) study, by Bellay et al., investigates whether a single neuron can selectively participate in a neuronal avalanche. Their work combines both in vivo and in vitro studies.

We found that single neurons participate selectively in specific LFP-based avalanche patterns. Furthermore, we show in vitro that manipulating the balance of excitation and inhibition abolishes this selectivity. Our results support the view that avalanches represent the selective, scale-invariant formation of neuronal groups in line with the idea of Hebbian cell assemblies underlying cortical information processing.

Bellay et al. See full citation below.
  • Bellay, Timothy, Woodrow L. Shew, Shan Yu, Jessica J. Falco-Walter, and Dietmar Plenz. 2020. “Selective Participation of Single Cortical Neurons in Neuronal Avalanches.” bioRxiv (October 22, 2020). doi: https://doi.org/10.1101/2020.10.21.349340. (Accessed Aug. 2, 2023; available online at https://www.biorxiv.org/content/10.1101/2020.10.21.349340v1.full.) Now published in Frontiers in Neural Circuits doi: 10.3389/fncir.2020.620052

This 2019 article is also useful; it gives a good review of the important literature and clarifies different kinds of divergent brain behaviors.

Relevant Historical Papers
