AGI Wars: 2024 Nobel Prize Award: Part 1 – Physics

Of the five 2024 Nobel Prize awards made in physics and chemistry, we can reasonably say that three of the five went to Google employees (Demis Hassabis and John Jumper) or a former employee (Geoffrey Hinton). Of the two remaining awards, one went to an academic researcher (David Baker) who collaborated with a Google employee, and the other to a researcher (John Hopfield) whose work was a necessary predecessor to that of the former Google employee (Hinton).

Since it takes a little time to go through the Nobel Prize selection process, and since committee negotiations – for anything, with any committee – are always delicate, we can reasonably assume that these names were not brought before the Nobel Committee for the first time during the 2024 Prize nominations.

Not saying anything snarky.

And not disagreeing with the awards – very nice, and a long time coming!

But let’s take a little look. We can’t quite get under the sheets, but we can certainly see who the players are and what connections can be made.


The Nobel Prize Nomination and Selection Process

The Nobel Prizes in Physics and Chemistry are decided by members of the Royal Swedish Academy of Sciences, in a simple majority vote. However, long before the voting takes place, the members of the Physics and Chemistry Committees separately work through an eight-month cycle, which officially begins after nominations close at the end of January of that year.

The real work of winnowing down the final Prize recipients is done in committee – as such work is always done. The Physics and Chemistry Committees each consist of five members who are appointed for three-year terms (with a limit of three consecutive terms), as well as additional persons who may be “co-opted” onto a given committee – meaning, consulted for their special expertise.

As presented in the Kungl. Vetenskapsakademien (Royal Swedish Academy of Sciences) Nobel Prize process description, “The Nobel committees compile reports and write an extensive statement, resulting in their proposal for that year’s prize. The material is presented to the appropriate class, physics or chemistry, and discussed on several occasions.”

Then, “After the position of the classes is clarified, the committees present the proposals to the members of the Academy at a meeting in early October. All Swedish members (and foreign members who are resident in Sweden) present at the meeting may vote on the proposals. The final decision about this year’s Nobel Prizes is then made.”

So for all practical purposes, the determination of who gets the Prize each year is made by the respective Nobel Committee – specifically, the Physics and the Chemistry Committees. The voting by the Royal Swedish Academy at large, we trust, is a formality. Any influencing deemed necessary will likely have been conducted between June (when the Committees start writing up their reports) and September (when the “classes” – the separate physics and chemistry groups – discuss their respective Committee’s proposals).

The invitations for nominating a person or persons to receive the Nobel Prize are sent out each September, for the award to be announced in October of the following year. Thus, the nominations of Hopfield and Hinton (Physics) and of Hassabis, Jumper, and Baker (Chemistry) would likely have been made in late 2023, or by the end-of-January 2024 deadline.


2024 Nobel Prize in Physics: Geoffrey Hinton and John Hopfield

The 2024 Nobel Prize in Physics was shared by two recipients, John Hopfield and Geoffrey Hinton.

  • John Hopfield: Inventor of the Hopfield neural network, which I prefer calling the Little-Hopfield neural network, as Hopfield advanced William Little’s earlier work a nudge, and studied the memory capacity of this network, and
  • Geoffrey Hinton: Inventor of the Boltzmann machine (with David Ackley and Terrence Sejnowski), who also furthered substantial work on the (restricted) Boltzmann machine, leading to the contrastive divergence algorithm, which in turn led to deep learning methods.

For all practical purposes, generative AI began with Hinton’s invention of the Boltzmann machine.


Hopfield’s Invention: The (Little-)Hopfield Neural Network

The Hopfield neural network – an auto-associative network (meaning that it can fill in a partial or noisy pattern) – had a very limited memory capacity.

Hopfield’s work extended earlier work by William Little, who first described this kind of energy-based neural network. We call this an “energy-based” neural network because it is characterized by an equation from statistical mechanics: an “energy equation,” or a “free energy equation.”

Maren, Alianna J. 2022. “How to Identify the First Energy-Based Neural Network.” Themesis, Inc. YouTube Channel (Oct. 6, 2022). (Accessed Oct. 12, 2024; available at Themesis YouTube Channel.)
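
For concreteness, here is the standard form of such an energy equation (this is the textbook convention, not Little’s or Hopfield’s original notation): for binary node states $s_i = \pm 1$ and symmetric connection weights $w_{ij}$,

$$E = -\frac{1}{2}\sum_{i \neq j} w_{ij}\, s_i\, s_j$$

Each node update can only hold the energy constant or lower it, so the network slides “downhill” into a local energy minimum, and the weights are chosen so that the stored patterns sit at those minima.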

Essentially, the number of distinct patterns that it could reliably “remember” was limited to about 15% of the total number of nodes (Hopfield’s own estimate was roughly 0.15N). So, for example, a 100-node Hopfield neural network could reliably be trained to learn about fifteen patterns.

Maren, Alianna J. 2023. “The Ising Equation Lets You Understand All Energy-Based Neural Networks.” Themesis, Inc. YouTube Channel (Jan. 23, 2023). (Accessed Oct. 12, 2024; available at Themesis YouTube Channel.)
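
To make the storage rule and the “fill in a noisy pattern” behavior concrete, here is a minimal Hopfield-network sketch in Python, using the standard Hebbian outer-product storage rule (the code and names here are my own illustration, not code from Hopfield’s paper):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule; zero diagonal means no self-connections."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / n

def recall(W, state, sweeps=10):
    """Asynchronous sign updates; each update lowers (or holds) the energy."""
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one +/-1 pattern in a ten-node network, then recover it from a noisy cue.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[:3] *= -1                                   # corrupt three of the ten bits
print(np.array_equal(recall(W, noisy), pattern))  # expect: True
```

The auto-associative behavior is exactly the “fill in a partial or noisy pattern” property described above: the corrupted cue settles back into the stored pattern.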

Hinton’s (First) Invention: The Boltzmann Machine

Hinton created the innovation that burst through this learning bottleneck by inserting latent variables into the network. Essentially, these “latent nodes” learned generalized features that characterized distinct patterns. This meant that a pattern could be “learned” when the neural network was trained to identify “features”: combinations of certain nodes that would be “on” for a specific kind of pattern.

This was a massive breakthrough.

Maren, Alianna J. 2023. “The Boltzmann Machine – The Most Important Energy-Based Neural Network.” Themesis, Inc. YouTube Channel (Jan. 30, 2023). (Accessed Oct. 12, 2024; available at Themesis YouTube Channel.)
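
In outline (again, textbook notation rather than the original paper’s): the Boltzmann machine keeps the same kind of energy equation as the Hopfield network, but splits the nodes into visible units $v_i$ and latent (“hidden”) units $h_j$, and makes the node updates stochastic. For the restricted variant, with no connections within a layer, the energy is

$$E(\mathbf{v}, \mathbf{h}) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_{i,j} v_i\, w_{ij}\, h_j$$

where the $a_i$ and $b_j$ are bias terms and $w_{ij}$ connects visible unit $i$ to hidden unit $j$.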

The original notion of using latent nodes to learn pattern features was not due to Hinton. That original invention – of inserting a “latent variable layer” or “hidden layer” into a Multilayer Perceptron (MLP) neural network – was due to Paul Werbos, who invented the backpropagation method for use in neural networks.

There was an obscure trail of researchers who may (or may not) have been in communication with each other, leading to Rumelhart, Hinton, and Williams publishing an account of backpropagation in Nature in 1986. This work was included in the massive two-volume compendium, Parallel Distributed Processing, edited by Rumelhart and McClelland in 1986. Publication of this compendium – which brought together neural network studies for the first time under a single “roof” – together with the demonstrable success of the MLP and Boltzmann machine neural networks – led to the first explosion of interest in neural networks.
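
As a small illustration of why that hidden layer matters, here is a minimal two-layer MLP trained with backpropagation on XOR, a problem that no network without a hidden layer can solve (illustrative Python; the layer sizes and names are my own choices for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR is not linearly separable, so a hidden ("latent") layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)       # hidden-layer ("latent node") activations
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule applied layer by layer (squared-error loss).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

h = sigmoid(X @ W1 + b1)
print(sigmoid(h @ W2 + b2).round(2).ravel())  # should approach [0, 1, 1, 0]
```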

The important thing is: latent nodes. (Or latent variables, depending on your point of view.)

Latent nodes have been the key to making neural networks “work.” They’ve also been essential in other realms of machine learning, such as natural language processing.

Hinton’s great insight was to introduce latent nodes into a neural network that had limited use – and thus create a network that had a much more powerful capability. This resultant neural network was so effective because it captured the essential “features” or characteristic “small patterns” that, taken in combination, could identify a kind of larger pattern – a task that we call “classification.”

Maren, Alianna J. 2024. “Generative AI: Three Key Math Components.” Themesis, Inc. YouTube Channel (Feb. 9, 2024). (Accessed Oct. 12, 2024; available at Themesis YouTube Channel.) This YouTube shows how latent nodes (as Bayesian “conditional dependencies”) fit in as one of three major components of all generative AI systems.
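
Training a network with latent nodes is where Hinton’s contrastive divergence (CD) algorithm, mentioned above, comes in: it approximates an intractable gradient by contrasting data-driven statistics with one-step reconstructions. A minimal CD-1 sketch for a restricted Boltzmann machine might look like this (illustrative Python under the standard formulation; all names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny RBM: six visible pattern bits, three latent feature detectors.
n_vis, n_hid = 6, 3
W = rng.normal(0, 0.1, size=(n_vis, n_hid))
a = np.zeros(n_vis)   # visible biases
b = np.zeros(n_hid)   # hidden biases

def cd1_step(v0, lr=0.1):
    """One CD-1 update: data statistics minus one-step reconstruction statistics."""
    global W, a, b
    p_h0 = sigmoid(v0 @ W + b)                       # positive (data-driven) phase
    h0 = (rng.random(n_hid) < p_h0).astype(float)    # sample hidden states
    p_v1 = sigmoid(h0 @ W.T + a)                     # reconstruct the visibles
    v1 = (rng.random(n_vis) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b)                       # negative (reconstruction) phase
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    a += lr * (v0 - v1)
    b += lr * (p_h0 - p_h1)

# Train on two toy binary patterns; the hidden units learn their features.
patterns = np.array([[1, 1, 1, 0, 0, 0],
                     [0, 0, 0, 1, 1, 1]], dtype=float)
for _ in range(2000):
    for v in patterns:
        cd1_step(v)
```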

Hinton has done much more since this original invention; his career has cumulatively earned him the honor of being considered a “godfather of AI.”

Very specifically: the thing that we want to remember is that generative AI happens when we introduce latent nodes, or latent variables, as the case may be.


But Is It Physics?

What we’re really asking here is: is the Boltzmann machine really physics-based?

We focus on the Boltzmann machine because it is the network that proved practically useful, although the (Little-)Hopfield neural network was a necessary predecessor that established the basic concept.

And the answer is, in short, yes.

The Boltzmann machine rests on using statistical mechanics as a metaphor.
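
Specifically, the metaphor is the Boltzmann (Gibbs) distribution: in statistical mechanics, a system in thermal equilibrium occupies a state $\mathbf{s}$ with energy $E(\mathbf{s})$ with probability

$$P(\mathbf{s}) = \frac{e^{-E(\mathbf{s})/T}}{Z}, \qquad Z = \sum_{\mathbf{s}'} e^{-E(\mathbf{s}')/T}$$

where $T$ plays the role of a temperature and $Z$ is the partition function. The Boltzmann machine borrows this distribution wholesale: it defines which network states, and hence which patterns, the network treats as probable.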

Maren, Alianna J. 2021. “Statistical Mechanics of Neural Networks: The Donner Pass of Artificial Intelligence.” Themesis, Inc. YouTube Channel (Sept. 15, 2021). (Accessed Oct. 12, 2024; available at Themesis YouTube Channel.) This YouTube illustrates how generative AI, involving several disciplines (the Reverse Kullback-Leibler divergence from information theory, Bayesian conditional probabilities for expressing latent variable dependencies, and statistical mechanics) all “converge” to form the “Donner Pass of Artificial Intelligence.”

The Nobel Committee for Physics and Generative AI

Whenever there is a competitive process, and something is selected and pushed forward by a committee, there has to be at least one (or sometimes more) champion(s) for that person or idea.

Just as a matter of curiosity, let’s look at who comprises the Nobel Committee for Physics. There are five Committee members, and three “co-opted” members.

Of these, a couple stand out.

Ellen Moons, Professor of Physics at Karlstad University and Chair of the Nobel Committee for Physics, described Hopfield’s and Hinton’s work by noting that they “used fundamental concepts from statistical physics to design artificial neural networks that function as associative memories and find patterns in large data sets.”

Moons may have been particularly attuned to the significance of Hinton’s work, as she held a position at Cambridge University and also worked at Cambridge Display Technology during her early years as a scientist, while Hinton himself is a Cambridge University alumnus. This is not to suggest any influence – just to note that sometimes we are more aware of work done by those with whom we have some proximity, in any sense of that word.

Göran Johansson, Professor of Applied Quantum Physics at Chalmers University of Technology, is likely to have understood and appreciated Hopfield’s and Hinton’s work from the perspective of quantum computing. He writes on his Chalmers University page: “A working quantum computer can find detailed properties of large biological molecules, which can lead to new medicines and other novel medical treatments. A quantum computer could also enhance artificial intelligence and gives us qualitatively new possibilities to find structures in large set of data and find better solutions to optimization problems, such as traffic control to avoid congestions.”

Thus, one person – at least – on the Nobel Physics Committee would have had a reasonably strong understanding of not just the physics used in Hopfield’s and Hinton’s works, but their impact on AI and society.


2024 Nobel Prize in Chemistry: Demis Hassabis, John Jumper, and David Baker

The 2024 Nobel Prize in Chemistry went to Google (DeepMind) researchers Demis Hassabis and John Jumper, and to their academic collaborator, David Baker. At first, this seemed a little surprising.

My initial thought (not knowing that this was for AlphaFold, not any of the other works that have come out of Google’s DeepMind) was: Reinforcement learning isn’t chemistry. What’s going on?

Well, I was wrong on a couple of counts.

AlphaFold does NOT use reinforcement learning.

And AlphaFold’s capabilities are indeed a substantive AI breakthrough.

So let’s dive in.

(Story to be continued in the next blogpost, the second in this series. Saturday, Oct. 12, 2024, 5 AM Hawai’i time.)
