This software was largely created by AI Vibe Coding
Created by YouMinds
The 2024 Nobel Prize in Physics was awarded to John J. Hopfield and Geoffrey Hinton for their foundational
work in artificial neural networks, which includes Hopfield networks and Boltzmann machines. Their
contributions have been instrumental in advancing machine learning and AI.
This software demonstrates an energy-based neural network that is similar in function to Hopfield and
Boltzmann networks.
What is this
Neurons: Small circles that represent individual units in the network.
The active (on) or inactive (off) state of each neuron affects the overall pattern the network can
recognize or generate.
Connections: Lines connecting the neurons, showing the relationships (weights) between them.
Every neuron is connected to all other neurons.
Weights show the strength of connections between neurons, influencing how information flows through
the network.
Energy Chart: A graph below the network showing the current energy level of the system.
The Energy Level indicates the stability of the network. Lower energy means a more stable and
reliable pattern.
Click on neurons to turn them on or off, simulating their activity and setting a pattern.
Press Start Learning to train the network to recognize the pattern. The energy level decreases.
Press Stop Learning, then click neurons to change the pattern and observe the energy level increase again.
Press Start Readout to see the network's output, showing how it
recalls or generates patterns based on the learned data.
Repeat the process with other patterns and test how many different
patterns the network can store and reconstruct.
Press Add/Remove Neuron to customize your network by adding or
removing neurons, changing the structure and complexity of the network.
What are Hopfield and Boltzmann networks anyway
Unlike feed-forward networks, Hopfield networks and Boltzmann machines are recurrent, meaning their neurons
form loops and use each other's outputs as inputs.
Recurrence helps these networks refine patterns, continuously improving their guess until they find the
best solution. They recognize complex patterns by looping through the data multiple times.
The value of a neuron can be calculated using a simple summation formula based on the input values and
weights as follows:
output = σ(weight1 × input1 + weight2 × input2 + ... + weightN × inputN)
where the activation function σ applies a threshold value to decide whether the neuron is on or off. It's
that simple.
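To make that concrete, here is a minimal JavaScript sketch of the update rule above (the function and variable names are illustrative, not taken from this app's source code):

```javascript
// Minimal sketch of the update rule: weighted sum plus threshold activation.
function updateNeuron(weights, inputs, threshold = 0) {
  // Weighted sum of all inputs feeding this neuron
  let sum = 0;
  for (let i = 0; i < inputs.length; i++) {
    sum += weights[i] * inputs[i];
  }
  // σ as a hard threshold: on (+1) if the sum reaches it, off (-1) otherwise
  return sum >= threshold ? 1 : -1;
}

// Example: 0.5·1 + (-1)·(-1) + 0.25·1 = 1.75 ≥ 0, so the neuron turns on
console.log(updateNeuron([0.5, -1.0, 0.25], [1, -1, 1])); // -> 1
```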
Both networks are energy-based. Imagine you have a hilly landscape, where the lowest points (valleys)
represent the most stable and desirable states. These valleys are what the networks are trying to find. In
both Hopfield and Boltzmann networks, "energy" is a way to describe the stability of different states or
patterns.
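In a Hopfield network this energy has a standard definition: E = -1/2 · Σi Σj wij · si · sj, where si is the on/off (+1/-1) state of neuron i and wij is the weight between neurons i and j. A minimal JavaScript sketch (names are illustrative; the app's own code may differ):

```javascript
// Standard Hopfield energy: E = -1/2 * sum_i sum_j w[i][j] * s[i] * s[j]
function networkEnergy(weights, states) {
  let total = 0;
  for (let i = 0; i < states.length; i++) {
    for (let j = 0; j < states.length; j++) {
      total += weights[i][j] * states[i] * states[j];
    }
  }
  return -0.5 * total; // lower energy = more stable pattern (a deeper valley)
}
```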
Think of a Hopfield network or Boltzmann machine as a group of gas particles in a box. Just like gas
particles move around randomly to find an even distribution, the neurons in a Boltzmann machine switch on
and off randomly to find the best pattern. Both systems use randomness to explore different states and find
the most stable, lowest-energy configuration.
Learn how to find valleys in machine learning here.
Boltzmann machines are named after Ludwig Boltzmann because they use the principles of energy distribution
and minimization that he studied in gas particles. This connection helps explain how the machines work to
find optimal solutions by exploring different energy states.
Random Exploration: Just like gas particles randomly move and settle into lower energy states, the
neurons in a Boltzmann machine randomly change states to find the most stable configuration.
Energy Minimization: Both systems aim to minimize energy, finding the most probable state (lowest
energy) over time.
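This random exploration can be sketched in a few lines of JavaScript, assuming ±1 neuron states and a temperature parameter T (names and the exact form are illustrative; this is the textbook Boltzmann update, not necessarily this app's code):

```javascript
// Stochastic (Boltzmann-style) update: instead of a hard threshold,
// neuron i turns on with a probability that depends on how much turning
// on would lower the energy, softened by a temperature T.
// High T = wild random exploration; low T = settle into the nearest valley.
function stochasticUpdate(weights, states, i, temperature) {
  let sum = 0;
  for (let j = 0; j < states.length; j++) {
    if (j !== i) sum += weights[i][j] * states[j];
  }
  const deltaE = 2 * sum; // energy drop if neuron i turns on (+1)
  const pOn = 1 / (1 + Math.exp(-deltaE / temperature)); // sigmoid
  return Math.random() < pOn ? 1 : -1;
}
```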
Imagine a Hopfield network as a big, interconnected web, similar to a spider web. Each point where the
threads meet is like a neuron, and the threads themselves are connections. These networks are used mainly
for memory storage. Think of them as a chalkboard where you can write, erase, and rewrite patterns
(memories).
Here's the cool part: if you show this network part of a pattern, it can recall the whole thing. It's like
if you see just a corner of a puzzle piece, and you know what the whole picture is. This is useful for
recognizing patterns even if they're incomplete or noisy.
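Here is a minimal JavaScript sketch of that recall-from-a-fragment behavior, using the textbook Hebbian rule to store patterns (the app's own implementation may differ):

```javascript
// Store patterns with the Hebbian rule, then recall a full pattern
// from a corrupted cue. States are ±1.
function trainHopfield(patterns, n) {
  const w = Array.from({ length: n }, () => new Array(n).fill(0));
  for (const p of patterns) {
    for (let i = 0; i < n; i++) {
      for (let j = 0; j < n; j++) {
        if (i !== j) w[i][j] += (p[i] * p[j]) / patterns.length;
      }
    }
  }
  return w;
}

function recall(w, cue, maxSteps = 100) {
  const s = [...cue];
  for (let step = 0; step < maxSteps; step++) {
    let changed = false;
    for (let i = 0; i < s.length; i++) {
      let sum = 0;
      for (let j = 0; j < s.length; j++) sum += w[i][j] * s[j];
      const next = sum >= 0 ? 1 : -1;
      if (next !== s[i]) { s[i] = next; changed = true; }
    }
    if (!changed) break; // stable state reached: energy can't drop further
  }
  return s;
}

// Store one 5-neuron pattern, then recall it from a cue with one flipped bit
const w = trainHopfield([[1, -1, 1, -1, 1]], 5);
console.log(recall(w, [1, -1, -1, -1, 1])); // -> [1, -1, 1, -1, 1]
```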
Boltzmann Machines
Now, Boltzmann machines are a bit like Hopfield networks but with a twist. Imagine a group of friends who
gossip to figure out a mystery. Each friend represents a neuron, and they share information until they all
agree on the most likely solution. They "gossip" by turning on and off randomly, but over time, they settle
into a pattern that solves the problem.
Boltzmann machines are used for learning and optimizing. They can find hidden patterns in data and make
decisions based on those patterns. It's like having a super detective that can solve puzzles by trial and
error until it figures out the best solution.
In summary:
Hopfield Networks: Like a chalkboard for patterns, recognizing whole patterns from partial inputs.
Boltzmann Machines: Like a group of friends gossiping to solve a mystery, finding hidden patterns
through trial and error.
Use case: Netflix Recommendation System
Netflix uses a recommendation system to suggest movies and shows you might like based on your viewing
history and preferences. This system helps you discover new content you might enjoy.
To make these recommendations, Netflix needs to handle a massive amount of data—thousands of movies and
millions of users. This is where dimension reduction comes in. It simplifies the data by focusing on the
most important features.
Dimension Reduction with Boltzmann Machines
Data as a Big Puzzle: Imagine all the movies and user ratings as a huge, complex puzzle.
Boltzmann Machine as a Smart Organizer: The Boltzmann machine acts like a smart organizer that looks
at the puzzle and figures out which pieces (features) are most important.
Simplifying the Puzzle: By focusing on the key pieces, the machine reduces the complexity of the
data, making it easier to work with.
Making Recommendations: With the simplified data, the system can quickly and accurately suggest
movies and shows that match your preferences.
In essence, Boltzmann machines help Netflix by reducing the amount of data they need to process, making it
easier to find patterns and make accurate recommendations.
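As a purely hypothetical illustration (not Netflix's actual system), here is how a restricted Boltzmann machine's hidden layer can compress a vector of movie ratings into a couple of "taste" features. The weights would normally be learned from user data; all names and numbers here are made up:

```javascript
// Hypothetical RBM sketch: six movie ratings (1 = liked, 0 = disliked)
// compressed into two hidden "taste" features.
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }

function hiddenFromVisible(ratings, weights, hiddenBias) {
  // Probability that each hidden feature is "on" given the ratings
  return hiddenBias.map((bias, h) => {
    let sum = bias;
    for (let v = 0; v < ratings.length; v++) {
      sum += weights[v][h] * ratings[v];
    }
    return sigmoid(sum);
  });
}

const weights = [ // 6 visible units x 2 hidden units, made-up values
  [2, -1], [2, -1], [-1, 2],
  [-1, 2], [2, -1], [-1, 2],
];
const hiddenBias = [-1, -1];
console.log(hiddenFromVisible([1, 1, 0, 0, 1, 0], weights, hiddenBias));
// -> roughly [0.99, 0.02]: a 6-number rating history reduced to 2 features
```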
How was it built
This software was created using Vibe Coding with a Large Language Model (LLM) chatbot and then reworked in
look & feel. Some features had to be implemented manually, and corrections and improvements had to be made.
The following Vibe Coding prompts were used on Copilot:
"I want to simulate a hopfield net in a single webpage using javascript. Create html with a canvas that
displays 5 neurons in a circle. connect the neurons with lines. Display the logit value in each neuron
circle. create the necessary code to run the net and update the values of th neurons. display the values
of the weights next to each line. Color the lines according to their values. Allow the user the click on
a neuron to toggle its value. Add a button to start the learning phase and another button to start the
read out."
"put everything in single page"
"implement a real learning phase. Make the learning function async and stop learning when a stable
state has reached or the start button has been pressed again. Also update the line width of the
connectors according to the weights amount. Implement a real readout phase. Make the readout phase async
and stop readout when a stable state has reached or the readout button has been pressed again. Display
the learing cycles and the readout cycles. Reset the cycles when the start or read button has been
pressed to start the processes."
"compute the real weights according to a hopfield net. Check for a real stable state. Do not assume
anything."
"implement a plus and minus button that will add or remove neurons. keep the neurons in a circle. Add
or remove connections accordingly. let the connection lines start and end at the circly outline instead
of the midpoint. increase the font size of the displayed values by two"
"increase the font size of the neuron values by factor of 2. Increase the linewidth of the neurons by a
factor of 2. Use a gradient for the neuron color. Increase the size of the neuroncircle and the canvas
by 2"
"append a chart.js that displays the energy of the net on the y axis and the time on the x axis. Let
the chart display the energy change during learning and readout. The chart should reset on start
learning or start readout"
"append a text field displaying the current energy of the net update the value during learning, readout
or when a neuron is pressed. Remove the animation flag from the chart. Debug the chart algorythm it
seems it shows the end value only"
"the energy is only computed when I press start but does not change during the learning cycles. Also do
not reset the chart on learning or readout, instead append a reset button for the chart. If possible
decrease the size of the weight font by a factor of 2 and align the weight value text to the
connectors."
"the storedPatterns.push([...neurons]); has to go inside the while (learning) loop otherwise learning
does not work during cycles. Also display the weight values in the same angle as the connectors so they
follow on each weight line."
"increase the width of the connectors by 3, keep them dynamic to the weights. Only show the weight
value one time so that it is well readable"