
Hopfield / Boltzmann Network Simulation

This software was largely created by AI Vibe Coding
Created by YouMinds
The 2024 Nobel Prize in Physics was awarded to John J. Hopfield and Geoffrey Hinton for their foundational work in artificial neural networks, which includes Hopfield networks and Boltzmann machines. Their contributions have been instrumental in advancing machine learning and AI.
This software shows an energy-based neural network that is similar in function to Hopfield and Boltzmann networks.
What is this
[Interactive simulation: Learning Cycles, Readout Cycles, and Current Energy counters]
Click on neurons to turn them on or off and set a pattern. Press Start Learning to train the network on that pattern and watch the energy level decrease. Press Stop Learning, click neurons to change the pattern, and observe the energy level rise again. Press Start Readout to see the network's output, showing how it recalls or generates patterns based on the learned data. Repeat the process with other patterns and test how many different patterns the network can store and reconstruct. Press Add/Remove Neuron to customize the network by adding or removing neurons, changing its structure and complexity.
What is a Hopfield and Boltzmann Network anyway
Unlike feed-forward networks, Hopfield networks and Boltzmann machines are recurrent: their connections form loops, so each neuron's output feeds back as input to the others. This recurrence lets the networks refine a pattern, continuously improving their guess until they settle on the best solution. By looping through the data multiple times, they can manage and recognize complex patterns.
The value of a neuron can be calculated using a simple summation formula based on the input values and weights as follows:
output = σ(weight1 × input1 + weight2 × input2 + ... + weightN × inputN)
Where the activation function σ uses a threshold value to determine whether a neuron is on or off. It's that simple.
[Interactive diagram: a threshold unit turning an input into a 0 or 1 output]
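The formula above can be sketched in a few lines of JavaScript (the language the simulation itself is written in). The function and variable names here are illustrative, not taken from the page's actual code:

```javascript
// Weighted sum of inputs passed through a hard threshold activation:
// the neuron is "on" (1) if the sum reaches the threshold, "off" (0) otherwise.
function neuronOutput(weights, inputs, threshold = 0) {
  const sum = weights.reduce((acc, w, i) => acc + w * inputs[i], 0);
  return sum >= threshold ? 1 : 0;
}

// Two active inputs with positive weights turn the neuron on...
console.log(neuronOutput([0.5, 0.5], [1, 1])); // 1
// ...but a strong negative weight keeps it off.
console.log(neuronOutput([0.5, -1.0], [1, 1])); // 0
```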
Both networks are energy based. Imagine you have a hilly landscape, where the lowest points (valleys) represent the most stable and desirable states. These valleys are what the networks are trying to find. In both Hopfield and Boltzmann networks, "energy" is a way to describe the stability of different states or patterns.
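For a Hopfield network, the "height" of the landscape at a given state is the standard energy function E = -½ Σᵢⱼ wᵢⱼ·sᵢ·sⱼ. A minimal sketch, with illustrative names (this is the quantity the simulation's Current Energy field tracks):

```javascript
// Energy of a network state: E = -1/2 * sum over i,j of w[i][j] * s[i] * s[j].
// Lower energy means a more stable state; learned patterns sit in the valleys.
function networkEnergy(weights, states) {
  let e = 0;
  for (let i = 0; i < states.length; i++) {
    for (let j = 0; j < states.length; j++) {
      e -= 0.5 * weights[i][j] * states[i] * states[j];
    }
  }
  return e;
}

// Two neurons joined by a positive (excitatory) weight: agreeing states (+1, +1)
// sit lower in the landscape than disagreeing ones (+1, -1).
const w = [[0, 1], [1, 0]];
console.log(networkEnergy(w, [1, 1]));  // -1
console.log(networkEnergy(w, [1, -1])); //  1
```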
Think of a Hopfield network or Boltzmann machine as a group of gas particles in a box. Just as gas particles move around randomly until they reach an even distribution, the neurons in a Boltzmann machine switch on and off randomly to find the best pattern. Both systems use randomness to explore different states and settle into the most stable, lowest-energy configuration.
Learn how to find valleys in machine learning here.
Boltzmann machines are named after Ludwig Boltzmann because they use the principles of energy distribution and minimization that he studied in gas particles. This connection helps explain how the machines work to find optimal solutions by exploring different energy states.
View a simulation of gas particles in a box here.
Hopfield Networks
Imagine Hopfield networks like a big, interconnected web, similar to a spider web. Each point where the threads meet is like a neuron, and the threads themselves are connections. These networks are used mainly for memory storage. Think of them as a chalkboard where you can write, erase, and rewrite patterns (memories).
Here's the cool part: if you show this network part of a pattern, it can recall the whole thing. It's like if you see just a corner of a puzzle piece, and you know what the whole picture is. This is useful for recognizing patterns even if they're incomplete or noisy.
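The "write a pattern, then recall it from a fragment" behaviour can be sketched with the classic Hebbian learning rule and an update loop that runs until the state stops changing. The states here are +1/-1 and all names are illustrative, not taken from the simulation's code:

```javascript
// Store patterns with the Hebbian rule: w[i][j] += p[i] * p[j] (no self-connections).
function trainHopfield(patterns, n) {
  const w = Array.from({ length: n }, () => new Array(n).fill(0));
  for (const p of patterns) {
    for (let i = 0; i < n; i++) {
      for (let j = 0; j < n; j++) {
        if (i !== j) w[i][j] += p[i] * p[j];
      }
    }
  }
  return w;
}

// Recall: repeatedly threshold each neuron's weighted input until stable.
function recall(w, state, maxSteps = 100) {
  const s = state.slice();
  for (let step = 0; step < maxSteps; step++) {
    let changed = false;
    for (let i = 0; i < s.length; i++) {
      const sum = w[i].reduce((acc, wij, j) => acc + wij * s[j], 0);
      const next = sum >= 0 ? 1 : -1;
      if (next !== s[i]) { s[i] = next; changed = true; }
    }
    if (!changed) break; // stable state (energy valley) reached
  }
  return s;
}

// Store one 5-neuron pattern, then recover it from a corrupted copy.
const stored = [1, -1, 1, -1, 1];
const weights = trainHopfield([stored], 5);
console.log(recall(weights, [1, -1, 1, -1, -1])); // [ 1, -1, 1, -1, 1 ]
```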
Boltzmann Machines
Now, Boltzmann machines are a bit like Hopfield networks but with a twist. Imagine a group of friends who gossip to figure out a mystery. Each friend represents a neuron, and they share information until they all agree on the most likely solution. They "gossip" by turning on and off randomly, but over time, they settle into a pattern that solves the problem.
Boltzmann machines are used for learning and optimizing. They can find hidden patterns in data and make decisions based on those patterns. It's like having a super detective that can solve puzzles by trial and error until it figures out the best solution.
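The "gossiping" above is a stochastic update: instead of a hard threshold, a Boltzmann unit turns on with probability σ(input / T), where the temperature T controls how random it is. A sketch with illustrative names (the injectable random source is just for demonstration):

```javascript
// A Boltzmann neuron turns on with probability sigma(input / T).
// High temperature = nearly random "gossip" that can escape poor solutions;
// as T drops, the unit behaves more like a deterministic threshold neuron.
function stochasticUpdate(weights, states, i, temperature, rand = Math.random) {
  const input = weights[i].reduce((acc, w, j) => acc + w * states[j], 0);
  const pOn = 1 / (1 + Math.exp(-input / temperature));
  return rand() < pOn ? 1 : 0;
}

// With a strong positive input and low temperature, p(on) is close to 1,
// so even a middling random draw switches the unit on.
const w2 = [[0, 2], [2, 0]];
console.log(stochasticUpdate(w2, [0, 1], 0, 0.1, () => 0.5)); // 1
```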
In summary: Hopfield networks store and recall patterns like a memory, while Boltzmann machines use randomness to learn hidden structure and optimize solutions.
Use case: Netflix Recommendation System
Netflix uses a recommendation system to suggest movies and shows you might like based on your viewing history and preferences. This system helps you discover new content you might enjoy.
To make these recommendations, Netflix needs to handle a massive amount of data—thousands of movies and millions of users. This is where dimension reduction comes in. It simplifies the data by focusing on the most important features.
Dimension Reduction with Boltzmann Machines
In essence, Boltzmann machines compress each user's long list of ratings into a few hidden "taste" features. This reduces the amount of data Netflix needs to process, making it easier to find patterns and make accurate recommendations.
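A toy sketch of this kind of dimension reduction: a restricted Boltzmann machine maps a vector of visible ratings onto a few hidden features via p(hⱼ = 1) = σ(bⱼ + Σᵢ wᵢⱼ·vᵢ). The weight matrix and "taste" labels below are invented for illustration; a real recommender learns them from data:

```javascript
// Dimension reduction: project 5 movie ratings (visible units) onto
// 2 hidden "taste" features. Weights here are hand-picked for the example.
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }

function hiddenFeatures(ratings, weights, biases) {
  return weights.map((row, j) =>
    sigmoid(biases[j] + row.reduce((acc, w, i) => acc + w * ratings[i], 0)));
}

const ratings = [1, 1, 0, 0, 1]; // watched/liked flags for 5 titles
const W = [
  [2, 2, -1, -1, 2],             // feature 1 responds to titles 1, 2, 5 ("sci-fi fan")
  [-1, -1, 2, 2, -1],            // feature 2 responds to titles 3, 4 ("drama fan")
];
const features = hiddenFeatures(ratings, W, [0, 0]);
console.log(features.map(f => f.toFixed(2))); // [ '1.00', '0.05' ]
```

Five numbers have become two, and those two capture what matters for recommending similar titles.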
How was it built
This software was created using Vibe Coding with a Large Language Model (LLM) chatbot and then reworked for look & feel. Some features had to be implemented manually, and a number of corrections and improvements were made by hand.
The following Vibe Coding prompts were used on Copilot:
"I want to simulate a hopfield net in a single webpage using javascript. Create html with a canvas that displays 5 neurons in a circle. connect the neurons with lines. Display the logit value in each neuron circle. create the necessary code to run the net and update the values of th neurons. display the values of the weights next to each line. Color the lines according to their values. Allow the user the click on a neuron to toggle its value. Add a button to start the learning phase and another button to start the read out."
"put everything in single page"
"implement a real learning phase. Make the learning function async and stop learning when a stable state has reached or the start button has been pressed again. Also update the line width of the connectors according to the weights amount. Implement a real readout phase. Make the readout phase async and stop readout when a stable state has reached or the readout button has been pressed again. Display the learing cycles and the readout cycles. Reset the cycles when the start or read button has been pressed to start the processes."
"compute the real weights according to a hopfield net. Check for a real stable state. Do not assume anything."
"implement a plus and minus button that will add or remove neurons. keep the neurons in a circle. Add or remove connections accordingly. let the connection lines start and end at the circly outline instead of the midpoint. increase the font size of the displayed values by two"
"increase the font size of the neuron values by factor of 2. Increase the linewidth of the neurons by a factor of 2. Use a gradient for the neuron color. Increase the size of the neuroncircle and the canvas by 2"
"append a chart.js that displays the energy of the net on the y axis and the time on the x axis. Let the chart display the energy change during learning and readout. The chart should reset on start learning or start readout"
"append a text field displaying the current energy of the net update the value during learning, readout or when a neuron is pressed. Remove the animation flag from the chart. Debug the chart algorythm it seems it shows the end value only"
"the energy is only computed when I press start but does not change during the learning cycles. Also do not reset the chart on learning or readout, instead append a reset button for the chart. If possible decrease the size of the weight font by a factor of 2 and align the weight value text to the connectors."
"the storedPatterns.push([...neurons]); has to go inside the while (learning) loop otherwise learning does not work during cycles. Also display the weight values in the same angle as the connectors so they follow on each weight line."
"increase the width of the connectors by 3, keep them dynamic to the weights. Only show the weight value one time so that it is well readable"