MicroGPT Visualized

Building a GPT from scratch — an interactive visual guide

← Step 4: Transformer 5.2 Momentum and Adaptive Rates →
Step 5: Adam › 5.1

What Changes

The model architecture is done. Step 4 gave us multi-head attention, a configurable layer loop, and per-layer KV caches — a complete GPT transformer. The only thing left is how we train it.

The problem with SGD

Since Step 1, we’ve used stochastic gradient descent (SGD) with a linearly decaying learning rate. SGD is simple: compute the gradient, multiply by the learning rate, subtract from the parameter.

But SGD treats every parameter identically. The same learning rate applies to the token embeddings, the attention weights, and the output projection — even though their gradients behave very differently. Some parameters get large, noisy gradients. Others get small, consistent ones. A single learning rate can’t be right for all of them.
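The update described above can be sketched in a few lines. This is an illustrative reconstruction, not the tutorial's actual code: the function name, the base learning rate, and the step count are placeholders.

```python
def sgd_step(params, grads, step, num_steps, base_lr=0.01):
    # Linearly decay the learning rate from base_lr down to zero.
    lr = base_lr * (1 - step / num_steps)
    # Every parameter gets the same update rule: subtract lr * gradient.
    for i in range(len(params)):
        params[i] -= lr * grads[i]
    return lr

# One step on two toy parameters with very different gradient scales —
# both are scaled by the same lr, which is exactly the limitation at issue.
params = [1.0, -2.0]
grads = [0.5, -0.25]
sgd_step(params, grads, step=0, num_steps=100)
```

Note that the decay schedule is global too: it shrinks the step size over time, but still uniformly across all parameters.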

What Adam adds

Adam — short for Adaptive Moment Estimation — tracks two running statistics for each parameter:

  1. Momentum (first moment) — a smoothed average of recent gradients. Instead of reacting to each gradient in isolation, Adam remembers which direction a parameter has been moving and continues in that direction.

  2. Adaptive rate (second moment) — a smoothed average of recent squared gradients. Parameters with large, volatile gradients get smaller effective step sizes. Parameters with small, stable gradients get larger ones.

Together, these give each parameter its own effective learning rate that adapts over time.
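The two moments combine into the standard Adam update, sketched here for a single scalar parameter. This is a generic Adam implementation with the textbook default hyperparameters (`beta1=0.9`, `beta2=0.999`, `eps=1e-8`); the tutorial's actual code may use different values, and the function name is a placeholder.

```python
import math

def adam_step(p, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # First moment: exponentially smoothed average of gradients (momentum).
    m = beta1 * m + (1 - beta1) * grad
    # Second moment: smoothed average of squared gradients (adaptive scale).
    v = beta2 * v + (1 - beta2) * grad * grad
    # Bias correction: early on, m and v are biased toward their zero
    # initialization; dividing by (1 - beta^t) compensates. t starts at 1.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # The effective step is lr * m_hat / sqrt(v_hat): large, volatile
    # gradients inflate v_hat and shrink the step; small, stable ones don't.
    p = p - lr * m_hat / (math.sqrt(v_hat) + eps)
    return p, m, v
```

On the very first step the bias-corrected update reduces to roughly `lr * sign(grad)`, regardless of the gradient's magnitude — a concrete illustration of the per-parameter scaling described above.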

What stays the same

Everything. The model, the dataset, the autograd, the inference loop — all identical to Step 4. The only changes are in the training loop.

What’s new

After this step, the code is functionally identical to Karpathy’s original train.py.
