Mathematics of Data
Neural network training is, in general, a non-convex optimization problem. The loss is non-convex in the parameters $\theta$ because the network output $f(x; \theta)$ is a composition of nonlinear functions. Non-convex landscapes have local minima, saddle points, and flat regions — gradient descent could get stuck anywhere.
Yet in practice, gradient descent on neural networks reliably finds solutions with near-zero training loss and good generalization. The empirical success far outstrips the theoretical guarantees.
The Neural Tangent Kernel (NTK), introduced by Jacot, Gabriel, and Hongler (2018), provides the most rigorous explanation we have. In the limit of infinite network width, the training dynamics simplify dramatically: the loss landscape becomes convex, the kernel remains constant throughout training, and convergence is guaranteed. This post derives that result and confronts its limitations.
Consider a two-layer network of width $m$:

$$f(x; \theta) = \frac{1}{\sqrt{m}} \sum_{r=1}^{m} a_r\, \sigma(w_r^\top x),$$

with hidden weights $w_r \in \mathbb{R}^d$, output weights $a_r \in \mathbb{R}$, and a nonlinearity $\sigma$; the $1/\sqrt{m}$ scaling is the NTK parameterization.
The loss is non-convex in $(W, a)$ jointly: for a positively homogeneous activation such as ReLU, scaling $w_r \mapsto c\, w_r$ and $a_r \mapsto a_r / c$ preserves $f$ but changes the loss landscape geometry. Non-convexity means we cannot guarantee gradient descent finds a global minimum.
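This rescaling symmetry is easy to check numerically. A minimal NumPy sketch, assuming a ReLU activation and a $1/\sqrt{m}$ output scaling (all sizes here are arbitrary toy choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 50, 4
W = rng.normal(size=(m, d))   # hidden-layer weights
a = rng.normal(size=m)        # output weights
x = rng.normal(size=d)        # one toy input

def f(W, a):
    # two-layer ReLU network with 1/sqrt(m) output scaling
    return (a @ np.maximum(W @ x, 0.0)) / np.sqrt(m)

c = 3.0
# ReLU is positively homogeneous: relu(c z) = c relu(z) for c > 0,
# so the rescaled parameters (c W, a / c) compute the same function
out_original = f(W, a)
out_rescaled = f(c * W, a / c)
```

The two parameter settings are far apart in weight space yet define the identical function, so the loss surface cannot be convex along the path between them.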
Why does it work anyway? The answer requires understanding what happens as networks become very wide.
Suppose the weights barely move during training — they stay close to their initialization $\theta_0$. Then we can Taylor-expand the network output:

$$f(x; \theta) \approx f(x; \theta_0) + \nabla_\theta f(x; \theta_0)^\top (\theta - \theta_0).$$
In this approximation, $f$ is linear in $\theta$. Linear models have convex loss landscapes. If weights barely move, the optimization problem is essentially convex.
When do weights barely move? When the network is very wide. In a wide network with $m$ neurons per layer, each weight contributes $O(1/\sqrt{m})$ to the output (to keep the output $O(1)$). The loss gradient with respect to each weight is $O(1/\sqrt{m})$, so each weight moves by $O(1/\sqrt{m})$ per gradient step. As $m \to \infty$, the weights move infinitesimally — the linearization becomes exact.
This is the lazy training or kernel regime: the network computes a fixed feature map and learns a linear model on top.
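The claim that individual weight gradients shrink as width grows can be checked directly. A toy NumPy sketch (random data, a single gradient evaluation with respect to the output weights; the widths and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 5
X = rng.normal(size=(n, d))   # toy inputs
y = rng.normal(size=n)        # toy targets

def max_weight_gradient(m):
    # largest single-weight gradient of the squared loss, for a width-m
    # two-layer ReLU net f(x) = a^T relu(W x) / sqrt(m)
    W = rng.normal(size=(m, d))
    a = rng.normal(size=m)
    H = np.maximum(W @ X.T, 0.0)           # hidden activations, shape (m, n)
    residual = (a @ H) / np.sqrt(m) - y    # outputs minus targets
    grad_a = (H @ residual) / np.sqrt(m)   # dL/da, one entry per output weight
    return np.abs(grad_a).max()

g_narrow = max_weight_gradient(64)
g_wide = max_weight_gradient(65536)
# per-weight gradients shrink roughly like 1/sqrt(m) as width grows
```

At the wider width, every individual weight receives a far smaller gradient, which is exactly the mechanism behind lazy training.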
Definition. The Neural Tangent Kernel is:

$$K(x, x') = \big\langle \nabla_\theta f(x; \theta),\; \nabla_\theta f(x'; \theta) \big\rangle.$$
This is an inner product of the gradient vectors (the Jacobians of the network output with respect to parameters), evaluated at two inputs $x$ and $x'$.
Interpretation. $K(x, x')$ measures how similarly inputs $x$ and $x'$ respond to weight perturbations. If changing weights has similar effects on $f(x)$ and $f(x')$, then $K(x, x')$ is large. The NTK is a kernel function — symmetric, positive semi-definite — and defines a reproducing kernel Hilbert space (RKHS).
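For a concrete two-layer ReLU network, the empirical NTK can be computed exactly from the analytic Jacobian. A self-contained NumPy sketch (all dimensions are toy choices; `jacobian` is a hypothetical helper, not a library function):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n = 3, 200, 8
X = rng.normal(size=(n, d))   # toy training inputs
W = rng.normal(size=(m, d))   # hidden weights
a = rng.normal(size=m)        # output weights

def jacobian(x):
    # gradient of f(x) = a^T relu(W x) / sqrt(m) w.r.t. all parameters (W, a)
    z = W @ x
    act = (z > 0).astype(float)              # ReLU derivative
    gW = np.outer(a * act, x) / np.sqrt(m)   # df/dW
    ga = np.maximum(z, 0.0) / np.sqrt(m)     # df/da
    return np.concatenate([gW.ravel(), ga])

J = np.stack([jacobian(x) for x in X])   # (n, num_params)
K = J @ J.T   # Gram matrix of the empirical NTK: K[i, j] = K(x_i, x_j)
```

The resulting Gram matrix is symmetric and positive semi-definite by construction, since it is an outer product of Jacobians.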
As network width $m \to \infty$, two remarkable things happen:
1. $K$ becomes deterministic. At initialization, $\theta_0$ is random. The NTK is a sum of $m$ terms, each $O(1/m)$, so by the law of large numbers it concentrates around its expectation. In the limit, $K$ is a fixed deterministic kernel $K_\infty$ that depends only on the architecture, not on the random initialization.
2. $K$ stays constant during training. Because each weight moves by $O(1/\sqrt{m})$ per step, the change in each neuron's contribution per step is $O(1/\sqrt{m})$, and the change in $K$ is $O(1/\sqrt{m})$. As $m \to \infty$, the kernel freezes at its initial value $K_\infty$. Training does not change the feature map — only the linear combination of features.
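The first property can be observed numerically: the spread of a single kernel entry across random initializations shrinks as the width grows. A toy NumPy sketch (widths and sample counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
x1, x2 = rng.normal(size=d), rng.normal(size=d)   # two fixed toy inputs

def ntk_entry(m):
    # K(x1, x2) = <grad f(x1), grad f(x2)> at a fresh random init of width m
    W = rng.normal(size=(m, d))
    a = rng.normal(size=m)
    def grad(x):
        z = W @ x
        act = (z > 0).astype(float)                 # ReLU derivative
        gW = np.outer(a * act, x) / np.sqrt(m)      # df/dW
        ga = np.maximum(z, 0.0) / np.sqrt(m)        # df/da
        return np.concatenate([gW.ravel(), ga])
    return grad(x1) @ grad(x2)

# the kernel entry fluctuates across inits, but less and less as m grows
std_narrow = np.std([ntk_entry(16) for _ in range(200)])
std_wide = np.std([ntk_entry(1024) for _ in range(200)])
```

The wide network's kernel entry is much more tightly concentrated, consistent with convergence to a deterministic $K_\infty$.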
These two properties make the infinite-width limit analytically tractable.
Let $u(t) \in \mathbb{R}^n$ be the vector of network outputs on the training set at time $t$, and $y \in \mathbb{R}^n$ the vector of true labels. Define the squared loss:

$$L = \frac{1}{2} \| u(t) - y \|^2.$$
Under gradient flow $\dot{\theta} = -\nabla_\theta L$, how does $u(t)$ evolve?
By the chain rule:

$$\frac{du_i}{dt} = \nabla_\theta f(x_i; \theta)^\top \frac{d\theta}{dt} = -\nabla_\theta f(x_i; \theta)^\top \nabla_\theta L.$$
The gradient of $L$ with respect to $\theta$ is:

$$\nabla_\theta L = \sum_{j=1}^{n} (u_j - y_j)\, \nabla_\theta f(x_j; \theta).$$
Substituting:

$$\frac{du_i}{dt} = -\sum_{j=1}^{n} \big\langle \nabla_\theta f(x_i; \theta),\; \nabla_\theta f(x_j; \theta) \big\rangle (u_j - y_j).$$
In matrix form, defining the empirical NTK matrix $H(t) \in \mathbb{R}^{n \times n}$ with $H_{ij}(t) = K(x_i, x_j)$:

$$\frac{du}{dt} = -H(t)\, (u(t) - y).$$
In the infinite-width limit, $H(t) = H_\infty$ is constant. This becomes a linear ODE:

$$\frac{du}{dt} = -H_\infty (u(t) - y).$$
This is a linear system with constant coefficients. The solution is:

$$u(t) - y = e^{-H_\infty t}\, (u(0) - y),$$
where $e^{-H_\infty t}$ is the matrix exponential. In the eigenbasis of $H_\infty$ (which is symmetric PSD, so diagonalizable with non-negative eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n \ge 0$):

$$\big( u(t) - y \big)_i = e^{-\lambda_i t}\, \big( u(0) - y \big)_i \quad \text{(in eigencoordinates)}.$$
Each component of the error decays exponentially at rate $\lambda_i$.
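The closed-form solution can be checked against a direct numerical integration of the ODE, using a random positive-definite matrix as a stand-in for $H_\infty$ (a toy sketch; sizes, horizon, and step size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.normal(size=(n, n))
H = A @ A.T + 0.1 * np.eye(n)   # random positive-definite stand-in for H_inf
y = rng.normal(size=n)          # targets
u = rng.normal(size=n)          # initial outputs u(0)
u0 = u.copy()

# closed form via eigendecomposition: u(t) - y = V exp(-Lambda t) V^T (u(0) - y)
lam, V = np.linalg.eigh(H)
t_end = 1.0
u_closed = y + V @ (np.exp(-lam * t_end) * (V.T @ (u0 - y)))

# forward-Euler integration of du/dt = -H (u - y)
dt = 1e-4
for _ in range(int(t_end / dt)):
    u = u - dt * H @ (u - y)
```

With a small enough step size the Euler trajectory matches the matrix-exponential solution, and the error norm shrinks monotonically toward zero.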
Key result: If $\lambda_{\min}(H_\infty) > 0$ (positive definite — all eigenvalues strictly positive), then $e^{-H_\infty t} \to 0$ and all components decay to zero:

$$\lim_{t \to \infty} u(t) = y, \qquad L(t) \le e^{-2 \lambda_{\min} t}\, L(0).$$
Infinite-width networks achieve zero training loss under gradient flow, at a linear rate determined by the smallest eigenvalue of the NTK matrix.
This is the main theorem of NTK theory. The non-convex neural network training problem reduces, in the infinite-width limit, to fitting a kernel regression model with kernel — a convex problem with a known unique solution.
The eigenvalues of $H_\infty$ carry information beyond just convergence speed.
Convergence: Directions with large $\lambda_i$ converge fast (within $O(1/\lambda_i)$ time). Directions with small $\lambda_i$ converge slowly — they are effectively not learned within a finite training budget.
Generalization: After training for time $t$, the network output is:

$$u(t) = \big( I - e^{-H_\infty t} \big)\, y + e^{-H_\infty t}\, u(0).$$
In the eigenbasis, the $i$-th component of the learned function (starting from $u(0) = 0$) is $\big( 1 - e^{-\lambda_i t} \big)\, \tilde{y}_i$, where $\tilde{y}_i$ is the $i$-th eigencoordinate of $y$. Large eigenvalue directions are learned faithfully; small eigenvalue directions are suppressed. The NTK performs spectral filtering: it implicitly regularizes by not learning directions where $H_\infty$ has small eigenvalues.
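The filtering factor $1 - e^{-\lambda_i t}$ makes this concrete: at any finite training time $t$, eigendirections with $\lambda_i \ll 1/t$ are essentially untouched. A short illustration with hypothetical eigenvalues:

```python
import numpy as np

t = 10.0                                   # training time (arbitrary)
lams = np.array([1.0, 0.1, 0.01, 0.001])   # hypothetical NTK eigenvalues
# fraction of each eigencomponent of the target learned after time t,
# starting from zero output: 1 - exp(-lambda_i * t)
learned = 1.0 - np.exp(-lams * t)
```

The largest-eigenvalue direction is almost fully learned, while the smallest is learned only to a fraction of about $\lambda_i t$ — the spectral filter in action.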
This connects NTK to classical kernel regression. The RKHS norm of the NTK solution bounds the generalization error — the same machinery as in kernel SVM or Gaussian process regression.
μP (Maximal Update Parameterization). In standard parameterization, the NTK changes with width: the kernel scales differently in different layers. The μP parameterization (Yang & Hu, 2022) scales initialization variance and learning rates so that the size of each layer's updates stabilizes as width grows. This enables hyperparameter transfer: the optimal learning rate found on a small model transfers directly to a large model, saving enormous compute in practice.
The NTK theory is mathematically beautiful and provides our most rigorous framework for understanding neural network training. But it is a theory of a regime neural networks rarely operate in.
Lazy training vs. feature learning. In the NTK regime, weights barely move — the feature map is fixed at initialization. Real networks learn features: early layers develop edge detectors, mid layers develop texture detectors, and so on. This is representation learning, and it requires weights to move significantly. NTK theory misses it entirely.
Depth. In the infinite-width limit, the NTK is the same regardless of depth (for appropriate initialization). But depth clearly matters in practice — a 1-layer network, no matter how wide, performs much worse than a 100-layer network on natural images. NTK theory predicts no benefit to depth.
Finite width. For finite-width networks, $K$ changes during training. The linear ODE becomes nonlinear, and the nice closed-form solution breaks down. Corrections are $O(1/\sqrt{m})$ but can compound over long training.
Practical learning rates. NTK theory holds for infinitesimally small learning rates (continuous gradient flow). Practical training uses large discrete steps, which can push networks out of the lazy regime.
The NTK is best understood as a rigorous baseline: it describes the simplest possible regime of neural network training, and real networks deviate from it in exactly the ways that make them powerful. Understanding those deviations is the frontier of deep learning theory.
Part of the Mathematics of Data series — mathematical notes on EE-556 at EPFL.