13 Pinned Neurons and Joint State-Parameter Dynamics
Purpose. Introduces the pinned-neuron construction and the joint flow on (cochain, parameters) that underlies sheaf-based training.
13.1 Key concepts & results
- Pinning: fixing the value of any vertex stalk (not only the input).
- Output pinning ⇔ supervised target; bidirectional propagation drives interior neurons to a harmonic extension consistent with both boundaries.
- Joint state-parameter dynamics: simultaneous flow on cochain x and weights θ minimizing total Dirichlet energy ½‖δ_θ x‖².
- Stationary points characterize trained networks under the sheaf objective.
Prerequisites: Ch 8, Ch 10
13.2 Motivating example
Pin both ends of the [2, 4, 1] path graph: clamp the input vertex to a concrete training point \(x_i\) and clamp the output vertex to its supervised target \(y_i\). The Dirichlet problem now has two boundary conditions instead of one. With the current weights fixed, the interior cochain relaxes to whatever configuration minimizes the residual Dirichlet energy \(E(x; \theta) = \tfrac{1}{2} \|\delta_\theta x\|^2\) subject to both clamps — the best interior state consistent with the current parameters, though generally not one that fits \((x_i, y_i)\) exactly.
Now release \(\theta\) and let the weights also flow, slowly, down the same energy. The fixed points of the joint flow on \((x, \theta)\) are exactly the parameter settings for which some interior cochain realizes both boundary conditions with zero residual — i.e., the network that fits the training pair exactly. With multiple training pairs, the joint flow averages over them, and the fixed points characterize the best attainable fit. This is sheaf-based training: no forward/backward alternation, no stored activations, just a single gradient flow of Dirichlet energy on cochain and weights simultaneously.
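The joint flow on this example can be sketched in a few lines of NumPy. This is an illustrative sketch, not the chapter's lab code: it assumes the standard sheaf-on-a-path setup where each edge carries the weight map on its source and the identity on its target, so the two edge residuals are \(x_1 - W_1 x_0\) and \(x_2 - W_2 x_1\). All variable names, the seed, and the learning rates are made up for the demonstration.

```python
# Joint state-parameter gradient flow on the [2, 4, 1] path graph.
# Hypothetical names throughout (x0, x1, y, W1, W2, lr_x, lr_w).
import numpy as np

rng = np.random.default_rng(0)
x0 = np.array([1.0, -0.5])          # input stalk, clamped to the training point
y  = np.array([0.7])                # output stalk, clamped to the target
W1 = 0.5 * rng.standard_normal((4, 2))
W2 = 0.5 * rng.standard_normal((1, 4))
x1 = W1 @ x0                        # interior stalk, initialized by a forward pass

def energy(x1, W1, W2):
    r1 = x1 - W1 @ x0               # residual on edge (input -> hidden)
    r2 = y  - W2 @ x1               # residual on edge (hidden -> output)
    return 0.5 * (r1 @ r1 + r2 @ r2)

lr_x, lr_w = 0.1, 0.02              # fast state, slow weights: two timescales
for _ in range(5000):
    r1 = x1 - W1 @ x0
    r2 = y  - W2 @ x1
    x1 -= lr_x * (r1 - W2.T @ r2)   # state flow: pulled by both incident edges
    W1 += lr_w * np.outer(r1, x0)   # weight flow: each update uses only its edge
    W2 += lr_w * np.outer(r2, x1)

print(energy(x1, W1, W2))           # residual energy, near zero at a fixed point
```

Note that the weight updates are outer products of an edge residual with the stalk at the edge's source — no chain-rule product ever crosses more than one edge, which previews the locality point made in the intuition section.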
13.3 Intuition
In Chs. 8–10 the input was pinned and the forward-pass output fell out of the harmonic extension. Nothing in the sheaf picture privileged the input vertex, though — pinning is just Dirichlet boundary data, and any vertex can carry it. Supervised learning, read through this lens, is two-sided pinning: fix the input stalk at \(x_i\), fix the output stalk at \(y_i\), and ask what configuration of interior state + weights is consistent with both ends. The remaining Dirichlet energy measures how badly the two ends disagree through the current network; driving it to zero is fitting the example.
The picture to keep is a string clamped at both ends, draping between them under tension. With the weights frozen, the cochain settles into whatever shape the tension permits — a harmonic interpolation between input and target, mediated by the current sheaf. With the weights also free to move (on a slower timescale), the sheaf itself deforms to reduce the tension further. At a fixed point, the string is slack: the interior is consistent with both ends, which is the same as saying the network maps \(x_i\) to \(y_i\).
Two structural features distinguish this from ordinary backprop. First, the update is local: each weight’s update depends only on the two stalks it connects, not on chain-rule products propagated from the loss. Second, the two-sided pinning is symmetric: the framework treats input and target interchangeably. This symmetry is what lets Ch. 11 generalize cleanly to partial clamping — pin any subset of stalks to any subset of values (Note 5.1 of the paper), which covers hidden-neuron steering, missing-feature imputation, counterfactual editing, and ordinary supervised learning as the special case “pin the input and the output.” The mathematical machinery for all of these is the same Dirichlet problem with different boundary sets.
Intuition device (planned): Picture of two clamps on the path graph, with the cochain ‘draping’ between them like a string under tension.
13.4 Formal development
[TO FILL: formal development — definitions, statements, careful notation]
13.5 Theorem demonstrations
[TO FILL: proofs / proof sketches of the key results named above. Proofs should come *after* the intuition section, as agreed.]
13.6 Worked examples
[TO FILL: worked example(s) carried out by hand]
13.7 Coding lab
lab-11-output-pinning —
[TO FILL: one-paragraph description of the lab's goal]
13.8 Exercises
[TO FILL: 3–6 exercises, graded from warm-up to project-level]
13.9 Further reading
[TO FILL: annotated paragraph of 3–6 references]
13.10 FAQ / common misconceptions
[TO FILL: short Q&A for things readers frequently get wrong]