4  Graph Laplacians, Dirichlet Energy, and Harmonic Functions

Purpose. Builds discrete Hodge intuition on plain graphs so that the sheaf generalization in Ch. 5 feels like a one-line upgrade.

4.1 Motivating example

Heat spreads. If you place a hot rod at one vertex of a graph and keep it there, neighboring vertices warm up, then their neighbors warm up, and eventually the whole graph reaches a steady temperature profile determined entirely by the boundary condition at the hot vertex and the graph structure. That steady profile is a harmonic function — and finding it is the Dirichlet problem on a graph.

This is the exact template for what happens in the sheaf setting of Chs. 5–9. The input data is the “hot rod” (the boundary condition), the sheaf Laplacian drives the dynamics, and the forward pass output of a ReLU network is the unique harmonic extension of the input data. Everything in this chapter is that story, stripped of sheaves and run on ordinary graphs.

4.2 Key concepts and results

  • Graphs, oriented incidence, and the coboundary operator \(\delta = B^T\).
  • Dirichlet energy \(E(x) = \tfrac{1}{2}\|\delta x\|^2\) and its gradient flow \(\dot{x} = -Lx\).
  • Harmonic functions; the Dirichlet problem with pinned boundary data.
  • Convergence: exponential rate governed by the spectral gap \(\lambda_2\).
  • Connection to the sheaf setting: the constant sheaf recovers everything in this chapter.

4.3 Intuition

Two vertices connected by an edge “want” to have the same value. If vertex \(u\) has value 3 and vertex \(v\) has value 7, the edge \((u,v)\) carries a discrepancy of \(7 - 3 = 4\) (contributing \(4^2/2 = 8\) to the Dirichlet energy). The graph Laplacian is the operator that converts all these pairwise discrepancies into a force: vertex \(v\) is pulled down toward its neighbors, and vertex \(u\) is pushed up. Under the resulting gradient flow, all values equilibrate — unless some vertices are pinned to fixed values, in which case the system finds the unique equilibrium consistent with those constraints. That equilibrium is the harmonic extension of the pinned data.
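This push–pull picture is easy to check numerically. A minimal sketch with numpy (the single-edge graph and variable names are our own):

```python
import numpy as np

# One edge u -> v with values x_u = 3, x_v = 7.
delta = np.array([[-1.0, 1.0]])   # coboundary row: (delta x)_e = x_v - x_u
L = delta.T @ delta               # Laplacian of the single-edge graph
x = np.array([3.0, 7.0])

force = -L @ x                    # gradient-flow direction, -grad E(x)
print(force)                      # [ 4. -4.]: u is pushed up, v is pulled down
```

The signs confirm the intuition: the flow moves the two values toward each other at equal and opposite rates.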

The spectral gap \(\lambda_2\) (the second eigenvalue of \(L\)) quantifies how quickly equilibration happens: the smaller \(\lambda_2\), the longer the equilibration takes. On a long path graph — which is exactly the graph underlying the neural sheaf — \(\lambda_2\) decreases with the number of vertices, predicting slower equilibration for deeper networks. This is observed experimentally in [1] (Table 2).

4.4 Formal development

Graphs and oriented incidence

Let \(G = (V, E)\) be a finite connected graph with \(n = |V|\) vertices and \(m = |E|\) edges, with a chosen orientation on each edge. The coboundary matrix \(\delta = B^T \in \mathbb{R}^{m \times n}\) (the transpose of the oriented incidence matrix \(B\)) has entries \[\delta_{e,v} = \begin{cases} +1 & \text{if } v = v^+(e) \text{ (head of } e\text{)}, \\ -1 & \text{if } v = v^-(e) \text{ (tail of } e\text{)}, \\ 0 & \text{otherwise}. \end{cases}\]

A 0-cochain \(x \in \mathbb{R}^n\) assigns a real value \(x_v\) to each vertex. The action of \(\delta\) on \(x\) gives the edge differences: \[(\delta x)_e = x_{v^+(e)} - x_{v^-(e)},\] the signed difference of the endpoint values.
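As a concrete check, here is \(\delta\) for an oriented 3-edge path, applied to a sample cochain (a sketch; the specific values are our own):

```python
import numpy as np

# Path v0 -> v1 -> v2 -> v3: row e_i has -1 at the tail v_{i-1}, +1 at the head v_i.
delta = np.array([
    [-1,  1,  0,  0],
    [ 0, -1,  1,  0],
    [ 0,  0, -1,  1],
], dtype=float)

x = np.array([0.0, 2.0, 5.0, 5.0])  # a 0-cochain: one value per vertex
edge_diffs = delta @ x              # signed difference across each edge
print(edge_diffs)                   # [2. 3. 0.]
```

The last entry is zero because \(v_2\) and \(v_3\) already agree.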

Note

Connection to sheaves. In the sheaf generalization (Ch. 5), each vertex and edge carries a vector space (the stalk), and the coboundary involves linear maps between those spaces. Here, all stalks are \(\mathbb{R}\) and all maps are \(\pm 1\). This is the constant sheaf \(\underline{\mathbb{R}}\).

The graph Laplacian and Dirichlet energy

The combinatorial graph Laplacian is \[L = \delta^T \delta \in \mathbb{R}^{n \times n}.\] Explicitly, \((L)_{uv} = -1\) if \(\{u,v\} \in E\), \((L)_{vv} = \deg(v)\), and \((L)_{uv} = 0\) otherwise. The Laplacian is symmetric and positive semidefinite: for any \(x \in \mathbb{R}^n\), \[\langle x, Lx \rangle = \|\delta x\|^2 = \sum_{e \in E} (x_{v^+(e)} - x_{v^-(e)})^2 \geq 0.\]
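The identity \(\langle x, Lx \rangle = \|\delta x\|^2\) and the degree/adjacency pattern of \(L\) can be verified directly (a numpy sketch on the same 3-edge path):

```python
import numpy as np

delta = np.array([
    [-1,  1,  0,  0],
    [ 0, -1,  1,  0],
    [ 0,  0, -1,  1],
], dtype=float)

L = delta.T @ delta
print(L)                              # degrees 1,2,2,1 on the diagonal, -1 per edge

x = np.array([3.0, -1.0, 4.0, 1.0])
quad = x @ L @ x                      # <x, Lx>
edge_sum = np.sum((delta @ x) ** 2)   # sum of squared edge differences
print(quad, edge_sum)                 # equal, and nonnegative
```

Both quantities come out to 50 here: \((-4)^2 + 5^2 + (-3)^2\).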

Definition 4.1 (Dirichlet energy). The Dirichlet energy of a 0-cochain \(x\) is \[E(x) = \tfrac{1}{2}\|\delta x\|^2 = \tfrac{1}{2}\langle x, Lx \rangle.\]

The Dirichlet energy measures the total variation or disagreement in \(x\) across the graph. It is zero exactly when \(x\) is constant on each connected component.

The graph heat equation

The gradient of \(E\) with respect to \(x\) is \(\nabla E(x) = Lx\). Gradient descent on \(E\) gives the graph heat equation: \[\frac{dx}{dt} = -\alpha L x, \qquad \alpha > 0. \tag{4.1}\]

Since \(L\) is positive semidefinite with eigenvalues \(0 = \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n\), the solution \(x(t) = e^{-\alpha Lt} x(0)\) converges exponentially to the projection of \(x(0)\) onto \(\ker L\). On a connected graph, \(\ker L = \mathrm{span}\{\mathbf{1}\}\) (the all-ones vector), so the heat equation drives every signal toward its mean.

Theorem 4.1 (Convergence of the graph heat equation). On a connected graph, solutions to (4.1) satisfy \[\|x(t) - \bar{x}\mathbf{1}\|^2 \leq e^{-2\alpha\lambda_2 t} \|x(0) - \bar{x}\mathbf{1}\|^2,\] where \(\bar{x} = \frac{1}{n}\sum_v x_v\) is the mean and \(\lambda_2 > 0\) is the spectral gap (Fiedler value).

Proof. Expand \(x(0) - \bar{x}\mathbf{1}\) in the eigenbasis of \(L\). The component in the eigenspace \(\ker L\) is zero (since \(x(0) - \bar{x}\mathbf{1}\) is mean-zero). Each remaining component \(\phi_i\) with eigenvalue \(\lambda_i \geq \lambda_2 > 0\) decays as \(e^{-\alpha\lambda_i t}\). The bound follows from \(\lambda_i \geq \lambda_2\) for all \(i \geq 2\). \(\square\)
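Theorem 4.1 can be checked numerically by solving the heat equation through the eigendecomposition \(L = Q\,\mathrm{diag}(\lambda)\,Q^T\) (a numpy sketch; the initial values are our own):

```python
import numpy as np

# Laplacian of the 4-vertex path
L = np.array([
    [ 1, -1,  0,  0],
    [-1,  2, -1,  0],
    [ 0, -1,  2, -1],
    [ 0,  0, -1,  1],
], dtype=float)

lam, Q = np.linalg.eigh(L)            # eigenvalues ascending: lam[0] = 0
x0 = np.array([4.0, 0.0, 1.0, -1.0])
mean = x0.mean()                       # = 1.0
alpha = 1.0

def x_at(t):
    """Exact solution x(t) = exp(-alpha L t) x0 via the spectral decomposition."""
    return Q @ (np.exp(-alpha * lam * t) * (Q.T @ x0))

for t in (0.0, 1.0, 10.0):
    err = np.sum((x_at(t) - mean) ** 2)
    bound = np.exp(-2 * alpha * lam[1] * t) * np.sum((x0 - mean) ** 2)
    print(t, err, err <= bound + 1e-12)   # decay obeys the lambda_2 bound
```

For large \(t\), every entry of `x_at(t)` approaches the mean of `x0`, as the theorem predicts.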

Harmonic functions and the Dirichlet problem

Definition 4.2 (Harmonic function on a graph). A 0-cochain \(f : V \to \mathbb{R}\) is harmonic at vertex \(v\) if \[(Lf)_v = \deg(v) f_v - \sum_{u \sim v} f_u = 0,\] i.e., \(f(v)\) equals the average of its neighbors’ values. A cochain is harmonic (globally) if it is harmonic at every vertex.

Global harmonic functions on a connected graph are exactly the constants. Interesting harmonic functions arise when some vertices are pinned (fixed).

The Dirichlet problem. Given a subset \(U \subseteq V\) of boundary vertices with prescribed values \(u = (u_v)_{v \in U}\), find a function \(f : V \to \mathbb{R}\) such that \(f|_U = u\) and \(f\) is harmonic at every interior vertex \(v \in \Omega = V \setminus U\).

Writing the Laplacian in block form according to the split \(V = U \cup \Omega\): \[L = \begin{pmatrix} L[U,U] & L[U,\Omega] \\ L[\Omega,U] & L[\Omega,\Omega] \end{pmatrix},\] the harmonic condition \((Lf)_v = 0\) for \(v \in \Omega\) becomes \[L[\Omega,\Omega]\, f|_\Omega = -L[\Omega,U]\, u. \tag{4.2}\]

Theorem 4.2 (Unique harmonic extension). If \(G\) is connected and \(U \neq \emptyset\), the matrix \(L[\Omega,\Omega]\) is positive definite and the Dirichlet problem has a unique solution. The solution minimizes the Dirichlet energy \(E(x)\) subject to \(x|_U = u\).

Proof. \(L[\Omega,\Omega]\) is the principal submatrix of \(L\) indexed by the interior vertices; note it retains the full degrees, including edges into \(U\). For any \(x \in \mathbb{R}^\Omega\), extend by zero on \(U\) to \(\tilde{x} \in \mathbb{R}^V\); then \(\langle x, L[\Omega,\Omega]\, x \rangle = \langle \tilde{x}, L\tilde{x} \rangle = \|\delta\tilde{x}\|^2 \geq 0\), with equality only if \(\tilde{x}\) is constant on the connected graph \(G\). Since \(\tilde{x}\) vanishes on \(U \neq \emptyset\), equality forces \(\tilde{x} = 0\). Hence \(L[\Omega,\Omega]\) is positive definite, and the minimum of \(E\) subject to fixed boundary data is attained at the unique solution of (4.2). \(\square\)
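Solving the restricted system \(L[\Omega,\Omega]\, f|_\Omega = -L[\Omega,U]\, u\) is a single linear solve. A sketch on a star graph (our own example: center \(v_0\) free, three leaves pinned):

```python
import numpy as np

# Star graph K_{1,3}: center v0 adjacent to leaves v1, v2, v3.
L = np.array([
    [ 3, -1, -1, -1],
    [-1,  1,  0,  0],
    [-1,  0,  1,  0],
    [-1,  0,  0,  1],
], dtype=float)

interior = [0]                 # Omega: the free center
boundary = [1, 2, 3]           # U: the pinned leaves
u = np.array([1.0, 2.0, 6.0])  # prescribed boundary values

L_II = L[np.ix_(interior, interior)]
L_IB = L[np.ix_(interior, boundary)]
f_int = np.linalg.solve(L_II, -L_IB @ u)
print(f_int)                   # [3.]: the center takes the mean of its pinned neighbors
```

The answer \((1 + 2 + 6)/3 = 3\) is exactly the harmonic condition from Definition 4.2: a free vertex averages its neighbors.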

Pinned dynamics

The pinned (or \(U\)-restricted) heat equation evolves only the interior coordinates: \[\frac{d f|_\Omega}{dt} = -\alpha\bigl(L[\Omega,\Omega]\, f|_\Omega + L[\Omega,U]\, u\bigr). \tag{4.3}\]

Theorem 4.3 (Convergence to harmonic extension). Solutions to (4.3) converge exponentially to the unique harmonic extension \(f^*\) of \(u\). The rate is \(\alpha \lambda_{\min}(L[\Omega,\Omega]) > 0\).

Theorems 4.2 and 4.3 are the direct prototypes for Proposition 3.4 and Theorem 4.1 of [1] (treated in Chs. 8–9). In those chapters, \(L\) becomes the sheaf Laplacian \(\mathcal{L}_\mathcal{F}\), the boundary data \(u\) becomes the neural network input (pinned), and the harmonic extension \(f^*\) becomes the forward pass output.
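The pinned flow can be integrated with explicit Euler steps; on the 4-vertex path with ends pinned at 0 and 3 it relaxes to the linear interpolation \((1, 2)\) (a sketch; the step size is our choice, small enough for stability since the eigenvalues of \(L[\Omega,\Omega]\) are 1 and 3):

```python
import numpy as np

L_II = np.array([[2.0, -1.0], [-1.0, 2.0]])   # interior block for v1, v2
L_IB = np.array([[-1.0, 0.0], [0.0, -1.0]])   # coupling to the pinned v0, v3
u = np.array([0.0, 3.0])                      # pinned values a = 0, b = 3

f = np.array([10.0, -10.0])                   # arbitrary interior start
alpha, dt = 1.0, 0.1                          # Euler step for the pinned heat flow
for _ in range(500):
    f = f - dt * alpha * (L_II @ f + L_IB @ u)
print(f)                                      # -> [1. 2.], the harmonic extension
```

The fixed point of the iteration is exactly the solution of the restricted linear system, so the dynamics and the direct solve agree.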

4.5 Worked example: the path graph with two pinned ends

The path graph with two pinned ends — vertices \(v_0, v_1, \ldots, v_n\) with \(v_0\) and \(v_n\) pinned — is the running example throughout this chapter and Chs. 5, 6, and 11. It is the simplest graph exhibiting a non-trivial Dirichlet problem and is the topological skeleton of the neural sheaf.

Setup. Take \(n = 3\): vertices \(v_0, v_1, v_2, v_3\), edges \(e_1 = v_0 \to v_1\), \(e_2 = v_1 \to v_2\), \(e_3 = v_2 \to v_3\). Pin \(v_0 = a\) and \(v_3 = b\).

Laplacian. The full Laplacian is: \[L = \begin{pmatrix} 1 & -1 & 0 & 0 \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & -1 \\ 0 & 0 & -1 & 1 \end{pmatrix}.\]

Restricted system. Interior vertices \(\Omega = \{v_1, v_2\}\), boundary \(U = \{v_0, v_3\}\). The restricted Laplacian is \[L[\Omega,\Omega] = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}, \qquad L[\Omega,U] = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}.\]

Harmonic extension. Solving \(L[\Omega,\Omega]\, f|_\Omega = -L[\Omega,U](a,b)^T\) gives \[f^*(v_1) = \frac{2a + b}{3}, \qquad f^*(v_2) = \frac{a + 2b}{3}.\] This is linear interpolation between \(a\) and \(b\) — the harmonic function on a path is exactly the discrete version of a linear function. The eigenvalues of \(L[\Omega,\Omega]\) are \(1\) and \(3\), so the pinned heat equation converges at rate \(\alpha \cdot 1 = \alpha\).
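The same computation on longer paths confirms the linear-interpolation pattern for any length (a numpy sketch; `path_laplacian` is our own helper):

```python
import numpy as np

def path_laplacian(N):
    """Combinatorial Laplacian of the path on N vertices."""
    L = 2.0 * np.eye(N)
    L[0, 0] = L[-1, -1] = 1.0
    i = np.arange(N - 1)
    L[i, i + 1] = L[i + 1, i] = -1.0
    return L

N, a, b = 8, 2.0, 9.0
L = path_laplacian(N)
interior = np.arange(1, N - 1)
ends = [0, N - 1]
f_int = np.linalg.solve(L[np.ix_(interior, interior)],
                        -(L[np.ix_(interior, ends)] @ np.array([a, b])))
# Harmonic extension on a path = linear interpolation between the pinned ends.
print(np.allclose(f_int, np.linspace(a, b, N)[1:-1]))   # True
```

For \(N = 4\) this reproduces the closed forms \((2a+b)/3\) and \((a+2b)/3\) above.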

Analogy with the neural sheaf. On the neural sheaf path graph (Ch. 7), the input vertex \(v_x\) is pinned, the output vertex \(v_{\hat{y}}\) is free during inference (but pinned during training), and the intermediate vertices \(v_{z^{(\ell)}}, v_{a^{(\ell)}}\) are free. The harmonic extension of the pinned input is the forward pass output. This path graph with two pinned ends is the simplest prototype for that picture.

4.6 Exercises

4.1 (Spectral gap). Compute all eigenvalues of the Laplacian for the path graph \(P_n\) with \(n+1\) vertices. Show that \(\lambda_2 = 2(1 - \cos(\pi/(n+1))) \to 0\) as \(n \to \infty\). Interpret this in terms of how slowly heat spreads on a long path.

4.2 (Harmonic extension is linear interpolation). Prove that the harmonic extension on a path graph is always linear interpolation between the two endpoint values, regardless of \(n\).

4.3 (Weighted Laplacian). Extend Definition 4.1 to a weighted graph where each edge \(e\) carries a weight \(w_e > 0\). Show that the weighted Laplacian \(L_w = \delta^T W \delta\) (where \(W = \mathrm{diag}(w_e)\)) retains the properties in Theorems 4.1–4.3. This generalization corresponds to the reluctance mechanism in [2] and will reappear in Ch. 11 when we model pinned neurons.

4.4 (Discrete maximum principle). Prove that on a connected graph, a harmonic function achieves its maximum and minimum values at boundary vertices. (Hint: if \(f\) achieves its maximum at an interior vertex \(v\), use the harmonic condition to show all neighbors of \(v\) have the same value, then propagate.)

4.5 (The resistor network). In an electrical resistor network, edge \(e\) has resistance \(r_e\). By Kirchhoff’s laws, the voltages at vertices satisfy the weighted Laplacian equation with weights \(w_e = 1/r_e\). Show that twice the weighted Dirichlet energy, \(2E(x) = \sum_e (x_{v^+} - x_{v^-})^2/r_e\), equals the total power dissipated. This is the classical physical interpretation of the harmonic extension: minimum energy = minimum power dissipation.

4.6 (Project — spectral gap and depth). Implement the path-graph heat equation and plot the convergence rate as a function of path length \(n\) (corresponding to network depth). Compare to the theoretical bound from Exercise 4.1. This is the graph-level version of the empirical observation in [1] (Table 2) that deeper networks have smaller spectral gaps.

4.7 Coding lab

lab-02-graph-heat-equation — Build the graph heat equation for path graphs of varying lengths and observe convergence to the harmonic extension. Starting from random initial values, run the pinned dynamics with the two endpoints fixed, and watch the interior values relax to linear interpolation. Plot the energy gap \(E(x(t)) - E(f^*)\) on a log scale; its slope gives \(-2\alpha\lambda_{\min}(L[\Omega,\Omega])\). Then compare path graphs of length 2, 5, and 10, observing how the spectral gap shrinks and convergence slows — a preview of the depth-dependence of the neural sheaf.
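A possible starting point for the lab (a sketch; the function names and parameters are our own, and plotting is left out):

```python
import numpy as np

def path_laplacian(N):
    """Combinatorial Laplacian of the path on N vertices."""
    L = 2.0 * np.eye(N)
    L[0, 0] = L[-1, -1] = 1.0
    i = np.arange(N - 1)
    L[i, i + 1] = L[i + 1, i] = -1.0
    return L

def pinned_relaxation(N, a=0.0, b=1.0, alpha=1.0, dt=0.1, steps=20000, seed=0):
    """Euler-integrate the pinned heat flow; record the Dirichlet energy over time."""
    L = path_laplacian(N)
    x = np.random.default_rng(seed).normal(size=N)
    x[0], x[-1] = a, b                    # pin the endpoints
    energies = []
    for _ in range(steps):
        grad = L @ x
        grad[0] = grad[-1] = 0.0          # pinned coordinates do not move
        x -= dt * alpha * grad
        energies.append(0.5 * x @ L @ x)
    return x, np.array(energies)

for N in (4, 7, 12):
    x, E = pinned_relaxation(N)
    gap = np.linalg.eigvalsh(path_laplacian(N)[1:-1, 1:-1])[0]  # lambda_min of interior block
    ok = np.allclose(x, np.linspace(0.0, 1.0, N), atol=1e-6)
    print(f"N={N}  gap={gap:.4f}  converged={ok}")
    # On a log plot, E(t) - E(f*) decays with slope -2 * alpha * gap.
```

As \(N\) grows the printed gap shrinks, so the same number of steps buys less convergence: the depth effect in miniature.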

4.8 Further reading

The combinatorial Laplacian is a classical object; good introductions are in Chung’s Spectral Graph Theory and the survey by [3], which covers Hodge Laplacians on simplicial complexes including the graph case. The physical interpretation as a resistor network goes back to Kirchhoff. For the connection to random walks and Markov chains, see any graduate probability text. The key structural result used in later chapters — positive definiteness of the restricted Laplacian — follows from the maximum principle (Exercise 4.4), which also implies the discrete version of the Perron–Frobenius theorem.

4.9 FAQ / common misconceptions

Q: The Laplacian \(L = \delta^T\delta\) looks like a “squared” coboundary. Is that a coincidence? No. The same formula \(L = \delta^*\delta\) defines the Hodge Laplacian in Riemannian geometry and the Laplacian on differential forms. On a graph, \(\delta\) is the discrete analogue of the gradient operator \(\nabla\), so \(L = \delta^T\delta\) is the analogue of \(-\nabla^2\). The sheaf Laplacian \(\mathcal{L}_\mathcal{F} = \delta^T\delta\) in Ch. 5 is the same formula, with a richer \(\delta\) that involves the restriction maps.

Q: Does the sign convention for \(\delta\) matter? The formula \((\delta x)_e = x_{v^+} - x_{v^-}\) depends on the orientation of \(e\). Reversing the orientation negates \((\delta x)_e\) but does not change \(\|\delta x\|^2\) or \(L = \delta^T\delta\). For the Laplacian and Dirichlet energy, the orientation is arbitrary; for the coboundary itself, one must fix a convention. We follow the convention in [1]: restriction maps are placed on edges with the downstream (head) endpoint receiving the positive sign.
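A quick numerical confirmation that orientation choices wash out of \(L\) and the energy, while flipping the sign of the coboundary itself (a sketch on a 3-vertex path):

```python
import numpy as np

delta = np.array([
    [-1.0,  1.0,  0.0],   # e1: v0 -> v1
    [ 0.0, -1.0,  1.0],   # e2: v1 -> v2
])
flipped = delta.copy()
flipped[1] *= -1.0        # reverse the orientation of e2

x = np.array([2.0, -3.0, 5.0])
print(np.allclose(delta.T @ delta, flipped.T @ flipped))              # True: L unchanged
print(np.isclose(np.sum((delta @ x)**2), np.sum((flipped @ x)**2)))   # True: energy unchanged
print((delta @ x)[1], (flipped @ x)[1])                               # 8.0 -8.0: coboundary flips sign
```

Only the signed edge value depends on the convention; everything built from \(\|\delta x\|^2\) is orientation-independent.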

Q: Why is \(\lambda_1 = 0\) always? The constant vector \(\mathbf{1}\) satisfies \(\delta \mathbf{1} = 0\) on any graph: the difference between any two identical values is zero. So \(L\mathbf{1} = \delta^T\delta\mathbf{1} = 0\), and \(0\) is always an eigenvalue. On a connected graph, it has multiplicity 1.