**Theorem 2.** Given a connected graph G = (N,E),
partition its nodes into N- and N+ using the spectral bisection algorithm.
Then N- is connected. If no component v_{2}(n) of the second
eigenvector v_{2} is zero, then N+ is also connected.
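As a concrete illustration of the theorem (our own example, not part of the original argument), we can check it numerically on a path graph, whose second Laplacian eigenpair is known in closed form: v_{2}(k) = cos(pi*(k+1/2)/n) with lambda_{2} = 2 - 2*cos(pi/n). We assume here the usual convention that N+ holds the nodes with positive v_{2} entries and N- those with negative entries.

```python
import math

# Minimal numerical check of Theorem 2 on the path graph 0-1-2-3-4-5
# (a hypothetical example). The second eigenpair of the path-graph
# Laplacian is known in closed form:
#   v_2(k) = cos(pi*(k + 0.5)/n),  lambda_2 = 2 - 2*cos(pi/n).
n = 6
# Laplacian: degree on the diagonal, -1 for each edge (i, i+1).
L = [[0.0] * n for _ in range(n)]
for i in range(n - 1):
    L[i][i] += 1.0; L[i + 1][i + 1] += 1.0
    L[i][i + 1] -= 1.0; L[i + 1][i] -= 1.0

lam2 = 2 - 2 * math.cos(math.pi / n)
v2 = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]

# Verify L*v2 = lambda_2*v2, so v2 really is the second eigenvector.
for i in range(n):
    lhs = sum(L[i][j] * v2[j] for j in range(n))
    assert abs(lhs - lam2 * v2[i]) < 1e-12

# Partition by the sign of v2: each part is a contiguous (hence
# connected) run of path nodes, as Theorem 2 predicts.
N_plus = [k for k in range(n) if v2[k] > 0]
N_minus = [k for k in range(n) if v2[k] < 0]
print(N_plus, N_minus)
```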

To prove this theorem, we need several other standard results from linear algebra, some of which we state without proof.

**Definition.** The *spectral radius rho(A)* of a matrix A is the
largest absolute value of any eigenvalue:

rho(A) = max_{i}| lambda_{i}(A) |

**Definition.** A *nonnegative matrix A* is a matrix all of
whose entries are nonnegative. This is written A >= 0.
A *positive matrix A* is a matrix all of whose entries are positive,
written A>0.
We also refer to nonnegative and positive vectors, with similar notation.

**Definition.** The *graph G(A) of an n-by-n matrix A* is a
graph with n nodes, and an edge e=(i,j) if and only if A(i,j) != 0.

**Lemma 1.** Let A be an n-by-n nonnegative matrix, and suppose G(A) is
connected. Then sum_{m=0,...,n-1} A^{m} is a positive matrix.

*Proof of Lemma 1.* The (i,j) entry of A^{m} is a sum of
many terms of the form

A(i,k_{1}) * A(k_{1},k_{2}) * A(k_{2},k_{3}) * ... * A(k_{m-2},k_{m-1}) * A(k_{m-1},j)

where the sum is over all n^{m-1} ways of choosing the indices
k_{1},...,k_{m-1}. Each term is nonnegative, and a term is positive
exactly when (i,k_{1}), (k_{1},k_{2}), ..., (k_{m-1},j) are all edges of
G(A), i.e. when there is a path of length m from node i to node j.
Thus the (i,j) entry of A^{m} is positive if and only if such a path
exists. Since G(A) is connected, every pair i != j is joined by a path
of some length m <= n-1, and A^{0} = I takes care of i = j. Therefore
every entry of sum_{m=0,...,n-1} A^{m} is positive. QED.
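A quick numerical sanity check of Lemma 1 (our own illustration, using plain Python lists as matrices): for the adjacency matrix of a connected path on 4 nodes, the sum of powers A^{0} through A^{3} is strictly positive.

```python
# Sanity check of Lemma 1 in pure Python (an illustrative example):
# for a connected graph on n nodes, sum_{m=0}^{n-1} A^m has all entries > 0.

def matmul(X, Y):
    """Naive product of small dense matrices stored as lists of lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Adjacency matrix of the connected path graph 0-1-2-3 (n = 4).
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
n = len(A)

S = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # A^0 = I
P = [row[:] for row in S]                                       # current power A^m
for m in range(1, n):
    P = matmul(P, A)                                            # now P = A^m
    S = [[S[i][j] + P[i][j] for j in range(n)] for i in range(n)]

assert all(S[i][j] > 0 for i in range(n) for j in range(n))
print("sum of A^0..A^3 is strictly positive")
```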

**Definition.** A symmetric matrix with all nonnegative eigenvalues is
called *positive semidefinite*. If the eigenvalues are all positive,
it is called *positive definite*.

**Lemma 2.** If A is n-by-n and symmetric with eigenvalues
lambda_{1} <= ... <= lambda_{n}, then

lambda_{1} = min_{v!=0} v'*A*v / v'*v
lambda_{n} = max_{v!=0} v'*A*v / v'*v

*Proof of Lemma 2.* It follows simply from the eigendecomposition
A = Q*Lambda*Q', where Q is an orthogonal matrix whose columns are
eigenvectors, and Lambda = diag(lambda_{1},...,lambda_{n}), using the
substitution

v'*A*v / v'*v = v'*Q*Lambda*Q'*v / v'*Q*Q'*v = y'*Lambda*y / y'*y
              = sum_{i=1,...,n} lambda(i)*y(i)^{2} / sum_{i=1,...,n} y(i)^{2}

where y = Q'*v. Details are left to the reader.
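Lemma 2 can be seen in action on a small example of our own: for A = [[2,1],[1,2]], with eigenvalues 1 and 3, every Rayleigh quotient v'*A*v / v'*v falls between those two eigenvalues.

```python
import random

# Numerical illustration of Lemma 2 (our own example): for the symmetric
# matrix A = [[2, 1], [1, 2]], the eigenvalues are lambda_1 = 1 and
# lambda_2 = 3, so every Rayleigh quotient v'*A*v / v'*v lies in [1, 3].
A = [[2.0, 1.0], [1.0, 2.0]]
lam_min, lam_max = 1.0, 3.0

random.seed(0)
for _ in range(1000):
    v = [random.uniform(-1, 1), random.uniform(-1, 1)]
    vv = v[0] * v[0] + v[1] * v[1]
    if vv < 1e-12:
        continue
    Av = [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]
    rq = (v[0] * Av[0] + v[1] * Av[1]) / vv
    assert lam_min - 1e-9 <= rq <= lam_max + 1e-9

# The bounds are attained at the eigenvectors (1,-1) and (1,1).
print("all Rayleigh quotients lie in [1, 3]")
```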

**Cauchy Interlace Theorem** (R. Horn and C. Johnson, "Matrix Analysis",
1988). Let A be an n-by-n symmetric matrix with eigenvalues
lambda_{1} <= ... <= lambda_{n}. Let B = A(1:n-1,1:n-1), the leading
(n-1)-by-(n-1) submatrix of A. Let the eigenvalues of B be
mu_{1} <= ... <= mu_{n-1}.
Then for all i, lambda_{i} <= mu_{i} <= lambda_{i+1}.
Applying this result recursively, we can show that if
C = A(i:j, i:j) for any i and j, and the eigenvalues of C are
chi_{1} <= ... <= chi_{j-i+1},
then A has at least k eigenvalues <= chi_{k}.
In particular lambda_{1} <= chi_{1}.
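The interlacing inequalities can be checked on a small example of our own where all the eigenvalues are known in closed form: the 3-by-3 tridiagonal matrix [[2,-1,0],[-1,2,-1],[0,-1,2]] and its leading 2-by-2 submatrix.

```python
import math

# Numerical check of the Cauchy Interlace Theorem (our own example).
# A = [[2,-1,0],[-1,2,-1],[0,-1,2]] has eigenvalues, in closed form:
#   2 - sqrt(2), 2, 2 + sqrt(2).
# Its leading 2x2 submatrix B = [[2,-1],[-1,2]] has eigenvalues 1 and 3.
lam = [2 - math.sqrt(2), 2.0, 2 + math.sqrt(2)]   # lambda_1 <= lambda_2 <= lambda_3
mu = [1.0, 3.0]                                   # mu_1 <= mu_2

# Interlacing: lambda_i <= mu_i <= lambda_{i+1} for i = 1, 2.
for i in range(2):
    assert lam[i] <= mu[i] <= lam[i + 1]
print("interlacing holds:", lam, mu)
```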

**Corollary to the Cauchy Interlace Theorem.** Let the symmetric
matrix A be positive (semi)definite. Then any submatrix C=A(i:j,i:j)
is also positive (semi)definite.

**Lemma 3.** If A is symmetric and positive (semi)definite, so
is X'*A*X for any nonsingular matrix X.

*Proof of Lemma 3.* From Lemma 2, the smallest eigenvalue
of X'*A*X is

min_{v!=0} v'*X'*A*X*v / v'*v
   = min_{v!=0} ( v'*X'*A*X*v / v'*X'*X*v ) * ( v'*X'*X*v / v'*v )
  >= min_{v!=0} ( v'*X'*A*X*v / v'*X'*X*v ) * min_{v!=0} ( v'*X'*X*v / v'*v )
   = min_{X*v!=0} ( (X*v)'*A*(X*v) / (X*v)'*(X*v) ) * min_{v!=0} ( v'*X'*X*v / v'*v )
   = lambda_{1}(A) * lambda_{1}(X'*X)

Since v'*X'*X*v = (X*v)'*(X*v) is a sum of squares, it is nonnegative;
because X is nonsingular, X*v != 0 whenever v != 0, so
lambda_{1}(X'*X) > 0. Thus the smallest eigenvalue of X'*A*X is at least
lambda_{1}(A)*lambda_{1}(X'*X), which is nonnegative (positive) when A
is positive semidefinite (definite). QED.
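Here is a small congruence-transformation check of Lemma 3 (our own example; matrix names are hypothetical). For 2-by-2 symmetric matrices, positive definiteness can be verified by Sylvester's criterion: positive (1,1) entry and positive determinant.

```python
# Illustration of Lemma 3 (our own example): if A is symmetric positive
# definite and X is nonsingular, then X'*A*X is positive definite.
# For 2x2 symmetric matrices, definiteness is checked via Sylvester's
# criterion: positive (1,1) entry and positive determinant.

def congruence(A, X):
    """Return X'*A*X for 2x2 matrices given as lists of lists."""
    B = [[sum(A[i][k] * X[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]                      # B = A*X
    return [[sum(X[k][i] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]                   # X'*B

def is_pos_def_2x2(M):
    return M[0][0] > 0 and M[0][0] * M[1][1] - M[0][1] * M[1][0] > 0

A = [[2.0, 1.0], [1.0, 2.0]]   # eigenvalues 1 and 3: positive definite
X = [[1.0, 5.0], [0.0, 1.0]]   # det(X) = 1: nonsingular

C = congruence(A, X)
assert is_pos_def_2x2(A) and is_pos_def_2x2(C)
print("X'*A*X is positive definite:", C)
```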

**Lemma 4.** If A is a symmetric matrix with rho(A) < 1,
then I-A is invertible and

(I-A)^{-1}= sum_{i=0,...,infinity}A^{i}

*Proof of Lemma 4.* Since the eigenvalues of A are strictly between
-1 and 1, the eigenvalues of I-A are strictly between 0 and 2, so I-A
is positive definite and so nonsingular. Writing the eigendecomposition
A = Q*Lambda*Q', we see that A^{i} = Q*Lambda^{i}*Q', so the entries
of A^{i} go to zero geometrically, like rho(A)^{i} or faster.
Thus sum_{i=0,...,infinity} A^{i} converges. Since

(I-A) * sum_{i=0,...,m} A^{i} = I - A^{m+1}

and A^{m+1} -> 0 as m -> infinity, it is easy to see that the partial
sums S(m) = sum_{i=0,...,m} A^{i} satisfy (I-A)*S(m) -> I, i.e.
S(m) -> (I-A)^{-1}. QED.
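Lemma 4 is easy to verify numerically (our own example): for A = [[0, 0.5], [0.5, 0]] we have rho(A) = 1/2 < 1, and (I-A)^{-1} = [[4/3, 2/3], [2/3, 4/3]] by hand.

```python
# Numerical check of Lemma 4 (our own example): the Neumann series
# sum_{i>=0} A^i converges to (I-A)^{-1} when rho(A) < 1.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0.0, 0.5], [0.5, 0.0]]       # symmetric, rho(A) = 0.5 < 1
S = [[1.0, 0.0], [0.0, 1.0]]       # partial sum, starts at A^0 = I
P = [row[:] for row in S]          # current power A^i
for _ in range(60):                # 60 terms: truncation error ~ 0.5^60
    P = matmul(P, A)
    S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]

expected = [[4/3, 2/3], [2/3, 4/3]]  # (I-A)^{-1}, computed by hand
for i in range(2):
    for j in range(2):
        assert abs(S[i][j] - expected[i][j]) < 1e-12
print("Neumann series matches (I-A)^{-1}")
```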

**Partial proof of Theorem 2.**
(M. Fiedler, "A property of eigenvectors of nonnegative symmetric matrices
and its application to graph theory", Czech. Math. J. 25:619--637, 1975.)
We consider the special (but generic) case where v_{2} is unique
(modulo multiplication by a scalar) and v_{2} has only nonzero entries.
We will use proof by contradiction: Assume that N+ is not connected,
and in fact consists of k connected components. Suppose for illustration
that k=2 (the general case is no harder). Then we can renumber the rows
and columns of A so that

         n1   n2   n3
    [ A11   0   A13 ]  n1
A = [  0   A22  A23 ]  n2
    [ A13' A23' A33 ]  n3

          [ v1 ]  n1
v_{2} =   [ v2 ]  n2
          [ v3 ]  n3

where v1 > 0, v2 > 0 and v3 < 0. The two zero blocks in A occur because
there are no edges connecting the first n1 nodes (the first connected
component of N+) and the following n2 nodes (the second connected
component of N+). Then the first block row of A*v_{2} = lambda_{2}*v_{2}
yields

A11*v1 + A13*v3 = lambda_{2}*v1 (1)

Note that A13 <= 0, and v3 < 0, so each term in the product A13*v3 is nonnegative and thus A13*v3 >= 0. In fact A13*v3 is nonzero, since otherwise A13 would have to be zero, and so the first n1 nodes alone would form a connected component of G, contradicting our assumption that G is connected.

By the Corollary to the Cauchy Interlace Theorem above, A11 is positive semidefinite since A is. Now let eps be any positive number. Then adding eps*v1 to both sides of (1) yields

(eps*I + A11)*v1 + A13*v3 = (eps+lambda_{2})*v1 (2)

The eigenvalues of eps*I + A11 are all at least eps, so eps*I + A11 is
positive definite. Write eps*I + A11 = D - N, where D is diagonal, and
N >= 0 is zero on the diagonal (-N holds all the offdiagonal entries of
eps*I + A11). Then

eps*I + A11 = D - N = Dh * ( I - Dh^{-1}*N*Dh^{-1} ) * Dh = Dh * (I-M) * Dh

where Dh = D^{1/2} = diag(sqrt(D_{1,1}),...,sqrt(D_{n1,n1})) and
M = Dh^{-1}*N*Dh^{-1}. By Lemma 3, I-M = Dh^{-1}*(D-N)*Dh^{-1} is
positive definite, since D-N is positive definite and Dh^{-1} is
nonsingular. Since the eigenvalues of I-M are 1 minus the eigenvalues
of M, the eigenvalues of M must be less than 1. All the eigenvalues of
M must also be greater than -1, because by Lemma 2

lambda_{1}(M) = min_{v!=0} v'*M*v / v'*v
            >= min_{v!=0} -|v|'*M*|v| / v'*v      since M >= 0
             = -max_{v!=0} |v|'*M*|v| / v'*v
            >= -max_{v!=0} v'*M*v / v'*v
             = -lambda_{n1}(M) > -1

Thus |lambda_{i}(M)| < 1 for all i, i.e. rho(M) < 1, so by Lemma 4

Y = (eps*I + A11)^{-1} = Dh^{-1} * (I-M)^{-1} * Dh^{-1} = Dh^{-1} * ( sum_{i=0,...,infinity} M^{i} ) * Dh^{-1}

is nonnegative, since M and its powers M^{i} are nonnegative. In fact
Y > 0: G(M) is the same graph as G(A11), which is connected, so by
Lemma 1 the partial sum sum_{i=0,...,n1-1} M^{i} is already positive.

Multiplying equation (2) by Y yields

v1 + Y*A13*v3 = (eps+lambda_{2})*Y*v1

Multiplying by v1' yields

v1'*v1 + v1'*Y*A13*v3 = (eps+lambda_{2}) * v1'*Y*v1

so by Lemma 2

(eps+lambda_{2}) * lambda_{n1}(Y) = max_{v!=0} (eps+lambda_{2}) * v'*Y*v / v'*v
                                 >= (eps+lambda_{2}) * v1'*Y*v1 / v1'*v1
                                  = (v1'*v1 + v1'*Y*A13*v3) / v1'*v1
                                  = 1 + v1'*Y*A13*v3 / v1'*v1

As stated above, A13*v3 >= 0 and is nonzero. Since Y > 0, Y*A13*v3 > 0,
and so v1'*Y*A13*v3 > 0. Thus

(eps+lambda_{2}) * lambda_{n1}(Y) > 1

Since the eigenvalues of Y are positive and the reciprocals of the
eigenvalues of eps*I + A11, we get

(eps+lambda_{2}) / lambda_{1}(eps*I + A11) > 1

Since lambda_{1}(eps*I + A11) = eps + lambda_{1}(A11) > 0, this means
eps + lambda_{2} > eps + lambda_{1}(A11), i.e.

lambda_{1}(A11) < lambda_{2}

The same logic applies to A22, so lambda_{1}(A22) < lambda_{2}.
Therefore the matrix

[ A11   0  ]
[  0   A22 ]

which is the submatrix A(1:n1+n2, 1:n1+n2) of A, has two eigenvalues
less than lambda_{2} (namely lambda_{1}(A11) and lambda_{1}(A22)). By
the Cauchy Interlace Theorem, applied recursively as described above,
A itself must then have at least two eigenvalues less than lambda_{2},
contradicting the fact that lambda_{2} is the second smallest
eigenvalue of A. Therefore N+ must be connected. QED.