Section 28.6 Theory

Subsection 28.6.1 Properties of block-diagonal form

First, we'll record some of the properties of block-diagonal matrices explored in the latter part of Discovery guide 28.2. As we have already sufficiently explored these patterns there, we state these properties without proof.

Subsection 28.6.2 Invariant subspaces

As usual, to understand a subspace it is enough to understand a spanning set for the subspace, and this is true for invariant subspaces as well.

Recall that with an “if and only if” statement, there are two things to prove.

(\(\Rightarrow\)).

Assume that \(W\) is \(A\)-invariant. Then, by definition, for every vector \(\uvec{w}\) in \(W\) the vector \(A\uvec{w}\) is again in \(W\text{.}\) Since each of the spanning vectors \(\uvec{w}_1,\uvec{w}_2,\dotsc,\uvec{w}_\ell\) lies in \(W\text{,}\) clearly each of the vectors \(A\uvec{w}_1,A\uvec{w}_2,\dotsc,A\uvec{w}_\ell\) lies again in \(W\text{.}\)

(\(\Leftarrow\)).

Assume that each of the vectors \(A\uvec{w}_1,A\uvec{w}_2,\dotsc,\uvec{w}_\ell\) lies in \(W\text{.}\) We must prove that for every vector \(\uvec{w}\) in \(W\text{,}\) the vector \(A\uvec{w}\) will again lie in \(W\text{.}\) Since \(\{\uvec{w}_1,\dotsc,\uvec{w}_\ell\}\) is a spanning set for \(W\text{,}\) a vector \(\uvec{w}\) in \(W\) can be expressed as a linear combination of these spanning vectors, say

\begin{equation*} \uvec{w} = c_1 \uvec{w}_1 + c_2 \uvec{w}_2 + \dotsb + c_\ell \uvec{w}_\ell \text{.} \end{equation*}

Then,

\begin{align*} A \uvec{w} \amp = A (c_1\uvec{w}_1 + c_2\uvec{w}_2 + \dotsb + c_\ell\uvec{w}_\ell) \\ \amp = c_1 A \uvec{w}_1 + c_2 A \uvec{w}_2 + \dotsb + c_\ell A \uvec{w}_\ell \text{.} \end{align*}

Above we have expressed \(A \uvec{w}\) as a linear combination of the vectors \(A \uvec{w}_1, A \uvec{w}_2, \dotsc, A \uvec{w}_\ell\text{,}\) each of which, by assumption, lies in \(W\text{.}\) Therefore, since \(W\) is a subspace, it is closed under vector operations, and so we also have \(A\uvec{w}\) in \(W\text{.}\)
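To make the criterion concrete, here is a minimal numerical sketch (our own illustration, not part of the text), assuming NumPy: a subspace given by spanning columns is \(A\)-invariant exactly when appending each transformed spanning vector to the spanning set does not raise the rank. The helper name is ours.

import numpy as np

def is_invariant(A, W):
    """W: matrix whose columns span the subspace; A: square matrix."""
    r = np.linalg.matrix_rank(W)
    # A @ w lies in the span of the columns of W exactly when
    # appending it as an extra column does not increase the rank.
    return all(
        np.linalg.matrix_rank(np.column_stack([W, A @ w])) == r
        for w in W.T
    )

# The xy-plane in R^3, spanned by e1 and e2.
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])
print(is_invariant(A, W))    # True: A maps the plane into itself

C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
print(is_invariant(C, W))    # False: C moves e1 out of the xy-plane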

Here is a special example of an invariant subspace that will become important in our further study of matrix forms. It also helps explain diagonal form as a special case of block-diagonal form.

The point that must be verified is that if \(\uvec{w}\) is a vector in the eigenspace \(E_\lambda (A)\text{,}\) then the transformed vector \(\uvec{u} = A \uvec{w}\) is also in that eigenspace. Use the definition of eigenvector (i.e. \(A \uvec{x} = \lambda \uvec{x}\)) in two ways: as an assumption in the case \(\uvec{x} = \uvec{w}\text{,}\) and as the condition to be verified in the case \(\uvec{x} = \uvec{u}\text{.}\)
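Carrying out that verification explicitly, using only linearity and the eigenvector condition: if \(A \uvec{w} = \lambda \uvec{w}\text{,}\) then for \(\uvec{u} = A \uvec{w}\) we compute

\begin{equation*} A \uvec{u} = A (A \uvec{w}) = A (\lambda \uvec{w}) = \lambda (A \uvec{w}) = \lambda \uvec{u} \text{,} \end{equation*}

so \(\uvec{u}\) satisfies the eigenvector condition for the same eigenvalue \(\lambda\) (or is the zero vector), and hence lies in \(E_\lambda(A)\text{.}\)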

Subsection 28.6.3 Independent subspaces

Here is a special case of testing independence of subspaces, in the case of two subspaces.

Again, with an “if and only if” statement, there are two things to prove. In both cases, begin with bases

\begin{align*} \basisfont{B}_1 \amp = \{ \uvec{u}_1, \uvec{u}_2, \dotsc, \uvec{u}_{d_1} \} \text{,} \\ \basisfont{B}_2 \amp = \{ \uvec{w}_1, \uvec{w}_2, \dotsc, \uvec{w}_{d_2} \} \text{,} \end{align*}

for \(W_1\) and \(W_2\text{,}\) respectively, where \(d_1 = \dim W_1\) and \(d_2 = \dim W_2\text{.}\)

(\(\Rightarrow\)).

Assume that \(W_1\) and \(W_2\) are independent. By definition, this means that the combined set

\begin{equation*} \{ \uvec{u}_1, \dotsc, \uvec{u}_{d_1}, \uvec{w}_1, \dotsc, \uvec{w}_{d_2} \} \end{equation*}

is linearly independent.

We must prove that \(W_1 \cap W_2 = \{\zerovec\}\text{.}\) So suppose \(\uvec{v}\) lies in the intersection \(W_1 \cap W_2\text{.}\) If we are to prove that \(W_1\cap W_2\) consists of only the zero vector, then we must prove that \(\uvec{v} = \zerovec\text{.}\)

Since \(\uvec{v}\) is in \(W_1\text{,}\) there exist scalars \(a_1, a_2, \dotsc,a_{d_1}\) such that

\begin{equation*} \uvec{v} = a_1 \uvec{u}_1 + a_2 \uvec{u}_2 + \dotsb + a_{d_1} \uvec{u}_{d_1} \text{.} \end{equation*}

But \(\uvec{v}\) also lies in \(W_2\text{,}\) and so there exist scalars \(b_1, b_2, \dotsc, b_{d_2}\) such that

\begin{equation*} \uvec{v} = b_1 \uvec{w}_1 + b_2 \uvec{w}_2 + \dotsb + b_{d_2} \uvec{w}_{d_2} \text{.} \end{equation*}

Now, from the vector space identity \(\uvec{v} - \uvec{v} = \zerovec\text{,}\) we can subtract these two linear combination expressions for \(\uvec{v}\) to get

\begin{equation*} a_1 \uvec{u}_1 + a_2 \uvec{u}_2 + \dotsb + a_{d_1} \uvec{u}_{d_1} + (-b_1) \uvec{w}_1 + (-b_2) \uvec{w}_2 + \dotsb + (-b_{d_2}) \uvec{w}_{d_2} = \zerovec\text{.} \end{equation*}

Linear independence of the combined set of basis vectors says that all of the scalars above are zero, so that

\begin{equation*} \uvec{v} = 0 \uvec{u}_1 + 0 \uvec{u}_2 + \dotsb + 0 \uvec{u}_{d_1} = 0 \uvec{w}_1 + 0 \uvec{w}_2 + \dotsb + 0 \uvec{w}_{d_2} = \zerovec\text{,} \end{equation*}

as desired.

(\(\Leftarrow\)).

Assume that \(W_1 \cap W_2 = \{\zerovec\}\text{.}\) We must prove that \(W_1,W_2\) are independent. That is, we need to verify that the combined collection \(\{ \uvec{u}_1, \dotsc, \uvec{u}_{d_1}, \uvec{w}_1, \dotsc, \uvec{w}_{d_2} \}\) remains linearly independent.

Setting up the test for linear independence, we assume that

\begin{gather} a_1 \uvec{u}_1 + a_2 \uvec{u}_2 + \dotsb + a_{d_1} \uvec{u}_{d_1} + b_1 \uvec{w}_1 + b_2 \uvec{w}_2 + \dotsb + b_{d_2} \uvec{w}_{d_2} = \zerovec\text{.}\label{equation-block-diag-theory-indep-subsp-iff-zero-intersect-test}\tag{\(\star\)} \end{gather}

Set

\begin{equation*} \uvec{v} = a_1 \uvec{u}_1 + a_2 \uvec{u}_2 + \dotsb + a_{d_1} \uvec{u}_{d_1} \text{.} \end{equation*}

Then \(\uvec{v}\) is in \(W_1\text{.}\) But by moving all the \(\uvec{w}_j\) terms to the other side of (\(\star\)), we obtain a second expression for \(\uvec{v}\text{,}\)

\begin{equation*} \uvec{v} = - b_1 \uvec{w}_1 - b_2 \uvec{w}_2 - \dotsb - b_{d_2} \uvec{w}_{d_2} \text{,} \end{equation*}

so that \(\uvec{v}\) is also in \(W_2\text{.}\) Therefore, \(\uvec{v} \in W_1 \cap W_2\text{.}\) By assumption, this intersection contains only the zero vector, so we must have \(\uvec{v} = \zerovec\text{.}\) Our two different expressions for \(\uvec{v}\) as linear combinations lead to

\begin{gather*} a_1 \uvec{u}_1 + a_2 \uvec{u}_2 + \dotsb + a_{d_1} \uvec{u}_{d_1} = \zerovec \text{,} \\ (-b_1) \uvec{w}_1 + (-b_2) \uvec{w}_2 + \dotsb + (-b_{d_2}) \uvec{w}_{d_2} = \zerovec \text{.} \end{gather*}

These expressions are essentially the test for linear independence, one for basis \(\basisfont{B}_1\) and one for basis \(\basisfont{B}_2\text{.}\) As each of these bases must be an independent set, these two tests for independence lead to the conclusions

\begin{gather*} a_1 = a_2 = \dotsb = a_{d_1} = 0 \text{,} \\ b_1 = b_2 = \dotsb = b_{d_2} = 0 \text{.} \end{gather*}

So, in fact, all of the coefficients in (\(\star\)) must be zero, and the test for independence has been confirmed.
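As a computational companion (our own sketch, assuming NumPy; the helper name is ours), the theorem just proved reduces the two-subspace independence test to a rank computation on the combined basis columns: the pair is independent, equivalently has intersection \(\{\zerovec\}\text{,}\) exactly when no rank is lost on combining.

import numpy as np

def independent(U, W):
    """U, W: matrices whose columns form bases of the two subspaces."""
    combined = np.column_stack([U, W])
    # Independent iff the combined columns stay linearly independent.
    return np.linalg.matrix_rank(combined) == U.shape[1] + W.shape[1]

# Two distinct lines through the origin in R^2: intersection is {0}.
U = np.array([[1.0], [0.0]])
W = np.array([[1.0], [1.0]])
print(independent(U, W))   # True

# A plane in R^3 and a line lying inside it: not independent.
P = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
L = np.array([[1.0], [1.0], [0.0]])
print(independent(P, L))   # False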

Warning 28.6.5.

The condition \(W_1 \cap W_2 = \{\zerovec\}\) is not saying that the intersection of \(W_1\) and \(W_2\) is empty! In fact, it is impossible for two subspaces of any vector space to be disjoint (i.e. have empty intersection), since every subspace of a vector space contains the zero vector. The condition \(W_1 \cap W_2 = \{\zerovec\}\) is saying that the intersection of \(W_1\) and \(W_2\) contains only the zero vector.

Can a collection of independent subspaces always be “completed” to a complete set of independent subspaces? In the finite-dimensional case, the answer is yes, for essentially the same reason that Proposition 20.5.4 is true.

Choose a basis for \(W\text{,}\) and use Proposition 20.5.4 to enlarge it to a basis for all of \(V\text{.}\) Take \(W'\) to be the span of the new vectors that were used to enlarge the initial basis for \(W\text{,}\) and verify that the pair \(W,W'\) satisfies the definition of complete set of independent subspaces. It might be convenient to use Theorem 28.6.4 to check independence of the pair.
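Here is a rough computational sketch of that construction (our own, assuming NumPy; greedily appending standard basis vectors is one convenient way to enlarge the basis, not the text's prescribed method).

import numpy as np

def complement_basis(W):
    """W: matrix whose columns form a basis of a subspace of R^n.
    Returns a basis matrix for a complementary subspace W'."""
    n = W.shape[0]
    current = W
    extra = []
    # Append each standard basis vector that enlarges the span.
    for e in np.eye(n).T:
        trial = np.column_stack([current, e])
        if np.linalg.matrix_rank(trial) > np.linalg.matrix_rank(current):
            current = trial
            extra.append(e)
    return np.column_stack(extra)

W = np.array([[1.0], [1.0], [0.0]])   # a line in R^3
Wp = complement_basis(W)
# The combined columns form a basis of R^3, so W, W' are independent
# and together span the whole space.
print(np.linalg.matrix_rank(np.column_stack([W, Wp])))   # 3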

Apply Proposition 28.6.6, taking \(W\) to be the span of the initial collection of subspaces \(W_1,W_2,\dotsc,W_m\) all together.

Similar to how a matrix can be broken into blocks, a complete set of independent subspaces lets us break up a vector in the space into constituent parts.

Choose a collection of bases \(\basisfont{B}_1,\basisfont{B}_2,\dotsc,\basisfont{B}_m\) for the subspaces \(W_1,W_2,\dotsc,W_m\text{,}\) and express \(\uvec{w}\) as a linear combination of these basis vectors taken all together.
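A small numerical sketch of this decomposition (our own illustration, assuming NumPy): solving one linear system in the combined basis and then regrouping the terms by subspace recovers the constituent parts.

import numpy as np

B1 = np.array([[1.0], [1.0], [0.0]])                 # basis for W1 (a line)
B2 = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])  # basis for W2 (a plane)
B = np.column_stack([B1, B2])                        # combined basis of R^3

w = np.array([2.0, -1.0, 4.0])
c = np.linalg.solve(B, w)        # coordinates relative to the combined basis

w1 = B1 @ c[:1]                  # the part of the combination lying in W1
w2 = B2 @ c[1:]                  # the part lying in W2
print(w1, w2, np.allclose(w1 + w2, w))   # [-1. -1.  0.] [3. 0. 4.] True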

As mentioned in Subsubsection 28.4.4.1, the concept of independent subspaces is in fact a generalization of the concept of a linearly independent set of vectors. We'll formally state this fact now.

We leave this proof to you, the reader.

Finally, here is a special example of a collection of independent subspaces that will become important in our further study of matrix forms. Along with Proposition 28.6.3, it helps explain diagonal form as a special case of block-diagonal form.
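As a quick numerical illustration of the kind of example meant here (ours, not from the text): for a matrix with distinct eigenvalues \(2\) and \(5\text{,}\) bases of the two eigenspaces combine into a linearly independent set, so the eigenspaces form an independent collection.

import numpy as np

A = np.diag([2.0, 2.0, 5.0])

E2 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # basis of E_2(A)
E5 = np.array([[0.0], [0.0], [1.0]])                 # basis of E_5(A)

# Sanity check: these really are eigenvectors for 2 and 5.
print(np.allclose(A @ E2, 2 * E2), np.allclose(A @ E5, 5 * E5))  # True True

# Combined basis columns are linearly independent, so the
# eigenspaces form an independent pair of subspaces.
combined = np.column_stack([E2, E5])
print(np.linalg.matrix_rank(combined) == E2.shape[1] + E5.shape[1])  # True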