Section 22.5 Examples
Subsection 22.5.1 Carrying out the diagonalization procedure
Let’s start with some examples from Discovery guide 22.1.
Example 22.5.1. A diagonalizable matrix.
From Discovery 22.2. We want to compute a basis for each eigenspace, using the same method as in the examples of Section 21.5. First, we form the matrix
\begin{equation*}
\lambda I - A = \begin{bmatrix}
\lambda+1 \amp -9 \amp 0 \\
0 \amp \lambda-2 \amp 0 \\
0 \amp 3 \amp \lambda+1
\end{bmatrix},
\end{equation*}
and compute its determinant,
\begin{equation*}
c_A(\lambda)
= \det(\lambda I - A)
= (\lambda+1)\bigl[(\lambda-2)(\lambda+1) - 0\cdot 3 \bigr]
= (\lambda+1)^2(\lambda-2),
\end{equation*}
to obtain the characteristic polynomial of \(A\text{.}\) The eigenvalues are \(\lambda_1 = -1\) and \(\lambda_2 = 2\text{.}\) The first eigenvalue is repeated as a root of the characteristic polynomial, while the second eigenvalue is not, so we have algebraic multiplicities \(m_1 = 2\) for \(\lambda_1\) and \(m_2 = 1\) for \(\lambda_2\text{.}\) Therefore, we will need two linearly independent eigenvectors from \(E_{\lambda_1}(A)\) and one more from \(E_{\lambda_2}(A)\text{.}\)
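That is, the algebraic multiplicities account for the full size of the matrix,
\begin{equation*}
m_1 + m_2 = 2 + 1 = 3,
\end{equation*}
so \(A\) will be diagonalizable exactly when each eigenspace supplies a number of linearly independent eigenvectors equal to the corresponding algebraic multiplicity.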
We determine eigenspace bases by row reducing, assigning parameters, and extracting basis vectors. Here are the results for this matrix:
\begin{align*}
(-1)I - A \amp=
\left[\begin{array}{rrr}
0 \amp -9 \amp 0 \\
0 \amp -3 \amp 0 \\
0 \amp 3 \amp 0
\end{array}\right]
\amp \amp\rowredarrow \amp
\amp\begin{bmatrix}
0 \amp 1 \amp 0 \\
0 \amp 0 \amp 0 \\
0 \amp 0 \amp 0
\end{bmatrix}
\end{align*}
\begin{equation*}
\implies \qquad
E_{\lambda_1}(A) =
\Span\left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix} \right\},
\end{equation*}
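For instance, the basis for \(E_{\lambda_1}(A)\) arises as follows: the reduced matrix represents the single equation \(x_2 = 0\text{,}\) leaving \(x_1\) and \(x_3\) free. Assigning parameters \(x_1 = s\) and \(x_3 = t\) expresses the general solution as
\begin{equation*}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} s \\ 0 \\ t \end{bmatrix}
= s \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
+ t \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix},
\end{equation*}
and the basis vectors are the coefficient vectors attached to the parameters.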
\begin{align*}
2I - A \amp=
\left[\begin{array}{rrr}
3 \amp -9 \amp 0 \\
0 \amp 0 \amp 0 \\
0 \amp 3 \amp 3
\end{array}\right]
\amp \amp\rowredarrow \amp
\amp\begin{bmatrix}
1 \amp 0 \amp 3 \\
0 \amp 1 \amp 1 \\
0 \amp 0 \amp 0
\end{bmatrix}
\end{align*}
\begin{equation*}
\implies \qquad
E_{\lambda_2}(A) = \Span\left\{ \left[\begin{array}{r}-3 \\ -1 \\ 1\end{array}\right] \right\}.
\end{equation*}
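Similarly, the reduced matrix for \(\lambda_2\) represents the equations \(x_1 + 3x_3 = 0\) and \(x_2 + x_3 = 0\text{;}\) assigning parameter \(x_3 = t\) gives
\begin{equation*}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \left[\begin{array}{r} -3t \\ -t \\ t \end{array}\right]
= t \left[\begin{array}{r} -3 \\ -1 \\ 1 \end{array}\right].
\end{equation*}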
Notice that \(\dim\bbrac{E_{\lambda_1}(A)} = 2\text{,}\) so geometric multiplicity equals algebraic multiplicity for \(\lambda_1\text{.}\) Also, \(\dim\bbrac{E_{\lambda_2}(A)} = 1\text{,}\) so again geometric multiplicity equals algebraic multiplicity for \(\lambda_2\text{.}\)
Let’s pause to consider the result for eigenvalue \(\lambda_2\text{.}\) We could have predicted its geometric multiplicity from the relationship between algebraic and geometric multiplicities stated in Subsection 22.4.2. If we believe that a geometric multiplicity can never be greater than the corresponding algebraic multiplicity, then eigenspace \(E_{\lambda_2}(A)\) in this example can never have dimension greater than \(1\text{.}\) On the other hand, an eigenspace can never have dimension \(0\text{,}\) because the definition of eigenvalue requires the existence of nonzero eigenvectors. Together, these facts force the dimension of \(E_{\lambda_2}(A)\) to be \(1\text{,}\) without our actually computing it.
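In symbols, these two bounds squeeze the dimension to a single possible value:
\begin{equation*}
1 \le \dim\bbrac{E_{\lambda_2}(A)} \le m_2 = 1
\qquad\implies\qquad
\dim\bbrac{E_{\lambda_2}(A)} = 1.
\end{equation*}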
Returning to our procedure, we can see by inspection that the eigenspace basis vector for \(\lambda_2\) is linearly independent of the ones for \(\lambda_1\text{,}\) so when we form the transition matrix
\begin{equation*}
P = \left[\begin{array}{rrr} 1 \amp 0 \amp -3 \\ 0 \amp 0 \amp -1 \\ 0 \amp 1 \amp 1 \end{array}\right],
\end{equation*}
it will be invertible because its columns are linearly independent. And we can determine the diagonal form matrix \(\inv{P}AP\) without calculating \(\inv{P}\text{,}\) because its diagonal entries must be precisely the eigenvalues of \(A\text{,}\) appearing with the same multiplicities and in the same order as the corresponding eigenvector columns of \(P\text{.}\) In this case,
\begin{equation*}
\inv{P}AP = \left[\begin{array}{rrr} -1 \amp 0 \amp 0 \\ 0 \amp -1 \amp 0 \\ 0 \amp 0 \amp 2 \end{array}\right].
\end{equation*}
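As a check, we can recover \(A\) itself from the expression for \(\lambda I - A\) at the start of the example (substitute \(\lambda = 0\) and negate), and then verify that each column of \(P\) is an eigenvector for the claimed diagonal entry. For the third column,
\begin{equation*}
A = \left[\begin{array}{rrr} -1 \amp 9 \amp 0 \\ 0 \amp 2 \amp 0 \\ 0 \amp -3 \amp -1 \end{array}\right],
\qquad
A \left[\begin{array}{r} -3 \\ -1 \\ 1 \end{array}\right]
= \left[\begin{array}{r} -6 \\ -2 \\ 2 \end{array}\right]
= 2 \left[\begin{array}{r} -3 \\ -1 \\ 1 \end{array}\right],
\end{equation*}
and similarly \(A\) sends each of the first two columns of \(P\) to \(-1\) times itself.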
Finally, note that we could have analyzed the eigenvalues in the opposite order, in which case we would have formed transition matrix
\begin{equation*}
Q = \left[\begin{array}{rrr} -3 \amp 1 \amp 0 \\ -1 \amp 0 \amp 0 \\ 1 \amp 0 \amp 1 \end{array}\right],
\end{equation*}
obtaining diagonal form matrix
\begin{equation*}
\inv{Q}AQ = \left[\begin{array}{rrr} 2 \amp 0 \amp 0 \\ 0 \amp -1 \amp 0 \\ 0 \amp 0 \amp -1 \end{array}\right].
\end{equation*}
Example 22.5.2. A non-diagonalizable matrix.
From Discovery 22.4. This matrix is upper triangular, so its eigenvalues are exactly its diagonal entries: \(\lambda_1 = -1\) with algebraic multiplicity \(2\text{,}\) and \(\lambda_2 = 2\) with algebraic multiplicity \(1\text{.}\) We analyze the eigenspaces:
\begin{align*}
(-1)I - A \amp=
\left[\begin{array}{rrr}
0 \amp -1 \amp 0 \\
0 \amp 0 \amp 0 \\
0 \amp 0 \amp -3
\end{array}\right]
\amp \amp\rowredarrow \amp
\amp\begin{bmatrix}
0 \amp 1 \amp 0 \\
0 \amp 0 \amp 1 \\
0 \amp 0 \amp 0
\end{bmatrix}
\end{align*}
\begin{equation*}
\implies \qquad
E_{\lambda_1}(A) =
\Span\left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \right\},
\end{equation*}
\begin{align*}
2I - A \amp=
\left[\begin{array}{rrr}
3 \amp -1 \amp 0 \\
0 \amp 3 \amp 0 \\
0 \amp 0 \amp 0
\end{array}\right]
\amp \amp\rowredarrow \amp
\amp\begin{bmatrix}
1 \amp 0 \amp 0 \\
0 \amp 1 \amp 0 \\
0 \amp 0 \amp 0
\end{bmatrix}
\end{align*}
\begin{equation*}
\implies \qquad
E_{\lambda_2}(A) = \Span\left\{ \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right\}.
\end{equation*}
We could have stopped after our analysis of \(\lambda_1\text{,}\) since its geometric multiplicity is only \(1\text{,}\) whereas we needed it to be equal to the algebraic multiplicity \(2\text{.}\) Since we cannot obtain enough linearly independent eigenvectors from these two eigenspaces to fill out a \(3 \times 3\) transition matrix \(P\text{,}\) matrix \(A\) is not diagonalizable.
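In terms of multiplicities, the geometric multiplicities total only
\begin{equation*}
\dim\bbrac{E_{\lambda_1}(A)} + \dim\bbrac{E_{\lambda_2}(A)} = 1 + 1 = 2 \lt 3,
\end{equation*}
falling short of the three linearly independent columns that \(P\) would require.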
Remark 22.5.3.
The matrices in the two examples above had the same eigenvalues with the same algebraic multiplicities, but one matrix was diagonalizable and the other was not. The difference lay in the geometric multiplicities of the eigenvalues, which play the crucial role in determining whether a matrix is diagonalizable.
Subsection 22.5.2 Determining diagonalizability from multiplicities
Here is an example where we only concern ourselves with the question of whether a matrix is diagonalizable, without attempting to build a transition matrix \(P\text{.}\)
Is
\begin{equation*}
A =
\left[\begin{array}{rrrr}
-1 \amp 0 \amp -12 \amp 0 \\
0 \amp 1 \amp -8 \amp 0 \\
0 \amp 0 \amp 5 \amp 0 \\
4 \amp 0 \amp 4 \amp 3
\end{array}\right]
\end{equation*}
diagonalizable? Compute the characteristic polynomial:
\begin{align*}
\det(\lambda I - A) \amp =
\begin{vmatrix}
\lambda+1 \amp 0 \amp 12 \amp 0 \\
0 \amp \lambda-1 \amp 8 \amp 0 \\
0 \amp 0 \amp \lambda-5 \amp 0 \\
-4 \amp 0 \amp -4 \amp \lambda-3
\end{vmatrix}
\\
\amp = (\lambda-5)(\lambda+1)(\lambda-1)(\lambda-3) \text{.}
\end{align*}
So the eigenvalues are \(\lambda = -1,1,3,5\text{,}\) each with algebraic multiplicity \(1\text{.}\) But an eigenspace must contain nonzero eigenvectors, so an eigenvalue always has geometric multiplicity at least \(1\text{.}\) Since we can obtain an eigenvector from each of the four eigenvalues, and eigenvectors corresponding to distinct eigenvalues are automatically linearly independent, we can fill the four columns of the transition matrix \(P\) with linearly independent eigenvectors. Therefore, \(A\) is diagonalizable.
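And even without constructing a transition matrix, we know what any resulting diagonal form must look like. Listing the eigenvalues in the order \(-1,1,3,5\text{,}\) for example,
\begin{equation*}
\inv{P}AP =
\left[\begin{array}{rrrr}
-1 \amp 0 \amp 0 \amp 0 \\
0 \amp 1 \amp 0 \amp 0 \\
0 \amp 0 \amp 3 \amp 0 \\
0 \amp 0 \amp 0 \amp 5
\end{array}\right]
\end{equation*}
for every invertible \(P\) whose columns are corresponding eigenvectors taken in that order.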
Remark 22.5.4.
The analysis used in the above example only works for eigenvalues of algebraic multiplicity \(1\text{.}\) If an eigenvalue has algebraic multiplicity greater than \(1\text{,}\) then we still must row reduce \(\lambda I - A\) to determine the geometric multiplicity of the eigenvalue. However, if all we are concerned with is the question of diagonalizability, then we don’t need to carry out the full procedure — we can stop row reducing as soon as we can see how many parameters will be required, since this tells us the dimension of the eigenspace.
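In symbols: each parameter corresponds to a column of the reduced matrix without a leading one, so the geometric multiplicity of an eigenvalue \(\lambda\) of an \(n \times n\) matrix \(A\) can be read off from the rank,
\begin{equation*}
\dim\bbrac{E_{\lambda}(A)} = n - \operatorname{rank}(\lambda I - A).
\end{equation*}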
Subsection 22.5.3 A different kind of example
Is
\begin{equation*}
A = \left[\begin{array}{rr} 0 \amp -1 \\ 1 \amp 0 \end{array}\right]
\end{equation*}
diagonalizable? Compute the characteristic polynomial:
\begin{equation*}
\det(\lambda I - A)
= \left\lvert\begin{array}{rr} \lambda \amp 1 \\ -1 \amp \lambda \end{array}\right\rvert
= \lambda^2 + 1.
\end{equation*}
But \(\lambda^2+1=0\) has no real solutions, so \(A\) has no eigenvalues and cannot be diagonalized.
A look ahead.
In future studies in linear algebra you may study matrix forms in more detail, in which case you will likely work with complex vector spaces, where scalars are allowed to be complex numbers. In that context, the matrix in the last example above does have eigenvalues, and in fact can be diagonalized.
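For a glimpse of how that plays out: over the complex numbers, the characteristic equation \(\lambda^2 + 1 = 0\) has roots \(\lambda = \pm i\text{,}\) and one can check that
\begin{equation*}
P = \begin{bmatrix} 1 \amp 1 \\ -i \amp i \end{bmatrix},
\qquad
\inv{P}AP = \begin{bmatrix} i \amp 0 \\ 0 \amp -i \end{bmatrix}.
\end{equation*}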