Example 21.5.1. A \(2 \times 2\) example.
Consider the matrix
\begin{equation*}
A = \left[\begin{array}{rr} 7 \amp 8 \\ -4 \amp -5 \end{array}\right].
\end{equation*}
First, we form the matrix
\begin{equation*}
\lambda I - A = \begin{bmatrix} \lambda-7 \amp -8 \\ 4 \amp \lambda+5 \end{bmatrix}.
\end{equation*}
Then we compute its determinant, to obtain the characteristic polynomial of \(A\text{:}\)
\begin{align*}
c_A(\lambda) \amp = \det(\lambda I - A) \\
\amp = (\lambda-7)(\lambda+5) + 32 \\
\amp = \lambda^2 - 2\lambda - 3 \\
\amp = (\lambda+1)(\lambda-3).
\end{align*}
The eigenvalues are the roots of the characteristic polynomial, so we have two eigenvalues \(\lambda_1 = -1\) and \(\lambda_2 = 3\text{.}\)
The eigenspace \(E_{\lambda_1}(A)\) is the same as the null space of the matrix \(\lambda_1 I - A\text{,}\) so we determine a basis for the eigenspace by row reducing:
\begin{equation*}
(-1)I - A = \left[\begin{array}{rr} -8 \amp -8 \\ 4 \amp 4 \end{array}\right]
\qquad\rowredarrow\qquad
\left[\begin{array}{rr} 1 \amp 1 \\ 0 \amp 0 \end{array}\right].
\end{equation*}
This system requires one parameter to solve, as \(x_2\) is free, and the first row of the reduced matrix says \(x_1 = -x_2\text{.}\) Setting \(x_2=t\text{,}\) the general solution in parametric form is
\begin{equation*}
\uvec{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
= \begin{bmatrix} -t \\ t \end{bmatrix}
= t \left[\begin{array}{r} -1 \\ 1 \end{array}\right].
\end{equation*}
Associated to the single parameter we get a single basis vector, so that
\begin{equation*}
\dim\bbrac{E_{\lambda_1}(A)} = 1 \text{.}
\end{equation*}
In particular, we have
\begin{equation*}
E_{\lambda_1}(A) = \Span\left\{\left[\begin{array}{r} -1 \\ 1 \end{array}\right]\right\}\text{.}
\end{equation*}
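As a check, we can verify directly that this basis vector is an eigenvector of \(A\) corresponding to \(\lambda_1 = -1\text{:}\)
\begin{equation*}
A \left[\begin{array}{r} -1 \\ 1 \end{array}\right]
= \left[\begin{array}{rr} 7 \amp 8 \\ -4 \amp -5 \end{array}\right]
\left[\begin{array}{r} -1 \\ 1 \end{array}\right]
= \left[\begin{array}{r} 1 \\ -1 \end{array}\right]
= (-1) \left[\begin{array}{r} -1 \\ 1 \end{array}\right].
\end{equation*}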
Now move on to the next eigenvalue. Again, we determine a basis for \(E_{\lambda_2}(A)\) by row reducing \(\lambda_2 I - A\text{:}\)
\begin{equation*}
3 I - A = \left[\begin{array}{rr} -4 \amp -8 \\ 4 \amp 8 \end{array}\right]
\qquad\rowredarrow\qquad
\begin{bmatrix}1 \amp 2 \\ 0 \amp 0\end{bmatrix}.
\end{equation*}
Again, \(x_2\) is free. One parameter means one basis vector, so again
\begin{equation*}
\dim\bbrac{E_{\lambda_2}(A)} = 1 \text{.}
\end{equation*}
The first row of the reduced matrix says \(x_1 = -2x_2\text{,}\) so we have
\begin{equation*}
E_{\lambda_2}(A) = \Span\left\{\left[\begin{array}{r} -2 \\ 1 \end{array}\right]\right\}\text{.}
\end{equation*}
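Similarly, for \(\lambda_2 = 3\) we can check
\begin{equation*}
A \left[\begin{array}{r} -2 \\ 1 \end{array}\right]
= \left[\begin{array}{rr} 7 \amp 8 \\ -4 \amp -5 \end{array}\right]
\left[\begin{array}{r} -2 \\ 1 \end{array}\right]
= \left[\begin{array}{r} -6 \\ 3 \end{array}\right]
= 3 \left[\begin{array}{r} -2 \\ 1 \end{array}\right],
\end{equation*}
as expected.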
Example 21.5.2. A \(3 \times 3\) example.
This time, consider the matrix
\begin{equation*}
A = \left[\begin{array}{rrr} 2 \amp -4 \amp 4 \\ 0 \amp -6 \amp 8 \\ 0 \amp -6 \amp 8 \end{array}\right].
\end{equation*}
Start with
\begin{equation*}
\lambda I - A
= \left[\begin{array}{rrr}
\lambda-2 \amp 4 \amp -4 \\
0 \amp \lambda+6 \amp -8 \\
0 \amp 6 \amp \lambda-8
\end{array}\right],
\end{equation*}
and compute the characteristic polynomial,
\begin{align*}
c_A(\lambda) \amp = \det(\lambda I - A) \\
\amp = (\lambda-2)\bigl[(\lambda+6)(\lambda-8) + 48\bigr] \\
\amp = (\lambda-2)(\lambda^2-2\lambda) \\
\amp = \lambda(\lambda-2)^2.
\end{align*}
The eigenvalues are \(\lambda_1 = 0\) and \(\lambda_2 = 2\text{,}\) the latter a repeated root of the characteristic polynomial.
The eigenspace \(E_{\lambda_1}(A)\) is the null space of \(0I-A = -A\text{,}\) so row reduce:
\begin{equation*}
0I - A = \left[\begin{array}{rrr}
-2 \amp 4 \amp -4 \\
0 \amp 6 \amp -8 \\
0 \amp 6 \amp -8
\end{array}\right]
\qquad\rowredarrow\qquad
\left[\begin{array}{rrr} 1 \amp 0 \amp -2/3 \\ 0 \amp 1 \amp -4/3 \\ 0 \amp 0 \amp 0 \end{array}\right].
\end{equation*}
Notice that the null space of \(0I-A = -A\) is the same as the null space of \(A\text{,}\) since our first step in row reducing \(-A\) could be to multiply each row by \(-1\text{.}\) Since this homogeneous system has nontrivial solutions, \(A\) must be singular.
The homogeneous system \((\lambda_1I - A)\uvec{x}=\zerovec\) requires one parameter, so
\begin{equation*}
\dim\bbrac{E_{\lambda_1}(A)} = 1 \text{.}
\end{equation*}
The variable \(x_3\) is free, and the nonzero rows of the reduced matrix tell us \(x_1 = (2/3)x_3\) and \(x_2 = (4/3)x_3\text{.}\) Setting \(x_3=t\text{,}\) our general solution in parametric form is
\begin{equation*}
\uvec{x}
= \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} (2/3)t \\ (4/3)t \\ t \end{bmatrix}
= t\begin{bmatrix} 2/3 \\ 4/3 \\ 1 \end{bmatrix}.
\end{equation*}
However, to avoid fractions in our basis vector, we may wish to pull out an additional scalar:
\begin{equation*}
\uvec{x}
= \frac{t}{3}\begin{bmatrix} 2 \\ 4 \\ 3 \end{bmatrix}\text{,}
\end{equation*}
giving us
\begin{equation*}
E_{\lambda_1}(A) = \Span\left\{\begin{bmatrix} 2 \\ 4 \\ 3 \end{bmatrix}\right\}\text{.}
\end{equation*}
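As a check, this basis vector is indeed in the null space of \(A\text{:}\)
\begin{equation*}
A \begin{bmatrix} 2 \\ 4 \\ 3 \end{bmatrix}
= \left[\begin{array}{rrr} 2 \amp -4 \amp 4 \\ 0 \amp -6 \amp 8 \\ 0 \amp -6 \amp 8 \end{array}\right]
\begin{bmatrix} 2 \\ 4 \\ 3 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
= 0 \begin{bmatrix} 2 \\ 4 \\ 3 \end{bmatrix}.
\end{equation*}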
Now row reduce \(\lambda_2 I - A\text{:}\)
\begin{equation*}
2I - A = \left[\begin{array}{rrr} 0 \amp 4 \amp -4 \\ 0 \amp 8 \amp -8 \\ 0 \amp 6 \amp -6 \end{array}\right]
\qquad\rowredarrow\qquad
\left[\begin{array}{rrr} 0 \amp 1 \amp -1 \\ 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \end{array}\right].
\end{equation*}
This time we have two free variables, \(x_1\) and \(x_3\text{,}\) so \(\dim\bbrac{E_{\lambda_2}(A)} = 2\text{.}\) The nonzero row of the reduced matrix says \(x_2 = x_3\text{.}\) Setting \(x_1 = s\) and \(x_3 = t\text{,}\) the general solution in parametric form is
\begin{equation*}
\uvec{x}
= \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} s \\ t \\ t \end{bmatrix}
= s\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
+ t\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix},
\end{equation*}
giving us
\begin{equation*}
E_{\lambda_2}(A)
= \Span\left\{
\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},
\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}
\right\}\text{.}
\end{equation*}
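Again we can check directly, using the matrix \(A\) above, that each basis vector is an eigenvector for \(\lambda_2 = 2\text{:}\)
\begin{equation*}
A \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
= \begin{bmatrix} 2 \\ 0 \\ 0 \end{bmatrix}
= 2 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},
\qquad
A \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}
= \begin{bmatrix} 0 \\ 2 \\ 2 \end{bmatrix}
= 2 \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}.
\end{equation*}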
Example 21.5.3. A diagonal example.
This time our matrix is the diagonal matrix
\begin{equation*}
A = \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 2 \amp 0 \\ 0 \amp 0 \amp 3 \end{bmatrix},
\end{equation*}
so its eigenvalues are precisely the diagonal entries, \(\lambda_1 = 1\text{,}\) \(\lambda_2 = 2\text{,}\) \(\lambda_3 = 3\text{.}\)
As usual, analyze each eigenvalue in turn.
For \(\lambda = 1\text{:}\)
\begin{align*}
1I-A = \left[\begin{array}{rrr} 0 \amp 0 \amp 0 \\ 0 \amp -1 \amp 0 \\ 0 \amp 0 \amp -2 \end{array}\right]
\qquad \rowredarrow \qquad
\begin{bmatrix} 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0 \end{bmatrix}\\
\\
\implies \qquad
E_{\lambda_1}(A) = \Span\left\{\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\right\}.
\end{align*}
For \(\lambda = 2\text{:}\)
\begin{align*}
2I-A = \left[\begin{array}{rrr} 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp -1 \end{array}\right]
\qquad \rowredarrow \qquad
\begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0\end{bmatrix}\\
\\
\implies \qquad
E_{\lambda_2}(A) = \Span\left\{\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}\right\}.
\end{align*}
For \(\lambda = 3\text{:}\)
\begin{align*}
3I-A = \left[\begin{array}{rrr} 2 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \end{array}\right]
\qquad \rowredarrow \qquad
\begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \end{bmatrix}\\
\\
\implies \qquad
E_{\lambda_3}(A) = \Span\left\{\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\right\}.
\end{align*}
The fact that the eigenvectors of our diagonal matrix are standard basis vectors shouldn’t be too surprising, since a matrix times a standard basis vector is equal to the corresponding column of the matrix, and the columns of a diagonal matrix are scalar multiples of the standard basis vectors.
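For instance, with the diagonal matrix \(A\) of this example,
\begin{equation*}
A \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}
= \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 2 \amp 0 \\ 0 \amp 0 \amp 3 \end{bmatrix}
\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}
= \begin{bmatrix} 0 \\ 2 \\ 0 \end{bmatrix}
= 2 \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix},
\end{equation*}
the second column of \(A\text{,}\) which is exactly \(\lambda_2\) times the second standard basis vector.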
Example 21.5.4. An upper triangular example.
Our final example matrix is the upper triangular matrix
\begin{equation*}
A = \left[\begin{array}{rrr} 2 \amp 1 \amp 0 \\ 0 \amp 2 \amp 0 \\ 0 \amp 0 \amp -1 \end{array}\right],
\end{equation*}
so again its eigenvalues are precisely the diagonal entries, \(\lambda_1 = 2\) and \(\lambda_2 = -1\text{.}\)
Note that we don’t count the repeated diagonal entry \(2\) as two separate eigenvalues — that eigenvalue is just repeated as a root of the characteristic polynomial. (But this repetition will become important in the next chapter.)
Once again we determine eigenspaces by row reducing, one at a time.
For \(\lambda_1 = 2\text{:}\)
\begin{align*}
2I-A = \left[\begin{array}{rrr} 0 \amp -1 \amp 0 \\ 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 3 \end{array}\right]
\qquad \rowredarrow \qquad
\begin{bmatrix} 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0 \end{bmatrix}\\
\\
\implies \qquad
E_{\lambda_1}(A) = \Span\left\{\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\right\}.
\end{align*}
For \(\lambda_2 = -1\text{:}\)
\begin{align*}
(-1)I-A = \left[\begin{array}{rrr} -3 \amp -1 \amp 0 \\ 0 \amp -3 \amp 0 \\ 0 \amp 0 \amp 0 \end{array}\right]
\qquad \rowredarrow \qquad
\begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \end{bmatrix}\\
\\
\implies \qquad
E_{\lambda_2}(A) = \Span\left\{\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\right\}.
\end{align*}
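Once again, a direct check against the matrix \(A\) above confirms both eigenpairs:
\begin{equation*}
A \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
= \begin{bmatrix} 2 \\ 0 \\ 0 \end{bmatrix}
= 2 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},
\qquad
A \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
= \left[\begin{array}{r} 0 \\ 0 \\ -1 \end{array}\right]
= (-1) \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.
\end{equation*}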
Example 21.5.5. Using row operations to help.
Don’t forget that we can use row operations to help compute determinants!
Let’s do a \(4 \times 4\) example to demonstrate. Consider
\begin{equation*}
A = \left[\begin{array}{rrrr}
5 \amp -4 \amp -27 \amp 46 \\
2 \amp -1 \amp -12 \amp 20 \\
2 \amp -2 \amp -8 \amp 14 \\
1 \amp -1 \amp -3 \amp 5
\end{array}\right].
\end{equation*}
To obtain the characteristic polynomial, we want to compute the determinant of
\begin{equation*}
\lambda I - A =
\begin{bmatrix}
\lambda - 5 \amp 4 \amp 27 \amp -46 \\
-2 \amp \lambda + 1 \amp 12 \amp -20 \\
-2 \amp 2 \amp \lambda + 8 \amp -14 \\
-1 \amp 1 \amp 3 \amp \lambda - 5
\end{bmatrix}.
\end{equation*}
Let’s row reduce a bit first:
\begin{align*}
\amp \begin{bmatrix}
\lambda - 5 \amp 4 \amp 27 \amp -46 \\
-2 \amp \lambda + 1 \amp 12 \amp -20 \\
-2 \amp 2 \amp \lambda + 8 \amp -14 \\
-1 \amp 1 \amp 3 \amp \lambda - 5
\end{bmatrix}
\begin{array}{l} \phantom{x} \\ R_1 \leftrightarrow -R_4 \\ \phantom{x} \end{array}\\
\\
\longrightarrow
\amp \begin{bmatrix}
1 \amp -1 \amp -3 \amp 5 - \lambda \\
-2 \amp \lambda + 1 \amp 12 \amp -20 \\
-2 \amp 2 \amp \lambda + 8 \amp -14 \\
\lambda - 5 \amp 4 \amp 27 \amp -46 \\
\end{bmatrix}
\begin{array}{l} \phantom{x} \\ R_2 + 2R_1 \\ R_3 + 2R_1 \\ R_4 - (\lambda - 5)R_1 \end{array}\\
\\
\longrightarrow
\amp \begin{bmatrix}
1 \amp -1 \amp -3 \amp 5 - \lambda \\
0 \amp \lambda - 1 \amp 6 \amp -2(\lambda + 5) \\
0 \amp 0 \amp \lambda + 2 \amp -2(\lambda + 2) \\
0 \amp \lambda - 1 \amp 3(\lambda + 4) \amp \lambda^2 - 10 \lambda - 21
\end{bmatrix}.
\end{align*}
In our first step above, we performed two operations: swapping rows and multiplying a row by \(-1\text{.}\) Both of these operations change the determinant by a factor of \(-1\text{,}\) so the two effects cancel out. Our other operations in the second step above do not affect the determinant, so the determinant of this third matrix above will be equal to the characteristic polynomial of \(A\text{.}\)
Now, we cannot divide a row by zero, so we should not divide either the second or the fourth row by \(\lambda - 1\) in an attempt to obtain the next leading one: we would inadvertently be dividing by zero in the case \(\lambda = 1\text{.}\) However, we can still simplify one step further, even without a leading one:
\begin{align}
\amp \begin{bmatrix}
1 \amp -1 \amp -3 \amp 5 - \lambda \\
0 \amp \lambda - 1 \amp 6 \amp -2(\lambda + 5) \\
0 \amp 0 \amp \lambda + 2 \amp -2(\lambda + 2) \\
0 \amp \lambda - 1 \amp 3(\lambda + 4) \amp \lambda^2 - 10 \lambda - 21
\end{bmatrix}
\begin{array}{l} \phantom{x} \\ \phantom{x} \\ \phantom{x} \\ R_4 - R_2 \end{array}\notag\\
\notag\\
\longrightarrow
\amp \begin{bmatrix}
1 \amp -1 \amp -3 \amp 5 - \lambda \\
0 \amp \lambda - 1 \amp 6 \amp -2(\lambda + 5) \\
0 \amp 0 \amp \lambda + 2 \amp -2(\lambda + 2) \\
0 \amp 0 \amp 3(\lambda + 2) \amp \lambda^2 - 8 \lambda - 11
\end{bmatrix}.\tag{✶}
\end{align}
This last matrix is not quite upper triangular, but it’s close enough that we can proceed by cofactor expansions along the first column from here.
\begin{align*}
c_A(\lambda) \amp =
\begin{vmatrix}
1 \amp -1 \amp -3 \amp 5 - \lambda \\
0 \amp \lambda - 1 \amp 6 \amp -2(\lambda + 5) \\
0 \amp 0 \amp \lambda + 2 \amp -2(\lambda + 2) \\
0 \amp 0 \amp 3(\lambda + 2) \amp \lambda^2 - 8 \lambda - 11
\end{vmatrix}\\
\\
\amp =
1 \cdot
\begin{vmatrix}
\lambda - 1 \amp 6 \amp -2(\lambda + 5) \\
0 \amp \lambda + 2 \amp -2(\lambda + 2) \\
0 \amp 3(\lambda + 2) \amp \lambda^2 - 8 \lambda - 11
\end{vmatrix}\\
\\
\amp =
(\lambda - 1) \cdot
\begin{vmatrix}
\lambda + 2 \amp -2(\lambda + 2) \\
3(\lambda + 2) \amp \lambda^2 - 8 \lambda - 11
\end{vmatrix}\\
\\
\amp = (\lambda - 1) \bigl( (\lambda + 2) (\lambda^2 - 8 \lambda - 11) + 6(\lambda + 2)^2 \bigr) \\
\amp = (\lambda - 1) (\lambda + 2) \bigl( (\lambda^2 - 8 \lambda - 11) + 6(\lambda + 2) \bigr) \\
\amp = (\lambda - 1) (\lambda + 2) (\lambda^2 - 2 \lambda + 1) \\
\amp = (\lambda - 1) (\lambda + 2) (\lambda - 1)^2 \\
\amp = (\lambda - 1)^3 (\lambda + 2).
\end{align*}
We now see that the eigenvalues are \(\lambda_1 = 1\) and \(\lambda_2 = -2\text{.}\)
To determine bases for eigenspaces, we usually reduce the matrix \(\lambda I - A\) with the various eigenvalues substituted in for \(\lambda\text{.}\) But we have already partially reduced \(\lambda I - A\) with \(\lambda\) left variable to help us determine the eigenvalues, so we can begin from (✶) for both eigenvalues.
For \(\lambda_1 = 1\text{:}\)
\begin{align*}
\left[\begin{array}{rrrr}
1 \amp -1 \amp -3 \amp 4 \\
0 \amp 0 \amp 6 \amp -12 \\
0 \amp 0 \amp 3 \amp - 6 \\
0 \amp 0 \amp 9 \amp -18
\end{array}\right]
\qquad \rowredarrow \qquad
\left[\begin{array}{rrrr}
1 \amp -1 \amp 0 \amp -2 \\
0 \amp 0 \amp 1 \amp -2 \\
0 \amp 0 \amp 0 \amp 0 \\
0 \amp 0 \amp 0 \amp 0
\end{array}\right]\\
\\
\implies \qquad
E_{\lambda_1}(A) = \Span\left\{
\begin{bmatrix} 2 \\ 0 \\ 2 \\ 1 \end{bmatrix},
\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}
\right\}.
\end{align*}
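As a check, multiplying \(A\) against each of these basis vectors returns the vector itself, confirming that both are eigenvectors for \(\lambda_1 = 1\text{:}\)
\begin{equation*}
A \begin{bmatrix} 2 \\ 0 \\ 2 \\ 1 \end{bmatrix}
= \begin{bmatrix} 10 - 54 + 46 \\ 4 - 24 + 20 \\ 4 - 16 + 14 \\ 2 - 6 + 5 \end{bmatrix}
= \begin{bmatrix} 2 \\ 0 \\ 2 \\ 1 \end{bmatrix},
\qquad
A \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}
= \begin{bmatrix} 5 - 4 \\ 2 - 1 \\ 2 - 2 \\ 1 - 1 \end{bmatrix}
= \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}.
\end{equation*}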
For \(\lambda_2 = -2\text{,}\) again starting from (✶):
\begin{align*}
\left[\begin{array}{rrrr}
1 \amp -1 \amp -3 \amp 7 \\
0 \amp -3 \amp 6 \amp -6 \\
0 \amp 0 \amp 0 \amp 0 \\
0 \amp 0 \amp 0 \amp 9
\end{array}\right]
\qquad \rowredarrow \qquad
\left[\begin{array}{rrrr}
1 \amp 0 \amp -5 \amp 0 \\
0 \amp 1 \amp -2 \amp 0 \\
0 \amp 0 \amp 0 \amp 1 \\
0 \amp 0 \amp 0 \amp 0
\end{array}\right]\\
\\
\implies \qquad
E_{\lambda_2}(A) = \Span\left\{ \begin{bmatrix} 5 \\ 2 \\ 1 \\ 0 \end{bmatrix} \right\}.
\end{align*}
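And a final check for \(\lambda_2 = -2\text{:}\)
\begin{equation*}
A \begin{bmatrix} 5 \\ 2 \\ 1 \\ 0 \end{bmatrix}
= \begin{bmatrix} 25 - 8 - 27 \\ 10 - 2 - 12 \\ 10 - 4 - 8 \\ 5 - 2 - 3 \end{bmatrix}
= \left[\begin{array}{r} -10 \\ -4 \\ -2 \\ 0 \end{array}\right]
= (-2) \begin{bmatrix} 5 \\ 2 \\ 1 \\ 0 \end{bmatrix}.
\end{equation*}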