
Discover Linear Algebra

Section 24.5 Examples

Here we will compute eigenvalues and a basis for each corresponding eigenspace for the matrices in Discovery 24.3.

Example 24.5.1. A \(2 \times 2\) example.

First, we form the matrix
\begin{equation*} \lambda I - A = \begin{bmatrix} \lambda-7 \amp -8 \\ 4 \amp \lambda+5 \end{bmatrix}. \end{equation*}
Then we compute its determinant, to obtain the characteristic polynomial of \(A\text{:}\)
\begin{align*} c_A(\lambda) \amp = \det(\lambda I - A) \\ \amp = (\lambda-7)(\lambda+5) + 32 \\ \amp = \lambda^2 - 2\lambda - 3 \\ \amp = (\lambda+1)(\lambda-3) \text{.} \end{align*}
The eigenvalues are the roots of the characteristic polynomial, so we have two eigenvalues \(\lambda_1 = -1\) and \(\lambda_2 = 3\text{.}\) We will analyze each of these eigenvalues in turn.

Case \(\lambda_1 = -1\).

The eigenspace \(E_{\lambda_1}(A)\) is the same as the null space of the matrix \(\lambda_1 I - A\text{,}\) so we determine a basis for this eigenspace by row reducing:
\begin{equation*} (-1) I - A = \begin{abmatrix}{rr} -8 \amp -8 \\ 4 \amp 4 \end{abmatrix} \qquad\rowredarrow\qquad \begin{abmatrix}{rr} 1 \amp 1 \\ 0 \amp 0 \end{abmatrix}\text{.} \end{equation*}
Solving this system requires one parameter, since \(x_2\) is free, and the first row of the reduced matrix says \(x_1 = -x_2\text{.}\) Setting \(x_2 = t\text{,}\) the general solution in parametric form is
\begin{equation*} \uvec{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -t \\ t \end{bmatrix} = t \begin{abmatrix}{r} -1 \\ 1 \end{abmatrix}\text{.} \end{equation*}
Associated to the single parameter we get a single basis vector, so that
\begin{equation*} \dim \bbrac{E_{\lambda_1}(A)} = 1 \text{.} \end{equation*}
In particular, we have
\begin{equation*} E_{\lambda_1}(A) = \Span\left\{\begin{abmatrix}{r} -1 \\ 1 \end{abmatrix}\right\} \text{.} \end{equation*}
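As a check, reading \(A\) off from \(\lambda I - A\) above gives \(A = \begin{abmatrix}{rr} 7 \amp 8 \\ -4 \amp -5 \end{abmatrix}\text{,}\) and indeed
\begin{equation*} A \begin{abmatrix}{r} -1 \\ 1 \end{abmatrix} = \begin{abmatrix}{r} -7 + 8 \\ 4 - 5 \end{abmatrix} = \begin{abmatrix}{r} 1 \\ -1 \end{abmatrix} = (-1) \begin{abmatrix}{r} -1 \\ 1 \end{abmatrix}\text{,} \end{equation*}
so this basis vector really is an eigenvector of \(A\) corresponding to eigenvalue \(\lambda_1 = -1\text{.}\)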

Case \(\lambda_2 = 3\).

Again, we determine a basis for \(E_{\lambda_2}(A)\) by row reducing \(\lambda_2 I - A\text{:}\)
\begin{equation*} 3 I - A = \begin{abmatrix}{rr} -4 \amp -8 \\ 4 \amp 8 \end{abmatrix} \qquad\rowredarrow\qquad \begin{bmatrix}1 \amp 2 \\ 0 \amp 0\end{bmatrix}\text{.} \end{equation*}
In this case, \(x_2\) is again free. One parameter means one basis vector, so here also
\begin{equation*} \dim \bbrac{E_{\lambda_2}(A)} = 1 \text{.} \end{equation*}
The first row of the reduced matrix says \(x_1 = -2 x_2\text{,}\) so we have
\begin{equation*} E_{\lambda_2}(A) = \Span\left\{\begin{abmatrix}{r} -2 \\ 1 \end{abmatrix}\right\} \text{.} \end{equation*}
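Again we can check directly that this is an eigenvector of \(A\) corresponding to eigenvalue \(\lambda_2 = 3\text{:}\)
\begin{equation*} A \begin{abmatrix}{r} -2 \\ 1 \end{abmatrix} = \begin{abmatrix}{r} -14 + 8 \\ 8 - 5 \end{abmatrix} = \begin{abmatrix}{r} -6 \\ 3 \end{abmatrix} = 3 \begin{abmatrix}{r} -2 \\ 1 \end{abmatrix}\text{.} \end{equation*}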

Example 24.5.2. A \(3 \times 3\) example.

Start with
\begin{equation*} \lambda I - A = \begin{abmatrix}{rrr} \lambda-2 \amp 4 \amp -4 \\ 0 \amp \lambda+6 \amp -8 \\ 0 \amp 6 \amp \lambda-8 \end{abmatrix}, \end{equation*}
and compute the characteristic polynomial by expanding the determinant along the first column:
\begin{align*} c_A(\lambda) \amp = \det(\lambda I - A) \\ \amp = (\lambda-2)\bigl[(\lambda+6)(\lambda-8) + 48\bigr] \\ \amp = (\lambda-2)(\lambda^2 - 2\lambda - 48 + 48) \\ \amp = (\lambda-2)(\lambda^2-2\lambda) \\ \amp = \lambda(\lambda-2)^2. \end{align*}
The eigenvalues are \(\lambda_1 = 0\) and \(\lambda_2 = 2\text{.}\) Again, we analyze each of these eigenvalues in turn.

Case \(\lambda_1 = 0\).

The eigenspace \(E_{\lambda_1}(A)\) is the null space of \(\lambda_1 I - A\text{,}\) so row reduce:
\begin{equation*} 0 I - A = \begin{abmatrix}{rrr} -2 \amp 4 \amp -4 \\ 0 \amp 6 \amp -8 \\ 0 \amp 6 \amp -8 \end{abmatrix} \qquad\rowredarrow\qquad \begin{abmatrix}{rrr} 1 \amp 0 \amp -2/3 \\ 0 \amp 1 \amp -4/3 \\ 0 \amp 0 \amp 0 \end{abmatrix}\text{.} \end{equation*}
(Notice that the null space of \(0 I - A = -A\) is the same as the null space of \(A\text{,}\) since our first step in row reducing \(-A\) could be to multiply each row by \(-1\text{.}\)) Also, since this homogeneous system has nontrivial solutions, \(A\) must be singular.
The homogeneous system \((\lambda_1 I - A) \uvec{x} = \zerovec\) requires one parameter, so
\begin{equation*} \dim\bbrac{E_{\lambda_1}(A)} = 1 \text{.} \end{equation*}
The variable \(x_3\) is free, and the nonzero rows of the reduced matrix tell us \(x_1 = (2/3)x_3\) and \(x_2 = (4/3)x_3\text{.}\) Setting \(x_3=t\text{,}\) our general solution in parametric form is
\begin{equation*} \uvec{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} (2/3)t \\ (4/3)t \\ t \end{bmatrix} = t\begin{bmatrix} 2/3 \\ 4/3 \\ 1 \end{bmatrix}. \end{equation*}
However, to avoid fractions in our basis vector, we may wish to pull out an additional scalar:
\begin{equation*} \uvec{x} = \frac{t}{3}\begin{bmatrix} 2 \\ 4 \\ 3 \end{bmatrix}\text{,} \end{equation*}
giving us
\begin{equation*} E_{\lambda_1}(A) = \Span\left\{\begin{bmatrix} 2 \\ 4 \\ 3 \end{bmatrix}\right\}\text{.} \end{equation*}
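As a check, reading \(A\) off from \(\lambda I - A\) above gives \(A = \begin{abmatrix}{rrr} 2 \amp -4 \amp 4 \\ 0 \amp -6 \amp 8 \\ 0 \amp -6 \amp 8 \end{abmatrix}\text{,}\) and indeed
\begin{equation*} A \begin{bmatrix} 2 \\ 4 \\ 3 \end{bmatrix} = \begin{abmatrix}{r} 4 - 16 + 12 \\ -24 + 24 \\ -24 + 24 \end{abmatrix} = \zerovec = 0 \begin{bmatrix} 2 \\ 4 \\ 3 \end{bmatrix}\text{,} \end{equation*}
as expected for an eigenvector corresponding to eigenvalue \(\lambda_1 = 0\text{.}\)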

Case \(\lambda_2 = 2\).

Now row reduce \(\lambda_2 I - A\text{:}\)
\begin{equation*} 2I - A = \begin{abmatrix}{rrr} 0 \amp 4 \amp -4 \\ 0 \amp 8 \amp -8 \\ 0 \amp 6 \amp -6 \end{abmatrix} \qquad\rowredarrow\qquad \begin{abmatrix}{rrr} 0 \amp 1 \amp -1 \\ 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \end{abmatrix}\text{.} \end{equation*}
This time we have two free variables, so \(\dim \bbrac{E_{\lambda_2}(A)} = 2 \text{.}\) Setting \(x_1 = s\) and \(x_3 = t\text{,}\) the general solution in parametric form is
\begin{equation*} \uvec{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} s \\ t \\ t \end{bmatrix} = s\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + t\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}\text{,} \end{equation*}
giving us
\begin{equation*} E_{\lambda_2}(A) = \Span\left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \right\}\text{.} \end{equation*}
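Once again, a direct check confirms that these are eigenvectors of \(A\) corresponding to eigenvalue \(\lambda_2 = 2\text{:}\) the product \(A \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\) is just the first column of \(A\text{,}\) which is \(2 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\text{,}\) and
\begin{equation*} A \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} = \begin{abmatrix}{r} -4 + 4 \\ -6 + 8 \\ -6 + 8 \end{abmatrix} = \begin{bmatrix} 0 \\ 2 \\ 2 \end{bmatrix} = 2 \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}\text{.} \end{equation*}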

Example 24.5.3. A diagonal example.

This time our matrix is diagonal, so its eigenvalues are precisely the diagonal entries, \(\lambda_1 = 1\text{,}\) \(\lambda_2 = 2\text{,}\) \(\lambda_3 = 3\text{.}\) (See Subsection 24.4.2.) As usual, analyze each eigenvalue in turn.

Case \(\lambda_1 = 1\).

\begin{equation*} 1 I - A = \begin{abmatrix}{rrr} 0 \amp 0 \amp 0 \\ 0 \amp -1 \amp 0 \\ 0 \amp 0 \amp -2 \end{abmatrix} \qquad \rowredarrow \qquad \begin{bmatrix} 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0 \end{bmatrix} \end{equation*}
\begin{equation*} \implies \qquad E_{\lambda_1}(A) = \Span\left\{\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\right\} \text{.} \end{equation*}

Case \(\lambda_2 = 2\).

\begin{equation*} 2 I - A = \begin{abmatrix}{rrr} 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp -1 \end{abmatrix} \qquad \rowredarrow \qquad \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0\end{bmatrix} \end{equation*}
\begin{equation*} \implies \qquad E_{\lambda_2}(A) = \Span\left\{\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}\right\} \text{.} \end{equation*}

Case \(\lambda_3 = 3\).

\begin{equation*} 3 I - A = \begin{abmatrix}{rrr} 2 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \end{abmatrix} \qquad \rowredarrow \qquad \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \end{bmatrix} \end{equation*}
\begin{equation*} \implies \qquad E_{\lambda_3}(A) = \Span\left\{\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\right\} \text{.} \end{equation*}
The fact that the eigenvectors of our diagonal matrix are standard basis vectors shouldn’t be too surprising, since a matrix times a standard basis vector is equal to the corresponding column of the matrix, and the columns of a diagonal matrix are scalar multiples of the standard basis vectors.
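For example, with \(A\) the diagonal matrix with diagonal entries \(1, 2, 3\) considered above,
\begin{equation*} \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 2 \amp 0 \\ 0 \amp 0 \amp 3 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 2 \\ 0 \end{bmatrix} = 2 \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}\text{,} \end{equation*}
which is precisely the eigenvalue-eigenvector pattern found in the case \(\lambda_2 = 2\) above.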

Example 24.5.4. An upper triangular example.

Our final example matrix is upper triangular, so again its eigenvalues are precisely the diagonal entries, \(\lambda_1 = 2\) and \(\lambda_2 = -1\text{.}\) (See Subsection 24.4.2.) Note that we don’t count the repeated diagonal entry \(2\) as two separate eigenvalues — that eigenvalue is merely repeated as a root of the characteristic polynomial. (But this repetition will become important in the next chapter.)
Once again we determine eigenspaces by row reducing, one eigenvalue at a time.

Case \(\lambda_1 = 2\).

\begin{equation*} 2I-A = \begin{abmatrix}{rrr} 0 \amp -1 \amp 0 \\ 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 3 \end{abmatrix} \qquad \rowredarrow \qquad \begin{bmatrix} 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0 \end{bmatrix} \end{equation*}
\begin{equation*} \implies \qquad E_{\lambda_1}(A) = \Span\left\{\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\right\} \text{.} \end{equation*}

Case \(\lambda_2 = -1\).

\begin{equation*} (-1)I-A = \begin{abmatrix}{rrr} -3 \amp -1 \amp 0 \\ 0 \amp -3 \amp 0 \\ 0 \amp 0 \amp 0 \end{abmatrix} \qquad \rowredarrow \qquad \begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \end{bmatrix} \end{equation*}
\begin{equation*} \implies \qquad E_{\lambda_2}(A) = \Span\left\{\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\right\} \text{.} \end{equation*}
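As a check, reading \(A\) off from \(2I - A\) above gives \(A = \begin{abmatrix}{rrr} 2 \amp 1 \amp 0 \\ 0 \amp 2 \amp 0 \\ 0 \amp 0 \amp -1 \end{abmatrix}\text{,}\) and indeed
\begin{equation*} A \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \\ 0 \end{bmatrix} = 2 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \qquad \text{and} \qquad A \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{abmatrix}{r} 0 \\ 0 \\ -1 \end{abmatrix} = (-1) \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\text{.} \end{equation*}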

Example 24.5.5. Using row operations to help.

Don’t forget that we can use row operations to help compute determinants! (See Subsection 9.3.1 for examples.)
Let’s do a \(4 \times 4\) example to demonstrate how that method can help calculate characteristic polynomials. Consider
\begin{equation*} A = \begin{abmatrix}{rrrr} 5 \amp -4 \amp -27 \amp 46 \\ 2 \amp -1 \amp -12 \amp 20 \\ 2 \amp -2 \amp -8 \amp 14 \\ 1 \amp -1 \amp -3 \amp 5 \end{abmatrix}\text{.} \end{equation*}
To obtain the characteristic polynomial, we want to compute the determinant of
\begin{equation*} \lambda I - A = \begin{bmatrix} \lambda - 5 \amp 4 \amp 27 \amp -46 \\ -2 \amp \lambda + 1 \amp 12 \amp -20 \\ -2 \amp 2 \amp \lambda + 8 \amp -14 \\ -1 \amp 1 \amp 3 \amp \lambda - 5 \end{bmatrix}. \end{equation*}
Let’s row reduce a bit first:
\begin{align*} \amp \begin{bmatrix} \lambda - 5 \amp 4 \amp 27 \amp -46 \\ -2 \amp \lambda + 1 \amp 12 \amp -20 \\ -2 \amp 2 \amp \lambda + 8 \amp -14 \\ -1 \amp 1 \amp 3 \amp \lambda - 5 \end{bmatrix} \begin{array}{l} \phantom{x} \\ R_1 \leftrightarrow -R_4 \\ \phantom{x} \end{array}\\ \\ \longrightarrow \amp \begin{bmatrix} 1 \amp -1 \amp -3 \amp 5 - \lambda \\ -2 \amp \lambda + 1 \amp 12 \amp -20 \\ -2 \amp 2 \amp \lambda + 8 \amp -14 \\ \lambda - 5 \amp 4 \amp 27 \amp -46 \\ \end{bmatrix} \begin{array}{l} \phantom{x} \\ R_2 + 2R_1 \\ R_3 + 2R_1 \\ R_4 - (\lambda - 5)R_1 \end{array}\\ \\ \longrightarrow \amp \begin{bmatrix} 1 \amp -1 \amp -3 \amp 5 - \lambda \\ 0 \amp \lambda - 1 \amp 6 \amp -2(\lambda + 5) \\ 0 \amp 0 \amp \lambda + 2 \amp -2(\lambda + 2) \\ 0 \amp \lambda - 1 \amp 3(\lambda + 4) \amp \lambda^2 - 10 \lambda - 21 \end{bmatrix}\text{.} \end{align*}
In our first step above, we performed two operations: swapping rows and multiplying a row by \(-1\text{.}\) Both of these operations change the determinant by a factor of \(-1\text{,}\) so the two effects cancel out. Our other operations in the second step above do not affect the determinant, so the determinant of this third matrix above will be equal to the characteristic polynomial of \(A\text{.}\)
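Writing \(B\) for the third matrix above, the bookkeeping is
\begin{equation*} \det B = (-1)(-1)\det(\lambda I - A) = \det(\lambda I - A) = c_A(\lambda)\text{.} \end{equation*}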
Now, we cannot divide a row by zero. So we should not divide either the second or the fourth row by \(\lambda - 1\) in an attempt to obtain the next leading one, because we would inadvertently be dividing by zero in the case \(\lambda = 1\text{.}\) However, we can still simplify one step further, even without a leading one:
\begin{align} \amp \begin{bmatrix} 1 \amp -1 \amp -3 \amp 5 - \lambda \\ 0 \amp \lambda - 1 \amp 6 \amp -2(\lambda + 5) \\ 0 \amp 0 \amp \lambda + 2 \amp -2(\lambda + 2) \\ 0 \amp \lambda - 1 \amp 3(\lambda + 4) \amp \lambda^2 - 10 \lambda - 21 \end{bmatrix} \begin{array}{l} \phantom{x} \\ \phantom{x} \\ \phantom{x} \\ R_4 - R_2 \end{array}\notag\\ \notag\\ \longrightarrow \amp \begin{bmatrix} 1 \amp -1 \amp -3 \amp 5 - \lambda \\ 0 \amp \lambda - 1 \amp 6 \amp -2(\lambda + 5) \\ 0 \amp 0 \amp \lambda + 2 \amp -2(\lambda + 2) \\ 0 \amp 0 \amp 3(\lambda + 2) \amp \lambda^2 - 8 \lambda - 11 \end{bmatrix}\text{.}\tag{✶} \end{align}
This last matrix is not quite upper triangular, but it’s close enough that we can proceed by cofactors from here.
\begin{align*} c_A(\lambda) \amp = \begin{vmatrix} 1 \amp -1 \amp -3 \amp 5 - \lambda \\ 0 \amp \lambda - 1 \amp 6 \amp -2 (\lambda + 5) \\ 0 \amp 0 \amp \lambda + 2 \amp -2 (\lambda + 2) \\ 0 \amp 0 \amp 3 (\lambda + 2) \amp \lambda^2 - 8 \lambda - 11 \end{vmatrix}\\ \\ \amp = 1 \cdot \begin{vmatrix} \lambda - 1 \amp 6 \amp -2 (\lambda + 5) \\ 0 \amp \lambda + 2 \amp -2 (\lambda + 2) \\ 0 \amp 3 (\lambda + 2) \amp \lambda^2 - 8 \lambda - 11 \end{vmatrix}\\ \\ \amp = (\lambda - 1) \cdot \begin{vmatrix} \lambda + 2 \amp -2 (\lambda + 2) \\ 3 (\lambda + 2) \amp \lambda^2 - 8 \lambda - 11 \end{vmatrix}\\ \\ \amp = (\lambda - 1) \bigl( (\lambda + 2) (\lambda^2 - 8 \lambda - 11) + 6(\lambda + 2)^2 \bigr) \\ \amp = (\lambda - 1) (\lambda + 2) \bigl( (\lambda^2 - 8 \lambda - 11) + 6(\lambda + 2) \bigr) \\ \amp = (\lambda - 1) (\lambda + 2) (\lambda^2 - 2 \lambda + 1) \\ \amp = (\lambda - 1) (\lambda + 2) (\lambda - 1)^2 \\ \amp = (\lambda - 1)^3 (\lambda + 2) \text{.} \end{align*}
We now see that the eigenvalues are \(\lambda_1 = 1\) and \(\lambda_2 = -2\text{.}\)
To determine bases for eigenspaces, we usually reduce the matrix \(\lambda I - A\) with the various eigenvalues substituted in for \(\lambda\text{.}\) But we have already partially reduced \(\lambda I - A\) with \(\lambda\) left variable to help us determine the eigenvalues. So we can begin from (✶) for both eigenvalues.

Case \(\lambda_1 = 1\).

\begin{equation*} \begin{abmatrix}{rrrr} 1 \amp -1 \amp -3 \amp 4 \\ 0 \amp 0 \amp 6 \amp -12 \\ 0 \amp 0 \amp 3 \amp - 6 \\ 0 \amp 0 \amp 9 \amp -18 \end{abmatrix} \qquad \rowredarrow \qquad \begin{abmatrix}{rrrr} 1 \amp -1 \amp 0 \amp -2 \\ 0 \amp 0 \amp 1 \amp -2 \\ 0 \amp 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \end{abmatrix} \end{equation*}
\begin{equation*} \implies \qquad E_{\lambda_1}(A) = \Span\left\{ \begin{bmatrix} 2 \\ 0 \\ 2 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix} \right\}\text{.} \end{equation*}
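As a quick check, the product of \(A\) with the second of these vectors is just the sum of the first two columns of \(A\text{:}\)
\begin{equation*} A \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix} = \begin{abmatrix}{r} 5 - 4 \\ 2 - 1 \\ 2 - 2 \\ 1 - 1 \end{abmatrix} = \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix} = 1 \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}\text{,} \end{equation*}
so it is indeed an eigenvector of \(A\) corresponding to eigenvalue \(\lambda_1 = 1\text{;}\) the other basis vector can be checked similarly.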

Case \(\lambda_2 = -2\).

\begin{equation*} \begin{abmatrix}{rrrr} 1 \amp -1 \amp -3 \amp 7 \\ 0 \amp -3 \amp 6 \amp -6 \\ 0 \amp 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \amp 9 \end{abmatrix} \qquad \rowredarrow \qquad \begin{abmatrix}{rrrr} 1 \amp 0 \amp -5 \amp 0 \\ 0 \amp 1 \amp -2 \amp 0 \\ 0 \amp 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0 \amp 0 \end{abmatrix} \end{equation*}
\begin{equation*} \implies \qquad E_{\lambda_2}(A) = \Span\left\{ \begin{bmatrix} 5 \\ 2 \\ 1 \\ 0 \end{bmatrix} \right\}\text{.} \end{equation*}
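One last direct check:
\begin{equation*} A \begin{bmatrix} 5 \\ 2 \\ 1 \\ 0 \end{bmatrix} = \begin{abmatrix}{r} 25 - 8 - 27 \\ 10 - 2 - 12 \\ 10 - 4 - 8 \\ 5 - 2 - 3 \end{abmatrix} = \begin{abmatrix}{r} -10 \\ -4 \\ -2 \\ 0 \end{abmatrix} = (-2) \begin{bmatrix} 5 \\ 2 \\ 1 \\ 0 \end{bmatrix}\text{.} \end{equation*}
For a larger example like this one, it can also be reassuring to confirm the computations numerically with computer software. Below is a minimal sketch using Python with the NumPy library (neither of which is part of this book, and only the standard functions numpy.linalg.eigvals and numpy.allclose are used), included purely as an optional check.

import numpy as np

# The matrix A from this example.
A = np.array([[5, -4, -27, 46],
              [2, -1, -12, 20],
              [2, -2,  -8, 14],
              [1, -1,  -3,  5]], dtype=float)

# Expect values numerically close to 1 (three times) and -2;
# small rounding artifacts are possible since 1 is a repeated root.
print(np.linalg.eigvals(A))

# Verify the hand-computed eigenvectors directly: each line should print True.
for lam, v in [(1, [2, 0, 2, 1]), (1, [1, 1, 0, 0]), (-2, [5, 2, 1, 0])]:
    v = np.array(v, dtype=float)
    print(np.allclose(A @ v, lam * v))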