24.1 Discovery guide
In Chapter 21, we began to see how the interaction between a matrix and column vectors can be used to understand the matrix. Here we will find that for each square matrix there are certain column vectors that are particularly well-suited to the task.
Discovery 24.1.
Consider the matrix and column vectors
(a)
Compute \(A\uvec{u}\text{.}\) Carefully compare vectors \(\uvec{u}\) and \(A\uvec{u}\) — what do you notice? Now repeat for \(\uvec{v}\) and \(A\uvec{v}\text{.}\)
(b)
Verify that \(\{\uvec{u},\uvec{v}\}\) is a basis for \(\R^2\text{.}\)
(c)
Because these vectors form a basis for \(\R^2\text{,}\) every vector in \(\R^2\) can be expressed in one unique way as a linear combination of these basis vectors. We can use this fact, along with some matrix algebra and the patterns you noticed in Task a, to develop a simple way to compute products \(A\uvec{x}\) without actually performing matrix multiplication.
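A sketch of the pattern in general symbols: if \(A\uvec{u} = \lambda_1\uvec{u}\) and \(A\uvec{v} = \lambda_2\uvec{v}\text{,}\) and \(\uvec{x} = a\uvec{u} + b\uvec{v}\text{,}\) then
\begin{equation*}
A\uvec{x} = A(a\uvec{u} + b\uvec{v}) = a(A\uvec{u}) + b(A\uvec{v}) = a\lambda_1\uvec{u} + b\lambda_2\uvec{v}\text{,}
\end{equation*}
so the product \(A\uvec{x}\) can be obtained by simply scaling the coefficients in the basis expansion of \(\uvec{x}\text{.}\)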
From Discovery 24.1, it seems that pairs consisting of a scalar \(\lambda\) and (nonzero) vector \(\uvec{x}\) such that \(A\uvec{x} = \lambda \uvec{x}\) are important to understanding how matrix \(A\) “operates” on all vectors by multiplication. For such a pair, the scalar \(\lambda\) is called an eigenvalue of \(A\text{,}\) and the corresponding vector \(\uvec{x}\) is called an eigenvector for \(A\text{.}\)
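For a concrete illustration with a matrix that does not appear in the activities of this section, take
\begin{equation*}
A = \begin{bmatrix} 2 \amp 1 \\ 1 \amp 2 \end{bmatrix}\text{,} \qquad
\uvec{u} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}\text{.}
\end{equation*}
Then \(A\uvec{u} = \begin{bmatrix} 3 \\ 3 \end{bmatrix} = 3\uvec{u}\text{,}\) so \(\lambda = 3\) is an eigenvalue of this \(A\text{,}\) and \(\uvec{u}\) is a corresponding eigenvector.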
It turns out that it is easier to determine potential eigenvalues for a matrix first, and to look for corresponding eigenvectors afterwards. In the next discovery activity we will develop a method to determine all eigenvalues of a matrix, independently of determining eigenvectors.
Discovery 24.2.
For \(\lambda\) to be an eigenvalue for \(A\text{,}\) there must be at least one nontrivial solution \(\uvec{x}\) to the matrix equation \(A\uvec{x} = \lambda\uvec{x}\text{.}\)
(a)
Use matrix algebra to turn the equation \(A\uvec{x} = \lambda\uvec{x}\) into a homogeneous condition: \(\bbrac{\;\underline{\hspace{1.818181818181818em}}\;}\;\uvec{x} = \zerovec\text{.}\)
(b)
We want nontrivial solutions to exist. Combine some knowledge from Chapter 6 and Chapter 10 to complete the statement below.
The homogeneous system from Task a has nontrivial solutions if and only if \(\det\bbrac{\;\underline{\hspace{1.818181818181818em}}\;}\) is \(\underline{\hspace{1.818181818181818em}}\text{.}\)
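For reference, one way to carry out this algebra (attempt the tasks before reading on) uses the identity \(\lambda\uvec{x} = \lambda I\uvec{x}\text{:}\)
\begin{equation*}
A\uvec{x} = \lambda\uvec{x}
\quad\implies\quad
\lambda I\uvec{x} - A\uvec{x} = \zerovec
\quad\implies\quad
(\lambda I - A)\uvec{x} = \zerovec\text{,}
\end{equation*}
and this homogeneous system has nontrivial solutions exactly when the coefficient matrix \(\lambda I - A\) is not invertible, that is, exactly when \(\det(\lambda I - A) = 0\text{.}\)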
We will see that the computation of the determinant you identified in Discovery 24.2.b always results in a degree \(n\) polynomial in the variable \(\lambda\text{,}\) where \(n\) is the size of the matrix. We will call this polynomial the characteristic polynomial of \(A\text{.}\) The eigenvalues of \(A\) are then precisely the roots of its characteristic polynomial.
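For example, for the illustrative matrix \(A = \begin{bmatrix} 2 \amp 1 \\ 1 \amp 2 \end{bmatrix}\) from earlier,
\begin{equation*}
\det(\lambda I - A)
= \det\left[\begin{array}{rr} \lambda - 2 \amp -1 \\ -1 \amp \lambda - 2 \end{array}\right]
= (\lambda - 2)^2 - 1
= (\lambda - 3)(\lambda - 1)\text{,}
\end{equation*}
a degree \(2\) polynomial whose roots \(\lambda = 3\) and \(\lambda = 1\) are the eigenvalues, consistent with the eigenvalue \(\lambda = 3\) verified above.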
Discovery 24.3.
For each of the following matrices, compute its characteristic polynomial, and then use it to determine the eigenvalues of that matrix. Make sure to write your eigenvalue answers down; you will need them in Discovery 24.6.
Algebra help.
When we solve for the roots of a polynomial by hand, our main method is factoring. So when computing a characteristic polynomial, keep it in factored form as much as possible; do not expand brackets unless you need to collect like terms and then factor further. (A worked illustration of this appears after the list of matrices below.)
(a)
\(\left[\begin{array}{rr} 7 \amp 8 \\ -4 \amp -5 \end{array}\right] \)
(b)
\(\left[\begin{array}{rrr} 2 \amp -4 \amp 4 \\ 0 \amp -6 \amp 8 \\ 0 \amp -6 \amp 8 \end{array}\right] \)
(c)
\(\begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 2 \amp 0 \\ 0 \amp 0 \amp 3 \end{bmatrix} \)
(d)
\(\left[\begin{array}{rrr} 2 \amp 1 \amp 0 \\ 0 \amp 2 \amp 0 \\ 0 \amp 0 \amp -1 \end{array}\right] \)
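Here is the promised worked illustration of keeping things factored, for a matrix that is not among those above:
\begin{equation*}
B = \left[\begin{array}{rrr} 5 \amp 0 \amp 0 \\ 1 \amp 3 \amp 2 \\ 1 \amp 2 \amp 3 \end{array}\right]\text{,}
\qquad
\det(\lambda I - B)
= (\lambda - 5)\bbrac{(\lambda - 3)^2 - 4}
= (\lambda - 5)^2(\lambda - 1)\text{,}
\end{equation*}
where the first equality comes from a cofactor expansion along the first row, and \((\lambda - 3)^2 - 4\) factors as a difference of squares rather than being expanded. The eigenvalues \(\lambda = 5\) and \(\lambda = 1\) can then be read off directly from the factored form.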
Discovery 24.4.
Complete each statement for the special type of matrix involved.
- The eigenvalues of a diagonal matrix are \(\underline{\hspace{1.818181818181818em}}\text{.}\)
- The eigenvalues of an upper triangular matrix are \(\underline{\hspace{1.818181818181818em}}\text{.}\)
- The eigenvalues of a lower triangular matrix are \(\underline{\hspace{1.818181818181818em}}\text{.}\)
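A sketch of the reasoning behind all three statements (attempt the blanks first): if \(A\) is triangular or diagonal with diagonal entries \(a_{11}, a_{22}, \dotsc, a_{nn}\text{,}\) then \(\lambda I - A\) is triangular as well, and the determinant of a triangular matrix is the product of its diagonal entries, so
\begin{equation*}
\det(\lambda I - A) = (\lambda - a_{11})(\lambda - a_{22}) \dotsm (\lambda - a_{nn})\text{.}
\end{equation*}
The roots of this characteristic polynomial are precisely the diagonal entries of \(A\text{.}\)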
Once we have determined the eigenvalues of a matrix, the next step is to determine corresponding eigenvectors. We do this for one eigenvalue at a time. Fortunately, we will ultimately find ourselves in familiar territory when we go looking for eigenvectors.
Discovery 24.5.
For an eigenvalue \(\lambda\) of a matrix \(A\text{,}\) the corresponding eigenvectors are the nonzero solutions to the homogeneous system \(\underline{\hspace{1.818181818181818em}}\text{.}\) Therefore, if we include the zero vector in with the collection of all eigenvectors for \(A\) that correspond to a particular eigenvalue \(\lambda\text{,}\) this collection is a subspace of \(\R^n\) because it is equal to the \(\underline{\hspace{1.818181818181818em}}\) space of matrix \(\underline{\hspace{1.818181818181818em}}\text{.}\)
For an eigenvalue \(\lambda\) of a matrix \(A\text{,}\) the subspace of \(\R^n\) consisting of all eigenvectors of \(A\) that correspond to \(\lambda\) (along with the zero vector) is called the eigenspace of \(A\) corresponding to \(\lambda\).
Discovery 24.6.
For each of the matrices in Discovery 24.3, determine a basis for each eigenspace by row reducing the matrix \(\lambda I - A\text{,}\) assigning parameters, and extracting null space basis vectors from the general parametric solution as usual.
Note. Substitute the actual eigenvalue in for variable \(\lambda\) before row reducing — do not row reduce with the variable \(\lambda\) still in there.
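Continuing the running example \(A = \begin{bmatrix} 2 \amp 1 \\ 1 \amp 2 \end{bmatrix}\) with eigenvalue \(\lambda = 3\text{:}\)
\begin{equation*}
\lambda I - A = \left[\begin{array}{rr} 1 \amp -1 \\ -1 \amp 1 \end{array}\right]
\qquad\longrightarrow\qquad
\left[\begin{array}{rr} 1 \amp -1 \\ 0 \amp 0 \end{array}\right]\text{,}
\end{equation*}
so the general solution is \(x_1 = t\text{,}\) \(x_2 = t\text{,}\) and the eigenspace of \(A\) corresponding to \(\lambda = 3\) has the single basis vector \((1,1)\text{.}\)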
Discovery 24.7.
From the initial definition of eigenvalue/eigenvector in the paragraph following Discovery 24.1, a matrix \(A\) has \(\lambda=0\) as an eigenvalue if and only if there are nonzero solutions to \(A\uvec{x} = \underline{\hspace{0.909090909090909em}}\text{.}\)
So from our previous study of matrices, we can conclude that \(A\) has \(\lambda = 0\) as an eigenvalue precisely when \(A\) is \(\underline{\hspace{1.818181818181818em}}\text{.}\)
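As a quick check of this conclusion against a hypothetical example, the matrix
\begin{equation*}
A = \begin{bmatrix} 1 \amp 2 \\ 2 \amp 4 \end{bmatrix}
\end{equation*}
has \(\det A = 0\text{,}\) and indeed the nonzero vector \(\uvec{x} = (2,-1)\) satisfies \(A\uvec{x} = \zerovec = 0\uvec{x}\text{,}\) so \(\lambda = 0\) is an eigenvalue of this \(A\text{.}\)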