In Chapter 20, we began to see how the interaction between a matrix and column vectors can be used to understand the matrix. Here we will find that for each square matrix there are certain column vectors that are particularly well-suited to the task.
Discovery 21.1.
(a)
Compute \(A\uvec{u}\text{.}\) Carefully compare vectors \(\uvec{u}\) and \(A\uvec{u}\) — what do you notice? Now repeat for \(\uvec{v}\) and \(A\uvec{v}\text{.}\)
(b)
Verify that \(\{\uvec{u},\uvec{v}\}\) is a basis for \(\R^2\text{.}\)
(c)
Because these vectors form a basis for \(\R^2\text{,}\) every vector in \(\R^2\) can be expressed in one unique way as a linear combination of these basis vectors. We can use this fact, along with some matrix algebra and the patterns you noticed in Task a, to develop a simple way to compute products \(A\uvec{x}\) without actually performing matrix multiplication.
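For instance, suppose Task a had revealed the patterns \(A\uvec{u} = 2\uvec{u}\) and \(A\uvec{v} = -\uvec{v}\) (illustrative values only — use whatever patterns you actually observed). Then for \(\uvec{x} = a\uvec{u} + b\uvec{v}\) we would have
\[
A\uvec{x} = A(a\uvec{u} + b\uvec{v}) = a A\uvec{u} + b A\uvec{v} = 2a\uvec{u} - b\uvec{v}\text{,}
\]
so that \(A\uvec{x}\) can be read off from the coefficients \(a\) and \(b\) with no matrix multiplication at all.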
From Discovery 21.1, it seems that pairs consisting of a scalar \(\lambda\) and (nonzero) vector \(\uvec{x}\) such that \(A\uvec{x} = \lambda \uvec{x}\) are important to understanding how matrix \(A\) “operates” on all vectors by multiplication. For such a pair, the scalar \(\lambda\) is called an eigenvalue of \(A\text{,}\) and the corresponding vector \(\uvec{x}\) is called an eigenvector for \(A\text{.}\)
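To make the definition concrete, here is a small example (an illustrative matrix, not one from the discovery activities). For
\[
A = \begin{bmatrix} 3 \amp 0 \\ 0 \amp 5 \end{bmatrix}, \qquad
\uvec{w} = \begin{bmatrix} 1 \\ 0 \end{bmatrix},
\]
we have \(A\uvec{w} = \begin{bmatrix} 3 \\ 0 \end{bmatrix} = 3\uvec{w}\text{,}\) so \(\lambda = 3\) is an eigenvalue of \(A\) and \(\uvec{w}\) is a corresponding eigenvector. Similarly, \(\lambda = 5\) is an eigenvalue with eigenvector \(\begin{bmatrix} 0 \\ 1 \end{bmatrix}\text{.}\)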
It turns out that it is easier to determine potential eigenvalues for a matrix first, and to look for corresponding eigenvectors afterwards. In the next discovery activity we will develop a method to determine all eigenvalues of a matrix, independently of determining eigenvectors.
Discovery 21.2.
For \(\lambda\) to be an eigenvalue for \(A\text{,}\) there must be at least one nontrivial solution \(\uvec{x}\) to the matrix equation \(A\uvec{x} = \lambda\uvec{x}\text{.}\)
(a)
Use matrix algebra to turn the equation \(A\uvec{x} = \lambda\uvec{x}\) into a homogeneous condition: \(\bbrac{\;\fillinmath{XXXX}\;}\;\uvec{x} = \zerovec\text{.}\)
(b)
We want nontrivial solutions to exist. Combine some knowledge from Chapter 6 and Chapter 10 to complete the statement below.
The homogeneous system from Task a has nontrivial solutions if and only if \(\det\bbrac{\;\fillinmath{XXXX}\;}\) is \(\fillinmath{XXXX}\text{.}\)
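If you get stuck on Task a above, it may help to first carry out the analogous algebra with numbers:
\[
a x = \lambda x
\quad\iff\quad
\lambda x - a x = 0
\quad\iff\quad
(\lambda - a) x = 0\text{.}
\]
Keep in mind that with matrices a difference like \(\lambda - A\) is not meaningful, so an identity matrix will need to appear somewhere in your condition.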
We will see that the computation of the determinant you identified in Discovery 21.2.b always results in a degree \(n\) polynomial in the variable \(\lambda\text{,}\) where \(n\) is the size of the matrix. We will call this polynomial the characteristic polynomial of \(A\text{.}\) The eigenvalues of \(A\) are then precisely the roots of its characteristic polynomial.
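To illustrate with the small example matrix from above (again, not one of the discovery matrices),
\[
\det(\lambda I - A)
= \det \begin{bmatrix} \lambda - 3 \amp 0 \\ 0 \amp \lambda - 5 \end{bmatrix}
= (\lambda - 3)(\lambda - 5)\text{,}
\]
a degree \(2\) polynomial in \(\lambda\) whose roots \(\lambda = 3\) and \(\lambda = 5\) are exactly the eigenvalues we observed directly.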
Discovery 21.3.
For each of the following matrices, compute its characteristic polynomial, and then use it to determine the eigenvalues of the matrix. Make sure to write your eigenvalue answers down; you will need them in Discovery 21.6.
Algebra help.
When we solve for the roots of a polynomial by hand, our main method is factoring. So when computing a characteristic polynomial, keep it in factored form as much as possible — do not expand brackets unless you must in order to collect like terms and factor further.
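For example (with an illustrative matrix of our own choosing), \(A = \begin{bmatrix} 4 \amp 2 \\ 1 \amp 3 \end{bmatrix}\) gives
\[
\det(\lambda I - A) = (\lambda - 4)(\lambda - 3) - 2\text{,}
\]
which cannot be factored as written. Here we must expand and collect, obtaining \(\lambda^2 - 7\lambda + 10 = (\lambda - 2)(\lambda - 5)\text{,}\) before the eigenvalues \(\lambda = 2\) and \(\lambda = 5\) become visible.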
Once we have determined the eigenvalues of a matrix, the next step is to determine corresponding eigenvectors. We do this for one eigenvalue at a time. Fortunately, we will ultimately find ourselves in familiar territory when we go looking for eigenvectors.
Discovery 21.5.
For an eigenvalue \(\lambda\) of a matrix \(A\text{,}\) the corresponding eigenvectors are the nonzero solutions to the homogeneous system \(\fillinmath{XXXX}\text{.}\) Therefore, if we include the zero vector with the collection of all eigenvectors for \(A\) that correspond to a particular eigenvalue \(\lambda\text{,}\) this collection is a subspace of \(\R^n\) because it is equal to the \(\fillinmath{XXXX}\) space of matrix \(\fillinmath{XXXX}\text{.}\)
For an eigenvalue \(\lambda\) of a matrix \(A\text{,}\) the subspace of \(\R^n\) consisting of all eigenvectors of \(A\) that correspond to \(\lambda\) (along with the zero vector) is called the eigenspace of \(A\) corresponding to \(\lambda\).
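In the illustrative example above, the eigenspace of \(A = \begin{bmatrix} 3 \amp 0 \\ 0 \amp 5 \end{bmatrix}\) corresponding to \(\lambda = 3\) consists precisely of the scalar multiples of \(\uvec{w} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}\text{:}\) every such multiple satisfies \(A\uvec{x} = 3\uvec{x}\text{,}\) and no other vector does.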
Discovery 21.6.
For each of the matrices in Discovery 21.3, determine a basis for each eigenspace by row reducing the matrix \(\lambda I - A\text{,}\) assigning parameters, and extracting null space basis vectors from the general parametric solution as usual.
Note. Substitute the actual eigenvalue in for the variable \(\lambda\) before row reducing — do not row reduce with the variable \(\lambda\) still in there.
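To see the procedure in miniature with our illustrative matrix \(A = \begin{bmatrix} 3 \amp 0 \\ 0 \amp 5 \end{bmatrix}\) and eigenvalue \(\lambda = 5\text{:}\) the matrix
\[
5I - A = \begin{bmatrix} 2 \amp 0 \\ 0 \amp 0 \end{bmatrix}
\]
row reduces immediately, forcing \(x_1 = 0\) and leaving \(x_2 = t\) free. The general solution \(\uvec{x} = t \begin{bmatrix} 0 \\ 1 \end{bmatrix}\) then yields the single basis vector \(\begin{bmatrix} 0 \\ 1 \end{bmatrix}\) for this eigenspace.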
Discovery 21.7.
From the initial definition of eigenvalue/eigenvector in the paragraph following Discovery 21.1, a matrix \(A\) has \(\lambda=0\) as an eigenvalue if and only if there are nonzero solutions to \(A\uvec{x} = \fillinmath{XX}\text{.}\)
So from our previous study of matrices, we can conclude that \(A\) has \(\lambda=0\) as an eigenvalue precisely when \(A\) is \(\fillinmath{XXXX}\text{.}\)
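For a quick illustration (again with a matrix of our own choosing), take \(A = \begin{bmatrix} 1 \amp 2 \\ 2 \amp 4 \end{bmatrix}\) and \(\uvec{x} = \begin{bmatrix} 2 \\ -1 \end{bmatrix}\text{.}\) Then \(A\uvec{x} = \zerovec = 0\uvec{x}\text{,}\) so \(\lambda = 0\) is an eigenvalue of \(A\text{;}\) computing \(\det A\) for this matrix will confirm the property of \(A\) that you identified above.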