Section 21.4 Concepts
Subsection 21.4.1 Determining eigenvalues
To determine eigenvectors and their corresponding eigenvalues for a specific matrix $A$, we need to solve the matrix equation $A\mathbf{x} = \lambda\mathbf{x}$ for both the unknown eigenvector $\mathbf{x}$ and the unknown eigenvalue $\lambda$. This is not like any matrix equation we have tried to solve before: the right-hand side involves unknown times unknown, making the equation nonlinear. However, as in Discovery 21.2, we can use some matrix algebra to turn this equation into something more familiar:
\begin{align*}
A\mathbf{x} = \lambda\mathbf{x} \quad &\implies \quad \mathbf{0} = \lambda\mathbf{x} - A\mathbf{x} \\
&\implies \quad \mathbf{0} = \lambda I\mathbf{x} - A\mathbf{x} \\
&\implies \quad \mathbf{0} = (\lambda I - A)\mathbf{x},
\end{align*}
where in the middle step we have used $\mathbf{x} = I\mathbf{x}$ so that a common factor of $\mathbf{x}$ can be extracted.
A particular scalar $\lambda$ will be an eigenvalue of $A$ if and only if the above homogeneous system has nontrivial solutions.
A homogeneous system with square coefficient matrix has nontrivial solutions precisely when that coefficient matrix is not invertible, which is the case precisely when the determinant of that coefficient matrix is equal to zero (Theorem 10.5.3). So there will exist eigenvectors of $A$ corresponding to a particular scalar $\lambda$ precisely when $\lambda$ is a root of the characteristic equation $\det(\lambda I - A) = 0$.
Procedure 21.4.1. To determine all eigenvalues of a square matrix $A$.
Determine the roots of the characteristic equation $\det(\lambda I - A) = 0$.
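To make the procedure concrete, here is a minimal computational sketch in Python using the sympy library; the library choice and the particular $2 \times 2$ matrix are illustrative assumptions, not part of the text.

```python
import sympy as sp

# An illustrative 2x2 matrix (chosen for this sketch, not from the text).
A = sp.Matrix([[4, 2],
               [1, 3]])
lam = sp.symbols('lambda')

# Form lambda*I - A and take its determinant to get the characteristic polynomial.
char_poly = sp.expand((lam * sp.eye(2) - A).det())
print(char_poly)                           # lambda**2 - 7*lambda + 10

# The eigenvalues are the roots of the characteristic equation.
print(sp.solve(sp.Eq(char_poly, 0), lam))  # [2, 5]
```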
Remark 21.4.2.
Because calculating $\det(\lambda I - A)$ only involves multiplication, addition, and subtraction, its result is always a polynomial in the variable $\lambda$. In fact, this polynomial will always be a monic polynomial of degree $n$ (where $A$ is $n \times n$).
This is the reason we moved $A\mathbf{x}$ to the right-hand side to obtain $\lambda I - A$ in our algebraic manipulations above, instead of moving $\lambda\mathbf{x}$ to the left-hand side to obtain $A - \lambda I$: if we had chosen this second option, the characteristic polynomial would have had a leading coefficient of $(-1)^n = \pm 1$, depending on whether $n$ was even or odd.
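If it helps to see the remark in action, the following sketch (again Python with sympy, on an arbitrary $3 \times 3$ example chosen only so that $n$ is odd) compares the two conventions:

```python
import sympy as sp

lam = sp.symbols('lambda')
# An arbitrary 3x3 example; with n = 3 odd, det(A - lambda*I) has leading coefficient -1.
A = sp.Matrix([[1, 2, 0],
               [0, 3, 1],
               [4, 0, 2]])
I = sp.eye(3)

print(sp.expand((lam * I - A).det()))  # lambda**3 - 6*lambda**2 + 11*lambda - 14 (monic)
print(sp.expand((A - lam * I).det()))  # -lambda**3 + 6*lambda**2 - 11*lambda + 14
```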
Subsection 21.4.2 Eigenvalues for special forms of matrices
In Discovery 21.4, we considered the eigenvalue procedure for diagonal and triangular matrices. Suppose $A$ is such a matrix, with values $d_1, d_2, \dotsc, d_n$ down its main diagonal. Then $\lambda I - A$ is of the same special form as $A$ (diagonal or triangular), with entries $\lambda - d_1, \lambda - d_2, \dotsc, \lambda - d_n$ down its main diagonal. Since we know that the determinant of a diagonal or triangular matrix is equal to the product of its diagonal entries (Statement 1 of Proposition 8.5.2), the characteristic polynomial for $A$ will be
\begin{equation*}
\det(\lambda I - A) = (\lambda - d_1)(\lambda - d_2) \dotsm (\lambda - d_n),
\end{equation*}
and so the eigenvalues of $A$ will be precisely its diagonal entries.
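As a quick numerical check of this special case (a sketch only; the particular matrix and the use of numpy are assumptions for illustration):

```python
import numpy as np

# An upper-triangular example with diagonal entries 5, -2, 4.
A = np.array([[5.0, 1.0, 7.0],
              [0.0, -2.0, 3.0],
              [0.0, 0.0, 4.0]])

# The eigenvalues should be exactly the diagonal entries (sorted here,
# since the order numpy returns them in is not guaranteed).
print(np.sort(np.linalg.eigvals(A)))   # [-2.  4.  5.]
```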
Subsection 21.4.3 Determining eigenvectors
Once we know all possible eigenvalues of a square matrix $A$, we can substitute those values into the matrix equation $A\mathbf{x} = \lambda\mathbf{x}$ one at a time. With a value for $\lambda$ substituted in, this matrix equation is no longer nonlinear and can be solved for all corresponding eigenvectors $\mathbf{x}$. But the homogeneous version $(\lambda I - A)\mathbf{x} = \mathbf{0}$ is more convenient to work with, since to solve this system we just need to row reduce the coefficient matrix $\lambda I - A$.
Procedure 21.4.3. To determine all eigenvectors of a square matrix $A$ that correspond to a specific eigenvalue $\lambda$.
Compute the matrix $\lambda I - A$. Then the eigenvectors corresponding to $\lambda$ are precisely the nontrivial solutions of the homogeneous system $(\lambda I - A)\mathbf{x} = \mathbf{0}$, which can be solved by row reducing $\lambda I - A$ as usual.
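Continuing the illustrative $2 \times 2$ example from earlier (again a Python/sympy sketch, not part of the procedure itself), for the eigenvalue $\lambda = 5$:

```python
import sympy as sp

A = sp.Matrix([[4, 2],
               [1, 3]])
lam = 5   # one of the eigenvalues found from the characteristic equation

# Row reduce lambda*I - A to solve the homogeneous system (lambda*I - A) x = 0.
M = lam * sp.eye(2) - A
print(M.rref())   # (Matrix([[1, -2], [0, 0]]), (0,))

# The general solution is x = t*(2, 1), so (2, 1) is an eigenvector; check it:
x = sp.Matrix([2, 1])
print(A * x)      # Matrix([[10], [5]]), which is 5*x
```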
Subsection 21.4.4 Eigenspaces
Determining eigenvectors is the same as solving the homogeneous system $(\lambda I - A)\mathbf{x} = \mathbf{0}$, so the eigenvectors of $A$ corresponding to a specific eigenvalue $\lambda$ are precisely the nonzero vectors in the null space of $\lambda I - A$. In particular, since a null space is a subspace of $\mathbb{R}^n$, we see that the collection of all eigenvectors of $A$ that correspond to a specific eigenvalue $\lambda$ creates a subspace of $\mathbb{R}^n$, once we also include the zero vector in the collection. This subspace is called the eigenspace of $A$ for eigenvalue $\lambda$, and we write $E_\lambda(A)$ for it.
Remark 21.4.4.
Since determining eigenvectors is the same as determining a null space, the typical result of carrying out Procedure 21.4.3 for a particular eigenvalue of a matrix will be to obtain a basis for the corresponding eigenspace, by row reducing, assigning parameters, and then extracting basis vectors from the general parametric solution as usual.
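In a computational setting this null space step can be carried out directly; here is a sketch reusing the illustrative matrix from above, with sympy's built-in null space routine doing the parameter assignment and extraction for us:

```python
import sympy as sp

A = sp.Matrix([[4, 2],
               [1, 3]])

# For each eigenvalue, a basis for the eigenspace is a basis for the
# null space of lambda*I - A; nullspace() extracts it from the rref.
for lam in [2, 5]:
    print(lam, (lam * sp.eye(2) - A).nullspace())
# 2 [Matrix([[-1], [1]])]
# 5 [Matrix([[2], [1]])]
```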
Subsection 21.4.5 Connection to invertibility
Recall that we do not call the zero vector an eigenvector of a square matrix $A$, because it would not correspond to one specific eigenvalue: the equality $A\mathbf{0} = \lambda\mathbf{0}$ is true for all scalars $\lambda$. However, the scalar $\lambda = 0$ can (possibly) be an eigenvalue for a matrix, and we explored this possibility in Discovery 21.7.
In the case of $\lambda = 0$, the matrix equation $A\mathbf{x} = \lambda\mathbf{x}$ turns into the homogeneous system $A\mathbf{x} = \mathbf{0}$. And for $\lambda = 0$ to actually be an eigenvalue of $A$, there need to be nontrivial solutions to this equation, which we know will occur precisely when $A$ is not invertible (Theorem 6.5.2).
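A small numerical illustration of this connection (the singular matrix below is an arbitrary choice for the sketch):

```python
import numpy as np

# A singular matrix: its second row is twice its first, so det(A) = 0.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(np.linalg.det(A))               # 0.0 (up to floating point): A is not invertible
print(np.sort(np.linalg.eigvals(A)))  # approximately [0. 5.]: 0 shows up as an eigenvalue
```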
Subsection 21.4.6 The geometry of eigenvectors
Multiplication of column vectors by a particular matrix $A$ can be thought of as a sort of function, i.e. an input-output process. But unlike the types of functions you are probably used to encountering, where the input is a number $x$ and the output is a number $y$, this matrix-multiplication sort of function has a column vector $\mathbf{x}$ as input and a column vector $A\mathbf{x}$ as output.
When the particular matrix used to form such a function is square, then the input and output vectors live in the same space (i.e. $\mathbb{R}^n$, where $n$ is the size of the matrix), so we can think of the matrix as transforming an input vector into its corresponding output vector geometrically. See Figure 21.4.5 for an example of this geometric transformation point of view.
Figure 21.4.5. Example of matrix multiplication as a function, considered geometrically in the plane.
When the input vector $\mathbf{x}$ is an eigenvector of the transformation matrix $A$, then the output vector $A\mathbf{x}$ is a scalar multiple of $\mathbf{x}$ (where the scale factor is the corresponding eigenvalue). See Figure 21.4.6 for a geometric example of this view of eigenvectors.
Figure 21.4.6. Example of a matrix transformation of an eigenvector and a noneigenvector.
Geometrically, one vector is a scalar multiple of another if and only if the two vectors are parallel. So we can say that a vector is an eigenvector of a matrix precisely when it is transformed to a parallel vector when multiplied by the matrix.
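This parallel-output picture is easy to test numerically; a final sketch, reusing the earlier illustrative matrix and its eigenvector $(2, 1)$:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

x = np.array([2.0, 1.0])   # an eigenvector of A for eigenvalue 5 (found earlier)
y = np.array([1.0, 0.0])   # not an eigenvector of A

print(A @ x)   # [10.  5.] -- equal to 5*x, so the output is parallel to x
print(A @ y)   # [4. 1.]   -- not a scalar multiple of y, so not parallel to y
```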