
Section 22.5 Examples

Subsection 22.5.1 Carrying out the diagonalization procedure

Let’s start with some examples from Discovery guide 22.1.

Example 22.5.1. A diagonalizable matrix.

From Discovery 22.2. We want to compute a basis for each eigenspace, using the same method as in the examples of Section 21.5. First, we form the matrix
\[
\lambda I - A =
\begin{bmatrix}
\lambda + 1 & -9 & 0 \\
0 & \lambda - 2 & 0 \\
0 & -3 & \lambda + 1
\end{bmatrix},
\]
and compute its determinant,
\[
c_A(\lambda) = \det(\lambda I - A)
= (\lambda + 1)\bigl[(\lambda - 2)(\lambda + 1) - 0 \cdot (-3)\bigr]
= (\lambda + 1)^2 (\lambda - 2),
\]
to obtain the characteristic polynomial of A. The eigenvalues are λ1=−1 and λ2=2. The first eigenvalue is repeated as a root of the characteristic polynomial, while the second is not, so we have algebraic multiplicities m1=2 for λ1 and m2=1 for λ2. Therefore, we will need two linearly independent eigenvectors from Eλ1(A) and one more from Eλ2(A).
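As a quick numerical cross-check (not part of the procedure itself), we can recompute the characteristic polynomial with NumPy. The entries of A below are read off from the eigenvalue computations in this example; treat the signs as an inference, not a quotation of the original matrix.

```python
import numpy as np

# Matrix A as inferred from this example's eigenvalue computations (an assumption).
A = np.array([[-1.0, 9.0,  0.0],
              [ 0.0, 2.0,  0.0],
              [ 0.0, 3.0, -1.0]])

# np.poly(A) returns the characteristic polynomial's coefficients, highest
# degree first: lambda^3 - 3*lambda - 2 = (lambda + 1)^2 (lambda - 2).
coeffs = [round(c) for c in np.poly(A)]
print(coeffs)  # [1, 0, -3, -2]
```

Expanding (λ+1)²(λ−2) by hand gives λ³ − 3λ − 2, matching the computed coefficients.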
We determine eigenspace bases by row reducing, assigning parameters, and extracting basis vectors. Here are the results for this matrix:
\[
(-1) I - A =
\begin{bmatrix}
0 & -9 & 0 \\
0 & -3 & 0 \\
0 & -3 & 0
\end{bmatrix}
\xrightarrow{\text{row reduce}}
\begin{bmatrix}
0 & 1 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}
\qquad\implies\qquad
E_{\lambda_1}(A) = \operatorname{Span}\left\{
\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
\right\},
\]
\[
2 I - A =
\begin{bmatrix}
3 & -9 & 0 \\
0 & 0 & 0 \\
0 & -3 & 3
\end{bmatrix}
\xrightarrow{\text{row reduce}}
\begin{bmatrix}
1 & 0 & -3 \\
0 & 1 & -1 \\
0 & 0 & 0
\end{bmatrix}
\qquad\implies\qquad
E_{\lambda_2}(A) = \operatorname{Span}\left\{
\begin{bmatrix} 3 \\ 1 \\ 1 \end{bmatrix}
\right\}.
\]
Notice that dim(Eλ1(A))=2, so geometric multiplicity equals algebraic multiplicity for λ1. Also, dim(Eλ2(A))=1, so again geometric multiplicity equals algebraic multiplicity for λ2.
Let’s pause to consider the result for eigenvalue λ2. We should have expected the result for the geometric multiplicity of eigenvalue λ2 from the relationship between algebraic and geometric multiplicities stated in Subsection 22.4.2. If we believe that a geometric multiplicity can never be greater than the corresponding algebraic multiplicity, then eigenspace Eλ2(A) in this example could never have dimension greater than 1. On the other hand, an eigenspace can never have dimension 0, because the definition of eigenvalue requires the existence of nonzero eigenvectors. So the dimension of Eλ2(A) is forced to be 1, without actually checking.
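The geometric multiplicities can also be confirmed numerically, since the geometric multiplicity of λ is the nullity of λI − A. Here is a short NumPy sketch; as before, the entries of A are inferred from this example's computations.

```python
import numpy as np

# Matrix A as inferred from this example's eigenvalue computations (an assumption).
A = np.array([[-1.0, 9.0,  0.0],
              [ 0.0, 2.0,  0.0],
              [ 0.0, 3.0, -1.0]])
I = np.eye(3)

# geometric multiplicity of lambda = nullity of (lambda*I - A)
#                                  = n - rank(lambda*I - A)
geo_mult_neg1 = 3 - np.linalg.matrix_rank(-1 * I - A)
geo_mult_2    = 3 - np.linalg.matrix_rank( 2 * I - A)
print(geo_mult_neg1, geo_mult_2)  # 2 1
```

Both geometric multiplicities equal the corresponding algebraic multiplicities, which is exactly what diagonalizability requires.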
Returning to our procedure, we can see by inspection that the eigenspace basis vector for λ2 is linearly independent from the ones for λ1, so when we form the transition matrix
\[
P = \begin{bmatrix}
1 & 0 & 3 \\
0 & 0 & 1 \\
0 & 1 & 1
\end{bmatrix},
\]
it will be invertible because its columns are linearly independent. And we can determine the diagonal form matrix P1AP without calculating P1, because its diagonal entries should be precisely the eigenvalues of A, with the same multiplicities and order as the corresponding columns of P. In this case,
\[
P^{-1} A P = \begin{bmatrix}
-1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & 2
\end{bmatrix}.
\]
Finally, note that we could have analyzed the eigenvalues in the opposite order, in which case we would have formed transition matrix
\[
Q = \begin{bmatrix}
3 & 1 & 0 \\
1 & 0 & 0 \\
1 & 0 & 1
\end{bmatrix},
\]
obtaining diagonal form matrix
\[
Q^{-1} A Q = \begin{bmatrix}
2 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & -1
\end{bmatrix}.
\]
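If you have NumPy at hand, you can verify both diagonal forms directly, without computing any inverses by hand. The transition matrices P and Q are exactly as in the example; the entries of A are inferred from its eigenvalue computations.

```python
import numpy as np

# Matrix A as inferred from this example (an assumption); P and Q as in the text.
A = np.array([[-1.0, 9.0,  0.0],
              [ 0.0, 2.0,  0.0],
              [ 0.0, 3.0, -1.0]])
P = np.array([[1.0, 0.0, 3.0],
              [0.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = np.array([[3.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

# Each product should be diagonal, with the eigenvalues appearing in the
# same order as the corresponding eigenvector columns.
D1 = np.linalg.inv(P) @ A @ P
D2 = np.linalg.inv(Q) @ A @ Q
print(np.allclose(D1, np.diag([-1.0, -1.0, 2.0])))  # True
print(np.allclose(D2, np.diag([2.0, -1.0, -1.0])))  # True
```

Note how reordering the eigenvector columns simply reorders the eigenvalues down the diagonal.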

Example 22.5.2. A nondiagonalizable matrix.

From Discovery 22.4. This matrix is upper triangular, so we can see directly that the eigenvalues are λ1=−1 with algebraic multiplicity 2, and λ2=2 with algebraic multiplicity 1. Analyze the eigenspaces:
\[
(-1) I - A =
\begin{bmatrix}
0 & -1 & 0 \\
0 & 0 & 0 \\
0 & 0 & -3
\end{bmatrix}
\xrightarrow{\text{row reduce}}
\begin{bmatrix}
0 & 1 & 0 \\
0 & 0 & 1 \\
0 & 0 & 0
\end{bmatrix}
\qquad\implies\qquad
E_{\lambda_1}(A) = \operatorname{Span}\left\{
\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
\right\},
\]
\[
2 I - A =
\begin{bmatrix}
3 & -1 & 0 \\
0 & 3 & 0 \\
0 & 0 & 0
\end{bmatrix}
\xrightarrow{\text{row reduce}}
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0
\end{bmatrix}
\qquad\implies\qquad
E_{\lambda_2}(A) = \operatorname{Span}\left\{
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
\right\}.
\]
We could have stopped after our analysis of λ1, since its geometric multiplicity is only 1, whereas we needed it to be equal to the algebraic multiplicity 2. Since we cannot obtain enough linearly independent eigenvectors from these two eigenspaces to fill out a 3×3 transition matrix P, matrix A is not diagonalizable.
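The deficiency can be confirmed with the same nullity computation as before. In this sketch the entries of A are inferred from the displays in this example (an upper triangular matrix with diagonal −1, −1, 2); the key fact is that the eigenspace for λ1 = −1 is only one-dimensional.

```python
import numpy as np

# Upper triangular matrix as inferred from this example (an assumption).
A = np.array([[-1.0,  1.0, 0.0],
              [ 0.0, -1.0, 0.0],
              [ 0.0,  0.0, 2.0]])
I = np.eye(3)

# lambda = -1 has algebraic multiplicity 2, but its geometric multiplicity
# (the nullity of (-1)I - A) is only 1, so A is not diagonalizable.
geo_mult_neg1 = 3 - np.linalg.matrix_rank(-1 * I - A)
print(geo_mult_neg1)  # 1
```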

Remark 22.5.3.

The matrices in the two examples above had the same eigenvalues with the same algebraic multiplicities, but one matrix was diagonalizable and the other was not. The difference was in the geometric multiplicities of the eigenvalues, which play a crucial role in determining whether a matrix is diagonalizable.

Subsection 22.5.2 Determining diagonalizability from multiplicities

Here is an example where we only concern ourselves with the question of whether a matrix is diagonalizable, without attempting to build a transition matrix P.
Is the matrix
\[
A = \begin{bmatrix}
-1 & 0 & -12 & 0 \\
0 & 1 & -8 & 0 \\
0 & 0 & 5 & 0 \\
-4 & 0 & -4 & 3
\end{bmatrix}
\]
diagonalizable? Compute the characteristic polynomial:
\[
\det(\lambda I - A) =
\begin{vmatrix}
\lambda + 1 & 0 & 12 & 0 \\
0 & \lambda - 1 & 8 & 0 \\
0 & 0 & \lambda - 5 & 0 \\
4 & 0 & 4 & \lambda - 3
\end{vmatrix}
= (\lambda - 5)(\lambda + 1)(\lambda - 1)(\lambda - 3).
\]
So the eigenvalues are λ=−1,1,3,5, each with algebraic multiplicity 1. But an eigenspace must contain nonzero eigenvectors, so every eigenvalue has geometric multiplicity at least 1. Since we will be able to obtain an eigenvector from each of the four eigenvalues, we’ll be able to fill the four columns of the transition matrix P with linearly independent eigenvectors. Therefore, A is diagonalizable.
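A short NumPy sketch confirms the four distinct eigenvalues; since each is simple, that alone settles diagonalizability. The off-diagonal signs of A below are inferred (an assumption), but they do not affect the characteristic polynomial here.

```python
import numpy as np

# The 4x4 matrix from this subsection (off-diagonal signs inferred).
A = np.array([[-1.0, 0.0, -12.0, 0.0],
              [ 0.0, 1.0,  -8.0, 0.0],
              [ 0.0, 0.0,   5.0, 0.0],
              [-4.0, 0.0,  -4.0, 3.0]])

# Four distinct eigenvalues of a 4x4 matrix guarantee four linearly
# independent eigenvectors, hence diagonalizability.
eigenvalues = sorted(int(round(v.real)) for v in np.linalg.eigvals(A))
print(eigenvalues)  # [-1, 1, 3, 5]
```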

Remark 22.5.4.

The analysis used in the above example only works for eigenvalues of algebraic multiplicity 1. If an eigenvalue has algebraic multiplicity greater than 1, then we still must row reduce λI−A to determine the geometric multiplicity of the eigenvalue. However, if all we are concerned with is the question of diagonalizability, then we don’t need to carry out the full procedure — we can stop row reducing as soon as we can see how many parameters will be required, since this tells us the dimension of the eigenspace.

Subsection 22.5.3 A different kind of example

Is the matrix
\[
A = \begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}
\]
diagonalizable? Compute the characteristic polynomial:
\[
\det(\lambda I - A) =
\begin{vmatrix}
\lambda & 1 \\
-1 & \lambda
\end{vmatrix}
= \lambda^2 + 1.
\]
But λ^2+1=0 has no real solutions, so A has no real eigenvalues and therefore cannot be diagonalizable.

A look ahead.

In future studies in linear algebra you may study matrix forms in more detail, in which case you will likely work with complex vector spaces, where scalars are allowed to be complex numbers. In that context, the matrix in the last example above does have eigenvalues, and in fact can be diagonalized.
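The complex eigenvalues promised above can be seen numerically: NumPy works over the complex numbers by default, so it happily reports the eigenvalues ±i for this rotation matrix. (The sign pattern of A is taken from the example; either sign choice for the off-diagonal entries gives the same characteristic polynomial λ²+1.)

```python
import numpy as np

# The 2x2 rotation matrix from the last example.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Over the complex numbers, A has the two distinct eigenvalues +i and -i,
# so it is diagonalizable there even though it has no real eigenvalues.
eigenvalues = np.linalg.eigvals(A)
print(np.allclose(sorted(eigenvalues.imag), [-1.0, 1.0]))  # True
print(np.allclose(eigenvalues.real, 0.0))                  # True
```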