When \(A\uvec{x} = \lambda \uvec{x}\) for a nonzero column vector \(\uvec{x}\text{,}\) the vector \(\uvec{x}\) is called an eigenvector of \(A\) and the scalar \(\lambda\) is called the corresponding eigenvalue of \(A\text{.}\)
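As a quick illustration (this particular matrix and vector are ours, not part of the activity): with
\[
A = \begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}, \qquad
\uvec{x} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad
A\uvec{x} = \begin{bmatrix} 3 \\ 3 \end{bmatrix} = 3\uvec{x},
\]
the vector \(\uvec{x}\) is an eigenvector of \(A\) with corresponding eigenvalue \(\lambda = 3\text{.}\)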
Suppose \(3\times 3\) matrices \(A,P,D\) are related by \(\inv{P}AP = D\text{.}\) (Remember, order matters in matrix multiplication, so in general \(\inv{P}AP \neq A\text{.}\))
As an example, consider the diagonal matrix \(D\) below. We will leave \(A\) and \(P\) unspecified for now, but think of \(P\) as a collection of column vectors.
Our goal is to understand how \(\inv{P} A P = D\) might be possible. Multiplying on the left by \(P\) yields \(A P = P D\text{,}\) an equivalent equality that will be simpler to analyze.
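To see the pattern hiding in \(AP = PD\text{,}\) compare the two products column by column (here the column labels \(\uvec{p}_1,\uvec{p}_2,\uvec{p}_3\) and diagonal entries \(d_1,d_2,d_3\) are generic placeholders):
\begin{align*}
AP &= \begin{bmatrix} A\uvec{p}_1 & A\uvec{p}_2 & A\uvec{p}_3 \end{bmatrix}, &
PD &= \begin{bmatrix} d_1\uvec{p}_1 & d_2\uvec{p}_2 & d_3\uvec{p}_3 \end{bmatrix},
\end{align*}
so \(AP = PD\) holds exactly when \(A\uvec{p}_j = d_j\uvec{p}_j\) for each column \(\uvec{p}_j\) of \(P\text{.}\)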
In Task c, we have identified a condition for \(AP = PD\) to be true. But to get from \(AP = PD\) back to the original equation \(\inv{P}AP = D\text{,}\) we also need \(P\) to be invertible, so we need the columns of \(P\) to be ____.
For each eigenvalue of \(A\text{,}\) determine a basis for the corresponding eigenspace. That is, determine a basis for the null space of \(\lambda I - A\) by row reducing.
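If you want to check your hand row reduction by machine, here is a sketch using SymPy (the matrix \(A\) and eigenvalue below are placeholders; substitute your own):

```python
from sympy import Matrix, eye

# Placeholder matrix and eigenvalue; replace with the A and lambda
# from the activity you are working on.
A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])
lam = 2

# Basis for the eigenspace: the null space of (lambda*I - A),
# found by row reducing, which SymPy's nullspace() does internally.
basis = (lam * eye(3) - A).nullspace()
print(basis)  # [Matrix([[1], [0], [0]])] -> this eigenspace is 1-dimensional
```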
If you succeeded in meeting both conditions in the previous step, then \(\inv{P}AP\) will be a diagonal matrix. Is it possible to know what diagonal matrix \(\inv{P}AP\) will be without actually computing \(\inv{P}\) and multiplying out \(\inv{P}AP\text{?}\)
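One way to test your conjecture experimentally (the matrix here is an invented diagonalizable example; the pattern, not this particular \(A\), is the point):

```python
from sympy import Matrix

# An invented diagonalizable matrix for experimentation.
A = Matrix([[4, 0, 1],
            [2, 3, 2],
            [1, 0, 4]])

# diagonalize() returns P with eigenvector columns and the resulting D.
P, D = A.diagonalize()

# Confirm that P^{-1} A P really is D, then compare the diagonal entries
# of D with the eigenvalues of the eigenvector columns of P, in order.
assert P.inv() * A * P == D
print(D)
```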
Summarize the patterns you’ve determined in the first two activities of this discovery guide by completing the following statements in the case that \(D = \inv{P}AP\) is diagonal.
Compare the results of Discovery 22.2 and Discovery 22.4 by filling in the chart in Figure 22.1.1 below. We will call the number of times an eigenvalue is “repeated” as a root of the characteristic polynomial its algebraic multiplicity, and we will call the dimension of the eigenspace corresponding to an eigenvalue its geometric multiplicity.
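A sketch of how one might tabulate both multiplicities by machine (the matrix is a placeholder; SymPy's eigenvects returns, for each eigenvalue, its algebraic multiplicity along with a basis for its eigenspace):

```python
from sympy import Matrix

# Placeholder matrix with a repeated eigenvalue; substitute the matrices
# from Discovery 22.2 and Discovery 22.4.
A = Matrix([[3, 1, 0],
            [0, 3, 0],
            [0, 0, 5]])

# eigenvects() yields (eigenvalue, algebraic multiplicity, eigenspace basis).
for lam, alg_mult, basis in A.eigenvects():
    geo_mult = len(basis)  # geometric multiplicity = dim of the eigenspace
    print(lam, alg_mult, geo_mult)
# Here lambda = 3 has algebraic multiplicity 2 but geometric multiplicity 1,
# so this particular A cannot be diagonalized.
```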
After completing the chart, discuss: What can you conclude about algebraic and geometric multiplicities of eigenvalues with respect to attempting to find a suitable \(P\) to make \(\inv{P}AP\) diagonal?
When we attempt to form a transition matrix \(P\) to make \(\inv{P}AP\) diagonal, we need its columns to satisfy both conditions identified in Task c and Task d of Discovery 22.1.
In particular, consider the second of these two conditions. When you determine a basis for a particular eigenspace, those vectors are automatically linearly independent of each other (since they form a basis for a subspace of \(\R^n\)). However, unless \(A\) has only a single eigenvalue, you will need to combine eigenvectors from different eigenvalues to fill out the columns of \(P\text{.}\) How can we be sure that the collection of all columns of \(P\) will satisfy the condition identified in Task d of Discovery 22.1?
Suppose \(\{\uvec{v}_1,\uvec{v}_2,\uvec{v}_3\}\) is a linearly independent set of eigenvectors of \(A\text{,}\) corresponding to eigenvalue \(\lambda_1\text{,}\) and suppose \(\uvec{w}\) is an eigenvector of \(A\) corresponding to a different eigenvalue \(\lambda_2\text{.}\) (So \(\lambda_2\neq\lambda_1\text{.}\))
Set up the vector equation to begin the test for independence for the set \(\{\uvec{v}_1,\uvec{v}_2,\uvec{v}_3,\uvec{w}\}\text{.}\) Call this equation (1).
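One standard way to write equation (1) (the coefficient names \(k_1,\dotsc,k_4\) are our choice, not fixed by the activity):
\[
k_1 \uvec{v}_1 + k_2 \uvec{v}_2 + k_3 \uvec{v}_3 + k_4 \uvec{w} = \zerovec . \tag{1}
\]
The set is linearly independent exactly when this equation forces \(k_1 = k_2 = k_3 = k_4 = 0\text{.}\)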