
Section B.3 Diagonal form

In this example we apply the diagonalization procedure to the \(3 \times 3\) matrix

\begin{equation*} A = \begin{bmatrix} 7 \amp -12 \amp -4 \\ 4 \amp -9 \amp -4 \\ -4 \amp 12 \amp 7 \end{bmatrix} \text{.} \end{equation*}
Eigenvalues.

We can calculate the characteristic polynomial ourselves.
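Expanding \(\det(\lambda I - A)\) by cofactors and factoring, we find

\begin{equation*} \det(\lambda I - A) = \lambda^3 - 5 \lambda^2 + 3 \lambda + 9 = (\lambda - 3)^2 (\lambda + 1) \text{.} \end{equation*}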

Or we can have Sage do it.
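For instance, a cell along these lines will do it (the variable name lam in the characteristic polynomial is just our choice).

A = matrix(QQ, [[7, -12, -4], [4, -9, -4], [-4, 12, 7]])
A.charpoly('lam').factor()    # characteristic polynomial of A, in factored form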

So the eigenvalues are \(\lambda = -1\) and \(\lambda = 3\text{.}\) But we could have had Sage do this for us.
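For instance, the following cell returns the eigenvalues as a list.

A = matrix(QQ, [[7, -12, -4], [4, -9, -4], [-4, 12, 7]])
A.eigenvalues()    # the eigenvalues of A, as a list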

Notice that the output is a list of eigenvalues, with each eigenvalue repeated in the list according to its algebraic multiplicity.

Eigenvectors.

Let's analyze \(\lambda = 3\text{,}\) since this eigenvalue had algebraic multiplicity \(2\text{.}\) We also need geometric multiplicity \(2\) in order for matrix \(A\) to be diagonalizable; if this geometric multiplicity is only \(1\text{,}\) then analyzing \(\lambda = -1\) would be a waste of time.
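One way to check this in Sage is to row reduce \(A - 3I\) and count the free variables.

A = matrix(QQ, [[7, -12, -4], [4, -9, -4], [-4, 12, 7]])
(A - 3*identity_matrix(3)).rref()    # a single nonzero row, leaving two free variables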

As desired, two parameters are required, so we may proceed. Assigning \(x_2 = s\) and \(x_3 = t\) leads to independent eigenvectors

\begin{align*} \uvec{p}_1 \amp = (3,1,0) \text{,} \amp \uvec{p}_2 \amp = (1,0,1) \text{.} \end{align*}
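We might confirm this in Sage along the following lines, where p1 and p2 stand for \(\uvec{p}_1\) and \(\uvec{p}_2\text{.}\)

A = matrix(QQ, [[7, -12, -4], [4, -9, -4], [-4, 12, 7]])
p1 = vector(QQ, [3, 1, 0])
p2 = vector(QQ, [1, 0, 1])
A*p1, A*p2    # gives (9, 3, 0) and (3, 0, 3), which are 3*p1 and 3*p2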

Notice that \(A \uvec{p}_j = 3 \uvec{p}_j\) for each of these two vectors, as expected.

Now let's analyze \(\lambda = -1\text{.}\)
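A similar row reduction in Sage, this time of \(A - (-1) I = A + I\text{,}\) shows the pattern.

A = matrix(QQ, [[7, -12, -4], [4, -9, -4], [-4, 12, 7]])
(A + identity_matrix(3)).rref()    # two nonzero rows, leaving one free variable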

We need one parameter, as expected from the algebraic multiplicity. Assigning \(x_3 = t\) leads to eigenvector \(\uvec{p}_3 = (-1,-1,1)\text{.}\)
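And a quick Sage check, with p3 standing for \(\uvec{p}_3\text{:}\)

A = matrix(QQ, [[7, -12, -4], [4, -9, -4], [-4, 12, 7]])
p3 = vector(QQ, [-1, -1, 1])
A*p3    # gives (1, 1, -1), which is -p3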

Notice that \(A \uvec{p}_3 = - \uvec{p}_3\text{,}\) as expected.

The transition matrix.

We now have our basis of independent eigenvectors to form the transition matrix \(P\text{.}\)
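In Sage, one way is to build \(P\) with the eigenvectors as columns and compute its rank.

p1 = vector(QQ, [3, 1, 0]); p2 = vector(QQ, [1, 0, 1]); p3 = vector(QQ, [-1, -1, 1])
P = matrix([p1, p2, p3]).transpose()    # transpose so the eigenvectors become the columns of P
P.rank()    # rank 3, so the columns are linearly independent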

The rank computation verifies that the columns of \(P\) are linearly independent (but our theory also tells us this: eigenvectors from different eigenspaces are always independent).

The diagonal matrix.

Let's compute \(\inv{P} A P\) in two different ways. First, directly.
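For instance, rebuilding \(A\) and \(P\) so the cell stands on its own:

A = matrix(QQ, [[7, -12, -4], [4, -9, -4], [-4, 12, 7]])
P = matrix(QQ, [[3, 1, -1], [1, 0, -1], [0, 1, 1]])    # columns are p1, p2, p3
P.inverse() * A * P    # diagonal, with 3, 3, -1 down the diagonal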

Notice the order of the eigenvalues down the diagonal — this is due to the order we placed the eigenvectors as columns in \(P\text{.}\) If we create a different transition matrix, we may get a different diagonal matrix.

Now let's use the idea in Subsection 26.4.2 to affirm that \(\inv{P} A P\) can also be computed by row reduction: we augment \(P\) with the result of \(A P\text{;}\) reducing the \(P\) part on the left to the identity then simultaneously applies \(\inv{P}\) to the \(A P\) part on the right.

\begin{equation*} \left[\begin{array}{c|c} P \amp AP \end{array}\right] \qquad \rowredarrow \qquad \left[\begin{array}{c|c} I \amp \inv{P}(AP) \end{array}\right] \end{equation*}
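In Sage we can build the augmented matrix and reduce it in a single cell.

A = matrix(QQ, [[7, -12, -4], [4, -9, -4], [-4, 12, 7]])
P = matrix(QQ, [[3, 1, -1], [1, 0, -1], [0, 1, 1]])    # columns are p1, p2, p3
M = P.augment(A*P)    # the augmented matrix [ P | AP ]
M.rref()              # left block reduces to I, right block becomes P^{-1}AP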

As expected, there's our diagonal matrix on the right. We can use Python-style slice notation to extract it, if we like.
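For instance, with M naming the reduced augmented matrix from the previous step:

A = matrix(QQ, [[7, -12, -4], [4, -9, -4], [-4, 12, 7]])
P = matrix(QQ, [[3, 1, -1], [1, 0, -1], [0, 1, 1]])
M = P.augment(A*P).rref()    # the reduced augmented matrix again
M[:, 3:6]                    # all rows, columns 3 through 5: the diagonal block on the right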

The first slice : says “take all rows.” The second slice 3:6 says “take columns with index starting at 3 and ending before 6.” Remember that Python uses 0-based indexing, so the column with index 3 is actually the fourth column. And the last column has index 5, so we want to stop before 6 to include it.