If \(P\) is a square matrix, write \(\uvec{p}_1, \uvec{p}_2, \dotsc, \uvec{p}_n\) for the columns of \(P\text{,}\) so that \(P = \begin{bmatrix} \uvec{p}_1 \amp \uvec{p}_2 \amp \cdots \amp \uvec{p}_n \end{bmatrix} \text{.}\)
The similarity relation \(\inv{P} A P = B\) holds if and only if each column of \(B\) consists of the coefficients for expressing the corresponding transformed vector \(A \uvec{p}_j\) as a linear combination of the columns of \(P\) (Subsection 26.3.2).
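For instance, rewriting \(\inv{P} A P = B\) as \(A P = P B\) and comparing \(j\)th columns in the \(3 \times 3\) case gives
\begin{equation*}
A \uvec{p}_j = b_{1j} \uvec{p}_1 + b_{2j} \uvec{p}_2 + b_{3j} \uvec{p}_3 \text{,}
\end{equation*}
where \(b_{1j}, b_{2j}, b_{3j}\) denote the entries of the \(j\)th column of \(B\text{.}\)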
A nonzero vector \(\uvec{x}\) in \(\R^n\) is an eigenvector of \(A\) if \(A\uvec{x} = \lambda \uvec{x}\) for some scalar \(\lambda\text{.}\) Equivalently, \(\uvec{x}\) is an eigenvector of \(A\) if it is a nontrivial solution of \((\lambda I - A) \uvec{x} = \uvec{0}\text{.}\)
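For example, with the \(2 \times 2\) matrix below (chosen only for illustration, not one from this worksheet),
\begin{equation*}
A = \begin{bmatrix} 2 \amp 1 \\ 0 \amp 3 \end{bmatrix} \text{,} \qquad
A \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 3 \end{bmatrix} = 3 \begin{bmatrix} 1 \\ 1 \end{bmatrix} \text{,}
\end{equation*}
so the vector with both entries equal to \(1\) is an eigenvector of \(A\) with eigenvalue \(\lambda = 3\text{.}\)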
If \(A\) is similar to a triangular matrix \(T\) (upper or lower, whichever you like), how many entries of \(T\) can you be sure about? Write down an example matrix \(T\) to demonstrate your answer.
Let \(P\) be a transition matrix that realizes the similarity \(\inv{P}AP = B\text{.}\) As usual, we would like to determine the conditions on the columns of \(P\) that create the similarity relationship between \(A\) and \(B\text{.}\)
Remembering that the \(\ast\) entries in \(B\) are “don’t care” values, could you use the same kind of vector for \(\uvec{p}_2\) as for \(\uvec{p}_1\text{,}\) and still get the proper form for the second column of \(B\text{?}\)
Rearrange your equality expressing \(A \uvec{p}_3\) as a linear combination into an equation of the form
\begin{equation*}
(\lambda I - A) \uvec{p}_3 = \fillinmath{XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX} \text{,}
\end{equation*}
where \(\lambda\) is the shared single eigenvalue of \(A\) and \(B\text{.}\) (Did you realize you did something similar to compute \(\uvec{p}_1\text{?}\) See the reminder about eigenvalues and eigenvectors in the introduction to this worksheet.)
From the property of \(\uvec{w}\) you identified, this vector must satisfy the homogeneous matrix equation \(\fillinmath{XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX}\text{.}\) (Again, see the introduction of this worksheet.)
We found \((\lambda I - A) \uvec{p}_3\) needed to be in the span of \(\uvec{p}_1\) and \(\uvec{p}_2\text{.}\) What span does \((\lambda I - A) \uvec{p}_4\) need to be in? Will it be okay if it is forced to also be in the span of just \(\uvec{p}_1\) and \(\uvec{p}_2\) instead? (Remember that the \(\ast\) entries of \(B\) are “don’t care” values.)
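To recall where the pattern for \(\uvec{p}_3\) came from: assuming the third column of \(B\) has \(\ast\) entries above a diagonal entry \(\lambda\) and zeros below, comparing third columns in \(A P = P B\) gives \(A \uvec{p}_3 = \ast \uvec{p}_1 + \ast \uvec{p}_2 + \lambda \uvec{p}_3\text{,}\) which rearranges to
\begin{equation*}
(\lambda I - A) \uvec{p}_3 = - ( \ast \uvec{p}_1 + \ast \uvec{p}_2 ) \text{,}
\end{equation*}
a vector in the span of \(\uvec{p}_1\) and \(\uvec{p}_2\text{.}\) The same comparison with fourth columns will tell you what \((\lambda I - A) \uvec{p}_4\) must be.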
If you’ve made it this far, repeat the kind of reasoning that we used to determine \(\uvec{p}_3\) in Task d to figure out how to solve for a suitable \(\uvec{p}_5\text{.}\)
The collection of vectors that are in the null space of \((\lambda I - A)^k\) for at least one positive exponent \(k\) is called the generalized eigenspace of \(A\) for the eigenvalue \(\lambda\text{,}\) and is denoted \(G_{\lambda}(A)\text{.}\)
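As a small illustrative example (not one from this worksheet), take
\begin{equation*}
A = \begin{bmatrix} \lambda \amp 1 \\ 0 \amp \lambda \end{bmatrix} \text{,} \qquad
\lambda I - A = \begin{bmatrix} 0 \amp -1 \\ 0 \amp 0 \end{bmatrix} \text{,} \qquad
(\lambda I - A)^2 = \begin{bmatrix} 0 \amp 0 \\ 0 \amp 0 \end{bmatrix} \text{.}
\end{equation*}
Here every vector in \(\R^2\) lies in \(G_{\lambda}(A)\text{,}\) even though the eigenvectors of \(A\) are only the nonzero scalar multiples of the first standard basis vector.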
Careful: When you check closure under addition, you cannot assume that both arbitrary vectors from \(G_{\lambda}(A)\) are in the null space of the same power of \((\lambda I - A)\text{.}\)
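One standard way around this: if \((\lambda I - A)^k \uvec{u} = \uvec{0}\) and \((\lambda I - A)^m \uvec{v} = \uvec{0}\text{,}\) then both vectors are annihilated by the single larger power \(N = \max(k, m)\text{,}\) since \((\lambda I - A)^N = (\lambda I - A)^{N - k} (\lambda I - A)^k\) and similarly for \(m\text{.}\) Then
\begin{equation*}
(\lambda I - A)^N (\uvec{u} + \uvec{v}) = (\lambda I - A)^N \uvec{u} + (\lambda I - A)^N \uvec{v} = \uvec{0} \text{.}
\end{equation*}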
Use the pattern of similarity described in the introduction of this discovery guide to show that if \(U\) is an upper triangular matrix of the same size as \(J\text{,}\) then \(\inv{J}UJ\) is lower triangular.
Instead of reworking your argument from Task b to handle this case, you can save yourself some work by noticing that \(\utrans{J} = J\text{,}\) and then directly using the result of Task b.
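One way this shortcut can play out, assuming (as holds for the exchange-type matrix usually called \(J\text{,}\) an assumption not stated in this excerpt) that \(J^2 = I\text{,}\) so that \(\inv{J} = J\text{:}\) take \(L\) lower triangular, and let \(U = \utrans{L}\text{,}\) which is upper triangular. Then
\begin{equation*}
\inv{J} L J = J L J = \utrans{( J \utrans{L} J )} = \utrans{\bigl( \inv{J} U J \bigr)} \text{,}
\end{equation*}
and the result of Task b says \(\inv{J} U J\) is lower triangular, so its transpose \(\inv{J} L J\) is upper triangular.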