
Section 46.3 Examples

Subsection 46.3.1 Using a “natural” basis for an operator

Example 46.3.1. Projection onto a line in \(\R^2\).

Let \(\funcdef{T}{\R^2}{\R^2}\) be the operator that orthogonally projects vectors onto the line

\begin{equation*} \ell = \Span \{ (1,2) \} \text{.} \end{equation*}

A “natural” basis to use to analyze the action of \(T\) on \(\R^2\) is an orthogonal basis consisting of one vector parallel to and one vector orthogonal to \(\ell\text{.}\) So let's take

\begin{equation*} \basisfont{B} = \{ (1,2), (-2,1) \} \text{.} \end{equation*}

From the calculations

\begin{gather*} T(1,2) = (1,2) = 1 \, (1,2) + 0 \, (-2,1) \text{,} \\ T(-2,1) = \zerovec = 0 \, (1,2) + 0 \, (-2,1) \text{,} \end{gather*}

we obtain

\begin{equation*} \matrixOf{T}{B} = \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix} \text{.} \end{equation*}

As this is a diagonal matrix, we can say that \(T\) is a diagonalizable operator.

If we would like to obtain the matrix \(\matrixOf{T}{S}\) relative to the standard basis \(\basisfont{S}\text{,}\) we can use a change of basis matrix. First form the transition matrix

\begin{equation*} \ucobmtrx{B}{S} = \left[\begin{array}{cr} 1 \amp -2 \\ 2 \amp 1 \end{array}\right] \end{equation*}

and calculate the reverse transition matrix

\begin{equation*} \ucobmtrx{S}{B} = \uinvcobmtrx{B}{S} = \frac{1}{5} \; \left[\begin{array}{rc} 1 \amp 2 \\ - 2 \amp 1 \end{array}\right] \text{.} \end{equation*}

Then we have

\begin{align*} \matrixOf{T}{S} \amp = \ucobmtrx{B}{S} \matrixOf{T}{B} \ucobmtrx{S}{B} \\ \amp = \left[\begin{array}{cr} 1 \amp -2 \\ 2 \amp 1 \end{array}\right] \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix} \left(\frac{1}{5} \; \left[\begin{array}{rc} 1 \amp 2 \\ - 2 \amp 1 \end{array}\right]\right)\\ \amp = \frac{1}{5} \; \begin{bmatrix} 1 \amp 2 \\ 2 \amp 4 \end{bmatrix} \text{.} \end{align*}

Note that even though this is not a diagonal matrix, it does not contradict the fact that \(T\) is diagonalizable, as we only require that the matrix of \(T\) be diagonal relative to at least one basis of the domain space, not all bases.
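The change-of-basis computation above can be checked numerically. The following sketch (not part of the text; it uses NumPy purely for illustration) builds the transition matrix, conjugates the diagonal matrix \(\matrixOf{T}{B}\text{,}\) and confirms the result both against the computed \(\matrixOf{T}{S}\) and against the geometric behaviour of projection onto \(\ell\text{:}\)

```python
import numpy as np

# Columns of P are the B-basis vectors (1,2) and (-2,1),
# so P is the transition matrix from B to S.
P = np.array([[1.0, -2.0],
              [2.0,  1.0]])
T_B = np.diag([1.0, 0.0])        # [T]_B, diagonal in the natural basis

# [T]_S = P [T]_B P^{-1}
T_S = P @ T_B @ np.linalg.inv(P)

expected = np.array([[1.0, 2.0],
                     [2.0, 4.0]]) / 5
assert np.allclose(T_S, expected)

# Sanity check: T_S fixes vectors on the line and kills orthogonal ones.
assert np.allclose(T_S @ np.array([1.0, 2.0]), [1.0, 2.0])
assert np.allclose(T_S @ np.array([-2.0, 1.0]), [0.0, 0.0])
```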

Example 46.3.2. Transpose of matrices.

Consider \(\funcdef{T}{\matrixring_n(\R)}{\matrixring_n(\R)}\) by

\begin{equation*} T(A) = \utrans{A} \text{.} \end{equation*}

As usual, let

\begin{equation*} \basisfont{S} = \{ E_{ij} \} \end{equation*}

represent the standard basis of \(\matrixring_n(\R)\text{,}\) where \(E_{ij}\) is the \(n \times n\) matrix with a \(1\) in the \((i,j)\) entry and \(0\) in every other entry. Then

\begin{equation*} T(E_{ij}) = E_{ji} \text{,} \end{equation*}

so the matrix \(\matrixOf{T}{S}\) is not difficult to calculate.

However, we can also use these standard basis vectors to create a basis of \(\matrixring_n(\R)\) consisting of eigenvectors. Write

\begin{equation*} \uvec{d}_i = E_{ii} \end{equation*}

for the diagonal matrices in \(\basisfont{S}\text{,}\) write

\begin{equation*} \uvec{s}_{ij} = E_{ij} + E_{ji} \quad (j \gt i) \end{equation*}

for certain symmetric combinations of matrices in \(\basisfont{S}\text{,}\) and write

\begin{equation*} \uvec{n}_{ij} = E_{ij} - E_{ji} \quad (j \gt i) \end{equation*}

for certain skew-symmetric combinations of matrices in \(\basisfont{S}\text{.}\) All these matrices together form a basis \(\basisfont{B}\) of \(\matrixring_n(\R)\text{,}\) and we can verify

\begin{align*} T(\uvec{d}_i) \amp = \uvec{d}_i \text{,} \amp T(\uvec{s}_{ij}) \amp = \uvec{s}_{ij} \text{,} \amp T(\uvec{n}_{ij}) \amp = - \uvec{n}_{ij}\text{.} \end{align*}

Relative to this basis of eigenvectors, we will indeed obtain a diagonal matrix for \(T\text{:}\)

\begin{equation*} \matrixOf{T}{B} = \begin{bmatrix} I_a \\ \amp - I_b \end{bmatrix} \text{,} \end{equation*}

where \(I_a,I_b\) are identity matrices of sizes

\begin{align*} a \amp = \frac{n^2 + n}{2} \text{,} \amp b \amp = \frac{n^2 - n}{2} \text{.} \end{align*}

So \(T\) is a diagonalizable operator, since a basis for the domain space consisting of eigenvectors for the operator can be found.
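As a numerical check (a sketch, not part of the text), one can build the matrix of the transpose operator relative to the standard basis \(\{E_{ij}\}\) for a small \(n\) and count eigenvalue multiplicities; they should match \(a = (n^2+n)/2\) for \(\lambda = 1\) and \(b = (n^2-n)/2\) for \(\lambda = -1\text{:}\)

```python
import numpy as np

# Matrix of T(A) = A^T on M_n(R) relative to the standard basis {E_ij},
# ordering basis vectors so that E_ij has coordinate index i*n + j.
n = 3
N = n * n
T_S = np.zeros((N, N))
for i in range(n):
    for j in range(n):
        T_S[j * n + i, i * n + j] = 1.0   # T sends E_ij to E_ji

# T_S is a symmetric permutation matrix (T is an involution),
# so eigvalsh applies and all eigenvalues are +1 or -1.
eigvals = np.linalg.eigvalsh(T_S)
a = int(np.sum(np.isclose(eigvals, 1.0)))    # multiplicity of +1
b = int(np.sum(np.isclose(eigvals, -1.0)))   # multiplicity of -1
assert (a, b) == ((n**2 + n) // 2, (n**2 - n) // 2)
```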

Subsection 46.3.2 Computing determinant, eigenvalues, and eigenvectors of operators

Example 46.3.3. Computing the determinant of an operator.

Consider the “reversal” operator \(\funcdef{T}{\poly_3(\R)}{\poly_3(\R)}\) defined by

\begin{equation*} T(a x^3 + b x^2 + c x + d) = d x^3 + c x^2 + b x + a \text{.} \end{equation*}

Relative to the standard basis \(\basisfont{S} = \{ x^3, x^2, x, 1 \}\text{,}\) we compute the matrix for \(T\) by first calculating the image vectors

\begin{align*} T(x^3) \amp = 1 \text{,} \amp T(x^2) \amp = x \text{,} \amp T(x) \amp = x^2 \text{,} \amp T(1) \amp = x^3\text{.} \end{align*}

Then

\begin{equation*} \matrixOf{T}{S} = \begin{bmatrix} \amp \amp \amp 1 \\ \amp \amp 1 \\ \amp 1 \\ 1 \end{bmatrix} \text{,} \end{equation*}

and we can compute

\begin{equation*} \det T = \det \matrixOf{T}{S} = 1 \text{.} \end{equation*}

Using Corollary 45.5.5, we can conclude that \(T\) must be an isomorphism. (Though in this case that should have been obvious, as \(T\) is clearly its own inverse.)
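A quick numerical sketch (illustrative only, not part of the text) of this determinant computation: the matrix \(\matrixOf{T}{S}\) is the \(4 \times 4\) anti-diagonal permutation matrix, whose determinant is \(1\) because the underlying permutation is a product of two transpositions.

```python
import numpy as np

# [T]_S for the reversal operator relative to S = {x^3, x^2, x, 1}:
# the anti-diagonal permutation matrix.
T_S = np.fliplr(np.eye(4))

det = round(np.linalg.det(T_S))
assert det == 1

# T is its own inverse, so T_S squared should be the identity.
assert np.allclose(T_S @ T_S, np.eye(4))
```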

Example 46.3.4. Computing eigenvalues and eigenvectors of an operator.

For \(p(x)\) in \(\poly_3(\R)\text{,}\) write \(\tilde{p}(x)\) for the “reversal” of \(p(x)\text{,}\) as in Example 46.3.3 above, but now define \(\funcdef{T}{\poly_3(\R)}{\poly_3(\R)}\) by

\begin{equation*} T\bigl(p(x)\bigr) = p(x) + \tilde{p}(x) \text{.} \end{equation*}

Compute the image vectors

\begin{align*} T(x^3) \amp = x^3 + 1 \text{,} \amp T(x^2) \amp = x^2 + x \text{,} \amp T(x) \amp = x + x^2 \text{,} \amp T(1) \amp = 1 + x^3\text{,} \end{align*}

from which we obtain the matrix of \(T\) relative to the standard basis \(\basisfont{S} = \{ x^3, x^2, x, 1 \}\text{:}\)

\begin{equation*} \matrixOf{T}{S} = \begin{bmatrix} 1 \amp 0 \amp 0 \amp 1 \\ 0 \amp 1 \amp 1 \amp 0 \\ 0 \amp 1 \amp 1 \amp 0 \\ 1 \amp 0 \amp 0 \amp 1 \end{bmatrix} \text{.} \end{equation*}

From

\begin{equation*} \lambda I - \matrixOf{T}{S} = \begin{bmatrix} \lambda - 1 \amp 0 \amp 0 \amp -1 \\ 0 \amp \lambda - 1 \amp -1 \amp 0 \\ 0 \amp -1 \amp \lambda - 1 \amp 0 \\ -1 \amp 0 \amp 0 \amp \lambda - 1 \end{bmatrix}\text{,} \end{equation*}

we can calculate

\begin{equation*} c_T(\lambda) = \det \bigl(\lambda I - \matrixOf{T}{S}\bigr) = \lambda^2 (\lambda - 2)^2 \text{,} \end{equation*}

so that \(T\) has eigenvalues \(\lambda = 0, 2\text{.}\)

As usual, row reduce the matrices \(0 I - \matrixOf{T}{S}\) and \(2 I - \matrixOf{T}{S}\) to find that

\begin{align*} E_0\bigl(\matrixOf{T}{S}\bigr) \amp = \Span \left\{ \left[\begin{array}{r} 0 \\ -1 \\ 1 \\ 0 \end{array}\right], \left[\begin{array}{r} -1 \\ 0 \\ 0 \\ 1 \end{array}\right] \right\} \text{,} \amp E_2\bigl(\matrixOf{T}{S}\bigr) \amp = \Span \left\{ \begin{bmatrix} 0 \\ 1 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix} \right\}\text{.} \end{align*}

To convert these from eigenvectors of \(\matrixOf{T}{S}\) into eigenvectors of \(T\text{,}\) simply convert them from coordinate vectors relative to \(\basisfont{S}\) into \(\poly_3(\R)\) vectors:

\begin{align*} E_0(T) \amp = \Span \{ -x^2 + x, -x^3 + 1 \} \text{,} \amp E_2(T) \amp = \Span \{ x^2 + x, x^3 + 1 \}\text{.} \end{align*}

We could verify that these are eigenvectors of \(T\) directly, without appeal to properties of \(\matrixOf{T}{S}\text{.}\) For example,

\begin{equation*} T(x^2 + x) = (x^2 + x) + (x + x^2) = 2 (x^2 + x) \text{,} \end{equation*}

as required by the eigenvalue-eigenvector pattern.
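The eigenvalue and eigenspace computations in this example can also be verified numerically. The sketch below (NumPy, for illustration only) confirms that \(\matrixOf{T}{S}\) has eigenvalues \(0\) and \(2\text{,}\) each with multiplicity two, and that the claimed coordinate vectors are indeed eigenvectors:

```python
import numpy as np

# [T]_S for T(p) = p + p~ relative to S = {x^3, x^2, x, 1}.
T_S = np.array([[1.0, 0.0, 0.0, 1.0],
                [0.0, 1.0, 1.0, 0.0],
                [0.0, 1.0, 1.0, 0.0],
                [1.0, 0.0, 0.0, 1.0]])

# c_T(lambda) = lambda^2 (lambda - 2)^2, so the spectrum is {0, 0, 2, 2}.
eigvals = np.sort(np.linalg.eigvalsh(T_S))
assert np.allclose(eigvals, [0.0, 0.0, 2.0, 2.0])

# Coordinate vectors (relative to S) of the eigenvectors found above.
for coords, lam in [([0, -1, 1, 0], 0.0), ([-1, 0, 0, 1], 0.0),
                    ([0, 1, 1, 0], 2.0), ([1, 0, 0, 1], 2.0)]:
    v = np.array(coords, dtype=float)
    assert np.allclose(T_S @ v, lam * v)
```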