Section 44.4 Examples
Subsection 44.4.1 Computing with compositions
Example 44.4.1. Computing the image of a composition.
Consider
\begin{align*}
\amp\funcdef{T}{\poly_3(\R)}{\matrixring_2(\R)} \text{,} \amp
\amp\funcdef{S}{\matrixring_2(\R)}{\R}
\end{align*}
defined by
\begin{align*}
T(a x^3 + b x^2 + c x + d)
\amp = \begin{bmatrix} a + b \amp b + c \\ c + d \amp a - d \end{bmatrix} \text{,}
\amp
S(A) \amp = \trace A\text{.}
\end{align*}
Then the composition is
\begin{equation*}
\funcdef{ST}{\poly_3(\R)}{\R} \text{.}
\end{equation*}
What is the image under this composition of
\begin{equation*}
p(x) = 3 x^3 - 2 x^2 + x - 5 \text{?}
\end{equation*}
Compute:
\begin{gather*}
T\bigl(p(x)\bigr)
= \begin{bmatrix} 3 + (-2) \amp (-2) + 1 \\ 1 + (-5) \amp 3 - (-5) \end{bmatrix}
= \begin{bmatrix} 1 \amp -1 \\ -4 \amp 8 \end{bmatrix}
\text{,}\\
S\left( \begin{bmatrix} 1 \amp -1 \\ -4 \amp 8 \end{bmatrix} \right) = 1 + 8 = 9 \text{.}
\end{gather*}
So
\begin{equation*}
ST(3 x^3 - 2 x^2 + x - 5) = 9 \text{.}
\end{equation*}
Example 44.4.2. Computing input-output formulas for a composition.
Consider
\begin{align*}
\amp\funcdef{D}{\poly_3(\R)}{\poly_2(\R)} \text{,} \amp
\amp\funcdef{E_3}{\poly_2(\R)}{\R} \text{,} \amp
\amp\funcdef{T}{\R}{\R^3}\text{,}
\end{align*}
where \(D\) is differentiation, \(E_3\) is evaluation at \(x = 3\text{,}\) and \(T\) is defined by
\begin{equation*}
T(y) = (y, 0, -y) \text{.}
\end{equation*}
We can compute an input-output formula for the composition \(S = T E_3 D\) by
\begin{align*}
S(a x^3 + b x^2 + c x + d) \amp = T\Bigl(E_3\bigl(D(a x^3 + b x^2 + c x + d)\bigr)\Bigr) \\
\amp = T\bigl(E_3(3 a x^2 + 2 b x + c)\bigr) \\
\amp = T(27 a + 6 b + c) \\
\amp = (27 a + 6 b + c, 0, - 27 a - 6 b - c) \text{.}
\end{align*}
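The same chain of maps can be traced in code. A minimal Python sketch, representing polynomials by coefficient tuples (an illustrative choice, not from the text):

```python
# Sketch of Example 44.4.2: the composition S = T E_3 D on coefficients (a, b, c, d).
def D(a, b, c, d):
    """Differentiate a x^3 + b x^2 + c x + d; returns coefficients of 3a x^2 + 2b x + c."""
    return (3 * a, 2 * b, c)

def E3(p2, p1, p0):
    """Evaluate p2 x^2 + p1 x + p0 at x = 3."""
    return 9 * p2 + 3 * p1 + p0

def T(y):
    return (y, 0, -y)

# For (a, b, c, d) = (1, 1, 1, 1): E_3(3x^2 + 2x + 1) = 27 + 6 + 1 = 34.
print(T(E3(*D(1, 1, 1, 1))))  # (34, 0, -34)
```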
Subsection 44.4.2 Checking and computing inverses
Example 44.4.3. Determining whether a transformation is invertible.
Consider \(\funcdef{T}{\R^4}{\matrixring_{2\times 3}(\R)}\) defined by
\begin{equation*}
T(a,b,c,d) = \begin{bmatrix} a + d \amp b + c \amp a + d \\ b - c \amp a - d \amp b - c \end{bmatrix} \text{.}
\end{equation*}
Does \(T\) have an inverse transformation \(\funcdef{\inv{T}}{\im T}{\R^4}\text{?}\) We can use the kernel to investigate:
\begin{gather*}
\phantom{\implies} T(a,b,c,d) = \zerovec \\
\implies
\begin{bmatrix} a + d \amp b + c \amp a + d \\ b - c \amp a - d \amp b - c \end{bmatrix}
= \begin{bmatrix} 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \end{bmatrix}\\
\implies \begin{sysofeqns}{ccccl}
a \amp + \amp d \amp = \amp 0 \text{,} \\
b \amp + \amp c \amp = \amp 0 \text{,} \\
a \amp + \amp d \amp = \amp 0 \text{,} \\
b \amp - \amp c \amp = \amp 0 \text{,} \\
a \amp - \amp d \amp = \amp 0 \text{,} \\
b \amp - \amp c \amp = \amp 0 \text{.}
\end{sysofeqns}
\end{gather*}
We could use row reduction to solve this homogeneous system, but it should be clear that we cannot have both \(a = d\) and \(a = - d\) simultaneously unless both \(a\) and \(d\) are zero, and similarly for \(b\) and \(c\text{.}\)
Therefore, since \(\ker T = \{\zerovec\}\text{,}\) \(T\) is one-to-one and an inverse \(\funcdef{\inv{T}}{\im T}{\R^4}\) exists.
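The trivial-kernel conclusion can also be confirmed by computing the rank of the coefficient matrix of the homogeneous system. A quick NumPy check (the matrix below just restates the six equations):

```python
import numpy as np

# Rows are the six equations in T(a,b,c,d) = 0; columns index (a, b, c, d).
M = np.array([
    [1, 0,  0,  1],   # a + d = 0
    [0, 1,  1,  0],   # b + c = 0
    [1, 0,  0,  1],   # a + d = 0
    [0, 1, -1,  0],   # b - c = 0
    [1, 0,  0, -1],   # a - d = 0
    [0, 1, -1,  0],   # b - c = 0
])
# Rank 4 means only the trivial solution exists, i.e. ker T = {0}.
print(np.linalg.matrix_rank(M))  # 4
```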
Example 44.4.4. Computing inverse images.
Let’s continue with the invertible transformation \(T\) from Example 44.4.3. What is the inverse image
\begin{equation*}
\inv{T}\left(\begin{abmatrix}{crc} 6 \amp 1 \amp 6 \\ 5 \amp -4 \amp 5 \end{abmatrix}\right) \text{?}
\end{equation*}
First, recall that \(\inv{T}\) is only defined on \(\im T\text{.}\) Is the matrix used as input to \(\inv{T}\) above actually in \(\im T\text{?}\) Attempting to compute its inverse image will answer the question for us — if we are able to come to a solution, then it must be in \(\im T\text{,}\) and if we find that there is no solution, then it must not be in \(\im T\text{.}\) As usual, attempting to solve for the inverse image will lead to a linear system, and we know how to determine when a system has solutions or not (Proposition 2.5.6).
The inverse image we would like to calculate (if it exists) will be precisely the \(\R^4\) vector that satisfies
\begin{align*}
T(a,b,c,d) \amp = \begin{abmatrix}{crc} 6 \amp 1 \amp 6 \\ 5 \amp -4 \amp 5 \end{abmatrix}
\amp \amp \Longleftrightarrow \amp
\begin{bmatrix} a + d \amp b + c \amp a + d \\ b - c \amp a - d \amp b - c \end{bmatrix}
\amp = \begin{abmatrix}{crc} 6 \amp 1 \amp 6 \\ 5 \amp -4 \amp 5 \end{abmatrix}\text{,}
\end{align*}
from which we obtain the linear system
\begin{align*}
\begin{sysofeqns}{ccccrl}
a \amp + \amp d \amp = \amp 6 \amp \text{,} \\
b \amp + \amp c \amp = \amp 1 \amp \text{,} \\
a \amp + \amp d \amp = \amp 6 \amp \text{,} \\
b \amp - \amp c \amp = \amp 5 \amp \text{,} \\
a \amp - \amp d \amp = \amp -4 \amp \text{,} \\
b \amp - \amp c \amp = \amp 5 \amp \text{.}
\end{sysofeqns}
\end{align*}
This system can be converted to an augmented matrix and reduced, leading to the unique solution
\begin{align*}
a \amp = 1 \text{,} \amp b \amp = 3 \text{,} \amp c \amp = -2 \text{,} \amp d \amp = 5 \text{,}
\end{align*}
hence
\begin{equation*}
\inv{T}\left(\begin{abmatrix}{crc} 6 \amp 1 \amp 6 \\ 5 \amp -4 \amp 5 \end{abmatrix}\right) = (1,3,-2,5) \text{.}
\end{equation*}
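The reduction can be checked numerically. A least-squares solve returns the exact solution here, since the system is consistent with full column rank (NumPy assumed):

```python
import numpy as np

# The system from Example 44.4.4: M @ (a, b, c, d) = rhs.
M = np.array([
    [1, 0,  0,  1],   # a + d
    [0, 1,  1,  0],   # b + c
    [1, 0,  0,  1],   # a + d
    [0, 1, -1,  0],   # b - c
    [1, 0,  0, -1],   # a - d
    [0, 1, -1,  0],   # b - c
], dtype=float)
rhs = np.array([6, 1, 6, 5, -4, 5], dtype=float)

x, *_ = np.linalg.lstsq(M, rhs, rcond=None)
print(np.round(x))  # [ 1.  3. -2.  5.]
```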
Subsection 44.4.3 Checking surjectivity
The easiest way to determine surjectivity of a transformation is to use the Dimension Theorem (Theorem 43.5.4) to establish the dimension of the image.
Example 44.4.5. Determining whether a transformation is surjective.
In Example 44.4.3, we found that the transformation \(\funcdef{T}{\R^4}{\matrixring_{2\times 3}(\R)}\) had trivial kernel. Using the Dimension Theorem, we must then have
\begin{equation*}
\dim (\im T) = \dim (\R^4) - \dim (\ker T) = 4 - 0 = 4 \text{,}
\end{equation*}
and so
\begin{equation*}
\dim (\im T) \lt \dim \bigl(\matrixring_{2 \times 3}(\R)\bigr) \text{.}
\end{equation*}
Since \(\im T\) is a subspace of the codomain space \(\matrixring_{2 \times 3}(\R)\text{,}\) Statement 3 of Proposition 20.5.8 tells us that it is not possible for \(T\) to be surjective.
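In matrix terms, flattening the codomain \(\matrixring_{2\times 3}(\R)\) to \(\R^6\) turns \(T\) into a \(6 \times 4\) matrix, and the rank comparison is immediate (NumPy assumed):

```python
import numpy as np

# Matrix of T with the 2x3 codomain flattened to R^6; columns index (a, b, c, d).
M = np.array([
    [1, 0,  0,  1],
    [0, 1,  1,  0],
    [1, 0,  0,  1],
    [0, 1, -1,  0],
    [1, 0,  0, -1],
    [0, 1, -1,  0],
])
rank = np.linalg.matrix_rank(M)
print(rank)       # 4 = dim(im T)
print(rank < 6)   # True: dim(im T) < dim(M_2x3(R)), so T cannot be surjective
```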
Subsection 44.4.4 Defining isomorphisms by choice of bases
Example 44.4.6. Creating an isomorphism by sending a domain space basis to a codomain space basis.
Let’s create an isomorphism \(\funcdef{T}{\R^4}{\matrixring_2(\R)}\) by sending the standard basis for \(\R^4\) to some nonstandard basis of \(\matrixring_2(\R)\text{:}\)
\begin{align*}
T(1,0,0,0) \amp = \begin{bmatrix} 1 \amp 0 \\ 0 \amp 1 \end{bmatrix} \text{,} \amp
T(0,0,1,0) \amp = \begin{abmatrix}{rc} 0 \amp 1 \\ -1 \amp 0 \end{abmatrix} \text{,}\\
T(0,1,0,0) \amp = \begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix} \text{,} \amp
T(0,0,0,1) \amp = \begin{abmatrix}{cr} 1 \amp 0 \\ 0 \amp -1 \end{abmatrix} \text{.}
\end{align*}
We can determine an input-output formula for \(T\) by expanding an arbitrary input as a linear combination of the standard basis of \(\R^4\text{,}\) and using the linearity of \(T\text{:}\)
\begin{align*}
T(a,b,c,d) \amp = a \, T(1,0,0,0) + b \, T(0,1,0,0) + c \, T(0,0,1,0) + d \, T(0,0,0,1) \\
\amp
= a \begin{bmatrix} 1 \amp 0 \\ 0 \amp 1 \end{bmatrix}
+ b \begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix}
+ c \begin{abmatrix}{rc} 0 \amp 1 \\ -1 \amp 0 \end{abmatrix}
+ d \begin{abmatrix}{cr} 1 \amp 0 \\ 0 \amp -1 \end{abmatrix}\\
\amp = \begin{bmatrix} a + d \amp b + c \\ b - c \amp a - d \end{bmatrix} \text{.}
\end{align*}
Similarly to Example 44.4.3, it is easy to check that this transformation is one-to-one by verifying that the kernel is trivial. Then, using the Dimension Theorem (Theorem 43.5.4), we must have
\begin{equation*}
\dim (\im T) = \dim (\R^4) - \dim (\ker T) = 4 - 0 = 4 = \dim \bigl(\matrixring_2(\R)\bigr) \text{,}
\end{equation*}
from which we can conclude that \(T\) must be surjective as well. Together, trivial kernel and full image tell us that \(T\) is indeed an isomorphism.
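Flattening \(\matrixring_2(\R)\) to \(\R^4\) makes the isomorphism check a determinant computation (NumPy assumed; the rows restate the input-output formula):

```python
import numpy as np

# Matrix of T(a,b,c,d) = [[a+d, b+c], [b-c, a-d]] with M_2(R) flattened to R^4.
A = np.array([
    [1, 0,  0,  1],   # a + d
    [0, 1,  1,  0],   # b + c
    [0, 1, -1,  0],   # b - c
    [1, 0,  0, -1],   # a - d
], dtype=float)
# A nonzero determinant confirms T is invertible, hence an isomorphism.
print(round(np.linalg.det(A)))  # 4
```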
Example 44.4.7. A geometric example: rotation in \(\R^3\).
Rotation around an axis should be an isomorphism of \(\R^3\) onto \(\R^3\text{,}\) as every vector in \(\R^3\) can be realized as the rotated image of one unique “input” vector.
Let \(\funcdef{T}{\R^3}{\R^3}\) represent counter-clockwise rotation by \(\pi/2\) around the axis through the origin and parallel to the vector
\begin{equation*}
\uvec{n} = (2,-1,3) \text{,}
\end{equation*}
where “counter-clockwise” is considered as one looks “downward” along \(\uvec{n}\) back toward the origin. Let’s set up an orthogonal basis of \(\R^3\) that will help describe \(T\text{,}\) beginning with the axis \(\uvec{n}\text{.}\) It will be easiest to analyze rotation around \(\uvec{n}\) using vectors in the plane normal to \(\uvec{n}\text{.}\) So let’s guess a vector
\begin{equation*}
\uvec{v}_1 = (1,2,0)
\end{equation*}
that is parallel to this plane (which we can verify by computing \(\dotprod{\uvec{n}}{\uvec{v}_1} = 0\)). Now, since \(T\) rotates by \(\pi/2\text{,}\) it will be convenient to take another vector in the normal plane that is also orthogonal to \(\uvec{v}_1\text{.}\) We could use the Gram-Schmidt process to do this, but instead we’ll use the cross product, so that the right-hand rule controls the direction (“positive” or “negative”) of the result. Use the right-hand rule to convince yourself that
\begin{equation*}
\uvec{v}_2 = \crossprod{\uvec{n}}{\uvec{v}_1} = (-6, 3, 5)
\end{equation*}
will be counter-clockwise (relative to \(\uvec{n}\) as described above) from \(\uvec{v}_1\text{.}\)
Finally, we need to adjust \(\uvec{v}_1, \uvec{v}_2\) to be the same length, as vectors shouldn’t change length as they rotate. However, instead of normalizing both to unit vectors, we can compute that \(\uvec{v}_2\) is \(\sqrt{14}\) times as long as \(\uvec{v}_1\text{,}\) so let’s redefine \(\uvec{v}_1\) to be
\begin{equation*}
\uvec{v}_1 = (\sqrt{14}, 2 \sqrt{14}, 0) \text{.}
\end{equation*}
We now have an orthogonal basis \(\basisfont{B} = \{ \uvec{n}, \uvec{v}_1, \uvec{v}_2 \} \) of \(\R^3\text{.}\) Since \(T\) is uniquely defined by the collection of image vectors \(T(\basisfont{B})\text{,}\) we know \(T\) completely by knowing the images
\begin{align*}
T(\uvec{n}) \amp = \uvec{n} \text{,} \amp
T(\uvec{v}_1) \amp = \uvec{v}_2 \text{,} \amp
T(\uvec{v}_2) \amp = - \uvec{v}_1\text{.}
\end{align*}
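From these images we can recover the standard matrix of \(T\) numerically: if \(P\) has columns \(\uvec{n}, \uvec{v}_1, \uvec{v}_2\) and \(Q\) has the corresponding image columns, then \(A = Q P^{-1}\). A NumPy sketch (the variable names are illustrative):

```python
import numpy as np

n  = np.array([2.0, -1.0, 3.0])
v1 = np.sqrt(14) * np.array([1.0, 2.0, 0.0])   # rescaled so that |v1| = |v2|
v2 = np.array([-6.0, 3.0, 5.0])                # n x (1, 2, 0)

P = np.column_stack([n, v1, v2])    # basis B as columns
Q = np.column_stack([n, v2, -v1])   # images T(B) as columns
A = Q @ np.linalg.inv(P)            # standard matrix of T

print(np.allclose(A @ n, n))            # True: the axis is fixed
print(np.allclose(A.T @ A, np.eye(3)))  # True: A is orthogonal, as a rotation should be
```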

