Example 42.4.1. Linear formulas create linear transformations.
Define \(\funcdef{T}{\matrixring_{2 \times 2}(\R)}{\poly_2(\R)}\) by
\begin{equation*}
T\left( \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \right)
= (a + b) + (c - 2d) x + (3b - c + d) x^2\text{.}
\end{equation*}
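For instance, to get a feel for the formula before verifying linearity, here is \(T\) applied to one particular input matrix (chosen only for illustration):
\begin{equation*}
T\left( \begin{bmatrix} 1 \amp 2 \\ 3 \amp 4 \end{bmatrix} \right)
= (1 + 2) + (3 - 2 \cdot 4) x + (3 \cdot 2 - 3 + 4) x^2
= 3 - 5 x + 7 x^2 \text{.}
\end{equation*}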
To verify that \(T\) is linear, check the linearity properties.
Additivity.
We have
\begin{align*}
\amp
T\left(
\begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}
+
\begin{bmatrix} e \amp f \\ g \amp h \end{bmatrix}
\right)\\
\amp \phantom{T} =
T\left( \begin{bmatrix} a + e \amp b + f \\ c + g \amp d + h \end{bmatrix} \right)\\
\amp \phantom{T}
= \bigl( (a + e) + (b + f) \bigr)
+ \bigl( (c + g) - 2(d + h) \bigr) x
+ \bigl( 3(b + f) - (c + g) + (d + h) \bigr) x^2
\text{,}\\
\\
\amp
T\left( \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \right)
+ T\left( \begin{bmatrix} e \amp f \\ g \amp h \end{bmatrix} \right)\\
\amp \phantom{T}
= (a + b) + (c - 2d) x + (3b - c + d) x^2
+ (e + f) + (g - 2h) x + (3f - g + h) x^2\\
\amp \phantom{T}
= \bigl( (a + b) + (e + f) \bigr)
+ \bigl( (c - 2d) + (g - 2h) \bigr) x
+ \bigl( (3b - c + d) + (3f - g + h) \bigr) x^2\text{.}
\end{align*}
Comparing the results of the two calculations, we see they are the same.
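As a numerical spot-check of additivity (with the two input matrices chosen only for illustration), we have
\begin{align*}
T\left(
\begin{bmatrix} 1 \amp 2 \\ 3 \amp 4 \end{bmatrix}
+
\begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix}
\right)
\amp = T\left( \begin{bmatrix} 1 \amp 3 \\ 4 \amp 4 \end{bmatrix} \right)
= 4 - 4 x + 9 x^2 \text{,}\\
T\left( \begin{bmatrix} 1 \amp 2 \\ 3 \amp 4 \end{bmatrix} \right)
+ T\left( \begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix} \right)
\amp = (3 - 5 x + 7 x^2) + (1 + x + 2 x^2)
= 4 - 4 x + 9 x^2 \text{,}
\end{align*}
in agreement with the general calculation.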
Homogeneity.
We have
\begin{align*}
T\left( k \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \right)
\amp =
T\left( \begin{bmatrix} k a \amp k b \\ k c \amp k d \end{bmatrix} \right)\\
\amp = (k a + k b) + \bigl(k c - 2 (k d)\bigr) x + \bigl(3(k b) - k c + k d\bigr) x^2
\text{,}\\
\\
k \; T\left( \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \right)
\amp = k \bigl( (a + b) + (c - 2d) x + (3b - c + d) x^2 \bigr)\\
\amp = k (a + b) + k (c - 2d) x + k (3b - c + d) x^2\text{.}
\end{align*}
Distributing the scalar \(k\) a second time, into each set of brackets in the last line above, shows that the two results are identical.
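Carrying out that distribution explicitly, we get
\begin{equation*}
k (a + b) + k (c - 2d) x + k (3b - c + d) x^2
= (k a + k b) + \bigl( k c - 2 (k d) \bigr) x + \bigl( 3 (k b) - k c + k d \bigr) x^2 \text{,}
\end{equation*}
which is exactly the result of the first calculation.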
Example 42.4.2. Nonhomogeneous linear formulas do not create linear transformations.
Let's modify Example 42.4.1 slightly: define \(\funcdef{S}{\matrixring_{2 \times 2}(\R)}{\poly_2(\R)}\) by
\begin{equation*}
S\left( \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \right)
= (a + b + 4) + (c - 2d) x + (3b - c + d - 2) x^2\text{.}
\end{equation*}
The definition of \(S\) still uses linear formulas in the entries of the input matrix to specify the coefficients of the output polynomial, except that the coefficient formulas for the constant and quadratic terms now include extra constant terms. Comparing with the linear transformation \(T\) from Example 42.4.1, we could write
\begin{equation*}
S\left( \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \right)
= T\left( \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \right) + 4 - 2 x^2 \text{.}
\end{equation*}
We saw in Discovery 42.3.c that the process of translation by a fixed vector is not a linear transformation, so we expect that combining the linear transformation \(T\) with a translation will not be linear.
Instead of working directly with the formula-based definition of \(S\text{,}\) we can relate it to \(T\) to check the linearity properties. For additivity, compare, for input matrices \(A\) and \(B\text{,}\)
\begin{align*}
S(A + B)
\amp = T(A + B) + 4 - 2 x^2
\amp
S(A) + S(B)
\amp = \bigl( T(A) + 4 - 2 x^2 \bigr) + \bigl( T(B) + 4 - 2 x^2 \bigr)\\
\amp = T(A) + T(B) + 4 - 2 x^2 \text{,}
\amp
\amp = T(A) + T(B) + 8 - 4 x^2 \text{,}
\end{align*}
where in the calculation on the left we use the linearity of \(T\) that was verified in Example 42.4.1. We can see that these results will not be equal in general, so \(S\) is not a linear transformation.
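For a concrete instance of this failure, evaluate both sides of the additivity property at the zero matrix:
\begin{align*}
S\left( \begin{bmatrix} 0 \amp 0 \\ 0 \amp 0 \end{bmatrix} + \begin{bmatrix} 0 \amp 0 \\ 0 \amp 0 \end{bmatrix} \right)
\amp = S\left( \begin{bmatrix} 0 \amp 0 \\ 0 \amp 0 \end{bmatrix} \right)
= 4 - 2 x^2 \text{,}\\
S\left( \begin{bmatrix} 0 \amp 0 \\ 0 \amp 0 \end{bmatrix} \right)
+ S\left( \begin{bmatrix} 0 \amp 0 \\ 0 \amp 0 \end{bmatrix} \right)
\amp = (4 - 2 x^2) + (4 - 2 x^2)
= 8 - 4 x^2 \text{,}
\end{align*}
so additivity already fails at the zero matrix.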
As we've already seen in Subsection 42.3.5, many of our familiar processes will be linear, but others will not.
Example 42.4.3. Transpose is linear.
Consider \(\funcdef{T}{\matrixring_{m \times n}(\R)}{\matrixring_{n \times m}(\R)}\) defined by
\begin{equation*}
T(A) = A^{T} \text{.}
\end{equation*}
Linearity of \(T\) amounts to the two formulas
\begin{equation*}
(A + B)^{T} = A^{T} + B^{T} \text{,}
\qquad
(k A)^{T} = k A^{T} \text{.}
\end{equation*}
But there is no need to get down to the level of the individual entries of the matrices here; we already know that these formulas are valid from Rule 5.b and Rule 5.c of Proposition 4.5.1.
Example 42.4.4. Complex adjoint is not linear.
Consider \(\funcdef{T}{\matrixring_{m \times n}(\C)}{\matrixring_{n \times m}(\C)}\) defined by
\begin{equation*}
T(A) = A^{\ast} \text{,}
\end{equation*}
the conjugate transpose of \(A\text{.}\) Linearity of \(T\) would require the two formulas
\begin{equation*}
(A + B)^{\ast} = A^{\ast} + B^{\ast} \text{,}
\qquad
(k A)^{\ast} = k A^{\ast} \text{.}
\end{equation*}
However, this time only the first formula is valid (Rule 2.d of Proposition 11.4.1). The correct version of the second formula would require a conjugate on the scalar \(k\) (Rule 2.e of Proposition 11.4.1). So the adjoint process does not create a linear transformation.
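For a concrete illustration of the failure (taking \(m = n = 1\) and scalar \(k = i\text{,}\) chosen only for demonstration), we have
\begin{equation*}
T\left( i \begin{bmatrix} 1 \end{bmatrix} \right)
= \begin{bmatrix} i \end{bmatrix}^{\ast}
= \begin{bmatrix} -i \end{bmatrix} \text{,}
\qquad
i \; T\left( \begin{bmatrix} 1 \end{bmatrix} \right)
= i \begin{bmatrix} 1 \end{bmatrix}^{\ast}
= \begin{bmatrix} i \end{bmatrix} \text{,}
\end{equation*}
and these do not agree.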
Subsection 42.4.2 The standard matrix of a transformation \(\R^n \to \R^m\)
Example 42.4.5. From linear formulas.
Consider the system of linear input-output formulas in Discovery 42.1.b:
Since the dot product is linear, this creates a linear transformation. (Check!) We could fairly easily determine linear formulas for each component of the output vector expression for \(T\text{,}\) but let's use the pattern of (\(\dagger\)) in Subsection 42.3.4.
Notice how the vectors \(\uvec{a}_1,\uvec{a}_2\) ended up as rows in \(\stdmatrixOf{T}\text{,}\) so that multiplying a vector in \(\R^3\) by this matrix effectively computes the two dot products against \(\uvec{a}_1,\uvec{a}_2\text{,}\) as in the definition of \(T\text{.}\) For example,
Subsection 42.4.3 Linear transformations via spanning image vectors
In Discovery 42.4 and Subsection 42.3.3, we found that a linear transformation can be completely determined by how it transforms a spanning set for the domain space. Here is an example of using this fact.
Example 42.4.7.
Suppose \(\funcdef{T}{\poly_2(\R)}{\R^2}\) is a linear transformation, and it's known that
\begin{equation*}
T(x^2 - x) = (1,1) \text{,} \qquad
T(x - 1) = (0,-1) \text{,} \qquad
T(1) = (3,6) \text{.}
\end{equation*}
This information is enough to determine the image under \(T\) of every input polynomial, as every polynomial can be decomposed as a linear combination of these spanning vectors: for
\begin{equation*}
p(x) = a x^2 + b x + c \text{,}
\end{equation*}
we have
\begin{equation*}
p(x) = a (x^2 - x) + (a + b) (x - 1) + (a + b + c) \cdot 1 \text{.}
\end{equation*}
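Expanding the right-hand side confirms this decomposition:
\begin{equation*}
a (x^2 - x) + (a + b) (x - 1) + (a + b + c) \cdot 1
= a x^2 + \bigl( -a + (a + b) \bigr) x + \bigl( -(a + b) + (a + b + c) \bigr)
= a x^2 + b x + c \text{.}
\end{equation*}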
Then, similar to the example calculation above, we have
\begin{align*}
T(a x^2 + b x + c) \amp = T \bigl( a (x^2 - x) + (a + b) (x - 1) + (a + b + c) \cdot 1 \bigr) \\
\amp = a T(x^2 - x) + (a + b) T(x - 1) + (a + b + c) T(1) \\
\amp = a (1,1) + (a + b) (0,-1) + (a + b + c) (3,6) \\
\amp = (4 a + 3 b + 3 c, 6 a + 5 b + 6 c) \text{.}
\end{align*}
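As a numerical check (with the input polynomial chosen only for illustration), take \(p(x) = 2 x^2 + 3 x + 1\text{,}\) so that \(a = 2\text{,}\) \(b = 3\text{,}\) \(c = 1\text{.}\) The formula above gives
\begin{equation*}
T(2 x^2 + 3 x + 1) = (4 \cdot 2 + 3 \cdot 3 + 3 \cdot 1, 6 \cdot 2 + 5 \cdot 3 + 6 \cdot 1) = (20, 33) \text{,}
\end{equation*}
which agrees with computing directly from the decomposition: \(2 (1,1) + 5 (0,-1) + 6 (3,6) = (20,33)\text{.}\)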
So, from knowledge of the image vectors on a spanning set of the domain space, along with the pattern of how vectors in the domain space decompose into linear combinations, we can recover a description of the linear transformation as a set of linear input-output formulas.
In fact, we could turn this process around and use a collection of image vectors to define a linear transformation in the first place: explicitly specify the images under \(T\) of the spanning polynomials \(x^2 - x\text{,}\) \(x - 1\text{,}\) \(1\) (which form a basis of \(\poly_2(\R)\)), and then use the linearity properties of \(T\) to extend \(T\) to all linear combinations of these domain space basis vectors. That explicitly specifying \(T\) on only these basis vectors fully defines \(T\) on the whole domain is backed up by the fact that we were able to recover general input-output linear formulas for \(T\) from these three image vectors.
But as noted in Remark 42.3.2, it's important to use a basis in this method of defining a linear transformation. Here is an example illustrating this fact.
Example 42.4.8. Defining a transformation via dependent spanning vectors could fail.
Suppose we attempt to define \(\funcdef{T}{\R^2}{\R^2}\) by setting