Section 43.4 Examples

Subsection 43.4.1 Kernel and image of a matrix transformation

Example 43.4.1.

Using matrix

\begin{equation*} A = \left[\begin{array}{rrrrr} -1 \amp 0 \amp -2 \amp -2 \amp 7 \\ -5 \amp 1 \amp -7 \amp -3 \amp 16 \\ 1 \amp 0 \amp 2 \amp 1 \amp -4 \\ 2 \amp -2 \amp -2 \amp -1 \amp -3 \end{array}\right]\text{,} \end{equation*}

create the matrix transformation \(\funcdef{T_A}{\R^5}{\R^4}\) by defining \(T_A(\uvec{x}) = A \uvec{x}\text{,}\) as usual. The RREF of \(A\) is

\begin{equation*} \left[\begin{array}{rrrrr} 1 \amp 0 \amp 2 \amp 0 \amp -1 \\ 0 \amp 1 \amp 3 \amp 0 \amp 2 \\ 0 \amp 0 \amp 0 \amp 1 \amp -3 \\ 0 \amp 0 \amp 0 \amp 0 \amp 0 \end{array}\right]\text{.} \end{equation*}

(This is the same RREF as in Discovery 43.1.a and Discovery 43.4.c.)

To determine a basis for \(\ker T_A\text{,}\) we solve the homogeneous system \(A \uvec{x} = \zerovec\text{.}\) The RREF indicates that we should assign parameters to variables \(x_3\) and \(x_5\text{.}\) This assignment of parameters leads to general solution

\begin{equation*} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix} = \begin{bmatrix} -2 s + t \\ -3 s - 2 t \\ s \\ 3 t \\ t \end{bmatrix} = s \left[\begin{array}{r} -2 \\ -3 \\ 1 \\ 0 \\ 0 \end{array}\right] + t \left[\begin{array}{r} 1 \\ -2 \\ 0 \\ 3 \\ 1 \end{array}\right]\text{,} \end{equation*}

and so

\begin{equation*} \ker T_A = \Span \left\{ \left[\begin{array}{r} -2 \\ -3 \\ 1 \\ 0 \\ 0 \end{array}\right], \left[\begin{array}{r} 1 \\ -2 \\ 0 \\ 3 \\ 1 \end{array}\right] \right\}\text{.} \end{equation*}

To determine a basis of \(\im T_A\text{,}\) we need to determine a basis for the column space of \(A\text{,}\) which can be carried out using Procedure 21.3.2. We have already reduced \(A\) to RREF above, so identifying leading ones in the first, second, and fourth columns of \(\RREF(A)\) leads to

\begin{equation*} \im T_A = \Span \left\{ \left[\begin{array}{r} -1 \\ -5 \\ 1 \\ 2 \end{array}\right], \left[\begin{array}{r} 0 \\ 1 \\ 0 \\ -2 \end{array}\right], \left[\begin{array}{r} -2 \\ -3 \\ 1 \\ -1 \end{array}\right] \right\}\text{.} \end{equation*}

Finally, notice that

\begin{equation*} \dim (\ker T_A) + \dim (\im T_A) = 2 + 3 = 5 \text{,} \end{equation*}

the number of columns of \(A\text{,}\) as expected.
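
All of these computations can be checked with a computer algebra system. The following is a minimal sketch in Python, assuming the SymPy library (an assumption of this sketch, not part of the example itself); its rref, nullspace, and columnspace routines should reproduce the matrices and bases above.

from sympy import Matrix

A = Matrix([
    [-1,  0, -2, -2,  7],
    [-5,  1, -7, -3, 16],
    [ 1,  0,  2,  1, -4],
    [ 2, -2, -2, -1, -3],
])

R, pivots = A.rref()
print(pivots)                   # (0, 1, 3): leading ones in columns 1, 2, 4 (zero-indexed)

kernel_basis = A.nullspace()    # basis for ker T_A (solutions of A x = 0)
image_basis = A.columnspace()   # basis for im T_A (pivot columns of A)

# Dimension check: nullity plus rank equals the number of columns.
assert len(kernel_basis) + len(image_basis) == A.cols   # 2 + 3 == 5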

Subsection 43.4.2 Kernel and image of a linear transformation

Example 43.4.2. Symmetric and skew-symmetric matrices.

Consider \(\funcdef{T}{\matrixring_2(\R)}{\matrixring_2(\R)}\) by

\begin{equation*} T(A) = A - \utrans{A}\text{,} \end{equation*}

as considered in Discovery 43.2.a and Discovery 43.3.b. In those discovery activities, we identified the kernel as consisting of symmetric matrices. In the \(2 \times 2\) case, an arbitrary symmetric matrix is of the form

\begin{equation*} \begin{bmatrix} a \amp b \\ b \amp d \end{bmatrix} = a \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix} + b \begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix} + d \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix}\text{,} \end{equation*}

so that

\begin{equation*} \ker T = \Span \left\{ \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix} \right\}\text{.} \end{equation*}

In the notation of Procedure 43.3.1, we take

\begin{equation*} \basisfont{K} = \left\{ \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix} \right\}\text{.} \end{equation*}

Since \(\dim \bigl(\matrixring_2(\R)\bigr) = 4\text{,}\) we need only one more vector to enlarge \(\basisfont{K}\) to a basis for the domain space \(\matrixring_2(\R)\text{.}\) The standard basis vector

\begin{equation*} \begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix} \end{equation*}

is not symmetric, and so does not lie in \(\ker T\text{.}\) Therefore, taking

\begin{equation*} \basisfont{K}' = \left\{ \begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix} \right\} \text{,} \end{equation*}

the vectors in \(\basisfont{K}\) and \(\basisfont{K}'\) together form a basis for \(\matrixring_2(\R)\text{.}\) To obtain a basis for \(\im T\text{,}\) just apply \(T\) to the vector in \(\basisfont{K}'\text{:}\)

\begin{equation*} T\left(\begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix}\right) = \begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix} - \begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix} = \left[\begin{array}{rc} 0 \amp 1 \\ -1 \amp 0 \end{array}\right]\text{,} \end{equation*}

so that

\begin{equation*} \im T = \Span T(\basisfont{K}') = \Span \left\{ \left[\begin{array}{rc} 0 \amp 1 \\ -1 \amp 0 \end{array}\right] \right\}\text{.} \end{equation*}

Notice that \(\im T\) consists of the skew-symmetric \(2 \times 2\) matrices, defined by the “skewed” symmetry condition \(\utrans{A} = - A\text{.}\)

And also notice that

\begin{equation*} \dim (\ker T) + \dim (\im T) = 3 + 1 = 4 = \dim \bigl(\matrixring_2(\R)\bigr) \text{,} \end{equation*}

as expected.
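
Since \(T\) transforms matrices rather than column vectors, a computational check requires flattening each \(2 \times 2\) matrix into a vector in \(\R^4\text{.}\) Here is a minimal sketch, again assuming SymPy, that builds the matrix of \(T\) relative to the standard basis of \(\matrixring_2(\R)\) and recovers the dimensions above.

from sympy import Matrix

def T(A):                       # T(A) = A - A^T
    return A - A.T

def E(a, b):                    # standard basis matrix with a 1 in entry (a, b)
    M = Matrix.zeros(2, 2)
    M[a, b] = 1
    return M

basis = [E(0, 0), E(0, 1), E(1, 0), E(1, 1)]

# Matrix of T relative to this basis: column j is T(basis[j]), flattened row by row.
M = Matrix.hstack(*[T(B).reshape(4, 1) for B in basis])

print(len(M.nullspace()))       # 3 = dim ker T  (the symmetric matrices)
print(len(M.columnspace()))     # 1 = dim im T   (the skew-symmetric matrices)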

Example 43.4.3. Left multiplication of \(2 \times 2\) matrices.

Consider \(\funcdef{L_B}{\matrixring_{2}(\R)}{\matrixring_{2}(\R)}\) defined by \(L_B(A) = B A\text{,}\) where

\begin{equation*} B = \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix} \text{,} \end{equation*}

as in Discovery 43.3.c. An arbitrary \(2 \times 2\) matrix

\begin{equation*} \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} \end{equation*}

is in \(\ker L_B\) when

\begin{equation*} L_B\left(\begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}\right) = \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix} \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} = \begin{bmatrix} a + c \amp b + d \\ 0 \amp 0 \end{bmatrix} = \begin{bmatrix} 0 \amp 0 \\ 0 \amp 0 \end{bmatrix}\text{,} \end{equation*}

which occurs when \(c = -a\) and \(d = -b\text{.}\) Inserting these two conditions into the arbitrary matrix above, we have

\begin{equation*} \left[\begin{array}{rr} a \amp b \\ -a \amp -b \end{array}\right] = a \left[\begin{array}{rr} 1 \amp 0 \\ -1 \amp 0 \end{array}\right] + b \left[\begin{array}{rr} 0 \amp 1 \\ 0 \amp -1 \end{array}\right]\text{,} \end{equation*}

so that

\begin{equation*} \ker L_B = \Span \left\{ \left[\begin{array}{rr} 1 \amp 0 \\ -1 \amp 0 \end{array}\right], \left[\begin{array}{rr} 0 \amp 1 \\ 0 \amp -1 \end{array}\right] \right\}\text{.} \end{equation*}

In the notation of Procedure 43.3.1, we take

\begin{equation*} \basisfont{K} = \left\{ \left[\begin{array}{rr} 1 \amp 0 \\ -1 \amp 0 \end{array}\right], \left[\begin{array}{rr} 0 \amp 1 \\ 0 \amp -1 \end{array}\right] \right\}\text{.} \end{equation*}

Since \(\dim \bigl(\matrixring_2(\R)\bigr) = 4\text{,}\) we need two more vectors to enlarge \(\basisfont{K}\) to a basis for the domain space \(\matrixring_2(\R)\text{.}\) Neither of the standard basis vectors

\begin{equation*} \begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix}, \; \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix} \end{equation*}

is in \(\ker L_B\text{,}\) and the four vectors in \(\basisfont{K}\) and

\begin{equation*} \basisfont{K}' = \left\{ \begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix} \right\} \end{equation*}

remain independent when taken all together, and so form a basis for the domain space \(\matrixring_2(\R)\text{.}\) Putting each of the vectors in \(\basisfont{K}'\) through \(L_B\text{,}\) we get

\begin{align*} L_B\left(\begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix}\right) \amp = \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix} \begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix} = \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix} \text{,}\\ \\ L_B\left(\begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix}\right) \amp = \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix} \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix} = \begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix}\text{,} \end{align*}

so that

\begin{equation*} \im L_B = \Span \left\{ \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}, \begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix} \right\}\text{.} \end{equation*}

This agrees with our earlier calculation of the result of applying \(L_B\) to an arbitrary input,

\begin{equation*} L_B\left(\begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}\right) = \begin{bmatrix} a + c \amp b + d \\ 0 \amp 0 \end{bmatrix} = \begin{bmatrix} e \amp f \\ 0 \amp 0 \end{bmatrix} = e \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix} + f \begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix}\text{,} \end{equation*}

where we have replaced the top two entries in the first result with new arbitrary entries \(e,f\) to emphasize that these entries are independently arbitrary, through choice of \(a,c\) values versus \(b,d\) values.

And, once again, we have

\begin{equation*} \dim (\ker L_B) + \dim (\im L_B) = 2 + 2 = 4 = \dim \bigl(\matrixring_2(\R)\bigr) \text{,} \end{equation*}

as expected.
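
The same flattening trick verifies this example. A minimal sketch, assuming SymPy:

from sympy import Matrix

B = Matrix([[1, 1], [0, 0]])

def E(a, b):                    # standard basis matrix with a 1 in entry (a, b)
    M = Matrix.zeros(2, 2)
    M[a, b] = 1
    return M

basis = [E(0, 0), E(0, 1), E(1, 0), E(1, 1)]

# Matrix of L_B relative to the standard basis: column j is B * basis[j], flattened.
M = Matrix.hstack(*[(B * A).reshape(4, 1) for A in basis])

print(len(M.nullspace()))       # 2 = dim ker L_B
print(len(M.columnspace()))     # 2 = dim im L_B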

Example 43.4.4. Differentiation of polynomials.

Consider \(\funcdef{\ddx}{\poly_n(\R)}{\poly_n(\R)}\) by \(\ddx\bigl(p(x)\bigr) = p'(x)\text{.}\) For simplicity, write \(D\) in place of the differential operator \(\ddx\text{.}\)

Similarly to Discovery 43.2.c, \(\ker D\) consists of the constant polynomials, so that

\begin{equation*} \ker D = \Span \{ 1 \} \text{.} \end{equation*}

In the notation of Procedure 43.3.1, we take

\begin{equation*} \basisfont{K} = \{ 1 \} \end{equation*}

as a basis for \(\ker D\text{.}\) It is straightforward to enlarge this to a basis for the domain space \(\poly_n(\R)\text{,}\) as the constant polynomial \(1\) is the first vector in the standard basis

\begin{equation*} \{ 1, x, x^2, \dotsc, x^n \} \text{.} \end{equation*}

So we may take

\begin{equation*} \basisfont{K}' = \{ x, x^2, \dotsc, x^n \} \text{,} \end{equation*}

and differentiate each of these vectors to get

\begin{equation*} D(\basisfont{K}') = \{ 1, 2 x, 3 x^2, \dotsc, n x^{n-1} \} \text{.} \end{equation*}

So we have

\begin{equation*} \im D = \Span \{ 1, 2 x, 3 x^2, \dotsc, n x^{n-1} \} \text{.} \end{equation*}

But since nonzero scalar multiples affect neither spanning nor linear independence, we could instead take

\begin{equation*} \im D = \Span \{ 1, x, x^2, \dotsc, x^{n-1} \} = \poly_{n-1}(\R) \text{.} \end{equation*}

This result should not be surprising, as our knowledge of antidifferentiation leads us to expect that every polynomial in \(\poly_{n-1}(\R)\) is the derivative of at least one polynomial in \(\poly_n(\R)\text{.}\)

And, yet again, we have

\begin{equation*} \dim (\ker D) + \dim (\im D) = 1 + n = n + 1 = \dim \bigl(\poly_n(\R)\bigr) \text{,} \end{equation*}

as expected.
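
A minimal computational check for a concrete instance, assuming SymPy and taking \(n = 4\text{:}\) the matrix of \(D\) relative to the standard basis \(\{ 1, x, x^2, \dotsc, x^n \}\) has nullity \(1\) and rank \(n\text{.}\)

from sympy import Matrix

n = 4                           # a concrete instance; dim P_n(R) = n + 1 = 5

# Matrix of D relative to the standard basis {1, x, ..., x^n}:
# D(x^j) = j x^(j-1), so column j carries a single entry j in row j - 1.
M = Matrix.zeros(n + 1, n + 1)
for j in range(1, n + 1):
    M[j - 1, j] = j

print(len(M.nullspace()))       # 1 = dim ker D (the constant polynomials)
print(M.rank())                 # n = dim im D, and 1 + n = n + 1 = dim P_n(R)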

Subsection 43.4.3 Special examples

Let's look back at some of the examples from Subsection 42.3.5.

Example 43.4.5. Kernel and image of a zero transformation.

Consider \(\funcdef{\zerovec_{V,W}}{V}{W}\) defined by \(\zerovec_{V,W}(\uvec{v}) = \zerovec_W\) for all \(\uvec{v}\) in the domain space \(V\text{,}\) where \(\zerovec_W\) is the zero vector in the codomain space \(W\text{.}\) Then clearly

\begin{align*} \ker \zerovec_{V,W} \amp = V \text{,} \amp \im \zerovec_{V,W} \amp = \{ \zerovec_W \}\text{,} \end{align*}

and we have

\begin{equation*} \dim (\ker \zerovec_{V,W}) + \dim (\im \zerovec_{V,W}) = \dim V + 0 = \dim V \text{,} \end{equation*}

as expected.

Example 43.4.6. Kernel and image of an identity operator.

Consider \(\funcdef{I_V}{V}{V}\) defined by \(I_V(\uvec{v}) = \uvec{v}\) for all \(\uvec{v}\) in \(V\text{.}\) Then clearly

\begin{align*} \ker I_V \amp = \{ \zerovec \} \text{,} \amp \im I_V \amp = V\text{,} \end{align*}

and we have

\begin{equation*} \dim (\ker I_V) + \dim (\im I_V) = 0 + \dim V = \dim V \text{,} \end{equation*}

as expected.
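
Both of these extreme cases are easy to confirm computationally. A minimal sketch, assuming SymPy and taking \(V = \R^3\) as a concrete instance:

from sympy import eye, zeros

Z = zeros(3, 3)                 # matrix of the zero transformation on R^3
I = eye(3)                      # matrix of the identity operator on R^3

print(len(Z.nullspace()), Z.rank())   # 3 0: kernel is all of V, image is {0}
print(len(I.nullspace()), I.rank())   # 0 3: kernel is {0}, image is all of V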

Example 43.4.7. Kernel and image of a coordinate map.

For finite-dimensional space \(V\) with basis \(\basisfont{B}\text{,}\) consider \(\funcdef{\coordmap{B}}{V}{\R^n}\) or \(\funcdef{\coordmap{B}}{V}{\C^n}\text{,}\) depending on whether \(V\) is a real or complex space, where

\begin{equation*} \coordmap{B}(\uvec{v}) = \matrixOf{\uvec{v}}{B} \end{equation*}

for each \(\uvec{v}\) in \(V\text{.}\) Since coordinate vectors are unique (Theorem 19.5.3), the only vector in \(\ker \coordmap{B}\) is the zero vector. And since every column vector of scalars can be traced back to a vector in \(V\) by using those scalars as coefficients in a linear combination, as in

\begin{equation*} \uvec{x} = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} \quad \mapsto \quad \uvec{v} = x_1 \uvec{v}_1 + \dotsb + x_n \uvec{v}_n \end{equation*}

(where the \(\uvec{v}_j\) are the basis vectors in \(\basisfont{B}\)), we must have \(\im \coordmap{B}\) equal to \(\R^n\) or \(\C^n\text{,}\) as appropriate.
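
As a concrete illustration, assuming SymPy and the basis \(\basisfont{B} = \{ 1, 1 + x, 1 + x + x^2 \}\) of \(\poly_2(\R)\) (a hypothetical choice made only for this sketch), coordinate vectors can be computed by solving a linear system:

from sympy import Matrix

# Columns hold the coefficients (constant, x, x^2) of the basis polynomials.
P = Matrix([[1, 1, 1],
            [0, 1, 1],
            [0, 0, 1]])

# Coordinate vector of v = 2 + 3x + x^2: solve P * [v]_B = (coefficients of v).
v = Matrix([2, 3, 1])
print(P.solve(v))               # [-1, 2, 1]: unique, since P is invertible

# Invertibility of P reflects ker C_B = {0} and im C_B = R^3.
print(P.rank())                 # 3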

Example 43.4.8. Kernel and image of pairing with a fixed vector.

Suppose that \(V\) is a finite-dimensional inner product space, and \(\uvec{u}_0\) is a fixed choice of (nonzero) vector in \(V\text{.}\) Let \(\funcdef{T_{\uvec{u}_0}}{V}{\R^1}\) represent the linear transformation defined by pairing with \(\uvec{u}_0\text{:}\)

\begin{equation*} T_{\uvec{u}_0}(\uvec{x}) = \inprod{\uvec{x}}{\uvec{u}_0} \end{equation*}

for all \(\uvec{x}\) in \(V\text{.}\)

Then to be in \(\ker T_{\uvec{u}_0}\text{,}\) a vector \(\uvec{x}\) must be orthogonal to \(\uvec{u}_0\text{.}\) This implies that \(\ker T_{\uvec{u}_0} = \orthogcmp{U}\) for \(U = \Span \{\uvec{u}_0\}\text{.}\)

On the one hand, we know that

\begin{equation*} \dim U + \dim \orthogcmp{U} = \dim V \text{,} \end{equation*}

since a subspace and its orthogonal complement always form a complete set of independent subspaces of an inner product space (Theorem 37.5.16). On the other hand, the Dimension Theorem says that

\begin{equation*} \dim (\im T_{\uvec{u}_0}) + \dim (\ker T_{\uvec{u}_0}) = \dim V \text{.} \end{equation*}

From \(\ker T_{\uvec{u}_0} = \orthogcmp{U}\text{,}\) we may conclude that

\begin{equation*} \dim (\im T_{\uvec{u}_0}) = \dim U = \dim (\Span \{\uvec{u}_0\}) = 1 \text{.} \end{equation*}

But \(\im T_{\uvec{u}_0}\) is a subspace of the codomain space \(\R^1\text{,}\) which itself has dimension \(1\text{,}\) so we must have \(\im T_{\uvec{u}_0} = \R^1\text{.}\)
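
A minimal check, assuming SymPy, \(V = \R^3\) with the dot product, and the hypothetical choice \(\uvec{u}_0 = (1, 2, 2)\text{:}\) the matrix of \(T_{\uvec{u}_0}\) is then the row vector \(\utrans{\uvec{u}_0}\text{,}\) with nullity \(2\) and rank \(1\text{.}\)

from sympy import Matrix

u0 = Matrix([1, 2, 2])          # hypothetical fixed vector in R^3
M = u0.T                        # matrix of T_{u0}(x) = x . u0, a 1 x 3 row

print(len(M.nullspace()))       # 2 = dim ker T_{u0}: the plane orthogonal to u0
print(M.rank())                 # 1 = dim im T_{u0}, so im T_{u0} = R^1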