Section 43.4 Examples
Subsection 43.4.1 Kernel and image of a matrix transformation
Example 43.4.1.
Using a $4 \times 5$ matrix $A$, create the matrix transformation $T_A\colon \mathbb{R}^5 \to \mathbb{R}^4$ by defining $T_A(\mathbf{x}) = A\mathbf{x}$, as usual. The RREF of $A$ is the same as in Discovery 43.1.a and Discovery 43.4.c, with leading ones in the first, second, and fourth columns.

To determine a basis for $\ker T_A$, we solve the homogeneous system $A\mathbf{x} = \mathbf{0}$. The RREF indicates that we should assign parameters to variables $x_3$ and $x_5$. This assignment of parameters leads to a general solution with two parameters, and extracting one basis vector for each parameter yields
\[ \dim(\ker T_A) = 2. \]

To determine a basis of $\operatorname{im} T_A$, we need to determine a basis for the column space of $A$, which can be carried out using Procedure 21.3.2. We have already reduced $A$ to RREF above, so identifying leading ones in the first, second, and fourth columns of $\operatorname{RREF}(A)$ tells us that the corresponding columns of $A$ form a basis for $\operatorname{im} T_A$, and
\[ \dim(\operatorname{im} T_A) = 3. \]

Finally, notice that
\[ \dim(\ker T_A) + \dim(\operatorname{im} T_A) = 2 + 3 = 5, \]
the number of columns of $A$, as expected.
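The row-reduction bookkeeping above can be mirrored in code. The sketch below uses a hypothetical $4 \times 5$ matrix (not the matrix $A$ of this example, which is not reproduced here) whose RREF also happens to have leading ones in the first, second, and fourth columns, and checks the dimension count.

```python
# Hypothetical 4x5 matrix (NOT the example's matrix) whose RREF has leading
# ones in columns 1, 2, and 4, illustrating the same kernel/image computation.
from fractions import Fraction

A = [[1, 1, 5, 0, 3],
     [0, 1, 3, 1, 6],
     [1, 0, 2, 1, 5],
     [1, 1, 5, 1, 7]]

def rref(M):
    """Row-reduce M to reduced row echelon form; return (R, pivot_columns)."""
    R = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(R), len(R[0])
    pivots = []
    r = 0
    for c in range(cols):
        # find a row at or below r with a nonzero entry in column c
        pivot = next((i for i in range(r, rows) if R[i][c] != 0), None)
        if pivot is None:
            continue
        R[r], R[pivot] = R[pivot], R[r]
        R[r] = [x / R[r][c] for x in R[r]]          # scale pivot entry to 1
        for i in range(rows):
            if i != r and R[i][c] != 0:             # clear the rest of column c
                R[i] = [a - R[i][c] * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

R, pivots = rref(A)
n_cols = len(A[0])
dim_im = len(pivots)                  # leading ones mark basis columns of im
dim_ker = n_cols - dim_im             # one kernel basis vector per parameter
print("pivot columns (0-indexed):", pivots)
assert dim_ker + dim_im == n_cols     # Dimension Theorem check
```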
Subsection 43.4.2 Kernel and image of a linear transformation
Example 43.4.2. Symmetric and skew-symmetric matrices.
Consider $T\colon M_2(\mathbb{R}) \to M_2(\mathbb{R})$ by
\[ T(A) = A - A^T, \]
as considered in Discovery 43.2.a and Discovery 43.3.b. In those discovery activities, we identified the kernel as consisting of the symmetric matrices. In the $2 \times 2$ case, an arbitrary symmetric matrix is of the form
\[ \begin{bmatrix} a & b \\ b & d \end{bmatrix} = a \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + b \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + d \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}, \]
so that
\[ \ker T = \operatorname{Span}\left\{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \right\}. \]
In the notation of Procedure 43.3.1, we take
\[ K = \left\{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \right\}. \]
Since $\dim(M_2(\mathbb{R})) = 4$, we only need one more vector to enlarge $K$ to a basis for the domain space $M_2(\mathbb{R})$. The standard basis vector
\[ \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \]
is not symmetric, and so does not lie in $\ker T$. Therefore, taking
\[ K' = \left\{ \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \right\}, \]
the vectors in $K$ and $K'$ together form a basis for $M_2(\mathbb{R})$. To obtain a basis for $\operatorname{im} T$, just apply $T$ to the vector in $K'$:
\[ T\left( \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \right) = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} - \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \]
so that
\[ \operatorname{im} T = \operatorname{Span}\left\{ \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \right\}. \]
Notice that $\operatorname{im} T$ consists of the skew-symmetric $2 \times 2$ matrices, defined by the “skewed” symmetry condition $A^T = -A$.
And also notice that
\[ \dim(\ker T) + \dim(\operatorname{im} T) = 3 + 1 = 4 = \dim(M_2(\mathbb{R})), \]
as expected.
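These computations are easy to verify numerically. The sketch below assumes the transformation is $T(A) = A - A^T$, a map whose kernel is the symmetric matrices and whose image is the skew-symmetric matrices, as described above; the particular test matrices are illustrative choices.

```python
# Sketch check of the kernel/image claims, assuming the transformation is
# T(A) = A - A^T on 2x2 matrices (consistent with the symmetric kernel and
# skew-symmetric image described in the example).

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def T(A):
    At = transpose(A)
    return [[A[i][j] - At[i][j] for j in range(2)] for i in range(2)]

# Symmetric matrices are sent to zero: they form the kernel.
S = [[3, 5], [5, 7]]
assert T(S) == [[0, 0], [0, 0]]

# A non-symmetric input produces a skew-symmetric output (the image).
E12 = [[0, 1], [0, 0]]
out = T(E12)
assert out == [[0, 1], [-1, 0]]
assert transpose(out) == [[-x for x in row] for row in out]  # out^T == -out
```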
Example 43.4.3. Left multiplication of 2×2 matrices.
Consider $L_B\colon M_2(\mathbb{R}) \to M_2(\mathbb{R})$ defined by $L_B(A) = BA$, where
\[ B = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}, \]
as in Discovery 43.3.c. An arbitrary $2 \times 2$ matrix
\[ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \]
is in $\ker L_B$ when $BA = 0$, which occurs when $c = -a$ and $d = -b$. Inserting these two conditions into the arbitrary matrix above, we have
\[ \begin{bmatrix} a & b \\ -a & -b \end{bmatrix} = a \begin{bmatrix} 1 & 0 \\ -1 & 0 \end{bmatrix} + b \begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix}, \]
so that
\[ \ker L_B = \operatorname{Span}\left\{ \begin{bmatrix} 1 & 0 \\ -1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix} \right\}. \]
In the notation of Procedure 43.3.1, we take
\[ K = \left\{ \begin{bmatrix} 1 & 0 \\ -1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix} \right\}. \]
Since $\dim(M_2(\mathbb{R})) = 4$, we need two more vectors to enlarge $K$ to a basis for the domain space $M_2(\mathbb{R})$. Neither of the standard basis vectors
\[ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \qquad \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \]
is in $\ker L_B$, and the four vectors in $K$ and
\[ K' = \left\{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \right\} \]
remain independent when taken all together, and so form a basis for the domain space $M_2(\mathbb{R})$. Putting each of the vectors in $K'$ through $L_B$, we get
\[ L_B\left( \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \right) = \begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}, \qquad L_B\left( \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \right) = \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix}, \]
so that
\[ \operatorname{im} L_B = \operatorname{Span}\left\{ \begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix} \right\}. \]
This agrees with our earlier calculation of the result of applying $L_B$ to an arbitrary input:
\[ \begin{bmatrix} e & f \\ e & f \end{bmatrix} = e \begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix} + f \begin{bmatrix} 0 & 1 \\ 0 & 1 \end{bmatrix}, \]
where we have replaced the top two entries of the first result with new arbitrary entries $e, f$ to emphasize that these entries are independently arbitrary through choice of $a, c$ values versus $b, d$ values.
And, once again, we have
\[ \dim(\ker L_B) + \dim(\operatorname{im} L_B) = 2 + 2 = 4 = \dim(M_2(\mathbb{R})), \]
as expected.
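The kernel conditions $c = -a$ and $d = -b$ can be checked in code. The sketch below uses the hypothetical choice $B = \left[\begin{smallmatrix} 1 & 1 \\ 1 & 1 \end{smallmatrix}\right]$, one matrix consistent with those conditions; the example's actual $B$ may differ.

```python
# Sketch of the kernel/image computation for left multiplication L_B(A) = B*A,
# using B = [[1, 1], [1, 1]], a hypothetical choice consistent with the stated
# kernel conditions c = -a and d = -b.

B = [[1, 1], [1, 1]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def L_B(A):
    return matmul(B, A)

# Any matrix with c = -a and d = -b lies in the kernel.
a, b = 4, -9
assert L_B([[a, b], [-a, -b]]) == [[0, 0], [0, 0]]

# Images of the standard basis vectors E11 and E12 span the image.
assert L_B([[1, 0], [0, 0]]) == [[1, 0], [1, 0]]
assert L_B([[0, 1], [0, 0]]) == [[0, 1], [0, 1]]

# Every output B*A has equal rows, matching Span{[[1,0],[1,0]], [[0,1],[0,1]]}.
out = L_B([[2, 3], [5, 7]])
assert out[0] == out[1]
```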
Example 43.4.4. Differentiation of polynomials.
Consider $\frac{d}{dx}\colon P_n(\mathbb{R}) \to P_n(\mathbb{R})$ by $\frac{d}{dx}\bigl(p(x)\bigr) = p'(x)$. For simplicity, write $D$ in place of the differential operator $\frac{d}{dx}$.
Similarly to Discovery 43.2.c, $\ker D$ consists of the constant polynomials, so that
\[ \ker D = \operatorname{Span}\{1\}. \]
In the notation of Procedure 43.3.1, we take
\[ K = \{1\} \]
as a basis for $\ker D$. It is straightforward to enlarge this to a basis for the domain space $P_n(\mathbb{R})$, as the constant polynomial $1$ is the first vector in the standard basis
\[ \{1, x, x^2, \dotsc, x^n\}. \]
So we may take
\[ K' = \{x, x^2, \dotsc, x^n\} \]
and differentiate each of these vectors to get
\[ D(x) = 1, \quad D(x^2) = 2x, \quad D(x^3) = 3x^2, \quad \dotsc, \quad D(x^n) = n x^{n-1}. \]
So we have
\[ \operatorname{im} D = \operatorname{Span}\{1, 2x, 3x^2, \dotsc, n x^{n-1}\}. \]
But since scalar multiples do not affect linear independence, we could instead take
\[ \operatorname{im} D = \operatorname{Span}\{1, x, x^2, \dotsc, x^{n-1}\} = P_{n-1}(\mathbb{R}). \]
This result should not be surprising, as our knowledge of antidifferentiation leads us to expect that every polynomial in Pn−1(R) is the derivative of at least one polynomial in Pn(R).
And, yet again, we have
\[ \dim(\ker D) + \dim(\operatorname{im} D) = 1 + n = n + 1 = \dim(P_n(\mathbb{R})), \]
as expected.
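Representing a polynomial in $P_n(\mathbb{R})$ by its coefficient list makes $D$ a simple shift-and-scale operation. The following sketch (with the illustrative choice $n = 4$) checks that constants are sent to zero and that every polynomial of degree at most $n - 1$ is hit.

```python
# Sketch of D = d/dx on P_n(R), representing p(x) = a_0 + a_1 x + ... + a_n x^n
# by its coefficient list [a_0, a_1, ..., a_n]; n = 4 is an illustrative choice.
from fractions import Fraction

n = 4

def D(coeffs):
    """Differentiate: the x^k coefficient of p' is (k+1) * a_{k+1}."""
    return [k * coeffs[k] for k in range(1, len(coeffs))] + [0]

# Constant polynomials are exactly the kernel: D sends them to zero.
assert D([7, 0, 0, 0, 0]) == [0, 0, 0, 0, 0]

# The image is all of P_{n-1}: any target of degree <= n - 1 has an
# antiderivative in P_n, since x^k is the derivative of x^(k+1)/(k+1).
target = [3, 1, 4, 1, 0]                                  # degree <= n - 1
antideriv = [Fraction(0)] + [Fraction(target[k], k + 1) for k in range(n)]
assert D(antideriv) == target

# Dimension Theorem: dim ker D + dim im D = 1 + n = n + 1 = dim P_n(R).
dim_ker, dim_im = 1, n
assert dim_ker + dim_im == n + 1
```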
Subsection 43.4.3 Special examples
Let's look back at some of the examples from Subsection 42.3.5.

Example 43.4.5. Kernel and image of a zero transformation.
Consider $0_{V,W}\colon V \to W$ defined by $0_{V,W}(v) = \mathbf{0}_W$ for all $v$ in the domain space $V$, where $\mathbf{0}_W$ is the zero vector in the codomain space $W$. Then clearly
\[ \ker 0_{V,W} = V, \qquad \operatorname{im} 0_{V,W} = \{\mathbf{0}_W\}, \]
and we have
\[ \dim(\ker 0_{V,W}) + \dim(\operatorname{im} 0_{V,W}) = \dim V + 0 = \dim V, \]
as expected.
Example 43.4.6. Kernel and image of an identity operator.
Consider $I_V\colon V \to V$ defined by $I_V(v) = v$ for all $v$ in $V$. Then clearly
\[ \ker I_V = \{\mathbf{0}\}, \qquad \operatorname{im} I_V = V, \]
and we have
\[ \dim(\ker I_V) + \dim(\operatorname{im} I_V) = 0 + \dim V = \dim V, \]
as expected.
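Both of these extreme cases can be illustrated concretely; the following minimal sketch uses $V = \mathbb{R}^2$ as an illustrative choice.

```python
# Minimal illustration of the two extreme cases on V = R^2 (an illustrative
# choice of space): the zero transformation and the identity operator.

def zero_map(v):
    return (0, 0)          # every input goes to the zero vector

def identity_map(v):
    return v               # every input is fixed

vs = [(1, 0), (0, 1), (3, -5)]
assert all(zero_map(v) == (0, 0) for v in vs)   # ker 0 = V, im 0 = {0}
assert all(identity_map(v) == v for v in vs)    # ker I = {0}, im I = V

# Dimension Theorem in each case (dim V = 2):
assert 2 + 0 == 2   # zero map: dim ker + dim im = dim V
assert 0 + 2 == 2   # identity: dim ker + dim im = dim V
```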
Example 43.4.7. Kernel and image of a coordinate map.
For a finite-dimensional space $V$ with basis $\mathcal{B} = \{v_1, v_2, \dotsc, v_n\}$, consider $C_{\mathcal{B}}\colon V \to \mathbb{R}^n$ or $C_{\mathcal{B}}\colon V \to \mathbb{C}^n$, depending on whether $V$ is a real or complex space, where $C_{\mathcal{B}}(v)$ is the coordinate vector of $v$ relative to $\mathcal{B}$, for each $v$ in $V$. Since coordinate vectors are unique (Theorem 19.5.3), the only vector in $\ker C_{\mathcal{B}}$ is the zero vector. And since every column vector of scalars can be traced back to a vector in $V$ by using those scalars as coefficients in a linear combination, as in
\[ v = c_1 v_1 + c_2 v_2 + \dotsb + c_n v_n, \]
we must have $\operatorname{im} C_{\mathcal{B}}$ equal to $\mathbb{R}^n$ or $\mathbb{C}^n$, as appropriate.
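A coordinate map becomes concrete once a basis is fixed. The sketch below uses $V = \mathbb{R}^2$ with the hypothetical basis $\{(1, 1), (1, -1)\}$ (not a basis from the text), and checks that the kernel is trivial and the image is all of $\mathbb{R}^2$.

```python
# Sketch of a coordinate map for V = R^2 with the hypothetical basis
# B = {(1, 1), (1, -1)}: C_B sends v to the unique coefficients (c1, c2)
# with v = c1*(1, 1) + c2*(1, -1).

def C_B(v):
    x, y = v
    # Solve c1 + c2 = x, c1 - c2 = y by elimination.
    c1 = (x + y) / 2
    c2 = (x - y) / 2
    return (c1, c2)

def from_coords(c):
    c1, c2 = c
    return (c1 + c2, c1 - c2)     # c1*(1,1) + c2*(1,-1)

# Only the zero vector has coordinates (0, 0): the kernel is trivial.
assert C_B((0, 0)) == (0, 0)

# Every coordinate pair is hit: the image is all of R^2.
for coords in [(2.0, 3.0), (-1.0, 0.5)]:
    assert C_B(from_coords(coords)) == coords
```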
Example 43.4.8. Kernel and image of pairing with a fixed vector.
Suppose that $V$ is a finite-dimensional inner product space, and $u_0$ is a fixed choice of (nonzero) vector in $V$. Let $T_{u_0}\colon V \to \mathbb{R}^1$ represent the linear transformation defined by pairing with $u_0$:
\[ T_{u_0}(x) = \langle x, u_0 \rangle \]
for all $x$ in $V$.
Then to be in $\ker T_{u_0}$, a vector $x$ must be orthogonal to $u_0$. This implies that $\ker T_{u_0} = U^\perp$ for $U = \operatorname{Span}\{u_0\}$.
On the one hand, we know that
\[ \dim U + \dim U^\perp = \dim V, \]
since a subspace and its orthogonal complement always form a complete set of independent subspaces of an inner product space (Theorem 37.5.16). On the other hand, the Dimension Theorem says that
\[ \dim(\ker T_{u_0}) + \dim(\operatorname{im} T_{u_0}) = \dim V. \]
From $\ker T_{u_0} = U^\perp$, we may conclude that
\[ \dim(\operatorname{im} T_{u_0}) = \dim V - \dim U^\perp = \dim U = 1. \]
But $\operatorname{im} T_{u_0}$ is a subspace of the codomain space $\mathbb{R}^1$, which itself has dimension $1$, so we must have $\operatorname{im} T_{u_0} = \mathbb{R}^1$.
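Pairing with a fixed vector can be sketched with the standard dot product on $\mathbb{R}^3$; the choice $u_0 = (1, 2, 2)$ below is illustrative, not from the text.

```python
# Sketch of pairing with a fixed vector in V = R^3 under the standard dot
# product, using the hypothetical choice u0 = (1, 2, 2).

u0 = (1, 2, 2)

def T_u0(x):
    return sum(xi * ui for xi, ui in zip(x, u0))   # <x, u0>

# Vectors orthogonal to u0 lie in the kernel; these two span U-perp.
assert T_u0((2, -1, 0)) == 0
assert T_u0((0, 1, -1)) == 0

# T_u0 is onto R^1: any target value t is hit by a scalar multiple of u0.
t = 4.5
scale = t / T_u0(u0)               # T_u0(u0) = |u0|^2 = 9
assert T_u0(tuple(scale * ui for ui in u0)) == t

# Dimension Theorem: dim ker (= 2) + dim im (= 1) = dim V (= 3).
assert 2 + 1 == 3
```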