Section 44.3 Concepts
In this section.
- Subsection 44.3.1 Composition of linear transformations
- Subsection 44.3.2 Composition of matrix transformations
- Subsection 44.3.3 Inverse transformations
- Subsection 44.3.4 Invertibility conditions
- Subsection 44.3.5 Inverses of matrix transformations
- Subsection 44.3.6 Constructing invertible transformations
- Subsection 44.3.7 Isomorphisms
- Subsection 44.3.8 Constructing isomorphisms
- Subsection 44.3.9 Important isomorphisms
Subsection 44.3.1 Composition of linear transformations
As linear transformations are functions between vector spaces, they can be composed just like functions. That is, for transformations \funcdef{T}{U}{V} and \funcdef{S}{V}{W}\text{,} where the domain space of S is the same as the codomain space of T\text{,} we can define the composite \funcdef{ST}{U}{W} by
\begin{equation*}
(ST)(\uvec{u}) = S(T(\uvec{u})) \text{.}
\end{equation*}
Warning 44.3.1.
Just as with matrix multiplication, in general the compositions ST and TS are not equal. In fact, more often than not, one of the two orders is not even defined, as domains and codomains will not match up in both orders.
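The warning can be checked concretely. In the sketch below, S and T are two hypothetical linear operators on \R^2\text{,} written as Python functions on coordinate pairs; the two orders of composition give different results.

```python
# Two hypothetical linear operators on R^2, written as functions on pairs.
def T(v):
    x, y = v
    return (y, x)          # T swaps the coordinates

def S(v):
    x, y = v
    return (x, 0)          # S projects onto the first coordinate

def compose(f, g):
    """Return the composite f(g(v)): apply g first, then f."""
    return lambda v: f(g(v))

ST = compose(S, T)         # apply T, then S
TS = compose(T, S)         # apply S, then T

print(ST((1, 2)))          # (2, 0)
print(TS((1, 2)))          # (0, 1) -- the two orders disagree
```

Here both orders happen to be defined because domain and codomain are the same space; with different spaces, usually at most one order even makes sense.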
Subsection 44.3.2 Composition of matrix transformations
If \funcdef{T_A}{\R^n}{\R^m} and \funcdef{S_B}{\R^m}{\R^\ell} are the matrix transformations corresponding to m \times n matrix A and \ell \times m matrix B\text{,} then chaining the input-output processes of multiplication by A followed by multiplication by B shows that the composition S_B T_A is the matrix transformation corresponding to the product matrix BA\text{:}
\begin{equation*}
S_B T_A = T_{BA} \text{.}
\end{equation*}
Subsection 44.3.3 Inverse transformations
When a function is one-to-one, each output can be traced back to the one particular input that produced it. So if \funcdef{T}{V}{W} is injective and \uvec{w} is a vector in \im T\text{,} there is one unique answer to the question: what vector \uvec{v} in the domain space V will have result T(\uvec{v}) = \uvec{w}\text{?} Sending each such \uvec{w} back to its unique preimage \uvec{v} defines the inverse transformation \funcdef{\inv{T}}{\im T}{V}\text{.}
Subsection 44.3.4 Invertibility conditions
A function is invertible precisely when it is one-to-one, and by definition a function is one-to-one when pairs of distinct inputs always produce distinct outputs. If a linear transformation \funcdef{T}{V}{W} has a pair of distinct inputs \uvec{v}_1,\uvec{v}_2 that produce the same output, then
\begin{equation*}
T(\uvec{v}_1 - \uvec{v}_2) = T(\uvec{v}_1) - T(\uvec{v}_2) = \zerovec \text{,}
\end{equation*}
so the nonzero vector \uvec{v}_1 - \uvec{v}_2 lies in \ker T\text{.} Thus T is one-to-one precisely when \ker T = \{\zerovec\}\text{.}
Warning 44.3.2.
While \ker T = \{\zerovec\} is a sufficient condition to conclude that T is one-to-one, \dim W \ge \dim V is not.
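The warning can be illustrated with a minimal sketch, assuming a hypothetical linear map \funcdef{T}{\R^2}{\R^3}\text{:} the codomain dimension exceeds the domain dimension, yet T fails to be one-to-one because its kernel is nontrivial.

```python
# A hypothetical linear map T: R^2 -> R^3 illustrating Warning 44.3.2:
# dim W = 3 >= 2 = dim V, yet T is not one-to-one.
def T(v):
    x, y = v
    return (x, x, 0)       # the second input coordinate is discarded

# Two distinct inputs produce the same output ...
print(T((5, 1)) == T((5, -7)))   # True
# ... equivalently, a nonzero vector lies in ker T:
print(T((0, 1)))                 # (0, 0, 0)
```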
Subsection 44.3.5 Inverses of matrix transformations
If \funcdef{T_A}{\R^n}{\R^n} is the matrix transformation corresponding to n \times n matrix A\text{,} then \ker T_A is precisely the null space of A\text{.} And we know that a matrix is invertible precisely when its null space is trivial (Statement 9 of Theorem 21.5.5). Transformation T_A is defined by multiplication by A\text{,} and clearly we can reverse that input-output process through multiplication by \inv{A}\text{.} That is, T_A is invertible precisely when A is invertible, and
\begin{equation*}
\inv{(T_A)} = T_{\inv{A}} \text{.}
\end{equation*}
A look ahead.
In Chapter 45, we will see that the question of invertibility of any linear transformation with a finite-dimensional domain space can be reduced to invertibility of a square matrix, but doing so will require choosing a basis for each of the domain space and the image.
Subsection 44.3.6 Constructing invertible transformations
We know that a linear transformation V \to W can be defined by choosing a basis for the domain space V and then choosing a corresponding output vector in the codomain space W for each domain basis vector (Corollary 42.5.3). If we choose our output vectors to be linearly independent in W (assuming W has large enough dimension to do so), then those linearly independent vectors will span the image, and the rank of our transformation will be equal to \dim V\text{.} This forces the nullity to be zero by the Dimension Theorem, so, as in Subsection 44.3.4, the constructed transformation will then be invertible.
Procedure 44.3.3. Using a domain basis to define an invertible linear transformation.
To create an invertible linear transformation \funcdef{T}{V}{W}\text{,} with V finite-dimensional and \dim W \ge \dim V = n\text{.}
- Choose a basis \basisfont{B} = \{\uvec{v}_1,\uvec{v}_2,\dotsc,\uvec{v}_n\} for V\text{.}
- Choose linearly independent vectors \uvec{w}_1,\uvec{w}_2,\dotsc,\uvec{w}_n in W\text{.}
- Set T(\uvec{v}_j) = \uvec{w}_j\text{.}
Then every other image vector for T can be computed by
\begin{equation*}
T(\uvec{v}) = T(c_1 \uvec{v}_1 + c_2 \uvec{v}_2 + \dotsb + c_n \uvec{v}_n) = c_1 \uvec{w}_1 + c_2 \uvec{w}_2 + \dotsb + c_n \uvec{w}_n \text{,}
\end{equation*}
where c_1, c_2, \dotsc, c_n are the coordinates of \uvec{v} relative to the basis \basisfont{B}\text{,}
and we will have both \rank T = \dim V and \nullity T = 0\text{,} as required for T to be invertible.
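The procedure above can be sketched in code, assuming V = \R^2 with the standard basis (so the coordinates of an input are just its components) and a hypothetical linearly independent pair \uvec{w}_1, \uvec{w}_2 chosen in W = \R^3\text{:}

```python
# Sketch of Procedure 44.3.3 with V = R^2 (standard basis) and W = R^3.
# w1, w2 are a hypothetical linearly independent pair chosen in R^3.
w1 = (1, 1, 0)
w2 = (0, 1, 1)

def T(v):
    """Extend T(e1) = w1, T(e2) = w2 by linearity.

    Relative to the standard basis, the coordinates of v = (c1, c2)
    are just c1 and c2, so T(v) = c1*w1 + c2*w2.
    """
    c1, c2 = v
    return tuple(c1 * a + c2 * b for a, b in zip(w1, w2))

print(T((1, 0)))   # (1, 1, 0), which is w1
print(T((2, 3)))   # (2, 5, 3), which is 2*w1 + 3*w2
```

Because w1 and w2 are independent, the only input sent to the zero vector is (0, 0), matching the trivial-kernel condition for invertibility.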
Subsection 44.3.7 Isomorphisms
When a transformation \funcdef{T}{V}{W} is both one-to-one and onto (so that \im T = W), then T and \inv{T} create a one-for-one matching in each direction. And since both T and \inv{T} are linear, any calculation of the vector operations corresponds through T to a calculation in W\text{,} and vice versa through \inv{T}\text{:}
\begin{align*}
T(\uvec{u} + \uvec{v}) \amp = T(\uvec{u}) + T(\uvec{v})\text{,} \amp T(a \uvec{v}) \amp = a T(\uvec{v})\text{,} \\
\inv{T}(\uvec{w}_1 + \uvec{w}_2) \amp = \inv{T}(\uvec{w}_1) + \inv{T}(\uvec{w}_2)\text{,} \amp \inv{T}(a \uvec{w}) \amp = a \inv{T}(\uvec{w})\text{.}
\end{align*}
Such a one-to-one, onto linear transformation is called an isomorphism, and when one exists the spaces V and W are said to be isomorphic. Since an isomorphism sends a basis of V to a basis of W\text{,} isomorphic finite-dimensional spaces always have the same dimension.
A look ahead.
We will see that it also works the other way, so that finite-dimensional vector spaces with the same dimension are always isomorphic.
Being isomorphic is an equivalence relation on vector spaces; that is, it is
- reflexive
a vector space is always isomorphic to itself;
- symmetric
if V is isomorphic to W\text{,} then W is isomorphic to V\text{;} and
- transitive
if U is isomorphic to V and V is isomorphic to W\text{,} then U is isomorphic to W\text{.}
Subsection 44.3.8 Constructing isomorphisms
In Procedure 44.3.3, we described how an invertible transformation can be defined by sending a basis for the domain space to a linearly independent set in the codomain space. If we would like to also have surjectivity, then we will need that independent image collection to also span the entire codomain space. In other words, to construct an isomorphism we should send a basis to a basis.
Procedure 44.3.4. Using domain and codomain bases to define an isomorphism.
To create an isomorphism \funcdef{T}{V}{W}\text{,} with both V,W finite-dimensional and \dim W = \dim V = n\text{.}
- Choose a basis \basisfont{B} = \{\uvec{v}_1,\uvec{v}_2,\dotsc,\uvec{v}_n\} for V\text{.}
- Choose a basis \basisfont{B}' = \{\uvec{w}_1,\uvec{w}_2,\dotsc,\uvec{w}_n\} for W\text{.}
- Set T(\uvec{v}_j) = \uvec{w}_j\text{.}
Then every other image vector for T can be computed by
\begin{equation*}
T(\uvec{v}) = T(c_1 \uvec{v}_1 + c_2 \uvec{v}_2 + \dotsb + c_n \uvec{v}_n) = c_1 \uvec{w}_1 + c_2 \uvec{w}_2 + \dotsb + c_n \uvec{w}_n \text{,}
\end{equation*}
where c_1, c_2, \dotsc, c_n are the coordinates of \uvec{v} relative to the basis \basisfont{B}\text{,} and we will have both \rank T = \dim V = \dim W and \nullity T = 0\text{,} so that T is both onto and one-to-one, as required for T to be an isomorphism.
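As a sketch of this procedure, assume V = W = \R^2 and send the hypothetical domain basis \basisfont{B} = \{(1,0), (1,1)\} to the standard basis. Computing T requires first finding coordinates relative to \basisfont{B}\text{;} the inverse simply sends the standard basis back to \basisfont{B}\text{.}

```python
# Sketch of Procedure 44.3.4 with V = W = R^2, sending the hypothetical
# domain basis B = {(1,0), (1,1)} to the standard basis {e1, e2}.
def coords_B(v):
    """Coordinates (c1, c2) of v relative to B: v = c1*(1,0) + c2*(1,1)."""
    x, y = v
    return (x - y, y)

def T(v):
    c1, c2 = coords_B(v)
    return (c1, c2)        # c1*e1 + c2*e2

def T_inv(w):
    """The inverse sends the standard basis back to B."""
    a, b = w
    return (a + b, b)      # a*(1,0) + b*(1,1)

v = (3, 5)
print(T_inv(T(v)) == v)    # True: T is invertible, hence an isomorphism
```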
Subsection 44.3.9 Important isomorphisms
The identity operator.
It should be clear that the identity operator \funcdef{I_V}{V}{V} defined by I_V(\uvec{v}) = \uvec{v} is always an isomorphism, as its kernel is trivial and the dimensions of the domain and codomain are equal. This is the isomorphism that sends every basis of V to that same basis.
Scalar operators.
Similar to the identity operator, every nonzero scalar operator \funcdef{m_a}{V}{V} defined by m_a(\uvec{v}) = a \uvec{v} is an isomorphism. In particular, the negative operator \funcdef{\neg_V}{V}{V} is an isomorphism. As nonzero scalar multiples do not affect independence, a scalar operator sends each basis of V to a scaled version of that same basis.
A coordinate map relative to a basis.
As described in Subsection 42.3.5, a choice of basis \basisfont{B} for a finite-dimensional vector space V creates a coordinate map \funcdef{\coordmap{B}}{V}{\R^n} or \funcdef{\coordmap{B}}{V}{\C^n} (depending on whether V is a real or complex space), defined by \coordmap{B}(\uvec{v}) = \matrixOf{\uvec{v}}{B} for each \uvec{v} in V\text{.} In Example 43.4.7, we found that a coordinate map always has trivial kernel and full image. Therefore, every choice of basis for a finite-dimensional space creates an isomorphism to \R^n (real case) or \C^n (complex case). In particular, once the basis \basisfont{B} for the domain space V is chosen, the coordinate map is the unique isomorphism that sends \basisfont{B} to the standard basis of \R^n or \C^n\text{,} as appropriate.
Remark 44.3.5.
You probably used coordinate maps relative to familiar bases to create the isomorphisms requested of you in Discovery 44.8.
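A coordinate map can be sketched concretely for \R^2 relative to the hypothetical basis \basisfont{B} = \{(1,1), (1,-1)\}\text{:} solving \uvec{v} = c_1 (1,1) + c_2 (1,-1) for the coordinates gives the map below, which sends \basisfont{B} itself to the standard basis as described above.

```python
# A coordinate map for R^2 relative to the hypothetical basis
# B = {(1, 1), (1, -1)}: solve v = c1*(1,1) + c2*(1,-1) for (c1, c2).
def coord_map_B(v):
    x, y = v
    return ((x + y) / 2, (x - y) / 2)

# The map sends B itself to the standard basis:
print(coord_map_B((1, 1)))    # (1.0, 0.0)
print(coord_map_B((1, -1)))   # (0.0, 1.0)
# Trivial kernel: only the zero vector has all-zero coordinates.
print(coord_map_B((0, 0)))    # (0.0, 0.0)
```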