\(\require{cancel}
\newcommand{\bigcdot}{\mathbin{\large\boldsymbol{\cdot}}}
\newcommand{\basisfont}[1]{\mathcal{#1}}
\newcommand{\iddots}{{\mkern3mu\raise1mu{.}\mkern3mu\raise6mu{.}\mkern3mu \raise12mu{.}}}
\DeclareMathOperator{\RREF}{RREF}
\DeclareMathOperator{\adj}{adj}
\DeclareMathOperator{\proj}{proj}
\DeclareMathOperator{\matrixring}{M}
\DeclareMathOperator{\poly}{P}
\DeclareMathOperator{\Span}{Span}
\DeclareMathOperator{\rank}{rank}
\DeclareMathOperator{\nullity}{nullity}
\DeclareMathOperator{\nullsp}{null}
\DeclareMathOperator{\uppermatring}{U}
\DeclareMathOperator{\trace}{trace}
\DeclareMathOperator{\dist}{dist}
\DeclareMathOperator{\negop}{neg}
\DeclareMathOperator{\Hom}{Hom}
\DeclareMathOperator{\im}{im}
\newcommand{\R}{\mathbb{R}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\ci}{\mathrm{i}}
\newcommand{\cconj}[1]{\bar{#1}}
\newcommand{\lcconj}[1]{\overline{#1}}
\newcommand{\cmodulus}[1]{\left\lvert #1 \right\rvert}
\newcommand{\bbrac}[1]{\bigl(#1\bigr)}
\newcommand{\Bbrac}[1]{\Bigl(#1\Bigr)}
\newcommand{\irst}[1][1]{{#1}^{\mathrm{st}}}
\newcommand{\ond}[1][2]{{#1}^{\mathrm{nd}}}
\newcommand{\ird}[1][3]{{#1}^{\mathrm{rd}}}
\newcommand{\nth}[1][n]{{#1}^{\mathrm{th}}}
\newcommand{\leftrightlinesubstitute}{\scriptstyle \overline{\phantom{xxx}}}
\newcommand{\inv}[2][1]{{#2}^{-{#1}}}
\newcommand{\abs}[1]{\left\lvert #1 \right\rvert}
\newcommand{\degree}[1]{{#1}^\circ}
\newcommand{\blank}{-}
\newenvironment{sysofeqns}[1]
{\left\{\begin{array}{#1}}
{\end{array}\right.}
\newcommand{\iso}{\simeq}
\newcommand{\absegment}[1]{\overline{#1}}
\newcommand{\abray}[1]{\overrightarrow{#1}}
\newcommand{\abctriangle}[1]{\triangle #1}
\newcommand{\abcdquad}[1]{\square\, #1}
\newenvironment{abmatrix}[1]
{\left[\begin{array}{#1}}
{\end{array}\right]}
\newenvironment{avmatrix}[1]
{\left\lvert\begin{array}{#1}}
{\end{array}\right\rvert}
\newcommand{\mtrxvbar}{\mathord{|}}
\newcommand{\utrans}[1]{{#1}^{\mathrm{T}}}
\newcommand{\rowredarrow}{\xrightarrow[\text{reduce}]{\text{row}}}
\newcommand{\bidentmattwo}{\begin{bmatrix} 1 \amp 0 \\ 0 \amp 1 \end{bmatrix}}
\newcommand{\bidentmatthree}{\begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1 \end{bmatrix}}
\newcommand{\bidentmatfour}{\begin{bmatrix} 1 \amp 0 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \amp 0
\\ 0 \amp 0 \amp 0 \amp 1 \end{bmatrix}}
\newcommand{\uvec}[1]{\mathbf{#1}}
\newcommand{\zerovec}{\uvec{0}}
\newcommand{\bvec}[2]{#1\,\uvec{#2}}
\newcommand{\ivec}[1]{\bvec{#1}{i}}
\newcommand{\jvec}[1]{\bvec{#1}{j}}
\newcommand{\kvec}[1]{\bvec{#1}{k}}
\newcommand{\injkvec}[3]{\ivec{#1} - \jvec{#2} + \kvec{#3}}
\newcommand{\norm}[1]{\left\lVert #1 \right\rVert}
\newcommand{\unorm}[1]{\norm{\uvec{#1}}}
\newcommand{\dotprod}[2]{#1 \bigcdot #2}
\newcommand{\udotprod}[2]{\dotprod{\uvec{#1}}{\uvec{#2}}}
\newcommand{\crossprod}[2]{#1 \times #2}
\newcommand{\ucrossprod}[2]{\crossprod{\uvec{#1}}{\uvec{#2}}}
\newcommand{\uproj}[2]{\proj_{\uvec{#2}} \uvec{#1}}
\newcommand{\adjoint}[1]{{#1}^\ast}
\newcommand{\matrixOfplain}[2]{{\left[#1\right]}_{#2}}
\newcommand{\rmatrixOfplain}[2]{{\left(#1\right)}_{#2}}
\newcommand{\rmatrixOf}[2]{\rmatrixOfplain{#1}{\basisfont{#2}}}
\newcommand{\matrixOf}[2]{\matrixOfplain{#1}{\basisfont{#2}}}
\newcommand{\invmatrixOfplain}[2]{\inv{\left[#1\right]}_{#2}}
\newcommand{\invrmatrixOfplain}[2]{\inv{\left(#1\right)}_{#2}}
\newcommand{\invmatrixOf}[2]{\invmatrixOfplain{#1}{\basisfont{#2}}}
\newcommand{\invrmatrixOf}[2]{\invrmatrixOfplain{#1}{\basisfont{#2}}}
\newcommand{\stdmatrixOf}[1]{\left[#1\right]}
\newcommand{\ucobmtrx}[2]{P_{\basisfont{#1} \to \basisfont{#2}}}
\newcommand{\uinvcobmtrx}[2]{\inv{P}_{\basisfont{#1} \to \basisfont{#2}}}
\newcommand{\uadjcobmtrx}[2]{\adjoint{P}_{\basisfont{#1} \to \basisfont{#2}}}
\newcommand{\coordmapplain}[1]{C_{#1}}
\newcommand{\coordmap}[1]{\coordmapplain{\basisfont{#1}}}
\newcommand{\invcoordmapplain}[1]{\inv{C}_{#1}}
\newcommand{\invcoordmap}[1]{\invcoordmapplain{\basisfont{#1}}}
\newcommand{\similar}{\sim}
\newcommand{\inprod}[2]{\left\langle\, #1,\, #2 \,\right\rangle}
\newcommand{\uvecinprod}[2]{\inprod{\uvec{#1}}{\uvec{#2}}}
\newcommand{\orthogcmp}[1]{{#1}^{\perp}}
\newcommand{\vecdual}[1]{{#1}^\ast}
\newcommand{\vecddual}[1]{{#1}^{\ast\ast}}
\newcommand{\change}[1]{\Delta #1}
\newcommand{\dd}[2]{\frac{d{#1}}{d#2}}
\newcommand{\ddx}[1][x]{\dd{}{#1}}
\newcommand{\ddt}[1][t]{\dd{}{#1}}
\newcommand{\dydx}{\dd{y}{x}}
\newcommand{\dxdt}{\dd{x}{t}}
\newcommand{\dydt}{\dd{y}{t}}
\newcommand{\intspace}{\;}
\newcommand{\integral}[4]{\int^{#2}_{#1} #3 \intspace d{#4}}
\newcommand{\funcdef}[3]{#1\colon #2\to #3}
\newcommand{\lt}{<}
\newcommand{\gt}{>}
\newcommand{\amp}{&}
\definecolor{fillinmathshade}{gray}{0.9}
\newcommand{\fillinmath}[1]{\mathchoice{\colorbox{fillinmathshade}{$\displaystyle \phantom{\,#1\,}$}}{\colorbox{fillinmathshade}{$\textstyle \phantom{\,#1\,}$}}{\colorbox{fillinmathshade}{$\scriptstyle \phantom{\,#1\,}$}}{\colorbox{fillinmathshade}{$\scriptscriptstyle\phantom{\,#1\,}$}}}
\)
Discovery guide 42.1
Discovery 42.1.
An \(m \times n\) real matrix \(A\) creates a function \(\funcdef{T_A}{\R^n}{\R^m}\) by matrix multiplication:
\begin{equation*}
T_A(\uvec{x}) = A \uvec{x} \text{.}
\end{equation*}
We will call such a function a matrix transformation \(\R^n \to \R^m\text{.}\)
Aside: A look back.
In the case
\(m = n\text{,}\) we have already been informally considering transition matrices as geometric transformations of
\(\R^n\text{.}\)
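As a concrete sketch of the definition (assuming NumPy; the matrix and input vector below are made-up examples, not from the text), a matrix transformation is just a function that wraps matrix-vector multiplication:

```python
import numpy as np

def matrix_transformation(A):
    """Return the function T_A : R^n -> R^m defined by T_A(x) = A x."""
    def T_A(x):
        return A @ x
    return T_A

# A 2x3 matrix gives a transformation R^3 -> R^2.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])
T = matrix_transformation(A)
print(T(np.array([1.0, 1.0, 1.0])))  # a vector in R^3 maps to one in R^2: [3. 0.]
```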
(a)
Write out the linear input-output component formulas for the function \(T_A\) associated to the matrix
\begin{equation*}
A = \begin{abmatrix}{rrr} 1 \amp 2 \amp -3 \\ 2 \amp -1 \amp 5 \end{abmatrix} \text{,}
\end{equation*}
so that \(\uvec{w} = T_A(\uvec{x})\text{:}\)
\begin{equation*}
\begin{sysofeqns}{rcrcrcr}
w_1 \amp = \amp \fillinmath{XX} x_1 \amp + \amp \fillinmath{XX} x_2 \amp + \amp \fillinmath{XX} x_3 \text{,} \\
w_2 \amp = \amp \fillinmath{XX} x_1 \amp + \amp \fillinmath{XX} x_2 \amp + \amp \fillinmath{XX} x_3 \text{.}
\end{sysofeqns}
\end{equation*}
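One way to check your formulas (a sketch assuming NumPy; the test input is an arbitrary choice): each \(w_i\) should equal the dot product of row \(i\) of \(A\) with \(\uvec{x}\).

```python
import numpy as np

A = np.array([[1, 2, -3],
              [2, -1, 5]])
x = np.array([1, 1, 1])  # any test input in R^3

w = A @ x
# Row by row, w_i should match a_{i1} x_1 + a_{i2} x_2 + a_{i3} x_3.
w1 = 1*x[0] + 2*x[1] + (-3)*x[2]
w2 = 2*x[0] + (-1)*x[1] + 5*x[2]
print(w, (w1, w2))  # [0 6] (0, 6)
```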
(b)
(c)
Suppose you know that matrix transformation \(\funcdef{T_C}{\R^3}{\R^3}\) satisfies
\begin{align*}
T_C(\uvec{e}_1) \amp = \begin{abmatrix}{r} 2 \\ -3 \\ 5 \end{abmatrix} \text{,} \amp
T_C(\uvec{e}_2) \amp = \begin{abmatrix}{r} -7 \\ 11 \\ 13 \end{abmatrix} \text{,} \amp
T_C(\uvec{e}_3) \amp = \begin{abmatrix}{r} 17 \\ 19 \\ -23 \end{abmatrix} \text{.}
\end{align*}
Do you have enough information to determine matrix \(C\text{?}\)
A function between two “spaces” of the same kind is often referred to as a morphism. Just as we used \(\R^n\) as the model for the ten vector space axioms, and used the dot products on \(\R^n\) and \(\C^n\) as the models for the four inner product space axioms, we will use matrix transformations \(\R^n \to \R^m\) as the model for the desired properties of vector space morphisms.
Discovery 42.2.
Suppose \(\funcdef{T_A}{\R^n}{\R^m}\) is the matrix transformation associated to \(m \times n\) matrix \(A\text{,}\) so that
\begin{equation*}
T_A(\uvec{x}) = A \uvec{x} \text{.}
\end{equation*}
(a)
How does
\(T_A\) interact with the vector operations of the
domain space \(\R^n\) and the
codomain space \(\R^m\text{?}\)
That is, how does
\(T_A\) interact with
(i)
(ii)
(iii)
(iv)
(v)
(b)
Which of the patterns from Task a can be deduced from the others? Based on this, which of these patterns should be designated as the basic axioms of vector space morphisms?
A function \(\funcdef{T}{V}{W}\) between abstract vector spaces \(V,W\) that satisfies the axioms we have identified in Task b of Discovery 42.2 will be called a linear transformation (or a vector space homomorphism).
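As a numerical sanity check (a sketch assuming NumPy; the matrix, vectors, and scalar are arbitrary choices, not from the text), a matrix transformation satisfies both of the properties usually taken as the linearity axioms:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
u, v = rng.standard_normal(3), rng.standard_normal(3)
k = 2.5

T = lambda x: A @ x  # the matrix transformation T_A

# Additivity: T(u + v) = T(u) + T(v)
print(np.allclose(T(u + v), T(u) + T(v)))  # True
# Homogeneity: T(k u) = k T(u)
print(np.allclose(T(k * u), k * T(u)))     # True
```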
Discovery 42.3.
In each of the following, determine whether the provided vector space function is a linear transformation.
(a)
Left-multiplication by
\(m \times n\) matrix
\(A\text{:}\)
\(\funcdef{L_A}{\matrixring_{n \times \ell}(\R)}{\matrixring_{m \times \ell}(\R)}\) by
\(L_A(X) = A X\text{.}\)
(b)
Right-multiplication by
\(m \times n\) matrix
\(A\text{:}\)
\(\funcdef{R_A}{\matrixring_{\ell \times m}(\R)}{\matrixring_{\ell \times n}(\R)}\) by
\(R_A(X) = X A\text{.}\)
(c)
Translation by a fixed nonzero vector
\(\uvec{a}\) in vector space
\(V\text{:}\)
\(\funcdef{t_{\uvec{a}}}{V}{V}\) by
\(t_{\uvec{a}}(\uvec{v}) = \uvec{v} + \uvec{a} \text{.}\)
(d)
Multiplication by a fixed scalar
\(a\) in vector space
\(V\text{:}\)
\(\funcdef{m_a}{V}{V}\) by
\(m_a(\uvec{v}) = a \uvec{v} \text{.}\)
(e)
Evaluation of polynomials at fixed \(x\)-value \(x = a\text{:}\)
\(\funcdef{E_a}{\poly(\R)}{\R^1}\) by
\(E_a(p) = p(a) \text{.}\)
(f)
Determinant of square matrices:
\(\funcdef{\det}{\matrixring_n(\R)}{\R^1}\text{.}\)
(g)
Differentiation: let
\(F(a,b)\) represent the space of functions defined on the interval
\(a \lt x \lt b\text{,}\) and let
\(D(a,b)\) represent the subspace of
\(F(a,b)\) consisting of
differentiable functions.
Consider
\(\funcdef{\ddx}{D(a,b)}{F(a,b)}\) by
\(\ddx(f) = f'\text{.}\)
(h)
Integration: let
\(C[a,b]\) represent the space of
continuous functions defined on the interval
\(a \le x \le b\text{.}\)
Consider
\(\funcdef{I_{a,b}}{C[a,b]}{\R^1}\) by
\(I_{a,b}(f) = \integral{a}{b}{f(x)}{x}\text{.}\)
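To experiment with a couple of the parts above numerically (a sketch assuming NumPy; \(\R^3\) stands in for the abstract space \(V\), and the vectors and scalar are arbitrary choices), one can test the additivity property directly:

```python
import numpy as np

u, v = np.array([1., 2., 3.]), np.array([4., 5., 6.])
a_vec = np.array([1., 1., 1.])  # fixed nonzero translation vector
a = 3.0                         # fixed scalar

t = lambda x: x + a_vec  # translation, as in Task c
m = lambda x: a * x      # scalar multiplication, as in Task d

# Translation: t(u+v) picks up a_vec once, but t(u)+t(v) picks it up twice.
print(np.allclose(t(u + v), t(u) + t(v)))  # False
# Scalar multiplication distributes over vector addition.
print(np.allclose(m(u + v), m(u) + m(v)))  # True
```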
Discovery 42.4.
Suppose \(V\) is a finite-dimensional vector space with
\begin{equation*}
V = \Span \{ \uvec{v}_1, \uvec{v}_2, \uvec{v}_3 \} \text{,}
\end{equation*}
and \(\funcdef{T}{V}{\R^2}\) is a linear transformation such that
\begin{align*}
T(\uvec{v}_1) \amp = (1,2) \text{,} \amp
T(\uvec{v}_2) \amp = (3,-5) \text{,} \amp
T(\uvec{v}_3) \amp = (0,4)\text{.}
\end{align*}
(a)
Based on this information, can you determine
\(T(3 \uvec{v}_1 - \uvec{v}_2 + 5 \uvec{v}_3)\text{?}\)
(b)
Would you be able to answer
Task a for other linear combinations of
\(\uvec{v}_1,\uvec{v}_2,\uvec{v}_3\text{?}\)
(c) Describe the pattern.
In order to be able to compute
every output of a linear transformation, the only output information required is
.
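To check an answer to Task a numerically (a sketch assuming NumPy; the only data used are the given image vectors in \(\R^2\)): linearity lets the output of any linear combination be assembled from the given outputs.

```python
import numpy as np

# The given outputs of T on the spanning vectors.
Tv1, Tv2, Tv3 = np.array([1, 2]), np.array([3, -5]), np.array([0, 4])

# By linearity, T(3 v1 - v2 + 5 v3) = 3 T(v1) - T(v2) + 5 T(v3).
result = 3*Tv1 - Tv2 + 5*Tv3
print(result)  # [ 0 31]
```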
Discovery 42.5.
Suppose \(\funcdef{T}{\R^3}{\R^3}\) is a linear transformation such that
\begin{align*}
T(\uvec{e}_1) \amp = \begin{abmatrix}{r} 2 \\ -3 \\ 5 \end{abmatrix} \text{,} \amp
T(\uvec{e}_2) \amp = \begin{abmatrix}{r} -7 \\ 11 \\ 13 \end{abmatrix} \text{,} \amp
T(\uvec{e}_3) \amp = \begin{abmatrix}{r} 17 \\ 19 \\ -23 \end{abmatrix} \text{.}
\end{align*}
(a)
Do you know any other linear transformation
\(\R^3 \to \R^3\) that has the same outputs for the standard basis vectors as inputs?
(b)
(c) Describe the pattern.
Every linear transformation
\(\R^n \to \R^m\) is effectively
.
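A sketch of the pattern (assuming NumPy; the column data are the given outputs above): stacking the three given outputs as the columns of a matrix yields a matrix transformation whose values on the standard basis vectors are exactly those outputs, since \(C \uvec{e}_j\) extracts column \(j\) of \(C\).

```python
import numpy as np

# Stack the three given outputs as the columns of a matrix.
C = np.column_stack([[2, -3, 5], [-7, 11, 13], [17, 19, -23]])

e1, e2, e3 = np.eye(3)
# The matrix transformation x -> C x reproduces the given outputs.
print(C @ e1)  # [ 2. -3.  5.]
print(C @ e2)  # [-7. 11. 13.]
```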
Discovery 42.6.
(a)
Suppose
\(\funcdef{T}{\R^n}{\R^1}\) is a linear transformation. What size of matrix would represent this linear transformation?
What word would we normally use to describe a matrix of those dimensions, instead of “matrix”?
(b) Describe the pattern.
Every linear transformation
\(\R^n \to \R^1\) corresponds to a
.
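The idea behind this discovery can be sketched numerically (assuming NumPy; the entries are arbitrary): a \(1 \times n\) matrix is a single row, and applying it to a vector computes the same number as a dot product.

```python
import numpy as np

a = np.array([[2.0, -1.0, 3.0]])  # a 1x3 matrix, i.e. a single row
x = np.array([1.0, 2.0, 3.0])

# The matrix transformation R^3 -> R^1 ...
print(a @ x)            # [9.]
# ... computes the same number as a dot product with the row.
print(np.dot(a[0], x))  # 9.0
```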
Discovery 42.7.
For vector spaces
\(V,W\text{,}\) let
\(L(V,W)\) represent the collection of all linear transformations
\(V \to W\text{.}\)
(a)
How could transformations in
\(L(V,W)\) be added?
That is, if \(\funcdef{T_1,T_2}{V}{W}\) are objects in \(L(V,W)\text{,}\) what transformation should \(T_1 + T_2\) represent?
\begin{equation*}
(T_1 + T_2)(\uvec{v}) = \fillinmath{XXXXXXXXXXXXXXXXXXXX}
\end{equation*}
Is the sum transformation \(T_1 + T_2\) still in \(L(V,W)\text{?}\) (i.e. Is it still linear?)
(b)
How could transformations in
\(L(V,W)\) be scalar multiplied?
That is, if \(\funcdef{T}{V}{W}\) is an object in \(L(V,W)\text{,}\) what transformation should \(k T\) represent for scalar \(k\text{?}\)
\begin{equation*}
(k T)(\uvec{v}) = \fillinmath{XXXXXXXXXXXXXXXXXXXX}
\end{equation*}
Is the scaled transformation \(k T\) still in \(L(V,W)\text{?}\) (i.e. Is it still linear?)
(c)
Is
\(L(V,W)\) a vector space under the operations of addition and scalar multiplication of linear transformations?
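A numerical sketch of the operations in this discovery (assuming NumPy; \(T_1, T_2\) are arbitrary matrix transformations \(\R^2 \to \R^2\) chosen for illustration): the pointwise sum and scalar multiple of matrix transformations correspond to the matrix sum and scalar multiple.

```python
import numpy as np

A1 = np.array([[1., 0.], [0., 2.]])
A2 = np.array([[0., 1.], [1., 0.]])
T1 = lambda v: A1 @ v
T2 = lambda v: A2 @ v

add = lambda v: T1(v) + T2(v)            # (T1 + T2)(v) = T1(v) + T2(v)
scale = lambda k: (lambda v: k * T1(v))  # (k T1)(v) = k (T1(v))

v = np.array([3., 4.])
# The sum acts like the matrix transformation of A1 + A2 ...
print(np.allclose(add(v), (A1 + A2) @ v))          # True
# ... and the scalar multiple like that of k * A1.
print(np.allclose(scale(5.0)(v), (5.0 * A1) @ v))  # True
```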