\(\require{cancel}
\newcommand{\bigcdot}{\mathbin{\large\boldsymbol{\cdot}}}
\newcommand{\basisfont}[1]{\mathcal{#1}}
\newcommand{\iddots}{{\mkern3mu\raise1mu{.}\mkern3mu\raise6mu{.}\mkern3mu \raise12mu{.}}}
\DeclareMathOperator{\RREF}{RREF}
\DeclareMathOperator{\adj}{adj}
\DeclareMathOperator{\proj}{proj}
\DeclareMathOperator{\matrixring}{M}
\DeclareMathOperator{\poly}{P}
\DeclareMathOperator{\Span}{Span}
\DeclareMathOperator{\rank}{rank}
\DeclareMathOperator{\nullity}{nullity}
\DeclareMathOperator{\nullsp}{null}
\DeclareMathOperator{\uppermatring}{U}
\DeclareMathOperator{\trace}{trace}
\DeclareMathOperator{\dist}{dist}
\DeclareMathOperator{\negop}{neg}
\DeclareMathOperator{\Hom}{Hom}
\DeclareMathOperator{\im}{im}
\newcommand{\R}{\mathbb{R}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\ci}{\mathrm{i}}
\newcommand{\cconj}[1]{\bar{#1}}
\newcommand{\lcconj}[1]{\overline{#1}}
\newcommand{\cmodulus}[1]{\left\lvert #1 \right\rvert}
\newcommand{\bbrac}[1]{\bigl(#1\bigr)}
\newcommand{\Bbrac}[1]{\Bigl(#1\Bigr)}
\newcommand{\irst}[1][1]{{#1}^{\mathrm{st}}}
\newcommand{\ond}[1][2]{{#1}^{\mathrm{nd}}}
\newcommand{\ird}[1][3]{{#1}^{\mathrm{rd}}}
\newcommand{\nth}[1][n]{{#1}^{\mathrm{th}}}
\newcommand{\leftrightlinesubstitute}{\scriptstyle \overline{\phantom{xxx}}}
\newcommand{\inv}[2][1]{{#2}^{-{#1}}}
\newcommand{\abs}[1]{\left\lvert #1 \right\rvert}
\newcommand{\degree}[1]{{#1}^\circ}
\newcommand{\blank}{-}
\newenvironment{sysofeqns}[1]
{\left\{\begin{array}{#1}}
{\end{array}\right.}
\newcommand{\iso}{\simeq}
\newcommand{\absegment}[1]{\overline{#1}}
\newcommand{\abray}[1]{\overrightarrow{#1}}
\newcommand{\abctriangle}[1]{\triangle #1}
\newcommand{\abcdquad}[1]{\square\, #1}
\newenvironment{abmatrix}[1]
{\left[\begin{array}{#1}}
{\end{array}\right]}
\newenvironment{avmatrix}[1]
{\left\lvert\begin{array}{#1}}
{\end{array}\right\rvert}
\newcommand{\mtrxvbar}{\mathord{|}}
\newcommand{\utrans}[1]{{#1}^{\mathrm{T}}}
\newcommand{\rowredarrow}{\xrightarrow[\text{reduce}]{\text{row}}}
\newcommand{\bidentmattwo}{\begin{bmatrix} 1 \amp 0 \\ 0 \amp 1 \end{bmatrix}}
\newcommand{\bidentmatthree}{\begin{bmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1 \end{bmatrix}}
\newcommand{\bidentmatfour}{\begin{bmatrix} 1 \amp 0 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \amp 0
\\ 0 \amp 0 \amp 0 \amp 1 \end{bmatrix}}
\newcommand{\uvec}[1]{\mathbf{#1}}
\newcommand{\zerovec}{\uvec{0}}
\newcommand{\bvec}[2]{#1\,\uvec{#2}}
\newcommand{\ivec}[1]{\bvec{#1}{i}}
\newcommand{\jvec}[1]{\bvec{#1}{j}}
\newcommand{\kvec}[1]{\bvec{#1}{k}}
\newcommand{\injkvec}[3]{\ivec{#1} - \jvec{#2} + \kvec{#3}}
\newcommand{\norm}[1]{\left\lVert #1 \right\rVert}
\newcommand{\unorm}[1]{\norm{\uvec{#1}}}
\newcommand{\dotprod}[2]{#1 \bigcdot #2}
\newcommand{\udotprod}[2]{\dotprod{\uvec{#1}}{\uvec{#2}}}
\newcommand{\crossprod}[2]{#1 \times #2}
\newcommand{\ucrossprod}[2]{\crossprod{\uvec{#1}}{\uvec{#2}}}
\newcommand{\uproj}[2]{\proj_{\uvec{#2}} \uvec{#1}}
\newcommand{\adjoint}[1]{{#1}^\ast}
\newcommand{\matrixOfplain}[2]{{\left[#1\right]}_{#2}}
\newcommand{\rmatrixOfplain}[2]{{\left(#1\right)}_{#2}}
\newcommand{\rmatrixOf}[2]{\rmatrixOfplain{#1}{\basisfont{#2}}}
\newcommand{\matrixOf}[2]{\matrixOfplain{#1}{\basisfont{#2}}}
\newcommand{\invmatrixOfplain}[2]{\inv{\left[#1\right]}_{#2}}
\newcommand{\invrmatrixOfplain}[2]{\inv{\left(#1\right)}_{#2}}
\newcommand{\invmatrixOf}[2]{\invmatrixOfplain{#1}{\basisfont{#2}}}
\newcommand{\invrmatrixOf}[2]{\invrmatrixOfplain{#1}{\basisfont{#2}}}
\newcommand{\stdmatrixOf}[1]{\left[#1\right]}
\newcommand{\ucobmtrx}[2]{P_{\basisfont{#1} \to \basisfont{#2}}}
\newcommand{\uinvcobmtrx}[2]{\inv{P}_{\basisfont{#1} \to \basisfont{#2}}}
\newcommand{\uadjcobmtrx}[2]{\adjoint{P}_{\basisfont{#1} \to \basisfont{#2}}}
\newcommand{\coordmapplain}[1]{C_{#1}}
\newcommand{\coordmap}[1]{\coordmapplain{\basisfont{#1}}}
\newcommand{\invcoordmapplain}[1]{\inv{C}_{#1}}
\newcommand{\invcoordmap}[1]{\invcoordmapplain{\basisfont{#1}}}
\newcommand{\similar}{\sim}
\newcommand{\inprod}[2]{\left\langle\, #1,\, #2 \,\right\rangle}
\newcommand{\uvecinprod}[2]{\inprod{\uvec{#1}}{\uvec{#2}}}
\newcommand{\orthogcmp}[1]{{#1}^{\perp}}
\newcommand{\vecdual}[1]{{#1}^\ast}
\newcommand{\vecddual}[1]{{#1}^{\ast\ast}}
\newcommand{\change}[1]{\Delta #1}
\newcommand{\dd}[2]{\frac{d{#1}}{d#2}}
\newcommand{\ddx}[1][x]{\dd{}{#1}}
\newcommand{\ddt}[1][t]{\dd{}{#1}}
\newcommand{\dydx}{\dd{y}{x}}
\newcommand{\dxdt}{\dd{x}{t}}
\newcommand{\dydt}{\dd{y}{t}}
\newcommand{\intspace}{\;}
\newcommand{\integral}[4]{\int^{#2}_{#1} #3 \intspace d{#4}}
\newcommand{\funcdef}[3]{#1\colon #2\to #3}
\newcommand{\lt}{<}
\newcommand{\gt}{>}
\newcommand{\amp}{&}
\definecolor{fillinmathshade}{gray}{0.9}
\newcommand{\fillinmath}[1]{\mathchoice{\colorbox{fillinmathshade}{$\displaystyle \phantom{\,#1\,}$}}{\colorbox{fillinmathshade}{$\textstyle \phantom{\,#1\,}$}}{\colorbox{fillinmathshade}{$\scriptstyle \phantom{\,#1\,}$}}{\colorbox{fillinmathshade}{$\scriptscriptstyle\phantom{\,#1\,}$}}}
\)
Discovery guide 18.1
Suppose
\(V\) is a vector space and
\(S\) is a finite spanning set for
\(V\) (i.e.
\(V = \Span S\) ). In the previous chapter, we saw that if
\(S\) is linearly dependent, then (at least) one vector can be removed from
\(S\text{,}\) and the resulting smaller set will still be a spanning set (
Lemma 17.5.4 ). You can imagine repeating this process until finally you are left with a spanning set that is linearly independent. (See
Proposition 17.5.5 .)
This leads to the following definition.
basis for a vector space: a linearly independent spanning set for the space.
Discovery 18.1 .
In each of the following, determine whether
\(S\) is a basis for
\(V\text{.}\) If it is not a basis, make sure you know which property
\(S\) violates, independence or spanning.
Aside: Note.
A specific example could violate both, but we only need to know it violates one of the two properties to know it’s not a basis.
(a)
\(V = \R^3\text{,}\) \(S = \{(1,0,0),(1,1,0),(1,1,1),(0,0,2)\}\text{.}\)
(b)
\(V = \R^3\text{,}\) \(S = \{(1,0,0),(1,1,0),(0,0,2)\}\text{.}\)
(c)
\(V = \matrixring_2(\R)\text{,}\) \(S = \left\{\;
\left[\begin{smallmatrix} 2 \amp 0 \\ 1 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 1 \amp 0 \\ 0 \amp -1 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 0 \amp 0 \\ 1 \amp 1 \end{smallmatrix}\right]
\;\right\}
\text{.}\)
(d)
\(V =\) the space of
\(2\times 2\) upper triangular matrices,
\(S = \left\{\;
\left[\begin{smallmatrix} 1 \amp 0 \\ 0 \amp 1 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 1 \amp 1 \\ 0 \amp 1 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 1 \amp 2 \\ 0 \amp 1 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 1 \amp 3 \\ 0 \amp 1 \end{smallmatrix}\right]
\;\right\}
\text{.}\)
(e)
\(V =\) the space of
\(3\times 3\) lower triangular matrices,
\(S = \left\{\;
\left[\begin{smallmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 0 \amp 0 \amp 0 \\ 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 0 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \\ 1 \amp 0 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \end{smallmatrix}\right]
\;\right\}
\text{.}\)
(f)
\(V = \poly_3(\R)\text{,}\) the space of all polynomials of degree
\(3\) or less,
\(S = \{1,x,x^2\}\text{.}\)
(g)
\(V = \poly_3(\R)\text{,}\) \(S = \{1,x,x^2,x^3\}\text{.}\)
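Checks like the ones above can be mechanized: once you encode each vector of \(S\) as a tuple of numbers, a set of \(n\) vectors in \(\R^n\) is a basis exactly when the matrix built from those vectors has rank \(n\) (full rank gives independence and spanning at once). Here is a minimal sketch in Python, using exact rational arithmetic; the helper names `rank` and `is_basis_of_Rn` are hypothetical, not part of the text, and the example sets are deliberately different from the ones in Discovery 18.1 so those stay yours to decide.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (given as a list of rows) via exact Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0  # index of the next pivot row
    for c in range(len(M[0]) if M else 0):
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue  # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def is_basis_of_Rn(vectors):
    """n vectors in R^n form a basis iff the matrix they make has full rank n."""
    n = len(vectors[0])
    return len(vectors) == n and rank(vectors) == n

# Three vectors in R^2: too many to be independent, so not a basis.
print(is_basis_of_Rn([[1, 0], [0, 1], [1, 1]]))  # False
# The standard basis of R^2.
print(is_basis_of_Rn([[1, 0], [0, 1]]))  # True
```

The same rank test also settles the two properties separately: the vectors are independent when the rank equals the number of vectors, and they span \(\R^n\) when the rank equals \(n\).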
As discussed in the introduction to this discovery guide above, a spanning set that is not linearly independent contains redundant information in the form of vectors that are not actually needed to form a spanning set. This redundancy manifests itself in other ways, as the next discovery activity will demonstrate.
Discovery 18.2 .
Consider the set
\(S = \{(1,0),(1,1),(1,-1)\}\) of vectors in
\(\R^2\text{.}\) This set spans
\(\R^2\) but is not linearly independent.
(a)
Since
\(S\) spans
\(\R^2\text{,}\) it is possible to express vector
\((3,-3)\) as a linear combination of the vectors in
\(S\text{.}\)
Demonstrate a way to do this:
\begin{equation*}
(3,-3)
\; = \; \fillinmath{XX}(1,0)
\; + \; \fillinmath{XX}(1,1)
\; + \; \fillinmath{XX}(1,-1)\text{.}
\end{equation*}
(b)
Here is the redundant part. Demonstrate a different way to express \((3,-3)\) as a linear combination of the vectors in \(S\text{:}\)
\begin{equation*}
(3,-3)
\; = \; \fillinmath{XX}(1,0)
\; + \; \fillinmath{XX}(1,1)
\; + \; \fillinmath{XX}(1,-1)\text{.}
\end{equation*}
(c)
How many different ways do you think there are to do this?
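To see the redundancy concretely without giving away the blanks above, here is a sketch for a different target vector, \((2,0)\text{,}\) with the same spanning set: solving \(a + b + c = 2\) and \(b - c = 0\) gives the one-parameter family \(b = c = t\text{,}\) \(a = 2 - 2t\text{.}\) The helper `combo` is hypothetical, introduced just for this check.

```python
from fractions import Fraction

# The dependent spanning set {(1,0), (1,1), (1,-1)} of R^2.
v1, v2, v3 = (1, 0), (1, 1), (1, -1)

def combo(a, b, c):
    """The linear combination a*v1 + b*v2 + c*v3, computed entrywise."""
    return tuple(a * x + b * y + c * z for x, y, z in zip(v1, v2, v3))

# Every choice of the parameter t yields a different expression for (2, 0).
for t in [0, 1, Fraction(1, 2), -3]:
    a, b, c = 2 - 2 * t, t, t
    assert combo(a, b, c) == (2, 0)
print("every choice of t gives (2, 0)")
```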
The next discovery activity will demonstrate that the redundancies of
Discovery 18.2 cannot happen for a
basis .
Discovery 18.3 .
Suppose
\(V\) is a vector space,
\(S = \{\uvec{v}_1,\uvec{v}_2,\uvec{v}_3\}\) is a basis for
\(V\text{,}\) and
\(\uvec{w}\) is a vector in
\(V\text{.}\)
Since \(S\) is a spanning set, there is a way to express \(\uvec{w}\) as a linear combination of the vectors in \(S\text{:}\)
\begin{equation*}
\uvec{w} = a_1\uvec{v}_1+a_2\uvec{v}_2+a_3\uvec{v}_3\text{.}
\end{equation*}
Suppose there were a different such expression:
\begin{equation*}
\uvec{w} = b_1\uvec{v}_1+b_2\uvec{v}_2+b_3\uvec{v}_3\text{.}
\end{equation*}
Use the vector identity
\begin{equation*}
\uvec{w}-\uvec{w} = \zerovec
\end{equation*}
and the two different expressions for \(\uvec{w}\) above to show that having these two different expressions violates the linear independence of \(S\text{.}\)
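Aside: One way the argument can go.
Subtracting the two expressions,
\begin{equation*}
\zerovec = \uvec{w} - \uvec{w}
= (a_1 - b_1)\uvec{v}_1 + (a_2 - b_2)\uvec{v}_2 + (a_3 - b_3)\uvec{v}_3\text{.}
\end{equation*}
If the two expressions really were different, at least one coefficient \(a_i - b_i\) would be nonzero, exhibiting a nontrivial linear combination of the vectors in \(S\) equal to \(\zerovec\text{,}\) contradicting the linear independence of \(S\text{.}\)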
Discovery 18.3 shows that when we have a basis
\(S = \{\uvec{v}_1,\uvec{v}_2,\dotsc,\uvec{v}_n\}\) for a vector space
\(V\text{,}\) each vector in
\(V\) has
one unique expression as a linear combination of the vectors in
\(S\text{.}\) For
\(\uvec{w} = c_1\uvec{v}_1 + c_2\uvec{v}_2 + \dotsb + c_n\uvec{v}_n\text{,}\) the coefficients
\(c_1,c_2,\dotsc,c_n\) are called the
coordinates of \(\uvec{w}\) relative to \(S\) . Since these coordinates consist of
\(n\) coefficients, we sometimes relate
\(\uvec{w}\) to a vector in
\(\R^n\) by collecting its coordinates into an
\(n\) -tuple:
\begin{equation*}
\rmatrixOfplain{\uvec{w}}{S} = (c_1,c_2,\dotsc,c_n)\text{.}
\end{equation*}
This is called the coordinate vector of \(\uvec{w}\) relative to \(S\) .
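In \(\R^n\text{,}\) finding the coordinates of \(\uvec{w}\) relative to a basis amounts to solving the linear system whose coefficient columns are the basis vectors; a basis guarantees exactly one solution. A minimal sketch, with exact rational arithmetic; the helper name `coords` is hypothetical.

```python
from fractions import Fraction

def coords(basis, w):
    """Coordinates of w relative to an ordered basis of R^n.

    Builds the augmented matrix [ basis columns | w ] and runs exact
    Gauss-Jordan elimination; a basis guarantees a unique solution.
    """
    n = len(w)
    M = [[Fraction(basis[j][i]) for j in range(n)] + [Fraction(w[i])]
         for i in range(n)]
    for c in range(n):
        pivot = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[pivot] = M[pivot], M[c]
        M[c] = [x / M[c][c] for x in M[c]]       # scale pivot row to 1
        for i in range(n):
            if i != c and M[i][c] != 0:          # clear the rest of the column
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return tuple(M[i][n] for i in range(n))

# Hypothetical example: relative to the standard basis, the coordinates
# of a vector are just its entries.
print(tuple(int(x) for x in coords([(1, 0), (0, 1)], (4, -7))))  # (4, -7)
```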
Discovery 18.4 .
In each of the following, determine the coordinate vector of
\(\uvec{w}\) relative to the provided basis
\(S\) for
\(V\text{.}\)
(a)
\(V = \matrixring_2(\R)\text{,}\) \(S = \left\{\;
\left[\begin{smallmatrix} 1 \amp 0 \\ 0 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 0 \amp 1 \\ 0 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 0 \amp 0 \\ 1 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 0 \amp 0 \\ 0 \amp 1 \end{smallmatrix}\right]
\;\right\}
\text{,}\) \(\uvec{w} = \left[\begin{smallmatrix} -1 \amp 5 \\ 3 \amp -2 \end{smallmatrix}\right]\text{.}\)
(b)
\(V = \matrixring_2(\R)\text{,}\) \(S = \left\{\;
\left[\begin{smallmatrix} 1 \amp 0 \\ 0 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 1 \amp 1 \\ 0 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 0 \amp 0 \\ 1 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 0 \amp 0 \\ 1 \amp 1 \end{smallmatrix}\right]
\;\right\}
\text{,}\) \(\uvec{w} = \left[\begin{smallmatrix} -1 \amp 5 \\ 3 \amp -2 \end{smallmatrix}\right]\text{.}\)
(c)
\(V = \poly_3(\R)\text{,}\) \(S = \{1,x,x^2,x^3\}\text{,}\) \(\uvec{w} = 3 + 4x - 5x^3\text{.}\)
(d)
\(V = \R^3\text{,}\) \(S = \{(-1,0,1),(0,2,0),(1,1,0)\}\text{,}\) \(\uvec{w} = (1,1,1)\text{.}\)
(e)
\(V = \R^3\text{,}\) \(S = \{(1,0,0),(0,1,0),(0,0,1)\}\text{,}\) \(\uvec{w} = (-2,3,-5)\text{.}\)
Discovery 18.5 .
In each of the following, determine which vector
\(\uvec{w}\) in
\(V\) has the given coordinate vector
\(\rmatrixOfplain{\uvec{w}}{S}\text{.}\)
(a)
\(V = \matrixring_2(\R)\text{,}\) \(S = \left\{\;
\left[\begin{smallmatrix} 1 \amp 0 \\ 0 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 0 \amp 1 \\ 0 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 0 \amp 0 \\ 1 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 0 \amp 0 \\ 0 \amp 1 \end{smallmatrix}\right]
\;\right\}
\text{,}\) \(\rmatrixOfplain{\uvec{w}}{S} = (3,-5,1,1)\text{.}\)
(b)
\(V = \matrixring_2(\R)\text{,}\) \(S = \left\{\;
\left[\begin{smallmatrix} 1 \amp 0 \\ 0 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 1 \amp 1 \\ 0 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 0 \amp 0 \\ 1 \amp 0 \end{smallmatrix}\right],\;\;
\left[\begin{smallmatrix} 0 \amp 0 \\ 1 \amp 1 \end{smallmatrix}\right]
\;\right\}
\text{,}\) \(\rmatrixOfplain{\uvec{w}}{S} = (3,-5,1,1)\text{.}\)
(c)
\(V = \poly_3(\R)\text{,}\) \(S = \{1,x,x^2,x^3\}\text{,}\) \(\rmatrixOfplain{\uvec{w}}{S} = (-3,1,0,3)\text{.}\)
(d)
\(V = \R^3\text{,}\) \(S = \{(-1,0,1),(0,2,0),(1,1,0)\}\text{,}\) \(\rmatrixOfplain{\uvec{w}}{S} = (1,1,1)\text{.}\)
(e)
\(V = \R^3\text{,}\) \(S = \{(1,0,0),(0,1,0),(0,0,1)\}\text{,}\) \(\rmatrixOfplain{\uvec{w}}{S} = (-2,3,-5)\text{.}\)
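Going the other way requires no system-solving at all: a coordinate vector \((c_1, c_2, \dotsc, c_n)\) is just a recipe for the linear combination \(c_1\uvec{v}_1 + \dotsb + c_n\uvec{v}_n\text{.}\) A sketch for vectors in \(\R^n\text{,}\) using a hypothetical helper and a basis not taken from the activities above:

```python
def from_coords(basis, coeffs):
    """Rebuild the vector c1*v1 + ... + cn*vn from its coordinates."""
    n = len(basis[0])
    return tuple(sum(c * v[i] for c, v in zip(coeffs, basis)) for i in range(n))

# Hypothetical example in R^2: coordinates (2, -1) relative to {(1,1), (1,-1)}
# give 2*(1,1) - 1*(1,-1) = (1, 3).
print(from_coords([(1, 1), (1, -1)], (2, -1)))  # (1, 3)
```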
Discovery 18.6 .
Coordinate vectors let us transfer vector algebra in a space
\(V\) to the familiar space
\(\R^n\text{.}\)
For example, consider the basis
\begin{equation*}
S = \left\{\;
\begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix},\;\;
\begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix},\;\;
\begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix},\;\;
\begin{bmatrix} 0 \amp 0 \\ 1 \amp 1 \end{bmatrix}
\;\right\}\text{.}
\end{equation*}
(a)
In
Task b of
Discovery 18.5 you have already determined the vector
\(\uvec{w}\) in
\(\matrixring_2(\R)\) that has coordinate vector
\(\rmatrixOfplain{\uvec{w}}{S} = (3,-5,1,1)\text{.}\) Now do the same to determine the vector
\(\uvec{v}\) in
\(\matrixring_2(\R)\) that has coordinate vector
\(\rmatrixOfplain{\uvec{v}}{S} = (-1,2,0,3)\text{.}\)
(b) Do some algebra in \(\matrixring_2(\R)\) .
Using your vector
\(\uvec{v}\) from
Task a and
\(\uvec{w}\) from
Task b of
Discovery 18.5 , compute the linear combination
\(2 \uvec{v} + \uvec{w}\text{.}\)
Note: Vectors
\(\uvec{v}\) and
\(\uvec{w}\) “live” in the space
\(\matrixring_2(\R)\text{,}\) so your computation in this task should involve
\(2 \times 2\) matrices, and should also result in a
\(2 \times 2\) matrix.
(c) Do the same algebra in \(\R^4\) .
Compute
\(2 \rmatrixOfplain{\uvec{v}}{S} + \rmatrixOfplain{\uvec{w}}{S}\text{,}\) using the coordinate vectors
\(\rmatrixOfplain{\uvec{v}}{S} \) and
\(\rmatrixOfplain{\uvec{w}}{S}\) provided to you in
Task a .
Note: These coordinate vectors “live” in the space
\(\R^4\text{,}\) so your computation in this task should involve four-dimensional vectors, and should also result in a four-dimensional vector.
(d) Compare your results.
Consider your four-dimensional result vector from
Task c as a coordinate vector for some vector in
\(\matrixring_2(\R)\) relative to
\(S\text{.}\) Similarly to your computations in
Task a of this discovery activity and
Task b of
Discovery 18.5 , determine the matrix in
\(\matrixring_2(\R)\) that has coordinate vector equal to your result vector from
Task c . Then compare with your result matrix from
Task b . Surprised?
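The pattern Discovery 18.6 is driving at is that the coordinate map is linear: doing vector algebra on coordinate vectors in \(\R^n\) and then converting back gives the same answer as doing the algebra in \(V\) directly. A small sketch of this check in a hypothetical case, \(V = \R^2\) with basis \(\{(1,1),(1,-1)\}\) (chosen here for illustration, not taken from the text):

```python
basis = [(1, 1), (1, -1)]

def from_coords(coeffs):
    """Rebuild the vector c1*v1 + c2*v2 from its coordinates."""
    return tuple(sum(c * v[i] for c, v in zip(coeffs, basis)) for i in range(2))

cv, cw = (1, 2), (3, -1)                  # coordinate vectors of v and w
v, w = from_coords(cv), from_coords(cw)   # the vectors themselves

lhs = tuple(2 * a + b for a, b in zip(v, w))                 # 2v + w, computed in V
rhs = from_coords(tuple(2 * a + b for a, b in zip(cv, cw)))  # same, via coordinates
print(lhs == rhs)  # True
```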