Section 16.4 Examples
Subsection 16.4.1 The Subspace Test
First, let’s practise applying the Subspace Test.
Remark 16.4.1.
Every vector space must contain a zero vector (Axiom A 4), so a subspace must as well. But since the vector operations of a subspace are the same as those of the larger space, it turns out that the zero vector of a subspace must always be the same as the zero vector of the larger space (see Proposition 16.5.2). So often the best way to verify the Nonempty clause of the Subspace Test is to check that the subcollection contains the zero vector.
Here are some examples from Discovery guide 16.1.
Example 16.4.2. A plane through the origin in \(\R^3\).
In Discovery 16.2.a, we considered the subcollection of vectors in \(\R^3\) consisting of all vectors from the origin with terminal point in the \(xy\)-plane. Note that any such vector must have a \(0\) as its \(z\)-component.
Let’s apply the Subspace Test.
Nonempty.
We know that the \(xy\)-plane is nonempty; in particular, it contains the zero vector since it contains the origin.
Closed under vector addition.
If vectors \(\uvec{u}_1\) and \(\uvec{u}_2\) are both in the \(xy\)-plane, then their \(z\)-components are both zero. So we can write these vectors as
\begin{align*}
\uvec{u}_1 \amp= (x_1,y_1,0), \amp \uvec{u}_2 \amp= (x_2,y_2,0) \text{.}
\end{align*}
Then,
\begin{equation*}
\uvec{u}_1 + \uvec{u}_2 = (x_1+x_2,y_1+y_2,0) \text{.}
\end{equation*}
Since this sum vector also has zero \(z\)-component, it is again in the \(xy\)-plane, as required.
Closed under scalar multiplication.
If vector \(\uvec{u}\) is in the \(xy\)-plane, then its \(z\)-component is zero, so we can write it as
\begin{equation*}
\uvec{u} = (x,y,0) \text{.}
\end{equation*}
Then for every scalar \(k\text{,}\) we have
\begin{equation*}
k\uvec{u} = (kx,ky,0) \text{.}
\end{equation*}
Since this scaled vector also has zero \(z\)-component, it is again in the \(xy\)-plane, as required.
Conclusion.
Since all parts of the Subspace Test pass, the \(xy\)-plane is a subspace of \(\R^3\text{.}\)
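As a computational aside, the closure arguments above can be checked numerically in Python. A finite random check is only an illustration, of course, and cannot replace the algebraic argument:

```python
# Illustrative check of the two closure clauses for the xy-plane in R^3:
# vectors in the xy-plane are exactly those with z-component zero.
import random

def in_xy_plane(v):
    """A vector (x, y, z) lies in the xy-plane exactly when z = 0."""
    return v[2] == 0

random.seed(0)
for _ in range(100):
    u1 = (random.random(), random.random(), 0.0)
    u2 = (random.random(), random.random(), 0.0)
    k = random.uniform(-10, 10)
    s = tuple(a + b for a, b in zip(u1, u2))   # vector addition
    m = tuple(k * a for a in u1)               # scalar multiplication
    assert in_xy_plane(s) and in_xy_plane(m)   # both closures hold
```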
Example 16.4.3. A subspace of matrices.
In Discovery 16.2.d, we considered the subcollection of \(\matrixring_{12}(\R)\) consisting of all those \(12 \times 12\) matrices that have a \(7\) in the \((1,1)\) entry.
Let’s apply the Subspace Test.
Nonempty.
This collection is clearly not empty, since the \(12 \times 12\) matrix with \(7\) in every entry is in the collection.
Closed under vector addition.
If matrices \(A_1\) and \(A_2\) are both in the subcollection, then they each have a \(7\) in the \((1,1)\) entry. But then \(A_1+A_2\) has \(14\) in the \((1,1)\) entry, not \(7\text{.}\) So the sum vector is not in the subcollection.
Conclusion.
There is no need to check the third clause of the Subspace Test, since the second has already failed: this subcollection is not a subspace of \(\matrixring_{12}(\R)\text{.}\) But it shouldn’t be too difficult to see that scalar multiples of such a matrix will also fail to remain in the subcollection (except when the scalar is \(1\)).
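The failed closure computation can also be made concrete in Python. As an illustration only, we use \(2 \times 2\) matrices standing in for the \(12 \times 12\) case, since only the \((1,1)\) entry matters to the argument:

```python
# Two matrices with 7 in the (1,1) entry; their sum has 14 there,
# so the subcollection is not closed under addition.
def has_seven_topleft(M):
    return M[0][0] == 7

A1 = [[7, 1], [2, 3]]
A2 = [[7, 0], [5, 4]]
S = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(A1, A2)]

assert has_seven_topleft(A1) and has_seven_topleft(A2)
assert S[0][0] == 14            # 7 + 7 = 14, not 7
assert not has_seven_topleft(S) # the sum leaves the subcollection
```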
Example 16.4.4. Restricting degree creates a subspace of polynomials.
In Discovery 16.2.f, we considered the subcollection of \(\poly(\R)\) consisting of those polynomials that have degree \(2\) or less.
Let’s apply the Subspace Test.
Nonempty.
Clearly this subcollection is nonempty, as every constant polynomial has degree \(0\text{,}\) which is less than \(2\text{.}\) In particular, the subcollection contains the zero polynomial \(\zerovec(x)=0\text{,}\) which is the zero vector in this space.
Closed under vector addition.
Suppose \(p_1\) and \(p_2\) are two polynomials in this subcollection. Then the degree of each of these polynomials is \(2\) or less, so we can write
\begin{align*}
p_1(x) \amp= a_1 x^2 + b_1 x + c_1, \amp
p_2(x) \amp= a_2 x^2 + b_2 x + c_2.
\end{align*}
Then,
\begin{align*}
p_1(x) + p_2(x) \amp = (a_1 x^2 + b_1 x + c_1) + (a_2 x^2 + b_2 x + c_2) \\
\amp = (a_1 + a_2) x^2 + (b_1 + b_2) x + (c_1 + c_2) \text{.}
\end{align*}
Since this sum polynomial also has degree \(2\) (or less, since \(a_1\) and \(a_2\) could cancel or could both be zero), it is again in the subcollection, as required.
Closed under scalar multiplication.
Suppose \(p\) is a polynomial in this subcollection. Then the degree of this polynomial is \(2\) or less, so we can write
\begin{equation*}
p(x) = a x^2 + b x + c \text{.}
\end{equation*}
Then for every scalar \(k\text{,}\) we have
\begin{equation*}
k p(x) = k a x^2 + k b x + k c \text{.}
\end{equation*}
Since this scaled polynomial also has degree \(2\) (or less, since either \(k\) or \(a\) could be zero), it is again in the subcollection, as required.
Conclusion.
Since all parts of the Subspace Test pass, the collection of all polynomials of degree \(2\) or less is a subspace of \(\poly(\R)\text{.}\)
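If you like, the closure computations for this example can be mirrored in a few lines of Python, representing a polynomial of degree \(2\) or less by its coefficient triple \((c, b, a)\) for \(c + bx + ax^2\text{.}\) The triple representation is just our bookkeeping device for this illustration:

```python
# Polynomials of degree <= 2 as coefficient triples (c, b, a).
# Addition and scaling act coefficient-wise, so the result is
# always again such a triple, i.e. again of degree <= 2.
def add(p, q):
    return tuple(x + y for x, y in zip(p, q))

def scale(k, p):
    return tuple(k * x for x in p)

p1 = (1.0, -2.0, 3.0)    # 1 - 2x + 3x^2
p2 = (4.0, 0.0, -3.0)    # 4 - 3x^2
s = add(p1, p2)          # leading coefficients cancel here
assert s == (5.0, -2.0, 0.0)   # 5 - 2x: degree dropped to 1
assert scale(2.0, p1) == (2.0, -4.0, 6.0)
```

Note how the sum illustrates the "or less" in the argument above: the \(x^2\) coefficients cancel, dropping the degree.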
Remark 16.4.5.
Similar to this last example, the Subspace Test can be used to verify that \(\poly_n(\R)\text{,}\) the subcollection of \(\poly(\R)\) consisting of all polynomials with degree \(n\) or less, is a subspace for every fixed positive integer \(n\text{.}\)
Subsection 16.4.2 Important subspace examples
Here are a few more important examples of subspaces.
Example 16.4.6. The trivial subspace.
Consider the subcollection in a vector space consisting of just the zero vector. Since we already know that the zero vector space is, indeed, a vector space, there is no need for the Subspace Test. In every vector space, the zero space \(\{\zerovec\}\) is always a subspace.
Example 16.4.7. The full subspace.
Consider the subcollection in a vector space consisting of every vector. (This may not seem like a subcollection, but every vector in this subcollection is in the original vector space.) Since it is obviously true that the collection of all vectors in a vector space forms a vector space, we have that every vector space is a subspace of itself.
Example 16.4.8. The solution space of a homogeneous system.
Suppose \(A\) is a fixed \(m \times n\) matrix. Solutions to the homogeneous system \(A\uvec{x} = \zerovec\) can be considered as (column) vectors in \(\R^n\text{,}\) so the solution set to this system is a subcollection of a vector space. Is it a subspace? Let’s apply the Subspace Test, similarly to Discovery 16.2.i.
Nonempty.
Since a homogeneous system is always consistent, the solution set is nonempty. In particular, the solution set contains the zero vector, since this is the vector corresponding to the trivial solution.
Closed under vector addition.
Suppose \(\uvec{x}_1\) and \(\uvec{x}_2\) are two solutions to this system. Then both
\begin{align*}
A\uvec{x}_1 \amp= \zerovec \amp
\amp\text{and} \amp
A\uvec{x}_2 \amp= \zerovec\text{.}
\end{align*}
To check if the sum vector is also in the solution set, we need to check whether \(\uvec{x} = \uvec{x}_1+\uvec{x}_2\) satisfies the matrix equation \(A\uvec{x}=\zerovec\text{:}\)
\begin{equation*}
A(\uvec{x}_1+\uvec{x}_2)
= A\uvec{x}_1 + A\uvec{x}_2
= \zerovec + \zerovec
= \zerovec\text{.}
\end{equation*}
So the sum vector is indeed in the solution set.
Closed under scalar multiplication.
Suppose \(\uvec{x}_0\) is a solution to this system. Then \(A\uvec{x}_0 = \zerovec\text{.}\) For a scalar \(k\text{,}\) to check whether the scaled vector \(k\uvec{x}_0\) is also in the solution set, we need to check whether \(\uvec{x} = k\uvec{x}_0\) satisfies the matrix equation \(A\uvec{x}=\zerovec\text{:}\)
\begin{equation*}
A(k\uvec{x}_0)
= kA\uvec{x}_0
= k\zerovec
= \zerovec\text{.}
\end{equation*}
So the scaled vector is indeed in the solution set.
Conclusion.
Since all parts of the Subspace Test pass, the solution set of the homogeneous system \(A\uvec{x}=\zerovec\) is a subspace of \(\R^n\text{.}\)
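As a numerical illustration of this argument (assuming the NumPy library), we can take a particular matrix \(A\text{,}\) two particular solutions of \(A\uvec{x}=\zerovec\text{,}\) and verify that sums and scalar multiples remain solutions:

```python
# Sums and scalar multiples of solutions to A x = 0 are again solutions.
import numpy as np

A = np.array([[1.0, 2.0, -1.0],
              [2.0, 4.0, -2.0]])   # rank 1, so nontrivial solutions exist

x1 = np.array([1.0, 0.0, 1.0])    # A x1 = 0
x2 = np.array([-2.0, 1.0, 0.0])   # A x2 = 0
k = 3.5

assert np.allclose(A @ x1, 0) and np.allclose(A @ x2, 0)
assert np.allclose(A @ (x1 + x2), 0)   # closed under addition
assert np.allclose(A @ (k * x1), 0)    # closed under scalar mult.
```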
Remark 16.4.9.
Every subspace is somehow defined by a homogeneous condition or a set of homogeneous conditions. In the solution space example above, this was explicit — the subcollection was directly defined as the solution set of a homogeneous matrix equation \(A\uvec{x} = \zerovec\text{.}\) On the other hand, it’s easy to see that the solution set of a nonhomogeneous system \(A\uvec{x} = \uvec{b}\) would not be a subspace of \(\R^n\text{,}\) since it would not contain the zero vector.
Let’s reconsider some of the examples of Discovery 16.2 from this perspective.
- In Discovery 16.2.a, the \(xy\)-plane is a subspace of \(\R^3\text{,}\) and it corresponds to the homogeneous condition \(z=0\text{.}\) However, in Discovery 16.2.b, the plane parallel to the \(xy\)-plane in \(\R^3\) but shifted one unit upward is not a subspace, and it corresponds to the nonhomogeneous condition \(z=1\text{.}\)
- In Discovery 16.2.c, the collection of \(10 \times 10\) diagonal matrices is a subspace of \(\matrixring_{10}(\R)\text{,}\) and it corresponds to the homogeneous conditions that the off-diagonal entries be zero. However, in Discovery 16.2.d, the collection of those \(12 \times 12\) matrices with a \(7\) in the \((1,1)\) entry is not a subspace of \(\matrixring_{12}(\R)\text{,}\) and this collection corresponds to the nonhomogeneous condition of requiring the top-left entry be \(7\text{.}\)
- In Discovery 16.2.f, the collection \(\poly_2(\R)\) of polynomials of degree \(2\) or less is a subspace of \(\poly(\R)\text{,}\) and it corresponds to the homogeneous conditions of requiring the coefficient on every power \(x^n\text{,}\) \(n\ge 3\text{,}\) be zero. However, in Discovery 16.2.g, the collection of polynomials of degree exactly \(2\) is not a subspace. While this subcollection requires the same homogeneous conditions as those defining \(\poly_2(\R)\text{,}\) it also requires the nonhomogeneous condition that the coefficient on \(x^2\) be nonzero.
Subsection 16.4.3 Determining if a vector is in a span
In Discovery 16.3, we explored the question of determining whether a given vector was in the subspace generated by a specified spanning set. For this to be true, that vector must be a linear combination of vectors in the spanning set.
Example 16.4.10. A span of \(\R^3\)-vectors.
This example corresponds to Discovery 16.3.a. Working in \(\R^3\text{,}\) we would like to determine if \(\uvec{v} = (1,-1,4)\) is in \(\Span S\) for \(S = \bigl\{(1,0,1),(2,1,-1)\bigr\}\text{.}\) Let’s try to express \(\uvec{v}\) as a linear combination of vectors in the spanning set, and see if it works out. Set
\begin{equation*}
(1,-1,4) = s(1,0,1) + t(2,1,-1) \text{,}
\end{equation*}
for unknown scalars \(s,t\text{.}\) Combining the linear combination on the right into a single vector and comparing components on each side, we get (surprise!) a linear system of equations:
\begin{equation*}
\left\{\begin{array}{rcrcr}
1 \amp = \amp s \amp + \amp 2t \text{,} \\
-1 \amp = \amp \amp \amp t \text{,} \\
4 \amp = \amp s \amp - \amp t \text{.}
\end{array}\right.
\end{equation*}
Now, we don’t actually care about the solution to this system, only whether it is consistent: if it is, then there is a way to express \(\uvec{v}\) as a linear combination of the spanning vectors, so that \(\uvec{v}\) is in \(\Span S\text{.}\) And, as you can check for yourself, this system is consistent: the second equation gives \(t = -1\text{,}\) substituting into the first gives \(s = 3\text{,}\) and these values also satisfy the third equation.
There is an interesting pattern to note if we actually convert the system above into an augmented matrix:
\begin{equation*}
\left[\begin{array}{rr|r} 1 \amp 2 \amp 1 \\ 0 \amp 1 \amp -1 \\ 1 \amp -1 \amp 4 \end{array}\right] \text{.}
\end{equation*}
Notice that the columns in the coefficient matrix are precisely the vectors in the spanning set, and the column of constants is precisely the vector that we are testing as being in \(\Span S\) or not.
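This pattern also suggests a computational test: the system is consistent exactly when the rank of the coefficient matrix equals the rank of the augmented matrix. Here is a sketch in Python (assuming the NumPy library; the rank comparison is a standard consistency criterion, not a method developed in this text):

```python
# Test whether v is in Span S: build the coefficient matrix whose
# columns are the spanning vectors, augment with v, and compare ranks.
import numpy as np

S = [(1, 0, 1), (2, 1, -1)]          # spanning vectors
v = np.array([1.0, -1.0, 4.0])       # candidate vector

A = np.column_stack([np.array(u, dtype=float) for u in S])
aug = np.column_stack([A, v])

in_span = np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)
assert in_span   # the system is consistent, so v is in Span S
```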
Example 16.4.11. A span of matrices.
This example corresponds to Discovery 16.3.b. Working in \(\matrixring_{2 \times 3}(\R)\text{,}\) we would like to determine if \(\uvec{v} = \left[\begin{smallmatrix} 0 \amp 2 \amp 2 \\ 3 \amp -3 \amp -3 \end{smallmatrix}\right] \) is in \(\Span S\text{,}\) for
\begin{equation*}
S = \left\{
\begin{bmatrix} 0 \amp 1 \amp 1 \\ 0 \amp 0 \amp 0 \end{bmatrix},
\begin{bmatrix} 0 \amp 0 \amp 0 \\ 1 \amp 1 \amp 0 \end{bmatrix},
\begin{bmatrix} 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \end{bmatrix}
\right\}\text{.}
\end{equation*}
Here, we can see more directly that \(\uvec{v}\) is not in \(\Span S\text{.}\) Notice that the nonzero entries of the matrices in \(S\) do not overlap. From this, we can see that every linear combination of these spanning matrices will have the first two entries in the second row equal to each other. But the entries of \(\uvec{v}\) do not have this property.
If we didn’t notice this, we could carry out a similar procedure as in the previous example, beginning with the vector equation
\begin{equation*}
\left[\begin{array}{rrr} 0 \amp 2 \amp 2 \\ 3 \amp -3 \amp -3 \end{array}\right]
=
r\begin{bmatrix} 0 \amp 1 \amp 1 \\ 0 \amp 0 \amp 0 \end{bmatrix} +
s\begin{bmatrix} 0 \amp 0 \amp 0 \\ 1 \amp 1 \amp 0 \end{bmatrix} +
t\begin{bmatrix} 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \end{bmatrix}
\end{equation*}
in the unknown scalars \(r,s,t\text{.}\) Combining the linear combination on the right into one matrix, and then comparing entries on each side, we get a linear system with augmented matrix
\begin{equation*}
\left[\begin{array}{rrr|r}
0 \amp 0 \amp 0 \amp 0 \\
1 \amp 0 \amp 0 \amp 2 \\
1 \amp 0 \amp 0 \amp 2 \\
0 \amp 1 \amp 0 \amp 3 \\
0 \amp 1 \amp 0 \amp -3 \\
0 \amp 0 \amp 1 \amp -3
\end{array}\right]\text{.}
\end{equation*}
In this matrix, the inconsistency is obvious in the fourth and fifth rows.
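The same rank comparison can be carried out in Python after flattening each matrix into a vector of its entries (assuming the NumPy library; an illustration only):

```python
# Test matrix span membership by flattening each matrix to a vector.
import numpy as np

S = [np.array([[0, 1, 1], [0, 0, 0]], dtype=float),
     np.array([[0, 0, 0], [1, 1, 0]], dtype=float),
     np.array([[0, 0, 0], [0, 0, 1]], dtype=float)]
v = np.array([[0, 2, 2], [3, -3, -3]], dtype=float)

A = np.column_stack([M.flatten() for M in S])   # 6 x 3 coefficient matrix
aug = np.column_stack([A, v.flatten()])         # augmented with v

# Augmenting raises the rank, so the system is inconsistent
# and v is not in Span S.
assert np.linalg.matrix_rank(A) < np.linalg.matrix_rank(aug)
```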
Example 16.4.12. A span of polynomials.
This example corresponds to Discovery 16.3.c. Working in \(\poly(\R)\text{,}\) we would like to determine if the vector \(\uvec{v} = 2 - x + 3 x^2\) is in \(\Span S\) for \(S = \{1, 1 + x, 1 + x^2\}\text{.}\) Again, we set up a vector equation expressing \(\uvec{v}\) as a linear combination of the vectors in \(S\) with unknown scalars:
\begin{equation*}
2 - x + 3 x^2 = r \cdot 1 + s (1 + x) + t (1 + x^2) \text{.}
\end{equation*}
Two polynomials are equal precisely when their corresponding coefficients are all equal. Comparing coefficients on each side, we get the following linear system:
\begin{equation*}
\begin{array}{rrcrcrcr}
\text{constant term:} \amp 2 \amp = \amp r \amp + \amp s \amp + \amp t, \\
\text{$x$ term:} \amp -1 \amp = \amp \amp \amp s, \\
\text{$x^2$ term:} \amp 3 \amp = \amp \amp \amp \amp \amp t,
\end{array}
\end{equation*}
which can be converted into augmented matrix
\begin{equation*}
\left[\begin{array}{rrr|r}
1 \amp 1 \amp 1 \amp 2 \\
0 \amp 1 \amp 0 \amp -1 \\
0 \amp 0 \amp 1 \amp 3
\end{array}\right]\text{.}
\end{equation*}
This system is consistent, so \(\uvec{v}\) is indeed in \(\Span S\text{.}\)
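Since the system is triangular, it is also easy to solve outright. A quick check in Python (assuming the NumPy library):

```python
# Solve the triangular system from the example; its columns are the
# coefficient vectors of the spanning polynomials 1, 1 + x, 1 + x^2.
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
b = np.array([2.0, -1.0, 3.0])

r, s, t = np.linalg.solve(A, b)
assert np.isclose(s, -1.0) and np.isclose(t, 3.0) and np.isclose(r, 0.0)
# so 2 - x + 3x^2 = 0*1 + (-1)(1 + x) + 3(1 + x^2)
```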
Subsection 16.4.4 Determining if a spanning set generates the whole vector space
In Discovery 16.4, we attempted to determine whether a given spanning set generated the entire vector space. In other words, we attempted to answer: is every vector in the vector space somehow a linear combination of spanning set vectors? In the three examples of that discovery activity, the answer was very clearly yes.
Example 16.4.13. A spanning set for \(\R^5\).
In Discovery 16.4.a, clearly every vector in \(\R^5\) can be decomposed as a linear combination of the standard basis vectors:
\begin{equation*}
(a,b,c,d,e) = a \uvec{e}_1 + b \uvec{e}_2 + c \uvec{e}_3 + d \uvec{e}_4 + e \uvec{e}_5 \text{.}
\end{equation*}
Example 16.4.14. A spanning set for \(\matrixring_2(\R)\).
In Discovery 16.4.b, every vector in \(\matrixring_2(\R)\) can be decomposed as a linear combination of the provided spanning set vectors:
\begin{equation*}
\begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}
= a \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}
+ b \begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix}
+ c \begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix}
+ d \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix}\text{.}
\end{equation*}
Example 16.4.15. A spanning set for \(\poly_3(\R)\).
In Discovery 16.4.c, every vector \(a + b x + c x^2 + d x^3\) in \(\poly_3(\R)\) is naturally expressed as a linear combination of \(1\) and the powers of \(x\) up to \(x^3\text{,}\) where the scalars in the linear combination are just the coefficients of the polynomial.
Remark 16.4.16.
In each of the spaces in the examples above, there are analogues for other “dimensions” of vectors.
- In \(\R^n\text{,}\) the standard basis vectors \(\uvec{e}_1,\uvec{e}_2,\dotsc,\uvec{e}_n\) always form a spanning set for the entire vector space.
- In \(\matrixring_{m \times n}(\R)\text{,}\) the set of “standard basis vectors,” consisting of those matrices that have all zero entries except for a single \(1\) in one specific entry, is always a spanning set for the entire vector space.
- When we write a polynomial, we naturally write it as a linear combination of the constant polynomial \(1\) and powers of \(x\). So in \(\poly_n(\R)\text{,}\) the “standard basis vectors” \(1,x,x^2,\dotsc,x^n\) form a spanning set for the entire vector space.
In more complicated examples, the question “Is \(\Span S\) equal to the whole space?” may be more difficult to answer with the concepts we have accumulated so far. We will make this question more manageable through a deeper study of the relationships of vectors to one another with respect to linear combinations, and by attaching a notion of “size” to subspaces. In the meantime, see Example 16.6.2 in Section 16.6 for a preliminary method of answering this question.