
Section 16.3 Concepts

When faced with any big problem, mathematical or otherwise, it is often a good idea to break the big problem up into smaller parts. In a vector space, the “smaller parts” are smaller vector spaces inside the larger space, called subspaces.

Subsection 16.3.1 Recognizing subspaces

How can we recognize subspaces? To be a subspace, a subcollection of vectors must satisfy all ten vector space axioms on its own. But Axiom A 2, Axiom A 3, and Axioms S 2–5 all concern the algebra of vectors, and don’t really take into account where the vectors are considered to “live”. Since these algebra axioms are true about all vectors in the larger space, they are automatically true about the vectors in the subcollection. So that leaves Axiom A 1, Axiom A 4, Axiom A 5, and Axiom S 1.
For Axiom A 1 and Axiom S 1, we already have addition and scalar multiplication of vectors defined as in the large space. But when we are considering whether the smaller collection is a vector space all on its own, the vectors not in this collection are no longer relevant. So the parts of these axioms that say “the result is again a vector in the collection of objects” now refer to the subcollection of vectors under consideration. That is, we need to make sure that when vectors in the subcollection are added or scalar multiplied, the result is again in the subcollection, not somewhere else in the vector space at large. We call this property being closed under the vector operations.
Similarly, for Axiom A 5, we already know that every vector in the large space has a negative, hence so does every vector in the subcollection. But we need to check that the subcollection is closed under taking negatives — that the negative of a vector in the subcollection is again in the subcollection. And we know that there is a zero vector in the large space, but the subcollection needs a zero vector too, to satisfy Axiom A 4. The zero vector from the larger space already satisfies the property that \(\uvec{v}+\zerovec=\uvec{v}\) for all vectors, but again we need the zero vector to be in the subcollection.
As we will prove in Proposition 16.5.1, we really only need to check that a subcollection is nonempty and closed under addition and scalar multiplication in order to verify that it is also a vector space. These three conditions make up the Subspace Test.
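For instance, here is a minimal computational sketch, in Python with NumPy, of what checking these conditions looks like. The sketch is illustrative only and is not part of the formal development; the particular subcollection, all vectors in \(\R^3\) whose components sum to zero, is a hypothetical example.

import numpy as np

# Hypothetical subcollection W of R^3: all vectors whose components sum to zero.
def in_W(v, tol=1e-9):
    return abs(np.sum(v)) < tol

# Nonempty: the zero vector of R^3 belongs to W.
assert in_W(np.zeros(3))

# Spot-check closure under addition and scalar multiplication on random samples.
rng = np.random.default_rng(0)
for _ in range(100):
    u = rng.normal(size=3); u -= u.mean()   # force u into W
    v = rng.normal(size=3); v -= v.mean()   # force v into W
    k = rng.normal()
    assert in_W(u + v)   # closed under addition
    assert in_W(k * u)   # closed under scalar multiplication

A numerical spot-check like this only illustrates the idea; an actual verification argues algebraically that sums and scalar multiples of vectors in the subcollection again satisfy its defining condition, as in the examples of Section 16.4.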

Remark 16.3.2.

  • The first condition might seem unnecessary. But in math it is possible to accidentally be considering some collection of objects that in reality contains no objects. For example, consider \(\matrixring_1(\R)\text{,}\) the vector space of all \(1\times 1\) matrices. We could try to determine whether the subcollection of all \(1\times 1\) matrices whose square is equal to \(\begin{bmatrix} -1 \end{bmatrix}\) is a subspace of \(\matrixring_1(\R)\text{,}\) but we’d be wasting our time because there are no such matrices.
  • The logic of the test works in reverse as well: every subspace satisfies the three statements of the test because it is a vector space all on its own and thus satisfies the ten vector space axioms. (This is the “and only if” part of Proposition 16.5.1.) So a subspace is always nonempty because it must contain a zero vector (Axiom A 4), and it is always closed under the vector operations (Axiom A 1 and Axiom S 1).
See Section 16.4 for examples of applying the Subspace Test to verify that certain subcollections of vectors in vector spaces form subspaces.

Subsection 16.3.2 Building subspaces

Suppose we are studying a problem for which certain vectors in a certain vector space are important. We would like to do linear algebra with these certain special vectors, so the fact that they are part of a vector space is essential, but perhaps not all of the vector space in which they live is relevant to the problem. Can we form a smaller subspace which contains these important vectors? Even better, can we determine the smallest subspace which contains these important vectors?
As stated above, every subspace must satisfy the three parts of the Subspace Test. So if a subspace contains our special vectors, then it must also contain all scalar multiples of those vectors. And it must also be closed under vector addition, so it must also contain all sums of scalar multiples of the special vectors. Therefore, it must contain every linear combination of the special vectors. (In other words, subspaces are also closed under taking linear combinations.)
As we noted in Discovery guide 16.1, and will verify in Subsection 16.5.2 (Proposition 16.5.5), the subcollection of a vector space consisting of all linear combinations of a set of vectors \(S\) is always a subspace, called the span of \(S\) and written \(\Span S\text{.}\) So the process of taking all possible linear combinations of a set of vectors can be used to build subspaces. And sometimes, as in Discovery 16.4, the space that span builds ends up being the whole larger space.
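To make “all possible linear combinations” concrete, here is a minimal sketch in Python with NumPy (the spanning set is a hypothetical example) that tests whether a given vector belongs to the span of a set of vectors, that is, whether it can be written as a linear combination of them:

import numpy as np

def in_span(S, w, tol=1e-10):
    # True if w is a linear combination of the columns of S,
    # i.e. if w lies in Span S (numerical check via least squares).
    coeffs, *_ = np.linalg.lstsq(S, w, rcond=None)
    return np.allclose(S @ coeffs, w, atol=tol)

# Hypothetical spanning set in R^3: two nonparallel vectors, so Span S
# is a plane through the origin.
S = np.column_stack(([1.0, 0.0, 2.0], [0.0, 1.0, -1.0]))

print(in_span(S, np.array([2.0, 3.0, 1.0])))   # True: 2*(1,0,2) + 3*(0,1,-1)
print(in_span(S, np.array([0.0, 0.0, 1.0])))   # False: not in that plane

Deciding span membership this way amounts to solving a linear system for the coefficients of the linear combination.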

Subsection 16.3.3 The subspaces of \(\R^n\)

We saw in Subsection 14.3.2 that a line through the origin in \(\R^n\) can be described in vector form as \(\uvec{x} = t \uvec{p}\) for some vector \(\uvec{p}\) that is parallel to the line (and taking “initial” point \(\uvec{x}_0 = \zerovec\text{,}\) since the line passes through the origin). Similarly, we saw that a plane through the origin in \(\R^3\) can be described in vector form as \(\uvec{x} = s \uvec{p}_1 + t \uvec{p}_2\) for some vectors \(\uvec{p}_1\) and \(\uvec{p}_2\) that are parallel to the plane but not parallel to each other. With our new, more sophisticated view of vector spaces and subspaces, we can now recognize a line \(\uvec{x} = t \uvec{p}\) as the subspace \(\Span \{\uvec{p}\}\text{,}\) and a plane \(\uvec{x} = s \uvec{p}_1 + t \uvec{p}_2\) as the subspace \(\Span \{\uvec{p}_1,\uvec{p}_2\}\text{.}\)
In fact, every (nontrivial) subspace of \(\R^n\) has a geometric interpretation as a line, a plane, or some sort of higher-dimensional hyperplane through the origin (a computational way to distinguish these cases is sketched after the list below). In particular,
  • the subspaces of \(\R^2\) are precisely
    • the zero space \(\{\zerovec\}\text{,}\)
    • \(\Span \{\uvec{p}\}\) for a nonzero vector \(\uvec{p}\text{,}\) which builds a line through the origin, and
    • \(\Span \{\uvec{p}_1,\uvec{p}_2\}\) for two nonzero, nonparallel vectors \(\uvec{p}_1\) and \(\uvec{p}_2\text{,}\) which builds the whole plane \(\R^2\text{;}\)
  • the subspaces of \(\R^3\) are precisely
    • the zero space \(\{\zerovec\}\text{,}\)
    • \(\Span \{\uvec{p}\}\) for a nonzero vector \(\uvec{p}\text{,}\) which builds a line through the origin,
    • \(\Span \{\uvec{p}_1,\uvec{p}_2\}\) for two nonzero, nonparallel vectors \(\uvec{p}_1\) and \(\uvec{p}_2\text{,}\) which builds a plane through the origin, and
    • \(\Span \{\uvec{p}_1,\uvec{p}_2,\uvec{p}_3\}\) for three nonzero, non-coplanar vectors \(\uvec{p}_1\text{,}\) \(\uvec{p}_2\text{,}\) and \(\uvec{p}_3\text{,}\) which builds all of space \(\R^3\text{;}\)
  • the subspaces of \(\R^4\) are precisely
    • the zero space \(\{\zerovec\}\text{,}\)
    • \(\Span \{\uvec{p}\}\) for a nonzero vector \(\uvec{p}\text{,}\) which builds a line through the origin,
    • \(\Span \{\uvec{p}_1,\uvec{p}_2\}\) for two nonzero, nonparallel vectors \(\uvec{p}_1\) and \(\uvec{p}_2\text{,}\) which builds a plane through the origin,
    • \(\Span \{\uvec{p}_1,\uvec{p}_2,\uvec{p}_3\}\) for three nonzero, non-coplanar vectors \(\uvec{p}_1\text{,}\) \(\uvec{p}_2\text{,}\) and \(\uvec{p}_3\text{,}\) which builds a hyperplane through the origin, and
    • \(\Span \{\uvec{p}_1,\uvec{p}_2,\uvec{p}_3,\uvec{p}_4\}\) for four nonzero, non-cohyperplanar vectors \(\uvec{p}_1\text{,}\) \(\uvec{p}_2\text{,}\) \(\uvec{p}_3\text{,}\) and \(\uvec{p}_4\text{,}\) which builds all of four-dimensional space \(\R^4\text{;}\)
  • etc.
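The pattern in this list can be explored computationally. If we collect the spanning vectors as the columns of a matrix, the number of geometric “dimensions” that the span builds corresponds to the rank of that matrix. Here is a minimal sketch in Python with NumPy (the vectors are hypothetical examples) classifying spans in \(\R^3\) this way:

import numpy as np

def classify_span_R3(vectors):
    # Classify Span{vectors} as a subspace of R^3 by the rank of the matrix
    # whose columns are the given vectors:
    # 0 -> zero space, 1 -> line, 2 -> plane, 3 -> all of R^3.
    if len(vectors) == 0:
        return "zero space {0}"
    A = np.column_stack(vectors)
    r = np.linalg.matrix_rank(A)
    return ["zero space {0}", "line through the origin",
            "plane through the origin", "all of R^3"][r]

p1 = np.array([1.0, 0.0, 2.0])
p2 = np.array([0.0, 1.0, -1.0])
p3 = 2 * p1 - 3 * p2           # lies in the plane spanned by p1 and p2

print(classify_span_R3([p1]))          # line through the origin
print(classify_span_R3([p1, p2]))      # plane through the origin
print(classify_span_R3([p1, p2, p3]))  # still a plane: p3 is redundant
print(classify_span_R3([p1, p2, np.array([0.0, 0.0, 1.0])]))  # all of R^3

Note how the redundant third vector does not enlarge the span; trimming such redundancy from spanning sets is part of the optimization goal described in the next subsection.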

Subsection 16.3.4 Recognizing when two subspaces are the same

One of the goals of the next few chapters is to determine how to describe vector spaces using spanning sets in an optimal fashion. In this endeavour, we will want to know when a refinement of a spanning set still spans the space we are trying to describe. So, in particular, we will need to know when two spanning sets generate the same subspace. Since spans are defined by linear combinations, and subspaces are closed under linear combinations, this will not be as difficult as it sounds, and we provide a test for this situation as Proposition 16.5.6 in Subsection 16.5.3.
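As a computational preview of that test, here is a minimal sketch in Python with NumPy (the vectors are hypothetical examples, and this is not the statement of Proposition 16.5.6 itself): two sets of vectors in \(\R^n\) span the same subspace exactly when neither set adds anything new to the span of the other, which can be checked by comparing ranks:

import numpy as np

def same_span(S1, S2):
    # True if the columns of S1 and the columns of S2 span the same subspace
    # of R^n, which holds exactly when rank(S1) == rank(S2) == rank([S1 | S2]).
    r1 = np.linalg.matrix_rank(S1)
    r2 = np.linalg.matrix_rank(S2)
    r_both = np.linalg.matrix_rank(np.hstack((S1, S2)))
    return r1 == r2 == r_both

# Hypothetical example in R^3: both sets span the same plane through the origin,
# since (1,1,1) = p1 + p2 and (1,-1,3) = p1 - p2.
S1 = np.column_stack(([1.0, 0.0, 2.0], [0.0, 1.0, -1.0]))
S2 = np.column_stack(([1.0, 1.0, 1.0], [1.0, -1.0, 3.0]))

print(same_span(S1, S2))   # True

Comparing the combined rank with the individual ranks captures precisely the criterion that every vector of each spanning set is a linear combination of the vectors of the other.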