Section 20.3 Concepts
In this section:
- Subsection 20.3.1 The “just-right” number of vectors in a spanning set
- Subsection 20.3.2 Dimension as geometric “degrees of freedom”
- Subsection 20.3.3 Dimension as algebraic “degrees of freedom”
- Subsection 20.3.4 The dimension of a subspace
- Subsection 20.3.5 The dimension of the trivial vector space
Subsection 20.3.1 The “just-right” number of vectors in a spanning set
In Discovery 20.1, we reminded ourselves of the geometric interpretation of linear dependence and independence in \(\R^3\text{.}\)
That discovery activity ties these new, abstract concepts back to our previous descriptions of lines and planes in Chapter 15. In that chapter, we described a line via an “initial” vector and a parallel vector, and we described a plane via an “initial” vector and two parallel vectors that are not parallel to each other. Recall that for a line or plane in \(\R^3\) to be a subspace, it must contain the zero vector (i.e. it must pass through the origin). In this case, we can (and always will) take the “initial” point to be the origin.
So a line through the origin can be described by the vector equation \(\uvec{x} = t\uvec{p}\text{,}\) where \(\uvec{p}\) is a nonzero vector parallel to the line. With our new concept of span, we can instead write \(L = \Span\{\uvec{p}\}\) to represent the line \(L\) through the origin that is parallel to the vector \(\uvec{p}\text{.}\) One vector is the “just-right” size for the spanning set for a line. If we had a spanning set for \(L\) consisting of two vectors, then because \(L\) goes through the origin, and because spanning vectors are always part of the space they span, both vectors would have to be parallel to the line and so would be parallel to each other. That is, the two spanning vectors would be scalar multiples of each other, and the spanning set would be linearly dependent.
Similarly, a plane \(P\) through the origin described by the vector equation \(\uvec{x} = s\uvec{p}_1 + t\uvec{p}_2\text{,}\) where \(\uvec{p}_1,\uvec{p}_2\) are nonzero vectors parallel to the plane but not to each other, can also be represented as \(P = \Span\{\uvec{p}_1,\uvec{p}_2\}\text{.}\) Two is the “just-right” size for the spanning set for a plane — one vector would only span a line, and three vectors that are all parallel to the plane would have to be linearly dependent.
When we consider all of \(\R^3\text{,}\) the “just-right” size for a spanning set is three — two vectors would only span a plane (or just a line if the two vectors are parallel to each other), and four vectors would be linearly dependent.
So it seems that there is always a “just-right” size for a spanning set to be a basis — if it's too small it spans only a subspace and not the whole space, and if it's too large it will be linearly dependent. We call this “just-right” size the dimension of the space.
Checking that a proposed spanning set actually does span the whole space can be difficult, as we noted at the end of Subsection 17.4.4. In Subsection 20.5.2, we will find that the concept of dimension gives us a powerful way to sidestep this task if we already know the dimension of the space. If we have the “just-right” number of vectors, and those vectors are linearly independent, then the subspace they span will have the same “size” (i.e. dimension) as the whole space, which will force that subspace to in fact be the whole space.
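To make this concrete, here is a minimal computational sketch in Python using the SymPy library, with hypothetical example vectors: if three vectors in \(\R^3\) are linearly independent (the “just-right” count for that space), they must form a basis.

from sympy import Matrix

# Three hypothetical vectors in R^3, placed as the columns of a matrix.
v1 = Matrix([1, 0, 2])
v2 = Matrix([0, 1, 1])
v3 = Matrix([1, 1, 0])
A = Matrix.hstack(v1, v2, v3)

# Rank 3 means the columns are linearly independent; since three is the
# "just-right" number for R^3, these vectors must also span, hence form a basis.
print(A.rank())  # 3

# Appending a fourth vector cannot raise the rank above 3, reflecting the
# fact that any four vectors in R^3 are linearly dependent.
print(Matrix.hstack(A, Matrix([5, 7, 9])).rank())  # still 3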
Subsection 20.3.2 Dimension as geometric “degrees of freedom”
Again thinking of our tasks and results in Discovery 20.1, we can make the geometric interpretation of dimension more explicit.
Lines have dimension \(1\).
Imagine standing on a line; how many “degrees of freedom” of movement do you have while staying on the line? You can only move forwards or backwards, and backwards is just the opposite (i.e. negative) of forwards. So you only have one “degree of freedom” on a line, and this is reflected in the fact that a basis for a line requires only one vector — that vector will represent the forward direction, and its negative will represent the backward direction. One “degree of freedom” on a line, and the dimension of a line is \(1\text{.}\)
Planes have dimension \(2\).
On a plane, you have two “degrees of freedom” of movement: you can move forwards/backwards (one direction and its opposite), or you can move side-to-side (a second direction and its opposite). So a basis for a plane requires exactly two vectors that do not represent the same direction, and the dimension of a plane is \(2\text{.}\)
Space has dimension \(3\).
When we consider all of space, we add a third dimension representing a third “degree of freedom,” since you can now move upwards or downwards in addition to the previous forward/backward and side-to-side directions.
Subsection 20.3.3 Dimension as algebraic “degrees of freedom”
There is an algebraic interpretation of the “degrees of freedom” point of view discussed above that we can transplant from \(\R^n\) to other vector spaces. Consider again a plane in \(\R^3\) described via a vector equation \(\uvec{x} = s\uvec{p}_1 + t\uvec{p}_2\text{.}\) Each of the vectors \(\uvec{p}_1,\uvec{p}_2\) represents an independent direction of movement along the plane, providing us with our two geometric degrees of freedom on this plane of dimension \(2\text{.}\) Algebraically, these two degrees of freedom are provided by the parameters \(s\) and \(t\text{.}\) To convert the general formula \(\uvec{x} = s\uvec{p}_1 + t\uvec{p}_2\) representing all vectors in the plane to a specific formula representing one vector on the plane, we need to choose a specific value for \(s\) (related to how far to move in the direction \(\uvec{p}_1\)) and a specific value for \(t\) (related to how far to move in the direction \(\uvec{p}_2\)). These two values can be chosen independently — that is, choosing a value for \(t\) does not depend on what value is chosen for \(s\text{,}\) and vice versa. So two independent parameters in a general description of every vector, representing two “degrees of freedom,” corresponds to the dimension value of \(2\) for the plane.
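As a minimal sketch of this algebraic point of view, in Python with the SymPy library and hypothetical direction vectors, the two independent parameters below provide exactly the two “degrees of freedom” on the plane.

from sympy import Matrix, symbols

s, t = symbols('s t')

# Hypothetical nonzero, non-parallel vectors parallel to a plane through the origin.
p1 = Matrix([1, 0, 2])
p2 = Matrix([0, 1, 1])

# The general vector on the plane: one independent parameter per degree of freedom.
x = s*p1 + t*p2

# Choosing specific values for s and t (independently) picks out one vector.
print(x.subs({s: 3, t: -2}).T)  # Matrix([[3, -2, 4]])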
In Discovery 20.2, we practised using this point of view to not only determine the dimension of a space, but to extract a basis for the space from a general parametric description of the vectors in the space. Below is a general procedure for the process. See Subsection 20.4.1 for examples of using this procedure.
Procedure 20.3.1. Obtaining a basis from parameters.
To determine a basis for a subspace \(U\) of a vector space \(V\text{,}\) when the subspace \(U\) is not already described in terms of a spanning set (a computational sketch follows the procedure):
1. Determine a general, parametric expression capable of expressing all vectors in \(V\text{.}\)
2. Use the defining conditions of the subspace \(U\) to reduce your general expression from the previous step to the minimum number of independent parameters possible.
3. Expand the reduced parametric expression from the previous step to a linear combination of the form\begin{equation*} \uvec{x} = (\text{parameter})\cdot(\text{vector}) + (\text{parameter})\cdot(\text{vector}) + \dotsb + (\text{parameter})\cdot(\text{vector}), \end{equation*}where there is one term in the linear combination for each independent parameter, and the vectors involved are specific vectors in the space, not involving parameters.
4. The collection of specific, parameter-free vectors appearing in the linear combination from the previous step should now form a basis for \(U\text{.}\)
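As promised above, here is a minimal computational sketch of the procedure, in Python with the SymPy library, for a hypothetical subspace \(U\) of \(\R^3\) defined by the homogeneous condition \(x_1 + x_2 + x_3 = 0\text{.}\)

from sympy import Matrix, symbols

s, t = symbols('s t')

# Steps 1-2: a general vector in R^3 is (x1, x2, x3); the defining condition
# x1 + x2 + x3 = 0 lets us solve for x1 = -x2 - x3, leaving two independent
# parameters s = x2 and t = x3.
x = Matrix([-s - t, s, t])

# Step 3: expand into a linear combination with one term per parameter.
assert x == s*Matrix([-1, 1, 0]) + t*Matrix([-1, 0, 1])

# Step 4: the specific, parameter-free vectors form a basis for U, so dim U = 2.
basis = [Matrix([-1, 1, 0]), Matrix([-1, 0, 1])]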
Remark 20.3.2.
- This procedure can still be used in the case that \(U\) is equal to the whole space \(V\text{,}\) though Step 2 will likely not be needed. In this case, the procedure is likely to produce a standard basis for \(V\text{.}\)
- In Remark 17.4.9, we claimed that every subspace is somehow defined by one or more homogeneous conditions. Typically, in Step 2 of the procedure, you will use such homogeneous conditions to express relationships between the parameters, in which some parameters can be solved for and then eliminated by substituting for them in the general parametric vector expression from Step 1.
- This procedure was actually one of the first things we learned, back in Chapter 2! Except back then we called the procedure row reduction. When we solve a homogeneous system of equations with \(m\times n\) coefficient matrix \(A\text{,}\) we are attempting to find all vectors \(\uvec{x}\) that satisfy the homogeneous condition \(A\uvec{x} = \zerovec\text{.}\) We could have begun by assigning parameters \(x_1=t_1\text{,}\) \(x_2=t_2\text{,}\) and so on, but this is not necessary because the matrix-reduction process doesn't involve any variable/parameter letters. By row reducing, we simplify the original homogeneous conditions (i.e. the original equations in the system) so that it becomes obvious how to isolate certain of the variables and express them in terms of the others (or determine that they are always zero and so can be eliminated completely). We then assign the minimum number of parameters necessary, leading to a general, parametric expression for all vectors in the solution space. See Subsection 20.4.1 for an example of using this procedure to determine a basis for the solution space of a homogeneous system, and see the sketch below for a computational version.
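As a minimal sketch of this connection, in Python with the SymPy library and a hypothetical coefficient matrix, the nullspace() method row reduces and returns one basis vector per free variable, that is, per independent parameter.

from sympy import Matrix

# Hypothetical coefficient matrix of a homogeneous system A x = 0.
A = Matrix([
    [1, 2, 0, 3],
    [0, 0, 1, 4],
])

# Row reducing shows that x2 and x4 are free variables; nullspace() returns
# one basis vector per free variable, so the solution space has dimension 2.
for v in A.nullspace():
    print(v.T)
# Matrix([[-2, 1, 0, 0]])
# Matrix([[-3, 0, -4, 1]])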
Subsection 20.3.4 The dimension of a subspace
In Discovery 20.5, we considered how the dimension of a subspace compares to the dimension of the whole space. The dimension of a space is defined to be the number of vectors required in a basis (i.e. a linearly independent spanning set) for the space. We know what spanning set means for a subspace — a set of vectors is a spanning set when the collection of all possible linear combinations of the spanning set vectors is the same as the collection of all vectors in the subspace. But the definition of linearly independent does not seem to be relative to the space that the vectors are in, except for the use of the vector operations for that space, which are always the same in a subspace as they are in the whole space.
In more detail, the definitions of linear dependence and independence involve only the zero vector and the concept of linear combination, and every subspace contains the zero vector and is closed under taking linear combinations (Proposition 17.5.2). So if we have a set of vectors in a subspace of a larger vector space, and we would like to determine whether that set is linearly dependent or independent, it is irrelevant whether we consider those vectors as being a part of the subspace or as being a part of the large space — the answer will be the same regardless of our point of view on where these vectors “live.”
It seems like a spanning set for a subspace should require fewer vectors than a spanning set for the larger space. This was our experience in Discovery 20.2, where eliminating dependent parameters using the subspace conditions led to a smaller basis. And, as discussed above, the concepts of linear dependence/independence do not rely on the concept of subspace. So our intuition is that the dimension of a subspace should be no greater than the dimension of the whole space (and strictly smaller for a proper subspace), and that is exactly what we will see in Subsection 20.5.3.
Subsection 20.3.5 The dimension of the trivial vector space
What should the dimension of the trivial vector space \(\{\zerovec\}\) be? If this were the subspace of \(\R^n\) consisting of just the origin, we would have zero “degrees of freedom” of movement, as we couldn't move at all without leaving the subspace. And if we want a general algebraic expression describing all vectors in this space, zero parameters are needed since we can simply write \(\uvec{x} = \zerovec\text{.}\) So both our geometric and our algebraic conceptions of dimension suggest that \(\dim\{\zerovec\}\) should be \(0\text{.}\)
Furthermore, in the previous subsection we decided that the dimension of a proper subspace should be strictly smaller than the dimension of the whole space. The trivial vector space is a subspace of every vector space, and a proper subspace of every nontrivial one, including \(1\)-dimensional spaces. But clearly the trivial space is not the same “size” as a \(1\)-dimensional space, so its dimension should be strictly smaller than \(1\text{,}\) which leaves \(0\) as the only possibility.
But what about the technical definition of dimension? How many vectors are required in a basis for the trivial space? A basis for \(\{\zerovec\}\) cannot contain a nonzero vector, because a span always contains its spanning vectors, and this space contains nothing nonzero. And while the set \(\{\zerovec\}\) is a spanning set for the trivial space, we decided in Chapter 18 that a set consisting of just the zero vector should be considered linearly dependent. However, the set that contains no vectors at all (i.e. the empty set) is linearly independent, because it does not contain an example of a vector that can be expressed as a linear combination of other vectors in the set (since it contains nothing at all). So if we simply decide that \(\Span\{\}\) should result in the trivial vector space, then we can consider the empty set of vectors \(\{\}\) as a basis for the trivial space \(\{\zerovec\}\text{,}\) and this basis contains \(0\) vectors.
For all of these reasons, it seems correct to consider \(\dim\{\zerovec\}\) to be \(0\text{.}\)
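This convention also agrees with computational practice. For instance, here is a minimal check in Python with the SymPy library: the system \(I\uvec{x} = \zerovec\) has only the trivial solution, and the computed basis for its solution space is empty.

from sympy import Matrix

# I x = 0 has only the trivial solution, so the solution space is {0};
# the returned basis is the empty list, giving dimension 0.
print(Matrix.eye(3).nullspace())  # []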