Section 18.3 Concepts
Statement 3 of Proposition 17.5.5 guarantees that every vector space has a spanning set. To prove this statement, we verified that every vector space is trivially generated by itself, i.e. \(V = \Span V\text{.}\)
But this doesn't give us a useful description of the vector space \(V\text{,}\) it basically just says “to build all the vectors in \(V\) out of some vectors in \(V\text{,}\) use all the vectors in \(V\text{.}\)” The point of a spanning set is to give you a set of building-block vectors that can be used to construct every other vector in the space through linear combinations. To make an analogy to chemistry, the vectors in a spanning set act like atoms, and linear combinations are then like molecules built out of different combinations of different quantities of those atoms. So we would like our set of building blocks to be as simple as possible — that is, we would like to get down to some sort of optimal description of a vector space in terms of a spanning set. Linear dependence and independence are precisely the concepts we will need in order to judge whether we have such an optimal spanning set.
Subsection 18.3.1 Reducing spanning sets
Suppose we have a spanning set \(S\) for a space \(V\text{,}\) so that \(V = \Span S\text{.}\) This equality of spaces says that every vector in the space \(V\) can somehow be expressed as a linear combination of vectors in \(S\text{.}\)
Suppose further that one of the vectors in \(S\) can be expressed as a linear combination of some of the others, say
\begin{equation*}
\uvec{w} = a_1 \uvec{u}_1 + a_2 \uvec{u}_2 + \dotsb + a_m \uvec{u}_m \text{,} \tag{\(\star\)}
\end{equation*}
where each of \(\uvec{w},\uvec{u}_1,\uvec{u}_2,\dotsc,\uvec{u}_m\) is a vector in \(S\text{.}\)
Then \(\uvec{w}\) is not actually needed when expressing vectors in \(V\) as linear combinations of vectors in \(S\text{.}\)
Indeed, consider a vector \(\uvec{x}\) in \(V\) expressed as a linear combination of vectors in \(S\text{,}\) including \(\uvec{w}\text{,}\) say
\begin{equation*}
\uvec{x} = b \uvec{w} + b_1 \uvec{v}_1 + b_2 \uvec{v}_2 + \dotsb + b_n \uvec{v}_n \text{,}
\end{equation*}
where each of \(\uvec{v}_1,\uvec{v}_2,\dotsc,\uvec{v}_n\) is a vector in \(S\text{.}\) But then we could substitute the expression in (\(\star\)) for \(\uvec{w}\) to obtain
\begin{align*}
\uvec{x} &= b (a_1 \uvec{u}_1 + a_2 \uvec{u}_2 + \dotsb + a_m \uvec{u}_m) + b_1 \uvec{v}_1 + b_2 \uvec{v}_2 + \dotsb + b_n \uvec{v}_n \\
&= b a_1 \uvec{u}_1 + b a_2 \uvec{u}_2 + \dotsb + b a_m \uvec{u}_m + b_1 \uvec{v}_1 + b_2 \uvec{v}_2 + \dotsb + b_n \uvec{v}_n \text{.}
\end{align*}
Here, each of the \(\uvec{u}_i\) vectors and the \(\uvec{v}_j\) vectors is in \(S\text{,}\) making this an expression for \(\uvec{x}\) as a linear combination of vectors in \(S\) that does not involve \(\uvec{w}\text{.}\)
If \(\uvec{w}\) isn't needed to express vectors in \(V\) as linear combinations of vectors in \(S\text{,}\) then we should have \(\Span S = \Span S'\text{,}\) where \(S'\) is the set of all vectors in \(S\) except \(\uvec{w}\text{.}\) That is, we can discard \(\uvec{w}\) from the spanning set for \(V\text{,}\) and still have a spanning set.
This pattern will help us reduce down to some sort of optimal spanning set: we can keep removing these redundant spanning vectors that are linear combinations of other spanning vectors until there are none left.
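As a small concrete illustration (with vectors chosen here just for this purpose), consider the spanning set \(S = \{(1,0), (0,1), (3,2)\}\) for \(\R^2\text{.}\) Since
\begin{equation*}
(3,2) = 3 (1,0) + 2 (0,1) \text{,}
\end{equation*}
the vector \((3,2)\) plays the role of \(\uvec{w}\) above: any linear combination that uses \((3,2)\) can be rewritten using only \((1,0)\) and \((0,1)\text{.}\) So we can discard \((3,2)\) and still have \(\Span \{(1,0),(0,1)\} = \Span S = \R^2\text{,}\) and since neither of the two remaining vectors is a scalar multiple of the other, no further reduction is possible.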
Subsection 18.3.2 Linear dependence and independence
A set of vectors is called linearly dependent precisely when it is non-optimal as a spanning set in the way described above: when one of the vectors in the set is a linear combination of others in the set. However, it is usually not obvious that some vector is redundant in this way — checking each vector in turn is tedious, and also would not be a very convenient way to reason with the concept of linear dependence in the abstract.
However, having an expression for a vector \(\uvec{w}\) as a linear combination of other vectors \(\uvec{u}_1,\uvec{u}_2,\dotsc,\uvec{u}_m\text{,}\) such as
\begin{equation*}
\uvec{w} = a_1 \uvec{u}_1 + a_2 \uvec{u}_2 + \dotsb + a_m \uvec{u}_m \text{,} \tag{\(\star\star\)}
\end{equation*}
is equivalent to having a nontrivial linear combination of these vectors equal to the zero vector:
\begin{equation*}
(-1) \uvec{w} + a_1 \uvec{u}_1 + a_2 \uvec{u}_2 + \dotsb + a_m \uvec{u}_m = \zerovec \text{.} \tag{\(\star\star\star\)}
\end{equation*}
And vice versa, since if we have a nontrivial linear combination of these vectors that results in the zero vector, say
\begin{equation*}
k \uvec{w} + a_1 \uvec{u}_1 + a_2 \uvec{u}_2 + \dotsb + a_m \uvec{u}_m = \zerovec \text{,}
\end{equation*}
and the coefficient \(k\) on \(\uvec{w}\) is nonzero, then we can rearrange to get
\begin{equation*}
\uvec{w} = -\frac{a_1}{k} \uvec{u}_1 - \frac{a_2}{k} \uvec{u}_2 - \dotsb - \frac{a_m}{k} \uvec{u}_m \text{,}
\end{equation*}
an expression for \(\uvec{w}\) as a linear combination of the others.
Again, the advantage of checking for linear combinations equal to the zero vector is that in a collection of vectors \(S\text{,}\) we usually don't know ahead of time which one will be the odd one out (that is, which one will play the role of \(\uvec{w}\) as above). In a nontrivial linear combination equalling \(\zerovec\text{,}\) we can take as the redundant vector \(\uvec{w}\) any of the vectors whose coefficient is nonzero (which is required to perform the step of dividing by \(k\) in the algebra isolating \(\uvec{w}\) above).
Now, we can always take the trivial linear combination, where all coefficients are \(0\text{,}\) to get a result of \(\zerovec\text{.}\) But if this is the only linear combination of a set of vectors by which we can get \(\zerovec\) as a result, then none of the vectors can act as the redundant \(\uvec{w}\) as above, because an expression like (\(\star\star\)) always leads to an equality like (\(\star\star\star\)), involving a nontrivial linear combination.
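For example (with coefficients chosen only for concreteness), the expression \(\uvec{w} = 2 \uvec{u}_1 - 5 \uvec{u}_2\) can be rearranged into the nontrivial linear combination
\begin{equation*}
(-1) \uvec{w} + 2 \uvec{u}_1 - 5 \uvec{u}_2 = \zerovec \text{,}
\end{equation*}
and from this equality we could just as well isolate \(\uvec{u}_1\) instead, since its coefficient \(2\) is also nonzero:
\begin{equation*}
\uvec{u}_1 = \frac{1}{2} \uvec{w} + \frac{5}{2} \uvec{u}_2 \text{.}
\end{equation*}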
This logic leads to the Test for Linear Dependence/Independence.
Procedure 18.3.1. Test for Linear Dependence/Independence.
To test whether vectors \(\uvec{v}_1,\uvec{v}_2,\dotsc,\uvec{v}_m\) are linearly dependent/independent, solve the homogeneous vector equation
\begin{equation*}
k_1 \uvec{v}_1 + k_2 \uvec{v}_2 + \dotsb + k_m \uvec{v}_m = \zerovec
\end{equation*}
in the (scalar) variables \(k_1,k_2,\dotsc,k_m\text{.}\)
If this vector equation has a nontrivial solution for these coefficient variables, then the vectors \(\uvec{v}_1,\uvec{v}_2,\dotsc,\uvec{v}_m\) are linearly dependent.
Otherwise, if this vector equation has only the trivial solution \(k_1=0,k_2=0,\dotsc,k_m=0\text{,}\) then the vectors \(\uvec{v}_1,\uvec{v}_2,\dotsc,\uvec{v}_m\) are linearly independent.
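To illustrate the test on a concrete (and purely illustrative) collection of vectors, consider \(\uvec{v}_1 = (1,0,1)\text{,}\) \(\uvec{v}_2 = (0,1,1)\text{,}\) \(\uvec{v}_3 = (1,1,2)\) in \(\R^3\text{.}\) The vector equation
\begin{equation*}
k_1 (1,0,1) + k_2 (0,1,1) + k_3 (1,1,2) = (0,0,0)
\end{equation*}
is equivalent to the homogeneous linear system
\begin{align*}
k_1 + k_3 &= 0 \text{,} \\
k_2 + k_3 &= 0 \text{,} \\
k_1 + k_2 + 2 k_3 &= 0 \text{,}
\end{align*}
which has nontrivial solutions such as \(k_1 = 1\text{,}\) \(k_2 = 1\text{,}\) \(k_3 = -1\text{.}\) So these three vectors are linearly dependent, reflecting the fact that \(\uvec{v}_3 = \uvec{v}_1 + \uvec{v}_2\text{.}\) Replacing \(\uvec{v}_3\) by, say, \((0,0,1)\) would instead yield only the trivial solution, and that collection would be linearly independent.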
Subsection 18.3.3 Linear dependence and independence of just one or two vectors
For a pair \(\uvec{u},\uvec{v}\) of vectors to be linearly dependent, one must be a linear combination of the other. But a linear combination of one vector is just a scalar multiple, and so a pair of vectors is linearly dependent if one is a scalar multiple of the other. If both vectors are nonzero, that scalar must also be nonzero and so could be shifted to the other side as its reciprocal:
\begin{equation*}
\uvec{u} = k \uvec{v}
\qquad \implies \qquad
\uvec{v} = \frac{1}{k} \uvec{u} \text{.}
\end{equation*}
So nonzero vectors \(\uvec{u},\uvec{v}\) are linearly dependent if and only if each is a scalar multiple of the other. In \(\R^n\text{,}\) we would have called such vectors parallel.
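For instance (with vectors chosen just for illustration), \(\uvec{u} = (2,-4,6)\) and \(\uvec{v} = (1,-2,3)\) in \(\R^3\) are linearly dependent, since
\begin{equation*}
\uvec{u} = 2 \uvec{v}
\qquad \text{and equally} \qquad
\uvec{v} = \frac{1}{2} \uvec{u} \text{,}
\end{equation*}
while \((1,0,0)\) and \((0,1,0)\) are linearly independent, since neither is a scalar multiple of the other.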
What about a set containing a single vector? Our motivation for creating the concept of linear dependence/independence was to measure whether a spanning set contained redundant information or whether it was somehow “optimal.” A spanning set consisting of a single nonzero vector cannot be reduced to a smaller spanning set, so it is already optimal and we should refer to that spanning set as linearly independent. This coincides with the result of the test for linear dependence/independence for a single vector \(\uvec{v}\text{:}\) if \(\uvec{v}\) is nonzero, then there are no nontrivial solutions to the vector equation \(k\uvec{v} = \zerovec\text{.}\)
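To spell out that last claim: if \(\uvec{v} \neq \zerovec\) and \(k \uvec{v} = \zerovec\) with \(k\) nonzero, then multiplying both sides by \(\frac{1}{k}\) would force
\begin{equation*}
\uvec{v} = \frac{1}{k} \zerovec = \zerovec \text{,}
\end{equation*}
contrary to assumption, so the only solution is the trivial one \(k = 0\text{.}\)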
But what about the zero vector by itself? Scalar multiples of \(\zerovec\) remain \(\zerovec\text{,}\) so \(\Span\{\zerovec\}\) is the trivial vector space consisting of just the zero vector. Is \(\{\zerovec\}\) an optimal spanning set for the trivial space, or can it be reduced further? By convention, we also consider \(\Span\{\}\) to be the trivial vector space (where \(\{\}\) represents a set containing no vectors, called the empty set), because we always want the \(\Span\) process to return a subspace of the vector space in which we're working. So the spanning set \(\{\zerovec\}\) can be reduced to the empty set, and still span the same space. This line of reasoning leads us to consider the set of vectors containing only the zero vector to be linearly dependent.
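This convention also agrees with the Test for Linear Dependence/Independence: the vector equation
\begin{equation*}
k \zerovec = \zerovec
\end{equation*}
is satisfied by every scalar \(k\text{,}\) not just \(k = 0\text{,}\) so there are certainly nontrivial solutions, and the test declares \(\{\zerovec\}\) to be linearly dependent.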
Subsection 18.3.4 Linear dependence and independence in \(\R^n\)
Independent directions.
In Subsection 17.3.3, we discussed how a nonzero vector in \(\R^n\) spans a line through the origin. If a second vector is linearly dependent with the vector spanning the line, then as discussed in the previous subsection (Subsection 18.3.3), that second vector must be parallel with the first, hence parallel with the line. To get something linearly independent, we need to branch off in a new direction from the line.
In Subsection 17.3.3, we also discussed how a pair of nonzero, nonparallel vectors in \(\R^n\) spans a plane through the origin. If a third vector is linearly dependent with those two spanning vectors, it is somehow a linear combination of them and so lies in that plane. To get something linearly independent, we need to branch off in a new direction from that plane.
This idea that we can enlarge an independent set by including a new vector in a new direction works in abstract vector spaces as well, as we will see in Proposition 18.5.6 in Subsection 18.5.2.
Maximum number of linearly independent vectors.
In Discovery 18.5, we considered the possible sizes of linearly independent sets in \(\R^2\) and \(\R^3\text{.}\) We can certainly have two linearly independent vectors in \(\R^2\text{,}\) since clearly the standard basis vectors \(\uvec{e}_1\) and \(\uvec{e}_2\) form a linearly independent set in \(\R^2\text{.}\) Could we have three? Two linearly independent vectors would have to be nonparallel, and so they would have to span a plane — i.e. they would have to span the entire plane \(\R^2\text{.}\) Geometrically, a third linearly independent vector would have to branch off in a “new direction,” as in our discussion above. But in \(\R^2\) there is no new third direction in which to head — we can't have a vector breaking up out of the plane “into the third dimension,” because there is no third dimension available in \(\R^2\text{.}\) Algebraically, if we had three vectors \(\uvec{u}=(u_1,u_2)\text{,}\) \(\uvec{v}=(v_1,v_2)\text{,}\) \(\uvec{w}=(w_1,w_2)\) in \(\R^2\) and attempted the test for dependence/independence, we would start by setting up the vector equation
\begin{equation*}
k_1 \uvec{u} + k_2 \uvec{v} + k_3 \uvec{w} = \zerovec \text{.}
\end{equation*}
Combining the linear combination on the left back into one vector, and comparing coordinates on either side, we would obtain the linear system
\begin{align*}
u_1 k_1 + v_1 k_2 + w_1 k_3 &= 0 \text{,} \\
u_2 k_1 + v_2 k_2 + w_2 k_3 &= 0 \text{,}
\end{align*}
in the unknown coefficients \(k_1,k_2,k_3\text{.}\) Because there are only two equations but three unknowns, the reduced form of the coefficient matrix for this system can have no more than two leading ones, so solving requires at least one parameter, which means there are nontrivial solutions. So three vectors in \(\R^2\) can never be linearly independent.
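To see this in action with specific numbers (chosen only for illustration), take \(\uvec{u} = (1,0)\text{,}\) \(\uvec{v} = (0,1)\text{,}\) \(\uvec{w} = (2,3)\text{.}\) The system above becomes
\begin{align*}
k_1 + 2 k_3 &= 0 \text{,} \\
k_2 + 3 k_3 &= 0 \text{,}
\end{align*}
with general solution \(k_1 = -2t\text{,}\) \(k_2 = -3t\text{,}\) \(k_3 = t\) for a parameter \(t\text{.}\) Taking \(t = 1\) gives the nontrivial relation
\begin{equation*}
(-2) (1,0) + (-3) (0,1) + (2,3) = (0,0) \text{,}
\end{equation*}
confirming that these three vectors are linearly dependent.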
We come to a similar conclusion in \(\R^3\) using both geometric and algebraic reasoning — a set of three independent vectors in \(\R^3\) is certainly possible (for example, the standard basis vectors), but a set of four vectors in \(\R^3\) can never be linearly independent. Geometrically, once you have three independent vectors pointing in three “independent directions,” there is no new direction in \(\R^3\) for a fourth independent vector to take. Algebraically, we could set up the test for independence with four vectors in \(\R^3\) and it would lead to a homogeneous system of three equations (one for each coordinate) in four variables (one unknown coefficient for each vector). Since the system would have more variables than equations, it would require parameters to solve, leading to nontrivial solutions.
And the pattern continues in higher dimensions — a collection of more than four vectors in \(\R^4\) must be linearly dependent, a collection of more than five vectors in \(\R^5\) must be linearly dependent, and so on. In fact, this pattern can also be found in abstract vector spaces — see Lemma 18.5.7 in Subsection 18.5.2. And this pattern will help us transplant the idea of dimension from \(\R^n\) to abstract spaces.