Section 18.3 Concepts
Subsection 18.3.1 Reducing spanning sets
Suppose we have a spanning set S for a space V, so that V=SpanS. This equality of spaces says that every vector in the space V can somehow be expressed as a linear combination of vectors in S. Suppose further that one of the vectors in S can be expressed as a linear combination of some of the others, say
w=k1u1+k2u2+⋯+kmum, (✶)
where each of w,u1,u2,…,um is a vector in S.
Then w is not actually needed when expressing vectors in V as linear combinations of vectors in S.
Indeed, consider a vector x in V expressed as a linear combination of vectors in S, including w, say
x=cw+c1v1+c2v2+⋯+cnvn,
where each of v1,v2,…,vn is a vector in S. But then we could substitute the expression in (✶) for w to obtain
x=c(k1u1+k2u2+⋯+kmum)+c1v1+c2v2+⋯+cnvn=ck1u1+ck2u2+⋯+ckmum+c1v1+c2v2+⋯+cnvn.
Here, each of the ui vectors and each of the vj vectors is in S, so this expresses x as a linear combination of vectors in S that does not involve w.
If w isn't needed to express vectors in V as linear combinations of vectors in S, then we should have SpanS=SpanS′, where S′ is the set of all vectors in S except w. That is, we can discard w from the spanning set for V, and still have a spanning set.
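To make this concrete, here is a small computational sketch in Python (using NumPy; the specific vectors and the redundancy w = 2u1 − u2 are invented purely for illustration). Since Span S′ is automatically contained in Span S, equal ranks for the two matrices of column vectors indicate that discarding w does not shrink the span.

import numpy as np

# Example vectors in R^3, chosen only for illustration.
u1 = np.array([1.0, 0.0, 2.0])
u2 = np.array([0.0, 1.0, -1.0])
w = 2 * u1 - u2                       # w is a linear combination of u1 and u2

S = np.column_stack([u1, u2, w])      # columns are the vectors of S = {u1, u2, w}
S_prime = np.column_stack([u1, u2])   # S' = S with the redundant vector w discarded

print(np.linalg.matrix_rank(S))       # 2
print(np.linalg.matrix_rank(S_prime)) # 2  (same rank, so SpanS = SpanS')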
This pattern will help us reduce down to some sort of optimal spanning set: we can keep removing these redundant spanning vectors that are linear combinations of other spanning vectors until there are none left.
Subsection 18.3.2 Linear dependence and independence
A set of vectors is called linearly dependent precisely when it is non-optimal as a spanning set in the way described above: when one of the vectors in the set is a linear combination of others in the set. In practice, it is usually not obvious which vector (if any) is redundant in this way; checking each vector in turn is tedious, and it would not be a convenient way to reason about linear dependence in the abstract. However, having an expression for a vector w as a linear combination of other vectors u1,u2,…,um, such as
w=k1u1+k2u2+⋯+kmum, (✶✶)
is equivalent to having a nontrivial linear combination of these vectors equal to the zero vector:
k1u1+k2u2+⋯+kmum+(−1)w=0. (✶✶✶)
And vice versa, since if we have a nontrivial linear combination of these vectors that results in the zero vector, say
k1u1+k2u2+⋯+kmum+kw=0,
and the coefficient k on w is nonzero, then we can rearrange to get
w=(−k1/k)u1+(−k2/k)u2+⋯+(−km/k)um,
an expression for w as a linear combination of the others.
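As a quick symbolic check of this rearrangement (a sketch only: we treat the vectors as formal symbols, since isolating w uses nothing but scalar algebra), SymPy can solve a hypothetical nontrivial combination such as 2u1+3u2+4w=0 for w.

from sympy import symbols, Eq, solve

# Treat the vectors as formal symbols; the coefficients 2, 3, 4 are hypothetical.
u1, u2, w = symbols("u1 u2 w")
print(solve(Eq(2*u1 + 3*u2 + 4*w, 0), w))   # [-u1/2 - 3*u2/4]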
Again, the advantage of checking for linear combinations equal to the zero vector is that in a collection of vectors S, we usually don't know ahead of time which one will be the odd one out (that is, which one will play the role of w as above). In a nontrivial linear combination equalling 0, we can take as the redundant vector w any of the vectors whose coefficient is nonzero (which is required to perform the step of dividing by k in the algebra isolating w above).
Now, we can always take the trivial linear combination, where all coefficients are 0, to get a result of 0. But if this is the only linear combination of a set of vectors by which we can get 0 as a result, then none of the vectors can act as the redundant w as above, because an expression like (✶✶) always leads to an equality like (✶✶✶), involving a nontrivial linear combination.
This logic leads to the Test for Linear Dependence/Independence.
Procedure 18.3.1. Test for Linear Dependence/Independence.
To test whether vectors v1,v2,…,vm are linearly dependent/independent, solve the homogeneous vector equation
k1v1+k2v2+⋯+kmvm=0
in the (scalar) variables k1,k2,…,km.
If this vector equation has a nontrivial solution for these coefficient variables, then the vectors v1,v2,…,vm are linearly dependent.
Otherwise, if this vector equation has only the trivial solution k1=0,k2=0,…,km=0, then the vectors v1,v2,…,vm are linearly independent.
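In coordinates, the test amounts to solving a homogeneous linear system, so it can be automated. Below is a minimal sketch using SymPy, assuming the vectors are given as coordinate lists in Rn; the helper name independent is ours, not a standard library function. A nonzero null space vector of the matrix whose columns are v1,v2,…,vm is exactly a nontrivial choice of the coefficients k1,k2,…,km.

from sympy import Matrix

def independent(vectors):
    # Build the matrix whose columns are the given vectors; its null space
    # vectors are the solutions (k1, ..., km) of k1*v1 + ... + km*vm = 0.
    A = Matrix.hstack(*[Matrix(v) for v in vectors])
    return len(A.nullspace()) == 0    # no nonzero solutions means independent

print(independent([[1, 0, 2], [0, 1, -1], [1, 1, 1]]))  # False: third = first + second
print(independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))   # True: standard basis of R^3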
Subsection 18.3.3 Linear dependence and independence of just one or two vectors
For a pair u,v of vectors to be linearly dependent, one must be a linear combination of the other. But a linear combination of one vector is just a scalar multiple, and so a pair of vectors is linearly dependent if one is a scalar multiple of the other. If both vectors are nonzero, that scalar must also be nonzero and so could be shifted to the other side as its reciprocal:
u=kv ⟺ (1/k)u=v (for k≠0).
So nonzero vectors u,v are linearly dependent if and only if each is a scalar multiple of the other. In Rn, we would have called such vectors parallel.
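For a pair in Rn, this condition is easy to check numerically. Here is a small sketch (NumPy again, with invented example vectors): a pair is linearly dependent exactly when the matrix with the two vectors as columns has rank at most 1.

import numpy as np

def dependent_pair(u, v):
    # Rank at most 1 means the two columns span at most a line,
    # i.e. one vector is a scalar multiple of the other.
    return np.linalg.matrix_rank(np.column_stack([u, v])) <= 1

print(dependent_pair([2, -4, 6], [-1, 2, -3]))  # True: second = (-1/2) * first
print(dependent_pair([1, 0, 0], [0, 1, 0]))     # False: not parallel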
What about a set containing a single vector? Our motivation for creating the concept of linear dependence/independence was to measure whether a spanning set contained redundant information or whether it was somehow "optimal." A spanning set consisting of a single nonzero vector cannot be reduced to a smaller spanning set, so it is already optimal and we should refer to that spanning set as linearly independent. This coincides with the result of the test for linear dependence/independence for a single vector v: if v is nonzero, then there are no nontrivial solutions to the vector equation kv=0.
But what about the zero vector by itself? Scalar multiples of 0 remain 0, so Span{0} is the trivial vector space consisting of just the zero vector. Is {0} an optimal spanning set for the trivial space, or can it be reduced further? By convention, we also consider Span{} to be the trivial vector space (where {} represents a set containing no vectors, called the empty set), because we always want the Span process to return a subspace of the vector space in which we're working. So the spanning set {0} can be reduced to the empty set, and still span the same space. This line of reasoning leads us to consider the set of vectors containing only the zero vector to be linearly dependent.
Subsection 18.3.4 Linear dependence and independence in Rn
Independent directions.
In Subsection 17.3.3, we discussed how a nonzero vector in Rn spans a line through the origin. If a second vector is linearly dependent with the vector spanning the line, then as discussed in the previous subsection (Subsection 18.3.3), that second vector must be parallel with the first, hence parallel with the line. To get something linearly independent, we need to branch off in a new direction from the line.
Maximum number of linearly independent vectors.
In Discovery 18.5, we considered the possible sizes of linearly independent sets in R2 and R3. We can certainly have two linearly independent vectors in R2, since clearly the standard basis vectors e1 and e2 form a linearly independent set in R2. Could we have three? Two linearly independent vectors would have to be nonparallel, and so they would span a plane; but in R2 that plane is the entire space. Geometrically, a third linearly independent vector would have to branch off in a "new direction," as in our discussion above. But in R2 there is no new third direction in which to head: we can't have a vector breaking up out of the plane "into the third dimension," because there is no third dimension available in R2. Algebraically, if we had three vectors u=(u1,u2), v=(v1,v2), w=(w1,w2) in R2 and attempted the test for dependence/independence, we would start by setting up the vector equation
k1u+k2v+k3w=0.
Combining the linear combination on the left back into one vector, and comparing coordinates on either side, we would obtain the linear system
u1k1+v1k2+w1k3=0,
u2k1+v2k2+w2k3=0,
in the unknown coefficients k1,k2,k3. Because there are only two equations, the reduced form of the coefficient matrix for this system can have no more than two leading ones for the three unknowns, so solving requires at least one parameter, which means there are nontrivial solutions. So three vectors in R2 can never be linearly independent.
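Here is a small numerical sketch of this argument (Python with NumPy; the three vectors are arbitrary examples): the coefficient matrix has only two rows, so its rank is at most 2, leaving at least one parameter, and a corresponding nontrivial solution can be read off from the null space.

import numpy as np

# Three arbitrary example vectors in R^2.
u, v, w = np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([0.0, 5.0])

A = np.column_stack([u, v, w])        # 2x3 coefficient matrix of the homogeneous system
rank = np.linalg.matrix_rank(A)
print(rank, "leading variables,", 3 - rank, "parameter(s)")   # 2 leading, 1 parameter

# Any nonzero null space vector of A gives a nontrivial solution (k1, k2, k3);
# the last right singular vector from the SVD is one such vector.
_, _, Vt = np.linalg.svd(A)
k = Vt[-1]
print(k, np.allclose(A @ k, 0))       # a nonzero (k1, k2, k3) with k1*u + k2*v + k3*w = 0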
We come to a similar conclusion in R3 using both geometric and algebraic reasoning: three independent vectors in R3 is certainly possible (for example, the standard basis vectors), but a set of four vectors in R3 can never be linearly independent. Geometrically, once you have three independent vectors pointing in three "independent directions," there is no new direction in R3 for a fourth independent vector to take. Algebraically, we could set up the test for independence with four vectors in R3 and it would lead to a homogeneous system of three equations (one for each coordinate) in four variables (one unknown coefficient for each vector). Since the system would have more variables than equations, it would require parameters to solve, leading to nontrivial solutions.
And the pattern continues in higher dimensions: a collection of more than four vectors in R4 must be linearly dependent, a collection of more than five vectors in R5 must be linearly dependent, and so on. In fact, this pattern can also be found in abstract vector spaces (see Lemma 18.5.7 in Subsection 18.5.2). And this pattern will help us transplant the idea of dimension from Rn to abstract spaces.
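As a quick empirical illustration of this pattern (a sketch only, using random integer vectors in NumPy), any n+1 vectors in Rn give a coefficient matrix with at most n leading ones for n+1 unknowns, so the independence test always finds nontrivial solutions.

import numpy as np

rng = np.random.default_rng(0)
for n in (2, 3, 4, 5):
    A = rng.integers(-5, 6, size=(n, n + 1))   # n+1 random vectors in R^n, as columns
    rank = np.linalg.matrix_rank(A)
    # rank <= n < n+1 unknowns, so the homogeneous system has nontrivial solutions
    print(f"R^{n}: {n + 1} vectors, rank {rank} -> dependent: {rank < n + 1}")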