Use this and the expression for \(\uvec{x}\) in Task a to express \(\uvec{x}\) as a linear combination of just \(\uvec{v}_1\) and \(\uvec{v}_3\text{.}\)
Task b shows that \(\uvec{x}\) is in \(\Span \{\uvec{v}_1,\uvec{v}_3\}\text{.}\) Do you think that similar calculations and the same reasoning can be carried out for every vector in \(\Span \{\uvec{v}_1,\uvec{v}_2,\uvec{v}_3\}\text{?}\)
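To see the general pattern behind these calculations, here is a sketch of the substitution with placeholder coefficients (the letters \(a,b,c_1,c_2,c_3\) are generic scalars, not the particular values from Task a): if \(\uvec{v}_2 = a\uvec{v}_1 + b\uvec{v}_3\) and \(\uvec{x} = c_1\uvec{v}_1 + c_2\uvec{v}_2 + c_3\uvec{v}_3\text{,}\) then
\begin{equation*}
\uvec{x} = c_1\uvec{v}_1 + c_2(a\uvec{v}_1 + b\uvec{v}_3) + c_3\uvec{v}_3 = (c_1 + c_2 a)\uvec{v}_1 + (c_2 b + c_3)\uvec{v}_3,
\end{equation*}
which lies in \(\Span \{\uvec{v}_1,\uvec{v}_3\}\) no matter what the coefficients \(c_1,c_2,c_3\) are.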
Discovery 17.1 demonstrates a common pattern: when one of the vectors in a spanning set can be expressed as a linear combination of the others, that vector becomes redundant, and a smaller spanning set can be used in place of the original one. We’ll give this situation a name: a set of vectors is called linearly dependent if (at least) one of the vectors in the set can be written as a linear combination of the other vectors in the set; otherwise the set of vectors is called linearly independent. However, it can be tedious to check each vector in a set one by one to see whether it is a linear combination of the others. Luckily, for a finite set of vectors, there is a way to check all of them at once.
Given vectors \(\uvec{v}_1,\uvec{v}_2,\dotsc,\uvec{v}_m\text{,}\) consider the vector equation
\begin{equation*}
k_1 \uvec{v}_1 + k_2 \uvec{v}_2 + \dotsb + k_m \uvec{v}_m = \uvec{0}, \tag{✶}
\end{equation*}
where the coefficients \(k_1,k_2,\dotsc,k_m\) are (scalar) variables.
If vector equation (✶) has a nontrivial solution in the variables \(k_1,k_2,\dotsc,k_m\text{,}\) then the vectors \(\uvec{v}_1,\uvec{v}_2,\dotsc,\uvec{v}_m\) are linearly dependent.
Otherwise, if vector equation (✶) has only the trivial solution \(k_1=0,k_2=0,\dotsc,k_m=0\text{,}\) then the vectors \(\uvec{v}_1,\uvec{v}_2,\dotsc,\uvec{v}_m\) are linearly independent.
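For column vectors in \(\mathbb{R}^n\text{,}\) equation (✶) is just a homogeneous linear system whose coefficient matrix has \(\uvec{v}_1,\uvec{v}_2,\dotsc,\uvec{v}_m\) as its columns, so a single row reduction decides the test for the whole set at once. As an illustration with made-up vectors in \(\mathbb{R}^3\) (chosen only to show the setup, not taken from any of the discovery activities):
\begin{equation*}
k_1 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
+ k_2 \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}
+ k_3 \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
\qquad\Longleftrightarrow\qquad
\begin{bmatrix} 1 \amp 1 \amp 1 \\ 0 \amp 1 \amp 1 \\ 0 \amp 0 \amp 1 \end{bmatrix}
\begin{bmatrix} k_1 \\ k_2 \\ k_3 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.
\end{equation*}
Here the coefficient matrix has a pivot in every column, so the system has only the trivial solution and these three vectors are linearly independent.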
The next discovery activity will help you understand the test for linear dependence/independence. To keep it simple, we’ll consider just three vectors at a time.
Suppose that, for three vectors \(\uvec{u}_1,\uvec{u}_2,\uvec{u}_3\text{,}\) the vector equation
\begin{equation*}
k_1 \uvec{u}_1 + k_2 \uvec{u}_2 + k_3 \uvec{u}_3 = \uvec{0} \tag{✶✶}
\end{equation*}
has a nontrivial solution. This means that there are values for the scalars \(k_1,k_2,k_3\text{,}\) at least one of which is not zero, so that equation (✶✶) is true.
Use some algebra to manipulate equation (✶✶) to demonstrate that one of the vectors can be expressed as a linear combination of the others (and hence, by definition, the vectors \(\uvec{u}_1,\uvec{u}_2,\uvec{u}_3\) are linearly dependent).
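Here is a sketch of that algebra, under the assumption that it is \(k_1\) that is nonzero (if instead \(k_2\) or \(k_3\) is the nonzero one, the same steps apply with the roles of the vectors swapped): starting from (✶✶),
\begin{align*}
k_1 \uvec{u}_1 \amp= -k_2 \uvec{u}_2 - k_3 \uvec{u}_3, \\
\uvec{u}_1 \amp= -\frac{k_2}{k_1}\,\uvec{u}_2 - \frac{k_3}{k_1}\,\uvec{u}_3,
\end{align*}
where dividing by \(k_1\) is allowed precisely because \(k_1 \neq 0\text{.}\)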
Now suppose instead that, for vectors \(\uvec{w}_1,\uvec{w}_2,\uvec{w}_3\text{,}\) the vector equation
\begin{equation*}
k_1 \uvec{w}_1 + k_2 \uvec{w}_2 + k_3 \uvec{w}_3 = \uvec{0} \tag{✶✶✶}
\end{equation*}
has only the trivial solution. These vectors must then be linearly independent. Suppose they weren’t: for example, suppose \(\uvec{w}_3 = c_1\uvec{w}_1 + c_2\uvec{w}_2\) were true for some scalars \(c_1,c_2\text{.}\) Manipulate this expression for \(\uvec{w}_3\) until it says something about equation (✶✶✶). Do you see now why \(\uvec{w}_1,\uvec{w}_2,\uvec{w}_3\) cannot satisfy the definition of linear dependence, and hence must be linearly independent?
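A sketch of that manipulation: rearranging the supposed expression for \(\uvec{w}_3\) gives
\begin{equation*}
c_1 \uvec{w}_1 + c_2 \uvec{w}_2 + (-1)\,\uvec{w}_3 = \uvec{0},
\end{equation*}
which would be a solution of (✶✶✶) with \(k_3 = -1 \neq 0\text{,}\) that is, a nontrivial solution, contradicting the assumption that (✶✶✶) has only the trivial one.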
After setting up the vector equation from the test for linear dependence/independence, you are solving for the scalars \(k_1,k_2,k_3\text{,}\) not for \(x\text{.}\) On the right-hand side, the zero represents the zero vector, which in this space is the zero polynomial. What are the coefficients on powers of \(x\) in the zero polynomial? The left-hand side, being equal, must have the same coefficients.
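For instance, with the made-up polynomials \(p_1(x) = 1 + x\text{,}\) \(p_2(x) = x + x^2\text{,}\) \(p_3(x) = 1 + x^2\) (chosen only to illustrate the setup, not taken from the exercise), the test equation \(k_1 p_1(x) + k_2 p_2(x) + k_3 p_3(x) = 0\) becomes
\begin{equation*}
(k_1 + k_3) \cdot 1 + (k_1 + k_2) x + (k_2 + k_3) x^2 = 0 + 0 x + 0 x^2,
\end{equation*}
and comparing coefficients on like powers of \(x\) yields the homogeneous system \(k_1 + k_3 = 0\text{,}\) \(k_1 + k_2 = 0\text{,}\) \(k_2 + k_3 = 0\text{,}\) whose only solution is the trivial one, so these three polynomials are linearly independent.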
If the test for linear dependence/independence is to remain true in the case of a “set” of vectors consisting of just one vector, how should we define linear dependence/independence for such a set?
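As a starting point, note that with a single vector \(\uvec{v}_1\) the test equation (✶) reduces to
\begin{equation*}
k_1 \uvec{v}_1 = \uvec{0},
\end{equation*}
so the question becomes: for which vectors \(\uvec{v}_1\) does this equation have a nontrivial solution \(k_1 \neq 0\text{?}\)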