
Section 17.4 Examples

Subsection 17.4.1 Testing dependence/independence

Here we will carry out examples of applying the Test for Linear Dependence/Independence.

Example 17.4.1. Testing dependence/independence in Rn.

Are the vectors (1,0,0,1), (1,1,0,−1), (2,1,0,0), (5,1,0,5) in R4 linearly dependent or independent? Set up the test:
k1(1,0,0,1) + k2(1,1,0,−1) + k3(2,1,0,0) + k4(5,1,0,5) = (0,0,0,0).
Notice how we have used the proper zero vector in this space on the right-hand side. On the left-hand side, we want to combine the expression into one vector so that we can compare with the zero vector.
(k1,0,0,k1) + (k2,k2,0,−k2) + (2k3,k3,0,0) + (5k4,k4,0,5k4) = (0,0,0,0)
(k1+k2+2k3+5k4, k2+k3+k4, 0, k1−k2+5k4) = (0,0,0,0)
Comparing components on either side, we obtain a system of four equations in the unknown scalars from the linear combination:
k1 + k2 + 2k3 + 5k4 = 0,
k2 + k3 + k4 = 0,
0 = 0,
k1 − k2 + 5k4 = 0.
Now we’ll solve this homogeneous system by row reducing its coefficient matrix.
(✶) \begin{bmatrix} 1 & 1 & 2 & 5 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 1 & -1 & 0 & 5 \end{bmatrix} \xrightarrow{\text{row reduce}} \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}
Note that here it was not necessary to reduce all the way to RREF, as we are not actually interested in the solutions to this system — we only need to know whether there exist nontrivial solutions. From the reduced matrix, we can see that k3 is a free variable and would be assigned a parameter in the general solution. The necessity of a parameter means there are an infinite number of solutions, which in particular means there are nontrivial solutions. Therefore, this collection of vectors is linearly dependent.
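The row reduction above is easy to double-check with a computer algebra system. Here is a minimal sketch using SymPy (an assumed tool choice; any CAS with an RREF routine would do), with the test vectors as columns:

```python
from sympy import Matrix

# Columns are the four vectors from R^4 being tested.
A = Matrix([
    [1, 1, 2, 5],
    [0, 1, 1, 1],
    [0, 0, 0, 0],
    [1, -1, 0, 5],
])

rref, pivots = A.rref()
# Fewer pivots than columns means a free variable, hence nontrivial
# solutions of the homogeneous system: the vectors are dependent.
dependent = len(pivots) < A.cols
print(dependent)  # → True
```

As in the example, only the pivot count matters here, not the particular solutions.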

Remark 17.4.2.

Notice how the vectors from R4 that we were testing in the previous example ended up as columns in the coefficient matrix in (✶) — we saw a similar pattern in Example 16.4.10 (and in the other examples in Subsection 16.4.3), where we tested whether a particular vector was in the span of some collection of vectors.

Example 17.4.3. Testing dependence/independence in Mm×n(R).

  1. Consider the matrices in Discovery 17.4.a. First we set up the Test for Linear Dependence/Independence. Again, we use the proper zero vector on the right-hand side, and then we combine the expression on the left-hand side into one vector so that we may compare against the zero vector.
    \begin{align*}
    k_1\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + k_2\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + k_3\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} &= \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \\
    \begin{bmatrix} k_1 & 0 \\ 0 & k_1 \end{bmatrix} + \begin{bmatrix} 0 & k_2 \\ k_2 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & k_3 \end{bmatrix} &= \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \\
    \begin{bmatrix} k_1 & k_2 \\ k_2 & k_1 + k_3 \end{bmatrix} &= \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}
    \end{align*}
    There is no need to set up a system of equations here — we can see from comparing the top rows on either side that k1=0 and k2=0. Then, from the (2,2) entries, we see that k1+k3=0. But since we already have k1=0, we get k3=0 as well. So there is only the trivial solution, and these vectors are linearly independent.
  2. Consider the matrices in Discovery 17.4.b. Again, we start by setting up the Test for Linear Dependence/Independence using the appropriate zero vector.
    k_1\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} + k_2\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + k_3\begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}
    As before, this will lead to a homogeneous system of equations in the unknown scalars k1,k2,k3, and the coefficient matrix of this system will have the entries of the three vectors as columns:
    \begin{bmatrix} 1 & 1 & 3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ -1 & 1 & 2 \end{bmatrix} \xrightarrow{\text{row reduce}} \begin{bmatrix} 1 & 0 & 1/2 \\ 0 & 1 & 5/2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.
    From the reduced matrix, we see that k3 is a free variable and will be assigned a parameter in the general solution. The necessity of a parameter implies nontrivial solutions, so these vectors are linearly dependent.
    The reduced matrix can also be used to tell us exactly how these vectors are linearly dependent. Since k3 is free, we obtain a solution to the system for every possible value we assign to it. To get a simple nontrivial solution, let’s set k3=1. Then solving the equations represented by the nonzero rows of the reduced matrix gives us k1=−1/2 and k2=−5/2. Putting these back into the vector equation from when we first set up the Test for Linear Dependence/Independence, we get
    \begin{gather*}
    \left(-\tfrac{1}{2}\right)\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} + \left(-\tfrac{5}{2}\right)\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \\
    \Longrightarrow\quad \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix} = \tfrac{1}{2}\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} + \tfrac{5}{2}\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.
    \end{gather*}
    From this we see exactly how one of the vectors in our collection can be expressed as a linear combination of others in the collection.
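Tests like these can be scripted by flattening each matrix into a column vector and collecting the columns into one coefficient matrix. Below is a minimal sketch with SymPy (an assumed tool choice), using the three matrices from the second part of this example; a nullspace vector directly recovers the dependence relation:

```python
from sympy import Matrix

A1 = Matrix([[1, 0], [0, -1]])
A2 = Matrix([[1, 0], [0, 1]])
A3 = Matrix([[3, 0], [0, 2]])

# Flatten each matrix to a column (entry order doesn't affect the test)
# and use the columns as the coefficient matrix of the homogeneous system.
K = Matrix.hstack(A1.vec(), A2.vec(), A3.vec())

basis = K.nullspace()     # nonempty exactly when there are nontrivial solutions
print(len(basis) > 0)     # → True: the matrices are linearly dependent

# Any nullspace vector gives scalars with k1*A1 + k2*A2 + k3*A3 = 0.
k1, k2, k3 = basis[0]
print(k1 * A1 + k2 * A2 + k3 * A3 == Matrix.zeros(2, 2))  # → True
```

The nullspace basis vector SymPy returns is exactly the (k3 = 1)-solution worked out above, up to scaling.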

Example 17.4.4. Testing dependence/independence in Pn(R).

Consider the polynomials from Discovery 17.4.c. Are they linearly dependent or independent? Set up the test, using the zero polynomial as the zero vector on the right-hand side:
k1(1+x) + k2(1+x2) + k3(2−x+3x2) = 0.
As usual, we simplify the linear combination on the left-hand side into one vector. Here, this means collecting like terms.
(k1+k2+2k3) + (k1−k3)x + (k2+3k3)x2 = 0.
The polynomial on the left can only be equal to the zero polynomial if all its coefficients are zero, leading to the following system of equations:
k1 + k2 + 2k3 = 0,
k1 − k3 = 0,
k2 + 3k3 = 0.
Once again, we reduce the coefficient matrix to determine if there are nontrivial solutions:
\begin{bmatrix} 1 & 1 & 2 \\ 1 & 0 & -1 \\ 0 & 1 & 3 \end{bmatrix} \xrightarrow{\text{row reduce}} \begin{bmatrix} 1 & 0 & -1 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \end{bmatrix}.
Since variable k3 is free, there exist nontrivial solutions and the vectors are linearly dependent.
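For polynomials, the coefficient matrix of the test simply stores each polynomial's coefficients as a column. A minimal sketch with SymPy (an assumed tool choice), using the three polynomials of this example:

```python
from sympy import Matrix

# Rows correspond to the coefficients of 1, x, x^2; columns to the
# polynomials 1 + x, 1 + x^2, 2 - x + 3x^2.
A = Matrix([
    [1, 1, 2],
    [1, 0, -1],
    [0, 1, 3],
])

rref, pivots = A.rref()
# A free variable (fewer pivots than columns) signals nontrivial
# solutions, hence linear dependence.
print(len(pivots) < A.cols)  # → True
```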

Example 17.4.5. Testing dependence/independence in F(D).

Let’s do an example in a function space. Consider vectors f(x)=x, g(x)=sin(πx/2) and h(x)=cos(πx/2) in F(R), the space of functions defined on the whole real number line. Are these functions linearly dependent or independent? Let’s start the test by setting up the vector equation
k1x+k2sin(πx/2)+k3cos(πx/2)=0.
Here, there is no algebraic way we can simplify the expression on the left-hand side. However, remember that the 0 on the right-hand side represents the zero function, and that functions are only equal when they always produce the same output given the same input (Definition 15.5.1). So let’s try substituting some input x-values into the functions on either side of our vector equation above:
x = 0:  k1·0 + k2·0 + k3·1 = 0,
x = 1:  k1·1 + k2·1 + k3·0 = 0,
x = 2:  k1·2 + k2·0 + k3·(−1) = 0.
From the first equation we see k3=0. Combining this with the third equation we also get k1=0. Then combining that with the second equation we finally get k2=0. Since only the trivial solution is possible, these vectors are linearly independent.
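The substitution trick above amounts to sampling the functions at a few inputs and checking the rank of the resulting matrix. Here is a numerical sketch with NumPy (an assumed tool choice); note that a full-rank sample matrix proves independence, while a rank-deficient one on its own would prove nothing:

```python
import numpy as np

# Sample f(x) = x, g(x) = sin(pi*x/2), h(x) = cos(pi*x/2)
# at the inputs x = 0, 1, 2 used in the example.
xs = np.array([0.0, 1.0, 2.0])
A = np.column_stack([xs, np.sin(np.pi * xs / 2), np.cos(np.pi * xs / 2)])

# Rank 3 means the sampled homogeneous system forces k1 = k2 = k3 = 0,
# so the functions are linearly independent on all of R.
print(np.linalg.matrix_rank(A))  # → 3
```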

Subsection 17.4.2 Linear independence of “standard” spanning sets

Finally, let’s check the “standard” spanning sets of our favourite example vector spaces.

Example 17.4.6. Independence of the standard basis vectors in Rn.

The standard basis vectors e1, e2, …, en form a spanning set for Rn, and they are also linearly independent, as we see if we apply the test:
k1e1 + k2e2 + ⋯ + knen = 0 ⟹ (k1, k2, …, kn) = (0, 0, …, 0).
So clearly each scalar kj must be zero, which means there is only the trivial solution.
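In matrix terms, the standard basis vectors are the columns of the identity matrix, so the homogeneous system of the test has only the trivial solution. A quick check with SymPy (an assumed tool choice):

```python
from sympy import eye

# The columns of the n x n identity matrix are e1, ..., en; an empty
# nullspace means only the trivial solution, hence independence.
n = 4
print(eye(n).nullspace())  # → []
```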

Example 17.4.7. Independence of the standard spanning vectors in Mm×n(R).

In Remark 16.4.16, we noted that there is also a “standard” set of spanning vectors in Mm×n(R), consisting of those matrices that have all zero entries except for a single 1 in one specific entry. We might call these “standard basis vectors” for Mm×n(R). Write Eij for the matrix of this type with a 1 in the (i,j)th entry. These spanning vectors are also linearly independent. Here, when we apply the Test for Linear Dependence/Independence, it is best if we enumerate our scalars with the same scheme as the vectors:
k11E11 + k12E12 + ⋯ + kmnEmn = 0 ⟹ [kij] = 0.
Again, we immediately see that only the trivial solution is possible.

Example 17.4.8. Independence of the standard spanning vectors in Pn(R).

Also in Remark 16.4.16, we noted that the powers of x (along with the constant polynomial 1) form a spanning set for Pn(R). We might call these “standard basis vectors” for Pn(R). Are they linearly independent? Apply the Test:
k0·1 + k1x + k2x2 + ⋯ + knxn = 0.
If any of these coefficients are nonzero, the polynomial on the left-hand side will be nonzero, so only the trivial solution is possible. Therefore, powers of x are always linearly independent in Pn(R).
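This coefficient argument can be mirrored symbolically; below is a small sketch with SymPy's Poly (an assumed tool choice), for the case n = 3:

```python
from sympy import symbols, Poly

x, k0, k1, k2, k3 = symbols('x k0 k1 k2 k3')
p = Poly(k0 + k1*x + k2*x**2 + k3*x**3, x)

# all_coeffs() lists the coefficients from highest degree down; the
# polynomial is the zero polynomial exactly when all of them vanish,
# which forces the trivial solution in the test.
print(p.all_coeffs())  # → [k3, k2, k1, k0]
```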