Section 18.4 Examples
Subsection 18.4.1 Checking a basis
Let’s start by working through Discovery 18.1, where we were asked to determine whether a collection of vectors forms a basis for a vector space. In each case we are looking to check two properties: that the collection is linearly independent, and that it forms a spanning set for the whole vector space.
Example 18.4.1. A collection of vectors too large to be a basis.
We already know that \(\R^3\) can be spanned by the three standard basis vectors, and so Lemma 17.5.7 tells us that any set of more than three vectors in \(\R^3\) must be linearly dependent. Set \(S\) contains four vectors, so it can’t be a basis because it is linearly dependent. However, \(S\) is a spanning set — can you see how?
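As a quick computational illustration (not part of the original discussion), here is a minimal SymPy sketch of this pattern. The four vectors below are hypothetical stand-ins, since the actual set \(S\) appears in Discovery 18.1.a; the point is that with more columns than rows, row reduction must leave at least one column without a pivot.
```python
from sympy import Matrix

# Hypothetical stand-ins for the four vectors of S (the actual set
# appears in Discovery 18.1.a); any four vectors in R^3 would do.
vectors = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]

# Place the vectors as the columns of a 3x4 matrix.
A = Matrix(vectors).T

# rref() returns (reduced matrix, pivot column indices).
R, pivots = A.rref()
print(len(pivots) < A.cols)   # True: at most 3 pivots for 4 columns,
                              # so the set must be linearly dependent
```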
Example 18.4.2. A nonstandard basis for \(\R^3\).
This set \(S\) is linearly independent, which can be verified using the Test for Linear Dependence/Independence. As we saw in many examples in Section 17.4, the vector equation
\begin{equation*}
k_1(1,0,0) + k_2(1,1,0) + k_3(0,0,2) = (0,0,0)
\end{equation*}
that we use to begin the Test for Linear Dependence/Independence leads to a homogeneous system. In this case, that system has coefficient matrix
\begin{equation*}
\begin{bmatrix} 1 \amp 1 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 2 \end{bmatrix},
\end{equation*}
where the vectors in \(S\) appear as columns. This matrix can be reduced to \(I\) in two operations, and so only the trivial solution is possible.
The set \(S\) is also a spanning set for \(\R^3\text{.}\) To check this, we need to make sure that every vector in \(\R^3\) can be expressed as a linear combination of the vectors in \(S\text{.}\) That is, we need to check that if \((x,y,z)\) is an arbitrary vector in \(\R^3\text{,}\) then we can always determine scalars \(a,b,c\) so that
\begin{equation*}
a(1,0,0) + b(1,1,0) + c(0,0,2) = (x,y,z).
\end{equation*}
Similar to the Test for Linear Dependence/Independence, the above vector equation leads to a system of equations with augmented matrix
\begin{equation*}
\left[\begin{array}{ccc|c}
1 \amp 1 \amp 0 \amp x \\
0 \amp 1 \amp 0 \amp y \\
0 \amp 0 \amp 2 \amp z
\end{array}\right].
\end{equation*}
The same two operations as before will reduce the coefficient part of this matrix to \(I\text{,}\) so that a solution always exists, regardless of the values of \(x,y,z\text{.}\) But it’s also possible to determine a solution directly by inspection of the vector equation above, as clearly
\begin{equation*}
(x-y)(1,0,0) + y(1,1,0) + \frac{z}{2}(0,0,2) = (x,y,z)
\end{equation*}
will be a solution.
Because this set is both linearly independent and a spanning set, it is a basis for the space.
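As a computational cross-check (a SymPy sketch of our own, not something the example depends on), both conditions can be verified at once, since the vectors of \(S\) fit into a square matrix:
```python
from sympy import Matrix, symbols

# The vectors of S as the columns of a 3x3 matrix.
A = Matrix([[1, 1, 0],
            [0, 1, 0],
            [0, 0, 2]])

# Independence: the rref of A is the identity, so the homogeneous
# system from the test has only the trivial solution.
R, pivots = A.rref()
print(R == Matrix.eye(3))   # True

# Spanning: solve A*(a, b, c) = (x, y, z) symbolically.
x, y, z = symbols('x y z')
print(A.solve(Matrix([x, y, z])).T)   # Matrix([[x - y, y, z/2]])
```
The symbolic solution agrees with the decomposition \((x-y)(1,0,0) + y(1,1,0) + \frac{z}{2}(0,0,2)\) found by inspection above.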
Example 18.4.3. An independent set that does not span.
In Discovery 18.1.c, we considered a set \(S\) of three vectors in \(V = \matrixring_2(\R)\text{.}\)
This set \(S\) is linearly independent (check using the test!), but it is not a spanning set. We can see that a linear combination of these vectors will never have a nonzero \((1,2)\) entry. In particular, the vector
\begin{equation*}
\begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix}
\end{equation*}
is not in \(\Span S\text{.}\) Since \(S\) does not span the entire space, it is not a basis for \(V\text{.}\)
Example 18.4.4. A set that neither spans nor is independent.
In Discovery 18.1.d, we considered a set \(S\) of four vectors in the space \(V\) of all \(2\times 2\) upper triangular matrices.
This set of vectors is not a basis because it is neither a spanning set nor linearly independent.
It can’t be a spanning set for the space \(V\) because a linear combination of these vectors will always have the same number in both diagonal entries. In particular, the vector
\begin{equation*}
\left[\begin{array}{rr} 1 \amp 0 \\ 0 \amp -1 \end{array}\right]
\end{equation*}
is upper triangular, so it is in \(V\text{,}\) but it is not in \(\Span S\text{.}\)
Also, we could use the test to determine that these vectors are linearly dependent, but we can see directly that one of these vectors is a linear combination of the others:
\begin{equation*}
\begin{bmatrix}1 \amp 3\\0 \amp 1\end{bmatrix}
= \begin{bmatrix}1 \amp 1\\0 \amp 1\end{bmatrix}
+ \begin{bmatrix}1 \amp 2\\0 \amp 1\end{bmatrix}
- \begin{bmatrix}1 \amp 0\\0 \amp 1\end{bmatrix}.
\end{equation*}
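For those who want to verify both failures computationally, here is a SymPy sketch (our own illustration, not from the text): recording each upper triangular matrix by its \((1,1)\), \((1,2)\), \((2,2)\) entries, with the four matrices from the dependence relation above as columns, turns both conditions into rank computations.
```python
from sympy import Matrix

# Each 2x2 upper triangular matrix is recorded by its (1,1), (1,2),
# (2,2) entries; the four matrices of S become the columns of A.
A = Matrix([[1, 1, 1, 1],    # (1,1) entries
            [3, 1, 2, 0],    # (1,2) entries
            [1, 1, 1, 1]])   # (2,2) entries

print(A.rank())   # 2: fewer than 4 columns (dependent),
                  # and fewer than dim V = 3 (cannot span)

# The matrix with diagonal entries 1, -1 corresponds to (1, 0, -1);
# appending it as a column raises the rank, so it lies outside Span S.
b = Matrix([1, 0, -1])
print(A.row_join(b).rank())   # 3
```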
Example 18.4.5. The standard basis for the space of \(3 \times 3\) lower triangular matrices.
In Discovery 18.1.e, we considered a set \(S\) of six vectors in the space \(V\) of all \(3\times 3\) lower triangular matrices.
We might call these matrices the “standard basis vectors” for the space of \(3\times 3\) lower triangular matrices, since when we simplify a linear combination of them, such as
\begin{align}
k_{11}\begin{bmatrix}1 \amp 0 \amp 0\\0 \amp 0 \amp 0\\0 \amp 0 \amp 0\end{bmatrix}
+ k_{21}\begin{bmatrix}0 \amp 0 \amp 0\\1 \amp 0 \amp 0\\0 \amp 0 \amp 0\end{bmatrix}
+ k_{22}\begin{bmatrix}0 \amp 0 \amp 0\\0 \amp 1 \amp 0\\0 \amp 0 \amp 0\end{bmatrix}\notag\\
+ k_{31}\begin{bmatrix}0 \amp 0 \amp 0\\0 \amp 0 \amp 0\\1 \amp 0 \amp 0\end{bmatrix}
+ k_{32}\begin{bmatrix}0 \amp 0 \amp 0\\0 \amp 0 \amp 0\\0 \amp 1 \amp 0\end{bmatrix}
+ k_{33}\begin{bmatrix}0 \amp 0 \amp 0\\0 \amp 0 \amp 0\\0 \amp 0 \amp 1\end{bmatrix}\notag\\
= \begin{bmatrix}k_{11} \amp 0 \amp 0\\k_{21} \amp k_{22} \amp 0\\k_{31} \amp k_{32} \amp k_{33}\end{bmatrix},\tag{✶}
\end{align}
we see that the coefficients in the linear combination on the left correspond directly to the entries in the resulting sum matrix on the right, just as with other “standard” bases that we’ve encountered.
The set \(S\) is a spanning set for \(V\text{,}\) since by varying the coefficients in the general linear combination (✶) above we can clearly obtain every possible vector in this space.
The left-hand side of (✶) is also the left-hand side of the vector equation that we use in the Test for Linear Dependence/Independence, and from the right-hand side of (✶) we can see that if we set this linear combination to equal the zero vector (which is the \(3\times 3\) zero matrix here), the only solution is the trivial one.
Since \(S\) is both linearly independent and a spanning set, it is a basis for \(V\text{.}\)
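The pattern in (✶) can also be confirmed mechanically. Below is a small SymPy sketch (ours, not the text’s) that builds the general linear combination of the six standard vectors and simplifies it:
```python
from sympy import symbols, zeros

k = symbols('k11 k21 k22 k31 k32 k33')
positions = [(0, 0), (1, 0), (1, 1), (2, 0), (2, 1), (2, 2)]

# Build the general linear combination of the six standard vectors.
total = zeros(3, 3)
for coeff, (i, j) in zip(k, positions):
    E = zeros(3, 3)
    E[i, j] = 1
    total += coeff * E

print(total)
# Matrix([[k11, 0, 0], [k21, k22, 0], [k31, k32, k33]])
# Each coefficient lands in its own entry, so the combination is zero
# only when every coefficient is zero (independence), and every lower
# triangular matrix arises from some choice of coefficients (spanning).
```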
Example 18.4.6. Another independent set that does not span.
In Discovery 18.1.f, we considered a set \(S\) of three vectors in the space \(V = \poly_3(\R)\text{.}\)
We have already seen in Subsection 17.4.2 that powers of \(x\) are always linearly independent in a space of polynomials. But this set of polynomials cannot be a spanning set for \(\poly_3(\R)\) because no linear combination of \(1,x,x^2\) will ever produce a polynomial of degree \(3\text{.}\) So \(S\) is not a basis.
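To see the failure concretely, here is a short SymPy check (an illustration of ours, under the usual identification of a polynomial \(a_0 + a_1 x + a_2 x^2 + a_3 x^3\) with its coefficient vector \((a_0,a_1,a_2,a_3)\)):
```python
from sympy import Matrix

# Coefficient vectors in P_3(R): the columns of A are 1, x, x^2,
# and b represents the target polynomial x^3.
A = Matrix([[1, 0, 0],
            [0, 1, 0],
            [0, 0, 1],
            [0, 0, 0]])
b = Matrix([0, 0, 0, 1])

# Appending b strictly raises the rank, so A*k = b is inconsistent
# and x^3 is not in Span S.
print(A.rank(), A.row_join(b).rank())   # 3 4
```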
Example 18.4.7. The standard basis for \(\poly_3(\R)\).
In Discovery 18.1.g, we considered a set \(S\) of four vectors in the space \(V = \poly_3(\R)\text{.}\)
Again, we know that powers of \(x\) are linearly independent in a space of polynomials. However, this time \(S\) is also a spanning set, since we naturally write polynomials of degree \(3\) as linear combinations of powers of \(x\text{:}\)
\begin{equation*}
a_0\cdot 1 + a_1 x + a_2 x^2 + a_3 x^3.
\end{equation*}
Such linear combinations can also be used to produce polynomials of degree less than \(3\text{,}\) by setting the coefficients on the higher powers to \(0\text{.}\) Since \(S\) is both independent and a spanning set, it is a basis for \(\poly_3(\R)\text{.}\)
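In computational terms, decomposing a polynomial over this basis just reads off its coefficients. A minimal SymPy sketch, with a sample cubic of our own choosing:
```python
from sympy import Poly, symbols

x = symbols('x')

# A sample cubic (hypothetical, purely for illustration).
p = Poly(2 - x + 4*x**3, x)

# all_coeffs() lists coefficients from highest power down, including
# zeros; reversing gives the coordinates relative to 1, x, x^2, x^3.
print(p.all_coeffs()[::-1])   # [2, -1, 0, 4]
```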
Remark 18.4.8.
After we study the concept of dimension in the next chapter, the process of determining whether a set of vectors is a basis will become simpler. It is fairly straightforward to check the linear independence condition, since this usually reduces to solving a homogeneous system of linear equations, but checking the spanning condition directly is more tedious. In Chapter 19, we will see that if we know the correct number of vectors required in a basis, we only need to check one of the two conditions in the definition of basis (Corollary 19.5.6). And, as mentioned, usually it is the linear independence condition that is easier to verify.
Subsection 18.4.2 Standard bases
In Subsection 17.4.2, we checked that certain “standard” spanning sets for our main examples of vector spaces were also linearly independent. Since each of these sets both spans and is linearly independent, each is a basis for the space that contains it. We’ll list them again here.
Example 18.4.9. The standard basis of \(\R^n\).
The standard basis vectors \(\uvec{e}_1,\uvec{e}_2,\dotsc,\uvec{e}_n\) form a basis for \(\R^n\text{,}\) justifying the word “basis” in our description “standard basis vectors” for these vectors.
Example 18.4.10. The standard basis of \(\matrixring_{m \times n}(\R)\).
The space \(\matrixring_{m \times n}(\R)\) of \(m \times n\) matrices also has a standard basis: the collection of matrices \(E_{ij}\) that have all entries equal to \(0\) except for a single \(1\) in the \(\nth[(i,j)]\) entry.
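These matrices are easy to generate programmatically. A minimal SymPy sketch (the helper name standard_basis is ours):
```python
from sympy import zeros

def standard_basis(m, n):
    """Return the standard basis matrices E_ij of the m x n matrix space."""
    basis = []
    for i in range(m):          # i indexes rows, j indexes columns
        for j in range(n):
            E = zeros(m, n)
            E[i, j] = 1         # single 1 in the (i, j) entry
            basis.append(E)
    return basis

print(len(standard_basis(2, 3)))   # 6 = m*n basis matrices
```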
Example 18.4.11. The two standard bases of \(\poly_n(\R)\).
A space of polynomials \(\poly_n(\R)\) also has a standard basis: the collection \(1,x,x^2,x^3,\dotsc,x^n\) of powers of \(x\text{.}\)
As an ordered basis, we have two reasonable choices here: the order already presented, and the reverse order \(x^n,x^{n-1},\dotsc,x^2,x,1\text{.}\) We will stick with the order of increasing powers of \(x\text{,}\) so that when we index the coefficients in a linear combination, as in
\begin{equation*}
a_0 \cdot 1 + a_1 x + a_2 x^2 + \dotsc + a_n x^n,
\end{equation*}
then their indices are increasing with the exponents on \(x\text{.}\)
Subsection 18.4.3 Coordinate vectors
Finally, we’ll do some computations with coordinate vectors, by working through Discovery 18.4 and Discovery 18.5.
First, from Discovery 18.4.
Example 18.4.12. Determining a coordinate vector.
-
In Discovery 18.4.a, we considered a vector \(\uvec{w}\) in \(\matrixring_2(\R)\) relative to the standard basis.

First, decompose \(\uvec{w}\) as a linear combination of the vectors in \(S\text{.}\) Since \(S\) is the standard basis for \(\matrixring_2(\R)\text{,}\) this can be done by inspection:
\begin{equation*}
\left[\begin{array}{rr} -1 \amp 5 \\ 3 \amp -2 \end{array}\right]
= (-1)\begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}
+ 5\begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix}
+ 3\begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix}
+ (-2)\begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix}.
\end{equation*}
To get the coordinate vector, we wrap the four coefficients up (in order) in an \(\R^4\) vector:
\begin{equation*}
\rmatrixOfplain{\uvec{w}}{S} = (-1,5,3,-2).
\end{equation*}
-
In Discovery 18.4.b, we considered the same vector from \(\matrixring_2(\R)\) as in the previous example, but relative to a nonstandard basis.

We could probably also decompose \(\uvec{w}\) by inspection here, but instead we’ll demonstrate the general method (see also the computational sketch after this list). Write \(\uvec{w}\) as an unknown linear combination of the basis vectors, and then simplify the linear combination:
\begin{align*}
\left[\begin{array}{rr} -1 \amp 5 \\ 3 \amp -2 \end{array}\right]
\amp = k_1\begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}
+ k_2\begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix}
+ k_3\begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix}
+ k_4\begin{bmatrix} 0 \amp 0 \\ 1 \amp 1 \end{bmatrix}\\
\amp = \begin{bmatrix} k_1+k_2 \amp k_2 \\ k_3+k_4 \amp k_4 \end{bmatrix}.
\end{align*}
Comparing entries on the left- and right-hand sides, we obtain a system of equations:
\begin{equation*}
\left\{\begin{array}{rcrcr}
k_1 \amp + \amp k_2 \amp = \amp -1 \text{,} \\
\amp \amp k_2 \amp = \amp 5 \text{,} \\
k_3 \amp + \amp k_4 \amp = \amp 3 \text{,} \\
\amp \amp k_4 \amp = \amp -2 \text{.}
\end{array}\right.
\end{equation*}
If we had more complicated basis vectors, we would have a more complicated system, which we could solve by forming an augmented matrix and row reducing. As it is, we can solve by inspection:
\begin{align*}
k_1 \amp= -6, \amp k_2 \amp= 5, \amp k_3 \amp= 5, \amp k_4 \amp= -2.
\end{align*}
We collect these four coefficients (in order) in an \(\R^4\) vector:
\begin{equation*}
\rmatrixOfplain{\uvec{w}}{S} = (-6,5,5,-2).
\end{equation*}
Even though we were working with the same vector as in the previous example, we ended up with a different coordinate vector because it is computed relative to a different basis.
-
In Discovery 18.4.c, we considered a vector \(\uvec{w}\) in \(\poly_3(\R)\) relative to the standard basis.

The standard basis of \(\poly_3(\R)\) consists of powers of \(x\) (along with the constant polynomial \(1\)), and our polynomial \(\uvec{w}\) is naturally written as a linear combination of powers of \(x\text{.}\) However, there is no \(x^2\) term, so we need to insert one:
\begin{equation*}
\uvec{w} = 3\cdot 1 + 4x + 0x^2 - 5x^3.
\end{equation*}
Once again, we wrap up these four coefficients (in order) in an \(\R^4\) vector:
\begin{equation*}
\rmatrixOfplain{\uvec{w}}{S} = (3,4,0,-5).
\end{equation*}
-
In Discovery 18.4.d, we considered a vector \(\uvec{w}\) in \(\R^3\) relative to a nonstandard basis.

Rather than try to guess, we should set up equations and solve (this computation is also carried out in the sketch after this list). Start by writing \(\uvec{w}\) as an unknown linear combination of the basis vectors, and combine into a single vector expression:
\begin{align*}
(1,1,1) \amp= k_1(-1,0,1) + k_2(0,2,0) + k_3(1,1,0) \\
\amp= (-k_1 + k_3, 2k_2+k_3, k_1).
\end{align*}
This leads to a system of equations:
\begin{equation*}
\left\{\begin{array}{rcrcrcr}
-k_1 \amp \amp \amp + \amp k_3 \amp = \amp 1 \text{,} \\
\amp \amp 2k_2 \amp + \amp k_3 \amp = \amp 1 \text{,} \\
k_1 \amp \amp \amp \amp \amp = \amp 1 \text{.}
\end{array}\right.
\end{equation*}
We could probably solve by inspection again, but let’s form an augmented matrix and reduce:
\begin{equation*}
\left[\begin{array}{rrr|r}
-1 \amp 0 \amp 1 \amp 1 \\
0 \amp 2 \amp 1 \amp 1 \\
1 \amp 0 \amp 0 \amp 1
\end{array}\right]
\qquad\rowredarrow\qquad
\left[\begin{array}{rrr|r}
1 \amp 0 \amp 0 \amp 1 \\
0 \amp 1 \amp 0 \amp -1/2 \\
0 \amp 0 \amp 1 \amp 2
\end{array}\right]
\end{equation*}
Notice again how the columns in the initial augmented matrix, including the column of constants, are the vectors involved. The column of constants in the final reduced matrix is our coordinate vector:
\begin{equation*}
\rmatrixOfplain{\uvec{w}}{S} = (1,-1/2,2).
\end{equation*}
-
In Discovery 18.4.e, we considered a vector \(\uvec{w}\) in \(\R^3\) relative to the standard basis.

This is similar to the first example — we have the standard basis for \(\R^3\text{,}\) so it is simple to decompose \(\uvec{w}\) as a linear combination of the vectors in the basis:
\begin{equation*}
\uvec{w} = -2\uvec{e}_1 + 3\uvec{e}_2 + (-5)\uvec{e}_3.
\end{equation*}
Collect these coefficients together into an \(\R^3\) vector:
\begin{equation*}
\rmatrixOfplain{\uvec{w}}{S} = (-2,3,-5).
\end{equation*}
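As promised in the list above, here is a SymPy sketch (ours, not part of the discovery activity) that reproduces the two nonstandard-basis computations by solving the linear systems directly:
```python
from sympy import Matrix

# Discovery 18.4.b: record each 2x2 basis matrix by its entries
# (1,1), (1,2), (2,1), (2,2) and place these as the columns of A.
A = Matrix([[1, 1, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 1],
            [0, 0, 0, 1]])
w = Matrix([-1, 5, 3, -2])
print(A.solve(w).T)   # Matrix([[-6, 5, 5, -2]])

# Discovery 18.4.d: the basis vectors of R^3 as columns.
B = Matrix([[-1, 0, 1],
            [ 0, 2, 1],
            [ 1, 0, 0]])
v = Matrix([1, 1, 1])
print(B.solve(v).T)   # Matrix([[1, -1/2, 2]])
```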
Remark 18.4.13.
The last two parts of the example above might seem a little redundant, but the point is one of perspective. Relative to the standard basis, a vector in \(\R^n\) is equal to its own coordinate vector. In other words, the standard basis is standard because it corresponds to the natural way that we think of vectors in \(\R^3\) — in terms of their \(x\)-, \(y\)-, and \(z\)-coordinates. This is similar to how the standard basis for a polynomial space leads to coordinate vectors that just record the coefficients of polynomials, or how the standard basis for a matrix space leads to coordinate vectors that just record the entries of the matrices.
But if we change our point of view and use a nonstandard basis for \(\R^n\text{,}\) then coordinate vectors allow us to use vectors in \(\R^n\) to represent other vectors in \(\R^n\text{,}\) where everything is “tuned” to the perspective of the nonstandard basis. And similarly if we use nonstandard bases in other spaces.
Now we’ll work through Discovery 18.5. This activity is the same as the previous one, but in reverse — we are given a coordinate vector from \(\R^n\text{,}\) and we use its components as the coefficients in a linear combination of the basis vectors. We’ll complete three of the parts from this discovery activity, and leave the rest to you.
Example 18.4.14. Determining a vector from its coordinate vector.
-
This is Discovery 18.5.a.

Just compute the linear combination using the coordinate vector components as coefficients, in the proper order:
\begin{align*}
\uvec{w} \amp= 3 \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}
+ (-5) \begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix}
+ 1 \begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix}
+ 1 \begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix}\\
\amp= \left[\begin{array}{rr} 3 \amp -5 \\ 1 \amp 1 \end{array}\right]\text{.}
\end{align*}
This result should not be surprising, as a \(2 \times 2\) matrix and a vector in \(\R^4\) are each just a collection of four numbers.
-
This is Discovery 18.5.b.

Again, just compute the linear combination using the coordinate vector components as coefficients, in the proper order (see also the sketch after this list):
\begin{align*}
\uvec{w} \amp= 3 \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}
+ (-5) \begin{bmatrix} 1 \amp 1 \\ 0 \amp 0 \end{bmatrix}
+ 1 \begin{bmatrix} 0 \amp 0 \\ 1 \amp 0 \end{bmatrix}
+ 1 \begin{bmatrix} 0 \amp 0 \\ 1 \amp 1 \end{bmatrix}\\
\amp= \left[\begin{array}{rr} -2 \amp -5 \\ 2 \amp 1 \end{array}\right]\text{.}
\end{align*}
Even though we were working with the same coordinate vector as in the previous example, we ended up with a different matrix result because it is computed relative to a different basis.
-
This is Discovery 18.5.c.

Use the same process here as in the previous two examples:
\begin{equation*}
\uvec{w} = -3 \cdot 1 + 1 x + 0 x^2 + 3 x^3 = -3 + x + 3x^3 \text{.}
\end{equation*}
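And here is the sketch referred to above: a SymPy version (our illustration) of the Discovery 18.5.b computation, forming the linear combination directly.
```python
from sympy import Matrix

# Discovery 18.5.b: the coordinate vector supplies the coefficients,
# in order, for the corresponding basis matrices.
coords = [3, -5, 1, 1]
basis = [Matrix([[1, 0], [0, 0]]),
         Matrix([[1, 1], [0, 0]]),
         Matrix([[0, 0], [1, 0]]),
         Matrix([[0, 0], [1, 1]])]

w = sum((c * E for c, E in zip(coords, basis)), Matrix.zeros(2, 2))
print(w)   # Matrix([[-2, -5], [2, 1]])
```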