Section 18.3 Concepts
Subsection 18.3.1 Basis as a minimal spanning set
The purpose of a spanning set for a vector space is to be able to describe every vector in the space systematically in terms of linear combinations of certain specific vectors. But to be able to do this as simply as possible, we would like our spanning set to be “optimal” for this purpose. We have seen that spanning sets can contain redundant information — when a spanning set is linearly dependent, then one of its vectors can be expressed as a linear combination of others, and so that particular vector is not needed for the purpose of describing every vector in the vector space in terms of linear combinations of spanning vectors. Even worse, we saw in Discovery 18.2 that a linearly dependent spanning set allows for other types of redundancy. In particular, if a spanning set is linearly dependent, then every vector in the vector space will have an infinite number of different descriptions as linear combinations of spanning vectors. Clearly such a situation is not “optimal.”
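For a concrete illustration of this second kind of redundancy (using vectors chosen only for the sake of example), consider the linearly dependent spanning set \(\{(1,0),(0,1),(1,1)\}\) of \(\R^2\text{.}\) The vector \((2,3)\) can be decomposed as
\begin{align*}
(2,3) \amp= 2(1,0) + 3(0,1) + 0(1,1) \text{,} \\
(2,3) \amp= 1(1,0) + 2(0,1) + 1(1,1) \text{,} \\
(2,3) \amp= 0(1,0) + 1(0,1) + 2(1,1) \text{,}
\end{align*}
and, in general, as \((2-t)(1,0) + (3-t)(0,1) + t(1,1)\) for every value of the parameter \(t\text{,}\) so this one vector has an infinite number of different descriptions in terms of the spanning vectors.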
However, Lemma 17.5.4 and Proposition 17.5.5 tell us that we can remove this redundancy while still keeping a spanning set. By eliminating redundant vectors from a linearly dependent spanning set one at a time, we can eventually reduce it to a linearly independent spanning set: a basis for the space. As we saw in Discovery 18.3, a basis will no longer exhibit the second kind of redundancy discussed above, so that in terms of a basis, every vector in the space has one unique description as a linear combination (where we do not consider reorderings of the linear combination expression, or insertion or removal of basis vectors with a zero coefficient, as different expressions). And since a basis is linearly independent, it seems that it cannot contain any of the first kind of redundancy discussed above, because none of its vectors can be expressed as a linear combination of the others. So it would be reasonable to guess that a basis is minimal in the sense that it cannot be reduced any further while still remaining a spanning set. This is exactly the case, as we will see in Statement 1.a of Theorem 18.5.2 in Subsection 18.5.2.
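Continuing the illustrative \(\R^2\) example above, the vector \((1,1)\) is a linear combination of the other two spanning vectors, since \((1,1) = (1,0) + (0,1)\text{,}\) so removing it leaves the linearly independent spanning set \(\{(1,0),(0,1)\}\text{,}\) a basis for \(\R^2\text{.}\) Relative to this basis, the vector \((2,3)\) now has only the single description
\begin{equation*}
(2,3) = 2(1,0) + 3(0,1) \text{.}
\end{equation*}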
Subsection 18.3.2 Basis as a maximal linearly independent set
As above, a spanning set that is not linearly independent can be reduced to one that is, making it a basis, and a basis cannot be reduced any further while still remaining a spanning set. But perhaps we can also work the other way. A linearly independent set that does not span the whole vector space can be enlarged using Proposition 17.5.6; perhaps we could continue to enlarge the set until it does span the whole vector space, at which point it would become a basis.
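For instance (again with vectors chosen only for illustration), the set \(\{(1,0,0),(0,1,0)\}\) in \(\R^3\) is linearly independent but does not span \(\R^3\text{,}\) since no linear combination
\begin{equation*}
a(1,0,0) + b(0,1,0) = (a,b,0)
\end{equation*}
can produce \((0,0,1)\text{.}\) Enlarging the set to \(\{(1,0,0),(0,1,0),(0,0,1)\}\) produces a spanning set that is still linearly independent, hence a basis for \(\R^3\text{.}\)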
But we know from Lemma 17.5.7 that a collection of vectors that is larger than a known (finite) spanning set must be linearly dependent. Since a basis is, by definition, a special kind of spanning set, any collection of vectors containing more vectors than a basis must be linearly dependent. So a basis is also maximal in the sense that it cannot be enlarged any further while still remaining linearly independent.
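In the illustrative \(\R^3\) example above, enlarging the basis \(\{(1,0,0),(0,1,0),(0,0,1)\}\) by any fourth vector, say \((1,2,3)\text{,}\) must create a linearly dependent set, and indeed
\begin{equation*}
(1,2,3) = 1(1,0,0) + 2(0,1,0) + 3(0,0,1) \text{,}
\end{equation*}
so the enlarged set is no longer linearly independent.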
Subsection 18.3.3 Basis is not unique
It is important to note that a vector space will not have just one basis. In fact, except for the trivial vector space, every vector space has an infinite number of different possible bases. But often spaces have an obvious, preferred basis, called the standard basis for the space. We will see examples of standard bases for various spaces in Subsection 18.4.2.
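For instance, in \(\R^2\) both \(\{(1,0),(0,1)\}\) and \(\{(1,1),(1,-1)\}\) are bases, and in fact the sets
\begin{equation*}
\{(1,0),(t,1)\}
\end{equation*}
form a different basis of \(\R^2\) for each different value of the scalar \(t\text{,}\) giving an infinite number of possible bases (these particular vectors are chosen only for illustration).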
Subsection 18.3.4 Ordered versus unordered basis
In mathematics, usually a collection or set of objects is considered to be unordered — all that matters is the inclusion of the members of the collection, not the order in which those members are written down. For example, if \(V = \Span \{\uvec{u},\uvec{v},\uvec{w}\}\) for some collection of vectors \(\uvec{u},\uvec{v},\uvec{w}\) in \(V\text{,}\) saying that \(\{\uvec{w},\uvec{v},\uvec{u}\}\) is a spanning set for \(V\) is the same as saying that \(\{\uvec{u},\uvec{v},\uvec{w}\}\) is a spanning set for \(V\text{.}\) However, we usually prefer one uniform way to describe vectors in \(V\) as linear combinations of spanning vectors. It would be inconsistent to write
\begin{equation*}
\uvec{x} = a_1\uvec{u} + a_2\uvec{v} + a_3\uvec{w}
\end{equation*}
for some vector \(\uvec{x}\) in \(V\text{,}\) and then to write
\begin{equation*}
\uvec{y} = b_1\uvec{w} + b_2\uvec{v} + b_3\uvec{u}
\end{equation*}
for some other vector \(\uvec{y}\) in \(V\text{.}\) So usually we take a spanning set to have a particular order, and always express linear combinations in that order, especially when our spanning set is a basis. To emphasize that a basis has such a preferred ordering, we might refer to it as an ordered basis. From this point forward, you should assume that every basis is an ordered one.
Subsection 18.3.5 Coordinates of a vector
Subsubsection 18.3.5.1 Basic concept of coordinates relative to a basis
Suppose we have a basis for a vector space. Since a basis is a spanning set, every vector in the space has a decomposition as a linear combination of these basis vectors. But, as we saw in Discovery 18.3, a vector in the space cannot have more than one such decomposition. That is, every vector \(\uvec{w}\) in the vector space has one unique expression as a linear combination of the basis vectors. Because of this, we can consider the coefficients that go into such an expression as a “signature” or “code” that uniquely identifies \(\uvec{w}\text{.}\) For example, if \(V = \Span\basisfont{B}\) for some basis \(\basisfont{B} = \{\uvec{v}_1,\uvec{v}_2,\uvec{v}_3\}\text{,}\) and we have a vector \(\uvec{w}\) in \(V\) for which
\begin{gather}
\uvec{w} = 3\uvec{v}_1 + 5\uvec{v}_2 + (-1)\uvec{v}_3\text{,}\tag{✶}
\end{gather}
then the numbers \(3,5,-1\) (in that order) uniquely identify the vector \(\uvec{w}\) relative to the (ordered) basis, and no other triple of numbers identifies \(\uvec{w}\text{.}\) These coefficients are called the coordinates of \(\uvec{w}\) relative to the basis \(\basisfont{B}\text{.}\) Now, we already have a concept that collects together triples of numbers in particular orders: vectors in \(\R^3\text{.}\) So, in this example, to every vector in the space \(V\) (which may be a space of matrices, a space of functions, etc.) we can associate a unique vector in \(\R^3\) whose components are the coordinates of the vector relative to the basis \(\basisfont{B}\text{.}\) For our example vector \(\uvec{w}\) above, we can collect the three coefficients from the linear combination in (✶) either into a triple of coordinates \((x,y,z)\) or into a column vector:
\begin{align*}
\rmatrixOf{\uvec{w}}{B} \amp= (3,5,-1) \text{,} \amp
\matrixOf{\uvec{w}}{B} \amp= \left[\begin{array}{r} 3 \\ 5 \\ -1 \end{array}\right] \text{.}
\end{align*}
The equivalent \(\R^3\)-vectors \(\rmatrixOf{\uvec{w}}{B}\) and \(\matrixOf{\uvec{w}}{B}\) are each called the coordinate vector of \(\uvec{w}\) relative to \(\basisfont{B}\text{,}\) the only difference between the two being the presentation.
To repeat, since \(\basisfont{B}\) is a spanning set, every vector in the space can be expressed as a linear combination of the vectors in \(\basisfont{B}\text{,}\) and so every vector has an associated coordinate vector. And since \(\basisfont{B}\) is linearly independent, it contains no redundancy as a spanning set, and so each vector has exactly one coordinate vector associated to it. This also means that every vector in \(\R^3\) can be interpreted as a coordinate vector relative to \(\basisfont{B}\text{,}\) and can be traced back to one particular vector in \(V\text{.}\)
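For instance, continuing the example above (with components chosen only for illustration), the vector \((2,0,-4)\) in \(\R^3\text{,}\) interpreted as a coordinate vector relative to \(\basisfont{B}\text{,}\) traces back to exactly one vector \(\uvec{x}\) in \(V\text{:}\)
\begin{equation*}
\rmatrixOf{\uvec{x}}{B} = (2,0,-4)
\qquad \text{exactly when} \qquad
\uvec{x} = 2\uvec{v}_1 + 0\uvec{v}_2 + (-4)\uvec{v}_3 \text{.}
\end{equation*}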
In general, the number of coordinates required is the same as the number of vectors in the basis being used. So if \(V = \Span\basisfont{B}\) for basis \(\basisfont{B}=\{\uvec{v}_1,\uvec{v}_2,\dotsc,\uvec{v}_n\}\text{,}\) then the coordinate vector for each vector in \(V\) needs to be a vector in \(\R^n\text{.}\) See Subsection 18.4.3 for examples of computing coordinate vectors and of interpreting vectors in \(\R^n\) as coordinate vectors.
Warning 18.3.1. Order matters in coordinate vectors.
Because of Axiom A 2, reordering a linear combination of vectors does not produce a different vector as the end result. However, when extracting coefficients from a linear combination to form a coordinate vector, order definitely does matter, since we have decided to consider every basis as an ordered basis.
For example, if \(\basisfont{B} = \{\uvec{u}_1,\uvec{u}_2,\uvec{u}_3\}\) is a basis for a space \(V\text{,}\) then the vector \(\uvec{v} = \uvec{u}_1 + 2\uvec{u}_2 + 3\uvec{u}_3\) has coordinate vector
\begin{equation*}
\rmatrixOf{\uvec{v}}{B} = (1,2,3).
\end{equation*}
If we rearrange the linear combination to \(\uvec{v} = 2\uvec{u}_2 + \uvec{u}_1 + 3\uvec{u}_3\text{,}\) we are obviously not forming a different vector \(\uvec{v}\text{,}\) but we are changing our point of view to a different ordered basis, \(\basisfont{B}' = \{\uvec{u}_2,\uvec{u}_1,\uvec{u}_3\}\text{,}\) creating a different coordinate vector for \(\uvec{v}\text{:}\)
\begin{equation*}
\rmatrixOf{\uvec{v}}{B'} = (2,1,3).
\end{equation*}
Subsubsection 18.3.5.2 Linearity of coordinates
In Discovery 18.6, we discovered that performing a computation \(2 \uvec{v} + \uvec{w}\) in a vector space \(V\) and performing the corresponding calculation \(2 \rmatrixOf{\uvec{v}}{B} + \rmatrixOf{\uvec{w}}{B}\) with the corresponding coordinate vectors in \(\R^n\) relative to some basis \(\basisfont{B}\) of \(V\) would essentially yield the same result. (That is, the result of combining coordinate vectors ends up being the coordinate vector of the result of combining the original vectors.)
To consider why this works out, let’s consider the operations involved in a linear combination (vector addition and scalar multiplication) separately. For the remainder of this discussion, suppose \(\basisfont{B} = \{ \uvec{u}_1, \uvec{u}_2, \dotsc, \uvec{u}_n \}\) is a basis for a particular vector space \(V\text{.}\)
Addition of coordinate vectors.
If you have two vectors in \(V\) expressed uniquely as linear combinations of the basis vectors,
\begin{equation*}
\begin{array}{rcrcrcccr}
\uvec{v} \amp = \amp a_1 \uvec{u}_1 \amp + \amp a_2 \uvec{u}_2 \amp + \amp \dotsb \amp + \amp a_n \uvec{u}_n \text{,} \\
\uvec{w} \amp = \amp b_1 \uvec{u}_1 \amp + \amp b_2 \uvec{u}_2 \amp + \amp \dotsb \amp + \amp b_n \uvec{u}_n \text{,}
\end{array}
\end{equation*}
then adding the vectors can be accomplished by adding the linear combinations. Algebraically, we can add linear combinations by collecting like terms, and when we do so we will be adding the corresponding coefficients on each basis vector. But coefficients on basis vectors are where components of coordinate vectors come from, and so we can say that the coordinate vector of a sum is the sum of the coordinate vectors.
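In symbols, collecting like terms in the sum of the two expressions above gives
\begin{equation*}
\uvec{v} + \uvec{w} = (a_1 + b_1) \uvec{u}_1 + (a_2 + b_2) \uvec{u}_2 + \dotsb + (a_n + b_n) \uvec{u}_n \text{,}
\end{equation*}
so that
\begin{equation*}
\rmatrixOf{\uvec{v} + \uvec{w}}{B} = (a_1 + b_1, a_2 + b_2, \dotsc, a_n + b_n) = \rmatrixOf{\uvec{v}}{B} + \rmatrixOf{\uvec{w}}{B} \text{.}
\end{equation*}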
Scalar multiplication of a coordinate vector.
If you have a vector in \(V\) expressed uniquely as a linear combination of the basis vectors,
\begin{equation*}
\uvec{v} = a_1 \uvec{u}_1 + a_2 \uvec{u}_2 + \dotsb + a_n \uvec{u}_n \text{,}
\end{equation*}
then multiplying this vector by a scalar can be accomplished by scalar multiplying the linear combination. Algebraically, we can scalar multiply a linear combination by distributing the scalar through the vector sum, and when we do so we will be multiplying the coefficient on each basis vector by the scalar. But coefficients on basis vectors are where components of coordinate vectors come from, and so we can say that the coordinate vector of a scalar multiple is the scalar multiple of the coordinate vector.
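In symbols, for a scalar \(k\text{,}\) distributing \(k\) through the expression above gives
\begin{equation*}
k \uvec{v} = (k a_1) \uvec{u}_1 + (k a_2) \uvec{u}_2 + \dotsb + (k a_n) \uvec{u}_n \text{,}
\end{equation*}
so that
\begin{equation*}
\rmatrixOf{k \uvec{v}}{B} = (k a_1, k a_2, \dotsc, k a_n) = k \rmatrixOf{\uvec{v}}{B} \text{.}
\end{equation*}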