
Section 19.1 Discovery guide

Recall.

A basis for a vector space is a linearly independent spanning set.

Discovery 19.1.

Answer each of the following assuming nonzero vectors in \(\R^3\text{.}\)

(a)

What geometric shape is the span of one nonzero vector?

(b)

(i)
What is the definition of linearly dependent for a set of two vectors?
(ii)
What does this mean geometrically?
(iii)
What is the shape of the span of two nonzero linearly dependent vectors?

(c)

(i)
What does linearly independent mean geometrically for a set of two vectors?
(ii)
What is the shape of the span of two linearly independent vectors?

(d)

Based on your answers so far, do you think a set of two vectors can be a basis for \(\R^3\text{?}\)

(e)

(i)
What is the definition of linearly dependent for a set of three vectors?
(ii)
What does this mean geometrically?
(iii)
What is the shape of the span of three nonzero linearly dependent vectors? (There are actually two possibilities here.)

(f)

(i)
What does linearly independent mean geometrically for a set of three vectors?
(ii)
What is the “shape” of the span of three linearly independent vectors?

(g)

Do you think a set of four vectors can be a basis for \(\R^3\text{?}\)

(h)

Determine the “dimension” of each of the following subspaces of \(\R^3\text{.}\) In each case, how does the number you come up with correspond with the answers you’ve given throughout this activity?
(i)
A line through the origin.
(ii)
A plane through the origin.
(iii)
All of \(\R^3\text{.}\)
(iv)
The trivial subspace (i.e. just the origin).
We’ve been using the word “dimension” informally throughout our development of the concepts of vectors (e.g. calling vectors in \(\R^2\) two-dimensional vectors), but now we can finally match our intuition about the “dimension” of the various types of subspaces of \(\R^3\) with the theoretical concepts of linear independence and spanning to make the following definition.
dimension of a vector space
the number of vectors required in a basis for that space
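For example, a line through the origin in \(\R^3\) is the span of a single nonzero vector, and a set consisting of one nonzero vector is linearly independent, so that single vector forms a basis for the line. A line through the origin therefore has dimension \(1\text{,}\) matching our geometric intuition.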
One way to obtain a basis for a space (and hence to determine its dimension) is to assign parameters — then each independent parameter corresponds to a basis vector.
For example, in \(\R^2\) we have natural parameters associated to the \(x\)- and \(y\)-coordinates: \(\uvec{x} = (x,y)\text{.}\) Expanding this into a linear combination, we get \(\uvec{x} = x(1,0) + y(0,1)\text{.}\) Parameter \(x\) corresponds to vector \((1,0)\) and parameter \(y\) corresponds to vector \((0,1)\text{,}\) and together the two corresponding vectors form a basis \(\{(1,0),(0,1)\}\) for \(\R^2\) (in fact, the standard basis for \(\R^2\text{!}\)). Since two independent parameters were required to describe an arbitrary vector in the space, we obtained two basis vectors, and so the dimension of \(\R^2\) is (surprise!) \(2\text{.}\)

Step-by-step procedure.

  1. Express arbitrary elements in the space in terms of parameters.
  2. Use any extra conditions to reduce to the minimum number of independent parameters (if necessary).
  3. Split up your parametric vector description into a linear combination based on the remaining parameters.
  4. Extract the basis vector attached to each parameter.
  5. Count the basis vectors to determine the dimension of the space (which should also correspond to the number of independent parameters required).
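As an illustration of this procedure, consider the plane in \(\R^3\) with equation \(x + y + z = 0\text{.}\) An arbitrary vector on this plane can be described by assigning parameters \(y = s\) and \(z = t\text{,}\) so that the defining condition forces \(x = -s - t\text{.}\) Splitting the parametric description into a linear combination gives \((x,y,z) = (-s-t,s,t) = s(-1,1,0) + t(-1,0,1)\text{.}\) Parameter \(s\) is attached to the vector \((-1,1,0)\) and parameter \(t\) is attached to the vector \((-1,0,1)\text{,}\) so \(\{(-1,1,0),(-1,0,1)\}\) is a basis for the plane, and the plane has dimension \(2\) — the same as the number of independent parameters required.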

Discovery 19.2.

In each of the following, determine a basis for the given space using the parameter method outlined above, as in the \(\R^2\) example. Then count your basis vectors to determine the dimension of the space.

(a)

\(\R^3\text{.}\)

(b)

The subspace of \(\R^3\) consisting of vectors whose second coordinate is zero.

(c)

The subspace of \(\R^3\) consisting of vectors whose first and third coordinates are equal.

(d)

\(\matrixring_2(\R)\text{,}\) i.e. the space of \(2\times 2\) matrices.

(e)

The subspace of \(\matrixring_2(\R)\) consisting of upper-triangular matrices.

(f)

The subspace of \(\matrixring_2(\R)\) consisting of upper-triangular matrices whose diagonal entries add to zero.

(g)

The subspace of \(\matrixring_2(\R)\) consisting of matrices whose entries sum to zero.

(h)

\(\poly_5(\R)\text{,}\) i.e. the space of polynomials of degree \(5\) or less.

(i)

The subspace of \(\poly_5(\R)\) consisting of polynomials with constant term equal to zero.

(j)

The subspace of \(\poly_5(\R)\) consisting of odd polynomials, i.e. those involving only odd powers of \(x\) (and no constant term).

(k)

The subspace of \(\poly_5(\R)\) consisting of even polynomials, i.e. those involving only even powers of \(x\) (and a constant term).
A vector space is called finite-dimensional if it can be spanned by a finite set; otherwise, it is called infinite-dimensional. For example, \(\R^n\) is finite-dimensional for each value of \(n\text{,}\) because it can be spanned by the finite set of standard basis vectors \(\{\uvec{e}_1,\uvec{e}_2,\dotsc,\uvec{e}_n\}\text{.}\)

Discovery 19.3.

Is the vector space of all polynomials finite- or infinite-dimensional?
Hint.
If \(S\) is a finite set of polynomials, what are the possible degrees of the polynomials in \(\Span S\text{?}\)
We’ve already seen that a linearly dependent spanning set can be reduced to a basis (Proposition 18.5.1). Working the other way, we will use Proposition 17.5.6 to argue in Subsection 19.5.2 that a linearly independent set that is not a spanning set can be built up to a basis by including additional vectors (Proposition 19.5.4). Proposition 17.5.6 tells us exactly how to do this: to ensure linear independence at each step, the new vector to be included should not be in the span of the old ones (i.e. the new vector should not be a linear combination of the old vectors).
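Here is a small illustration of this build-up process in \(\R^2\text{.}\) The set \(\{(1,1)\}\) is linearly independent but does not span \(\R^2\text{.}\) The vector \((1,0)\) is not in \(\Span\{(1,1)\}\text{,}\) since \(a(1,1) = (1,0)\) would require both \(a = 1\) and \(a = 0\text{,}\) so enlarging the set to \(\{(1,1),(1,0)\}\) keeps it linearly independent. This enlarged set also spans \(\R^2\text{,}\) since every vector \((x,y)\) can be written as \(y(1,1) + (x-y)(1,0)\text{,}\) so it is a basis for \(\R^2\text{.}\)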

Discovery 19.4.

In each of the following, enlarge the provided linearly independent set into a basis for the space.

Hint.

Since we now know the dimensions of these spaces, we know how many linearly independent vectors are required to form a basis. Just guess simple new vectors to include in the given set, one at a time, and for each make sure your new vector is not a linear combination of the vectors you already have. (You can check this by trying to solve an appropriate system of linear equations.)

(a)

\(V=\R^3\text{,}\) \(S = \{(1,1,0),(1,0,1)\}\text{.}\)

(b)

\(V = \matrixring_2(\R)\text{,}\) \(S = \left\{\; \left[\begin{smallmatrix} 1 & 1\\1 & 1 \end{smallmatrix}\right],\;\; \left[\begin{smallmatrix} 1 & 0\\1 & -1 \end{smallmatrix}\right] \;\right\} \text{.}\)

Discovery 19.5.

Suppose \(V\) is a finite-dimensional vector space, and \(W\) is a subspace of \(V\text{.}\)

(a)

What is the relationship between \(\dim W\) and \(\dim V\text{?}\) Justify your answer in terms of the definition of dimension.
Hint.
The pattern of the previous exercise, where a linearly independent set can be built up into a basis, might help in articulating your justification.

(b)

Is it possible for \(\dim W = \dim V\) to be true?