Section 19.4 Examples
Subsection 19.4.1 Determining a basis from a parametric expression
Example 19.4.1. From the discovery guide.
First, let’s carry out some of the examples from Discovery 19.2.
- In Discovery 19.2.c, we considered a certain subspace of \(\R^3\text{.}\) An arbitrary vector in \(\R^3\) requires three parameters to describe its three components: \(\uvec{x} = (a,b,c)\text{.}\) If we restrict to just those vectors whose first and third components are equal, we can replace \(c\) by \(a\) to get\begin{equation*} \uvec{x} = (a,b,a) = (a,0,a) + (0,b,0) = a(1,0,1) + b(0,1,0). \end{equation*}So a basis for this subspace of \(\R^3\) is \(\basisfont{B} = \{(1,0,1),(0,1,0)\}\text{,}\) and the dimension is \(2\text{.}\)
- In Discovery 19.2.g, we considered a certain subspace of \(\matrixring_2(\R)\text{.}\) An arbitrary matrix in \(\matrixring_2(\R)\) requires four parameters to describe its four entries:\begin{equation*} A = \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}. \end{equation*}If we restrict to those matrices whose entries sum to zero, so that \(a+b+c+d=0\text{,}\) then we can isolate \(d=-a-b-c\) and substitute that into the matrix:\begin{align*} A \amp= \begin{bmatrix} a \amp b \\ c \amp -a-b-c \end{bmatrix} \\ \amp = \left[\begin{array}{rr} a \amp 0 \\ 0 \amp -a \end{array}\right] + \left[\begin{array}{rr} 0 \amp b \\ 0 \amp -b \end{array}\right] + \left[\begin{array}{rr} 0 \amp 0 \\ c \amp -c \end{array}\right]\\ \amp = a\left[\begin{array}{rr} 1 \amp 0 \\ 0 \amp -1 \end{array}\right] + b\left[\begin{array}{rr} 0 \amp 1 \\ 0 \amp -1 \end{array}\right] + c\left[\begin{array}{rr} 0 \amp 0 \\ 1 \amp -1 \end{array}\right]. \end{align*}So this subspace of \(\matrixring_2(\R)\) has dimension \(3\text{,}\) with basis\begin{equation*} \basisfont{B} = \left\{ \left[\begin{array}{rr} 1 \amp 0 \\ 0 \amp -1 \end{array}\right], \left[\begin{array}{rr} 0 \amp 1 \\ 0 \amp -1 \end{array}\right], \left[\begin{array}{rr} 0 \amp 0 \\ 1 \amp -1 \end{array}\right] \right\}. \end{equation*}
- In Discovery 19.2.j, we considered a certain subspace of \(\poly_5(\R)\text{.}\) An arbitrary polynomial in \(\poly_5(\R)\) requires six parameters, one for each power of \(x\text{,}\) along with a parameter for the constant term:\begin{equation*} p(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + a_5 x^5. \end{equation*}If we restrict to only odd polynomials, we need to eliminate the constant term and the even powers of \(x\text{:}\)\begin{equation*} p(x) = a_1 x + a_3 x^3 + a_5 x^5. \end{equation*}(Equivalently, we have applied the homogeneous conditions \(a_0 = 0\text{,}\) \(a_2 = 0\text{,}\) and \(a_4 = 0\text{.}\)) So this subspace of \(\poly_5(\R)\) has dimension \(3\text{,}\) with basis \(\basisfont{B} = \{x,x^3,x^5\}\text{.}\)
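As a quick computational spot-check of the second example above (a plain-Python sketch; the names `B1`, `B2`, `B3`, and `combo` are ours, not from the text), we can verify that every sum-zero matrix really is the claimed linear combination of the three basis matrices:

```python
# The three basis matrices found for the sum-zero subspace of M_2(R).
B1 = [[1, 0], [0, -1]]
B2 = [[0, 1], [0, -1]]
B3 = [[0, 0], [1, -1]]

def combo(a, b, c):
    """Return the linear combination a*B1 + b*B2 + c*B3 as a 2x2 list."""
    return [[a * B1[i][j] + b * B2[i][j] + c * B3[i][j] for j in range(2)]
            for i in range(2)]

# Sample parameter values: the combination reproduces the parametric form
# [[a, b], [c, -a-b-c]] ...
a, b, c = 2, -5, 7
assert combo(a, b, c) == [[a, b], [c, -a - b - c]]
# ... and its entries indeed sum to zero.
assert sum(combo(a, b, c)[0]) + sum(combo(a, b, c)[1]) == 0
```

This only samples one choice of parameters, but the identity holds symbolically for all \(a, b, c\) by the expansion carried out above.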
Example 19.4.2. Dimensions of familiar spaces via parameters.
Now let’s examine how the dimensions of our favourite example spaces relate to our parametric point of view. We considered specific examples of these in parts of Discovery 19.2, but here we’ll work more generally.
- An arbitrary vector in \(\R^n\) requires \(n\) parameters, one for each component:\begin{equation*} \uvec{x} = (x_1,x_2,\dotsc,x_n). \end{equation*}If we expanded this into a linear combination, each parameter would be attached to a standard basis vector \(\uvec{e}_j\text{.}\) Since we’ve got \(n\) parameters and a corresponding \(n\) standard basis vectors, we have\begin{equation*} \dim \R^n = n. \end{equation*}
- An arbitrary \(m\times n\) matrix in \(\matrixring_{m \times n}(\R)\) requires \(mn\) parameters, one for each entry:\begin{equation*} A = [a_{ij}], \qquad 1 \le i \le m, \;\; 1 \le j \le n. \end{equation*}If we expanded this into a linear combination, each parameter would be attached to a standard basis matrix \(E_{ij}\text{,}\) with zeros in all entries except for a single \(1\) in the \(\nth[(i,j)]\) entry. Since we’ve got \(mn\) parameters and a corresponding \(mn\) standard basis matrices, we have\begin{equation*} \dim \matrixring_{m \times n}(\R) = mn. \end{equation*}
- An arbitrary polynomial in \(\poly_n(\R)\text{,}\) the space of polynomials of degree \(n\) or less, requires \(n+1\) parameters, one for each power of \(x\) plus an extra one for the constant term:\begin{equation*} p(x) = a_0 + a_1x + a_2 x^2 + \dotsb + a_n x^n. \end{equation*}This is already naturally expressed as a linear combination, and each parameter is attached to a polynomial from the standard basis\begin{equation*} \basisfont{B} = \{1,x,x^2,\dotsc,x^n\} \text{.} \end{equation*}Since we’ve got \(n+1\) parameters and a corresponding \(n+1\) standard basis polynomials, we have\begin{equation*} \dim \poly_n(\R) = n+1. \end{equation*}
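The parameter counts above can be illustrated with a short sketch (plain Python; the helper `standard_basis_matrices` is our own name, not standard notation), building the standard bases explicitly and confirming their sizes:

```python
def standard_basis_matrices(m, n):
    """Standard basis of M_{m x n}(R): matrices E_ij with a single 1
    in the (i, j) entry and zeros elsewhere."""
    basis = []
    for i in range(m):
        for j in range(n):
            E = [[1 if (r, s) == (i, j) else 0 for s in range(n)]
                 for r in range(m)]
            basis.append(E)
    return basis

# dim M_{m x n}(R) = mn: one standard basis matrix per entry.
assert len(standard_basis_matrices(2, 3)) == 2 * 3

# dim P_n(R) = n + 1: one standard basis polynomial per power of x,
# from x^0 = 1 up to x^n (represented here just by exponent labels).
n = 5
standard_basis_polys = [f"x^{k}" for k in range(n + 1)]
assert len(standard_basis_polys) == n + 1
```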
Example 19.4.3. The solution space of a homogeneous system.
In Remark 19.3.2, we noted how assigning parameters after row reducing a homogeneous system corresponds directly to a parameter-based procedure for determining a basis for a space. Let’s illustrate this correspondence with an example.
Consider the homogeneous system in Discovery 2.4, which we solved in Example 2.4.4. In Example 16.4.8, we used the Subspace Test to verify that the solution set of a homogeneous system with an \(m\times n\) coefficient matrix is a subspace of \(\R^n\text{.}\) The system from Discovery 2.4 has a \(3 \times 4\) coefficient matrix that we reduced:
\begin{equation*}
\left[\begin{array}{rrrr}
3 \amp 6 \amp -8 \amp 13 \\
1 \amp 2 \amp -2 \amp 3 \\
2 \amp 4 \amp -5 \amp 8
\end{array}\right]
\qquad\rowredarrow\qquad
\left[\begin{array}{rrrr}
1 \amp 2 \amp 0 \amp -1 \\
0 \amp 0 \amp 1 \amp -2 \\
0 \amp 0 \amp 0 \amp 0
\end{array}\right]\text{.}
\end{equation*}
Assigning parameters \(s\) and \(t\) to the free variables \(x_2\) and \(x_4\text{,}\) we obtained the general solution in parametric form:
\begin{align*}
x_1 \amp= -2s + t, \amp x_2 \amp= s, \amp x_3 \amp= 2t, \amp x_4 \amp= t.
\end{align*}
We can use these expressions as components in a general solution vector, and expand it out to a linear combination, just as in the previous examples in this subsection:
\begin{equation*}
\uvec{x}
= \begin{bmatrix} -2s + t \\ s \\ 2t \\ t \end{bmatrix}
= \begin{bmatrix} -2s \\ s \\ 0 \\ 0 \end{bmatrix}
+ \begin{bmatrix} t \\ 0 \\ 2t \\ t \end{bmatrix}
= s\left[\begin{array}{r} -2 \\ 1 \\ 0 \\ 0 \end{array}\right]
+ t\begin{bmatrix} 1 \\ 0 \\ 2 \\ 1 \end{bmatrix}\text{.}
\end{equation*}
Since two parameters are needed to describe the solution vectors for this system, the solution space has dimension \(2\text{,}\) and a basis for this subspace is
\begin{equation*}
\basisfont{B} = \left\{
\left[\begin{array}{r} -2 \\ 1 \\ 0 \\ 0 \end{array}\right],
\begin{bmatrix} 1 \\ 0 \\ 2 \\ 1 \end{bmatrix}
\right\}\text{.}
\end{equation*}
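We can spot-check this basis directly (a plain-Python sketch; the helper `matvec` is our own): each basis vector should satisfy \(A\uvec{x} = \zerovec\text{,}\) and by linearity so does every combination \(s\uvec{v}_1 + t\uvec{v}_2\text{:}\)

```python
# The coefficient matrix of the homogeneous system from Discovery 2.4.
A = [[3, 6, -8, 13],
     [1, 2, -2, 3],
     [2, 4, -5, 8]]
v1 = [-2, 1, 0, 0]  # basis vector attached to parameter s
v2 = [1, 0, 2, 1]   # basis vector attached to parameter t

def matvec(M, x):
    """Multiply matrix M (list of rows) by column vector x."""
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

# Both basis vectors solve the homogeneous system.
assert matvec(A, v1) == [0, 0, 0]
assert matvec(A, v2) == [0, 0, 0]
# Neither is a scalar multiple of the other (compare second components),
# so together they are linearly independent.
assert v1[1] != 0 and v2[1] == 0
```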
Subsection 19.4.2 An infinite-dimensional example
All of the examples in the previous subsection involved finite-dimensional spaces. Here’s an example of an infinite-dimensional space.
In Discovery 19.3, we considered the space of all polynomials. This space cannot be spanned by any finite collection of polynomials, because such a collection would have a polynomial of largest degree, and then every linear combination of those polynomials would have that degree or smaller. So the span of those polynomials could never include polynomials of larger degree. Thus,
\begin{equation*}
\dim \poly(\R) = \infty.
\end{equation*}
We can still come up with a basis for this space, but it will contain an infinite number of vectors:
\begin{equation*}
\poly(\R) = \Span\{1,x,x^2,x^3,\dotsc\}.
\end{equation*}
This equality says that every polynomial is a linear combination of a finite number of powers of \(x\text{.}\) This spanning set is also linearly independent because no power of \(x\) can be expressed as a linear combination of other powers of \(x\text{.}\)
Subsection 19.4.3 Enlarging a linearly independent set to a basis
In Discovery 19.4.b, we are given a linearly independent set \(S\) of vectors in \(V = \matrixring_2(\R)\text{,}\) and we would like to enlarge it to a basis for the whole space. Since \(S\) is linearly independent, it is a basis for the subspace \(\Span S\text{.}\) Since we know that \(\dim\matrixring_2(\R) = 4\text{,}\) we need to add two more linearly independent vectors to get up to a basis for \(V\text{.}\) To do this, we can use Proposition 17.5.6, which says that to enlarge a linearly independent set, we need to add a vector from outside the span of the vectors we already have.
An obvious source of candidate vectors for enlarging \(S\) is the standard basis \(\basisfont{B} = \{E_{11},E_{12},E_{21},E_{22}\}\text{.}\) We know that \(\Span S\) can’t contain all four of these vectors, because then Statement 1 of Proposition 16.5.6 would imply that all of \(V = \Span \basisfont{B}\) would be contained in \(\Span S\text{,}\) which is not possible because \(\dim (\Span S)\) is just \(2\text{.}\) So let’s start by checking whether \(E_{11}\) is in \(\Span S\text{.}\) The vector equation
\begin{equation*}
k_1 \begin{bmatrix} 1 \amp 1 \\ 1 \amp 1\end{bmatrix}
+ k_2 \left[\begin{array}{rr} 1 \amp 0 \\ 1 \amp -1\end{array}\right]
= \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0\end{bmatrix}
\end{equation*}
leads to a system of equations with augmented matrix as on the left below, which we can reduce:
\begin{equation*}
\left[\begin{array}{rr|r}
1 \amp 1 \amp 1 \\
1 \amp 0 \amp 0 \\
1 \amp 1 \amp 0 \\
1 \amp -1 \amp 0
\end{array}\right]
\qquad\rowredarrow\qquad
\left[\begin{array}{rr|r}
1 \amp 0 \amp 0 \\
0 \amp 1 \amp 0 \\
0 \amp 0 \amp 1 \\
0 \amp 0 \amp 0
\end{array}\right].
\end{equation*}
The leading \(1\) in the last column indicates that the system is inconsistent, which is exactly what we want: there is no solution, so \(E_{11}\) is not in \(\Span S\text{,}\) and so we can enlarge \(S\) by including \(E_{11}\) while keeping the set linearly independent. Call the enlarged set \(S'\text{.}\)
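The inconsistency can also be seen entrywise (a plain-Python sketch; the `equations` encoding is ours): matching entries of \(k_1 A_1 + k_2 A_2 = E_{11}\) gives four equations, and the values forced by the first two violate the others:

```python
# Each equation as (coeff of k1, coeff of k2, right-hand side),
# read off entry by entry from the vector equation above.
equations = [(1, 1, 1),   # (1,1)-entry: k1 + k2 = 1
             (1, 0, 0),   # (1,2)-entry: k1      = 0
             (1, 1, 0),   # (2,1)-entry: k1 + k2 = 0
             (1, -1, 0)]  # (2,2)-entry: k1 - k2 = 0

# The (1,2)-entry forces k1 = 0, and then the (1,1)-entry forces k2 = 1.
k1, k2 = 0, 1
# Collect the equations that these forced values fail to satisfy.
failed = [(a, b, r) for (a, b, r) in equations if a * k1 + b * k2 != r]
# The (2,1)- and (2,2)-entry equations are violated, so no solution
# exists: E11 lies outside Span S.
assert failed == [(1, 1, 0), (1, -1, 0)]
```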
Now let’s check whether \(E_{12}\) is in the span of the three linearly independent vectors we already have. Our vector equation is now
\begin{equation*}
k_1 \begin{bmatrix} 1 \amp 1 \\ 1 \amp 1 \end{bmatrix}
+ k_2 \left[\begin{array}{rr} 1 \amp 0 \\ 1 \amp -1 \end{array}\right]
+ k_3 \begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix}
= \begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix}
\end{equation*}
which leads to a system with augmented and reduced augmented matrices
\begin{equation*}
\left[\begin{array}{rrr|r}
1 \amp 1 \amp 1 \amp 0 \\
1 \amp 0 \amp 0 \amp 1 \\
1 \amp 1 \amp 0 \amp 0 \\
1 \amp -1 \amp 0 \amp 0
\end{array}\right]
\qquad\rowredarrow\qquad
\left[\begin{array}{rrr|r}
1 \amp 0 \amp 0 \amp 0 \\
0 \amp 1 \amp 0 \amp 0 \\
0 \amp 0 \amp 1 \amp 0 \\
0 \amp 0 \amp 0 \amp 1
\end{array}\right].
\end{equation*}
Again, there is no solution, so \(E_{12}\) is not in \(\Span S'\text{.}\) We are now up to four linearly independent vectors, which must form a basis for the \(4\)-dimensional space \(\matrixring_2(\R)\text{.}\)
Our final basis is
\begin{equation*}
\left\{
\begin{bmatrix} 1 \amp 1 \\ 1 \amp 1 \end{bmatrix},
\left[\begin{array}{rr} 1 \amp 0 \\ 1 \amp -1 \end{array}\right],
\begin{bmatrix} 1 \amp 0 \\ 0 \amp 0 \end{bmatrix},
\begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix}
\right\}.
\end{equation*}
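As a final check (a plain-Python sketch; the `det` helper is our own, not a library routine), we can flatten each of these four matrices into a vector in \(\R^4\) and confirm that the resulting \(4 \times 4\) matrix of coordinates has nonzero determinant, so the four matrices are linearly independent:

```python
def det(M):
    """Determinant of a square matrix by Laplace expansion along row 1."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

# Each basis matrix flattened row by row into a vector in R^4.
vectors = [
    [1, 1, 1, 1],    # [[1, 1], [1, 1]]
    [1, 0, 1, -1],   # [[1, 0], [1, -1]]
    [1, 0, 0, 0],    # E11
    [0, 1, 0, 0],    # E12
]
# Nonzero determinant: the four matrices are linearly independent, hence
# a basis for the 4-dimensional space M_2(R).
assert det(vectors) != 0
```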
A look ahead.
In the example above, we could have augmented our initial spanning set vectors with all four standard basis vectors and checked them all at once:
\begin{equation*}
\left[\begin{array}{rr|rrrr}
1 \amp 1 \amp 1 \amp 0 \amp 0 \amp 0 \\
1 \amp 0 \amp 0 \amp 1 \amp 0 \amp 0 \\
1 \amp 1 \amp 0 \amp 0 \amp 1 \amp 0 \\
1 \amp -1 \amp 0 \amp 0 \amp 0 \amp 1
\end{array}\right]
\qquad\rowredarrow\qquad
\left[\begin{array}{rr|rrrr}
1 \amp 0 \amp 0 \amp 0 \amp 1/2 \amp 1/2 \\
0 \amp 1 \amp 0 \amp 0 \amp 1/2 \amp -1/2 \\
0 \amp 0 \amp 1 \amp 0 \amp -1 \amp 0 \\
0 \amp 0 \amp 0 \amp 1 \amp -1/2 \amp -1/2
\end{array}\right].
\end{equation*}
By moving the vertical line that separates the coefficient columns from the columns of constants one column at a time, we change our point of view on each augmented column: instead of representing a vector to be achieved as a linear combination of the spanning vectors, it represents a vector included among the spanning vectors.
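The combined reduction above can be reproduced with a short exact-arithmetic row reducer (a sketch in plain Python using the standard library’s `fractions` module; the `rref` helper is ours, not a library function):

```python
from fractions import Fraction

def rref(M):
    """Row reduce M to reduced row echelon form using exact fractions."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        # Find a row at or below pivot_row with a nonzero entry in col.
        pr = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if pr is None:
            continue
        M[pivot_row], M[pr] = M[pr], M[pivot_row]
        # Scale the pivot row, then clear the column in all other rows.
        M[pivot_row] = [x / M[pivot_row][col] for x in M[pivot_row]]
        for r in range(rows):
            if r != pivot_row and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

F = Fraction
# The two vectors of S (flattened), augmented by all four standard
# basis vectors at once.
doubly_augmented = [[1, 1, 1, 0, 0, 0],
                    [1, 0, 0, 1, 0, 0],
                    [1, 1, 0, 0, 1, 0],
                    [1, -1, 0, 0, 0, 1]]
expected = [[1, 0, 0, 0, F(1, 2), F(1, 2)],
            [0, 1, 0, 0, F(1, 2), F(-1, 2)],
            [0, 0, 1, 0, -1, 0],
            [0, 0, 0, 1, F(-1, 2), F(-1, 2)]]
assert rref(doubly_augmented) == expected
```

Exact fractions are used here deliberately: floating-point row reduction can turn entries that should be exactly zero into tiny nonzero values, obscuring the pivot pattern that the argument above depends on.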