Section 20.4 Examples
Subsection 20.4.1 The three spaces
We will compute a column space, a row space, and a null space, all in one example.
Consider the \(4\times 5\) matrix
\begin{equation*}
A =
\left[\begin{array}{rrrrr}
-8 \amp 9 \amp 11 \amp 7 \amp 5 \\
1 \amp -1 \amp -1 \amp -3 \amp 6 \\
-2 \amp 2 \amp 2 \amp 5 \amp -9 \\
1 \amp -1 \amp -1 \amp 1 \amp -6
\end{array}\right].
\end{equation*}
Row reduce, as usual:
\begin{equation*}
\left[\begin{array}{rrrrr}
-8 \amp 9 \amp 11 \amp 7 \amp 5 \\
1 \amp -1 \amp -1 \amp -3 \amp 6 \\
-2 \amp 2 \amp 2 \amp 5 \amp -9 \\
1 \amp -1 \amp -1 \amp 1 \amp -6
\end{array}\right]
\qquad\rowredarrow\qquad
\left[\begin{array}{rrrrr}
1 \amp 0 \amp 2 \amp 0 \amp -1 \\
0 \amp 1 \amp 3 \amp 0 \amp 2 \\
0 \amp 0 \amp 0 \amp 1 \amp -3 \\
0 \amp 0 \amp 0 \amp 0 \amp 0
\end{array}\right].
\end{equation*}
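For example, one possible way to begin the reduction is to swap the first two rows and then use the new first row to clear the remainder of the first column:
\begin{equation*}
\left[\begin{array}{rrrrr}
1 \amp -1 \amp -1 \amp -3 \amp 6 \\
-8 \amp 9 \amp 11 \amp 7 \amp 5 \\
-2 \amp 2 \amp 2 \amp 5 \amp -9 \\
1 \amp -1 \amp -1 \amp 1 \amp -6
\end{array}\right]
\qquad\rowredarrow\qquad
\left[\begin{array}{rrrrr}
1 \amp -1 \amp -1 \amp -3 \amp 6 \\
0 \amp 1 \amp 3 \amp -17 \amp 53 \\
0 \amp 0 \amp 0 \amp -1 \amp 3 \\
0 \amp 0 \amp 0 \amp 4 \amp -12
\end{array}\right],
\end{equation*}
after which only a few more operations are needed to reach the reduced matrix above.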
Column space of \(A\).
From the positions of the leading ones in the reduced matrix, we see that the first, second, and fourth columns of \(A\) are linearly independent, so a basis for the column space of \(A\) is
\begin{equation*}
\basisfont{B}_{\mathrm{col}} = \left\{
\left[\begin{array}{r} -8 \\ 1 \\ -2 \\ 1 \end{array}\right],
\left[\begin{array}{r} 9 \\ -1 \\ 2 \\ -1 \end{array}\right],
\left[\begin{array}{r} 7 \\ -3 \\ 5 \\ 1 \end{array}\right]
\right\},
\end{equation*}
and the dimension of the column space of \(A\) is \(3\text{.}\)
We can also see from the reduced matrix the exact dependence relationships between the columns of \(A\text{.}\) In the reduced matrix, the leading-one columns are the first three standard basis vectors, and we can easily see how the third and fifth columns can be decomposed as linear combinations of these standard basis vectors. In \(A\text{,}\) the third and fifth columns can be decomposed in the exact same way as linear combinations of the vectors in \(\basisfont{B}_{\mathrm{col}}\text{.}\) If we label the columns of \(A\) as \(\uvec{c}_1,\uvec{c}_2,\uvec{c}_3,\uvec{c}_4,\uvec{c}_5\text{,}\) then we have
\begin{align*}
\uvec{c}_3 \amp= 2\uvec{c}_1 + 3\uvec{c}_2, \amp
\uvec{c}_5 \amp= (-1)\uvec{c}_1 + 2\uvec{c}_2 + (-3)\uvec{c}_4.
\end{align*}
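As a quick check of the first of these relationships, we can compute the combination entry by entry:
\begin{equation*}
2 \left[\begin{array}{r} -8 \\ 1 \\ -2 \\ 1 \end{array}\right]
+ 3 \left[\begin{array}{r} 9 \\ -1 \\ 2 \\ -1 \end{array}\right]
= \left[\begin{array}{r} -16 + 27 \\ 2 - 3 \\ -4 + 6 \\ 2 - 3 \end{array}\right]
= \left[\begin{array}{r} 11 \\ -1 \\ 2 \\ -1 \end{array}\right]
= \uvec{c}_3,
\end{equation*}
and the second relationship can be verified in the same way.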
Row space of \(A\).
The leading ones guarantee that the nonzero rows in the reduced matrix are linearly independent. Since row reducing does not change the row space, we get our basis for the row space of \(A\) from the reduced matrix:
\begin{equation*}
\basisfont{B}_{\mathrm{row}} = \{
\left[\begin{array}{rrrrr}
1 \amp 0 \amp 2 \amp 0 \amp -1
\end{array}\right],
\left[\begin{array}{rrrrr}
0 \amp 1 \amp 3 \amp 0 \amp 2
\end{array}\right],
\left[\begin{array}{rrrrr}
0 \amp 0 \amp 0 \amp 1 \amp -3
\end{array}\right]
\}.
\end{equation*}
The dimension of the row space of \(A\) is again \(3\text{.}\)
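For example, the first row of \(A\) can be expressed in terms of this basis as
\begin{align*}
\left[\begin{array}{rrrrr} -8 \amp 9 \amp 11 \amp 7 \amp 5 \end{array}\right]
\amp= -8 \left[\begin{array}{rrrrr} 1 \amp 0 \amp 2 \amp 0 \amp -1 \end{array}\right]
+ 9 \left[\begin{array}{rrrrr} 0 \amp 1 \amp 3 \amp 0 \amp 2 \end{array}\right] \\
\amp\qquad + 7 \left[\begin{array}{rrrrr} 0 \amp 0 \amp 0 \amp 1 \amp -3 \end{array}\right],
\end{align*}
and the other rows of \(A\) can be decomposed similarly.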
Null space of \(A\).
Finally, for the null space of \(A\) we solve the homogeneous system as usual. The third and fifth columns represent free variables, so we set parameters \(x_3 = s\) and \(x_5 = t\text{.}\)
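The nonzero rows of the reduced matrix correspond to the equations
\begin{align*}
x_1 + 2 x_3 - x_5 \amp= 0, \amp
x_2 + 3 x_3 + 2 x_5 \amp= 0, \amp
x_4 - 3 x_5 \amp= 0,
\end{align*}
and solving for the leading variables in terms of the parameters leads to a general solution in parametric form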
\begin{align*}
x_1 \amp= -2s + t, \amp
x_2 \amp= -3s - 2t, \amp
x_3 \amp= s, \amp
x_4 \amp= 3t, \amp
x_5 \amp= t.
\end{align*}
In vector form, we have
\begin{equation*}
\uvec{x}
= \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix}
= \begin{bmatrix} -2s+t \\ -3s-2t \\ s \\ 3t \\ t \end{bmatrix}
= \begin{bmatrix} -2s \\ -3s \\ s \\ 0 \\ 0 \end{bmatrix}
+ \begin{bmatrix} t \\ -2t \\ 0 \\ 3t \\ t \end{bmatrix}
= s\left[\begin{array}{r} -2 \\ -3 \\ 1 \\ 0 \\ 0 \end{array}\right]
+ t\left[\begin{array}{r} 1 \\ -2 \\ 0 \\ 3 \\ 1 \end{array}\right].
\end{equation*}
So a basis for the null space of \(A\) is
\begin{equation*}
\basisfont{B}_{\mathrm{null}} = \left\{
\left[\begin{array}{r} -2 \\ -3 \\ 1 \\ 0 \\ 0 \end{array}\right],
\left[\begin{array}{r} 1 \\ -2 \\ 0 \\ 3 \\ 1 \end{array}\right]
\right\},
\end{equation*}
and the dimension of the null space is \(2\text{.}\)
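As a check, multiplying \(A\) by the first of these basis vectors does produce the zero vector:
\begin{equation*}
A \left[\begin{array}{r} -2 \\ -3 \\ 1 \\ 0 \\ 0 \end{array}\right]
= \left[\begin{array}{r} 16 - 27 + 11 \\ -2 + 3 - 1 \\ 4 - 6 + 2 \\ -2 + 3 - 1 \end{array}\right]
= \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix},
\end{equation*}
and a similar computation confirms the second. Notice also that the entries of these two basis vectors record exactly the column dependence relationships determined above, since a vector lies in the null space of \(A\) precisely when its entries are the coefficients of a linear combination of the columns of \(A\) that equals the zero vector.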
Subsection 20.4.2 Enlarging a linearly independent set
Row space is also a convenient tool for enlarging a linearly independent set into a basis. Here are two examples of carrying out this task, one using vectors in \(\R^n\text{,}\) and one using vectors in another space, where we use the associated coordinate vectors in \(\R^n\) to assist us.
Example 20.4.1. Using row space to enlarge a linearly independent set in \(\R^4\).
Suppose we would like to take the linearly independent set
\begin{equation*}
\{ (1,3,2,0), (2,6,1,1) \}
\end{equation*}
of vectors in \(\R^4\) and enlarge it into a basis for all of \(\R^4\text{.}\) Since \(\dim \R^4 = 4\text{,}\) we need two more vectors.
Using Proposition 17.5.6, we can start by determining a vector that is not in the subspace \(U = \Span\{\uvec{v}_1,\uvec{v}_2\}\text{,}\) where \(\uvec{v}_1,\uvec{v}_2\) are the two given vectors. However, guess-and-check is not a very efficient method for doing this. Instead, let’s set up a matrix with \(\uvec{v}_1\) and \(\uvec{v}_2\) as rows, so that \(U\) is precisely the row space of that matrix. We can then use row reduction to determine a simpler basis for \(U\text{:}\)
\begin{equation*}
\begin{bmatrix}
1 \amp 3 \amp 2 \amp 0 \\
2 \amp 6 \amp 1 \amp 1
\end{bmatrix}
\qquad\rowredarrow\qquad
\left[\begin{array}{rrrr}
1 \amp 3 \amp 0 \amp \frac{2}{3} \\
0 \amp 0 \amp 1 \amp -\frac{1}{3}
\end{array}\right].
\end{equation*}
We can see from the pattern of leading ones in the reduced matrix that to span all of \(\R^4\text{,}\) we need to introduce some “independence” in the second and fourth coordinates. So let’s try enlarging our initial set of vectors by the second and fourth standard basis vectors:
\begin{equation*}
\begin{bmatrix}
1 \amp 3 \amp 2 \amp 0 \\
2 \amp 6 \amp 1 \amp 1 \\
0 \amp 1 \amp 0 \amp 0 \\
0 \amp 0 \amp 0 \amp 1
\end{bmatrix}
\qquad\rowredarrow\qquad
\begin{bmatrix}
1 \amp 0 \amp 0 \amp 0 \\
0 \amp 1 \amp 0 \amp 0 \\
0 \amp 0 \amp 1 \amp 0 \\
0 \amp 0 \amp 0 \amp 1
\end{bmatrix}.
\end{equation*}
The rows of the reduced matrix are the four standard basis vectors for \(\R^4\text{,}\) hence the row space of the reduced matrix is all of \(\R^4\text{.}\) We know that row operations do not change row space, so the rows of the initial matrix must also span all of \(\R^4\text{.}\) Since we have a spanning set of four vectors for a dimension-\(4\) space, those four vectors must form a basis for the space.
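That is, the four rows of the initial matrix,
\begin{equation*}
\{ (1,3,2,0), (2,6,1,1), (0,1,0,0), (0,0,0,1) \},
\end{equation*}
form a basis for \(\R^4\) that contains the two vectors we started with.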
Example 20.4.2. Using row space to enlarge a linearly independent set in \(\matrixring_2(\R)\).
Suppose we would like to take the linearly independent set
\begin{equation*}
\left\{
\begin{bmatrix} 1 \amp 3 \\ 2 \amp 0 \end{bmatrix},
\begin{bmatrix} 2 \amp 6 \\ 1 \amp 1 \end{bmatrix}
\right\}
\end{equation*}
of vectors in \(\matrixring_2(\R)\) and enlarge it into a basis for all of \(\matrixring_2(\R)\text{.}\) Since \(\dim \matrixring_2(\R) = 4\text{,}\) we need two more vectors. Now, we cannot row reduce the given matrices — that would be meaningless here, as these matrices are not made up of row or column vectors; they are themselves the vectors. However, we can get back to the land of row vectors by using coordinate vectors relative to the standard basis \(\basisfont{S}\) for \(\matrixring_2(\R)\text{:}\)
\begin{align*}
\rmatrixOf{\uvec{v}_1}{S} \amp= (1,3,2,0), \amp
\rmatrixOf{\uvec{v}_2}{S} \amp= (2,6,1,1),
\end{align*}
where \(\uvec{v}_1,\uvec{v}_2\) are the two given vectors. These coordinate vectors are precisely the vectors from Example 20.4.1 above, so based on those results we expect to be able to enlarge our linearly independent set to a basis using vectors \(\uvec{v}_3\) and \(\uvec{v}_4\) that have coordinate vectors
\begin{align*}
\rmatrixOf{\uvec{v}_3}{S} \amp= (0,1,0,0), \amp
\rmatrixOf{\uvec{v}_4}{S} \amp= (0,0,0,1).
\end{align*}
Thus, we can enlarge the initial set of vectors to the basis
\begin{equation*}
\left\{
\begin{bmatrix} 1 \amp 3 \\ 2 \amp 0 \end{bmatrix},
\begin{bmatrix} 2 \amp 6 \\ 1 \amp 1 \end{bmatrix},
\begin{bmatrix} 0 \amp 1 \\ 0 \amp 0 \end{bmatrix},
\begin{bmatrix} 0 \amp 0 \\ 0 \amp 1 \end{bmatrix}
\right\}
\end{equation*}
for \(\matrixring_2(\R)\text{.}\)