Section 21.3 Concepts
We have already seen in Example 17.4.8 that the solution set of a homogeneous system \(A\uvec{x} = \zerovec\) with \(m\times n\) coefficient matrix \(A\) is a subspace of \(\R^n\text{.}\) We will return to this special subspace at the end of this section, but first we will discuss a special subspace of \(\R^m\) related to nonhomogeneous systems with coefficient matrix \(A\text{.}\)
Subsection 21.3.1 Column space
The “consistent space” of a coefficient matrix.
The solution set of a nonhomogeneous system \(A\uvec{x} = \uvec{b}\) with \(m\times n\) coefficient matrix \(A\) cannot be a subspace of \(\R^n\) because it can never contain the zero vector. Even worse, if the system is inconsistent, then the solution set does not contain any vectors at all.
Question 21.3.1.
Amongst all systems with coefficient matrix \(A\text{,}\) which are consistent?
We know that the homogeneous system \(A\uvec{x} = \zerovec\) is consistent because it has at least the trivial solution. But for what other vectors \(\uvec{b}\) besides \(\uvec{b} = \zerovec\) is the system \(A\uvec{x}=\uvec{b}\) consistent? It is possible to verify directly that the collection of all such \(\uvec{b}\) vectors is a subspace of \(\R^m\text{.}\)
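Indeed, if \(A\uvec{x}_1 = \uvec{b}_1\) and \(A\uvec{x}_2 = \uvec{b}_2\text{,}\) then
\begin{equation*}
A(\uvec{x}_1 + \uvec{x}_2) = A\uvec{x}_1 + A\uvec{x}_2 = \uvec{b}_1 + \uvec{b}_2,
\qquad
A(k\uvec{x}_1) = k(A\uvec{x}_1) = k\uvec{b}_1\text{,}
\end{equation*}
so this collection of \(\uvec{b}\) vectors contains the zero vector and is closed under both vector addition and scalar multiplication.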
Until we know more about it, for now let's refer to this subspace of \(\R^m\) as the consistent space of \(A\text{.}\)
Consistent space versus column space.
To better understand this so-called consistent space, we should relate it back to the matrix \(A\) as we did in Discovery 21.2, because \(A\) is the only thing common to all the \(\uvec{b}\) vectors in this space. Let's again think of \(A\) as being made up of column vectors in \(\R^m\text{:}\)
\begin{equation*}
A = \begin{bmatrix} \uvec{c}_1 & \uvec{c}_2 & \cdots & \uvec{c}_n \end{bmatrix}\text{.}
\end{equation*}
In Discovery 21.1, we found that the result of computing \(A\uvec{e}_j\) is \(\uvec{c}_j\text{,}\) the \(\nth[j]\) column of \(A\) (where \(\uvec{e}_j\) is the \(\nth[j]\) standard basis vector in \(\R^n\text{,}\) as usual). But this says that each system \(A\uvec{x}=\uvec{c}_j\) is consistent, since there is at least one solution \(\uvec{x}=\uvec{e}_j\text{.}\) Therefore, each of the columns of \(A\) is in the consistent space of \(A\text{.}\) And because the span of these columns is the smallest subspace that contains each of them, we can conclude that every vector in the column space \(\Span\{\uvec{c}_1,\uvec{c}_2,\dotsc,\uvec{c}_n\}\) of \(A\) is also in the consistent space of \(A\text{.}\)
What other vectors could be in this space? If \(A\uvec{x}=\uvec{b}\) is consistent, then it has at least one solution, say
\begin{equation*}
\uvec{x}_0 = (x_1, x_2, \dotsc, x_n)\text{.}
\end{equation*}
But then
\begin{equation*}
\uvec{b} = A\uvec{x}_0 = x_1 \uvec{c}_1 + x_2 \uvec{c}_2 + \dotsb + x_n \uvec{c}_n\text{.}
\end{equation*}
So whenever \(A\uvec{x}=\uvec{b}\) is consistent, we find that \(\uvec{b}\) is equal to some linear combination of the columns of \(A\) (with coefficients taken from the components of a solution vector). In other words, every vector in the consistent space of \(A\) is also in the column space of \(A\text{.}\) So the two spaces are equal: the system \(A\uvec{x}=\uvec{b}\) is consistent when, and only when, the vector of constants \(\uvec{b}\) is in the column space of \(A\).
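For instance, if \(A\) were the \(3\times 2\) matrix with columns \(\uvec{c}_1 = (1,2,1)\) and \(\uvec{c}_2 = (2,4,2)\text{,}\) then its column space is the line \(\Span\{(1,2,1)\}\) in \(\R^3\text{,}\) so \(A\uvec{x} = \uvec{b}\) is consistent for \(\uvec{b} = (3,6,3)\) (which is \(3\uvec{c}_1\)) but inconsistent for \(\uvec{b} = (1,0,0)\text{,}\) which does not lie on that line.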
Determining a basis for a column space.
Since the columns of \(A\) are, by definition, a spanning set for the column space of \(A\text{,}\) we can reduce it to a basis. Once again, we can apply row reduction to this task. Row reducing is equivalent to multiplying on the left by elementary matrices, and when we defined matrix multiplication in Subsection 4.3.7 we did so column-by-column: for an elementary matrix \(E\text{,}\)
\begin{equation*}
E A = E \begin{bmatrix} \uvec{c}_1 & \uvec{c}_2 & \cdots & \uvec{c}_n \end{bmatrix}
    = \begin{bmatrix} E\uvec{c}_1 & E\uvec{c}_2 & \cdots & E\uvec{c}_n \end{bmatrix}\text{.}
\end{equation*}
Because matrix multiplication distributes over linear combinations, multiplying a collection of column vectors by a common matrix cannot create independence out of dependence. Even better, the process of row reducing can be reversed (i.e. we are multiplying by invertible matrices), so it follows that multiplying a collection of column vectors by an invertible common matrix cannot create dependence out of independence. As we partially reasoned in Discovery 21.3, this means that we can recognize independence/dependence relationships amongst the columns of \(A\) from the independence/dependence relationships amongst the columns of reduced forms of \(A\text{,}\) leading to the following procedure.
Procedure 21.3.2. To determine a basis for the column space of matrix \(A\).
- Row reduce \(A\) to at least REF.
- Extract from \(A\) all those columns in positions corresponding to the positions of the leading ones in the reduced matrix.
These extracted columns will form a basis for the column space of \(A\text{.}\)
Remark 21.3.3.
It is important that you take the basis vectors from the columns of \(A\), not from the columns of the reduced matrix — row operations do not change independence/dependence relationships amongst the columns, but in general they do change the column space.
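For example, consider the \(3\times 4\) matrix
\begin{equation*}
A = \begin{bmatrix} 1 & 2 & 0 & 1 \\ 2 & 4 & 1 & 1 \\ 1 & 2 & 1 & 0 \end{bmatrix},
\qquad \text{with RREF} \qquad
\begin{bmatrix} 1 & 2 & 0 & 1 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 0 \end{bmatrix}\text{.}
\end{equation*}
The leading ones appear in the first and third columns, so the first and third columns of \(A\) itself, \((1,2,1)\) and \((0,1,1)\text{,}\) form a basis for the column space of \(A\text{.}\) Indeed, the second column of \(A\) is twice the first, and the fourth is the first minus the third. Notice that the first and third columns of the RREF would not do: they span a different subspace of \(\R^3\text{,}\) one that does not even contain the columns of \(A\text{.}\)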
Using column space to determine linear dependence/independence.
In Discovery 21.4.a, we used this new procedure to create a reinterpretation of the Test for Linear Dependence/Independence for vectors in \(\R^m\text{.}\)
Procedure 21.3.4. To use row reduction to test linear dependence/independence in \(\R^m\).
To determine whether a collection of \(n\) vectors in \(\R^m\) is linearly dependent or independent, form an \(m\times n\) matrix by using the vectors as columns, and then row reduce to determine the rank of the matrix. If the rank is equal to \(n\) (i.e. there is a leading one in every column of the reduced matrix), then the vectors are linearly independent. If the rank is less than \(n\) (i.e. at least one column of the reduced matrix does not contain a leading one), then the vectors are linearly dependent.
Note that this isn't really a new version of the Test for Linear Dependence/Independence; it's just a shortcut — if we were to use the full test, the column vectors we are testing would appear as the columns of the coefficient matrix for the homogeneous system created by the test. (See Example 18.4.1, and the other examples in Subsection 18.4.1.)
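For example, to test whether the vectors \((1,2,1)\text{,}\) \((2,4,2)\text{,}\) \((0,1,1)\) in \(\R^3\) are linearly independent, form the matrix
\begin{equation*}
\begin{bmatrix} 1 & 2 & 0 \\ 2 & 4 & 1 \\ 1 & 2 & 1 \end{bmatrix}\text{,}
\end{equation*}
which has REF
\begin{equation*}
\begin{bmatrix} 1 & 2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}\text{.}
\end{equation*}
The rank is \(2\text{,}\) less than \(n = 3\text{,}\) so the vectors are linearly dependent (as expected, since the second vector is twice the first).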
In Discovery 21.4.b and Discovery 21.4.c, we also used this new procedure to connect column space to invertibility for a square matrix. We will summarize these new facts in Subsection 21.5.3.
Subsection 21.3.2 Row space
Analyzing the row space of a matrix is considerably easier. As we discovered in Discovery 21.5 and Discovery 21.6, elementary row operations do not change the row space of a matrix, so the row spaces of a matrix and each of its REFs are the same space. Clearly we do not need the zero rows from an REF to span this space. But the pattern of leading ones guarantees that the nonzero rows in an REF are linearly independent.
Procedure 21.3.5. To determine a basis for the row space of matrix \(A\).
- Row reduce \(A\) to at least REF.
- Extract the nonzero rows from the REF you have computed.
These extracted rows will form a basis for the row space of \(A\text{.}\)
Remark 21.3.6.
Note the difference from the column space procedure — in this procedure we get the basis vectors from the reduced matrix, not from the original matrix.
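Returning to the \(3\times 4\) matrix \(A\) from the column space example above, the nonzero rows of its RREF, \((1,2,0,1)\) and \((0,0,1,-1)\text{,}\) form a basis for the row space of \(A\text{.}\) The rows of \(A\) itself would not be a suitable choice without further work, since they are linearly dependent (the third row is the second minus the first).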
We can also use the row space procedure to test vectors for linear independence.
Procedure 21.3.7. A second way to use row reduction to test linear dependence/independence in \(\R^m\).
To determine whether a collection of \(m\) vectors in \(\R^n\) is linearly dependent or independent, form an \(m\times n\) matrix by using the vectors as rows, and then row reduce to determine the rank of the matrix. If the rank is equal to \(m\) (i.e. no zero rows can be produced by reducing), then the vectors are linearly independent. If the rank is less than \(m\) (i.e. reducing produces at least one zero row), then the vectors are linearly dependent.
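For example, the three vectors \((1,2,0,1)\text{,}\) \((2,4,1,1)\text{,}\) \((1,2,1,0)\) in \(\R^4\) are the rows of the matrix \(A\) from the examples above, which has rank \(2\text{.}\) Since the rank is less than \(m = 3\text{,}\) these vectors are linearly dependent, as we already observed.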
Subsection 21.3.3 Column space versus row space
Question 21.3.8.
Which procedure — column space or row space — should we use?
When testing vectors from \(\R^n\) for linear independence, we clearly have a choice of whether to form a matrix using those vectors as columns or as rows. But we also have a choice when computing a basis for either type of space, because the column space of a matrix is the same as the row space of the transpose, and the row space of a matrix is the same as the column space of the transpose.
In Discovery 21.7, you were asked to think about this question. You might have considered the end results of the two procedures to determine the pros and cons of one procedure over the other.
- Column space
Produces a basis involving vectors from the original collection.
- Row space
Produces a “simplified” basis.
In the column space procedure, we always go back to the original matrix to pick out certain columns. So, this procedure effectively performs the task of reducing a spanning set down to a basis, a task that we knew could be done (Proposition 19.5.1) but didn't have a systematic means of carrying out. In the row space procedure, we take our basis vectors from the simplified nonzero rows of an REF for the matrix. Because the leading one in each row is in a different position, expressing other vectors in the space as linear combinations of these basis vectors is much more straightforward than it is in general. In fact, if you have taken a basis for the row space from the RREF, expressing other vectors in the space as linear combinations of these basis vectors can be done by inspection.
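For instance, with the RREF basis \((1,2,0,1)\text{,}\) \((0,0,1,-1)\) from the row space example above, any vector in that row space, such as \((3,6,2,1)\text{,}\) can be decomposed by inspection: the coefficients are simply the components of the vector in the two leading-one positions (the first and the third), giving
\begin{equation*}
(3,6,2,1) = 3\,(1,2,0,1) + 2\,(0,0,1,-1)\text{.}
\end{equation*}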
Subsection 21.3.4 Null space and the dimensions of the three spaces
We have already seen through examples in Subsection 20.4.1 how to extract a basis for the solution space for a homogeneous system \(A\uvec{x}=\zerovec\) (now called the null space of \(A\)) from the parameters assigned after row reducing.
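For the \(3\times 4\) matrix \(A\) from the examples above, the RREF
\begin{equation*}
\begin{bmatrix} 1 & 2 & 0 & 1 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\end{equation*}
leads to parameters \(x_2 = s\) and \(x_4 = t\) for the free variables, with \(x_1 = -2s - t\) and \(x_3 = t\text{.}\) Writing the general solution as \(s\,(-2,1,0,0) + t\,(-1,0,1,1)\text{,}\) we obtain the basis \(\{(-2,1,0,0),(-1,0,1,1)\}\) for the null space of \(A\text{.}\)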
The null space of \(A\) doesn't just represent the set of solutions to the homogeneous system — Lemma 4.5.4 tells us that it represents most of the data we need in order to know the solution set of every system that has \(A\) as a coefficient matrix. If we know one specific solution \(\uvec{x} = \uvec{x}_1\) to nonhomogeneous system \(A\uvec{x}=\uvec{b}\text{,}\) then every other solution can be obtained by adding to \(\uvec{x}_1\) a vector from the null space. Geometrically, this represents a translation of the null space away from the origin, like a plane that is translated away from the origin by an “initial” point \(\uvec{x}_1\text{.}\)
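For instance, with the matrix \(A\) above and \(\uvec{b} = (1,1,0)\) (the fourth column of \(A\)), one specific solution is \(\uvec{x}_1 = (0,0,0,1)\text{,}\) and the full solution set of \(A\uvec{x} = \uvec{b}\) is the translated null space
\begin{equation*}
\uvec{x} = \uvec{x}_1 + s\,(-2,1,0,0) + t\,(-1,0,1,1)\text{.}
\end{equation*}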
All three spaces — column, row, and null — are connected through the RREF of the matrix. For column space, we get a basis vector for each leading one in the RREF. For row space, we get a basis vector for each nonzero row in the RREF, and a row in the RREF is nonzero precisely when it contains a leading one. So even though column space is a subspace of \(\R^m\) and row space is a subspace of \(\R^n\) (where \(A\) is an \(m\times n\) matrix), the column space and the row space of \(A\) have the same dimension, and this dimension is equal to the rank of \(A\text{.}\) On the other hand, for the null space we get one basis vector for each parameter required to solve the homogeneous system. Parameters are assigned to free variables, and free variables are those whose columns do not contain a leading one. So the dimension of the null space is equal to the difference between the number of columns of \(A\) and the rank of \(A\text{,}\) which is just a more sophisticated way to state Proposition 2.5.8.
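The matrix \(A\) from the examples above illustrates this: it has rank \(2\text{,}\) so its column space (a subspace of \(\R^3\)) and its row space (a subspace of \(\R^4\)) both have dimension \(2\text{,}\) while its null space has dimension \(4 - 2 = 2\text{,}\) matching the two parameters required above.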