Section 20.5 Theory
Subsection 20.5.1 Column space
First we’ll record some facts concerning how multiplying each vector in a set of column vectors in $\mathbb{R}^n$ by a common matrix affects linear dependence and independence, leading to our conclusion about how to determine a basis for the column space of a matrix by examining its RREF.
Proposition 20.5.1. Dependence/independence versus matrix transformation.
- If $\{A\mathbf{v}_1, A\mathbf{v}_2, \dotsc, A\mathbf{v}_\ell\}$ is a linearly independent set of vectors, then so too is $\{\mathbf{v}_1, \mathbf{v}_2, \dotsc, \mathbf{v}_\ell\}$.
- If $A$ is square and invertible and $\{\mathbf{v}_1, \mathbf{v}_2, \dotsc, \mathbf{v}_\ell\}$ is a linearly independent set, then so too is $\{A\mathbf{v}_1, A\mathbf{v}_2, \dotsc, A\mathbf{v}_\ell\}$.
- If $A$ is square and invertible and $\mathbf{w}$ is another column vector in $\mathbb{R}^n$ so that vector $A\mathbf{w}$ is linearly dependent with the vectors $A\mathbf{v}_1, A\mathbf{v}_2, \dotsc, A\mathbf{v}_\ell$ by the dependence relation
\[ A\mathbf{w} = k_1 A\mathbf{v}_1 + k_2 A\mathbf{v}_2 + \dotsb + k_\ell A\mathbf{v}_\ell, \]
then $\mathbf{w}$ is linearly dependent with $\mathbf{v}_1, \mathbf{v}_2, \dotsc, \mathbf{v}_\ell$ by a dependence relation involving the same scalars,
\[ \mathbf{w} = k_1 \mathbf{v}_1 + k_2 \mathbf{v}_2 + \dotsb + k_\ell \mathbf{v}_\ell. \]
Proof of Statement 1.
Let’s apply the Test for Linear Dependence/Independence to the vectors $\mathbf{v}_1, \mathbf{v}_2, \dotsc, \mathbf{v}_\ell$: suppose that $k_1, k_2, \dotsc, k_\ell$ are scalars so that
\[ k_1 \mathbf{v}_1 + k_2 \mathbf{v}_2 + \dotsb + k_\ell \mathbf{v}_\ell = \mathbf{0}. \tag{✶} \]
Multiplying both sides of this equation by the matrix $A$ and using some matrix algebra, we get
\[ k_1 A\mathbf{v}_1 + k_2 A\mathbf{v}_2 + \dotsb + k_\ell A\mathbf{v}_\ell = \mathbf{0}. \]
But we have assumed that the vectors $A\mathbf{v}_1, A\mathbf{v}_2, \dotsc, A\mathbf{v}_\ell$ are linearly independent, so the only way this linear combination could equal the zero vector is if all the scalars are zero: $k_1 = k_2 = \dotsb = k_\ell = 0$. Thus, the linear combination in (✶) must be the trivial one, and so the vectors $\mathbf{v}_1, \mathbf{v}_2, \dotsc, \mathbf{v}_\ell$ are linearly independent.
Proof of Statement 2.
We assume $\mathbf{v}_1, \mathbf{v}_2, \dotsc, \mathbf{v}_\ell$ are linearly independent. Since we also assume $A$ to be invertible, we can restate this as saying that the vectors $A^{-1}(A\mathbf{v}_1), A^{-1}(A\mathbf{v}_2), \dotsc, A^{-1}(A\mathbf{v}_\ell)$ are linearly independent. Applying Statement 1 of this proposition, with $A^{-1}$ in the role of the transforming matrix and the vectors $A\mathbf{v}_1, A\mathbf{v}_2, \dotsc, A\mathbf{v}_\ell$ in the role of the transformed vectors, we conclude that $A\mathbf{v}_1, A\mathbf{v}_2, \dotsc, A\mathbf{v}_\ell$ are linearly independent.
Proof of Statement 3.
Simply apply the inverse $A^{-1}$ to both sides of
\[ A\mathbf{w} = k_1 A\mathbf{v}_1 + k_2 A\mathbf{v}_2 + \dotsb + k_\ell A\mathbf{v}_\ell \]
to obtain
\[ \mathbf{w} = k_1 \mathbf{v}_1 + k_2 \mathbf{v}_2 + \dotsb + k_\ell \mathbf{v}_\ell. \]
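As a concrete illustration of Statements 1 and 2 (the specific matrices here are our own, chosen only for illustration), take
\[ A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \qquad \mathbf{v}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad \mathbf{v}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}. \]
Here $A$ is invertible, and the vectors $A\mathbf{v}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $A\mathbf{v}_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ are again linearly independent. The invertibility hypothesis in Statement 2 cannot be dropped: the non-invertible matrix $B = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ sends $\mathbf{v}_2$ to the zero vector, so $\{B\mathbf{v}_1, B\mathbf{v}_2\}$ is linearly dependent even though $\{\mathbf{v}_1, \mathbf{v}_2\}$ is not.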
Corollary 20.5.2. Column space basis and dimension.
- A basis for the column space of an $m \times n$ matrix $A$ can be formed from those columns of $A$ (as column vectors in $\mathbb{R}^m$) in positions corresponding to the locations of the leading ones in the RREF of $A$.
- The dimension of the column space of a matrix is equal to the rank of the matrix.
Proof of Statement 1.
By definition, the columns of $A$ are a spanning set for the column space of $A$. By Proposition 18.5.1, this spanning set can be reduced to a basis; it’s a matter of determining the largest possible linearly independent set of these spanning column vectors.
Let $E$ be the product of elementary matrices corresponding to some sequence of row operations that reduces $A$ to its RREF $R_A$, so that $EA = R_A$. Because of the nature of RREF, each column of $R_A$ that contains a leading one will be a standard basis vector in $\mathbb{R}^m$, no two such leading-one columns will be the same standard basis vector, and each column of $R_A$ that does not contain a leading one will be a linear combination of those leading-one columns that appear to its left. Therefore, the leading-one columns represent the largest set of linearly independent vectors that can be formed from the columns of $R_A$. Since $E$ is invertible, the statements of Proposition 20.5.1 tell us that the columns of $A$ will exhibit the same relationships: those columns of $A$ in positions where the leading ones occur in $R_A$ will be linearly independent, and they will form the largest possible collection of linearly independent columns of $A$, because each of the other columns will be linearly dependent with the leading-one-position columns of $A$ to its left.
Thus, we can reduce the spanning set made up of all columns of $A$ to a basis for the column space by discarding the linearly dependent columns and keeping only those columns in positions corresponding to the locations where the leading ones occur in $R_A$.
Proof of Statement 2.
Since we obtain a basis for the column space by taking those columns of the matrix in positions corresponding to the leading ones in a reduced form for the matrix, the number of basis vectors is equal to the number of leading ones, and this number is precisely the rank of the matrix.
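As a quick worked example (the matrix is ours, not from the text above), consider
\[ A = \begin{bmatrix} 1 & 2 & 0 & 1 \\ 2 & 4 & 1 & 1 \\ 3 & 6 & 1 & 2 \end{bmatrix}, \qquad \text{with RREF} \qquad R_A = \begin{bmatrix} 1 & 2 & 0 & 1 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 0 \end{bmatrix}. \]
The leading ones of $R_A$ lie in columns 1 and 3, so the first and third columns of $A$ itself, $(1,2,3)$ and $(0,1,1)$, form a basis for the column space of $A$. In particular, the column space has dimension $2$, which is indeed the rank of $A$.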
Subsection 20.5.2 Row space
Next we’ll record our observations concerning how the row operations affect the row space of a matrix, leading to our conclusion about how to obtain a basis for the row space of a matrix from its RREF.
Proposition 20.5.3. Row space versus row operations.
If an elementary row operation is applied to a matrix, then the row space of the new matrix is the same as the row space of the old matrix.
Proof.
Consider an $m \times n$ matrix $A$ as a collection of row vectors in $\mathbb{R}^n$:
\[ A = \begin{bmatrix} \mathbf{r}_1 \\ \mathbf{r}_2 \\ \vdots \\ \mathbf{r}_m \end{bmatrix}. \]
Then the row space of $A$ is, by definition, the subspace $\operatorname{span}\{\mathbf{r}_1, \mathbf{r}_2, \dotsc, \mathbf{r}_m\}$ of $\mathbb{R}^n$.
As we did in Discovery 20.5, we will make repeated use of Statement 2 of Proposition 16.5.6, which tells us how to determine when two spanning sets generate the same subspace.
Let’s consider each type of elementary row operation in turn.
- Suppose we swap two rows of $A$, obtaining a new matrix $A'$. The row space of $A'$ is the span of its row vectors. But every row vector of $A'$ is equal to one of the row vectors of $A$, and vice versa. So clearly the conditions of the above-referenced Statement 2 are satisfied, and the row spaces of the two matrices are the same space.
- Suppose we multiply one of the rows $\mathbf{r}_i$ of $A$ by a nonzero constant $k$, obtaining a new matrix $A'$. Again, most of the row vectors of the new matrix are equal to one of the row vectors of $A$, and vice versa. So to fully satisfy the conditions of the above-referenced Statement 2, we need to verify that $k\mathbf{r}_i$ is somehow a linear combination of row vectors of $A$, and that $\mathbf{r}_i$ is somehow a linear combination of row vectors of $A'$. But $k\mathbf{r}_i$ is already expressed as a scalar multiple of a row vector of $A$, and since $k$ is nonzero we can also write
\[ \mathbf{r}_i = \tfrac{1}{k}\,(k\mathbf{r}_i), \]
so that $\mathbf{r}_i$ is also a scalar multiple of a row vector of $A'$. With the conditions of the above-referenced Statement 2 now fully satisfied, we can conclude that the row spaces of the two matrices are the same space.
- Suppose we replace one row vector $\mathbf{r}_i$ of $A$ by the sum of that row and a scalar multiple of another:
\[ \mathbf{r}_i' = \mathbf{r}_i + k\mathbf{r}_j. \]
Once again, most of the row vectors of the new matrix $A'$ are equal to one of the row vectors of $A$, and vice versa. So to fully satisfy the conditions of the above-referenced Statement 2, we need to verify that $\mathbf{r}_i'$ is somehow a linear combination of row vectors of $A$, and that $\mathbf{r}_i$ is somehow a linear combination of row vectors of $A'$. But $\mathbf{r}_i' = \mathbf{r}_i + k\mathbf{r}_j$ is already expressed as a linear combination of row vectors of $A$, and for $\mathbf{r}_i$ we can write
\[ \mathbf{r}_i = \mathbf{r}_i' - k\mathbf{r}_j = (\mathbf{r}_i + k\mathbf{r}_j) - k\mathbf{r}_j, \]
a linear combination of row vectors of $A'$ (note that $\mathbf{r}_j$ is a row of $A'$ as well as of $A$). With the conditions of the above-referenced Statement 2 now fully satisfied, we can conclude that the row spaces of the two matrices are the same space.
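Here is a small sanity check of the third operation (the vectors are ours): with rows $\mathbf{r}_1 = (1,0)$ and $\mathbf{r}_2 = (0,1)$, replacing $\mathbf{r}_1$ by $\mathbf{r}_1 + 2\mathbf{r}_2 = (1,2)$ produces the rows $(1,2)$ and $(0,1)$. Each new row is a linear combination of the old rows, and conversely $(1,0) = (1,2) - 2(0,1)$, so both sets of rows span the same subspace (here, all of $\mathbb{R}^2$).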
Corollary 20.5.4. Row space basis and dimension.
Let $A$ represent a matrix.
- If $E$ is an invertible square matrix of compatible size, then $EA$ and $A$ have the same row space.
- The row space of each REF for $A$ (including the RREF of $A$) is always the same as that of $A$.
- The nonzero rows of each REF for $A$ form a basis for the row space of $A$.
- The dimension of the row space of $A$ is equal to the rank of $A$.
Proof of Statement 1.
Since $E$ is invertible, it can be expressed as a product of elementary matrices (Theorem 6.5.2), and the product $EA$ has the same result as applying to $A$ the sequence of row operations represented by those elementary matrices. But Proposition 20.5.3 tells us that applying those operations does not change the row space.
Proof of Statement 2.
Let $A'$ be an REF for $A$, and let $E_1, E_2, \dotsc, E_\ell$ be elementary matrices corresponding to some sequence of row operations that reduces $A$ to $A'$. Set $E = E_\ell \dotsm E_2 E_1$. Then $E$ is an invertible matrix and $EA = A'$. Therefore, $A'$ has the same row space as $A$ by Statement 1 of this corollary.
Proof of Statement 3.
Let $A'$ be an REF for $A$. By Statement 2 of this corollary, the rows of $A'$ are a spanning set for the row space of $A$. Clearly we can discard any zero rows from this spanning set, so it just remains to verify that the nonzero rows of $A'$ are linearly independent. For this, we will use Proposition 17.5.6, building up our linearly independent spanning set one vector at a time. Let $\mathbf{r}_1, \mathbf{r}_2, \dotsc, \mathbf{r}_\ell$ represent the nonzero rows of $A'$ from top to bottom. Start with $\mathbf{r}_\ell$: all by itself, this one nonzero vector is linearly independent. Now, $\mathbf{r}_{\ell-1}$ cannot be in $\operatorname{span}\{\mathbf{r}_\ell\}$, because the leading one in $\mathbf{r}_{\ell-1}$ appears to the left of the leading one in $\mathbf{r}_\ell$, and so no scalar multiple of $\mathbf{r}_\ell$ will have a nonzero entry in the component where $\mathbf{r}_{\ell-1}$ has its leading one. From this, Proposition 17.5.6 tells us that $\{\mathbf{r}_{\ell-1}, \mathbf{r}_\ell\}$ is linearly independent. Moving on, $\mathbf{r}_{\ell-2}$ cannot be in $\operatorname{span}\{\mathbf{r}_{\ell-1}, \mathbf{r}_\ell\}$, because the leading one in $\mathbf{r}_{\ell-2}$ appears to the left of both the leading one in $\mathbf{r}_{\ell-1}$ and the leading one in $\mathbf{r}_\ell$, and so no linear combination of those two vectors will have a nonzero entry in the component where $\mathbf{r}_{\ell-2}$ has its leading one. From this, Proposition 17.5.6 tells us that $\{\mathbf{r}_{\ell-2}, \mathbf{r}_{\ell-1}, \mathbf{r}_\ell\}$ is linearly independent. Repeating this argument as we move up the rows of $A'$, we see that the nonzero rows of $A'$ are linearly independent when taken altogether.
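To see this staircase argument in a concrete case (the matrix is ours), consider the REF
\[ A' = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 4 \\ 0 & 0 & 1 \end{bmatrix}. \]
Row $\mathbf{r}_3 = (0,0,1)$ is nonzero, hence linearly independent on its own. Row $\mathbf{r}_2 = (0,1,4)$ is not in $\operatorname{span}\{\mathbf{r}_3\}$, since every multiple of $\mathbf{r}_3$ has a zero in the second component, where $\mathbf{r}_2$ has its leading one. And $\mathbf{r}_1 = (1,2,3)$ is not in $\operatorname{span}\{\mathbf{r}_2, \mathbf{r}_3\}$, since every combination of those rows has a zero first component. So all three rows are linearly independent.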
Proof of Statement 4.
Applying Statement 3 of this corollary to the RREF $R_A$ of $A$, the nonzero rows of $R_A$ form a basis for the row space of $A$. But the nonzero rows of $R_A$ must all contain leading ones, so the number of vectors in a basis for the row space of $A$ is equal to the number of leading ones in $R_A$, which is the rank of $A$, as desired.
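Continuing the worked example from Subsection 20.5.1 above, the nonzero rows of the RREF computed there, $(1,2,0,1)$ and $(0,0,1,-1)$, form a basis for the row space of that matrix $A$, so its row space has dimension $2 = \operatorname{rank} A$, the same as the dimension of its column space.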
Subsection 20.5.3 Column and row spaces versus rank and invertibility
As discovered in Discovery 20.4, we can use our observations recorded in Proposition 20.5.1 to connect column space to invertibility. We can similarly use Corollary 20.5.4 to also connect row space to invertibility.
First, we will extend the list of properties that are equivalent to invertibility of a square matrix, first started in Theorem 6.5.2, and then continued in Theorem 10.5.3.
Theorem 20.5.5. Characterizations of invertibility.
For a square matrix $A$, the following are equivalent.
1. Matrix $A$ is invertible.
2. Every linear system that has $A$ as a coefficient matrix has one unique solution.
3. The homogeneous system $A\mathbf{x} = \mathbf{0}$ has only the trivial solution.
4. There is some linear system that has $A$ as a coefficient matrix and has one unique solution.
5. The rank of $A$ is equal to the size of $A$.
6. The RREF of $A$ is the identity.
7. Matrix $A$ can be expressed as a product of some number of elementary matrices.
8. The determinant of $A$ is nonzero.
9. The null space of $A$ consists of only the zero vector.
10. The columns of $A$ are linearly independent.
11. The columns of $A$ form a basis for $\mathbb{R}^n$, where $n$ is the size of $A$.
12. The rows of $A$ are linearly independent.
13. The rows of $A$ form a basis for $\mathbb{R}^n$, where $n$ is the size of $A$.
Proof.
We have previously encountered the equivalence of many of these statements, most recently in Theorem 10.5.3. So currently we only need to concern ourselves with the new statements. For each of these, if we can establish equivalence of the new statement to one of the old, then the new statement must be equivalent to all of the old, by the transitivity of logical equivalence.
Statement 9.
This is just a restatement of Statement 3 using the concept of null space.
Statement 10.
From our reinterpretation of Proposition 17.5.1, stated in Procedure 20.3.4, we know that all of the columns of $A$ will be linearly independent if and only if every column of the RREF of $A$ has a leading one. Therefore, this statement is equivalent to Statement 5.
Statement 11.
This statement is equivalent to Statement 10, since Proposition 19.5.5 tells us that we need exactly $n$ linearly independent vectors to form a basis for $\mathbb{R}^n$.
Statement 12.
From the row space version of the Test for Linear Dependence/Independence stated in Procedure 20.3.7, we know that all of the rows of $A$ will be linearly independent if and only if every row of the RREF of $A$ is nonzero. Therefore, this statement is also equivalent to Statement 5.
Statement 13.
This statement is equivalent to Statement 12, again since Proposition 19.5.5 tells us that we need exactly $n$ linearly independent vectors to form a basis for $\mathbb{R}^n$.
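As a quick application of these equivalences (the matrix is ours): the columns of $\begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}$ are linearly dependent, since the second column is twice the first. By the equivalence of Statements 1 and 10, the matrix is not invertible; consistent with Statement 8, its determinant is $1 \cdot 4 - 2 \cdot 2 = 0$.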
Finally, we’ll record an observation from Discovery 20.9, which is just a reframing of Proposition 2.5.8.
Theorem 20.5.6. Rank-Nullity Theorem.
For an $m \times n$ matrix $A$,
\[ \operatorname{rank} A + \dim(\operatorname{null} A) = n. \]
Proof.
The dimension of the column space of $A$ is equal to the number of leading ones in its RREF, while the dimension of the null space of $A$ is equal to the number of free variables in the homogeneous system $A\mathbf{x} = \mathbf{0}$, which is the number of columns in the RREF that do not contain a leading one. These two numbers must add up to the total number of columns of $A$, which is $n$.
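Checking the theorem against the worked example from Subsection 20.5.1: that $3 \times 4$ matrix has rank $2$, and its RREF has two columns (the second and fourth) without leading ones, corresponding to two free variables. So $\operatorname{rank} A + \dim(\operatorname{null} A) = 2 + 2 = 4$, the number of columns of $A$, as the theorem asserts.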