Section 9.2 Concepts
We would like to connect determinants to invertibility, and as always row operations are the way to do so.
Subsection 9.2.1 Swapping rows: effect on determinant
Swapping adjacent rows.
In Discovery 9.2, we first explored what happens to a determinant if we swap adjacent rows in a matrix, and we discovered the following. Suppose we take square matrix \(A\) and swap row \(i\) with row \(i+1\text{,}\) which are adjacent, obtaining new matrix \(A'\text{.}\) Compared to a cofactor expansion of \(\det A\) along row \(i\text{,}\) a cofactor expansion of \(\det A'\) along row \(i+1\) has all the same entries and minor determinants, because the \(\nth[(i+1)]\) row in \(A'\) now contains the entries from the \(\nth[i]\) row in \(A\text{,}\) and vice versa. However, the cofactor signs along the \(\nth[(i+1)]\) row are all the opposite of those along the \(\nth[i]\) row. Therefore, all the terms in the cofactor expansions of \(\det A\) and \(\det A'\) are negatives of each other, and so \(\det A' = -\det A\text{.}\) We concluded that swapping adjacent rows changes the sign of the determinant.
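For example, in the \(2\times 2\) case, where the two rows are necessarily adjacent, we can verify this sign change directly:
\begin{equation*}
\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc,
\qquad
\det \begin{bmatrix} c & d \\ a & b \end{bmatrix} = cb - da = -(ad - bc)\text{.}
\end{equation*}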
Swapping (possibly) non-adjacent rows.
Now, it might seem that we could sometimes get \(\det A'\) to be equal to \(\det A\) if we swapped non-adjacent rows. In particular, if we swapped two rows that were separated by a single other row (as in Discovery 9.2.d), the two rows would have the same pattern of cofactor signs, and our reasoning above might seem to lead to \(\det A' = \det A\) in this case. However, it turns out that any swap of rows can be achieved by an odd number of consecutive adjacent row swaps, and an odd number of sign changes has the net result of changing the sign. So any swap of a pair of rows, adjacent or not, changes the sign of the determinant.
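For example, swapping the first and third rows of a \(3\times 3\) matrix can be achieved by three consecutive adjacent swaps:
\begin{equation*}
\begin{bmatrix} r_1 \\ r_2 \\ r_3 \end{bmatrix}
\xrightarrow{R_1 \leftrightarrow R_2}
\begin{bmatrix} r_2 \\ r_1 \\ r_3 \end{bmatrix}
\xrightarrow{R_2 \leftrightarrow R_3}
\begin{bmatrix} r_2 \\ r_3 \\ r_1 \end{bmatrix}
\xrightarrow{R_1 \leftrightarrow R_2}
\begin{bmatrix} r_3 \\ r_2 \\ r_1 \end{bmatrix}\text{,}
\end{equation*}
where \(r_1,r_2,r_3\) denote the rows, for a net effect of \((-1)^3 = -1\) on the determinant. In general, swapping two rows separated by \(m\) other rows can be achieved by \(2m+1\) adjacent swaps.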
Matrices with two identical rows.
In Discovery 9.3, we paused to consider a consequence of this effect of swapping rows on the determinant. Suppose a square matrix has two identical rows. If we swap those two particular rows, then from our discussion above we expect the determinant of the new matrix obtained from this operation to be the negative of the determinant of the original matrix. But if those rows are identical, then swapping them has no effect and the determinants of the new and old matrices should be equal. Since the only number that remains unchanged when its sign is changed is zero, we conclude that a square matrix with two (or more) identical rows has determinant \(0\text{.}\)
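In symbols: if \(A'\) is the result of swapping the two identical rows of \(A\text{,}\) then \(A' = A\text{,}\) and so
\begin{equation*}
\det A = \det A' = -\det A\text{,}
\end{equation*}
which forces \(\det A = 0\text{.}\)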
Corresponding elementary matrices.
Recall that elementary matrices are obtained from the identity by a single row operation. So if we take the identity matrix (which has determinant \(1\)) and swap two rows to obtain the elementary matrix that corresponds to that operation, then that elementary matrix must have determinant \(-1\text{.}\)
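For example, in the \(2\times 2\) case,
\begin{equation*}
E = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}
\end{equation*}
is the elementary matrix corresponding to swapping the two rows of the identity, and indeed \(\det E = 0 \cdot 0 - 1 \cdot 1 = -1\text{.}\)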
Subsection 9.2.2 Multiplying rows: effect on determinant
Multiplying a single row by a scalar.
In Discovery 9.4, we first explored what happens to a determinant if we multiply a single row in a matrix by a constant, and we discovered the following. Suppose we take square matrix \(A\) and multiply row \(i\) by the constant \(k\text{,}\) obtaining new matrix \(A'\text{.}\) Compared to a cofactor expansion of \(\det A\) along row \(i\text{,}\) a cofactor expansion of \(\det A'\) along row \(i\) has all the same minor determinants, because the entries in all the other rows are still the same as in \(A\text{.}\) However, when we add up all the “entry times cofactor” terms in a cofactor expansion of \(\det A'\) along row \(i\text{,}\) there is the new common factor of \(k\) from the scaled entries of that row. If we factor that common \(k\) out, we are left with exactly the cofactor expansion of \(\det A\) along row \(i\text{.}\) Hence, multiplying a single row in a matrix by a constant scales the determinant by the same factor.
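For example, in the \(2\times 2\) case, multiplying the first row by \(k\) gives
\begin{equation*}
\det \begin{bmatrix} ka & kb \\ c & d \end{bmatrix}
= ka \cdot d - kb \cdot c
= k(ad - bc)
= k \det \begin{bmatrix} a & b \\ c & d \end{bmatrix}\text{.}
\end{equation*}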
Multiplying a whole matrix by a scalar.
In Discovery 9.4.d and Discovery 9.4.e, we also considered what happens if we multiply a whole matrix by a constant. But scalar multiplying a matrix is the same as multiplying every row by that scalar. If multiplying a single row by \(k\) changes the determinant by a factor of \(k\text{,}\) then multiplying every row by \(k\) must change the determinant by \(n\) factors of \(k\text{,}\) where \(n\) is the size of the matrix (and hence the number of rows). That is, for a square \(n\times n\) matrix \(A\) and a scalar \(k\text{,}\) we have \(\det (kA) = k^n \det A\text{.}\)
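For example, in the \(2\times 2\) case,
\begin{equation*}
\det (kA) = \det \begin{bmatrix} ka & kb \\ kc & kd \end{bmatrix}
= k^2 ad - k^2 bc
= k^2 \det A\text{,}
\end{equation*}
in agreement with the formula \(\det (kA) = k^n \det A\) for \(n = 2\text{.}\)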
Warning 9.2.1.
It is very common for students to forget this lesson and incorrectly remember the formula as \(\det (kA)\) being equal to \(k \det A\text{,}\) just because that “looks” correct. Don't be one of those students!
Matrices with proportional rows.
Let's pause again to consider a consequence of this effect on the determinant of multiplying a row by a constant. Suppose \(A\) is a matrix where one row is equal to a multiple (by \(k\text{,}\) say) of another row, as in Discovery 9.5. Assuming \(k \neq 0\text{,}\) we can multiply that row by \(1/k\) to obtain matrix \(A'\) with determinant \(\det A' = (1/k)\det A\text{.}\) But now \(A'\) has two identical rows, so \(\det A'=0\text{,}\) which forces \(\det A = 0\text{.}\) (If \(k = 0\text{,}\) then \(A\) has a row of zeros, and a cofactor expansion along that row immediately gives \(\det A = 0\) as well.) So we can extend our fact about matrices with some identical rows to matrices with some proportional rows: a matrix with two (or more) proportional rows has determinant \(0\text{.}\)
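For example, in the \(2\times 2\) case, if the second row is \(k\) times the first, then
\begin{equation*}
\det \begin{bmatrix} a & b \\ ka & kb \end{bmatrix} = a(kb) - b(ka) = 0\text{.}
\end{equation*}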
Corresponding elementary matrices.
Again, let's consider elementary matrices corresponding to this type of operation. If we take the identity matrix (which has determinant \(1\)) and multiply a row by a nonzero constant \(k\) to obtain the elementary matrix that corresponds to that operation, then that elementary matrix must have determinant \(k\cdot 1 = k\text{.}\)
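For example, in the \(2\times 2\) case, multiplying the second row of the identity by \(k\) gives
\begin{equation*}
E = \begin{bmatrix} 1 & 0 \\ 0 & k \end{bmatrix}\text{,}
\qquad
\det E = 1 \cdot k - 0 \cdot 0 = k\text{.}
\end{equation*}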
Subsection 9.2.3 Combining rows: effect on determinant
Now we move to the operation of adding a multiple of one row to another, explored in Discovery 9.6. This is the most complicated of the three operations, so we will just consider the \(3\times 3\) case, as in the referenced discovery activity. Consider the general \(3\times 3\) matrix
\begin{equation*}
A = \begin{bmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{bmatrix}\text{.}
\end{equation*}
As in the discovery activity, suppose we add \(k\) times the second row to the first, to get
\begin{equation*}
A' = \begin{bmatrix}
a_{11} + k a_{21} & a_{12} + k a_{22} & a_{13} + k a_{23} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{bmatrix}\text{.}
\end{equation*}
The cofactors, \(C_{11},C_{12},C_{13}\text{,}\) along the first row of \(A'\) are exactly the same as the cofactors along the first row in \(A\text{,}\) since calculating those cofactors only involves entries in the second and third rows, which have not changed. If we do a cofactor expansion of \(\det A'\) along the first row, we get
\begin{align*}
\det A' &= (a_{11} + k a_{21}) C_{11} + (a_{12} + k a_{22}) C_{12} + (a_{13} + k a_{23}) C_{13} \\
&= (a_{11} C_{11} + a_{12} C_{12} + a_{13} C_{13}) + k (a_{21} C_{11} + a_{22} C_{12} + a_{23} C_{13}) \\
&= \det A + k (a_{21} C_{11} + a_{22} C_{12} + a_{23} C_{13})\text{.}
\end{align*}
In the second term of the last line, we have sort of a “mixed” cofactor expansion for \(A\text{,}\) where the entries are from the second row but the cofactors are from the first row. This mixed expansion is definitely not equal to \(\det A\) or \(\det A'\text{,}\) but could it be the determinant of some new matrix \(A''\text{?}\) To have the same first-row cofactors as \(A\text{,}\) this new \(A''\) matrix would have to have the same second and third rows as \(A\text{,}\) since those entries are what are used to calculate the first-row cofactors. If we also repeat the second row entries from \(A\) in the first row of \(A''\text{,}\) so that
\begin{equation*}
A'' = \begin{bmatrix}
a_{21} & a_{22} & a_{23} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{bmatrix}\text{,}
\end{equation*}
then a cofactor expansion of \(\det A''\) along the first row gives us exactly the “mixed” cofactor expansion in the second term of our last expression for \(\det A'\) above. However, \(A''\) has two identical rows, so its determinant is \(0\text{.}\) We can now continue our calculation from above:
\begin{equation*}
\det A' = \det A + k \det A'' = \det A + k \cdot 0 = \det A\text{.}
\end{equation*}
This result is fairly surprising: while the two simpler row operations affect the determinant, the row operation of adding a multiple of one row to another has no effect at all on the determinant.
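A quick numerical check: adding \(2\) times the first row to the second leaves the determinant unchanged:
\begin{equation*}
\det \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = 4 - 6 = -2\text{,}
\qquad
\det \begin{bmatrix} 1 & 2 \\ 5 & 8 \end{bmatrix} = 8 - 10 = -2\text{.}
\end{equation*}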
Corresponding elementary matrices.
Just as with the other two row operations, we can apply what we've learned to elementary matrices. If we take the identity matrix and add a multiple of one row to another to obtain the elementary matrix that corresponds to that operation, then that elementary matrix must have the same determinant as the identity, which is \(1\text{.}\)
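For example, in the \(2\times 2\) case, adding \(k\) times the second row of the identity to the first gives
\begin{equation*}
E = \begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix}\text{,}
\qquad
\det E = 1 \cdot 1 - k \cdot 0 = 1\text{.}
\end{equation*}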
Subsection 9.2.4 Column operations and the transpose
You could imagine that an alien civilization might also develop the theory of linear algebra, but perhaps with some cosmetic differences. Perhaps they prefer to write their equations vertically, so that when they convert equations to augmented matrices, a column represents an equation and a row contains the coefficients of a particular variable across the equations. In essence, their matrix theory is the transpose of ours, and they would proceed to “column reduce” matrices in order to solve the underlying system, instead of row reducing. Since determinants can be computed by cofactor expansions along either rows or columns with the same result, and since the cofactor sign patterns we determined in Pattern (8.3.1) are symmetric across the main diagonal (i.e. the pattern is unchanged by transposing), this alien development of linear algebra would discover all the same facts about the relationships between column operations and the determinant as we have about row operations and the determinant. We have recorded all these facts about column operations in Subsection 9.4.1, alongside the corresponding facts about row operations.

And there is one more fact about the transpose, which is the bridge between our matrix theory and the alien matrix theory: taking the transpose has no effect on the determinant. You can easily see why this is true, since a cofactor expansion along a column in \(\utrans{A}\) works out the same as a cofactor expansion along the corresponding row in \(A\text{.}\)
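Again, the \(2\times 2\) case makes this easy to believe: for
\begin{equation*}
A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}\text{,}
\qquad
\utrans{A} = \begin{bmatrix} a & c \\ b & d \end{bmatrix}\text{,}
\end{equation*}
we have \(\det \utrans{A} = ad - cb = ad - bc = \det A\text{.}\)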
Subsection 9.2.5 Determinants by row reduction
The relationships between row operations and the determinant that we have explored in Discovery guide 9.1 and described above provide us with another method of computing determinants. An REF for a square matrix must always be upper triangular, since the leading ones must be either on or to the right of the main diagonal. So when row reducing there is always a point where we reach an upper triangular matrix. And from Statement 1 of Proposition 8.5.2 we know that determinants of upper triangular matrices are particularly easy to compute. So starting with any square matrix, we can row reduce to upper triangular, keeping track of how the determinant has changed at each step, and then work backwards from the determinant of the upper triangular matrix to determine the determinant of the original matrix. We'll save doing an example for Subsection 9.3.1.
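One way to record the bookkeeping is with a single formula: if reducing \(A\) to an upper triangular matrix \(U\) involves \(s\) row swaps and row scalings by nonzero constants \(k_1, k_2, \dotsc, k_m\) (row combinations having no effect), then
\begin{equation*}
\det U = (-1)^s k_1 k_2 \dotsm k_m \det A\text{,}
\qquad \text{so} \qquad
\det A = \frac{(-1)^s \det U}{k_1 k_2 \dotsm k_m}\text{,}
\end{equation*}
where \(\det U\) is just the product of the diagonal entries of \(U\text{.}\)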
Warning 9.2.2.
When using this method, it is really important to stick to elementary row operations. In learning to row reduce, you may have discovered that you can perform operations of the kind \(R_i \to kR_i + mR_j\) and still get the correct set of solutions to the corresponding system. However, this kind of operation is not elementary: it is actually a combination of two elementary operations performed at once (a row scaling followed by a row combination), and the scaling part will change the determinant by a factor of \(k\text{.}\) It's best just to avoid operations of this kind in determinant calculations.