Section 8.3 Concepts
In Discovery 8.1, we discovered that for every $2 \times 2$ matrix $A$ there is a related matrix $A'$ so that the product $A A'$ is a scalar multiple of the identity matrix:
$$ A A' = \delta I . $$
Call this scalar $\delta$ for now. If $\delta \neq 0$, we can do some algebra to get
$$ A \left( \tfrac{1}{\delta}\, A' \right) = I , $$
so that $A$ is invertible with $A^{-1} = \tfrac{1}{\delta}\, A'$.
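For reference, in the $2 \times 2$ case one companion matrix that does the job (multiply out to verify) is
$$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \qquad A' = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}, \qquad A A' = (a d - b c) I , $$
so that the scalar works out to $\delta = a d - b c$.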
Goal 8.3.1.
For each square matrix $A$, determine a scalar $\delta$ and a companion matrix $A'$ so that $A A' = \delta I$.
Now, we could achieve this goal by always choosing $\delta = 0$ and $A' = 0$, but that won't help us replicate for larger matrices the patterns we discovered in the $2 \times 2$ case. We will find that there is a very particular procedure to achieve this goal that works for every square matrix and recovers the $2 \times 2$ case above, so we will tackle the goal in two parts:
- determine the scalar $\delta$ for each square matrix $A$, and then
- determine how to construct the matrix $A'$ that goes along with it.
The process of producing the scalar $\delta$ is then a function on square matrices. For a particular square matrix $A$, we will call the output of this function the determinant of $A$, and usually write $\det A$ instead of $\delta$.
Idea 8.3.2.
When $\det A \neq 0$, then from $A A' = (\det A) I$ and Proposition 6.5.6 we know both that $A$ is invertible and that its inverse must be $A^{-1} = \tfrac{1}{\det A}\, A'$, as in the $2 \times 2$ case discussed above. Also, we will learn in Chapter 10 that when $\det A = 0$, then $A$ must be singular. So the value of the determinant of a matrix will determine whether or not it is invertible.
For now, we will concentrate on the first step and learn how to compute determinants, as it turns out that the “companion” matrix $A'$ will be constructed out of determinants of submatrices of $A$. We will discuss this special matrix and complete our goal in Chapter 10.
Subsection 8.3.1 Definition of the determinant
It may seem from Section 8.2 that the definition of determinant is circular — we define the determinant in terms of entries and cofactors (via cofactor expansions), where cofactors are defined in terms of minors, which are defined in terms of … determinants? But the key word in the definition of minor is smaller — determinants are defined recursively in terms of determinants of smaller matrices. In Discovery guide 8.1, after first exploring the determinant of a $2 \times 2$ matrix as motivation, we started afresh with a precise definition of the $1 \times 1$ determinant, and then defined the $2 \times 2$ determinant in terms of $1 \times 1$ determinants. Then the $3 \times 3$ determinant is defined in terms of $2 \times 2$ determinants, and so on. As we will see in examples in Section 8.4, computing a determinant from this recursive definition will involve unpacking it in terms of determinants of one smaller size, then unpacking those in terms of determinants of one size smaller again, and so on. Technically, this process should continue until we are down to a bunch of $1 \times 1$ determinants, but since there is a simple formula for a $2 \times 2$ determinant, in direct computations we will stop there.
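For instance, the first level of unpacking a $3 \times 3$ determinant along the first row looks like this (a sketch ahead of the full worked examples in Section 8.4):
$$ \det \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = a_{11} \det \begin{bmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{bmatrix} - a_{12} \det \begin{bmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{bmatrix} + a_{13} \det \begin{bmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}. $$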
Warning 8.3.3.
Computing determinants by cofactor expansions is extremely inefficient, whether by hand or by computer. For example, for a $10 \times 10$ matrix, the recursive process of a cofactor expansion could eventually require you to compute more than $1.8$ million $2 \times 2$ determinants. In the next chapter we will discover that we can also compute determinants by … you guessed it, row reduction! (And there are other, more efficient methods for determinants by computer — we will leave those to a numerical methods course.) But again, the goal of this course is not to turn you into a super-efficient computer. We want to understand and be somewhat proficient at computing determinants by cofactor expansions so that we can think about and understand them in the abstract while we develop the theory of determinants.
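For a sense of scale, here is a rough worst-case count (our own back-of-envelope arithmetic): fully expanding an $n \times n$ determinant down to the $2 \times 2$ level produces
$$ n (n - 1)(n - 2) \dotsm 3 = \frac{n!}{2} $$
two-by-two determinants, and for $n = 10$ this is $10!/2 = 1{,}814{,}400$.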
Subsection 8.3.2 Determinants of $1 \times 1$ matrices
Consider the general $1 \times 1$ matrix $A = \begin{bmatrix} a \end{bmatrix}$. We should expect the invertibility of $A$ to be completely determined by the value of the single entry $a$, since that is all the information that $A$ contains. And that is precisely the case, as $A$ is invertible when $a \neq 0$, with $A^{-1} = \begin{bmatrix} 1/a \end{bmatrix}$, and is singular when $a = 0$, because then $A$ would be the zero matrix. Since entry $a$ determines the invertibility of $A$, we set $\det A = a$.
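For example, $\det \begin{bmatrix} 5 \end{bmatrix} = 5 \neq 0$, and indeed $\begin{bmatrix} 5 \end{bmatrix}$ is invertible with inverse $\begin{bmatrix} 1/5 \end{bmatrix}$, while $\det \begin{bmatrix} 0 \end{bmatrix} = 0$ and the $1 \times 1$ zero matrix is singular.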
Subsection 8.3.3 Determinants of $2 \times 2$ matrices
For the general $2 \times 2$ matrix
$$ A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, $$
we have
$$ \det A = a d - b c , $$
using a cofactor expansion along the first row. (We leave it up to you, the reader, to check that a cofactor expansion along a column or along the second row yields the same result.) And we already verified by row reducing that a $2 \times 2$ matrix is invertible precisely when $a d - b c \neq 0$ (Proposition 6.5.9).
An easy way to remember the $2 \times 2$ determinant formula is with a crisscross pattern, as illustrated in Figure 8.3.4 for the general $2 \times 2$ matrix $A$ above.
Figure 8.3.4 Determinant calculation pattern for $2 \times 2$ matrices.
In Figure 8.3.4, the plus and minus signs do not mean positive and negative, but instead mean add and subtract: we start with a value of $0$, then add the product of the two entries along the arrow labelled $+$ (which may be a negative number) and subtract the product of the two entries along the arrow labelled $-$.
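For example, using the crisscross pattern on a specific matrix (entries chosen here just for illustration):
$$ \det \begin{bmatrix} 3 & 1 \\ 4 & 2 \end{bmatrix} = 3 \cdot 2 - 1 \cdot 4 = 2 . $$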
Subsection 8.3.4 Determinants of larger matrices
In Discovery 8.2–8.7, we used an inductive process to build up from computing $2 \times 2$ determinants to $4 \times 4$ determinants. The inductive process continues for larger matrices to provide a formula for the determinant of an $n \times n$ matrix for every $n$ via a cofactor expansion along the first row:
$$ \det A = a_{11} C_{11} + a_{12} C_{12} + \dotsb + a_{1n} C_{1n} . \tag{†} $$
And we saw in Discovery 8.8 that we can replace the cofactor expansion in (†) with a cofactor expansion along any row or column of our choosing and get the same result.
While we have a convenient general formula for $2 \times 2$ matrices in terms of the four entry variables, we certainly wouldn't want to attempt to write out a general formula for the determinant of a $5 \times 5$ matrix in twenty-five entry variables. Instead, for matrices larger than $2 \times 2$, computing a determinant for a specific matrix from a cofactor expansion is a recursive process, since cofactors are just minor determinants with some sign changes. A cofactor expansion for an $n \times n$ matrix requires $n$ cofactor calculations. Each of those cofactor calculations is a determinant calculation of some $(n-1) \times (n-1)$ “submatrices”. Each of those determinants, if calculated by cofactor expansion, will require $n - 1$ determinant calculations of various $(n-2) \times (n-2)$ “submatrices”. And so on. As you can see, the number of calculations involved grows out of hand quite quickly, even for single-digit values of $n$.
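The recursive process just described is short enough to capture in code. Below is a minimal Python sketch (the names minor and cofactor_det are our own inventions for illustration; nothing in this book prescribes any particular code):

```python
def minor(A, i, j):
    """Submatrix of A (a list of rows) with row i and column j removed."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def cofactor_det(A):
    """Determinant of square matrix A by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]  # base case: det [a] = a
    # det A = a_11 C_11 + a_12 C_12 + ... + a_1n C_1n, where each cofactor
    # C_1j is the minor determinant with signs alternating + - + - ...
    return sum((-1) ** j * A[0][j] * cofactor_det(minor(A, 0, j))
               for j in range(n))

print(cofactor_det([[3, 1], [4, 2]]))  # 2, matching the crisscross formula
```

Note how the recursion mirrors the definition: each call expands along the first row and hands each minor back to itself, bottoming out at the $1 \times 1$ base case.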
We will work through some $3 \times 3$ and $4 \times 4$ cofactor expansions in Section 8.4, but we will develop a more efficient determinant calculation procedure based on row operations in Chapter 9. For now, let's record the cofactor sign patterns from Discovery 8.9. Remember that a cofactor is equal to either the corresponding minor determinant or its negative, depending on whether the sum of row and column indices is even or odd. This extra “sign” portion of the cofactor formula in terms of minor determinants will alternate from entry to entry, since as we move along a row or along a column, only one of $i$ or $j$ will change, and so $i + j$ will flip from even to odd or vice versa. So the cofactor signs follow the patterns
$$ \begin{bmatrix} + & - \\ - & + \end{bmatrix}, \qquad \begin{bmatrix} + & - & + \\ - & + & - \\ + & - & + \end{bmatrix}, \qquad \begin{bmatrix} + & - & + & - \\ - & + & - & + \\ + & - & + & - \\ - & + & - & + \end{bmatrix}, $$
and so on.
Subsection 8.3.5 Determinants of special forms
In Discovery 8.10, we examined the determinant of diagonal and triangular matrices. Let's consider the $4 \times 4$ case of a diagonal matrix:
$$ A = \begin{bmatrix} d_1 & 0 & 0 & 0 \\ 0 & d_2 & 0 & 0 \\ 0 & 0 & d_3 & 0 \\ 0 & 0 & 0 & d_4 \end{bmatrix}. $$
A cofactor expansion along the first column will look like
$$ \det A = d_1 C_{11} + 0 \cdot C_{21} + 0 \cdot C_{31} + 0 \cdot C_{41} . $$
Because of all of those zero entries, the only cofactor we actually need to compute is $C_{11}$, and the cofactor expansion collapses to just the $(1,1)$ entry times its cofactor. But the cofactor sign of the $(1,1)$ entry is positive, so we really just get $d_1$ times its minor determinant:
$$ \det A = d_1 \det \begin{bmatrix} d_2 & 0 & 0 \\ 0 & d_3 & 0 \\ 0 & 0 & d_4 \end{bmatrix}. $$
The matrix in this minor determinant is again diagonal, so we can again expand along the first column to get a similar result. And the pattern will continue until we finally get down to a $1 \times 1$ minor determinant:
$$ \det A = d_1 d_2 d_3 \det \begin{bmatrix} d_4 \end{bmatrix} = d_1 d_2 d_3 d_4 . $$
So the determinant of a diagonal matrix is equal to the product of its diagonal entries.
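As a quick sanity check of this product-of-diagonal-entries pattern, the hypothetical cofactor_det sketch from above agrees:

```python
# Diagonal 4x4 matrix: the determinant should be 2 * 3 * 5 * 7 = 210.
D = [[2, 0, 0, 0],
     [0, 3, 0, 0],
     [0, 0, 5, 0],
     [0, 0, 0, 7]]
print(cofactor_det(D))  # 210
```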
What if we apply this pattern to an $n \times n$ scalar matrix $k I$? Since such a matrix has the entry $k$ repeated down the diagonal $n$ times, the determinant will be $n$ factors of $k$ multiplied together, so that $\det (k I) = k^n$. Applying this formula to the zero matrix ($k = 0$) and the identity matrix ($k = 1$), we have
$$ \det 0 = 0, \qquad \det I = 1 . $$
When computing the determinant of an upper triangular matrix, a pattern of computation similar to the diagonal case would arise, because choosing to always expand along the first column would result in (diagonal entry) times (an upper triangular minor determinant) at each step. And the same pattern would repeat for lower triangular matrices, but for those it is best to expand along the first row. Either way, the determinant of a triangular matrix is again the product of its diagonal entries.
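The same sketch confirms the triangular pattern as well; note that the entries above the diagonal have no effect on the result, and that expanding along the first row instead of the first column changes nothing (as Discovery 8.8 leads us to expect):

```python
# Upper triangular 3x3 matrix: the determinant should be 2 * 3 * 5 = 30.
U = [[2, 9, 4],
     [0, 3, 8],
     [0, 0, 5]]
print(cofactor_det(U))  # 30
```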