
Section 8.3 Concepts

In Discovery 8.1, we discovered that for every \(2 \times 2\) matrix \(A\) there is a related matrix \(A'\) so that the product \(AA'\) is a scalar multiple of the identity matrix. Call this scalar \(\delta\) for now. If \(\delta\neq 0\text{,}\) we can do some algebra to get
\begin{equation*} AA' = \delta I \qquad\implies\qquad A (\inv{\delta} A') = I, \end{equation*}
which means that \(A\) must be invertible with \(\inv{A} = \inv{\delta}A'\) (Proposition 6.5.6).
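For example, for a general \(2 \times 2\) matrix, one choice of companion matrix that works (obtained by swapping the diagonal entries and negating the off-diagonal entries) is
\begin{equation*} A = \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}, \qquad A' = \begin{bmatrix} d \amp -b \\ -c \amp a \end{bmatrix}, \qquad AA' = \begin{bmatrix} ad-bc \amp 0 \\ 0 \amp ad-bc \end{bmatrix} = (ad-bc) I, \end{equation*}
so that \(\delta = ad-bc\) in this case.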
Now, we could achieve this goal by always choosing \(\delta=0\) and \(A'=0\text{,}\) but that won’t help us replicate for larger matrices the patterns we discovered in the \(2 \times 2\) case. We will find that there is a very particular procedure to achieve this goal that works for every square matrix and recovers the \(2 \times 2\) case above, so we will tackle the goal in two parts:
  1. determine the scalar \(\delta\) for each square matrix \(A\text{,}\) and then
  2. determine how to construct the matrix \(A'\) that goes along with it.
The process of producing the scalar \(\delta\) is then a function on square matrices. For a particular square matrix \(A\text{,}\) we will call the output \(\delta\) of this function the determinant of \(A\), and usually write \(\det A\) instead of \(\delta\text{.}\)
For now, we will concentrate on the first step and learn how to compute determinants, as it turns out that the “companion” matrix \(A'\) will be constructed out of determinants of submatrices of \(A\text{.}\) We will discuss this special matrix and complete our goal in Chapter 10.

Subsection 8.3.1 Definition of the determinant

It may seem from Section 8.2 that the definition of determinant is circular — we define the determinant in terms of entries and cofactors (via cofactor expansions), where cofactors are defined in terms of minors, which are defined in terms of … determinants? But the key word in the definition of minor is smaller — determinants are defined recursively in terms of smaller matrices.

In Discovery guide 8.1, after first exploring the determinant of a \(2 \times 2\) matrix as motivation, we started afresh with a precise definition of the \(1 \times 1\) determinant, and then defined the \(2 \times 2\) determinant in terms of \(1 \times 1\) determinants. Then the \(3 \times 3\) determinant is defined in terms of \(2 \times 2\) determinants, and so on. As we will see in examples in Section 8.4, computing a determinant from this recursive definition will involve unpacking it in terms of determinants of one smaller size, then unpacking those in terms of determinants of one size smaller again, and so on. Technically, this process should continue until we are down to a bunch of \(1 \times 1\) determinants, but since there is a simple formula for a \(2 \times 2\) determinant, in direct computations we will stop there.
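For example, unpacking one level of the recursion, a cofactor expansion along the first row expresses a general \(3 \times 3\) determinant in terms of three \(2 \times 2\) determinants:
\begin{equation*} \begin{vmatrix} a_{11} \amp a_{12} \amp a_{13} \\ a_{21} \amp a_{22} \amp a_{23} \\ a_{31} \amp a_{32} \amp a_{33} \end{vmatrix} = a_{11} \begin{vmatrix} a_{22} \amp a_{23} \\ a_{32} \amp a_{33} \end{vmatrix} - a_{12} \begin{vmatrix} a_{21} \amp a_{23} \\ a_{31} \amp a_{33} \end{vmatrix} + a_{13} \begin{vmatrix} a_{21} \amp a_{22} \\ a_{31} \amp a_{32} \end{vmatrix}. \end{equation*}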

Warning 8.3.3.

Computing determinants by cofactor expansions is extremely inefficient, whether by hand or by computer. For example, for a \(10 \times 10\) matrix, the recursive process of a cofactor expansion could eventually require you to compute more than \(1.8\) million \(2 \times 2\) determinants. In the next chapter we will discover that we can also compute determinants by … you guessed it, row reduction! (And there are other, more efficient methods for determinants by computer — we will leave those to a numerical methods course.) But again, the goal of this course is not to turn you into a super-efficient computer. We want to understand and be somewhat proficient at computing determinants by cofactor expansions so that we can think about and understand them in the abstract while we develop the theory of determinants.
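To see where that count comes from: a cofactor expansion of a \(10 \times 10\) determinant requires ten \(9 \times 9\) determinants, each of which requires nine \(8 \times 8\) determinants, and so on down to size \(2 \times 2\text{,}\) so that in the worst case (no zero entries anywhere) the number of \(2 \times 2\) determinants is
\begin{equation*} 10 \cdot 9 \cdot 8 \dotsm 4 \cdot 3 = \frac{10!}{2} = 1{,}814{,}400. \end{equation*}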

Subsection 8.3.2 Determinants of \(1 \times 1\) matrices

Consider the general \(1 \times 1\) matrix \(A=\begin{bmatrix}a\end{bmatrix}\text{.}\) We should expect the invertibility of \(A\) to be completely determined by the value of the single entry \(a\text{,}\) since that is all the information that \(A\) contains. And that is precisely the case, as \(A\) is invertible when \(a\neq 0\text{,}\) with \(\inv{A} = \begin{bmatrix}\inv{a}\end{bmatrix}\text{,}\) and \(A\) is singular when \(a=0\text{,}\) because then \(A\) would be the zero matrix. Since entry \(a\) determines the invertibility of \(A\text{,}\) we set \(\det \begin{bmatrix}a\end{bmatrix} = a\text{.}\)

Subsection 8.3.3 Determinants of \(2 \times 2\) matrices

In Discovery 8.6, we calculated the determinant of the general \(2 \times 2\) matrix to be
\begin{equation*} \det \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix} = ad -bc, \end{equation*}
using a cofactor expansion along the first row. (We leave it up to you, the reader, to check that a cofactor expansion along a column or along the second row yields the same result.) And we already verified by row reducing that a \(2 \times 2\) matrix is invertible precisely when \(ad-bc\neq 0\) (Proposition 6.5.9).
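For example, with illustrative numbers of our own choosing,
\begin{equation*} \det \begin{bmatrix} 1 \amp 2 \\ 3 \amp 4 \end{bmatrix} = 1 \cdot 4 - 2 \cdot 3 = -2, \end{equation*}
and since \(-2 \neq 0\text{,}\) this matrix is invertible.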

Subsection 8.3.4 Determinants of larger matrices

In Discovery 8.2–8.7, we used an inductive process to build up from computing \(1 \times 1\) determinants to \(3 \times 3\) determinants. The inductive process continues for larger matrices to provide a formula for the determinant of an \(n \times n\) matrix for every \(n\) via a cofactor expansion along the first row:
\begin{gather} \det A = a_{11} C_{11} + a_{12} C_{12} + a_{13} C_{13} + \dotsb + a_{1n} C_{1n}\text{.}\tag{✶} \end{gather}
And we saw in Discovery 8.8 that we can replace the cofactor expansion in (✶) with a cofactor expansion along any row or column of our choosing and get the same result.
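In symbols, following the same pattern as (✶), the expansion along row \(i\) and the expansion along column \(j\) are
\begin{align*} \det A \amp= a_{i1} C_{i1} + a_{i2} C_{i2} + \dotsb + a_{in} C_{in}, \\ \det A \amp= a_{1j} C_{1j} + a_{2j} C_{2j} + \dotsb + a_{nj} C_{nj}. \end{align*}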
While we have a convenient general formula for \(2 \times 2\) matrices in terms of the four entry variables, we certainly wouldn’t want to attempt to write out a general formula for the determinant of a \(5 \times 5\) matrix in twenty-five entry variables. Instead, for matrices larger than \(2 \times 2\text{,}\) computing a determinant for a specific matrix from a cofactor expansion is a recursive process, since cofactors are just minor determinants with some sign changes. A cofactor expansion for an \(n \times n\) matrix requires \(n\) cofactor calculations. Each of those cofactor calculations is a determinant calculation for some \((n-1) \times (n-1)\) “submatrix”. Each of those determinants, if calculated by cofactor expansion, will in turn require \(n-1\) determinant calculations for various \((n-2) \times (n-2)\) “submatrices”. And so on. As you can see, the number of calculations involved grows out of hand quite quickly, even for single-digit values of \(n\text{.}\)
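To make the recursive process concrete, here is a minimal Python sketch of determinant-by-cofactor-expansion along the first row (the function name det_cofactor and the lists-of-lists representation are our own illustrative choices, not a standard library routine; as the warning above notes, this approach is far too slow for serious use):

def det_cofactor(A):
    """Determinant of a square matrix (list of lists) by cofactor
    expansion along the first row. Illustrative only: the number of
    recursive calls grows like n!, so it is impractical beyond small n."""
    n = len(A)
    if n == 1:
        return A[0][0]  # base case: the 1x1 determinant is the single entry
    total = 0
    for j in range(n):
        # Minor: delete the first row and column j (0-based indexing here).
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        # Cofactor sign (-1)^(i+j) with i fixed at the first row: +, -, +, ...
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total

print(det_cofactor([[1, 2], [3, 4]]))                    # -2
print(det_cofactor([[2, 7, 1], [0, 3, 5], [0, 0, 4]]))   # 24

Each call of size \(n\) makes \(n\) recursive calls of size \(n-1\text{,}\) which is exactly the growth described above.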
We will work through some \(3 \times 3\) and \(4 \times 4\) cofactor expansions in Section 8.4, but we will develop a more efficient determinant calculation procedure based on row operations in Chapter 9. For now, let’s record the cofactor sign patterns from Discovery 8.9. Remember that a cofactor is equal to either the corresponding minor determinant or its negative, depending on whether the sum \(i+j\) of row and column indices is even or odd. This extra “sign” portion of the cofactor formula in terms of minor determinants will alternate from entry to entry, since as we move along a row or along a column, only one of \(i\) or \(j\) will change, and so \(i+j\) will flip from even to odd or vice versa. So the cofactor signs follow the patterns,
\begin{align} 3 \times 3 \amp \colon \; \left[\begin{smallmatrix} + \amp - \amp + \\ - \amp + \amp - \\ + \amp - \amp + \end{smallmatrix}\right], \amp 4 \times 4 \amp \colon \; \left[\begin{smallmatrix} + \amp - \amp + \amp - \\ - \amp + \amp - \amp + \\ + \amp - \amp + \amp - \\ - \amp + \amp - \amp + \end{smallmatrix}\right], \amp 5 \times 5 \amp \colon \; \left[\begin{smallmatrix} + \amp - \amp + \amp - \amp + \\ - \amp + \amp - \amp + \amp - \\ + \amp - \amp + \amp - \amp + \\ - \amp + \amp - \amp + \amp - \\ + \amp - \amp + \amp - \amp + \end{smallmatrix}\right],\tag{8.3.1} \end{align}
and so on.
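These alternating patterns simply record the sign factor in the expression for a cofactor in terms of its minor determinant:
\begin{equation*} C_{ij} = (-1)^{i+j} M_{ij}, \end{equation*}
where we write \(M_{ij}\) for the \((i,j)\) minor determinant (a common notation; Section 8.2 may use a different symbol), since \((-1)^{i+j}\) is \(+1\) when \(i+j\) is even and \(-1\) when \(i+j\) is odd.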

Subsection 8.3.5 Determinants of special forms

In Discovery 8.10, we examined the determinant of diagonal and triangular matrices. Let’s consider the case of a diagonal matrix:
\begin{equation*} D = \begin{bmatrix} d_1 \amp 0 \amp \cdots \amp 0 \\ 0 \amp d_2 \amp \ddots \amp \vdots \\ \vdots \amp \ddots \amp \ddots \amp 0 \\ 0 \amp \cdots \amp 0 \amp d_n \end{bmatrix}. \end{equation*}
A cofactor expansion along the first column will look like
\begin{equation*} d_1 C_{11} + 0 \cdot C_{21} + 0 \cdot C_{31} + \dotsb + 0 \cdot C_{n1}. \end{equation*}
Because of all of those zero entries, the only cofactor we actually need to compute is \(C_{11}\text{,}\) and the cofactor expansion collapses to just the entry \(d_1\) times its cofactor. But the cofactor sign of the \((1,1)\) entry is positive, so we really just get \(d_1\) times its minor determinant:
\begin{equation*} \det D = d_1\, \begin{vmatrix} d_2 \amp 0 \amp \cdots \amp 0 \\ 0 \amp d_3 \amp \ddots \amp \vdots \\ \vdots \amp \ddots \amp \ddots \amp 0 \\ 0 \amp \cdots \amp 0 \amp d_n \end{vmatrix}. \end{equation*}
This minor determinant is again the determinant of a diagonal matrix, so we can again expand along the first column to get a similar result. And the pattern will continue until we finally get down to a \(1 \times 1\) minor determinant:
\begin{align*} \det D \amp= d_1 d_2\, \begin{vmatrix} d_3 \amp 0 \amp \cdots \amp 0 \\ 0 \amp d_4 \amp \ddots \amp \vdots \\ \vdots \amp \ddots \amp \ddots \amp 0 \\ 0 \amp \cdots \amp 0 \amp d_n \end{vmatrix} = d_1 d_2 d_3\, \begin{vmatrix} d_4 \amp 0 \amp \cdots \amp 0 \\ 0 \amp d_5 \amp \ddots \amp \vdots \\ \vdots \amp \ddots \amp \ddots \amp 0 \\ 0 \amp \cdots \amp 0 \amp d_n \end{vmatrix}\\ \amp= \cdots = d_1 d_2 \dotsm d_{n-2} \begin{vmatrix} d_{n-1} \amp 0 \\ 0 \amp d_n \end{vmatrix} = d_1 d_2 \dotsm d_{n-1} \begin{vmatrix} d_n \end{vmatrix}\\ \amp= d_1 d_2 \dotsm d_{n-1} d_n. \end{align*}
So the determinant of a diagonal matrix is equal to the product of its diagonal entries.
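For example, with illustrative numbers of our own,
\begin{equation*} \det \begin{bmatrix} 2 \amp 0 \amp 0 \\ 0 \amp -3 \amp 0 \\ 0 \amp 0 \amp 5 \end{bmatrix} = 2 \cdot (-3) \cdot 5 = -30. \end{equation*}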
What if we apply this pattern to an \(n \times n\) scalar matrix \(k I\text{?}\) Since such a matrix has the entry \(k\) repeated down the diagonal \(n\) times, the determinant will be \(n\) factors of \(k\) multiplied together, so that \(\det (k I) = k^n\text{.}\) Applying this formula to the zero matrix (\(k = 0\)) and the identity matrix (\(k = 1\)), we have
\begin{align*} \det \zerovec \amp= 0, \amp \det I \amp= 1. \end{align*}
When computing the determinant of an upper triangular matrix, a pattern of computation similar to the diagonal case arises: choosing to always expand along the first column yields a diagonal entry times an upper triangular minor determinant at each step. The same pattern repeats for lower triangular matrices, except that for those it is best to expand along the first row. In either case, the determinant of a triangular matrix is again the product of its diagonal entries.
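For example, repeatedly expanding along the first column (illustrative numbers our own):
\begin{equation*} \begin{vmatrix} 2 \amp 7 \amp 1 \\ 0 \amp 3 \amp 5 \\ 0 \amp 0 \amp 4 \end{vmatrix} = 2\, \begin{vmatrix} 3 \amp 5 \\ 0 \amp 4 \end{vmatrix} = 2 (3 \cdot 4 - 5 \cdot 0) = 2 \cdot 3 \cdot 4 = 24, \end{equation*}
the product of the diagonal entries.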