
Section 10.5 Theory

We have discussed the reasoning behind many of the facts below in Section 10.3, so we will omit some of the formal proofs.

Subsection 10.5.1 Adjoints and inverses

First, we record the adjoint inversion formula we have discovered.

Remark 10.5.2.

Based on our computations for the \(2\times 2\) case in Subsection 10.4.1, if \(A\) is \(2\times 2\) then the statement of the theorem above is exactly the same as Proposition 5.5.4.
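As a quick numerical sanity check of the \(2\times 2\) case, here is a minimal Python sketch of the adjoint inversion formula \(A^{-1} = \frac{1}{\det A} \operatorname{adj} A\), with arbitrary matrix entries chosen purely for illustration:

```python
from fractions import Fraction

# Arbitrary invertible 2x2 matrix (entries chosen for illustration).
a, b, c, d = 3, 1, 5, 2
det_A = a * d - b * c            # determinant of [[a, b], [c, d]]

# Adjoint of a 2x2 matrix: swap the diagonal, negate the off-diagonal.
adj_A = [[d, -b], [-c, a]]

# Inverse = (1 / det A) * adj A.
inv_A = [[Fraction(entry, det_A) for entry in row] for row in adj_A]

# Check: A * A^{-1} should be the identity matrix.
A = [[a, b], [c, d]]
product = [[sum(A[i][k] * inv_A[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
assert product == [[1, 0], [0, 1]]
```

Using `Fraction` keeps the arithmetic exact, so the check against the identity matrix holds without any floating-point tolerance.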

Subsection 10.5.2 Determinants determine invertibility

As we saw in Subsection 10.3.2, there is a stronger connection between the determinant and invertibility, which we now state here more formally by adding a new statement to Theorem 6.5.2.

Remark 10.5.4.

In the last sentence of the theorem, the connecting phrase “if and only if” between the two conditions is just a different way to say that the two conditions are equivalent. Recall that conditions are equivalent precisely when they must be all true together or all false together. Rephrasing in terms of the “all false” scenario, we could also say that a square matrix is singular if and only if its determinant is zero.
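To illustrate the equivalence numerically, here is a small Python check (the particular matrices are arbitrary): a matrix with proportional rows is singular and has determinant zero, while a matrix with nonzero determinant is invertible.

```python
def det2(m):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

singular = [[1, 2], [2, 4]]      # second row is twice the first
invertible = [[1, 2], [3, 4]]

assert det2(singular) == 0       # zero determinant: no inverse exists
assert det2(invertible) != 0     # nonzero determinant: an inverse exists
```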

Subsection 10.5.3 Determinant formulas

Here we collect the determinant formulas from Subsections 10.3.3–10.3.7. First we look at a special case, previously considered in Discovery 10.4 and Subsection 10.3.3, of the multiplicative formula for determinants.

There are three cases to consider here, based on the type of elementary matrix we are dealing with.

Case \(E\) represents swapping rows.

The product \(E A\) represents the result of swapping two rows in \(A\text{,}\) so

\begin{equation*} \det (E A) = -\det A \end{equation*}

(Part 2 of Proposition 9.4.2).

But also \(\det E = -1\) (Part 1 of Proposition 9.4.5), so

\begin{equation*} (\det E) (\det A) = (-1) (\det A) = -\det A \end{equation*}

as well.

This establishes (\(\star\)) in this case.

Case \(E\) represents multiplying a row by \(k\).

The product \(E A\) represents the result of multiplying that row of \(A\) by \(k\text{,}\) so

\begin{equation*} \det (E A) = k \det A \end{equation*}

(Part 4 of Proposition 9.4.2).

But also \(\det E = k\) (Part 2 of Proposition 9.4.5), so

\begin{equation*} (\det E) (\det A) = k \det A \end{equation*}

as well.

This establishes (\(\star\)) in this case.

Case \(E\) represents adding a multiple of one row to another.

The product \(E A\) represents the result of adding a multiple of a row to another in \(A\text{,}\) so \(\det (E A)\) is equal to \(\det A\text{.}\) But also \(\det E = 1\) (Part 3 of Proposition 9.4.5), so

\begin{equation*} \det (E A) = \det A = (1) (\det A) = (\det E) (\det A), \end{equation*}

establishing (\(\star\)) in this case.
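All three cases of the lemma can be checked numerically at once. Here is a minimal Python sketch, using an arbitrary \(3\times 3\) matrix \(A\) and one arbitrarily chosen elementary matrix of each type:

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

def matmul(X, Y):
    """Product of two 3x3 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]     # arbitrary test matrix

# One elementary matrix of each of the three types.
E_swap  = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]   # swap rows 1 and 2; det = -1
E_scale = [[1, 0, 0], [0, 5, 0], [0, 0, 1]]   # multiply row 2 by k = 5; det = 5
E_add   = [[1, 0, 0], [0, 1, 0], [3, 0, 1]]   # add 3 * (row 1) to row 3; det = 1

# Verify det(E A) = (det E)(det A) in each case.
for E in (E_swap, E_scale, E_add):
    assert det3(matmul(E, A)) == det3(E) * det3(A)
```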

With the above lemma established, we can consider the general multiplicative formula for determinants.

  1. There are two cases to consider.

    Case \(M\) is invertible.

    In this case, \(M\) can be expressed as a product of elementary matrices (Theorem 10.5.3), and so Lemma 10.5.5 can be repeatedly applied to obtain the desired equality.

    In Discovery 10.5 and Subsection 10.3.4, we worked under the assumption that \(M\) could be expressed as a product of three elementary matrices, but the calculations and logic used there would work no matter how many elementary matrices were required in a product expression for \(M\text{.}\)

    Case \(M\) is singular.

    In this case, \(\det M = 0\) by our newly added statement in the list of Theorem 10.5.3, so we have

    \begin{equation*} \text{RHS} = (\det M) (\det N) = 0 \cdot \det N = 0 \end{equation*}

    as well. But we also know that if \(M\) is singular, then the product \(M N\) must also be singular (Statement 1 of Proposition 6.5.8). So again we can apply the equivalence of Statement 1 and Statement 8 of Theorem 10.5.3 to obtain

    \begin{equation*} \text{LHS} = \det (M N) = 0. \end{equation*}

    Since both LHS and RHS are equal to \(0\text{,}\) they are equal to each other.

  2. This result can be obtained by repeated applications of the formula in Statement 1, one \(M_i\) at a time.
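Both cases of Statement 1, as well as the repeated application in Statement 2, can be illustrated numerically. Here is a Python sketch with arbitrarily chosen matrices, where \(M\) is deliberately singular:

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

def matmul(X, Y):
    """Product of two 3x3 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

M = [[1, 2, 3], [2, 4, 6], [0, 1, 1]]   # singular: row 2 = 2 * row 1
N = [[1, 0, 2], [0, 3, 1], [1, 1, 0]]

# Singular case: both sides of det(M N) = (det M)(det N) are zero.
lhs = det3(matmul(M, N))
rhs = det3(M) * det3(N)
assert lhs == 0 and rhs == 0

# Statement 2: repeated application, one factor at a time.
P = matmul(matmul(N, N), N)             # N * N * N
assert det3(P) == det3(N) ** 3
```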

Remark 10.5.7.

We can now understand the formula \(\det (k A) = k^n \det A\) as a special case of Proposition 10.5.6. Using \(M = k I\) and \(N = A\text{,}\) we have

\begin{equation*} \det (k A) = \det \bbrac{(k I) A} = \bbrac{\det (k I)} (\det A) = k^n \det A \text{.} \end{equation*}

(See Statement 2 of Proposition 8.5.2.)

Lemma 10.5.5 and the proof of Proposition 10.5.6 connect to Proposition 9.4.2 (which includes the formula \(\det (k A) = k^n \det A\) as one of its statements) through the fact that an \(n \times n\) scalar matrix \(k I\) is the product of \(n\) elementary matrices, one for each of the \(n\) operations “multiply row \(R_j\) by \(k\).”
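This observation can be checked directly. Here is a Python sketch (matrices chosen arbitrarily, with \(n = 3\)) that builds \(k I\) as a product of three row-scaling elementary matrices and verifies \(\det (k A) = k^3 \det A\):

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

def matmul(X, Y):
    """Product of two 3x3 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def scale_row(i, k):
    """Elementary matrix (3x3) for the operation: multiply row i by k."""
    E = [[1 if r == c else 0 for c in range(3)] for r in range(3)]
    E[i][i] = k
    return E

k = 2
# k I as the product of three "multiply row R_j by k" elementary matrices.
kI = matmul(matmul(scale_row(0, k), scale_row(1, k)), scale_row(2, k))
assert kI == [[k, 0, 0], [0, k, 0], [0, 0, k]]
assert det3(kI) == k**3                 # each row-scale contributes a factor k

A = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]   # arbitrary 3x3 matrix
kA = [[k * x for x in row] for row in A]
assert det3(kA) == k**3 * det3(A)       # det(kA) = k^n det A, with n = 3
```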

Subsection 10.5.4 Cramer's rule

Finally, we formally record Cramer's rule (discussed in Subsection 10.3.8).
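In the standard formulation of Cramer's rule, the \(i\)th unknown of a system \(A x = b\) (with \(A\) square and invertible) equals \(\det A_i / \det A\), where \(A_i\) is \(A\) with its \(i\)th column replaced by \(b\). Here is a minimal Python sketch of that computation for an arbitrarily chosen \(2 \times 2\) system (the helper `replace_col` is a name introduced here for illustration):

```python
from fractions import Fraction

def det2(m):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def replace_col(m, j, col):
    """Copy of 2x2 matrix m with column j replaced by col."""
    return [[col[i] if c == j else m[i][c] for c in range(2)] for i in range(2)]

# System A x = b (coefficients chosen for illustration):
#   2x + 1y = 5
#   1x + 3y = 10
A = [[2, 1], [1, 3]]
b = [5, 10]

# Cramer's rule: x_j = det(A_j) / det(A).
d = det2(A)
x = [Fraction(det2(replace_col(A, j, b)), d) for j in range(2)]

# Check that the solution satisfies both equations.
assert all(sum(A[i][j] * x[j] for j in range(2)) == b[i] for i in range(2))
```

Exact rational arithmetic via `Fraction` mirrors the hand computation: here \(\det A = 5\), and the two column replacements give determinants \(5\) and \(15\), so the solution is \(x = 1\), \(y = 3\).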