Section 10.4 Examples
Subsection 10.4.1 The \(2\times 2\) case
Let's compute the adjoint of the general \(2\times 2\) matrix \(A = \left[\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right] \text{.}\) First, we compute the minors.
In the matrix of cofactors for a \(2 \times 2\) matrix, the off-diagonal minors acquire a negative sign according to the pattern of cofactor signs, and the adjoint is then the transpose of the cofactor matrix.
The inverse of \(A\) is then the reciprocal of the determinant times the adjoint, so that
as promised in Proposition 5.5.4.
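Written out, the standard computation for the general \(2\times 2\) matrix runs as follows.

```latex
% Minors of A: delete row i and column j, then take the determinant
% of what remains.
M_{11} = d, \qquad M_{12} = c, \qquad M_{21} = b, \qquad M_{22} = a
% Cofactors C_{ij} = (-1)^{i+j} M_{ij}: the off-diagonal minors
% change sign, and the adjoint is the transpose of the cofactor matrix.
\operatorname{adj} A
  = \begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix}^{T}
  = \begin{bmatrix} d & -c \\ -b & a \end{bmatrix}^{T}
  = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}
% Inverse: reciprocal of the determinant times the adjoint.
A^{-1} = \frac{1}{\det A} \operatorname{adj} A
       = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}
```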
Subsection 10.4.2 Computing an inverse using the adjoint
As mentioned, using the adjoint to compute the inverse of a matrix is not very efficient for matrices larger than \(2\times 2\text{.}\) In most cases, you are better off using the row reduction method. However, there are situations where you might want to use the adjoint instead, as in the example below.
Example 10.4.1. Using the adjoint to compute an inverse.
It can be tedious to row reduce a matrix with variable entries. Consider
To row reduce \(X\text{,}\) our first step would be to obtain a leading one in the first column. We might choose to perform \(R_1\to\frac{1}{x}R_1\text{,}\) except that this operation would be invalid in the case that \(x=0\text{.}\) Or we might choose to perform \(R_2\to\frac{1}{x-1}R_2\text{,}\) except that this operation would be invalid in the case that \(x=1\text{.}\) So to row reduce \(X\) we would need to consider three different cases, \(x=0\text{,}\) \(x=1\text{,}\) and \(x\neq 0,1\text{,}\) performing different row reduction sequences in each of these cases. And when we get to the point of trying to obtain a leading one in the second column, we might discover there are even more cases to consider.
So instead we will attempt to compute the inverse of \(X\) using the adjoint. First, we compute the minors.
We obtain the matrix of cofactors by attaching a negative sign to certain minor determinants, according to the \(3\times 3\) pattern of cofactor signs, and the adjoint is then the transpose of the cofactor matrix.
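The \(3\times 3\) pattern of cofactor signs is the usual checkerboard of \((-1)^{i+j}\) factors:

```latex
\begin{bmatrix} + & - & + \\ - & + & - \\ + & - & + \end{bmatrix},
\qquad C_{ij} = (-1)^{i+j} M_{ij}
```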
To compute the inverse of \(X\text{,}\) we still need its determinant. But we already have all the cofactors, so a cofactor expansion will be easy. Let's do a cofactor expansion of \(\det X\) along the third row. (Remember that the cofactors already have the appropriate signs, so we are just summing “entry times cofactor” terms.)
Finally, we obtain a formula for the inverse of \(X\) that is valid for every value of \(x\) for which the determinant is nonzero,
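Symbolic computations like this one can be checked with a computer algebra system. Below is a minimal SymPy sketch of the adjoint method for a hypothetical matrix with variable entries; the entries of \(X\) are not reproduced here, so this stand-in matrix is an assumption, chosen only so that its determinant vanishes exactly at \(x = 0\) and \(x = 1\text{.}\)

```python
import sympy as sp

x = sp.symbols('x')

# A hypothetical 3x3 matrix with variable entries (a stand-in, not
# the actual matrix X from the text).
X = sp.Matrix([
    [x,     1, 0],
    [x - 1, x, 1],
    [0,     1, x],
])

adj = X.adjugate()   # transpose of the matrix of cofactors
d = X.det()          # here d = x^3 - x^2 = x^2 (x - 1)

# Defining property of the adjoint: X * adj(X) = (det X) * I.
assert (X * adj - d * sp.eye(3)).expand() == sp.zeros(3, 3)

# Inverse via the adjoint, valid wherever det X != 0.
X_inv = adj / d
assert sp.simplify(X * X_inv) == sp.eye(3)
```

As in the text, the formula `X_inv` is valid for every value of \(x\) with nonzero determinant, here \(x \neq 0, 1\text{.}\)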
Subsection 10.4.3 Cramer's rule
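For reference, Cramer's rule states: if \(A\) is a square matrix with \(\det A \neq 0\text{,}\) then the system \(A\uvec{x} = \uvec{b}\) has one unique solution, and each entry of that solution is a ratio of determinants,

```latex
x_i = \frac{\det A_i}{\det A}
```

where \(A_i\) is the matrix obtained from \(A\) by replacing its \(i\)th column by \(\uvec{b}\text{.}\)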
Example 10.4.2. Using Cramer's rule to compute individual variable values in a system of equations.
Consider the system
with coefficient matrix and vector of constants,
Conveniently, we have already computed \(\det A = -27\) in Example 8.4.5 (and again in Example 9.3.1). Since \(\det A \neq 0\text{,}\) we know that \(A\) is invertible and so the system has one unique solution. Suppose we want to know the value of \(x_2\) in the solution. We can form the matrix \(A_2\text{,}\) where the second column of \(A\) is replaced by \(\uvec{b}\text{,}\)
and compute \(\det A_2\) by a cofactor expansion along the third row (expanding the corresponding \(3\times 3\) minor determinant along the first row),
Thus, the value of \(x_2\) in the one unique solution to the system is
If we also want to know the value of \(x_4\text{,}\) we form the matrix \(A_4\text{,}\) where the fourth column of \(A\) is replaced by \(\uvec{b}\text{,}\)
and compute \(\det A_4\) (again by a cofactor expansion along the third row, followed by an expansion along the first row of the corresponding \(3\times 3\) minor determinant),
Thus, the value of \(x_4\) in the one unique solution to the system is
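The procedure in this example can be automated. The sketch below implements Cramer's rule over exact fractions for a hypothetical \(3\times 3\) system (an assumption for illustration, not the system of this example), and checks the result by substituting back into the system.

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row
    (fine for small matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def cramer(A, b, i):
    """Value of variable x_{i+1}: replace column i of A by b,
    then divide determinants."""
    Ai = [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(A)]
    return det(Ai) / det(A)

# A hypothetical 3x3 system (not the system from the example above).
A = [[Fraction(v) for v in row] for row in [[2, 1, 1], [1, 3, 2], [1, 0, 0]]]
b = [Fraction(v) for v in [4, 5, 6]]

x = [cramer(A, b, i) for i in range(3)]
# Check: substituting the solution back should reproduce b.
assert all(sum(A[r][c] * x[c] for c in range(3)) == b[r] for r in range(3))
```

As in the example, each variable can be computed individually without solving the whole system, which is the practical appeal of Cramer's rule when only one or two entries of the solution are needed.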