Section 10.4 Examples
Subsection 10.4.1 The 2×2 case
Let's compute the adjoint of the general 2×2 matrix A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}. First, the minors: M_{11} = d, M_{12} = c, M_{21} = b, and M_{22} = a. Applying the pattern of cofactor signs gives the matrix of cofactors \begin{bmatrix} d & -c \\ -b & a \end{bmatrix}, and transposing yields the adjoint \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.
Subsection 10.4.2 Computing an inverse using the adjoint
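The 2×2 adjoint formula can be checked numerically. This is a minimal sketch; the sample entries below are arbitrary illustrative values, not a matrix from the text, and the check rests on the identity A · adj A = (det A) I.

```python
# Numeric check of the 2x2 adjugate (classical adjoint) formula
# adj(A) = [[d, -b], [-c, a]], using arbitrary sample entries.

def adjugate_2x2(A):
    """Return the adjugate of a 2x2 matrix A = [[a, b], [c, d]]."""
    (a, b), (c, d) = A
    return [[d, -b], [-c, a]]

def mat_mul_2x2(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3, 5], [2, 4]]        # det A = 3*4 - 5*2 = 2
adj = adjugate_2x2(A)       # [[4, -5], [-2, 3]]
prod = mat_mul_2x2(A, adj)  # A * adj A = (det A) * I = [[2, 0], [0, 2]]
print(adj, prod)
```

The product A · adj A comes out to (det A) I, which is exactly the identity behind the inverse formula A^{-1} = (1/\det A) adj A.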
As mentioned, using the adjoint to compute the inverse of a matrix is not very efficient for matrices larger than 2×2. In most cases, you are better off using the row reduction method. However, there are situations where you might want to use the adjoint instead, as in the example below.
Example 10.4.1. Using the adjoint to compute an inverse.
It can be tedious to row reduce a matrix with variable entries. Consider
To row reduce X, our first step would be to obtain a leading one in the first column. We might choose to perform R_1 \to \frac{1}{x} R_1, except that this operation would be invalid in the case that x = 0. Or we might choose to perform R_2 \to \frac{1}{x-1} R_2, except that this operation would be invalid in the case that x = 1. So to row reduce X we would need to consider three different cases, x = 0, x = 1, and x \neq 0, 1, performing different row reduction sequences in each of these cases. And when we get to the point of trying to obtain a leading one in the second column, we might discover there are even more cases to consider.
So instead we will attempt to compute the inverse of X using the adjoint. First, the minors.
We obtain the matrix of cofactors by negating certain minor determinants according to the 3×3 pattern of cofactor signs, \begin{bmatrix} + & - & + \\ - & + & - \\ + & - & + \end{bmatrix}, and then the adjoint is the transpose.
To compute the inverse of X, we still need its determinant. But we already have all the cofactors, so a cofactor expansion will be easy. Let's do a cofactor expansion of \det X along the third row. (Remember that the cofactors already have the appropriate signs, so we are just summing "entry times cofactor" terms.)
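The adjugate method used in this example can be sketched in code. Since the variable-entry matrix X from the text is not reproduced here, the sketch uses an arbitrary numeric 3×3 matrix M as a stand-in (an assumption of this example); exact rational arithmetic via the standard library's `fractions` module avoids floating-point error.

```python
# Sketch: inverse via the adjugate, A^{-1} = (1/det A) * adj A.
# M below is an arbitrary stand-in matrix, not the matrix X from the text.
from fractions import Fraction

def minor(M, i, j):
    """Matrix M with row i and column j deleted."""
    return [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def adjugate(M):
    """Transpose of the matrix of cofactors."""
    n = len(M)
    cof = [[(-1) ** (i + j) * det(minor(M, i, j)) for j in range(n)]
           for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]  # transpose

def inverse(M):
    """Inverse as (1/det M) * adj M, in exact rational arithmetic."""
    d = Fraction(det(M))
    return [[Fraction(entry) / d for entry in row] for row in adjugate(M)]

M = [[1, 2, 0], [0, 1, 1], [1, 0, 2]]  # det M = 4
print(inverse(M))
```

As in the text, the determinant needed at the end could be obtained cheaply by a cofactor expansion reusing the cofactors already computed; the sketch above recomputes it for simplicity.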
Subsection 10.4.3 Cramer's rule
Example 10.4.2. Using Cramer's rule to compute individual variable values in a system of equations.
Consider the system
with coefficient matrix and vector of constants,
Conveniently, we have already computed \det A = -27 in Example 8.4.5 (and again in Example 9.3.1). Since \det A \neq 0, we know that A is invertible, and so the system has a unique solution. Suppose we want to know the value of x_2 in the solution. We can form the matrix A_2, where the second column of A is replaced by \uvec{b},
and compute \det A_2 by a cofactor expansion along the third row (expanding the corresponding 3\times 3 minor determinant along the first row),
Thus, the value of x_2 in the unique solution to the system is
If we also want to know the value of x_4, we form the matrix A_4, where the fourth column of A is replaced by \uvec{b},
and compute \det A_4 (again by a cofactor expansion along the third row, followed by an expansion along the first row of the corresponding 3\times 3 minor determinant),
Thus, the value of x_4 in the unique solution to the system is
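Cramer's rule as used in this example can also be sketched in code. The 4×4 system from the text (with \det A = -27) is not reproduced above, so the sketch below uses a small illustrative 3×3 system; the matrix A and vector b are assumptions of this example, chosen only so the arithmetic is easy to follow.

```python
# Sketch of Cramer's rule: x_j = det(A_j) / det(A), where A_j is A with
# column j replaced by the vector of constants b.
# A and b below are illustrative stand-ins, not the system from the text.
from fractions import Fraction

def minor(M, i, j):
    """Matrix M with row i and column j deleted."""
    return [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def cramer(A, b):
    """Solve A x = b by Cramer's rule (requires det A != 0)."""
    d = det(A)
    n = len(A)
    solution = []
    for j in range(n):
        # Replace column j of A with b to form A_j.
        Aj = [row[:j] + [b[i]] + row[j+1:] for i, row in enumerate(A)]
        solution.append(Fraction(det(Aj), d))
    return solution

A = [[2, 1, 1], [1, 3, 2], [1, 0, 0]]  # det A = -1, so A is invertible
b = [4, 5, 6]
print(cramer(A, b))  # the unique solution is x = (6, 15, -23)
```

Note that, just as in the text, each variable's value is obtained independently: computing x_2 alone requires only \det A and \det A_2, without solving for the rest of the system.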