The number one is important in algebra; it lets us do things like
\begin{align*}
5 a \amp = 15 \\
\frac{1}{5}\cdot 5 \cdot a \amp = \frac{1}{5} \cdot 15 \\
1 a \amp = 3 \\
a \amp = 3.
\end{align*}
The critical step for us right now is the last simplification of the left-hand side:
\begin{equation*}
1 a = a \text{.}
\end{equation*}
(a)
What matrix do you think plays the same role in the algebra of \(2\times 2\) matrices as the number \(1\) plays in number algebra? To answer this, try to fill in the first matrix below so that the matrix equality is always true, no matter the values of \(a,b,c,d\text{.}\)
\begin{equation*}
\begin{bmatrix} \fillinmath{XX} \amp \fillinmath{XX} \\ \fillinmath{XX} \amp \fillinmath{XX} \end{bmatrix}
\begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}
=
\begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}
\end{equation*}
(b)
Write \(I\) for your \(2\times 2\) matrix from Task a (the \(I\) stands for identity matrix).
(i)
Does \(IA=A\) work for every \(2 \times 2\) matrix \(A\text{?}\) For every \(2 \times 3\) matrix \(A\text{?}\) For every \(2\times\ell\) matrix \(A\text{,}\) no matter the number \(\ell\) of columns?
(ii)
Does \(BI = B\) also work for every \(2\times 2\) matrix \(B\text{?}\) For every \(\ell\times 2\) matrix \(B\text{?}\)
(c)
Extend: What is the \(3\times 3\) version of \(I\text{?}\) The \(4\times 4\) version? The \(n\times n\) version?
Discovery 5.2.
In the preamble to Discovery 5.1, there were two ingredients necessary to make the algebra work:
there is a special number \(1\) so that \(1a=a\) for all numbers \(a\text{;}\) and
for a nonzero number like \(5\text{,}\) there is a multiplicative inverse \(1/5\) so that \((1/5)\cdot 5 = 1\text{.}\)
Multiplicative inverses are very useful in algebra, so we would also like to have them in matrix algebra. For a given \(2\times 2\) matrix \(A\text{,}\) can you determine entries \(a,b,c,d\) for a matrix
\begin{equation*}
B = \begin{bmatrix} a \amp b \\ c \amp d \end{bmatrix}
\end{equation*}
so that \(BA = I\text{?}\)
Discovery 5.2 demonstrates that some square matrices have multiplicative inverses (i.e., are invertible) and some do not (such matrices are called singular). If a square matrix \(A\) is invertible, write \(\inv{A}\) for its inverse. (But never write \(1/A\text{!}\)) This inverse is defined by its relationship to \(A\) and \(I\text{:}\) \(\inv{A}\) is the square matrix of the same size as \(A\) so that both \(A\inv{A} = I\) and \(\inv{A}A = I\) are true.
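For example, one can check directly (with a pair of matrices chosen here just for illustration) that
\begin{equation*}
\begin{bmatrix} 1 \amp 1 \\ 0 \amp 1 \end{bmatrix}
\begin{bmatrix} 1 \amp -1 \\ 0 \amp 1 \end{bmatrix}
= I
\qquad \text{and} \qquad
\begin{bmatrix} 1 \amp -1 \\ 0 \amp 1 \end{bmatrix}
\begin{bmatrix} 1 \amp 1 \\ 0 \amp 1 \end{bmatrix}
= I \text{,}
\end{equation*}
so each of these two matrices is the inverse of the other. On the other hand, no choice of second matrix will work for a singular matrix such as \(\begin{bmatrix} 1 \amp 2 \\ 2 \amp 4 \end{bmatrix}\text{.}\)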
Discovery 5.3.
In the following, assume \(A,B,C\) are square invertible matrices, all of the same dimension, and assume that \(k\) is a nonzero scalar. Do not just look up the answers in the rest of this chapter; try to come up with them yourselves.
For this activity, it might be helpful to think of the pattern of the inverse in the following way: given a square matrix \(M\text{,}\) the inverse of \(M\) is the square matrix of the same size that can fill both of the boxes below to create true matrix equalities.
\begin{align}
M \boxed{\phantom{X}} \amp = I \amp \boxed{\phantom{X}} M \amp = I\tag{✶}
\end{align}
(a)
What do you think is the inverse of \(\inv{A}\text{?}\) In other words, if you use \(M = \inv{A}\) in (✶), what single choice of matrix can be used to fill in both boxes?
(b)
Determine a formula for the inverse of \(kA\) in terms of \(k\) and \(\inv{A}\text{.}\) In other words, if you use \(M = kA\) in (✶), what formula involving \(k\) and \(\inv{A}\) can be used to fill in both boxes?
(c)
Explain why the formula for the inverse of the product \(AB\) is not \(\inv{A}\inv{B}\text{.}\) Then determine a correct formula in terms of \(\inv{A}\) and \(\inv{B}\text{.}\) (Again, to determine the correct formula for \(\inv{(AB)}\text{,}\) use \(M = AB\) in (✶), and then try to figure out what single formula you can enter into both boxes so that both left-hand sides reduce to \(I\text{.}\))
(d)
Extend: Determine a formula for the inverse of the product \(ABC\) in terms of the inverses \(\inv{A}\text{,}\) \(\inv{B}\text{,}\) and \(\inv{C}\text{.}\)
(e)
What do you think \(A^{-2}\) means? There are two possibilities, because the notation suggests applying two different processes: squaring and inverting. Do both interpretations work out to the same matrix? Try it with the matrix \(A\) given below. (For convenience, its inverse is also given.)
Discovery 5.4.
(a)
What is it about matrix \(A\) that is making the usual algebra of “cancellation” fail?
Hint.
Think about the “hidden” algebra behind the cancellation \(ab=ac\Longrightarrow b=c\) for numbers.
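Spelled out, that hidden algebra parallels the computation in the preamble to Discovery 5.1:
\begin{align*}
a b \amp = a c \\
\frac{1}{a} \cdot a b \amp = \frac{1}{a} \cdot a c \\
1 b \amp = 1 c \\
b \amp = c \text{,}
\end{align*}
and the step of multiplying by \(1/a\) is only possible when \(a \neq 0\text{.}\)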
(b)
In what circumstance is the algebra \(AB=AC \implies B=C\) valid? What explicit algebra steps go into this deduction?
(c)
Is the algebra \(AB=CA \implies B=C\) ever valid?
Discovery 5.5.
If we have a linear system \(A\uvec{x}=\uvec{b}\) with a square and invertible coefficient matrix \(A\text{,}\) we can use matrix algebra to solve the system instead of row reducing, by using \(\inv{A}\) to isolate \(\uvec{x}\text{.}\)
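Schematically, the idea mirrors the number algebra in the preamble to Discovery 5.1:
\begin{align*}
A\uvec{x} \amp = \uvec{b} \\
\inv{A} A \uvec{x} \amp = \inv{A} \uvec{b} \\
I \uvec{x} \amp = \inv{A} \uvec{b} \\
\uvec{x} \amp = \inv{A} \uvec{b} \text{.}
\end{align*}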
Here is an invertible \(3 \times 3\) matrix \(A\) and its inverse:
In general, for system \(A\uvec{x} = \uvec{b}\) with a coefficient matrix \(A\) that is square and invertible, how many solutions does the system have? Justify your answer.
Hint.
How many solutions did each of the systems in Discovery 5.5 have? Why?