Section 7.3 Concepts
After writing down examples of these special forms of square matrices in Discovery 7.1, it should be obvious what these kinds of matrices “look” like. But we need to appreciate the difference between our conceptions and the technical definitions of these forms. For example, when we think of an example of an upper triangular matrix, we are likely to focus on the entries on and above the main diagonal, because those are what form the “upper triangular” shape, and all the other entries below the main diagonal are zero. But the technical definition of upper triangular matrix provided in Section 7.2 focuses on those zero entries below the main diagonal, and does not mention the entries on or above the main diagonal at all.
Unlike a conception, a technical definition aims to capture the minimum information necessary to identify an instance of the concept. For the purposes of identifying an upper triangular matrix, the entries on or above the main diagonal are irrelevant and only the zeros below the main diagonal matter, because if any of those entries were nonzero the matrix in question would most certainly not be upper triangular. But this minimalism in making technical definitions can sometimes have surprising side effects, as we discovered in Discovery 7.1. For example, a diagonal matrix is, by definition, also both upper and lower triangular, because its entries below and above the main diagonal are all zero. As an extreme example, a square zero matrix is simultaneously all three of diagonal, upper triangular, and lower triangular.
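For instance, consider the following sample matrix (our own example here, not one from the discovery activity):
\begin{equation*}
D = \begin{bmatrix} 4 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp -1 \end{bmatrix}.
\end{equation*}
Every entry below the main diagonal of \(D\) is zero, so it satisfies the definition of upper triangular; every entry above the main diagonal is zero, so it also satisfies the definition of lower triangular; and since that accounts for all of the off-diagonal entries, \(D\) is diagonal as well, even though one of its diagonal entries happens to be zero.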
Question 7.3.1.
Why are these special forms important?
At this stage, we can state a few reasons why we might be interested in identifying these matrix forms with special names.
- For the diagonal and triangular forms, the fact that many of their entries are zero makes computing with them especially easy, whether with respect to matrix operations or with respect to solving systems.
- With regard to solving systems, any square matrix in REF (or RREF) must be upper triangular. And a lower triangular matrix is just the transpose of an upper triangular one, so it seems reasonable to identify that form along with the upper triangular form.
- Symmetric matrices play a special role in the geometry of the plane, of space, and of higher-dimensional “hyperspaces,” as you may discover in a second course in linear algebra.
- Finally, for each of these forms (including symmetric), you discovered in Discovery 7.1 that adding or scalar multiplying matrices of a given form resulted in another matrix of the same form. This was also true for products, powers, and inverses, except that a product of two symmetric matrices may not be symmetric (see the example following this list). The fact that matrix operations on these forms produce results of the same form is an important property in more advanced abstract algebra.
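For example, here is one way the symmetric exception can occur, using a pair of sample \(2 \times 2\) symmetric matrices of our own choosing:
\begin{equation*}
\begin{bmatrix} 1 \amp 2 \\ 2 \amp 3 \end{bmatrix}
\begin{bmatrix} 0 \amp 1 \\ 1 \amp 0 \end{bmatrix}
= \begin{bmatrix} 2 \amp 1 \\ 3 \amp 2 \end{bmatrix}.
\end{equation*}
Each factor is symmetric, but the product is not, since its \((1,2)\) entry and its \((2,1)\) entry are different.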
Subsection 7.3.1 Algebra with scalar matrices
In Discovery 7.1, you might have noticed how certain rules of matrix algebra apply to scalar matrices:
- \(kI + mI = (k+m)I\) (Rule 2.b of Proposition 4.5.1)
- \(kI - mI = (k-m)I\)
- \((kI)(mI) = (km)I\)
- \((kI)^p = k^p I\)
- \(\inv{(kI)} = \inv{k}I\)
So for scalar matrices there seems to be a pattern: the matrix operation can be achieved by just performing the corresponding scalar operation. This essentially gives us a way to “inject” the algebra of numbers into the algebra of square matrices of any given size, an extremely important notion in more advanced abstract algebra that you may get a taste of in a second linear algebra course.
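For example, here is the product pattern verified for a pair of \(2 \times 2\) scalar matrices, using the sample scalar values \(3\) and \(5\):
\begin{equation*}
(3I)(5I)
= \begin{bmatrix} 3 \amp 0 \\ 0 \amp 3 \end{bmatrix}
  \begin{bmatrix} 5 \amp 0 \\ 0 \amp 5 \end{bmatrix}
= \begin{bmatrix} 15 \amp 0 \\ 0 \amp 15 \end{bmatrix}
= 15 I,
\end{equation*}
exactly as the rule \((kI)(mI) = (km)I\) predicts.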
Subsection 7.3.2 Inverses of special forms
In Discovery 7.1.e, we examined the invertibility of these various forms of matrices. In Theorem 6.5.2 we learned that a matrix is invertible only if it can be reduced to the identity. Now, scalar matrices, diagonal matrices, and upper triangular matrices are already pretty close to being reduced, but we can see that if any of the diagonal entries of such a matrix is zero, then there will be no hope of getting a leading one in that column, and so we won’t be able to reduce to the identity. Thus, a scalar, diagonal, or upper triangular matrix is invertible only if its diagonal entries are all nonzero. And since the transpose of a lower triangular matrix is upper triangular, and since taking a transpose does not affect invertibility (Proposition 5.5.8), the same is true of lower triangular matrices. Analyzing the invertibility of symmetric matrices is a little more complicated, but in Discovery 7.1.f we discovered that for each of these special forms (including symmetric matrices), the inverse of a matrix of that form is also of that form.
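For example, a diagonal matrix whose diagonal entries are all nonzero can be inverted simply by taking the reciprocal of each diagonal entry, and the result is again diagonal; here is a quick check with a sample \(3 \times 3\) diagonal matrix of our own choosing:
\begin{equation*}
\inv{\begin{bmatrix} 2 \amp 0 \amp 0 \\ 0 \amp 3 \amp 0 \\ 0 \amp 0 \amp 5 \end{bmatrix}}
= \begin{bmatrix} \frac{1}{2} \amp 0 \amp 0 \\ 0 \amp \frac{1}{3} \amp 0 \\ 0 \amp 0 \amp \frac{1}{5} \end{bmatrix},
\end{equation*}
as can be verified by multiplying the two matrices together and obtaining the identity.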
Subsection 7.3.3 Decompositions using special forms
In Discovery 7.3 we discovered that an upper triangular matrix can be decomposed into a sum or a product of a diagonal matrix with a special kind of upper triangular matrix. Using the matrix from that discovery activity as an example, we have
\begin{align}
\begin{bmatrix} 2 \amp 1 \amp 1 \\ 0 \amp 3 \amp 1 \\ 0 \amp 0 \amp 5 \end{bmatrix}
\amp = \begin{bmatrix} 2 \amp 0 \amp 0 \\ 0 \amp 3 \amp 0 \\ 0 \amp 0 \amp 5 \end{bmatrix}
+ \begin{bmatrix} 0 \amp 1 \amp 1 \\ 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0 \end{bmatrix},\tag{✶}\\
\notag\\
\begin{bmatrix} 2 \amp 1 \amp 1 \\ 0 \amp 3 \amp 1 \\ 0 \amp 0 \amp 5 \end{bmatrix}
\amp = \begin{bmatrix} 2 \amp 0 \amp 0 \\ 0 \amp 3 \amp 0 \\ 0 \amp 0 \amp 5 \end{bmatrix}
\begin{bmatrix}
1 \amp \frac{1}{2} \amp \frac{1}{2} \\
0 \amp 1 \amp \frac{1}{3}\\0 \amp 0 \amp 1
\end{bmatrix}.\tag{✶✶}
\end{align}
The special upper triangular matrix in the product decomposition in (✶✶) is called a unipotent matrix because its powers will always have that line of ones down the main diagonal. The special upper triangular matrix in the sum decomposition in (✶) is called a nilpotent matrix because some power of it is the zero matrix: its powers will always have that line of zeros down the main diagonal, and in fact, just like the nilpotent matrices you analyzed in Discovery 7.4, if you raise this matrix to an exponent equal to its size you will get the zero matrix!
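For example, computing powers of the nilpotent matrix that appears in (✶) bears this out: squaring pushes the nonzero entries further above the main diagonal, and the cube (the exponent matching its size) is the zero matrix:
\begin{align*}
\begin{bmatrix} 0 \amp 1 \amp 1 \\ 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0 \end{bmatrix}^2
\amp = \begin{bmatrix} 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \end{bmatrix},
\amp
\begin{bmatrix} 0 \amp 1 \amp 1 \\ 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0 \end{bmatrix}^3
\amp = \begin{bmatrix} 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \end{bmatrix}.
\end{align*}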