Section 11.4 Theory
Subsection 11.4.1 Complex linear systems
There is nothing new to add to our theoretical knowledge of linear systems when considering complex ones — all of the following remain true:
- a complex linear system has either no solution, one unique solution, or an infinite number of solutions;
- a complex matrix is row equivalent to one unique RREF matrix; and
- the number of parameters required to describe the general solution of a system that has an infinite number of solutions is equal to the number of variables minus the number of leading ones in the RREF of the matrix for the system.
The above is only a partial list — everything we learned about linear systems and reducing matrices in Chapter 1 and Chapter 2 remains true in the complex context.
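As a quick numerical illustration of this carry-over (not part of the text's development), the following NumPy sketch solves a small complex linear system with the same routine one would use for a real system; the particular matrix and right-hand side are made up for the example.

```python
import numpy as np

# A hypothetical 2x2 complex system, chosen only for illustration:
#   (1+i) z1 +      2 z2 = 3 + i
#        i z1 + (1-i) z2 = 2
A = np.array([[1 + 1j, 2],
              [1j, 1 - 1j]], dtype=complex)
b = np.array([3 + 1j, 2], dtype=complex)

# np.linalg.solve handles complex entries exactly as it does real ones,
# reflecting that the theory of linear systems carries over unchanged.
z = np.linalg.solve(A, b)
print(np.allclose(A @ z, b))  # True: the computed z solves the system
```

Here the coefficient matrix has nonzero determinant \((1+i)(1-i) - 2i = 2 - 2i\text{,}\) so the system has one unique solution, just as the theory predicts.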
Subsection 11.4.2 Complex matrices
Subsubsection 11.4.2.1 Basic algebra
The rules of matrix algebra contained in Proposition 4.5.1 all remain true for complex matrices, but we have a few rules to add for our new operations of matrix conjugate and complex adjoint.
Proposition 11.4.1. Matrix algebra.
The following are valid rules of matrix algebra. In each statement, assume that \(A\) and \(B\) are arbitrary complex matrices, \(\zerovec\) is a zero matrix, and \(I\) is an identity matrix, all of appropriate sizes so that the matrix operations can be carried out. In particular, in any rule involving a matrix power, the matrices involved are assumed to be square. Also assume that \(z\) is a complex scalar and that \(p\) is a positive integer.
1. Basic rules of matrix conjugation.
   a. \(\displaystyle \cconj{\zerovec} = \zerovec \)
   b. \(\displaystyle \cconj{I} = I \)
   c. \(\displaystyle \lcconj{\cconj{A}} = A \)
   d. \(\displaystyle \lcconj{A+B} = \cconj{A} + \cconj{B} \)
   e. \(\displaystyle \lcconj{zA} = \cconj{z} \cconj{A} \)
   f. \(\displaystyle \lcconj{AB} = \cconj{A} \cconj{B} \)
   g. \(\displaystyle \lcconj{A^p} = \cconj{A}^p \)
   h. \(\displaystyle \lcconj{\utrans{A}} = \utrans{\cconj{A}} \)
2. Basic rules of the complex adjoint.
   a. \(\displaystyle \adjoint{\zerovec} = \zerovec \)
   b. \(\displaystyle \adjoint{I} = I \)
   c. \(\displaystyle \adjoint{\bigl(\adjoint{A}\bigr)} = A \)
   d. \(\displaystyle \adjoint{(A+B)} = \adjoint{A} + \adjoint{B} \)
   e. \(\displaystyle \adjoint{(zA)} = \cconj{z} \adjoint{A} \)
   f. \(\displaystyle \adjoint{(AB)} = \adjoint{B} \adjoint{A} \)
   g. \(\displaystyle \adjoint{\bigl(A^p\bigr)} = \bigl(\adjoint{A}\bigr)^p \)
   h. \(\displaystyle \adjoint{\bigl(\utrans{A}\bigr)} = \utrans{\bigl(\adjoint{A}\bigr)} = \cconj{A} \)
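The adjoint rules above can be spot-checked numerically. The following sketch (using NumPy, which is not part of the text's development) verifies the sum, scalar, and product rules on randomly generated complex matrices; note in particular how the product rule reverses the order of the factors.

```python
import numpy as np

rng = np.random.default_rng(0)

def adjoint(M):
    """Complex adjoint: the conjugate transpose of M."""
    return M.conj().T

# Random complex matrices as a spot-check of the algebra rules.
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
z = 2 - 3j

print(np.allclose(adjoint(A + B), adjoint(A) + adjoint(B)))  # True: sum rule
print(np.allclose(adjoint(z * A), np.conj(z) * adjoint(A)))  # True: scalar comes out conjugated
print(np.allclose(adjoint(A @ B), adjoint(B) @ adjoint(A)))  # True: order reverses
```

A numerical check is of course no substitute for a proof, but it is a useful way to catch a misremembered rule (for instance, forgetting the reversal in the product rule).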
Subsubsection 11.4.2.2 Inverses
As we saw in Example 11.3.7 and Example 11.3.8, our new complex operations play nicely with inverses as well.
Proposition 11.4.2. Inverse of a complex conjugate or adjoint.
If \(A\) is invertible, then so are \(\cconj{A}\) and \(\adjoint{A}\text{,}\) with
\(\inv{\cconj{A}} = \lcconj{\inv{A}}\) and \(\inv{\bigl(\adjoint{A}\bigr)} = \adjoint{\bigl(\inv{A}\bigr)}\text{.}\)
Proof idea.
Suppose \(A\) is an invertible, square, complex matrix. To verify that \(\lcconj{\inv{A}}\) is the inverse of \(\cconj{A}\text{,}\) by definition all that is required is to check the equalities
\(\cconj{A} \, \lcconj{\inv{A}} = I\) and \(\lcconj{\inv{A}} \, \cconj{A} = I\text{.}\)
(By Proposition 6.5.4 and Proposition 6.5.6, it is only really necessary to check one of these two equalities.) And these two equalities are easily verified using Rule 1.b and Rule 1.f of Proposition 11.4.1.
The formula \(\inv{\bigl(\adjoint{A}\bigr)} = \adjoint{\bigl(\inv{A}\bigr)}\) can then be verified using the formula \(\inv{\cconj{A}} = \lcconj{\inv{A}}\) along with Proposition 5.5.8.
All of the other theory we developed around inverses and elementary matrices in Chapter 5 and Chapter 6 remains true for complex matrices. In particular, Rule 1.g and Rule 2.g of Proposition 11.4.1 remain true for negative exponents in the complex context, and the major result Theorem 6.5.2 remains true for complex invertible matrices.
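The two inverse formulas of Proposition 11.4.2 can also be spot-checked numerically. The sketch below (NumPy, for illustration only) computes both sides of each formula for a random complex matrix, which is invertible with probability 1.

```python
import numpy as np

rng = np.random.default_rng(1)
# A random complex matrix is almost surely invertible.
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

Ainv = np.linalg.inv(A)

# The inverse of the conjugate is the conjugate of the inverse.
print(np.allclose(np.linalg.inv(A.conj()), Ainv.conj()))      # True
# The inverse of the adjoint is the adjoint of the inverse.
print(np.allclose(np.linalg.inv(A.conj().T), Ainv.conj().T))  # True
```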
Subsubsection 11.4.2.3 Determinants
When you unravel a determinant-by-cofactors calculation, ultimately it is a complicated computation involving only the entries of the matrix and the operations of multiplication, addition, and subtraction. Since the complex conjugate plays nicely with those operations (Proposition A.2.11), and since the determinant of a transpose is the same as the determinant of the original matrix (Lemma 9.4.3), we immediately obtain the following facts.
Proposition 11.4.3. Determinant of a complex conjugate or adjoint.
If \(A\) is a square matrix, then
\(\det \cconj{A} = \cconj{\det A}\) and \(\det \adjoint{A} = \cconj{\det A}\text{.}\)
Finally, all the theory we developed about determinants and their connections to row reduction and to invertibility in Chapter 8, Chapter 9, and Chapter 10 remains true for complex matrices. In particular, Theorem 10.5.3 remains true for complex matrices.
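The determinant formulas of Proposition 11.4.3 admit the same kind of numerical spot-check (again using NumPy purely for illustration): conjugating a matrix conjugates its determinant, and taking the adjoint does too, since transposing leaves the determinant unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

d = np.linalg.det(A)

# det of the conjugate equals the conjugate of det.
print(np.isclose(np.linalg.det(A.conj()), np.conj(d)))    # True
# det of the adjoint also equals the conjugate of det,
# since transposing does not change the determinant.
print(np.isclose(np.linalg.det(A.conj().T), np.conj(d)))  # True
```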
Subsubsection 11.4.2.4 Self-adjoint matrices
Here we'll record the pattern of Example 11.3.9, then apply the second formula of Proposition 11.4.3 to the case of self-adjoint matrices.
Proposition 11.4.4. Pattern of entries in self-adjoint matrices.
A complex matrix \(A\) is self-adjoint precisely when each entry \(a_{ij}\) is equal to the complex conjugate of the corresponding symmetric entry \(a_{ji}\text{.}\) In particular, the entries along the main diagonal in a self-adjoint matrix must be purely real.
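The entry pattern of Proposition 11.4.4 is easy to see in a concrete example. The matrix below is made up for illustration: its diagonal entries are real, and each off-diagonal entry is the conjugate of its mirror-image entry, so the matrix equals its own adjoint.

```python
import numpy as np

# A hand-built self-adjoint matrix: real diagonal entries,
# and a_ij equal to the conjugate of a_ji off the diagonal.
A = np.array([[2,      1 + 1j, 3j],
              [1 - 1j, -5,     4 - 2j],
              [-3j,    4 + 2j, 0]], dtype=complex)

print(np.allclose(A, A.conj().T))     # True: A equals its adjoint
print(np.all(np.isreal(np.diag(A))))  # True: the diagonal entries are purely real
```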
Proposition 11.4.5. Determinant of a self-adjoint matrix.
The determinant of a self-adjoint matrix must be a purely real number.
Proof.
Suppose \(A\) is a complex, square, self-adjoint matrix. From Proposition 11.4.3, we have
\(\det \adjoint{A} = \cconj{\det A}\text{.}\)
But if \(A\) is self-adjoint, then by definition we have \(\adjoint{A} = A\text{,}\) and so
\(\det \adjoint{A} = \det A\text{.}\)
Combining these two determinant formulas gives
\(\det A = \cconj{\det A}\text{.}\)
But only a real number is equal to its own complex conjugate (Statement 2 of Proposition A.2.12), and so we can conclude that the complex number \(\det A\) must actually be purely real.
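Proposition 11.4.5 can also be observed numerically. In the sketch below (NumPy, illustration only), a self-adjoint matrix is manufactured as \(B + \adjoint{B}\) for a random complex \(B\) — a standard construction, since the sum of a matrix and its adjoint is always self-adjoint — and its determinant comes out with (numerically) zero imaginary part.

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# B + B* is always self-adjoint.
A = B + B.conj().T

d = np.linalg.det(A)
print(np.isclose(d.imag, 0.0))  # True: the determinant is purely real (up to rounding)
```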