Section 11.4 Theory

Subsection 11.4.1 Complex linear systems

There is nothing new to add to our theoretical knowledge of linear systems when considering complex ones; the following statements all remain true:

  • a complex linear system has either no solution, one unique solution, or an infinite number of solutions;
  • a complex matrix is row equivalent to one unique RREF matrix; and
  • the number of parameters required to describe the general solution of a system that has an infinite number of solutions is equal to the number of variables minus the number of leading ones in the RREF of the matrix for the system.

The above is only a partial list: everything we learned about linear systems and reducing matrices in Chapter 1 and Chapter 2 remains true in the complex context.
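As a quick numerical illustration (a Python/NumPy sketch, not part of the text; the matrix \(A\) and vector \(b\) below are invented for the example), a complex linear system with an invertible coefficient matrix has exactly one solution, just as in the real case:

```python
import numpy as np

# An invented 2x2 complex system A x = b with invertible A
# (det A = 2 - (1j)(-1j) = 1, so A is invertible).
A = np.array([[1, 1j],
              [-1j, 2]], dtype=complex)
b = np.array([1, 0], dtype=complex)

# np.linalg.solve returns the unique solution when A is invertible.
x = np.linalg.solve(A, b)

# Verify that x really solves the system.
assert np.allclose(A @ x, b)
```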

Subsection 11.4.2 Complex matrices

Subsubsection 11.4.2.1 Basic algebra

The rules of matrix algebra contained in Proposition 4.5.1 all remain true for complex matrices, but we have a few rules to add for our new operations of matrix conjugate and complex adjoint.
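The new conjugate and adjoint rules can be spot-checked numerically. Here is a Python/NumPy sketch (not part of the text; the matrices are invented), where the adjoint is computed as the conjugate transpose:

```python
import numpy as np

def adjoint(M):
    # The complex adjoint: the conjugate transpose of M.
    return np.conj(M).T

# Invented complex matrices for the check.
A = np.array([[1 + 2j, 0], [3, 1 - 1j]])
B = np.array([[2j, 1], [1, 1 + 1j]])

# Conjugation distributes over sums and products.
assert np.allclose(np.conj(A + B), np.conj(A) + np.conj(B))
assert np.allclose(np.conj(A @ B), np.conj(A) @ np.conj(B))

# The adjoint of a product reverses the order of the factors.
assert np.allclose(adjoint(A @ B), adjoint(B) @ adjoint(A))
```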

Subsubsection 11.4.2.2 Inverses

As we saw in Example 11.3.7 and Example 11.3.8, our new complex operations play nicely with inverses as well.

Suppose \(A\) is an invertible, square, complex matrix. To verify that \(\lcconj{\inv{A}}\) is the inverse of \(\cconj{A}\text{,}\) by definition all that is required is to check the equalities

\begin{align*} \cconj{A} \lcconj{\inv{A}} \amp = I, \amp \lcconj{\inv{A}} \cconj{A} \amp = I. \end{align*}

(By Proposition 6.5.4 and Proposition 6.5.6, it is only really necessary to check one of these two equalities.) And these two equalities are easily verified using Rule 1.b and Rule 1.f of Proposition 11.4.1.

The formula \(\inv{\bigl(\adjoint{A}\bigr)} = \adjoint{\bigl(\inv{A}\bigr)}\) can then be verified using the formula \(\inv{\cconj{A}} = \lcconj{\inv{A}}\) along with Proposition 5.5.8.
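Both inverse identities can be spot-checked numerically; a Python/NumPy sketch (not part of the text; the matrix is invented):

```python
import numpy as np

def adjoint(M):
    # The complex adjoint: conjugate transpose.
    return np.conj(M).T

# An invented invertible complex matrix.
A = np.array([[1, 1j], [-1j, 2]])
A_inv = np.linalg.inv(A)

# The inverse of the conjugate equals the conjugate of the inverse.
assert np.allclose(np.linalg.inv(np.conj(A)), np.conj(A_inv))

# The inverse of the adjoint equals the adjoint of the inverse.
assert np.allclose(np.linalg.inv(adjoint(A)), adjoint(A_inv))
```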

All of the other theory we developed around inverses and elementary matrices in Chapter 5 and Chapter 6 remains true for complex matrices. In particular, Rule 1.g and Rule 2.g of Proposition 11.4.1 remain true for negative exponents in the complex context, and also the major result Theorem 6.5.2 is true for complex invertible matrices.

Subsubsection 11.4.2.3 Determinants

When you unravel a determinant-by-cofactors calculation, ultimately it is a complicated computation involving only the entries of the matrix and the operations of multiplication, addition, and subtraction. Since the complex conjugate plays nicely with those operations (Proposition A.2.11), and since the determinant of a transpose is the same as the determinant of the original matrix (Lemma 9.4.3), we immediately obtain the following facts.

Finally, all the theory we developed about determinants and their connection to row reduction and to invertibility in Chapter 8, Chapter 9, and Chapter 10 remains true for complex matrices. In particular, Theorem 10.5.3 holds for complex invertible matrices.
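The determinant facts above can be checked numerically as well; a Python/NumPy sketch with an invented matrix (not part of the text):

```python
import numpy as np

# An invented complex matrix.
A = np.array([[1 + 1j, 2], [3j, 4 - 1j]])
d = np.linalg.det(A)

# The determinant of the conjugate is the conjugate of the determinant.
assert np.isclose(np.linalg.det(np.conj(A)), np.conj(d))

# The determinant of the adjoint (conjugate transpose) is also the
# conjugate of the determinant, since transposing a matrix does not
# change its determinant.
assert np.isclose(np.linalg.det(np.conj(A).T), np.conj(d))
```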

Subsubsection 11.4.2.4 Self-adjoint matrices

Here we'll record the pattern of Example 11.3.9, then apply the second formula of Proposition 11.4.3 to the case of self-adjoint matrices.

Suppose \(A\) is a complex, square, self-adjoint matrix. From Proposition 11.4.3, we have

\begin{equation*} \det \adjoint{A} = \lcconj{\det A}\text{.} \end{equation*}

But if \(A\) is self-adjoint, then by definition we have \(\adjoint{A} = A\text{,}\) and so

\begin{equation*} \det \adjoint{A} = \det A\text{.} \end{equation*}

Combining these two determinant formulas yields

\begin{equation*} \lcconj{\det A} = \det A \text{.} \end{equation*}

But only a real number is equal to its own complex conjugate (Statement 2 of Proposition A.2.12), and so we can conclude that the complex number \(\det A\) must actually be purely real.
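This conclusion can also be verified numerically; a Python/NumPy sketch with an invented self-adjoint matrix (not part of the text):

```python
import numpy as np

# An invented self-adjoint matrix: real diagonal entries, and
# off-diagonal entries that are conjugates of each other.
A = np.array([[2, 1 - 1j], [1 + 1j, 3]])

# Confirm that A really is self-adjoint.
assert np.allclose(A, np.conj(A).T)

# The determinant of a self-adjoint matrix is real.
d = np.linalg.det(A)
assert abs(d.imag) < 1e-12
```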