
Section 2.6 The great orthogonality theorem

We are now in a position to prove one of the most fundamental results in representation theory: the great orthogonality theorem. In essence, the theorem says that if we think of irreducible representations as being given by “vectors of matrices”, where the “vectors” are indexed by the group elements, then these vectors satisfy orthogonality relations. This is key to determining the possible irreducible representations of a given group.

This theorem follows from Schur's lemmas. Let \(T\) and \(S\) be inequivalent irreducible unitary representations of dimensions \(d\) and \(d'\) respectively. Consider an arbitrary \(d \times d'\) matrix \(X\text{,}\) and let

\begin{equation*} A = \sum_{g \in G} T(g) X S(g^{-1}) = \sum_{g \in G} T(g) X S^\dagger (g), \end{equation*}

where the second equality follows since \(S\) is unitary. Now multiply by \(T(h)\) on the left for any \(h \in G\text{:}\)

\begin{align*} T(h) A = \amp \sum_{g \in G} T(h) T(g) X S(g^{-1}) \\ = \amp \sum_{g \in G} T(h) T(g) X S(g^{-1}) S(h^{-1}) S(h)\\ = \amp \sum_{g \in G} T(h g) X S^\dagger(g) S^\dagger(h) S(h)\\ = \amp \sum_{g \in G} T(h g) X (S(h) S(g) )^\dagger S(h)\\ = \amp \left(\sum_{g \in G} T(h g) X S( (h g)^{-1} ) \right) S(h). \end{align*}

By the rearrangement theorem (Theorem 1.4.2), multiplying all \(g \in G\) by a fixed \(h \in G\) does not change the sum over \(G\text{;}\) it simply rearranges the order of the terms in the sum. Thus we get:

\begin{equation*} T(h) A = A S(h). \end{equation*}

But then, by Schur's first lemma (Lemma 2.5.1) and the fact that \(T\) and \(S\) are assumed to be inequivalent, we conclude that \(A = 0\text{.}\) Therefore

\begin{equation*} \sum_{g \in G} T(g) X S^\dagger (g) = 0. \end{equation*}

Now \(X\) can be chosen to be any matrix. In particular, we can choose it to be the matrix whose entries are all zero except for a single entry equal to \(1\) in row \(j\) and column \(k\text{.}\) Then we get:

\begin{equation*} \sum_{g \in G} T(g)_{ij} S^\dagger (g)_{k l} = 0, \end{equation*}

for all \(i,j,k,l\text{,}\) which is the first orthogonality relation.
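As an illustration (not part of the original text), we can verify this first orthogonality relation numerically. The example group and representations are our own choices: we take the symmetric group \(S_3\text{,}\) realized as the dihedral group of the triangle, with the one-dimensional sign representation \(S\) and the two-dimensional standard representation \(T\text{,}\) which are inequivalent irreducible unitary representations.

```python
import numpy as np

# Example (our choice): two inequivalent irreducible unitary reps of S3.
# T = the 2-dimensional "standard" rep, realized by rotations/reflections
# of the triangle; S = the 1-dimensional sign rep.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])          # rotation by 120 degrees
f = np.array([[1.0, 0.0], [0.0, -1.0]])  # a reflection

# The six group elements as 2x2 orthogonal matrices T(g), paired with
# their images S(g) = +/-1 under the sign representation.
elements = [(np.eye(2), 1), (r, 1), (r @ r, 1),
            (f, -1), (r @ f, -1), (r @ r @ f, -1)]

# First orthogonality relation: sum_g T(g)_{ij} * conj(S(g)) = 0
# for every entry (i, j).  S is real here, so conjugation is trivial.
total = sum(sgn * T for T, sgn in elements)
print(np.allclose(total, 0))
```

The sum vanishes entrywise, as the theorem predicts for inequivalent irreducible representations.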

For the second one, we start with the matrix

\begin{equation*} A = \sum_{g \in G} T(g) X T(g^{-1}) = \sum_{g \in G} T(g) X T^\dagger (g). \end{equation*}

Going through the same steps, we end up with the statement

\begin{equation*} T(h) A = A T(h). \end{equation*}

Schur's second lemma (Lemma 2.5.2) now implies that \(A = \lambda_X I\) for some complex number \(\lambda_X \in \mathbb{C}\text{.}\) We write \(\lambda_X\) to remind ourselves that the constant depends on the choice of the matrix \(X\) in the definition of \(A\text{.}\) Thus we have

\begin{equation*} \sum_{g \in G} T(g) X T(g^{-1}) = \lambda_X I. \end{equation*}

Next, we need to fix the constants \(\lambda_X\text{.}\) We take the trace on both sides of the equation. On the left-hand side, we get:

\begin{align*} \sum_{g \in G} \Tr \left( T(g) X T(g^{-1}) \right)= \amp \sum_{g \in G} \Tr \left( X T(g^{-1}) T(g) \right)\\ = \amp \sum_{g \in G} \Tr X\\ = \amp |G| \Tr X. \end{align*}

On the right-hand side, we get:

\begin{equation*} \lambda_X \Tr I = d \lambda_X, \end{equation*}

where \(d\) is the dimension of the representation. Therefore we conclude that

\begin{equation*} \lambda_X = \frac{|G|}{d} \Tr X, \end{equation*}

and the relation becomes

\begin{equation*} \sum_{g \in G} T(g) X T(g^{-1}) = I \frac{|G|}{d} \Tr X . \end{equation*}
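This relation can also be checked numerically for a random matrix \(X\text{.}\) The setup below is again our own illustrative choice: the two-dimensional standard representation of \(S_3\text{,}\) for which \(|G| = 6\) and \(d = 2\text{,}\) so the sum should equal \(3 \Tr X \cdot I\text{.}\)

```python
import numpy as np

# Example (our choice): the 2-dimensional irreducible rep of S3.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])
f = np.array([[1.0, 0.0], [0.0, -1.0]])
group = [np.eye(2), r, r @ r, f, r @ f, r @ r @ f]

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 2))  # an arbitrary matrix X

# These matrices are real orthogonal, so T(g^{-1}) = T(g)^T.
A = sum(T @ X @ T.T for T in group)

# The claimed value: (|G| / d) * Tr(X) * I, with |G| = 6 and d = 2.
expected = (len(group) / 2) * np.trace(X) * np.eye(2)
print(np.allclose(A, expected))
```

Any other choice of \(X\) gives the same agreement, reflecting that \(\lambda_X\) depends on \(X\) only through its trace.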

To conclude the proof, we choose the matrices \(X\) as above, with zero entries everywhere except in position \((j,k)\text{,}\) where the entry is \(1\text{.}\) In index notation, the \((i,l)\) entry of the left-hand side is \(\sum_{g \in G} T(g)_{ij} T^\dagger (g)_{k l}\text{,}\) while on the right-hand side the \((i,l)\) entry of \(I\) is \(\delta_{i l}\text{,}\) and \(\Tr X = \delta_{j k}\text{,}\) since the trace is non-zero only if the entry \(1\) in \(X\) is on the diagonal. Therefore, we obtain the second orthogonality relation:

\begin{equation*} \sum_{g \in G} T(g)_{i j} T^\dagger(g)_{k l} =\frac{|G|}{d} \delta_{i l} \delta_{j k}. \end{equation*}
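The second orthogonality relation can be verified entry by entry. As before, the concrete representation is our own example: the two-dimensional standard representation of \(S_3\text{,}\) for which \(|G|/d = 3\text{.}\)

```python
import numpy as np

# Example (our choice): the 2-dimensional irreducible rep of S3.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])
f = np.array([[1.0, 0.0], [0.0, -1.0]])
group = [np.eye(2), r, r @ r, f, r @ f, r @ r @ f]
G, d = len(group), 2  # |G| = 6, dimension d = 2

# Check sum_g T(g)_{ij} T^dagger(g)_{kl} = (|G|/d) delta_{il} delta_{jk}
# for all index combinations (i, j, k, l).
ok = True
for i in range(d):
    for j in range(d):
        for k in range(d):
            for l in range(d):
                lhs = sum(T[i, j] * T.conj().T[k, l] for T in group)
                rhs = (G / d) * (i == l) * (j == k)
                ok = ok and np.isclose(lhs, rhs)
print(ok)
```

All \(2^4 = 16\) index combinations match the predicted value.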

The great orthogonality theorem gives orthogonality relations between the matrices of the irreducible representations of any finite group \(G\text{.}\) These relations are very useful, as will become even clearer when we introduce characters. But we can already deduce an immediate consequence of the orthogonality theorem: a finite group has only finitely many inequivalent irreducible representations.

This is clear from the orthogonality theorem. We can think of irreducible representations as giving “vectors of matrices” \(\left( T(g)_{ij} \right)_{g \in G}\) in a vector space of dimension \(|G|\text{.}\) The theorem tells us that these vectors must be mutually orthogonal and non-zero. But there can be at most \(|G|\) mutually orthogonal non-zero vectors in a vector space of dimension \(|G|\text{,}\) and so the number of irreducible representations must be finite. In fact, we will calculate the number of irreducible representations of any finite group explicitly when we introduce characters.
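The "vectors of matrices" picture can be made concrete. Using our running example of \(S_3\) (again an illustration not in the original text), its three irreducible representations have dimensions \(1, 1, 2\text{,}\) giving \(1^2 + 1^2 + 2^2 = 6 = |G|\) vectors in a \(6\)-dimensional space; the orthogonality theorem says these are mutually orthogonal, which we can check by computing their Gram matrix.

```python
import numpy as np

# Example (our choice): all three irreducible reps of S3.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])
f = np.array([[1.0, 0.0], [0.0, -1.0]])
group = [np.eye(2), r, r @ r, f, r @ f, r @ r @ f]
signs = [1.0, 1.0, 1.0, -1.0, -1.0, -1.0]

# One |G|-dimensional vector per matrix entry of each irrep:
# trivial rep (1 vector), sign rep (1 vector), 2-dim rep (4 vectors).
vectors = [np.ones(6), np.array(signs)]
for i in range(2):
    for j in range(2):
        vectors.append(np.array([T[i, j] for T in group]))

# The Gram matrix of inner products should be diagonal:
# the 6 vectors are mutually orthogonal and non-zero.
gram = np.array([[u @ v for v in vectors] for u in vectors])
print(np.allclose(gram, np.diag(np.diag(gram))))
```

The diagonal entries are \(|G| = 6\) for the two one-dimensional representations and \(|G|/d = 3\) for the four vectors coming from the two-dimensional one, matching the theorem.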