
Section 5.2 Representations of Lie algebras and the adjoint representation

The irreducible representations of a Lie algebra are in one-to-one correspondence with the irreducible representations of its associated simply connected Lie group (the universal cover). Concretely, given a representation of a Lie algebra, one obtains the corresponding representation of the Lie group by exponentiation. Since representations of Lie algebras are easier to construct than those of the corresponding Lie groups, we now turn to the construction of irreducible representations of Lie algebras.

In this section we will define more precisely what representations of Lie algebras are, and then look at one important representation that exists for all Lie algebras: the adjoint representation.

Subsection 5.2.1 Definition

Recall that a Lie algebra \(\mathfrak{g}\) is a vector space with a bracket \([\cdot,\cdot]\) that satisfies a bunch of axioms. Suppose that \(\mathfrak{g}\) is \(n\)-dimensional, and pick a basis \(L_1,\ldots,L_n\) for \(\mathfrak{g}\text{.}\) Then we can write

\begin{equation} [L_i, L_j] = \sum_{k=1}^n c_{ijk} L_k,\label{equation-sc}\tag{5.2.1} \end{equation}

for some choice of structure constants \(c_{ijk}\text{.}\) Note that antisymmetry of the bracket implies \(c_{ijk} = - c_{jik}\text{.}\)

A representation of a Lie algebra is simply a mapping that represents the elements of \(\mathfrak{g}\) as matrices, with the bracket \([\cdot,\cdot]\) realized as commutator of matrices. Concretely, one maps the basis \(L_1,\ldots,L_n\) to \(n\) matrices that satisfy the commutation relations (5.2.1). As usual, if the matrices are \(d \times d\text{,}\) we call \(d\) the dimension of the representation.
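To make this concrete, given an explicit basis of matrices one can recover the structure constants by expanding each commutator in the basis, that is, by solving the linear system (5.2.1). The sketch below does this with numpy; the helper `structure_constants` is ours, and we use the \(\mathfrak{su}(2)\) basis \(T_i = \frac{1}{2}\sigma_i\) that appears below in (5.2.2):

```python
import numpy as np

# Basis of su(2) in its 2-dimensional representation: T_i = sigma_i / 2
T = [np.array([[0, 1], [1, 0]], dtype=complex) / 2,
     np.array([[0, -1j], [1j, 0]], dtype=complex) / 2,
     np.array([[1, 0], [0, -1]], dtype=complex) / 2]

def structure_constants(basis):
    """Solve [L_i, L_j] = sum_k c_{ijk} L_k for c by expanding each
    commutator in the given basis (least squares on flattened matrices)."""
    n = len(basis)
    # Columns of A are the flattened basis matrices
    A = np.column_stack([B.flatten() for B in basis])
    c = np.zeros((n, n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            comm = basis[i] @ basis[j] - basis[j] @ basis[i]
            c[i, j], *_ = np.linalg.lstsq(A, comm.flatten(), rcond=None)
    return c

c = structure_constants(T)

# Levi-Civita symbol for comparison: we expect c_{ijk} = i * eps_{ijk}
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1
print(np.allclose(c, 1j * eps))  # True
```

The recovered constants are \(c_{ijk} = i \epsilon_{ijk}\text{,}\) matching the \(\mathfrak{su}(2)\) commutation relations quoted later in this section.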

More formally:

Definition 5.2.1. Representation of a Lie algebra.

Let \(\mathfrak{g}\) be a Lie algebra, and let \(V\) be a vector space. Let \(\mathfrak{gl}(V)\) be the Lie algebra of all linear endomorphisms (i.e. linear maps \(V \to V\)) with bracket given by commutator of matrices \([X,Y] = X Y - Y X\text{.}\) A representation \(\Gamma\) of \(\mathfrak{g}\) is a mapping

\begin{equation*} \Gamma: \mathfrak{g} \to \mathfrak{gl}(V) \end{equation*}

that is compatible with the brackets (this is called a Lie algebra homomorphism):

\begin{equation*} \Gamma([x,y]) = [\Gamma(x), \Gamma(y)] := \Gamma(x) \Gamma(y) - \Gamma(y) \Gamma(x), \end{equation*}

for all \(x,y \in \mathfrak{g}\text{.}\) The dimension of the representation is the dimension of \(V\text{.}\)

As for groups, we can talk about irreducible representations, which are representations such that \(V\) has no proper invariant subspace, that is, no proper subrepresentation.

Remark 5.2.2.

When we define representations of groups, we map group elements to linear operators in \(GL(V)\text{,}\) that is, invertible matrices. Invertibility is required because of the group properties, since we map the group operation to matrix multiplication. For Lie algebras, we map elements of the vector space to linear operators in \(\mathfrak{gl}(V)\text{,}\) i.e. matrices that are not necessarily invertible. This is an important distinction.

We have already seen examples of Lie algebra representations.

As for groups, the trivial representation \(\Gamma\) exists for any Lie algebra. It is defined by sending all elements of \(\mathfrak{g}\) to the zero endomorphism \(0\) of a one-dimensional vector space \(V\text{,}\) which maps all elements of \(V\) to the origin. Indeed, if \(\Gamma(L_i)=0\) for \(i=1,\ldots,n\text{,}\) where \(L_1,\ldots,L_n\) is a basis for \(\mathfrak{g}\text{,}\) then \(\Gamma\) certainly preserves the bracket, since both sides of

\begin{equation*} [\Gamma(L_i), \Gamma(L_j)] = \sum_{k=1}^n c_{ijk} \Gamma(L_k) \end{equation*}

vanish, for any choice of structure constants.

We constructed the Lie algebras \(\mathfrak{su}(2) \cong \mathfrak{so}(3)\) by looking at the infinitesimal generators of \(SU(2)\) and \(SO(3)\) in their fundamental, or defining, representations. This gives rise to a corresponding representation of the Lie algebras.

For \(SU(2)\text{,}\) we started with the \(2\)-dimensional representation of \(SU(2)\) consisting of \(2 \times 2\) special unitary matrices. We found in (4.4.1) that the generators are given by half the Pauli matrices:

\begin{equation} T_1 = \frac{1}{2} \begin{pmatrix} 0 \amp 1 \\ 1 \amp 0 \end{pmatrix},\qquad T_2 = \frac{1}{2} \begin{pmatrix} 0 \amp -i \\ i \amp 0 \end{pmatrix},\qquad T_3 = \frac{1}{2} \begin{pmatrix} 1 \amp 0 \\ 0 \amp -1 \end{pmatrix}.\label{equation-pauli-bis}\tag{5.2.2} \end{equation}

Those satisfy the commutation relations

\begin{equation*} [T_i,T_j] = i \sum_{k=1}^3 \epsilon_{ijk} T_k, \end{equation*}

which defines the three-dimensional Lie algebra \(\mathfrak{su}(2)\text{.}\) More precisely, the matrices (5.2.2) furnish a two-dimensional representation of the Lie algebra \(\mathfrak{su}(2)\text{,}\) since they map the generators to \(2 \times 2\) matrices that obey the appropriate commutation relations.

In the case of \(SO(3)\text{,}\) we started with the \(3\)-dimensional representation of \(SO(3)\) consisting of \(3 \times 3\) special orthogonal matrices. We found in (4.2.4) that the infinitesimal generators are:

\begin{equation} L_1 = -i \begin{pmatrix} 0 \amp 0 \amp 0\\ 0 \amp 0 \amp 1 \\ 0 \amp -1 \amp 0 \end{pmatrix}, \qquad L_2 = -i \begin{pmatrix} 0 \amp 0 \amp -1\\ 0 \amp 0 \amp 0 \\ 1 \amp 0 \amp 0 \end{pmatrix}, \qquad L_3 = -i \begin{pmatrix} 0 \amp 1 \amp 0\\ -1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \end{pmatrix}.\label{equation-so3-generators}\tag{5.2.3} \end{equation}

Those also satisfy the commutation relations:

\begin{equation*} [L_i, L_j] = i \sum_{k=1}^3 \epsilon_{ijk} L_k. \end{equation*}

Thus (5.2.3) defines another representation of the same Lie algebra \(\mathfrak{su}(2) \cong \mathfrak{so}(3)\text{,}\) this time a three-dimensional one.
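Both computations can be checked numerically. The following sketch, using numpy, verifies that the matrices (5.2.2) and (5.2.3) satisfy the same commutation relations \([X_i, X_j] = i \sum_k \epsilon_{ijk} X_k\text{:}\)

```python
import numpy as np

# Levi-Civita symbol (indices 0, 1, 2)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1

# su(2) generators (5.2.2): half the Pauli matrices
T = [np.array(m, dtype=complex) / 2 for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

# so(3) generators (5.2.3)
L = [-1j * np.array(m, dtype=complex) for m in
     ([[0, 0, 0], [0, 0, 1], [0, -1, 0]],
      [[0, 0, -1], [0, 0, 0], [1, 0, 0]],
      [[0, 1, 0], [-1, 0, 0], [0, 0, 0]])]

def satisfies_su2(X):
    """Check [X_i, X_j] = i sum_k eps_{ijk} X_k for all i, j."""
    return all(np.allclose(X[i] @ X[j] - X[j] @ X[i],
                           1j * sum(eps[i, j, k] * X[k] for k in range(3)))
               for i in range(3) for j in range(3))

print(satisfies_su2(T), satisfies_su2(L))  # True True
```

The two sets of matrices have different sizes, but realize the same abstract bracket: two representations of one Lie algebra.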

Subsection 5.2.2 The adjoint representation

Before we move on to the study of irreducible representations of the Lie algebra \(\mathfrak{su}(2)\text{,}\) let us define an important representation that exists for all Lie algebras \(\mathfrak{g}\text{,}\) which is called the adjoint representation.

Definition 5.2.3. The adjoint representation.

Let \(\mathfrak{g}\) be an \(n\)-dimensional Lie algebra with basis \(L_1,\ldots,L_n\) and structure constants \(c_{ijk}\) as in (5.2.1). The adjoint representation of \(\mathfrak{g}\) is the \(n\)-dimensional representation defined by mapping the basis elements \(L_i\) to the \(n \times n\) matrices \(\Gamma_i\) with components

\begin{equation*} (\Gamma_i)_{jk} = - c_{ijk}. \end{equation*}

To show that this is indeed a representation, we need to check that the mapping respects the bracket, that is, that the matrices \(\Gamma_i\) satisfy the commutation relations

\begin{equation*} [\Gamma_i, \Gamma_j] = \sum_{k=1}^n c_{ijk} \Gamma_k. \end{equation*}

Well, it turns out that this is a direct consequence of the Jacobi identity. Recall that for a Lie algebra \(\mathfrak{g}\text{,}\) the bracket \([\cdot,\cdot]\) satisfies the Jacobi identity

\begin{equation*} [x,[y,z]]+[y,[z,x]]+[z,[x,y]] = 0, \end{equation*}

for all \(x,y,z \in \mathfrak{g}\text{.}\) In particular, if we pick \(x=L_i, y=L_j, z=L_k,\) we get

\begin{equation} [L_i, [L_j,L_k]] + [L_j, [L_k,L_i]] + [L_k,[L_i,L_j]] = 0.\label{equation-jacobi-basis}\tag{5.2.4} \end{equation}

But by (5.2.1), each inner bracket can be expanded in the basis; for instance, \([L_j, L_k] = \sum_{a=1}^n c_{jka} L_a\text{.}\) Thus we can rewrite (5.2.4) as

\begin{equation*} \sum_{a=1}^n \left(c_{jk a} [L_i,L_a] + c_{kia} [L_j,L_a] + c_{ija} [L_k, L_a] \right) = 0. \end{equation*}

Expanding the remaining brackets using (5.2.1) one more time, we get:

\begin{equation*} \sum_{a=1}^n \sum_{b=1}^n \left( c_{jka} c_{iab} + c_{kia} c_{jab} + c_{ija} c_{kab} \right) L_b = 0. \end{equation*}

Since the \(L_b\) are linearly independent, this means that the coefficients of the linear combination must be identically zero, so we obtain the following condition on the structure constants:

\begin{equation*} \sum_{a=1}^n \left( c_{jka} c_{iab} + c_{kia} c_{jab} + c_{ija} c_{kab} \right) = 0. \end{equation*}

Recall that antisymmetry of the bracket implies \(c_{ijk} = - c_{jik}\text{.}\) Using this to rewrite \(c_{kia} = - c_{ika}\) and \(c_{kab} = - c_{akb}\text{,}\) and rearranging, we obtain:

\begin{equation*} \sum_{a=1}^n \left( c_{ika} c_{jab} - c_{jka} c_{iab} \right) = - \sum_{a=1}^n c_{ija} c_{akb}. \end{equation*}

Now if we use the definition of the matrices of the adjoint representation \((\Gamma_i)_{jk} =- c_{ijk}\text{,}\) we can rewrite this equation as

\begin{equation*} \sum_{a=1}^n \left((\Gamma_i)_{ka} (\Gamma_j)_{a b} - (\Gamma_j)_{ka} (\Gamma_i)_{a b} \right) = \sum_{a=1}^n c_{ija} (\Gamma_a)_{k b}. \end{equation*}

This is an equation for the \((k,b)\) component of a matrix, where the sum over \(a\) implements matrix multiplication. Thus it can be rewritten in matrix form as the equation

\begin{equation*} \Gamma_i \Gamma_j - \Gamma_j \Gamma_i = \sum_{a=1}^n c_{ija} \Gamma_a, \end{equation*}

which is precisely the commutation relation of the Lie algebra.
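The derivation above can be tested numerically in the case of \(\mathfrak{su}(2)\text{,}\) whose structure constants are \(c_{ijk} = i \epsilon_{ijk}\text{.}\) The sketch below, using numpy, verifies the Jacobi constraint on the structure constants and then checks that the matrices \((\Gamma_i)_{jk} = -c_{ijk}\) satisfy the commutation relations of the Lie algebra:

```python
import numpy as np

# Structure constants of su(2): c_{ijk} = i * eps_{ijk} (indices 0, 1, 2)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1
c = 1j * eps

# Jacobi constraint: sum_a (c_{jka} c_{iab} + c_{kia} c_{jab} + c_{ija} c_{kab}) = 0
jacobi = (np.einsum('jka,iab->ijkb', c, c)
          + np.einsum('kia,jab->ijkb', c, c)
          + np.einsum('ija,kab->ijkb', c, c))
print(np.allclose(jacobi, 0))  # True

# Adjoint representation: (Gamma_i)_{jk} = -c_{ijk}
Gamma = [-c[i] for i in range(3)]

# Commutation relations: [Gamma_i, Gamma_j] = sum_a c_{ija} Gamma_a
ok = all(np.allclose(Gamma[i] @ Gamma[j] - Gamma[j] @ Gamma[i],
                     sum(c[i, j, a] * Gamma[a] for a in range(3)))
         for i in range(3) for j in range(3))
print(ok)  # True
```

The check mirrors the proof: the Jacobi constraint on the \(c_{ijk}\) is exactly what makes the adjoint matrices close under the commutator.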

We can define the adjoint representation a little more abstractly, for those who like this stuff. The adjoint representation is a representation that maps elements of the vector space \(\mathfrak{g}\) to linear operators on the vector space \(\mathfrak{g}\) itself. That is, we think of elements of the Lie algebra as operators acting on the algebra itself. The formal definition is the following:

Definition 5.2.6. The adjoint representation (bis).

Let \(\mathfrak{g}\) be a Lie algebra, and let \(\mathfrak{gl}(\mathfrak{g})\) be the Lie algebra of linear endomorphisms of \(\mathfrak{g}\text{.}\) The adjoint representation is defined as the mapping

\begin{align*} ad:\amp \mathfrak{g} \to \mathfrak{gl}(\mathfrak{g})\\ \amp x \mapsto ad_x:= [ x,\cdot], \end{align*}

where \(ad_x= [x,\cdot]\) is the operator on \(\mathfrak{g}\) defined by

\begin{align*} ad_x:\amp \mathfrak{g} \to \mathfrak{g}\\ \amp y \mapsto [x,y]. \end{align*}

In this more abstract formulation, we send an element \(x \in \mathfrak{g}\) to the operator that acts on \(\mathfrak{g}\) by evaluating the bracket of any \(y \in \mathfrak{g}\) with the fixed \(x \in \mathfrak{g}\text{.}\) It may not be obvious at first that this is equivalent to the definition in terms of structure constants, but it is. We will leave the comparison as an exercise.

Checkpoint 5.2.7.

Show that the abstract definition of the adjoint representation is equivalent to the definition in terms of structure constants, by calculating the components of the matrices corresponding to the linear operators \(ad_{L_i}\text{.}\)
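For \(\mathfrak{su}(2)\text{,}\) the comparison can at least be checked numerically (a single example, not the general calculation asked for above). The sketch below, using numpy, computes the matrix of each \(ad_{T_i}\) by expanding \([T_i, T_j]\) in the basis \(T_1, T_2, T_3\text{,}\) and compares it with the structure-constant matrices \((\Gamma_i)_{jk} = -c_{ijk} = -i\epsilon_{ijk}\text{:}\)

```python
import numpy as np

# su(2) in its 2-dimensional representation: T_i = sigma_i / 2
T = [np.array(m, dtype=complex) / 2 for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

# Columns of A are the flattened basis matrices, used to expand commutators
A = np.column_stack([t.flatten() for t in T])

def ad_matrix(i):
    """Matrix of ad_{T_i} in the basis T_1, T_2, T_3: column j holds the
    components of [T_i, T_j] expanded in the basis."""
    cols = []
    for j in range(3):
        comm = T[i] @ T[j] - T[j] @ T[i]
        coeffs, *_ = np.linalg.lstsq(A, comm.flatten(), rcond=None)
        cols.append(coeffs)
    return np.column_stack(cols)

# For su(2), c_{ijk} = i * eps_{ijk} is totally antisymmetric, and the
# matrices of ad_{T_i} coincide with (Gamma_i)_{jk} = -i * eps_{ijk}
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1
print(all(np.allclose(ad_matrix(i), -1j * eps[i]) for i in range(3)))  # True
```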

Let us now construct the adjoint representation of \(\mathfrak{su}(2)\text{.}\) We recall the commutation relations

\begin{equation*} [T_i,T_j] = i \sum_{k=1}^3 \epsilon_{ijk} T_k. \end{equation*}

The structure constants are thus \(c_{ijk} = i \epsilon_{ijk}\text{.}\) We construct the adjoint representation by defining the three matrices \(\Gamma_i\text{,}\) \(i=1,2,3\) with components given by

\begin{equation*} (\Gamma_i)_{jk} = -i \epsilon_{ijk}. \end{equation*}

Recalling the definition of the Levi-Civita symbol, we see that these matrices take the form

\begin{align*} \Gamma_1 = \amp -i \begin{pmatrix} 0 \amp 0 \amp 0\\ 0 \amp 0 \amp 1 \\ 0 \amp -1 \amp 0 \end{pmatrix},\\ \Gamma_2 =\amp -i \begin{pmatrix} 0 \amp 0 \amp -1\\ 0 \amp 0 \amp 0 \\ 1 \amp 0 \amp 0 \end{pmatrix},\\ \Gamma_3 =\amp -i \begin{pmatrix} 0 \amp 1 \amp 0\\ -1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \end{pmatrix}. \end{align*}

Those are precisely the generators (5.2.3) of \(SO(3)\text{!}\) Therefore, the adjoint representation of \(\mathfrak{su}(2) \cong \mathfrak{so}(3)\) reproduces the infinitesimal generators of \(SO(3)\text{,}\) and, after exponentiation, the defining representation of \(SO(3)\text{.}\) Note however that this is particular to \(SO(3)\text{:}\) in general, the adjoint representation of a Lie algebra does not give the defining representation of its associated Lie group. For instance, for \(SO(n)\text{,}\) the defining representation is \(n\)-dimensional (consisting of \(n \times n\) rotation matrices), while the adjoint representation of the Lie algebra \(\mathfrak{so}(n)\) is \(\frac{1}{2}n(n-1)\)-dimensional (the dimension of the Lie algebra). These dimensions match only when \(n=3\text{.}\)
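This identification can be confirmed directly. The sketch below, using numpy, builds the adjoint matrices \((\Gamma_i)_{jk} = -i\epsilon_{ijk}\) and compares them entrywise with the \(SO(3)\) generators (5.2.3):

```python
import numpy as np

# Levi-Civita symbol (indices 0, 1, 2)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1

# Adjoint representation of su(2): (Gamma_i)_{jk} = -i * eps_{ijk}
Gamma = [-1j * eps[i] for i in range(3)]

# SO(3) generators from (5.2.3)
L = [-1j * np.array(m, dtype=complex) for m in
     ([[0, 0, 0], [0, 0, 1], [0, -1, 0]],
      [[0, 0, -1], [0, 0, 0], [1, 0, 0]],
      [[0, 1, 0], [-1, 0, 0], [0, 0, 0]])]

print(all(np.allclose(Gamma[i], L[i]) for i in range(3)))  # True
```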