Section 4.3 Lie algebras

In this section we generalize the discussion about two-dimensional and three-dimensional rotations to more general Lie groups. We will try to come up with an abstract definition for the algebra of generators of a Lie group.

We will proceed in three steps. First, we will study how to extract the Lie algebra corresponding to a given matrix Lie group. Second, we will define the abstract notion of a Lie algebra. Third, we will show how we can recover the group structure corresponding to a Lie algebra through exponentiation.

Subsection 4.3.1 The Lie algebra of a Lie group

Let us first describe how, given a matrix Lie group, one can extract the corresponding Lie algebra. The discussion here is very similar to what was done for two-dimensional and three-dimensional rotations in Section 4.2.

Geometrically, the Lie algebra is the linearization of the Lie group at the origin. In other words, we replace the group manifold locally near the origin by its tangent space. Since we are working with matrix Lie groups, we can calculate this linearization explicitly by expanding the matrices of the fundamental representation near the identity, and keeping only terms of first order. Let us be a little more precise.

We start with a matrix Lie group. That is, we start with its defining representation, so we can think of the group elements as matrices \(A\) that depend on \(n\) real parameters. We want to construct the associated Lie algebra. We proceed in three steps.

  1. We expand the group elements near the identity, to first order:
    \begin{equation*} A = I + i L. \end{equation*}
    We determine the properties of the matrices \(L\) by imposing the appropriate condition on \(A\text{.}\) For instance, for \(SO(n)\text{,}\) imposing that \(A\) is a real orthogonal matrix constrains \(L\) to be a purely imaginary Hermitian matrix.
  2. We find a basis \(L_1,\ldots,L_n\) for the \(n\)-dimensional vector space of matrices \(L\) satisfying the appropriate constraint. We call the \(L_i\) the infinitesimal generators of the Lie group. We write \(L\) as a linear combination of the \(L_i\text{:}\)
    \begin{equation*} L = \sum_{i=1}^n \theta_i L_i. \end{equation*}
  3. For any two group elements \(A, A'\) with first order expansions \(A =I + i L\text{,}\) \(A' = I + i L'\text{,}\) the commutator of matrices \([L,L']\) encodes whether the group elements commute. Since \(L\) and \(L'\) are linear combinations of the generators \(L_i\text{,}\) to know all such commutators we only need to calculate the commutators of the generators \(L_i\text{.}\) Because of the group structure we know that the commutator closes; that is, the commutator of two generators will itself be a linear combination of the generators. So we can write:
    \begin{equation} [L_i, L_j ] = \sum_{k=1}^n c_{ijk} L_k\label{equation-structure-constants}\tag{4.3.1} \end{equation}
    for some structure constants \(c_{ijk}\text{.}\) We calculate these structure constants from the form of the generators of the group.
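To make step 1 concrete for \(SO(n)\text{,}\) the constraint on \(L\) quoted above follows from a short computation, which we sketch here. Imposing orthogonality \(A^T A = I\) on \(A = I + iL\) to first order gives

\begin{equation*} I = (I + i L)^T (I + i L) = I + i (L^T + L) + O(L^2), \end{equation*}

so \(L^T = -L\text{.}\) Imposing that \(A\) is real forces \(iL\) to be real, that is, \(L\) is purely imaginary. A purely imaginary antisymmetric matrix satisfies \(L^\dagger = (L^T)^* = (-L)^* = L\text{,}\) so \(L\) is indeed purely imaginary and Hermitian.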

This determines the Lie algebra associated to a Lie group. It is the vector space \(V\) of real linear combinations of the generators \(L_i\text{,}\) with a binary operation \([\cdot, \cdot]: V \times V \to V \) specified by (4.3.1).
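As a concrete illustration, here is a small numerical sketch (in Python with numpy; we assume the \(3\times 3\) generators \((L_k)_{ab} = -i\,\epsilon_{abk}\) of \(SO(3)\text{,}\) which are purely imaginary and Hermitian as above) that reads off the structure constants by projecting \([L_i, L_j]\) back onto the basis. It recovers \(c_{ijk} = i\,\epsilon_{ijk}\text{:}\)

```python
import numpy as np

# Levi-Civita symbol epsilon_{ijk}.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

# Generators of so(3) in the convention A = I + i sum_k theta_k L_k:
# (L_k)_{ab} = -i * epsilon_{abk}, purely imaginary and Hermitian.
L = [-1j * eps[:, :, k] for k in range(3)]

def bracket(X, Y):
    return X @ Y - Y @ X

# Project [L_i, L_j] onto the basis to read off c_ijk; this uses
# Tr(L_i L_j^dagger) = 2 delta_ij, which holds for these generators.
for i in range(3):
    for j in range(3):
        for k in range(3):
            c = np.trace(bracket(L[i], L[j]) @ L[k].conj().T) / 2
            if abs(c) > 1e-12:
                print(f"c_{i+1}{j+1}{k+1} = {c}")
```

The only nonzero constants are \(c_{ijk} = \pm i\) for \(i,j,k\) all distinct, matching \([L_i, L_j] = i \epsilon_{ijk} L_k\text{.}\)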

Remark 4.3.1.

Note that in this approach, it is crucial that we start with a matrix representation of the group. This allows us to do an expansion near the identity matrix, and to determine the bracket \([\cdot, \cdot]\) by evaluating the commutator of the matrices corresponding to the infinitesimal generators. But in the end, we end up with an algebra that can be defined abstractly in terms of the generators and the relation (4.3.1). It does not depend on the matrix representation anymore. And in fact, had we started with a different matrix representation of the Lie group, we would have obtained a different matrix representation of the generators \(L_i\text{,}\) but the resulting algebra would have been the same, with the same relation (4.3.1).
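This representation independence is easy to check in an example. The \(2\times 2\) spin-\(\tfrac{1}{2}\) matrices \(\sigma_k/2\) and the \(3\times 3\) rotation generators \((L_k)_{ab} = -i\,\epsilon_{abk}\) are matrices of different sizes, yet both satisfy \([L_1, L_2] = i L_3\) (and cyclic permutations), hence the same structure constants. A quick sketch in Python with numpy:

```python
import numpy as np

def comm(X, Y):
    return X @ Y - Y @ X

# 2x2 representation: spin-1/2 generators sigma_k / 2.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
J2 = [s / 2 for s in sigma]

# 3x3 representation: rotation generators (L_k)_{ab} = -i * eps_{abk}.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0
J3 = [-1j * eps[:, :, k] for k in range(3)]

# Both satisfy the same relation [L_1, L_2] = i L_3, hence define
# the same abstract Lie algebra.
print(np.allclose(comm(J2[0], J2[1]), 1j * J2[2]))  # True
print(np.allclose(comm(J3[0], J3[1]), 1j * J3[2]))  # True
```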

Subsection 4.3.2 Abstract definition of a Lie algebra

In the previous section we have seen how we can determine the linearization of a matrix Lie group at the origin (that is, the associated Lie algebra). We obtained a vector space spanned by the infinitesimal generators of the Lie group, equipped with a bracket operation. We calculated the Lie algebra associated to a Lie group using a specific representation of the group (usually the defining representation). But the end result is an algebraic structure that can be defined entirely abstractly. Let us now define the abstract concept of a Lie algebra.

Definition 4.3.2. Lie algebra.

A Lie algebra is a vector space \(\mathfrak{g}\) over some field \(F\) (generally taken to be the real numbers in this class), together with a binary operation \([\cdot, \cdot] : \mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}\text{,}\) called the Lie bracket, that satisfies the following axioms:

  1. Bilinearity:
    \begin{gather*} [a x+b y, z] = a [x,z] + b [y,z],\\ [z, ax + by] = a [z,x] + b[z,y], \end{gather*}
    for all \(a,b \in F\) and all \(x,y,z \in \mathfrak{g}\text{.}\)
  2. Antisymmetry:
    \begin{equation*} [x,y] = - [y,x], \end{equation*}
    for all \(x,y \in \mathfrak{g}.\)
  3. The Jacobi identity:
    \begin{equation*} [x,[y,z]] + [z,[x,y]] + [y,[z,x]] = 0, \end{equation*}
    for all \(x,y,z \in \mathfrak{g}.\)

This is the abstract definition of a Lie algebra. Concretely, in this course the Lie algebras that we will study will be obtained as linearizations of matrix Lie groups. That is, they will be constructed as in Subsection 4.3.1, starting from a matrix representation of a Lie group. Then the vector space \(\mathfrak{g}\) is constructed as real linear combinations of the infinitesimal generators of the group, and the Lie bracket is realized explicitly as a commutator of the matrix representation for these generators. We know that the Lie bracket closes, that is, for any two \(x,y \in \mathfrak{g}\text{,}\) \([x,y] \in \mathfrak{g}\text{,}\) as required for a Lie algebra.

In this context, the three axioms in the definition of a Lie algebra are clear. The commutator of matrices is certainly bilinear and antisymmetric, by definition. As for the Jacobi identity, one can check that it is satisfied by the commutator \([A,B] = A B - B A\) of any three matrices \(A,B,C\text{.}\)

Checkpoint 4.3.3.

Check that for any three matrices \(A,B,C\text{,}\) the commutator \([A,B] = A B - B A\) satisfies the Jacobi identity \([A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0\text{.}\)
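The identity above can also be spot-checked numerically for random matrices (a sketch with numpy; this does not replace the algebraic proof, which follows by expanding the twelve triple products and cancelling them in pairs):

```python
import numpy as np

rng = np.random.default_rng(0)

def comm(X, Y):
    return X @ Y - Y @ X

# Three arbitrary (random) matrices; the Jacobi identity holds for the
# commutator of *any* matrices, not just Lie algebra generators.
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))
jac = comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B))
print(np.allclose(jac, 0))  # True
```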

Thus the procedure of Subsection 4.3.1 always produces a Lie algebra. The structure of the algebra is encoded in the structure constants as in (4.3.1).

Subsection 4.3.3 Recovering a Lie group from a Lie algebra

Now suppose that you are given a Lie algebra \(\mathfrak{g}\text{.}\) How can you reconstruct the group structure from the Lie algebra?

The procedure is simple: exponentiation. Start with the vector space \(\mathfrak{g}\text{.}\) We assume that there is a matrix representation for the Lie algebra, so that we can represent elements of the vector space as matrices. Then:

  1. We pick a basis \(L_i\text{,}\) \(i=1,\ldots,n\) for \(\mathfrak{g}\text{.}\) Using the matrix representation we can think of the elements \(L_i\) as matrices. A general element of \(\mathfrak{g}\) can be written as a linear combination \(\sum_{i=1}^n t_i L_i\text{.}\)
  2. We construct a matrix representation for the Lie group by exponentiation:
    \begin{equation*} g(\vec{t}) = e^{i \sum_{i=1}^n t_i L_i}. \end{equation*}
    The matrices \(L_i\) form a representation of the infinitesimal generators of the Lie group.

The claim is that the matrices \(g(\vec{t})\) thus constructed form a group under matrix multiplication. Associativity is clear. We need to check that the identity matrix is included, that inverses exist, and that matrix multiplication closes.

Exponentiation maps the origin of the vector space \(\vec{0} \in \mathfrak{g}\) to the matrix

\begin{equation*} g(\vec{0}) = e^{0} = I, \end{equation*}

which is the identity matrix. So this is good.

As for inverses, for any \(\sum_i t_i L_i \in \mathfrak{g}\text{,}\) \(-\sum_i t_i L_i\) is also in \(\mathfrak{g}\text{,}\) and by exponentiation those are mapped to

\begin{equation*} g(\vec{t}) g(- \vec{t}) = e^{i \sum_{i=1}^n t_i L_i}e^{ - i \sum_{i=1}^n t_i L_i} = I. \end{equation*}

This follows because for any matrix \(A\text{,}\) we have \(e^A e^{-A} = I\text{,}\) which can be checked by expanding the Taylor series for the exponential.
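As a concrete sketch (in Python with numpy and scipy's matrix exponential `expm`; we take the \(\mathfrak{so}(3)\) generator \(L_3\) of rotations about the \(z\)-axis), exponentiation indeed lands in the group, and \(g(\vec{t})\,g(-\vec{t}) = I\text{:}\)

```python
import numpy as np
from scipy.linalg import expm

# so(3) generator of rotations about the z-axis: (L_3)_{ab} = -i * eps_{ab3}.
L3 = np.array([[0, -1j, 0],
               [1j,  0, 0],
               [0,   0, 0]])

theta = 0.7
g = expm(1j * theta * L3)   # g(theta) = exp(i theta L_3)

# g is (up to roundoff) a real orthogonal matrix, i.e. an element of SO(3),
# and g(theta) g(-theta) = I.
print(np.allclose(g.T @ g, np.eye(3)))                      # True
print(np.allclose(g @ expm(-1j * theta * L3), np.eye(3)))   # True
```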

The tricky statement is whether the group multiplication closes. In other words, we need to show that for any two \(\sum_i t_i L_i\) and \(\sum_i s_i L_i\) in \(\mathfrak{g}\text{,}\) there exists a \(\sum_i u_i L_i \in \mathfrak{g}\) such that

\begin{equation*} g(\vec{t}) g(\vec{s}) = g(\vec{u}). \end{equation*}

Equivalently, for any two \(A,B \in \mathfrak{g}\text{,}\) we want to find a \(C \in \mathfrak{g}\) such that

\begin{equation*} e^A e^B = e^C. \end{equation*}

To show such a thing, we would like to construct \(C\) explicitly from \(A\) and \(B\text{.}\) For \(C\) to be in \(\mathfrak{g}\text{,}\) we should be able to write it as a linear combination of \(A\text{,}\) \(B\text{,}\) and iterated commutators of those.

First, if \(A\) and \(B\) commute, then it is straightforward to show that

\begin{equation*} e^A e^B = e^{A+B}. \end{equation*}

Thus, in this case, the statement is certainly true, since \(C = A + B\text{,}\) and exponentiation produces an (abelian) group.

Checkpoint 4.3.4.

Prove that \(e^A e^B = e^{A+B}\) for any two commuting matrices \(A\) and \(B\text{.}\)
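The commuting case is easy to spot-check numerically with diagonal matrices, which always commute (a sketch assuming numpy and scipy are available):

```python
import numpy as np
from scipy.linalg import expm

# Two diagonal (hence commuting) matrices.
A = np.diag([0.3, -1.0, 2.0])
B = np.diag([1.5, 0.2, -0.7])

print(np.allclose(A @ B, B @ A))                    # True: they commute
print(np.allclose(expm(A) @ expm(B), expm(A + B)))  # True: e^A e^B = e^{A+B}
```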

However, \(e^A e^B\) is no longer equal to \(e^{A+B}\) when \(A\) and \(B\) do not commute. If we define \(C\) by \(e^C = e^A e^B\text{,}\) expand the formal series on both sides of the equality, and determine \(C\) order by order in \(A\) and \(B\text{,}\) we obtain the expansion:

\begin{equation*} C = A + B + \frac{1}{2}[A,B] + \frac{1}{12} [A, [A,B]] - \frac{1}{12} [B,[A,B]] + \ldots, \end{equation*}

where \(\ldots\) stands for an infinite formal series of terms that are expressed as linear combinations of commutators of \(A\) and \(B\text{.}\) This is essentially the Baker-Campbell-Hausdorff formula. The key point here is that \(C\) is a formal series of terms in \(\mathfrak{g}\text{.}\) And if \(A\) and \(B\) are close enough to the identity (this can be stated formally), then the formal series converges to a Lie algebra element \(C \in \mathfrak{g}\text{.}\) Thus exponentiation constructs a group, at least locally near the identity.
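This can be tested numerically (a sketch with numpy and scipy; the matrices are scaled small so that the series converges quickly, and scipy's `logm` extracts the exact \(C\) with \(e^C = e^A e^B\)):

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(1)

def comm(X, Y):
    return X @ Y - Y @ X

# Two small non-commuting matrices (small so the series converges fast).
A = 0.1 * rng.standard_normal((3, 3))
B = 0.1 * rng.standard_normal((3, 3))

C = logm(expm(A) @ expm(B))   # the exact C with e^C = e^A e^B

# e^{A+B} is NOT e^A e^B when [A,B] != 0 ...
print(np.allclose(C, A + B))  # False

# ... but the first few Baker-Campbell-Hausdorff terms close most of the gap.
bch = A + B + comm(A, B) / 2 + comm(A, comm(A, B)) / 12 - comm(B, comm(A, B)) / 12
print(np.linalg.norm(C - bch) < np.linalg.norm(C - (A + B)))  # True
```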

To summarize, from a Lie algebra, one can recover the local behaviour of a Lie group through exponentiation. Nice!