
Section 5.3 The highest weight construction for \(\mathfrak{su}(2)\)

Let us now do a systematic construction of irreducible representations of the Lie algebra \(\mathfrak{su}(2)\) using the highest weight construction. This approach is rather general and can be suitably generalized to construct irreducible representations of Lie algebras in general.

The idea is to construct representations by looking at how they act on vector spaces. In other words, we will construct first the space on which a representation acts, and from there extract the matrices of the representation in an appropriate basis of states.

Subsection 5.3.1 The highest weight construction

Let \(L_1,L_2,L_3\) be a basis for the three-dimensional Lie algebra \(\mathfrak{su}(2)\text{,}\) with bracket

\begin{equation*} [L_i,L_j] = i \sum_{k=1}^3 \epsilon_{ijk} L_k. \end{equation*}

We want to construct representations of this algebra. To construct a representation, we are looking for three matrices \(\Gamma_1, \Gamma_2, \Gamma_3\) with commutator

\begin{equation} [\Gamma_i, \Gamma_j] = i \sum_{k=1}^3 \epsilon_{ijk} \Gamma_k.\label{equation-comm-gamma}\tag{5.3.1} \end{equation}

Since we are constructing representations up to equivalence, we can always use similarity transformations to make the representation as simple as possible. Our goal is to use a similarity transformation to simultaneously diagonalize as many of the matrices \(\Gamma_i\) as possible. But since two matrices that are diagonalized by the same similarity transformation must commute, we can only diagonalize one of the three \(\Gamma_i\text{.}\) It is conventional to choose \(\Gamma_3\) as being diagonal. In other words, we will be constructing representations of \(\mathfrak{su}(2)\) such that \(\Gamma_3\) is a diagonal matrix.

The idea is to construct the vector space on which the representation acts. We will call a state an element of the vector space on which the representation acts and that is an eigenvector of \(\Gamma_3\text{.}\) We use the standard Dirac “bra-ket” notation. We write the eigenstate of \(\Gamma_3\) with eigenvalue \(m\) as \(|m\rangle\text{:}\)

\begin{equation*} \Gamma_3 |m \rangle = m |m \rangle. \end{equation*}

At this stage, all that we know is that \(m\) is a real number, since representations of \(\mathfrak{su}(2)\) are composed of Hermitian matrices (and hence with real eigenvalues).

Remark 5.3.1. The Dirac bra-ket notation.

The Dirac bra-ket notation is conventional in the treatment of Lie algebras in both physics and mathematics. In this notation, we represent a complex vector \(v\) as \(|v \rangle\text{.}\) We represent its complex conjugate transpose \((v^*)^T\) as \(\langle v |\text{.}\) Thus the norm square of \(v\) can be written as

\begin{equation*} \langle v | v \rangle = (v^*)^T v = \sum_{i=1}^n |v_i|^2. \end{equation*}

In general, for two vectors \(u,v\text{,}\)

\begin{equation*} \langle u | v \rangle = (u^*)^T v = \sum_{i=1}^n u_i^* v_i. \end{equation*}

We are constructing finite-dimensional representations. So we know that \(\Gamma_3\) only has a finite number of eigenvalues. We pick the state with the largest eigenvalue, and call this state the highest weight state. We will denote it by \(|j \rangle\text{,}\) with the letter \(j\) reserved for the largest eigenvalue of \(\Gamma_3\) in a given representation. We normalize this state so that

\begin{equation*} \langle j | j \rangle = 1. \end{equation*}

We now define a lowering operator \(\Gamma_-\) and a raising operator \(\Gamma_+\) by

\begin{equation*} \Gamma_{\pm} = \frac{1}{\sqrt{2}} (\Gamma_1 \pm i \Gamma_2). \end{equation*}

It will be convenient, for reasons that will become clear shortly, to use \(\Gamma_-, \Gamma_+, \Gamma_3\) as a basis for our representation, instead of \(\Gamma_1, \Gamma_2, \Gamma_3\text{.}\) So we should transform the commutation relations (5.3.1) into commutation relations for the new basis. A straightforward calculation shows that

\begin{equation} [\Gamma_3, \Gamma_{\pm}] = \pm \Gamma_{\pm}, \qquad [\Gamma_+, \Gamma_-] = \Gamma_3.\label{equation-comm-gammapm}\tag{5.3.2} \end{equation}

So instead of constructing matrices \(\Gamma_1, \Gamma_2, \Gamma_3\) satisfying (5.3.1), we are now trying to construct a diagonal matrix \(\Gamma_3\) and two matrices \(\Gamma_{\pm}\) satisfying (5.3.2).
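As a quick sanity check (not part of the original derivation), we can verify (5.3.2) numerically. Here we take for \(\Gamma_1, \Gamma_2, \Gamma_3\) the two-dimensional spin-\(1/2\) matrices (the Pauli matrices divided by two), used purely as a concrete instance of (5.3.1):

```python
import numpy as np

# Spin-1/2 matrices (Pauli matrices divided by 2), a concrete instance of (5.3.1)
G1 = np.array([[0, 1], [1, 0]]) / 2
G2 = np.array([[0, -1j], [1j, 0]]) / 2
G3 = np.array([[1, 0], [0, -1]]) / 2

def comm(A, B):
    return A @ B - B @ A

# Raising and lowering operators, as defined above
Gp = (G1 + 1j * G2) / np.sqrt(2)
Gm = (G1 - 1j * G2) / np.sqrt(2)

# Check (5.3.2): [G3, G±] = ±G±, [G+, G-] = G3
assert np.allclose(comm(G3, Gp), Gp)
assert np.allclose(comm(G3, Gm), -Gm)
assert np.allclose(comm(Gp, Gm), G3)
```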

\(\Gamma_{\pm}\) are called lowering and raising operators for the following reason. Pick a state \(|m\rangle\text{,}\) and act with \(\Gamma_3 \Gamma_{\pm}\text{.}\) We get:

\begin{align*} \Gamma_3 \Gamma_{\pm} |m \rangle =\amp (\Gamma_{\pm} \Gamma_3 \pm \Gamma_{\pm} ) |m \rangle\\ =\amp (m \pm 1) \Gamma_{\pm} | m \rangle. \end{align*}

In other words, \(\Gamma_{\pm} |m \rangle\) is also an eigenstate of \(\Gamma_3\text{,}\) with eigenvalue \((m \pm 1)\text{.}\) So acting with \(\Gamma_-\) on an eigenstate of \(\Gamma_3\) produces a new eigenstate with eigenvalue lowered by \(1\) (hence the name lowering operator), while acting with \(\Gamma_+\) produces a new eigenstate with eigenvalue raised by \(1\) (hence the name raising operator).
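Here is a minimal numerical illustration of the raising and lowering property, using the three-dimensional spin-\(1\) matrices (taken as given here; they are constructed explicitly later in this section):

```python
import numpy as np

# Spin-1 matrices in the eigenbasis of G3 (taken as given; derived later in this section)
G3 = np.diag([1.0, 0.0, -1.0])
Gp = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
Gm = Gp.T

m = 0                             # eigenvalue of the middle state |0>
v = np.array([0.0, 1.0, 0.0])     # the eigenstate |m> = |0>

# G± |m> is an eigenstate of G3 with eigenvalue m ± 1
assert np.allclose(G3 @ (Gp @ v), (m + 1) * (Gp @ v))
assert np.allclose(G3 @ (Gm @ v), (m - 1) * (Gm @ v))
```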

We will fix the normalization of the eigenstate of \(\Gamma_3\) using \(\Gamma_-\text{:}\)

\begin{equation*} |m-1 \rangle := \Gamma_- | m \rangle. \end{equation*}

Then, acting on the highest weight state \(|j \rangle\) with \(\Gamma_-\text{,}\) we produce a tower of states:

\begin{equation*} |j \rangle, \qquad |j-1 \rangle, \qquad \ldots, \qquad | q \rangle. \end{equation*}

We know that the tower must end at some point, since we are constructing a finite-dimensional representation. Thus there must be an eigenstate \(|q \rangle\) of \(\Gamma_3\) such that

\begin{equation*} \Gamma_- | q \rangle = 0. \end{equation*}

The next step is to figure out what \(q\) is for a given \(j\text{.}\) This will tell us the dimension of the corresponding representation. To this end, we notice that we have not fully determined how \(\Gamma_+\) acts on an eigenstate of \(\Gamma_3\text{.}\) All that we know is that

\begin{equation*} \Gamma_+ | m \rangle = N_m |m+1 \rangle \end{equation*}

for some normalization constant \(N_m\text{.}\)

We can compute \(N_m\) as follows:

\begin{align*} N_m |m+1 \rangle =\amp \Gamma_+ |m \rangle\\ =\amp \Gamma_+ \Gamma_- |m+1 \rangle\\ =\amp (\Gamma_- \Gamma_+ + \Gamma_3) |m+1 \rangle\\ =\amp (N_{m+1} + (m+1) )|m+1 \rangle. \end{align*}

Therefore, we get a recursion

\begin{equation*} N_m = N_{m+1} + (m+1). \end{equation*}

We can solve this recursion as follows. First, since \(|j \rangle\) is the highest weight state, we know that \(N_{j} = 0.\) Thus

\begin{equation*} N_{j-1} = j. \end{equation*}

Then

\begin{equation*} N_{j-2} = N_{j-1} + (j-1) = j + (j-1). \end{equation*}

By induction, we get

\begin{equation*} N_{j-s} = \sum_{k=0}^{s-1} (j-k) = s j - \frac{1}{2} s(s-1) = \frac{s}{2} (2 j - s + 1), \end{equation*}

for \(s = 0,\ldots,2 j.\) We redefine the index as \(m = j - s\) to get:

\begin{equation*} N_m = \frac{j-m}{2} (j + m + 1) = \frac{1}{2}j(j+1) - \frac{1}{2}m (m+1). \end{equation*}
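We can verify (as a sanity check, not part of the argument) that this closed form satisfies the recursion \(N_m = N_{m+1} + (m+1)\) with \(N_j = 0\text{,}\) using exact rational arithmetic:

```python
from fractions import Fraction

def N(j, m):
    # closed form: N_m = (j - m)(j + m + 1)/2
    return Fraction(1, 2) * (j - m) * (j + m + 1)

j = Fraction(5, 2)   # a sample value of j; any non-negative half-integer works
assert N(j, j) == 0  # the highest weight state is annihilated by Gamma_+

# check the recursion N_m = N_{m+1} + (m+1) for m = j-1, j-2, ..., -j
m = j - 1
while m >= -j:
    assert N(j, m) == N(j, m + 1) + (m + 1)
    m -= 1
```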

With this result we can determine what the lowest weight state \(| q \rangle\) is. By definition, it is the end of the tower, so we must have \(\Gamma_- |q \rangle =0\text{.}\) Then:

\begin{align*} 0 = \amp \Gamma_+ \Gamma_- | q \rangle\\ = \amp (\Gamma_- \Gamma_+ + \Gamma_3) | q \rangle\\ = \amp (N_q + q) | q \rangle, \end{align*}

with \(N_q = \frac{1}{2} j (j+1) - \frac{1}{2} q (q+1).\) Thus we are looking for a solution to the equation

\begin{equation*} \frac{1}{2} j (j+1) - \frac{1}{2} q (q+1) + q = \frac{1}{2} j (j+1) - \frac{1}{2} q (q-1) = 0. \end{equation*}

There are two solutions: \(q=j+1\) and \(q = - j\text{.}\) The first one does not make sense, since \(|j \rangle\) is a highest weight state. Therefore \(q = -j.\)

What this means is that we have constructed the following tower of eigenstates for \(\Gamma_3\text{:}\)

\begin{equation*} |j \rangle, \qquad |j-1 \rangle, \qquad \ldots, \qquad |-j+1 \rangle, \qquad |-j \rangle. \end{equation*}

There are \(2j+1\) states. Since these states span the vector space on which our representation acts, \(2j+1\) must be a positive integer. That is, \(j\) must be a non-negative half-integer.

The outcome of this construction is the following important result: for every non-negative half-integer \(j = 0, \tfrac{1}{2}, 1, \tfrac{3}{2}, \ldots\text{,}\) the highest weight construction produces a \((2j+1)\)-dimensional irreducible representation of \(\mathfrak{su}(2)\text{.}\)

We have not proved here that these representations are irreducible, and that we have obtained all irreducible representations: this is beyond the scope of this class.

Subsection 5.3.2 Normalization of states

In the construction above we defined the states \(|j,m \rangle\) (now labeling each state by both the highest weight \(j\) and the eigenvalue \(m\)) as:

\begin{equation*} \Gamma_3 |j,m \rangle = m |j,m \rangle, \qquad \Gamma_- |j,m \rangle = |j ,m-1 \rangle, \qquad \Gamma_+ |j,m \rangle = N_m |j,m+1 \rangle, \end{equation*}

with \(N_m = \frac{1}{2}j(j+1) - \frac{1}{2}m (m+1).\) One can show that these states are orthogonal, that is,

\begin{equation*} \langle j,m' | j,m \rangle = 0 \qquad \text{if } m' \neq m. \end{equation*}

As an exercise, show that \(\langle j,m' | j,m \rangle = 0\) if \(m' \neq m\) (hint: use the hermiticity of \(\Gamma_3\)).

However, the states \(|j,m \rangle\) are not normalized. We normalized the highest weight state by requiring that \(\langle j,j | j,j \rangle =1\text{,}\) but \(\langle j,m | j,m \rangle \neq 1\) for \(m \leq j-1\text{.}\) Let \(C_m^2 = \langle j,m | j,m \rangle\text{.}\) We can construct a new basis of states that are orthonormal as

\begin{equation*} \widetilde{|j,m \rangle } = \frac{1}{C_m} | j,m \rangle. \end{equation*}


Clearly, this does not change how \(\Gamma_3\) acts, since

\begin{equation*} \Gamma_3 \widetilde{|j,m \rangle } = m \widetilde{|j,m \rangle } . \end{equation*}

We want to find how \(\Gamma_+\) and \(\Gamma_-\) act on this new basis.

Since \(\Gamma_{\pm} = \frac{1}{\sqrt{2}} (\Gamma_1 \pm i \Gamma_2),\) and \(\Gamma_1, \Gamma_2\) are Hermitian, we find \(\Gamma_{+}^\dagger =\Gamma_-\) and \(\Gamma_-^\dagger = \Gamma_+\text{.}\) Thus we can write

\begin{equation*} \langle j,m-1 | j,m-1 \rangle = \langle j,m | \Gamma_+ \Gamma_- | j,m \rangle = N_{m-1} \langle j,m | j,m \rangle, \end{equation*}

where for the second equality we applied \(\Gamma_+ \Gamma_-\) on the ket \(| j,m \rangle\text{.}\) Thus, with \(C_m^2 = \langle j,m | j,m \rangle\) as above, we get the relation

\begin{equation} C_{m-1}^2 = N_{m-1} C_m^2 = \frac{1}{2}(j+m)(j-m+1) C_m^2.\label{equation-relation-norm}\tag{5.3.3} \end{equation}

Now we know that

\begin{equation*} \Gamma_- |j,m \rangle = |j,m-1 \rangle. \end{equation*}

In terms of the normalized basis, this becomes

\begin{equation*} \Gamma_- \widetilde{|j,m \rangle } = \frac{C_{m-1}}{C_m} \widetilde{|j,m-1 \rangle }. \end{equation*}

Substituting (5.3.3), we get:

\begin{equation*} \Gamma_- \widetilde{|j,m \rangle } = \sqrt{\frac{1}{2}(j+m)(j-m+1)} \widetilde{|j,m-1 \rangle }. \end{equation*}

As for \(\Gamma_+\text{,}\) we know that

\begin{equation*} \Gamma_+ |j,m \rangle = N_m |j,m+1 \rangle = \frac{j-m}{2} (j + m + 1) |j, m+1 \rangle. \end{equation*}

In terms of the normalized basis, we get:

\begin{equation*} \Gamma_+ \widetilde{|j,m \rangle } = \frac{C_{m+1}}{C_m}\frac{j-m}{2} (j + m + 1) \widetilde{|j,m+1 \rangle } . \end{equation*}

Substituting (5.3.3):

\begin{align*} \Gamma_+ \widetilde{|j,m \rangle } =\amp \sqrt{\frac{2}{(j+m+1)(j-m)}} \frac{j-m}{2} (j + m + 1) \widetilde{|j,m+1 \rangle }\\ =\amp \sqrt{\frac{1}{2}(j+m+1)(j-m)}\widetilde{|j,m+1 \rangle }. \end{align*}
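Putting the normalized formulas together, we can build the matrices \(\Gamma_3, \Gamma_{\pm}\) for any spin \(j\) and check the commutation relations (5.3.2) numerically. This is a sketch in Python/NumPy, with the basis ordered from \(m = j\) down to \(m = -j\text{:}\)

```python
import numpy as np

def su2_irrep(j):
    """Spin-j matrices in the orthonormal basis |j,j>, |j,j-1>, ..., |j,-j>,
    built from the normalized ladder coefficients derived above."""
    dim = int(round(2 * j + 1))
    ms = [j - s for s in range(dim)]        # eigenvalues of Gamma_3, descending
    G3 = np.diag(ms).astype(complex)
    Gm = np.zeros((dim, dim), dtype=complex)
    for s, m in enumerate(ms[:-1]):
        # Gamma_- |j,m> = sqrt((j+m)(j-m+1)/2) |j,m-1>
        Gm[s + 1, s] = np.sqrt((j + m) * (j - m + 1) / 2)
    Gp = Gm.conj().T                        # Gamma_+ = (Gamma_-)^dagger
    return G3, Gp, Gm

def comm(A, B):
    return A @ B - B @ A

# verify (5.3.2) for several values of j
for j in [0.5, 1, 1.5, 2]:
    G3, Gp, Gm = su2_irrep(j)
    assert np.allclose(comm(G3, Gp), Gp)
    assert np.allclose(comm(G3, Gm), -Gm)
    assert np.allclose(comm(Gp, Gm), G3)
```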
Remark 5.3.6.

For clarity, we will now drop the tilde symbol on the normalized states. From now on all states will be assumed to be normalized.

Subsection 5.3.3 Matrix representations

We have constructed the irreducible representations of \(\mathfrak{su}(2)\) using the highest weight construction. We can now write down explicit matrices for these representations, using the orthonormal basis for the vector space on which they act. We will do explicitly the most common representations in physics, the spin \(1/2\) and spin \(1\) representations.

We consider the spin \(1/2\) representation, with \(j=1/2\text{.}\) There are two states: \(|1/2,1/2 \rangle\) and \(|1/2, -1/2 \rangle\text{.}\) Those are normalized eigenstates of \(\Gamma_3\) with eigenvalues \(1/2\) and \(-1/2\text{.}\)

Using Lemma 5.3.5, we get:

\begin{equation*} \Gamma_- |1/2, 1/2 \rangle = \frac{1}{\sqrt{2}} |1/2, -1/2 \rangle, \qquad \Gamma_- |1/2, - 1/2 \rangle = 0, \end{equation*}

and

\begin{equation*} \Gamma_+ |1/2, 1/2 \rangle = 0, \qquad \Gamma_+ |1/2, -1/2 \rangle = \frac{1}{\sqrt{2}} |1/2, 1/2 \rangle. \end{equation*}

To write down a matrix representation, we think of the states as basis vector of a two-dimensional vector space. We define

\begin{equation*} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = |1/2,1/2 \rangle, \qquad \begin{pmatrix} 0 \\ 1 \end{pmatrix} = |1/2, -1/2 \rangle. \end{equation*}

Then the action of the operators \(\Gamma_3\) and \(\Gamma_{\pm}\) can be rewritten in matrix form as

\begin{equation*} \Gamma_3 = \frac{1}{2} \begin{pmatrix} 1 \amp 0 \\ 0 \amp -1 \end{pmatrix}, \qquad \Gamma_+ = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \amp 1 \\ 0 \amp 0 \end{pmatrix}, \qquad \Gamma_- = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \amp 0 \\ 1 \amp 0 \end{pmatrix}. \end{equation*}
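We can confirm numerically that these three matrices reproduce the ladder action computed above and satisfy the commutation relations (5.3.2):

```python
import numpy as np

# The spin-1/2 matrices written above
G3 = np.array([[1, 0], [0, -1]]) / 2
Gp = np.array([[0, 1], [0, 0]]) / np.sqrt(2)
Gm = np.array([[0, 0], [1, 0]]) / np.sqrt(2)

up = np.array([1.0, 0.0])    # |1/2, 1/2>
down = np.array([0.0, 1.0])  # |1/2, -1/2>

# ladder action on the two states
assert np.allclose(Gm @ up, down / np.sqrt(2))
assert np.allclose(Gm @ down, 0)
assert np.allclose(Gp @ down, up / np.sqrt(2))
assert np.allclose(Gp @ up, 0)

# commutation relations (5.3.2)
assert np.allclose(G3 @ Gp - Gp @ G3, Gp)
assert np.allclose(Gp @ Gm - Gm @ Gp, G3)
```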

We are usually interested in going back to the \(\Gamma_1, \Gamma_2, \Gamma_3\) basis. Since \(\Gamma_{\pm} = \frac{1}{\sqrt{2}} (\Gamma_1 \pm i \Gamma_2),\) we have

\begin{equation} \Gamma_1 = \frac{1}{\sqrt{2}}(\Gamma_+ + \Gamma_-), \qquad \Gamma_2= -\frac{i}{\sqrt{2}}(\Gamma_+ - \Gamma_-).\label{equation-gamma12}\tag{5.3.4} \end{equation}

Thus

\begin{equation*} \Gamma_1 = \frac{1}{2} \begin{pmatrix} 0 \amp 1 \\ 1 \amp 0 \end{pmatrix}, \qquad \Gamma_2 = -\frac{i}{2} \begin{pmatrix} 0 \amp 1 \\ -1 \amp 0 \end{pmatrix}, \qquad \Gamma_3 = \frac{1}{2} \begin{pmatrix} 1 \amp 0 \\ 0 \amp -1 \end{pmatrix}. \end{equation*}

Those are none other than the matrices \(\sigma_i/2\text{,}\) with \(\sigma_i\) the Pauli matrices, giving the defining representation of \(SU(2)\text{!}\) See Example 5.2.4. Thus we have recovered the two-dimensional representation of \(\mathfrak{su}(2)\) that is obtained from the infinitesimal generators of \(SU(2)\text{.}\)

Consider now the spin \(1\) representation with \(j=1\text{,}\) with the three normalized states \(|1,1 \rangle\text{,}\) \(|1,0 \rangle\) and \(|1,-1 \rangle\text{.}\) The action of \(\Gamma_{\pm}\) can be calculated to be:

\begin{equation*} \Gamma_- |1,1 \rangle = |1,0 \rangle, \qquad \Gamma_- |1,0 \rangle = |1,-1 \rangle, \qquad \Gamma_- |1,-1 \rangle = 0, \end{equation*}

and

\begin{equation*} \Gamma_+ |1,1 \rangle = 0, \qquad \Gamma_+ |1,0 \rangle = |1,1 \rangle, \qquad \Gamma_+ |1,-1 \rangle = |1,0 \rangle. \end{equation*}

Using the basis vectors

\begin{equation*} \begin{pmatrix} 1\\ 0 \\ 0 \end{pmatrix} = |1,1 \rangle, \qquad \begin{pmatrix} 0\\ 1 \\ 0 \end{pmatrix} = |1,0 \rangle, \qquad \begin{pmatrix} 0\\ 0 \\ 1 \end{pmatrix} = |1,-1 \rangle, \end{equation*}

we get the matrices

\begin{equation*} \Gamma_+ = \begin{pmatrix} 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0 \end{pmatrix}, \qquad \Gamma_- = \begin{pmatrix} 0 \amp 0 \amp 0 \\ 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \end{pmatrix}, \qquad \Gamma_3 = \begin{pmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp -1 \end{pmatrix}. \end{equation*}

Using (5.3.4), we get:

\begin{equation*} \Gamma_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \amp 1 \amp 0 \\ 1 \amp 0 \amp 1 \\ 0 \amp 1 \amp 0 \end{pmatrix}, \qquad \Gamma_2 = - \frac{i}{\sqrt{2}} \begin{pmatrix} 0 \amp 1 \amp 0 \\ -1 \amp 0 \amp 1 \\ 0 \amp - 1 \amp 0 \end{pmatrix}, \qquad \Gamma_3 = \begin{pmatrix} 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp -1 \end{pmatrix}. \end{equation*}

One can check that these matrices indeed satisfy the commutation relations of the \(\mathfrak{su}(2)\) Lie algebra. In fact, they are equivalent to the defining representation of \(SO(3)\) as presented in Example 5.2.4. Indeed, one can find a similarity transformation that brings the matrices of the defining representation of \(SO(3)\) into the matrices above (this is the similarity transformation that diagonalizes \(\Gamma_3\)).
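The claimed commutation relations for the spin-\(1\) matrices can be checked directly:

```python
import numpy as np

# The spin-1 matrices written above
G1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
G2 = np.array([[0, 1, 0], [-1, 0, 1], [0, -1, 0]]) * (-1j) / np.sqrt(2)
G3 = np.diag([1.0, 0.0, -1.0]).astype(complex)

def comm(A, B):
    return A @ B - B @ A

# [G1, G2] = i G3 and cyclic permutations, i.e. (5.3.1)
assert np.allclose(comm(G1, G2), 1j * G3)
assert np.allclose(comm(G2, G3), 1j * G1)
assert np.allclose(comm(G3, G1), 1j * G2)
```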

Subsection 5.3.4 Casimir invariant

I will not develop the general theory of Casimir elements here, but let me just say a few words in the context of \(\mathfrak{su}(2)\text{.}\) Roughly speaking, a Casimir element is a combination of the elements of the Lie algebra that commutes with all elements of the algebra. However, to construct a Casimir element we may take products of matrices in a given representation, and hence we actually leave the algebra (more precisely, a Casimir element lives in the “universal enveloping algebra” of a Lie algebra).

An important Casimir element is the quadratic Casimir invariant. In the context of \(\mathfrak{su}(2)\text{,}\) this is the operator

\begin{equation*} \Gamma^2 := \Gamma_1^2 + \Gamma_2^2 + \Gamma_3^2 = \Gamma_+ \Gamma_- + \Gamma_- \Gamma_+ + \Gamma_3^2. \end{equation*}

Using simple properties of commutators, it is easy to see that

\begin{equation*} [\Gamma^2, \Gamma_1 ] = [\Gamma^2, \Gamma_2] = [\Gamma^2, \Gamma_3] = 0, \end{equation*}

and hence \(\Gamma^2\) commutes with all elements of the Lie algebra \(\mathfrak{su}(2)\text{.}\)

Since \(\Gamma^2\) commutes with all elements of the Lie algebra, it follows that \(\Gamma^2 |j,m \rangle\) can only depend on \(j\text{;}\) it cannot depend on \(m\text{.}\) (That's because it must be a multiple of the identity matrix, as an operator acting on the vector space spanned by the states \(|j,m \rangle\text{.}\)) Thus we can calculate \(\Gamma^2 |j,m \rangle\) by acting on the highest weight vector:

\begin{align*} \Gamma^2 |j,j \rangle =\amp \left(\Gamma_+ \Gamma_- + \Gamma_- \Gamma_+ + \Gamma_3^2 \right) |j,j \rangle\\ =\amp (j + j^2) |j,j \rangle. \end{align*}

Therefore

\begin{equation*} \Gamma^2 |j,m \rangle = j(j+1) |j,m \rangle. \end{equation*}

This Casimir invariant in physics corresponds to the total angular momentum, which you may have seen in your quantum mechanics class. It is the same for all states in a given irreducible representation of \(\mathfrak{su}(2)\text{.}\)
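As a concrete check, for the spin-\(1\) matrices of the previous subsection one finds \(\Gamma^2 = j(j+1) \cdot \mathbb{1} = 2 \cdot \mathbb{1}\text{:}\)

```python
import numpy as np

# Spin-1 matrices from the previous subsection
G1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
G2 = np.array([[0, 1, 0], [-1, 0, 1], [0, -1, 0]]) * (-1j) / np.sqrt(2)
G3 = np.diag([1.0, 0.0, -1.0]).astype(complex)

j = 1
Gsq = G1 @ G1 + G2 @ G2 + G3 @ G3

# the quadratic Casimir is j(j+1) times the identity on the spin-j irrep
assert np.allclose(Gsq, j * (j + 1) * np.eye(3))

# and it commutes with every generator
for G in (G1, G2, G3):
    assert np.allclose(Gsq @ G - G @ Gsq, 0)
```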

Subsection 5.3.5 Differential representation and spherical harmonics

In most of this class we work with matrix representations of Lie groups and Lie algebras. But we have already seen in Subsection 4.2.4 that we can also represent operators as differential operators acting on the space of functions of some variables. For instance, in (4.2.6) we wrote down a representation for the basis vectors \(L_1, L_2, L_3\) of the Lie algebra \(\mathfrak{su}(2)\) as differential operators acting on functions \(f(x,y,z)\text{,}\) which we reproduce here for convenience:

\begin{equation*} L_1 = - i \left( y \frac{\partial}{\partial z} - z \frac{\partial}{\partial y} \right),\qquad L_2 = - i \left( z \frac{\partial}{\partial x} - x \frac{\partial}{\partial z} \right),\qquad L_3 = - i \left( x \frac{\partial}{\partial y} - y \frac{\partial}{\partial x} \right). \end{equation*}

As mentioned there, these operators correspond to the angular momentum operators in quantum mechanics.

Now comes the power of abstract mathematics. We can apply everything that we have found about representation theory of \(\mathfrak{su}(2) \cong \mathfrak{so}(3)\) to this particular representation of the Lie algebra. We know that we can decompose the space of functions of \((x,y,z)\) in terms of how they transform under the irreducible representations of \(SO(3)\text{.}\) It is easier to work in spherical coordinates \((r,\theta,\phi)\text{,}\) since rotations act only on the angular variables \((\theta,\phi)\) and leave the radius \(r\) invariant; we can therefore restrict our attention to functions \(f(\theta,\phi)\) on the sphere. Then we can write a basis for functions \(f(\theta,\phi)\) on the sphere given by functions \(Y^m_j(\theta,\phi)\) that are eigenfunctions of \(L_3\text{:}\)

\begin{equation*} L_3 Y^m_j(\theta,\phi) = m Y^m_j(\theta, \phi). \end{equation*}

Moreover, by the construction of the Casimir invariant we know that these functions satisfy:

\begin{equation*} L^2 Y^m_j(\theta,\phi) = j (j+1) Y^m_j(\theta,\phi). \end{equation*}

In other words, we decompose the space of functions on the sphere as a direct sum of subspaces that transform according to the irreducible representations of \(\mathfrak{su}(2) \cong \mathfrak{so}(3)\text{.}\) Here we want true representations of \(SO(3)\text{,}\) the group of rotations, so we keep only the representations with \(j\) a non-negative integer. As a result, any function on the sphere can be written as a linear combination of the functions \(Y^m_j(\theta,\phi)\) satisfying the properties above, with \(j=0,1,2,3,\ldots\) and \(m=-j,-j+1,\ldots,j-1,j\text{.}\) The functions \(Y^m_j(\theta,\phi)\) are known as spherical harmonics, and will certainly show up when you study for instance the hydrogen atom in quantum mechanics. In our context, this is just a different point of view on representation theory of the group of rotations \(SO(3)\text{.}\)
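As a small symbolic check (using SymPy's built-in spherical harmonics `Ynm`, which agree with the \(Y^m_j\) above up to normalization conventions), one can verify the \(L_3\) eigenvalue equation, since \(L_3 = -i\, \partial/\partial \phi\) in spherical coordinates:

```python
from sympy import Ynm, symbols, I, diff, simplify

theta, phi = symbols('theta phi', real=True)

# L3 = -i d/dphi in spherical coordinates; check L3 Y_j^m = m Y_j^m
# for a few sample values of (j, m)
for j in range(3):
    for m in range(-j, j + 1):
        Y = Ynm(j, m, theta, phi)
        assert simplify(-I * diff(Y, phi) - m * Y) == 0
```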