
Section 4.7 The pullback of a \(k\)-form

In Section 2.4 we defined the pullback of a one-form. We now generalize this concept to \(k\)-forms. We first do it using an axiomatic approach, as we did for one-forms, and relate the result to the concept of the Jacobian. We also introduce a more direct definition of the pullback using the algebraic approach to basic \(k\)-forms introduced in Subsection 4.1.1, and show that it is consistent with our axiomatic approach.

Subsection 4.7.1 The pullback of a \(k\)-form: an axiomatic approach

In Section 2.4 we defined the pullback of a one-form. We used an “axiomatic” approach: we first defined the properties that we wanted the pullback to satisfy, and showed that there is a unique construction that satisfies these properties. The properties that we specified were, in words:

  1. The pullback of a sum of one-forms is the sum of the pullbacks.

  2. The pullback of a product of a zero-form and a one-form is the product of the pullbacks.

  3. The pullback of the exterior derivative of a zero-form is the exterior derivative of the pullback.

In this section we define the pullback of a \(k\)-form using a similar approach. All we need to do is generalize the first and second properties to \(k\)-forms.

More precisely, let \(\phi: V \to U\) be a smooth function, with \(V \subseteq \mathbb{R}^m\) and \(U\subseteq \mathbb{R}^n\) open subsets. We want the pullback \(\phi^*\) to satisfy the following properties:

  1. If \(\omega\) and \(\eta\) are \(k\)-forms on \(U\text{,}\) then

    \begin{equation*} \phi^*(\omega + \eta) = \phi^* \omega + \phi^* \eta. \end{equation*}

  2. If \(\omega\) is a \(k\)-form and \(\eta\) an \(\ell\)-form on \(U\text{,}\) then

    \begin{equation*} \phi^*(\omega \wedge \eta) = (\phi^* \omega) \wedge (\phi^* \eta). \end{equation*}

  3. If \(f\) is a zero-form (a function) on \(U\text{,}\) then

    \begin{equation*} \phi^*(df) = d(\phi^* f). \end{equation*}

The third property is unchanged, and the first two are naturally generalized to \(k\)-forms.

Imposing these properties gives us a unique definition for the pullback of a \(k\)-form.

Start with a basic \(k\)-form

\begin{equation*} dx_{i_1} \wedge \cdots \wedge dx_{i_k}. \end{equation*}

Since we impose (Property 2) that \(\phi^* (\omega \wedge \eta) = (\phi^* \omega) \wedge (\phi^* \eta)\text{,}\) we know that

\begin{equation*} \phi^* (dx_{i_1} \wedge \cdots \wedge dx_{i_k}) = \phi^*(dx_{i_1}) \wedge \cdots \wedge \phi^*(dx_{i_k}). \end{equation*}

But we already calculated how basic one-forms transform in Lemma 2.4.4; they transform as differentials would. This calculation followed from Property 3, which we still impose here, so it is still valid. This gives us the pullback of basic \(k\)-forms.
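
Explicitly, writing \(\phi(\mathbf{t}) = (x_1(\mathbf{t}), \ldots, x_n(\mathbf{t}))\) with \(\mathbf{t} = (t_1, \ldots, t_m)\) the coordinates on \(V\text{,}\) Lemma 2.4.4 gives

\begin{equation*} \phi^*(dx_i) = \frac{\partial x_i}{\partial t_1}\ dt_1 + \cdots + \frac{\partial x_i}{\partial t_m}\ dt_m, \qquad i = 1, \ldots, n. \end{equation*}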

Then we use Property 2 to extend it to a function times a basic \(k\)-form, and Property 1 to extend it to linear combinations of such terms, to get the final result for the pullback of a generic \(k\)-form.
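
Putting everything together, if \(\displaystyle \omega = \sum_{1 \leq i_1 \lt \cdots \lt i_k \leq n} f_{i_1 \cdots i_k}\ dx_{i_1} \wedge \cdots \wedge dx_{i_k}\) is a \(k\)-form on \(U\text{,}\) the result (this is the general formula of Lemma 4.7.1) is

\begin{equation*} \phi^* \omega = \sum_{1 \leq i_1 \lt \cdots \lt i_k \leq n} (f_{i_1 \cdots i_k} \circ \phi)\ \phi^*(dx_{i_1}) \wedge \cdots \wedge \phi^*(dx_{i_k}). \end{equation*}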

This formula certainly looks ugly, but it is really not that bad. Concretely, all you need to do is compose the component functions with \(\phi\text{,}\) and transform the basic one-forms one by one in the wedge product as in Lemma 2.4.4, i.e. they transform as you would expect differentials to transform. That's it! This will certainly be clearer with examples.

Consider the following two-form on \(\mathbb{R}^3\text{:}\)

\begin{equation*} \omega = x y\ dy \wedge dz + (x z + y)\ dz \wedge dx + dx \wedge dy, \end{equation*}

and the smooth function \(\phi: \mathbb{R}^2 \to \mathbb{R}^3\) given by

\begin{equation*} \phi(u,v) = (u v, u^2, v^2) = (x(u,v), y(u,v), z(u,v) ). \end{equation*}

To calculate \(\phi^* \omega\text{,}\) let us start by calculating the pullback of the basic one-forms. We get:

\begin{align*} \phi^*(dx) =\amp \frac{\partial x}{\partial u}\ du + \frac{\partial x}{\partial v}\ dv = v\ du + u\ dv,\\ \phi^*(dy) =\amp \frac{\partial y}{\partial u}\ du + \frac{\partial y}{\partial v}\ dv = 2 u \ du,\\ \phi^*(dz) =\amp \frac{\partial z}{\partial u}\ du + \frac{\partial z}{\partial v}\ dv = 2 v \ dv. \end{align*}

Putting this together, we get:

\begin{align*} \phi^*\omega =\amp (uv)(u^2) \phi^*(dy) \wedge \phi^*(dz) + ((uv)(v^2)+u^2) \phi^*(dz) \wedge \phi^*(dx) + \phi^*(dx) \wedge \phi^*(dy)\\ =\amp u^3 v (2u\ du) \wedge (2 v\ dv) + (u v^3 + u^2) (2v\ dv) \wedge (v\ du + u \ dv) + (v\ du+ u \ dv) \wedge (2u \ du) \\ =\amp 4 u^4 v^2\ du \wedge dv + 2 u v^2 (v^3+u) \ dv \wedge du + 2 u^2\ dv \wedge du\\ =\amp (4 u^4 v^2 - 2 u v^5 - 2 u^2 v^2 - 2 u^2) du \wedge dv. \end{align*}

Note that we used the fact that \(du \wedge du = dv \wedge dv = 0\text{,}\) and \(dv \wedge du = - du \wedge dv\text{.}\)

Consider the following three-form on \(\mathbb{R}^3\text{:}\)

\begin{equation*} \omega = e^{x + y + z} dx \wedge dy \wedge dz, \end{equation*}

and the smooth function \(\phi: \mathbb{R}^3_{>0} \to \mathbb{R}^3\) given by

\begin{equation*} \phi(u,v,w) = (\ln(u v), \ln(v w), \ln(w u) ). \end{equation*}

The pullback of the basic one-forms is:

\begin{align*} \phi^*(dx) =\amp \frac{1}{u}\ du + \frac{1}{v}\ dv,\\ \phi^*(dy) =\amp \frac{1}{v}\ dv + \frac{1}{w}\ dw,\\ \phi^*(dz)=\amp \frac{1}{w}\ dw + \frac{1}{u}\ du. \end{align*}

We get:

\begin{align*} \phi^* \omega =\amp e^{\ln(uv)+\ln(vw)+\ln(wu)} \left(\frac{1}{u}\ du + \frac{1}{v}\ dv \right) \wedge \left( \frac{1}{v}\ dv + \frac{1}{w}\ dw \right) \wedge \left( \frac{1}{w}\ dw + \frac{1}{u}\ du \right)\\ =\amp u^2 v^2 w^2 \left( \frac{1}{uvw}\ du \wedge dv \wedge dw + \frac{1}{u v w}\ dv \wedge dw \wedge du \right) \\ =\amp 2 u v w\ du \wedge dv \wedge dw, \end{align*}

where we used the fact that the basic three-forms vanish whenever one of the factors is repeated, and \(dv \wedge dw \wedge du = du \wedge dv \wedge dw\) since we need to exchange two basic one-forms twice to relate the two basic three-forms.

Good, so we are now experts at computing pullbacks! Calculating the pullback of a \(k\)-form is no more difficult than calculating the pullback of a one-form, but the calculation may be longer, and you need to use the anti-symmetry properties of basic \(k\)-forms in Lemma 4.1.6 to simplify the result at the end of your calculation.

Property 3 in our axiomatic definition states that the pullback commutes with the exterior derivative for zero-forms. It turns out that this property, which is very important, holds in general for \(k\)-forms. Let us prove this.
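
The precise statement (this is Lemma 4.7.4, used again in the exercises) is that, for any \(k\)-form \(\omega\) on \(U\) and any smooth function \(\phi: V \to U\text{,}\)

\begin{equation*} \phi^*(d \omega) = d(\phi^* \omega). \end{equation*}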

Recall the definition of the exterior derivative Definition 4.3.1. Let \(\displaystyle \omega = \sum_{1 \leq i_1 \lt \cdots \lt i_k \leq n} f_{i_1 \cdots i_k} dx_{i_1} \wedge \cdots \wedge dx_{i_k}\text{.}\) Then

\begin{equation*} d\omega =\sum_{1 \leq i_1 \lt \cdots \lt i_k \leq n} d (f_{i_1 \cdots i_k} ) \wedge dx_{i_1} \wedge \cdots \wedge dx_{i_k}. \end{equation*}

Using Properties 1 and 2, we can write

\begin{equation*} \phi^*(d \omega) = \sum_{1 \leq i_1 \lt \cdots \lt i_k \leq n} \phi^*(d (f_{i_1 \cdots i_k} )) \wedge \phi^*(dx_{i_1}) \wedge \cdots \wedge \phi^*(dx_{i_k}). \end{equation*}

Now we can use Property 3, which states that

\begin{equation*} \phi^*(d (f_{i_1 \cdots i_k} )) = d ( \phi^* f_{i_1 \cdots i_k} ), \end{equation*}

since the \(f_{i_1 \cdots i_k}\) are just functions (zero-forms). Thus we have:

\begin{align*} \phi^*(d \omega) =\amp \sum_{1 \leq i_1 \lt \cdots \lt i_k \leq n}d ( \phi^* f_{i_1 \cdots i_k} ) \wedge \phi^*(dx_{i_1}) \wedge \cdots \wedge \phi^*(dx_{i_k})\\ =\amp d\left( \sum_{1 \leq i_1 \lt \cdots \lt i_k \leq n} (\phi^* f_{i_1 \cdots i_k}) \phi^*(dx_{i_1}) \wedge \cdots \wedge \phi^*(dx_{i_k}) \right)\\ =\amp d( \phi^* \omega), \end{align*}

where the last line follows from Properties 1 and 2 again.

Subsection 4.7.2 The pullback of a top form in \(\mathbb{R}^n\) and the Jacobian determinant

There is a special case for which the pullback takes a very nice form. This case will play a role shortly in our theory of integration, as it will be related to the transformation formula (the generalization of the substitution formula) for multiple integrals.

Let us start by recalling the definition of the Jacobian of a smooth function.

Definition 4.7.5. The Jacobian.

Let \(\phi:V \to U\) be a smooth function with \(U,V \subseteq \mathbb{R}^n\) open subsets. Let us write \(\mathbf{t} = (t_1, \ldots, t_n) \in V\text{,}\) and

\begin{equation*} \phi(\mathbf{t}) = (x_1(\mathbf{t}), \ldots, x_n(\mathbf{t}) ). \end{equation*}

The Jacobian of \(\phi\text{,}\) which we denote by \(\frac{\partial(x_1, \ldots, x_n)}{\partial(t_1, \ldots, t_n)}\) or \(J_\phi\) or \(D \phi\) (lots of different notations!), is the \(n \times n\) matrix of first partial derivatives:

\begin{equation*} D \phi = J_\phi = \frac{\partial(x_1, \ldots, x_n)}{\partial(t_1, \ldots, t_n)} = \begin{pmatrix} \frac{\partial x_1}{\partial t_1} \amp \cdots \amp \frac{\partial x_1}{\partial t_n} \\ \vdots \amp \ddots \amp \vdots \\ \frac{\partial x_n}{\partial t_1} \amp \cdots \amp \frac{\partial x_n}{\partial t_n} \end{pmatrix} \end{equation*}

Its determinant is called the Jacobian determinant.

It turns out that if we pullback an \(n\)-form with respect to a function \(\phi\) as in Definition 4.7.5, we can write \(\phi^* \omega\) in terms of the Jacobian determinant. First, let us introduce the common name “top form” for an \(n\)-form on an open subset \(U \subseteq \mathbb{R}^n\text{.}\)

Definition 4.7.6. Top form.

We call an \(n\)-form on an open subset \(U \subseteq \mathbb{R}^n\) a top form.

Such forms are called “top forms” because all \(k\)-forms with \(k \gt n\) necessarily vanish on \(\mathbb{R}^n\text{,}\) so \(n\) is the top degree in which a form can be non-zero. Going back to the pullback, we get the following nice result when we pullback a top form:
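
The result (this is Lemma 4.7.7) is that, for a smooth function \(\phi: V \to U\) with \(U, V \subseteq \mathbb{R}^n\) open subsets, as in Definition 4.7.5,

\begin{equation*} \phi^*(dx_1 \wedge \cdots \wedge dx_n) = (\det J_\phi)\ dt_1 \wedge \cdots \wedge dt_n, \end{equation*}

and hence, by Properties 1 and 2, for a general top form \(\omega = f\ dx_1 \wedge \cdots \wedge dx_n\) on \(U\text{,}\)

\begin{equation*} \phi^* \omega = (f \circ \phi) (\det J_\phi)\ dt_1 \wedge \cdots \wedge dt_n. \end{equation*}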

It is not so easy to write a general proof for \(\mathbb{R}^n\) using the computational approach to the pullback that we have used so far. We will be able to write down a general proof easily in Subsection 4.7.4, after having introduced a more algebraic approach to the pullback. For the time being, let us prove the statement explicitly for \(\mathbb{R}^2\) and \(\mathbb{R}^3\text{.}\)

For \(\mathbb{R}^2\text{,}\) our function \(\phi\) takes the form

\begin{equation*} \phi(t_1, t_2) = (x_1(t_1, t_2), x_2(t_1, t_2)). \end{equation*}

What we need to show is that

\begin{equation*} \phi^*(dx_1 \wedge dx_2) = (\det J_\phi) dt_1 \wedge dt_2, \end{equation*}

where

\begin{equation*} \det J_\phi = \det \begin{pmatrix} \frac{\partial x_1}{\partial t_1} \amp \frac{\partial x_1}{\partial t_2} \\ \frac{\partial x_2}{\partial t_1} \amp \frac{\partial x_2}{\partial t_2} \end{pmatrix} = \frac{\partial x_1}{\partial t_1} \frac{\partial x_2}{\partial t_2} - \frac{\partial x_2}{\partial t_1} \frac{\partial x_1}{\partial t_2}. \end{equation*}

But

\begin{align*} \phi^*(dx_1 \wedge dx_2) =\amp \left(\frac{\partial x_1}{\partial t_1}\ dt_1 + \frac{\partial x_1}{\partial t_2}\ dt_2 \right) \wedge \left( \frac{\partial x_2}{\partial t_1} \ dt_1 + \frac{\partial x_2}{\partial t_2}\ dt_2 \right) \\ =\amp \left( \frac{\partial x_1}{\partial t_1} \frac{\partial x_2}{\partial t_2} - \frac{\partial x_2}{\partial t_1} \frac{\partial x_1}{\partial t_2} \right) dt_1 \wedge dt_2, \end{align*}

and the lemma is proved.

The calculation is similar but more involved for \(\mathbb{R}^3\text{.}\) We have:

\begin{equation*} \phi(t_1, t_2,t_3) = (x_1(t_1, t_2,t_3), x_2(t_1, t_2,t_3), x_3(t_1,t_2,t_3)). \end{equation*}

What we need to show is that

\begin{equation*} \phi^*(dx_1 \wedge dx_2 \wedge dx_3) = (\det J_\phi) dt_1 \wedge dt_2 \wedge dt_3, \end{equation*}

where

\begin{align*} \det J_\phi =\amp \det \begin{pmatrix} \frac{\partial x_1}{\partial t_1} \amp \frac{\partial x_1}{\partial t_2} \amp \frac{\partial x_1}{\partial t_3}\\ \frac{\partial x_2}{\partial t_1} \amp \frac{\partial x_2}{\partial t_2} \amp \frac{\partial x_2}{\partial t_3} \\ \frac{\partial x_3}{\partial t_1} \amp \frac{\partial x_3}{\partial t_2} \amp \frac{\partial x_3}{\partial t_3} \end{pmatrix}\\ =\amp \frac{\partial x_1}{\partial t_1} \frac{\partial x_2}{\partial t_2} \frac{\partial x_3}{\partial t_3} +\frac{\partial x_1}{\partial t_2} \frac{\partial x_2}{\partial t_3} \frac{\partial x_3}{\partial t_1}+\frac{\partial x_1}{\partial t_3} \frac{\partial x_2}{\partial t_1} \frac{\partial x_3}{\partial t_2} \\ \amp-\frac{\partial x_1}{\partial t_2} \frac{\partial x_2}{\partial t_1} \frac{\partial x_3}{\partial t_3}-\frac{\partial x_1}{\partial t_1} \frac{\partial x_2}{\partial t_3} \frac{\partial x_3}{\partial t_2}-\frac{\partial x_1}{\partial t_3} \frac{\partial x_2}{\partial t_2} \frac{\partial x_3}{\partial t_1}. \end{align*}

But

\begin{align*} \phi^*(dx_1 \wedge dx_2 \wedge dx_3) =\amp \left( \frac{\partial x_1}{\partial t_1}\ dt_1 + \frac{\partial x_1}{\partial t_2}\ dt_2 + \frac{\partial x_1}{\partial t_3} \ dt_3\right) \\ \amp \wedge \left( \frac{\partial x_2}{\partial t_1}\ dt_1 + \frac{\partial x_2}{\partial t_2}\ dt_2 + \frac{\partial x_2}{\partial t_3} \ dt_3\right)\\ \amp \wedge \left( \frac{\partial x_3}{\partial t_1}\ dt_1 + \frac{\partial x_3}{\partial t_2}\ dt_2 + \frac{\partial x_3}{\partial t_3} \ dt_3\right)\\ =\amp \Big( \frac{\partial x_1}{\partial t_1} \frac{\partial x_2}{\partial t_2} \frac{\partial x_3}{\partial t_3} +\frac{\partial x_1}{\partial t_2} \frac{\partial x_2}{\partial t_3} \frac{\partial x_3}{\partial t_1}+\frac{\partial x_1}{\partial t_3} \frac{\partial x_2}{\partial t_1} \frac{\partial x_3}{\partial t_2}\\ \amp -\frac{\partial x_1}{\partial t_2} \frac{\partial x_2}{\partial t_1} \frac{\partial x_3}{\partial t_3}-\frac{\partial x_1}{\partial t_1} \frac{\partial x_2}{\partial t_3} \frac{\partial x_3}{\partial t_2}-\frac{\partial x_1}{\partial t_3} \frac{\partial x_2}{\partial t_2} \frac{\partial x_3}{\partial t_1} \Big) dt_1 \wedge dt_2 \wedge dt_3, \end{align*}

which completes the proof in \(\mathbb{R}^3\text{.}\) Phew, this was painful.

The appearance of the Jacobian determinant here is quite nice. It will be related to the transformation formula for multiple integrals, when we integrate a top form over a bounded region in \(\mathbb{R}^n\text{.}\) We note however that we obtain the Jacobian determinant here, not its absolute value (in comparison to what you may have seen in previous calculus classes); this will be related to the fact that our theory of integration is oriented.

Subsection 4.7.3 The Jacobian matrix and two important theorems in multivariable calculus (optional)

In the previous subsection we introduced the Jacobian of a function and its determinant. As you may already know, the Jacobian determinant plays an important role in multivariable calculus. In this subsection we review two important theorems that involve the Jacobian determinant, for the sake of completeness. This subsection is, however, tangential to the rest of the text.

The first theorem is the Inverse Function Theorem, which gives a sufficient condition for a function to be invertible on an open neighborhood of a point in its domain. Remarkably, invertibility of the function is related to invertibility of the Jacobian matrix; indeed, if the Jacobian matrix is invertible at a point, i.e. its determinant is non-zero at this point, then the function is invertible on an open neighborhood of this point. Neat!
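
In its standard form (the precise phrasing may vary between references), the theorem can be stated as follows. Let \(\phi: U \to \mathbb{R}^n\) be a \(C^1\)-function on an open subset \(U \subseteq \mathbb{R}^n\text{,}\) and let \(\mathbf{a} \in U\) be a point such that \(\det J_\phi(\mathbf{a}) \neq 0\text{.}\) Then there exist open neighborhoods \(V\) of \(\mathbf{a}\) and \(W\) of \(\phi(\mathbf{a})\) such that \(\phi: V \to W\) is invertible with \(C^1\) inverse, and the Jacobian of the inverse is the inverse matrix of the Jacobian:

\begin{equation*} J_{\phi^{-1}}\big(\phi(\mathbf{a})\big) = \left( J_\phi(\mathbf{a}) \right)^{-1}. \end{equation*}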

This is a very powerful theorem, which relates invertibility of a function to invertibility of its Jacobian matrix.

Another important theorem that involves the Jacobian is the Implicit Function Theorem. This theorem concerns the following question: given a number of relations between a set of variables, can we represent these relations as the graph of a function? Let us be more precise.

Start with single variable calculus. Let \(f: \mathbb{R}^2 \to \mathbb{R}\) be a \(C^1\)-function, which we write as \(f(x,y)\text{.}\) Setting \(f(x,y)=0\) gives a relation between the variables \(x\) and \(y\text{.}\) Can we think of this relation as implicitly defining \(y\) as a function of \(x\text{?}\) The answer is yes, under mild conditions, and only locally.

More precisely, the statement is that, for any point \((a,b)\) in the domain of \(f\) such that \(f(a,b)=0\text{,}\) if \(\frac{\partial f}{\partial y}(a,b) \neq 0\text{,}\) then there exist an open neighborhood \(U \subseteq \mathbb{R}\) of \(a\) and a unique \(C^1\)-function \(g: U \to \mathbb{R}\) such that \(g(a) = b\) and \(f(x, g(x)) = 0\) for all \(x \in U\text{.}\) In other words, if \(\frac{\partial f}{\partial y}(a,b) \neq 0\text{,}\) the relation \(f(x,y)=0\) implicitly and uniquely defines \(y\) as a \(C^1\)-function \(y=g(x)\) locally near the point \((a,b)\text{.}\)
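
For instance, consider \(f(x,y) = x^2 + y^2 - 1\text{,}\) so that \(f(x,y) = 0\) is the unit circle. At a point \((a,b)\) on the circle with \(b \neq 0\text{,}\) we have \(\frac{\partial f}{\partial y}(a,b) = 2b \neq 0\text{,}\) and indeed near such a point the relation defines \(y\) uniquely as a \(C^1\)-function of \(x\text{:}\)

\begin{equation*} y = g(x) = \sqrt{1-x^2} \quad \text{if } b \gt 0, \qquad y = g(x) = -\sqrt{1-x^2} \quad \text{if } b \lt 0. \end{equation*}

At the points \((\pm 1, 0)\text{,}\) where \(\frac{\partial f}{\partial y} = 0\text{,}\) the theorem does not apply; indeed, near these points \(y\) cannot be written as a function of \(x\text{.}\)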

This statement can be naturally generalized to multivariable calculus using the Jacobian determinant.
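
In its standard multivariable form, the statement is the following. Write the variables as \(\mathbf{x} = (x_1, \ldots, x_n)\) and \(\mathbf{y} = (y_1, \ldots, y_m)\text{,}\) and let \(f = (f_1, \ldots, f_m): \mathbb{R}^{n+m} \to \mathbb{R}^m\) be a \(C^1\)-function with \(f(\mathbf{a}, \mathbf{b}) = 0\text{.}\) Suppose that the \(m \times m\) Jacobian matrix of \(f\) with respect to the \(\mathbf{y}\)-variables is invertible at \((\mathbf{a}, \mathbf{b})\text{,}\) that is,

\begin{equation*} \det \left( \frac{\partial f_i}{\partial y_j} \right) \Bigg|_{(\mathbf{a}, \mathbf{b})} \neq 0. \end{equation*}

Then there exist an open neighborhood \(U \subseteq \mathbb{R}^n\) of \(\mathbf{a}\) and a unique \(C^1\)-function \(g = (g_1, \ldots, g_m): U \to \mathbb{R}^m\) such that \(g(\mathbf{a}) = \mathbf{b}\) and \(f(\mathbf{x}, g(\mathbf{x})) = 0\) for all \(\mathbf{x} \in U\text{.}\)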

This is the natural multivariable generalization. If the Jacobian matrix of the function \(f(\mathbf{x}, \mathbf{y})\) with respect to the \(y\)-variables is invertible at a point \((\mathbf{a}, \mathbf{b})\text{,}\) the relation \(f(\mathbf{x},\mathbf{y}) = 0\) implicitly and uniquely defines the variables \(\mathbf{y}\) as \(C^1\)-functions \(\mathbf{y} = (g_1(\mathbf{x}), \ldots, g_m(\mathbf{x}))\) locally near \((\mathbf{a},\mathbf{b})\text{.}\)

Subsection 4.7.4 The pullback of a \(k\)-form: an algebraic approach (optional)

In this section we introduce a direct definition of the pullback using the algebraic approach to basic \(k\)-forms introduced in Subsection 4.1.1. We then show that the three fundamental properties that we used to define the pullback are satisfied, thus justifying our original axiomatic approach.

Recall from Definition 4.1.1 (naturally generalized to \(\mathbb{R}^n\)) that a basic one-form \(dx_i\) is a linear map \(dx_i: \mathbb{R}^n \to \mathbb{R}\) which acts as

\begin{equation*} dx_i(u_1, \ldots, u_n) = u_i, \end{equation*}

i.e. it outputs the \(i\)'th component of the vector \(\mathbf{u} \in \mathbb{R}^n\text{.}\) As we are now thinking of the basic one-forms \(dx_i\) as linear maps, we can define their pullback by composition, as we originally did for functions in Definition 2.4.1.

We first define and study the pullback when \(\phi\) is a linear map between vector spaces. We will generalize to the case of a smooth function afterwards.

Definition 4.7.10. The pullback of a basic one-form with respect to a linear map.

Let \(dx_i: \mathbb{R}^n \to \mathbb{R}\) be a basic one-form, and let \(\phi: \mathbb{R}^m \to \mathbb{R}^n\) be a linear map. We define the pullback \(\phi^*(dx_i): \mathbb{R}^m \to \mathbb{R}\) by composition:

\begin{equation*} \phi^*(dx_i) = dx_i \circ \phi: \mathbb{R}^m \to \mathbb{R}. \end{equation*}

Let us now give an explicit formula for the pullback of a basic one-form.
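
More precisely, write the linear map in matrix form as \(\phi(\mathbf{v}) = A \mathbf{v}\) with \(A = (a_{ij})\) an \(n \times m\) matrix, and let \(dt_1, \ldots, dt_m\) denote the basic one-forms on \(\mathbb{R}^m\text{.}\) Then the claim is that

\begin{equation*} \phi^*(dx_i) = a_{i1}\ dt_1 + \ldots + a_{im}\ dt_m. \end{equation*}

Let us verify this by a direct computation.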

Write \(\mathbf{v} = (v_1, \ldots, v_m) \in \mathbb{R}^m\text{.}\) Then

\begin{align*} \phi^*(dx_i)(\mathbf{v}) =\amp dx_i(\phi(\mathbf{v}))\\ =\amp dx_i(a_{11} v_1 + \ldots + a_{1m} v_m, \cdots, a_{n1} v_1 + \ldots + a_{nm} v_m )\\ =\amp a_{i1} v_1 + \ldots + a_{im} v_m \\ =\amp a_{i1} dt_1(\mathbf{v}) + \ldots + a_{im} dt_m(\mathbf{v}) \\ =\amp \left( a_{i1} dt_1 + \ldots + a_{im} dt_m \right) (\mathbf{v}). \end{align*}

It is then easy to generalize the definition of pullback to the basic \(k\)-forms.

Definition 4.7.12. The pullback of a basic \(k\)-form with respect to a linear map.

Let \(dx_{i_1} \wedge \ldots \wedge dx_{i_k}: (\mathbb{R}^n)^k \to \mathbb{R}\) be a basic \(k\)-form, and let \(\phi: \mathbb{R}^m \to \mathbb{R}^n\) be a linear map. Let \(\mathbf{v}^1, \ldots, \mathbf{v}^k \in \mathbb{R}^m\) be vectors. We define the pullback \(\phi^*(dx_{i_1}\wedge \ldots \wedge dx_{i_k}): (\mathbb{R}^m)^k \to \mathbb{R}\) by:

\begin{equation*} \phi^*(dx_{i_1}\wedge \ldots \wedge dx_{i_k})(\mathbf{v}^1, \ldots, \mathbf{v}^k) = dx_{i_1}\wedge \ldots \wedge dx_{i_k}(\phi(\mathbf{v}^1), \ldots, \phi(\mathbf{v}^k)). \end{equation*}

It follows directly from the definition that the pullback commutes with the wedge product:
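
More precisely, the claim (this is Lemma 4.7.13) is that, for any basic one-forms \(dx_{i_1}, \ldots, dx_{i_k}\) on \(\mathbb{R}^n\) and any linear map \(\phi: \mathbb{R}^m \to \mathbb{R}^n\text{,}\)

\begin{equation*} \phi^*(dx_{i_1} \wedge \ldots \wedge dx_{i_k}) = \phi^*(dx_{i_1}) \wedge \ldots \wedge \phi^*(dx_{i_k}). \end{equation*}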

By definition,

\begin{align*} \phi^*(dx_{i_1}) \wedge \ldots \wedge \phi^*(dx_{i_k})( \mathbf{v}^1, \ldots, \mathbf{v}^k) = \amp dx_{i_1}\wedge \ldots \wedge dx_{i_k} ( \phi(\mathbf{v}^1), \ldots, \phi(\mathbf{v}^k) ) \\ =\amp \phi^*(dx_{i_1}\wedge \ldots \wedge dx_{i_k})(\mathbf{v}^1, \ldots, \mathbf{v}^k). \end{align*}

As a corollary, we obtain an explicit formula to calculate the pullback of a basic \(k\)-form.
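
Namely, combining the explicit formula for the pullback of basic one-forms with Lemma 4.7.13, we get (this is essentially the content of Corollary 4.7.14):

\begin{equation*} \phi^*(dx_{i_1} \wedge \ldots \wedge dx_{i_k}) = \left( \sum_{j=1}^m a_{i_1 j}\ dt_j \right) \wedge \ldots \wedge \left( \sum_{j=1}^m a_{i_k j}\ dt_j \right). \end{equation*}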

We can also write down an explicit formula in terms of the determinant when we pullback a basic \(n\)-form from \(\mathbb{R}^n\) to \(\mathbb{R}^n\text{.}\)
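
The claim (this is Lemma 4.7.15) is that, if \(\phi: \mathbb{R}^n \to \mathbb{R}^n\) is a linear map with matrix \(A\text{,}\) then

\begin{equation*} \phi^*(dx_1 \wedge \cdots \wedge dx_n) = (\det A)\ dx_1 \wedge \cdots \wedge dx_n. \end{equation*}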

Let \(\mathbf{v}^1, \ldots, \mathbf{v}^n \in \mathbb{R}^n\text{.}\) Then

\begin{align*} \phi^*(dx_1 \wedge \cdots \wedge dx_n)(\mathbf{v}^1, \ldots, \mathbf{v}^n) =\amp dx_1 \wedge \cdots \wedge dx_n( \phi(\mathbf{v}^1), \ldots, \phi(\mathbf{v}^n) )\\ =\amp \det \begin{pmatrix} A_{11} v^1_1 + \ldots + A_{1 n} v^1_n \amp \cdots \amp A_{11} v^n_1 + \ldots + A_{1 n} v^n_n \\ \vdots \amp \ddots \amp \vdots \\ A_{n1} v^1_1 + \ldots + A_{n n} v^1_n \amp \cdots \amp A_{n1} v^n_1 + \ldots + A_{n n} v^n_n\end{pmatrix}\\ =\amp \det \left(\begin{pmatrix} A_{11} \amp \cdots \amp A_{1n} \\ \vdots \amp \ddots \amp \vdots \\ A_{n1} \amp \cdots \amp A_{nn} \end{pmatrix} \begin{pmatrix} v^1_1 \amp \cdots \amp v_1^n \\ \vdots \amp \ddots \amp \vdots \\ v_n^1 \amp \cdots \amp v_n^n \end{pmatrix} \right)\\ =\amp (\det A) \det \begin{pmatrix} v^1_1 \amp \cdots \amp v_1^n \\ \vdots \amp \ddots \amp \vdots \\ v_n^1 \amp \cdots \amp v_n^n \end{pmatrix}\\ =\amp (\det A) dx_1 \wedge \ldots \wedge dx_n (\mathbf{v}^1, \ldots, \mathbf{v}^n), \end{align*}

which concludes the proof.

This is all very nice, but so far we have only looked at basic \(k\)-forms and linear maps \(\phi: \mathbb{R}^m \to \mathbb{R}^n\text{.}\) How do we generalize this to general differential \(k\)-forms on \(U \subseteq \mathbb{R}^n\text{,}\) and to smooth functions \(\phi: V \to U\) with \(V \subseteq \mathbb{R}^m\text{?}\) The idea is simple. For any point \(\mathbf{t} \in V\text{,}\) the Jacobian matrix of \(\phi\) (i.e. the matrix of first partial derivatives), also called the “total derivative of \(\phi\)”, provides a linear map \(\mathbb{R}^m \to \mathbb{R}^n\text{.}\)

This is called the “differential” or “total derivative” of the smooth function \(\phi\text{:}\) in fancier differential geometry, one would say that it is a linear map from the tangent space of \(V\) at \(\mathbf{t}\) to the tangent space of \(U\) at \(\phi(\mathbf{t})\text{.}\)

In other words, given a smooth function \(\phi: V \to U\text{,}\) if we write \(\phi(\mathbf{t}) = (x_1(\mathbf{t}), \ldots, x_n(\mathbf{t}) )\text{,}\) we can construct the pullback of \(k\)-forms exactly as above, but with the specific linear map \(\mathbb{R}^m \to \mathbb{R}^n\) given by the Jacobian matrix:

\begin{equation*} A = (a_{ij} ) = \left( \frac{\partial x_i}{\partial t_j} \right). \end{equation*}

Then we see that we recover all the formulae that we obtained previously, and that our three fundamental properties are satisfied! The pullback of a basic one-form becomes

\begin{equation*} \phi^*(dx_i) = \frac{\partial x_i}{\partial t_1} \ dt_1 + \ldots + \frac{\partial x_i}{\partial t_m} \ dt_m, \end{equation*}

as before. Property 1 is obviously satisfied by definition. Lemma 4.7.13 becomes Property 2, and Corollary 4.7.14 becomes our general formula for the pullback of \(k\)-forms in Lemma 4.7.1. It is easy to check that Property 3 is satisfied. Finally, we obtain a general proof of Lemma 4.7.7 on \(\mathbb{R}^n\text{,}\) as this is simply Lemma 4.7.15. Neat!

Exercises 4.7.5 Exercises

1.

Consider the basic two-form \(dx \wedge dy\) on \(\mathbb{R}^2\text{,}\) and the polar coordinate transformation \(\alpha: \mathbb{R}^2 \to \mathbb{R}^2\) with

\begin{equation*} \alpha(r,\theta) = (r \cos \theta, r \sin \theta)\text{.} \end{equation*}

Show by explicit calculation that

\begin{equation*} \alpha^* (dx \wedge dy) = r\ dr \wedge d\theta = (\det J_\alpha) dr \wedge d \theta. \end{equation*}
Solution.

Let us start by calculating the pullback two-form. We get:

\begin{align*} \alpha^* (dx \wedge dy) =\amp (\cos \theta\ dr - r \sin \theta\ d \theta) \wedge (\sin \theta\ dr + r \cos \theta\ d \theta)\\ =\amp r \cos^2\theta\ dr \wedge d \theta - r \sin^2 \theta\ d \theta \wedge dr\\ =\amp r(\cos^2 \theta + \sin^2 \theta) dr \wedge d \theta\\ =\amp r\ dr \wedge d \theta. \end{align*}

Next, we show that \(\det J_\alpha = r\text{.}\) By definition of the Jacobian matrix, we have:

\begin{align*} \det J_\alpha =\amp \det \begin{pmatrix} \frac{\partial x}{\partial r} \amp \frac{\partial x}{\partial \theta} \\ \frac{\partial y}{\partial r} \amp \frac{\partial y}{\partial \theta} \end{pmatrix}\\ =\amp \det \begin{pmatrix} \cos \theta \amp - r \sin \theta \\ \sin \theta \amp r \cos \theta \end{pmatrix}\\ =\amp r \cos^2 \theta + r \sin^2 \theta\\ =\amp r. \end{align*}

Therefore,

\begin{equation*} \alpha^* (dx \wedge dy) = r\ dr \wedge d\theta = (\det J_\alpha) dr \wedge d \theta \end{equation*}

as claimed.

2.

Let

\begin{equation*} \omega = (x^2+y^2+z^2)\ dx \wedge dy \wedge dz \end{equation*}

on \(\mathbb{R}^3\text{,}\) and \(\alpha: \mathbb{R}^3 \to \mathbb{R}^3\) the spherical transformation

\begin{equation*} \alpha(r, \theta, \phi) = (r \sin(\theta) \cos(\phi), r \sin(\theta) \sin(\phi), r \cos(\theta) ). \end{equation*}
  1. Show by explicit calculation that

    \begin{equation*} \alpha^*(dx \wedge dy \wedge dz) = r^2 \sin(\theta)\ dr \wedge d \theta \wedge d \phi = (\det J_\alpha) dr \wedge d \theta \wedge d \phi. \end{equation*}

  2. Use this to find \(\alpha^* \omega\text{.}\)

Solution.
  1. We calculate the pullback:

    \begin{align*} \alpha^*(dx \wedge dy \wedge dz) =\amp (\sin(\theta) \cos(\phi) dr + r \cos(\theta)\cos(\phi) d \theta - r \sin(\theta)\sin(\phi) d\phi)\\ \amp \wedge (\sin(\theta)\sin(\phi) dr + r \cos(\theta) \sin(\phi) d\theta + r \sin(\theta) \cos(\phi) d\phi)\\ \amp \wedge (\cos(\theta) dr - r \sin(\theta) d \theta) \\ =\amp (r^2 \sin^3 (\theta)\cos^2 (\phi) + r^2 \sin(\theta)\cos^2(\theta)\cos^2(\phi) +r^2 \sin^3(\theta) \sin^2(\phi)\\ \amp + r^2 \sin(\theta)\cos^2(\theta)\sin^2(\phi) ) dr \wedge d \theta \wedge d \phi\\ =\amp r^2 (\sin^3(\theta) + \sin(\theta) \cos^2(\theta) )dr \wedge d \theta \wedge d \phi\\ =\amp r^2 \sin(\theta) dr \wedge d \theta \wedge d \phi. \end{align*}

    Next we show that indeed \(\det J_\alpha = r^2 \sin(\theta)\text{.}\) By definition of the Jacobian matrix, we get:

    \begin{align*} \det J_\alpha =\amp \det \begin{pmatrix} \frac{\partial x}{\partial r} \amp \frac{\partial x}{\partial \theta} \amp \frac{\partial x}{\partial \phi}\\ \frac{\partial y}{\partial r} \amp \frac{\partial y}{\partial \theta} \amp \frac{\partial y}{\partial \phi}\\ \frac{\partial z}{\partial r} \amp \frac{\partial z}{\partial \theta} \amp \frac{\partial z}{\partial \phi}\end{pmatrix}\\ =\amp \det \begin{pmatrix} \sin(\theta) \cos(\phi) \amp r \cos(\theta) \cos(\phi) \amp - r \sin(\theta)\sin(\phi) \\ \sin(\theta)\sin(\phi) \amp r \cos(\theta) \sin(\phi) \amp r \sin(\theta) \cos(\phi) \\ \cos(\theta) \amp - r \sin(\theta) \amp 0 \end{pmatrix}\\ =\amp r^2 \sin(\theta)\cos^2(\theta)\cos^2(\phi) +r^2 \sin^3(\theta)\sin^2(\phi) + r^2 \sin^3(\theta)\cos^2(\phi)\\ \amp +r^2 \sin(\theta)\cos^2(\theta)\sin^2(\phi)\\ =\amp r^2 \sin (\theta). \end{align*}

    Therefore

    \begin{equation*} \alpha^*(dx \wedge dy \wedge dz) = r^2 \sin(\theta)\ dr \wedge d \theta \wedge d \phi = (\det J_\alpha) dr \wedge d \theta \wedge d \phi \end{equation*}

    as claimed.

  2. To find \(\alpha^* \omega\) we can use the result in (a). We get:

    \begin{align*} \alpha^* \omega =\amp (r^2 \sin^2(\theta) \cos^2(\phi) + r^2 \sin^2(\theta) \sin^2(\phi)+ r^2 \cos^2(\theta) ) \alpha^*(dx \wedge dy \wedge dz)\\ =\amp (r^2 \sin^2(\theta) + r^2 \cos^2(\theta) ) r^2 \sin(\theta) dr \wedge d \theta \wedge d \phi\\ =\amp r^4 \sin(\theta) dr \wedge d \theta \wedge d \phi. \end{align*}

3.

Let

\begin{equation*} \omega = (x^2+y^2) \left( x\ dy \wedge dz + y\ dz \wedge dx + z\ dx \wedge dy \right) \end{equation*}

on \(\mathbb{R}^3\text{,}\) and \(\alpha: \mathbb{R}^3 \to \mathbb{R}^3\) be the cylindrical transformation

\begin{equation*} \alpha(r,\theta,w) = (r \cos \theta, r \sin \theta, w). \end{equation*}
  1. Find \(\alpha^* \omega\text{.}\)

  2. Show by explicit calculation that \(d(\alpha^* \omega) = \alpha^*( d \omega)\text{.}\)

Solution.
  1. To simplify the calculation of the pullback, let us first calculate the pullback of the basic two-forms:

    \begin{align*} \alpha^*(dy \wedge dz) =\amp (\sin(\theta) dr + r \cos(\theta) d\theta) \wedge dw\\ =\amp \sin(\theta) dr \wedge dw + r \cos(\theta) d \theta \wedge dw, \end{align*}
    \begin{align*} \alpha^*(dz \wedge dx)=\amp dw \wedge (\cos(\theta) dr - r \sin(\theta) d\theta) \\ =\amp - \cos(\theta) dr \wedge dw + r \sin(\theta) d\theta \wedge dw, \end{align*}

    and

    \begin{align*} \alpha^*( dx \wedge dy) =\amp (\cos(\theta) dr - r \sin(\theta) d\theta) \wedge (\sin(\theta) dr + r \cos(\theta) d\theta)\\ =\amp r \cos^2(\theta) dr \wedge d \theta + r \sin^2(\theta) dr \wedge d \theta\\ =\amp r dr \wedge d \theta. \end{align*}

    We also observe that

    \begin{equation*} \alpha^*(x^2+y^2) = r^2 \cos^2(\theta) + r^2 \sin^2(\theta) = r^2. \end{equation*}

    Putting this together, we get:

    \begin{align*} \alpha^* \omega =\amp \alpha^*(x^2+y^2) \alpha^*\left( x\ dy \wedge dz + y\ dz \wedge dx + z\ dx \wedge dy \right)\\ =\amp r^2 ( r \cos(\theta) (\sin(\theta) dr \wedge dw + r \cos(\theta) d \theta \wedge dw)\\ \amp + r \sin(\theta) ( - \cos(\theta) dr \wedge dw + r \sin(\theta) d\theta \wedge dw) + r w dr \wedge d \theta ) \\ =\amp r^3 (r d \theta \wedge dw + w dr \wedge d \theta). \end{align*}
  2. On the one hand, we calculated in (a) the pullback \(\alpha^* \omega = r^3 (r d \theta \wedge dw + w dr \wedge d \theta)\text{.}\) We can calculate its exterior derivative:

    \begin{align*} d(\alpha^* \omega) =\amp d(r^4) \wedge d \theta \wedge dw + d(r^3 w) \wedge dr \wedge d\theta\\ =\amp (4 r^3 + r^3) dr \wedge d\theta \wedge dw\\ =\amp 5 r^3 dr \wedge d \theta \wedge dw. \end{align*}

    On the other hand, we can calculate first the exterior derivative of \(\omega\text{.}\) We get:

    \begin{align*} d \omega =\amp d( (x^2+y^2) x) \wedge dy \wedge dz + d( (x^2+y^2) y) \wedge dz \wedge dx + d((x^2+y^2) z) \wedge dx \wedge dy\\ =\amp ((3 x^2 + y^2) + (3 y^2 + x^2) + (x^2+y^2) ) dx \wedge dy \wedge dz \\ =\amp 5(x^2+y^2) dx \wedge dy \wedge dz. \end{align*}

    We then calculate its pullback:

    \begin{align*} \alpha^* (d \omega)=\amp 5 \alpha^*(x^2+y^2)\alpha^* (dx \wedge dy \wedge dz)\\ =\amp 5 r^2 (\cos(\theta) dr - r \sin(\theta) d\theta) \wedge (\sin(\theta) dr + r \cos(\theta) d\theta) \wedge dw\\ =\amp 5 r^2 (r \cos^2(\theta) + r \sin^2(\theta) ) dr \wedge d\theta \wedge dw\\ =\amp 5 r^3\ dr \wedge d\theta \wedge dw. \end{align*}

    We conclude that

    \begin{equation*} d(\alpha^* \omega) = \alpha^*(d \omega), \end{equation*}

    as claimed.

4.

Let

\begin{equation*} \omega = z e^{x y}\ dx \wedge dy \end{equation*}

on \(\mathbb{R}^3\text{,}\) and \(\phi: (\mathbb{R}_{\neq 0})^2 \to \mathbb{R}^3\) with:

\begin{equation*} \phi(u,v) = \left( \frac{u}{v},\frac{v}{u}, u v \right). \end{equation*}

Find \(\phi^* \omega\text{.}\)

Solution.

We calculate the pullback:

\begin{align*} \phi^* \omega=\amp u v e^{ \frac{u}{v} \frac{v}{u} } \left( \frac{1}{v} du - \frac{u}{v^2} dv \right) \wedge \left( -\frac{v}{u^2} du + \frac{1}{u} dv \right)\\ =\amp u v e \left( \frac{1}{uv} - \frac{1}{uv} \right) du \wedge dv\\ =\amp 0. \end{align*}

5.

Show that the pullback of an exact form is always exact.

Solution.

Let \(\omega\) be an exact \(k\)-form on \(U \subseteq \mathbb{R}^n\text{,}\) and let \(\phi: V \to U\) be a smooth function for \(V \subseteq \mathbb{R}^m\text{.}\) We want to prove that the pullback of \(\omega\) is exact.

Since \(\omega\) is exact, we know that there exists a \((k-1)\)-form \(\eta\) on \(U\) such that \(\omega = d \eta\text{.}\) Then

\begin{equation*} \phi^* \omega = \phi^*(d \eta) = d(\phi^* \eta), \end{equation*}

where we used the fact that the pullback commutes with the exterior derivative (see Lemma 4.7.4). Therefore, the \(k\)-form \(\phi^* \omega\) on \(V\) is the exterior derivative of a \((k-1)\)-form \(\phi^* \eta\) on \(V\text{,}\) and hence it is exact.

6.

Let \(\omega\) be a \(k\)-form on \(\mathbb{R}^n\text{,}\) and \(Id: \mathbb{R}^n \to \mathbb{R}^n\) be the identity function defined by \(Id(x_1, \ldots, x_n) = (x_1, \ldots, x_n)\text{.}\) Show that

\begin{equation*} Id^* \omega = \omega. \end{equation*}
Solution.

We can write a general \(k\)-form on \(\mathbb{R}^n\) as

\begin{equation*} \omega = \sum_{1 \leq i_1 \lt \cdots \lt i_k \leq n} f_{i_1 \cdots i_k} dx_{i_1} \wedge \cdots \wedge dx_{i_k} \end{equation*}

for some smooth functions \(f_{i_1 \cdots i_k} : \mathbb{R}^n \to \mathbb{R}\text{.}\) We know that \(Id^*(dx_i) = dx_i\) for all \(i \in \{1,\ldots,n\}\text{,}\) by definition of the identity map. Similarly, for any function \(f: \mathbb{R}^n \to \mathbb{R}\text{,}\) \(Id^* f = f\text{.}\) As a result, we get:

\begin{align*} Id^* \omega =\amp \sum_{1 \leq i_1 \lt \cdots \lt i_k \leq n} Id^* (f_{i_1 \cdots i_k}) Id^*(dx_{i_1}) \wedge \cdots \wedge Id^*(dx_{i_k})\\ =\amp \sum_{1 \leq i_1 \lt \cdots \lt i_k \leq n}f_{i_1 \cdots i_k} dx_{i_1} \wedge \cdots \wedge dx_{i_k}\\ =\amp \omega. \end{align*}

7.

Let \(\omega\) be a \(k\)-form on \(U \subseteq \mathbb{R}^n\text{.}\) Let \(\phi:V \to U\) and \(\alpha: W \to V\) be smooth functions, with \(V \subseteq \mathbb{R}^m\) and \(W \subseteq \mathbb{R}^\ell\text{.}\) Show that

\begin{equation*} (\phi \circ \alpha)^* \omega = \alpha^*(\phi^* \omega). \end{equation*}

In other words, it doesn't matter whether we pullback in one or two steps in the chain of maps

\begin{equation*} W \overset{\alpha}{\rightarrow} V \overset{\phi}{\rightarrow} U. \end{equation*}
Solution.

We can write a general \(k\)-form on \(U\) as

\begin{equation*} \omega = \sum_{1 \leq i_1 \lt \cdots \lt i_k \leq n} f_{i_1 \cdots i_k} dx_{i_1} \wedge \cdots \wedge dx_{i_k} \end{equation*}

for some smooth functions \(f_{i_1 \cdots i_k} : U \to \mathbb{R}\text{.}\) On the one hand, the pullback by \(\phi \circ \alpha\) is

\begin{equation*} (\phi \circ \alpha)^* \omega = \sum_{1 \leq i_1 \lt \cdots \lt i_k \leq n}(\phi \circ \alpha)^* (f_{i_1 \cdots i_k}) (\phi \circ \alpha)^*(dx_{i_1}) \wedge \cdots \wedge (\phi \circ \alpha)^*(dx_{i_k}). \end{equation*}

On the other hand, pulling back in two steps, we get:

\begin{equation*} \alpha^*(\phi^* \omega) = \sum_{1 \leq i_1 \lt \cdots \lt i_k \leq n} \alpha^*(\phi^* f_{i_1 \cdots i_k})\alpha^*(\phi^* dx_{i_1}) \wedge \cdots \wedge \alpha^*(\phi^* dx_{i_k}). \end{equation*}

So all we have to show is that

\begin{equation*} (\phi \circ \alpha)^* f = \alpha^*(\phi^* f) \end{equation*}

for any smooth function \(f:U \to \mathbb{R}\text{,}\) and

\begin{equation*} (\phi \circ \alpha)^* dx_i = \alpha^*(\phi^* dx_i) \end{equation*}

for any \(i \in \{1, \ldots, n \}\text{.}\)

First, for any function \(f:U \to \mathbb{R}\text{,}\)

\begin{equation*} (\phi \circ \alpha)^*f = f \circ \phi \circ \alpha, \end{equation*}

while

\begin{equation*} \alpha^*(\phi^* f) = \alpha^* (f \circ \phi) = f \circ \phi \circ \alpha. \end{equation*}

Thus

\begin{equation*} (\phi \circ \alpha)^*f = \alpha^*(\phi^* f). \end{equation*}

As for the basic one-forms, let us introduce further notation for the maps \(\phi: V \to U\) and \(\alpha: W \to V\text{.}\) Let us write \(\mathbf{z}=(z_1,\ldots,z_\ell)\) for coordinates on \(W\text{;}\) \(\mathbf{y}=(y_1,\ldots,y_m)\) for coordinates on \(V\text{;}\) and \(\mathbf{x}=(x_1,\ldots,x_n)\) for coordinates on \(U\text{.}\) We write \(\phi(\mathbf{y}) = (\phi_1(\mathbf{y}), \ldots, \phi_n(\mathbf{y}) )\text{,}\) and \(\alpha(\mathbf{z}) = (\alpha_1(\mathbf{z}), \ldots, \alpha_m(\mathbf{z}) )\text{.}\) Then, we have:

\begin{equation*} \phi^* dx_i = \sum_{a=1}^m \frac{\partial \phi_i}{\partial y_a} d y_a, \end{equation*}

and

\begin{equation*} \alpha^*(\phi^* dx_i) = \sum_{b=1}^\ell \sum_{a=1}^m \frac{\partial \phi_i}{\partial y_a} \Big|_{\mathbf{y} = \alpha(\mathbf{z})} \frac{\partial \alpha_a}{\partial z_b} dz_b. \end{equation*}

On the other hand, if we pullback by the composition of the maps, we get:

\begin{equation*} (\phi \circ \alpha)^* dx_i = \sum_{b=1}^\ell \frac{\partial (\phi \circ \alpha)_i}{\partial z_b} dz_b. \end{equation*}

But the equality

\begin{equation*} \frac{\partial (\phi \circ \alpha)_i}{\partial z_b} = \sum_{a=1}^m \frac{\partial \phi_i}{\partial y_a} \Big|_{\mathbf{y} = \alpha(\mathbf{z})} \frac{\partial \alpha_a}{\partial z_b} \end{equation*}

is precisely the chain rule for multi-variable functions (written in terms of composition of functions). Thus we conclude that

\begin{equation*} \alpha^*(\phi^* dx_i) =(\phi \circ \alpha)^* dx_i. \end{equation*}

Putting all of this together, we conclude that

\begin{equation*} (\phi \circ \alpha)^* \omega = \alpha^*(\phi^* \omega), \end{equation*}

which is the statement of the question.