Section 4.7 The pullback of a \(k\)-form
In Section 2.4 we defined the pullback of a one-form. We now generalize this concept to \(k\)-forms. We first proceed axiomatically, as we did for one-forms, and relate the result to the Jacobian determinant. We also introduce a more direct definition of the pullback using the algebraic approach to basic \(k\)-forms introduced in Subsection 4.1.1, and show that it is consistent with our axiomatic approach.
Objectives
You should be able to:
Determine the pullback of a \(k\)-form.
Show that the pullback of a \(2\)-form in \(\mathbb{R}^2\) and a \(3\)-form in \(\mathbb{R}^3\) is given by the Jacobian determinant.
Subsection 4.7.1 The pullback of a \(k\)-form: an axiomatic approach
In Section 2.4 we defined the pullback of a one-form. We used an “axiomatic” approach: we first defined the properties that we wanted the pullback to satisfy, and showed that there is a unique construction that satisfies these properties. The properties that we specified were, in words:
The pullback of a sum of one-forms is the sum of the pullbacks.
The pullback of a product of a zero-form and a one-form is the product of the pullbacks.
The pullback of the exterior derivative of a zero-form is the exterior derivative of the pullback.
In this section we define the pullback of a \(k\)-form using a similar approach. All we need to do is generalize the first and second properties to \(k\)-forms.
More precisely, let \(\phi: V \to U\) be a smooth function, with \(V \subseteq \mathbb{R}^m\) and \(U\subseteq \mathbb{R}^n\) open subsets. We want the pullback \(\phi^*\) to satisfy the following properties:
If \(\omega\) and \(\eta\) are \(k\)-forms on \(U\text{,}\) then
\begin{equation*} \phi^*(\omega + \eta) = \phi^* \omega + \phi^* \eta. \end{equation*}
If \(\omega\) is a \(k\)-form and \(\eta\) an \(\ell\)-form on \(U\text{,}\) then
\begin{equation*} \phi^*(\omega \wedge \eta) = (\phi^* \omega) \wedge (\phi^* \eta). \end{equation*}
If \(f\) is a zero-form (a function) on \(U\text{,}\) then
\begin{equation*} \phi^*(df) = d(\phi^* f). \end{equation*}
The third property is unchanged, and the first two are naturally generalized to \(k\)-forms.
Imposing these properties gives us a unique definition for the pullback of a \(k\)-form.
Lemma 4.7.1. The pullback of a \(k\)-form.
Let \(\phi: V \to U\) be a smooth function, with \(V \subseteq \mathbb{R}^m\) and \(U\subseteq \mathbb{R}^n\) open subsets. We write \(\mathbf{t} = (t_1, \ldots, t_m) \in V\text{,}\) and \(\phi(\mathbf{t}) = (x_1(\mathbf{t}), \ldots, x_n(\mathbf{t}) ).\) Let \(\omega\) be a \(k\)-form on \(U\text{,}\) with
\begin{equation*} \omega = \sum_{1 \leq i_1 \lt \cdots \lt i_k \leq n} f_{i_1 \cdots i_k} dx_{i_1} \wedge \cdots \wedge dx_{i_k} \end{equation*}
for some functions \(f_{i_1 \cdots i_k}: U \to \mathbb{R}\text{.}\) Then the pullback \(\phi^* \omega\) is a \(k\)-form on \(V\) given by:
\begin{equation*} \phi^* \omega = \sum_{1 \leq i_1 \lt \cdots \lt i_k \leq n} (f_{i_1 \cdots i_k} \circ \phi) \left( \sum_{j=1}^m \frac{\partial x_{i_1}}{\partial t_j} dt_j \right) \wedge \cdots \wedge \left( \sum_{j=1}^m \frac{\partial x_{i_k}}{\partial t_j} dt_j \right). \end{equation*}
Proof.
Start with a basic \(k\)-form
\begin{equation*} dx_{i_1} \wedge \cdots \wedge dx_{i_k}. \end{equation*}
Since we impose Property 2, namely that \(\phi^* (\omega \wedge \eta) = (\phi^* \omega) \wedge (\phi^* \eta)\text{,}\) we know that
\begin{equation*} \phi^*( dx_{i_1} \wedge \cdots \wedge dx_{i_k}) = \phi^*(dx_{i_1}) \wedge \cdots \wedge \phi^*(dx_{i_k}). \end{equation*}
But we already calculated how basic one-forms transform in Lemma 2.4.4; they transform as differentials would. This calculation followed from Property 3, which we still impose here, so it is still valid. This gives us the pullback of basic \(k\)-forms.
Then we use Property 2 to extend it to a function times a basic \(k\)-form, and then Property 1 to extend it to linear combinations of such terms, to get the final result for the pullback of a generic \(k\)-form.
This formula certainly looks ugly, but it is really not that bad. Concretely, all you need to do is compose the component functions with \(\phi\text{,}\) and transform the basic one-forms one by one in the wedge product as in Lemma 2.4.4, i.e. they transform as you would expect differentials to transform. That's it! This will certainly be clearer with examples.
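Readers who like to experiment can check such computations with a computer algebra system. The following SymPy sketch (an illustration, not part of the text) pulls back a basic two-form \(dx_i \wedge dx_j\) by a hypothetical map \(\phi(u,v) = (uv,\, u+v,\, u^2)\text{,}\) using the fact that the coefficient of \(du \wedge dv\) in \(\phi^*(dx_i \wedge dx_j)\) is the determinant of the corresponding \(2 \times 2\) minor of the Jacobian:

```python
import sympy as sp

u, v = sp.symbols('u v')
# hypothetical smooth map phi: R^2 -> R^3 (an illustrative choice)
x = [u*v, u + v, u**2]

def pullback_basic_two_form(i, j):
    # phi^*(dx_i ^ dx_j): the coefficient of du ^ dv is the determinant
    # of the 2x2 minor of the Jacobian built from rows i and j
    minor = sp.Matrix([[sp.diff(x[i], u), sp.diff(x[i], v)],
                       [sp.diff(x[j], u), sp.diff(x[j], v)]])
    return sp.simplify(minor.det())

# phi^*(dx ^ dy) = (v - u) du ^ dv for this map
coeff = pullback_basic_two_form(0, 1)
```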
Example 4.7.2. The pullback of a two-form.
Consider the following two-form on \(\mathbb{R}^3\text{:}\)
and the smooth function \(\phi: \mathbb{R}^2 \to \mathbb{R}^3\) given by
To calculate \(\phi^* \omega\text{,}\) let us start by calculating the pullback of the basic one-forms. We get:
Putting this together, we get:
Note that we used the fact that \(du \wedge du = dv \wedge dv = 0\text{,}\) and \(dv \wedge du = - du \wedge dv\text{.}\)
Example 4.7.3. The pullback of a three-form.
Consider the following three-form on \(\mathbb{R}^3\text{:}\)
and the smooth function \(\phi: \mathbb{R}^3_{>0} \to \mathbb{R}^3\) given by
The pullback of the basic one-forms is:
We get:
where we used the fact that basic three-forms vanish whenever one of the factors is repeated, and \(dv \wedge dw \wedge du = du \wedge dv \wedge dw\) since we need to exchange two basic one-forms twice to relate the two basic three-forms.
Good, so we are now experts at computing pullbacks! Calculating the pullback of a \(k\)-form is not more difficult than calculating the pullback of a one-form, but the calculation may be longer, and you need to use the anti-symmetry properties of basic \(k\)-forms in Lemma 4.1.6 to simplify the result at the end of your calculation.
Property 3 in our axiomatic definition states that the pullback commutes with the exterior derivative for zero-forms. It turns out that this property, which is very important, holds in general for \(k\)-forms. Let us prove this.
Lemma 4.7.4. The pullback commutes with the exterior derivative.
Let \(\phi: V \to U\) be a smooth function, with \(V \subseteq \mathbb{R}^m\) and \(U\subseteq \mathbb{R}^n\) open subsets. Let \(\omega\) be a \(k\)-form on \(U\text{.}\) Then:
\begin{equation*} \phi^*( d \omega) = d( \phi^* \omega). \end{equation*}
Proof.
Recall the definition of the exterior derivative Definition 4.3.1. Let \(\displaystyle \omega = \sum_{1 \leq i_1 \lt \cdots \lt i_k \leq n} f_{i_1 \cdots i_k} dx_{i_1} \wedge \cdots \wedge dx_{i_k}\text{.}\) Then
Using Properties 1 and 2, we can write
Now we can use Property 3, which states that
\begin{equation*} \phi^*( d f_{i_1 \cdots i_k}) = d( \phi^* f_{i_1 \cdots i_k}), \end{equation*}
since the \(f_{i_1 \cdots i_k}\) are just functions (zero-forms). Thus we have:
where the last line follows from Properties 1 and 2 again.
Subsection 4.7.2 The pullback of a top form in \(\mathbb{R}^n\) and the Jacobian determinant
There is a special case for which the pullback takes a very nice form. This case will play a role shortly in our theory of integration, as it will be related to the transformation formula (the generalization of the substitution formula) for multiple integrals.
Let us start by recalling the definition of the Jacobian of a smooth function.
Definition 4.7.5. The Jacobian.
Let \(\phi:V \to U\) be a smooth function with \(U,V \subseteq \mathbb{R}^n\) open subsets. Let us write \(\mathbf{t} = (t_1, \ldots, t_n) \in V\text{,}\) and
The Jacobian of \(\phi\text{,}\) which we denote by \(\frac{\partial(x_1, \ldots, x_n)}{\partial(t_1, \ldots, t_n)}\) or \(J_\phi\) or \(D \phi\) (lots of different notations!), is the \(n \times n\) matrix of first partial derivatives:
Its determinant is called the Jacobian determinant.
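As a quick illustration (a sketch, assuming the familiar polar coordinate map, which also appears in the exercises of this section), one can compute a Jacobian matrix and its determinant with SymPy's `jacobian` method:

```python
import sympy as sp

r, theta = sp.symbols('r theta')
# the polar coordinate map (r, theta) -> (x, y)
phi = sp.Matrix([r*sp.cos(theta), r*sp.sin(theta)])

J = phi.jacobian([r, theta])     # the 2x2 matrix of first partial derivatives
detJ = sp.simplify(J.det())      # r*cos(theta)**2 + r*sin(theta)**2 = r
```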
It turns out that if we pull back an \(n\)-form with respect to a function \(\phi\) as in Definition 4.7.5, we can write \(\phi^* \omega\) in terms of the Jacobian determinant. First, let us introduce the common name “top form” for an \(n\)-form on an open subset \(U \subseteq \mathbb{R}^n\text{.}\)
Definition 4.7.6. Top form.
We call an \(n\)-form on an open subset \(U \subseteq \mathbb{R}^n\) a top form.
Such forms are called “top forms” because all forms of degree \(k \gt n\) necessarily vanish on \(\mathbb{R}^n\text{,}\) so \(n\) is the top degree in which a non-zero form exists. Going back to the pullback, we get the following nice result when we pull back a top form:
Lemma 4.7.7. The pullback of a top form in \(\mathbb{R}^n\) in terms of the Jacobian determinant.
Suppose that \(\omega\) is a top form on \(U \subseteq \mathbb{R}^n\text{,}\) and \(\phi:V \to U\) a smooth function with \(V \subseteq \mathbb{R}^n\text{.}\) As above we write
\begin{equation*} \phi(\mathbf{t}) = (x_1(\mathbf{t}), \ldots, x_n(\mathbf{t}) ), \end{equation*}
with \(\mathbf{t} = (t_1, \ldots, t_n) \in V\text{,}\) and
\begin{equation*} \omega = f\, dx_1 \wedge \cdots \wedge dx_n \end{equation*}
for some function \(f: U \to \mathbb{R}\text{.}\) Then
\begin{equation*} \phi^* \omega = (f \circ \phi)\, (\det J_\phi)\, dt_1 \wedge \cdots \wedge dt_n, \end{equation*}
where \(\det J_\phi\) is the Jacobian determinant.
Proof for \(\mathbb{R}^2\) and \(\mathbb{R}^3\).
It is not so easy to write a general proof for \(\mathbb{R}^n\) using the computational approach for the pullback that we have used so far. We will be able to write down a general proof easily in the next subsection after having introduced a more algebraic approach to the pullback. For the time being, let us prove the statement explicitly for \(\mathbb{R}^2\) and \(\mathbb{R}^3\text{.}\)
For \(\mathbb{R}^2\text{,}\) our function \(\phi\) takes the form
\begin{equation*} \phi(t_1, t_2) = (x(t_1,t_2), y(t_1,t_2) ). \end{equation*}
What we need to show is that
\begin{equation*} \phi^*( f\, dx \wedge dy) = (f \circ \phi)\, (\det J_\phi)\, dt_1 \wedge dt_2, \end{equation*}
where
\begin{equation*} \det J_\phi = \frac{\partial x}{\partial t_1} \frac{\partial y}{\partial t_2} - \frac{\partial x}{\partial t_2} \frac{\partial y}{\partial t_1}. \end{equation*}
But
\begin{align*} \phi^*( f\, dx \wedge dy) =\amp (f \circ \phi) \left( \frac{\partial x}{\partial t_1} dt_1 + \frac{\partial x}{\partial t_2} dt_2 \right) \wedge \left( \frac{\partial y}{\partial t_1} dt_1 + \frac{\partial y}{\partial t_2} dt_2 \right)\\ =\amp (f \circ \phi) \left( \frac{\partial x}{\partial t_1} \frac{\partial y}{\partial t_2} - \frac{\partial x}{\partial t_2} \frac{\partial y}{\partial t_1} \right) dt_1 \wedge dt_2\\ =\amp (f \circ \phi)\, (\det J_\phi)\, dt_1 \wedge dt_2, \end{align*}
and the lemma is proved.
The calculation is similar but more involved for \(\mathbb{R}^3\text{.}\) We have:
What we need to show is that
where
But
which completes the proof in \(\mathbb{R}^3\text{.}\) Phew, this was painful.
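For readers who want evidence for the general \(\mathbb{R}^n\) statement before the algebraic proof of the next subsection, here is a SymPy sketch for \(n=3\) with symbolic component functions: expanding the wedge product of the three pulled-back one-forms collects exactly the signed permutation sum, which is the Leibniz formula for \(\det J_\phi\text{.}\)

```python
import sympy as sp
from itertools import permutations

t = sp.symbols('t1 t2 t3')
# symbolic component functions x1, x2, x3 of a generic smooth map phi
x = [sp.Function(f'x{i+1}')(*t) for i in range(3)]
J = sp.Matrix(3, 3, lambda i, j: sp.diff(x[i], t[j]))

def sign(p):
    # parity of a permutation, computed by counting inversions
    inv = sum(1 for a in range(3) for b in range(a + 1, 3) if p[a] > p[b])
    return -1 if inv % 2 else 1

# expanding phi^*(dx1 ^ dx2 ^ dx3) term by term gives the signed
# permutation sum, i.e. the Leibniz formula for det(J_phi)
wedge_sum = sum(sign(p) * J[0, p[0]] * J[1, p[1]] * J[2, p[2]]
                for p in permutations(range(3)))
agrees = sp.simplify(wedge_sum - J.det()) == 0
```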
The appearance of the Jacobian determinant here is quite nice. It will be related to the transformation formula for multiple integrals, when we integrate a top form over a bounded region in \(\mathbb{R}^n\text{.}\) We note however that we obtain the Jacobian determinant here, not its absolute value (in comparison to what you may have seen in previous calculus classes); this will be related to the fact that our theory of integration is oriented.
Subsection 4.7.3 The Jacobian matrix and two important theorems in multivariable calculus (optional)
In the previous subsection we introduced the Jacobian of a function and its determinant. As you may already know, the Jacobian determinant plays an important role in multivariable calculus. In this section we review, for the sake of completeness, two important theorems that involve the Jacobian determinant. This section, however, is tangential to the rest of the text.
The first theorem is the Inverse Function Theorem, which gives a sufficient condition for a function to be invertible on an open neighborhood of a point in its domain. Remarkably, invertibility of the function is related to invertibility of the Jacobian matrix: if the Jacobian matrix is invertible at a point, i.e., its determinant is non-zero at this point, then the function is invertible on an open neighborhood of this point. Neat!
Theorem 4.7.8. Inverse Function Theorem.
Let \(\phi:U \to \mathbb{R}^n\) be a \(C^1\) (continuously differentiable) function, with \(U \subseteq \mathbb{R}^n\) an open subset. Let \(a \in U\text{,}\) and suppose that the determinant of the Jacobian of \(\phi\) at \(a\) is non-zero, that is, \(\det J_\phi (a) \neq 0\text{.}\) Then there exist open neighborhoods \(A \subseteq U\) of \(a\) and \(B \subseteq \mathbb{R}^n\) of \(b=\phi(a)\) such that \(\phi: A \to B\) is bijective (that is, invertible).
This is a very powerful theorem, which relates invertibility of a function to invertibility of its Jacobian matrix.
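A sketch of the theorem in action (an illustrative map, not from the text): the map \(\phi(x,y) = (e^x \cos y, e^x \sin y)\) has \(\det J_\phi = e^{2x} \neq 0\) everywhere, so it is locally invertible near every point, even though it is not globally injective (it is periodic in \(y\)):

```python
import sympy as sp

x, y = sp.symbols('x y')
# an illustrative map that is locally, but not globally, invertible
phi = sp.Matrix([sp.exp(x)*sp.cos(y), sp.exp(x)*sp.sin(y)])

detJ = sp.simplify(phi.jacobian([x, y]).det())
# detJ = exp(2*x) is never zero, so the theorem applies at every point
```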
Another important theorem that involves the Jacobian is the Implicit Function Theorem. This theorem concerns the following question: suppose that we are given a number of relations between a set of variables, can we represent these relations as the graph of a function? Let us be more precise.
Start with single variable calculus. Let \(f: \mathbb{R}^2 \to \mathbb{R}\) be a \(C^1\)-function, which we write as \(f(x,y)\text{.}\) Setting \(f(x,y)=0\) gives a relation between the variables \(x\) and \(y\text{.}\) Can we think of this relation as implicitly defining \(y\) as a function of \(x\text{?}\) The answer is yes, under mild conditions, and only locally.
More precisely, the statement is that, for any point \((a,b)\) in the domain of \(f\) such that \(f(a,b)=0\text{,}\) if \(\frac{\partial f}{\partial y}(a,b) \neq 0\text{,}\) then there exists an open neighborhood \(U \subseteq \mathbb{R}\) of \(a\) such that there exists a unique \(C^1\)-function \(g: U \to \mathbb{R}\) such that \(g(a) = b\) and \(f(x, g(x)) = 0\) for all \(x \in U\text{.}\) In other words, if \(\frac{\partial f}{\partial y}(a,b) \neq 0\text{,}\) the relation \(f(x,y)=0\) implicitly and uniquely defines \(y\) as a \(C^1\)-function \(y=g(x)\) locally near the point \((a,b)\text{.}\)
This statement can be naturally generalized to multivariable calculus using the Jacobian determinant.
Theorem 4.7.9. Implicit Function Theorem.
Let \(f : \mathbb{R}^{n+m} \to \mathbb{R}^m\) be a \(C^1\)-function. Let us denote by \((\mathbf{x}, \mathbf{y}) = (x_1, \ldots, x_n, y_1, \ldots, y_m)\) coordinates on \(\mathbb{R}^{n+m}\text{.}\) We can write the function \(f\) in terms of its component functions as
Let \((\mathbf{a}, \mathbf{b}) \in \mathbb{R}^{n+m}\) be a point such that \(f(\mathbf{a}, \mathbf{b}) = 0\text{.}\) Suppose that \(\det J_{f,y} (\mathbf{a},\mathbf{b}) \neq 0\text{,}\) where \(J_{f,y}\) is the Jacobian of \(f\) with respect to the variables \(y\text{:}\)
(In other words, this Jacobian matrix is invertible.) Then there exists an open neighborhood \(U \subseteq \mathbb{R}^n\) of \(\mathbf{a}\) such that there exists a unique \(C^1\)-function \(g : U \to \mathbb{R}^m\) such that \(g(\mathbf{a}) = \mathbf{b}\) and \(f(\mathbf{x}, g(\mathbf{x})) = 0\) for all \(\mathbf{x} \in U\text{.}\)
This is the natural multivariable generalization. If the Jacobian matrix of the function \(f(\mathbf{x}, \mathbf{y})\) with respect to the \(y\)-variables is invertible at a point \((\mathbf{a}, \mathbf{b})\text{,}\) the relation \(f(\mathbf{x},\mathbf{y}) = 0\) implicitly and uniquely defines the variables \(\mathbf{y}\) as \(C^1\)-functions \(\mathbf{y} = (g_1(\mathbf{x}), \ldots, g_m(\mathbf{x}))\) locally near \((\mathbf{a},\mathbf{b})\text{.}\)
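A minimal sketch of the single-variable statement for the unit circle (an illustrative choice of \(f\)): at the point \((a,b) = (0,1)\text{,}\) \(\partial f/\partial y = 2 \neq 0\text{,}\) and indeed \(y = g(x) = \sqrt{1-x^2}\) solves \(f(x, g(x)) = 0\) locally:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2 - 1                # the unit circle relation f(x, y) = 0

# at (a, b) = (0, 1) the partial derivative df/dy is non-zero...
dfdy_at_point = sp.diff(f, y).subs({x: 0, y: 1})   # = 2

# ...so y = g(x) exists locally near (0, 1); here it is explicit
g = sp.sqrt(1 - x**2)
residual = sp.simplify(f.subs(y, g))               # = 0 on the domain of g
```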
Subsection 4.7.4 The pullback of a \(k\)-form: an algebraic approach (optional)
In this section we introduce a direct definition of the pullback using the algebraic approach to basic \(k\)-forms introduced in Subsection 4.1.1. We then show that the three fundamental properties that we used to define the pullback are satisfied, thus justifying our original axiomatic approach.
Recall from Definition 4.1.1 (naturally generalized to \(\mathbb{R}^n\)) that a basic one-form \(dx_i\) is a linear map \(dx_i: \mathbb{R}^n \to \mathbb{R}\) which acts as
\begin{equation*} dx_i(\mathbf{u}) = u_i, \end{equation*}
i.e. it outputs the \(i\)'th component of the vector \(\mathbf{u} \in \mathbb{R}^n\text{.}\) As we are now thinking of the basic one-forms \(dx_i\) as linear maps, we can define their pullback by composition, as we originally did for functions in Definition 2.4.1.
We first define and study the pullback when \(\phi\) is a linear map between vector spaces. We will generalize to the case of a smooth function afterwards.
Definition 4.7.10. The pullback of a basic one-form with respect to a linear map.
Let \(dx_i: \mathbb{R}^n \to \mathbb{R}\) be a basic one-form, and let \(\phi: \mathbb{R}^m \to \mathbb{R}^n\) be a linear map. We define the pullback \(\phi^*(dx_i): \mathbb{R}^m \to \mathbb{R}\) by composition:
\begin{equation*} \phi^*(dx_i) = dx_i \circ \phi. \end{equation*}
Let us now give an explicit formula for the pullback of a basic one-form.
Lemma 4.7.11. An explicit formula for the pullback of a basic one-form.
Let \(dx_i: \mathbb{R}^n \to \mathbb{R}\text{,}\) \(i=1,\ldots,n\) be the basic one-forms on \(\mathbb{R}^n\text{,}\) and \(dt_j: \mathbb{R}^m \to \mathbb{R}\text{,}\) \(j=1,\ldots,m\) be the basic one-forms on \(\mathbb{R}^m\text{.}\) The linear map \(\phi: \mathbb{R}^m \to \mathbb{R}^n\) can be represented by a matrix \(A = (a_{ij})\text{.}\) Then we can write
\begin{equation*} \phi^*(dx_i) = \sum_{j=1}^m a_{ij}\, dt_j. \end{equation*}
Proof.
Write \(\mathbf{v} = (v_1, \ldots, v_m) \in \mathbb{R}^m\text{.}\) Then
\begin{align*} \phi^*(dx_i)(\mathbf{v}) =\amp dx_i( \phi(\mathbf{v}) )\\ =\amp dx_i( A \mathbf{v})\\ =\amp \sum_{j=1}^m a_{ij} v_j\\ =\amp \sum_{j=1}^m a_{ij}\, dt_j(\mathbf{v}). \end{align*}
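The computation in this proof is easy to check numerically. The following sketch (with an arbitrary illustrative matrix and vector) verifies that the composition \((\phi^* dx_1)(\mathbf{v}) = dx_1(A\mathbf{v})\) agrees with the explicit formula \(\sum_j a_{1j} v_j\text{:}\)

```python
import numpy as np

# an illustrative linear map phi: R^3 -> R^2 and a test vector
A = np.array([[1.0, 2.0, 0.0],
              [3.0, -1.0, 4.0]])
v = np.array([5.0, 6.0, 7.0])

# pullback by composition: (phi^* dx_1)(v) = dx_1(A v) = first entry of A v
lhs = (A @ v)[0]
# explicit formula from the lemma: sum_j a_{1j} dt_j(v) = sum_j a_{1j} v_j
rhs = sum(A[0, j] * v[j] for j in range(3))
```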
It is then easy to generalize the definition of pullback to the basic \(k\)-forms.
Definition 4.7.12. The pullback of a basic \(k\)-form with respect to a linear map.
Let \(dx_{i_1} \wedge \ldots \wedge dx_{i_k}: (\mathbb{R}^n)^k \to \mathbb{R}\) be a basic \(k\)-form, and let \(\phi: \mathbb{R}^m \to \mathbb{R}^n\) be a linear map. Let \(\mathbf{v}^1, \ldots, \mathbf{v}^k \in \mathbb{R}^m\) be vectors. We define the pullback \(\phi^*(dx_{i_1}\wedge \ldots \wedge dx_{i_k}): (\mathbb{R}^m)^k \to \mathbb{R}\) by:
\begin{equation*} \phi^*(dx_{i_1}\wedge \ldots \wedge dx_{i_k}) (\mathbf{v}^1, \ldots, \mathbf{v}^k) = dx_{i_1}\wedge \ldots \wedge dx_{i_k} ( \phi(\mathbf{v}^1), \ldots, \phi(\mathbf{v}^k) ). \end{equation*}
It follows directly from the definition that the pullback commutes with the wedge product:
Lemma 4.7.13. The pullback commutes with the wedge product.
Under the setup above,
\begin{equation*} \phi^*\left( (dx_{i_1} \wedge \ldots \wedge dx_{i_k}) \wedge (dx_{i_{k+1}} \wedge \ldots \wedge dx_{i_{k+\ell}}) \right) = \phi^*( dx_{i_1} \wedge \ldots \wedge dx_{i_k}) \wedge \phi^*( dx_{i_{k+1}} \wedge \ldots \wedge dx_{i_{k+\ell}} ). \end{equation*}
Proof.
By definition,
As a corollary, we obtain an explicit formula to calculate the pullback of a basic \(k\)-form.
Corollary 4.7.14. An explicit formula for the pullback of a basic \(k\)-form.
Let \(dx_{i_1} \wedge \ldots \wedge dx_{i_k}: (\mathbb{R}^n)^k \to \mathbb{R}\) be a basic \(k\)-form, and let \(\phi: \mathbb{R}^m \to \mathbb{R}^n\) be a linear map. Let \(dt_j: \mathbb{R}^m \to \mathbb{R}\text{,}\) \(j=1,\ldots,m\) be the basic one-forms on \(\mathbb{R}^m\text{.}\) The linear map \(\phi: \mathbb{R}^m \to \mathbb{R}^n\) can be represented by a matrix \(A = (a_{ij})\text{.}\) Then we can write
\begin{equation*} \phi^*( dx_{i_1} \wedge \ldots \wedge dx_{i_k}) = \left( \sum_{j=1}^m a_{i_1 j}\, dt_j \right) \wedge \cdots \wedge \left( \sum_{j=1}^m a_{i_k j}\, dt_j \right). \end{equation*}
We can also write down an explicit formula in terms of the determinant when we pull back a basic \(n\)-form from \(\mathbb{R}^n\) to \(\mathbb{R}^n\text{.}\)
Lemma 4.7.15. The pullback of a basic \(n\)-form in \(\mathbb{R}^n\).
Let \(dx_1 \wedge \cdots \wedge dx_n : (\mathbb{R}^n)^n \to \mathbb{R}\text{,}\) and let \(\phi: \mathbb{R}^n \to \mathbb{R}^n\) be a linear map, which can be represented by an \(n \times n\) matrix \(A = (a_{ij})\text{.}\) Then
\begin{equation*} \phi^*( dx_1 \wedge \cdots \wedge dx_n ) = \det(A)\, dx_1 \wedge \cdots \wedge dx_n. \end{equation*}
Proof.
Let \(\mathbf{v}^1, \ldots, \mathbf{v}^n \in \mathbb{R}^n\text{.}\) Then
which concludes the proof.
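Numerically, the lemma says that \(dx_1 \wedge \cdots \wedge dx_n(A\mathbf{v}^1, \ldots, A\mathbf{v}^n) = \det(A) \cdot dx_1 \wedge \cdots \wedge dx_n(\mathbf{v}^1, \ldots, \mathbf{v}^n)\text{,}\) which is just the multiplicativity of the determinant. A quick check with arbitrary illustrative matrices:

```python
import numpy as np

# illustrative linear map A and input vectors (the columns of V)
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
V = np.array([[1.0, 4.0],
              [2.0, 5.0]])

# phi^*(dx1 ^ dx2)(v1, v2) = dx1 ^ dx2(A v1, A v2) = det(A V)
lhs = np.linalg.det(A @ V)
# the lemma predicts det(A) * dx1 ^ dx2(v1, v2) = det(A) * det(V)
rhs = np.linalg.det(A) * np.linalg.det(V)   # both equal -18
```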
This is all very nice, but so far we have only looked at basic \(k\)-forms and linear maps \(\phi: \mathbb{R}^m \to \mathbb{R}^n\text{.}\) How do we generalize this to general differential \(k\)-forms on \(U \subseteq \mathbb{R}^n\text{,}\) and to smooth functions \(\phi: V \to U\) with \(V \subseteq \mathbb{R}^m\text{?}\) The idea is simple. For any point \(\mathbf{t} \in V\text{,}\) the Jacobian matrix of \(\phi\) (i.e. the matrix of first partial derivatives), also called the “total derivative of \(\phi\)”, provides a linear map \(\mathbb{R}^m \to \mathbb{R}^n\text{.}\)
In other words, given a smooth function \(\phi: V \to U\text{,}\) if we write \(\phi(\mathbf{t}) = (x_1(\mathbf{t}), \ldots, x_n(\mathbf{t}) )\text{,}\) we can construct the pullback of \(k\)-forms exactly as above, but with the specific linear map \(\mathbb{R}^m \to \mathbb{R}^n\) given by the Jacobian matrix:
Then we see that we recover all the formulae that we obtained previously, and that our three fundamental properties are satisfied! The pullback of a basic one-form becomes
\begin{equation*} \phi^*(dx_i) = \sum_{j=1}^m \frac{\partial x_i}{\partial t_j}\, dt_j, \end{equation*}
as before. Property 1 is obviously satisfied by definition. Lemma 4.7.13 becomes Property 2, and Corollary 4.7.14 becomes our general formula for the pullback of \(k\)-forms in Lemma 4.7.1. It is easy to check that Property 3 is satisfied. Finally, we obtain a general proof of Lemma 4.7.7 on \(\mathbb{R}^n\text{,}\) as this is simply Lemma 4.7.15. Neat!
Exercises 4.7.5 Exercises
1.
Consider the basic two-form \(dx \wedge dy\) on \(\mathbb{R}^2\text{,}\) and the polar coordinate transformation \(\alpha: \mathbb{R}^2 \to \mathbb{R}^2\) with
\begin{equation*} \alpha(r, \theta) = ( r \cos(\theta), r \sin(\theta) ). \end{equation*}
Show by explicit calculation that
\begin{equation*} \alpha^*(dx \wedge dy) = r\ dr \wedge d\theta = (\det J_\alpha) dr \wedge d\theta. \end{equation*}
Let us start by calculating the pullback two-form. We get:
\begin{align*} \alpha^*(dx \wedge dy) =\amp (\cos(\theta) dr - r \sin(\theta) d\theta) \wedge (\sin(\theta) dr + r \cos(\theta) d\theta)\\ =\amp r \cos^2(\theta)\, dr \wedge d\theta + r \sin^2(\theta)\, dr \wedge d\theta\\ =\amp r\, dr \wedge d\theta. \end{align*}
Next, we show that \(\det J_\alpha = r\text{.}\) By definition of the Jacobian matrix, we have:
\begin{align*} \det J_\alpha =\amp \det \begin{pmatrix} \frac{\partial x}{\partial r} \amp \frac{\partial x}{\partial \theta}\\ \frac{\partial y}{\partial r} \amp \frac{\partial y}{\partial \theta} \end{pmatrix}\\ =\amp \det \begin{pmatrix} \cos(\theta) \amp - r \sin(\theta)\\ \sin(\theta) \amp r \cos(\theta) \end{pmatrix}\\ =\amp r \cos^2(\theta) + r \sin^2(\theta)\\ =\amp r. \end{align*}
Therefore,
\begin{equation*} \alpha^*(dx \wedge dy) = r\ dr \wedge d\theta = (\det J_\alpha) dr \wedge d\theta, \end{equation*}
as claimed.
2.
Let
\begin{equation*} \omega = (x^2+y^2+z^2)\, dx \wedge dy \wedge dz \end{equation*}
on \(\mathbb{R}^3\text{,}\) and \(\alpha: \mathbb{R}^3 \to \mathbb{R}^3\) the spherical transformation
\begin{equation*} \alpha(r, \theta, \phi) = ( r \sin(\theta) \cos(\phi), r \sin(\theta) \sin(\phi), r \cos(\theta) ). \end{equation*}
Show by explicit calculation that
\begin{equation*} \alpha^*(dx \wedge dy \wedge dz) = r^2 \sin(\theta)\ dr \wedge d \theta \wedge d \phi = (\det J_\alpha) dr \wedge d \theta \wedge d \phi. \end{equation*}
Use this to find \(\alpha^* \omega\text{.}\)
(a)
We calculate the pullback:
\begin{align*} \alpha^*(dx \wedge dy \wedge dz) =\amp (\sin(\theta) \cos(\phi) dr + r \cos(\theta)\cos(\phi) d \theta - r \sin(\theta)\sin(\phi) d\phi)\\ \amp \wedge (\sin(\theta)\sin(\phi) dr + r \cos(\theta) \sin(\phi) d\theta + r \sin(\theta) \cos(\phi) d\phi)\\ \amp \wedge (\cos(\theta) dr - r \sin(\theta) d \theta) \\ =\amp (r^2 \sin^3 (\theta)\cos^2 (\phi) + r^2 \sin(\theta)\cos^2(\theta)\cos^2(\phi) +r^2 \sin^3(\theta) \sin^2(\phi)\\ \amp + r^2 \sin(\theta)\cos^2(\theta)\sin^2(\phi) ) dr \wedge d \theta \wedge d \phi\\ =\amp r^2 (\sin^3(\theta) + \sin(\theta) \cos^2(\theta) )dr \wedge d \theta \wedge d \phi\\ =\amp r^2 \sin(\theta) dr \wedge d \theta \wedge d \phi. \end{align*}
Next we show that \(\det J_\alpha = r^2 \sin(\theta)\text{.}\) By definition of the Jacobian matrix, we get:
\begin{align*} \det J_\alpha =\amp \det \begin{pmatrix} \frac{\partial x}{\partial r} \amp \frac{\partial x}{\partial \theta} \amp \frac{\partial x}{\partial \phi}\\ \frac{\partial y}{\partial r} \amp \frac{\partial y}{\partial \theta} \amp \frac{\partial y}{\partial \phi}\\ \frac{\partial z}{\partial r} \amp \frac{\partial z}{\partial \theta} \amp \frac{\partial z}{\partial \phi}\end{pmatrix}\\ =\amp \det \begin{pmatrix} \sin(\theta) \cos(\phi) \amp r \cos(\theta) \cos(\phi) \amp - r \sin(\theta)\sin(\phi) \\ \sin(\theta)\sin(\phi) \amp r \cos(\theta) \sin(\phi) \amp r \sin(\theta) \cos(\phi) \\ \cos(\theta) \amp - r \sin(\theta) \amp 0 \end{pmatrix}\\ =\amp r^2 \sin(\theta)\cos^2(\theta)\cos^2(\phi) +r^2 \sin^3(\theta)\sin^2(\phi) + r^2 \sin^3(\theta)\cos^2(\phi)\\ \amp +r^2 \sin(\theta)\cos^2(\theta)\sin^2(\phi)\\ =\amp r^2 \sin (\theta). \end{align*}Therefore
\begin{equation*} \alpha^*(dx \wedge dy \wedge dz) = r^2 \sin(\theta)\ dr \wedge d \theta \wedge d \phi = (\det J_\alpha) dr \wedge d \theta \wedge d \phi \end{equation*}as claimed.
(b)
To find \(\alpha^* \omega\) we can use the result in (a). We get:
\begin{align*} \alpha^* \omega =\amp (r^2 \sin^2(\theta) \cos^2(\phi) + r^2 \sin^2(\theta) \sin^2(\phi)+ r^2 \cos^2(\theta) ) \alpha^*(dx \wedge dy \wedge dz)\\ =\amp (r^2 \sin^2(\theta) + r^2 \cos^2(\theta) ) r^2 \sin(\theta) dr \wedge d \theta \wedge d \phi\\ =\amp r^4 \sin(\theta) dr \wedge d \theta \wedge d \phi. \end{align*}
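The Jacobian determinant in this solution can also be double-checked with SymPy (a sanity check, not part of the solution):

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
# the spherical coordinate map from the exercise
alpha = sp.Matrix([r*sp.sin(theta)*sp.cos(phi),
                   r*sp.sin(theta)*sp.sin(phi),
                   r*sp.cos(theta)])

# detJ simplifies to r**2*sin(theta), as computed by hand above
detJ = sp.simplify(alpha.jacobian([r, theta, phi]).det())
```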
3.
Let
\begin{equation*} \omega = (x^2+y^2) \left( x\ dy \wedge dz + y\ dz \wedge dx + z\ dx \wedge dy \right) \end{equation*}
on \(\mathbb{R}^3\text{,}\) and \(\alpha: \mathbb{R}^3 \to \mathbb{R}^3\) be the cylindrical transformation
\begin{equation*} \alpha(r, \theta, w) = ( r \cos(\theta), r \sin(\theta), w ). \end{equation*}
Find \(\alpha^* \omega\text{.}\)
Show by explicit calculation that \(d(\alpha^* \omega) = \alpha^*( d \omega)\text{.}\)
(a)
To simplify the calculation of the pullback, let us first calculate the pullback of the basic two-forms:
\begin{align*} \alpha^*(dy \wedge dz) =\amp (\sin(\theta) dr + r \cos(\theta) d\theta) \wedge dw\\ =\amp \sin(\theta) dr \wedge dw + r \cos(\theta) d \theta \wedge dw, \end{align*}\begin{align*} \alpha^*(dz \wedge dx)=\amp dw \wedge (\cos(\theta) dr - r \sin(\theta) d\theta) \\ =\amp - \cos(\theta) dr \wedge dw + r \sin(\theta) d\theta \wedge dw, \end{align*}and
\begin{align*} \alpha^*( dx \wedge dy) =\amp (\cos(\theta) dr - r \sin(\theta) d\theta) \wedge (\sin(\theta) dr + r \cos(\theta) d\theta)\\ =\amp r \cos^2(\theta) dr \wedge d \theta + r \sin^2(\theta) dr \wedge d \theta\\ =\amp r dr \wedge d \theta. \end{align*}We also observe that
\begin{equation*} \alpha^*(x^2+y^2) = r^2 \cos^2(\theta) + r^2 \sin^2(\theta) = r^2. \end{equation*}Putting this together, we get:
\begin{align*} \alpha^* \omega =\amp \alpha^*(x^2+y^2) \alpha^*\left( x\ dy \wedge dz + y\ dz \wedge dx + z\ dx \wedge dy \right)\\ =\amp r^2 ( r \cos(\theta) (\sin(\theta) dr \wedge dw + r \cos(\theta) d \theta \wedge dw)\\ \amp + r \sin(\theta) ( - \cos(\theta) dr \wedge dw + r \sin(\theta) d\theta \wedge dw) + r w dr \wedge d \theta ) \\ =\amp r^3 (r d \theta \wedge dw + w dr \wedge d \theta). \end{align*}
(b)
On the one hand, we calculated in (a) the pullback \(\alpha^* \omega = r^3 (r d \theta \wedge dw + w dr \wedge d \theta).\) We can calculate its exterior derivative:
\begin{align*} d(\alpha^* \omega) =\amp d(r^4) \wedge d \theta \wedge dw + d(r^3 w) \wedge dr \wedge d\theta\\ =\amp (4 r^3 + r^3) dr \wedge d\theta \wedge dw\\ =\amp 5 r^3 dr \wedge d \theta \wedge dw. \end{align*}
On the other hand, we can first calculate the exterior derivative of \(\omega\text{.}\) We get:
\begin{align*} d \omega =\amp d( (x^2+y^2) x) \wedge dy \wedge dz + d( (x^2+y^2) y) \wedge dz \wedge dx + d((x^2+y^2) z) \wedge dx \wedge dy\\ =\amp ((3 x^2 + y^2) + (3 y^2 + x^2) + (x^2+y^2) ) dx \wedge dy \wedge dz \\ =\amp 5(x^2+y^2) dx \wedge dy \wedge dz. \end{align*}We then calculate its pullback:
\begin{align*} \alpha^* (d \omega)=\amp 5 \alpha^*(x^2+y^2)\alpha^* (dx \wedge dy \wedge dz)\\ =\amp 5 r^2 (\cos(\theta) dr - r \sin(\theta) d\theta) \wedge (\sin(\theta) dr + r \cos(\theta) d\theta) \wedge dw\\ =\amp 5 r^2 (r \cos^2(\theta) + r \sin^2(\theta) ) dr \wedge d\theta \wedge dw\\ =\amp 5 r^3\ dr \wedge d\theta \wedge dw. \end{align*}We conclude that
\begin{equation*} d(\alpha^* \omega) = \alpha^*(d \omega), \end{equation*}as claimed.
4.
Let
on \(\mathbb{R}^3\text{,}\) and \(\phi: (\mathbb{R}_{\neq 0})^2 \to \mathbb{R}^3\) with:
Find \(\phi^* \omega\text{.}\)
We calculate the pullback:
5.
Show that the pullback of an exact form is always exact.
Let \(\omega\) be an exact \(k\)-form on \(U \subseteq \mathbb{R}^n\text{,}\) and let \(\phi: V \to U\) be a smooth function for \(V \subseteq \mathbb{R}^m\text{.}\) We want to prove that the pullback of \(\omega\) is exact.
Since \(\omega\) is exact, we know that there exists a \((k-1)\)-form \(\eta\) on \(U\) such that \(\omega = d \eta\text{.}\) Then
where we used the fact that the pullback commutes with the exterior derivative (see Lemma 4.7.4). Therefore, the \(k\)-form \(\phi^* \omega\) on \(V\) is the exterior derivative of a \((k-1)\)-form \(\phi^* \eta\) on \(V\text{,}\) and hence it is exact.
6.
Let \(\omega\) be a \(k\)-form on \(\mathbb{R}^n\text{,}\) and \(Id: \mathbb{R}^n \to \mathbb{R}^n\) be the identity function defined by \(Id(x_1, \ldots, x_n) = (x_1, \ldots, x_n)\text{.}\) Show that
\begin{equation*} Id^* \omega = \omega. \end{equation*}
We can write a general \(k\)-form on \(\mathbb{R}^n\) as
for some smooth functions \(f_{i_1 \cdots i_k} : \mathbb{R}^n \to \mathbb{R}\text{.}\) We know that \(Id^*(dx_i) = dx_i\) for all \(i \in \{1,\ldots,n\}\text{,}\) by definition of the identity map. Similarly, for any function \(f: \mathbb{R}^n \to \mathbb{R}\text{,}\) \(Id^* f = f\text{.}\) As a result, we get:
7.
Let \(\omega\) be a \(k\)-form on \(U \subseteq \mathbb{R}^n\text{.}\) Let \(\phi:V \to U\) and \(\alpha: W \to V\) be smooth functions, with \(V \subseteq \mathbb{R}^m\) and \(W \subseteq \mathbb{R}^\ell\text{.}\) Show that
\begin{equation*} (\phi \circ \alpha)^* \omega = \alpha^*( \phi^* \omega). \end{equation*}
In other words, it doesn't matter whether we pull back in one step or two in the chain of maps
We can write a general \(k\)-form on \(U\) as
for some smooth functions \(f_{i_1 \cdots i_k} : U \to \mathbb{R}\text{.}\) On the one hand, the pullback by \(\phi \circ \alpha\) is
On the other hand, pulling back in two steps, we get:
So all we have to show is that
\begin{equation*} (\phi \circ \alpha)^* f = \alpha^*( \phi^* f) \end{equation*}
for any smooth function \(f:U \to \mathbb{R}\text{,}\) and
\begin{equation*} (\phi \circ \alpha)^*( dx_i) = \alpha^*( \phi^*( dx_i) ) \end{equation*}
for any \(i \in \{1, \ldots, n \}\text{.}\)
First, for any function \(f:U \to \mathbb{R}\text{,}\)
while
Thus
As for the basic one-forms, let us introduce further notation for the maps \(\phi: V \to U\) and \(\alpha: W \to V\text{.}\) Let us write \(\mathbf{z}=(z_1,\ldots,z_\ell)\) for coordinates on \(W\text{;}\) \(\mathbf{y}=(y_1,\ldots,y_m)\) for coordinates on \(V\text{;}\) and \(\mathbf{x}=(x_1,\ldots,x_n)\) for coordinates on \(U\text{.}\) We write \(\phi(\mathbf{y}) = (\phi_1(\mathbf{y}), \ldots, \phi_n(\mathbf{y}) )\text{,}\) and \(\alpha(\mathbf{z}) = (\alpha_1(\mathbf{z}), \ldots, \alpha_m(\mathbf{z}) )\text{.}\) Then, we have:
and
On the other hand, if we pull back by the composition of the maps, we get:
But the equality
is precisely the chain rule for multi-variable functions (written in terms of composition of functions). Thus we conclude that
Putting all of this together, we conclude that
which is the statement of the question.
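The key step in this last argument is the chain rule, which in matrix form reads \(J_{\phi \circ \alpha} = (J_\phi \circ \alpha)\, J_\alpha\text{.}\) A short SymPy sketch with illustrative maps \(\alpha(s,t) = (s+t, st)\) and \(\phi(u,v) = (u^2, uv, v-u)\) verifies this identity:

```python
import sympy as sp

s, t, u, v = sp.symbols('s t u v')
# illustrative maps: alpha: R^2 -> R^2 and phi: R^2 -> R^3
alpha = sp.Matrix([s + t, s*t])
phi = sp.Matrix([u**2, u*v, v - u])

# Jacobian of the composition phi o alpha, computed directly
comp = phi.subs({u: alpha[0], v: alpha[1]})
J_comp = comp.jacobian([s, t])

# chain rule: J_phi evaluated at alpha, times J_alpha
J_chain = phi.jacobian([u, v]).subs({u: alpha[0], v: alpha[1]}) \
          * alpha.jacobian([s, t])

chain_rule_holds = sp.simplify(J_comp - J_chain) == sp.zeros(3, 2)
```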