Section 3.6 Poincare's lemma for one-forms

We left one question unanswered: how do we determine whether a one-form is exact? How do we know whether a vector field is conservative? With the Fundamental Theorem of line integrals under our belt, we are now able to answer this question; the answer is known as Poincare's lemma.

Subsection 3.6.1 One-forms defined on all of \(\mathbb{R}^n\)

Recall the definition of closed one-forms from Definition 2.2.9 and Definition 2.2.14. We know that exact one-forms are necessarily closed, see Lemma 2.2.10 and Lemma 2.2.15. But is the converse statement true? Are closed one-forms necessarily exact? We know that this cannot always be true, as we have already studied an example of a closed one-form that was not exact (see Exercise 3.4.3.2). So when are closed forms necessarily exact?

This is an important question, because showing that a one-form is closed is much easier than showing that it is exact: one only needs to calculate the partial derivatives of its coordinate functions and show that they satisfy the requirements in Definition 2.2.9 and Definition 2.2.14.

It turns out that the answer to the question is fairly subtle. There is, however, one simple case in which the statement is always true: when the one-form is defined (and smooth, by definition) on all of \(\mathbb{R}^n\text{.}\) This is the content of Poincare's lemma (Theorem 3.6.1): if a one-form is closed and defined on all of \(\mathbb{R}^n\text{,}\) then it is exact.

The proof is rather interesting, and in fact constructive, as it provides a way of calculating the function \(f\) such that \(\omega = d f\) if \(\omega\) is closed. We will write the proof only for \(\mathbb{R}^2\text{,}\) but a similar proof works in \(\mathbb{R}^n\text{.}\)

First, we notice that one direction of implication is clear: we already know from Lemma 2.2.10 that exact one-forms are closed. So all we need to show is the other direction of implication, namely that closed one-forms are exact.

Assume that \(\omega = f(x,y)\ dx + g(x,y)\ dy\) is defined on all of \(\mathbb{R}^2\text{,}\) and that it is closed. From Definition 2.2.9, this means that \(\frac{\partial f}{\partial y} = \frac{\partial g}{\partial x}\text{.}\)

Let us now construct a function \(q\) as follows. We take our one-form \(\omega\text{,}\) and we integrate it along a curve in \(\mathbb{R}^2\) which consists of, first, a horizontal line segment from the origin \((0,0)\) to the point \((x_0,0)\) for some fixed \(x_0>0\text{,}\) and then a vertical line segment from the point \((x_0,0)\) to the point \((x_0,y_0)\) for some fixed \(y_0>0\text{.}\) This is a piecewise-parametric curve, but we can easily parametrize each line segment. For the line segment from \((0,0)\) to \((x_0,0)\text{,}\) we use the parametrization \(\alpha_1: [0,x_0] \to \mathbb{R}^2\) with \(\alpha_1(t) = (t,0)\text{,}\) and for the second line segment from \((x_0,0)\) to \((x_0,y_0)\text{,}\) we use the parametrization \(\alpha_2: [0,y_0] \to \mathbb{R}^2\) with \(\alpha_2(t) = (x_0, t)\text{.}\) The pullbacks of the one-form \(\omega\) are \(\alpha_1^* \omega = f(t,0)\ dt\) and \(\alpha_2^* \omega = g(x_0,t)\ dt\text{.}\) We construct our new function \(q\) as the line integral of \(\omega\) along this curve:

\begin{equation*} q(x_0,y_0) = \int_{\alpha_1} \omega + \int_{\alpha_2} \omega = \int_0^{x_0} f(t,0)\ dt + \int_0^{y_0} g(x_0,t)\ dt. \end{equation*}

The result \(q\) is a function of \((x_0,y_0)\text{.}\) We now rename the variables \((x_0,y_0)\) to \((x,y)\text{,}\) and extend the function to all \((x,y) \in \mathbb{R}^2\text{,}\) not just positive values, as the integrals on the right-hand side remain well defined. So we get the function

\begin{equation*} q(x,y) = \int_0^{x} f(t,0)\ dt + \int_0^{y} g(x,t)\ dt \end{equation*}

defined on \(\mathbb{R}^2\text{.}\)

Our claim is that this new function \(q(x,y)\) is in fact the potential function for \(\omega\text{,}\) i.e., \(\omega = d q\text{,}\) which would of course show that \(\omega\) is exact. So let us compute \(d q\text{.}\) To do so, we need \(\frac{\partial q}{\partial x}\) and \(\frac{\partial q}{\partial y}\text{.}\) First,

\begin{align*} \frac{\partial q}{\partial y} =\amp \frac{\partial}{\partial y} \int_0^x f(t,0)\ dt + \frac{\partial}{\partial y} \int_0^y g(x,t)\ dt\\ = \amp g(x,y) \end{align*}

where we used the Fundamental Theorem of Calculus part 1 for the second integral (recalling that \(x\) is kept fixed when we evaluate the partial derivative with respect to \(y\)) and the fact that the first integral does not depend on \(y\) at all. As for the partial derivative with respect to \(x\text{,}\) we get:

\begin{align*} \frac{\partial q}{\partial x} =\amp \frac{\partial}{\partial x} \int_0^x f(t,0)\ dt + \frac{\partial}{\partial x} \int_0^y g(x,t)\ dt\\ =\amp f(x,0) + \int_0^y \frac{\partial g(x,t)}{\partial x}\ dt \qquad \text{by FTC Part 1 for the first term}\\ =\amp f(x,0) + \int_0^y \frac{\partial f(x,t)}{\partial t}\ dt \qquad \text{since } \frac{\partial g(x,t)}{\partial x} = \frac{\partial f(x,t)}{\partial t}, \text{ as }\omega \text{ is closed, }\\ =\amp f(x,0) + f(x,y) - f(x,0) \\ =\amp f(x,y). \end{align*}

Therefore,

\begin{equation*} d q = \frac{\partial q}{\partial x}\ dx + \frac{\partial q}{\partial y}\ dy = f(x,y)\ dx + g(x,y)\ dy = \omega, \end{equation*}

and we have shown that \(\omega\) is exact. Moreover, we found an explicit expression for the potential function as a line integral of \(\omega\text{.}\)
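To make the construction above concrete, here is a small computational sketch in Python using the SymPy library (the code and the sample closed one-form \(\omega = 2xy\ dx + (x^2 + \cos y)\ dy\) are not part of the text and are chosen here only for illustration). It builds the potential \(q\) from the line-integral formula above and checks that \(dq = \omega\text{.}\)

import sympy as sp

x, y, t = sp.symbols('x y t')

# A sample closed one-form omega = f dx + g dy on R^2.
f = 2*x*y
g = x**2 + sp.cos(y)

# Screening test: omega is closed since df/dy = dg/dx.
assert sp.simplify(sp.diff(f, y) - sp.diff(g, x)) == 0

# Potential from the proof: integrate omega along the horizontal segment,
# then along the vertical segment.
q = sp.integrate(f.subs({x: t, y: 0}), (t, 0, x)) + sp.integrate(g.subs(y, t), (t, 0, y))

# Verify that dq = omega.
assert sp.simplify(sp.diff(q, x) - f) == 0
assert sp.simplify(sp.diff(q, y) - g) == 0
print(q)  # x**2*y + sin(y): a potential function for omega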

So we now have a clear criterion to determine whether a one-form defined on all of \(\mathbb{R}^n\) is exact: we simply need to show that it is closed. In terms of vector fields, all we have to do is show that the vector field passes the screening test.

Consider the example from Example 3.4.4. The one-form was \(\omega = y^2 z\ dx + 2 x y z \ dy + x y^2\ dz\text{,}\) and we know that it is exact, as it can be written as \(\omega = d f\) for \(f(x,y,z) = x y^2 z\text{.}\) But suppose that we don't know that. How can we determine quickly whether it is exact or not?

First, we notice that \(\omega\) is well defined on all of \(\mathbb{R}^3\text{.}\) So to determine whether it is exact, all we need to do is show that it is closed.

Let us write \(\omega = f_1\ dx + f_2\ dy + f_3\ dz\text{.}\) We calculate partial derivatives:

\begin{equation*} \frac{\partial f_1}{\partial y} = 2 y z, \qquad \frac{\partial f_1}{\partial z} = y^2, \qquad \frac{\partial f_2}{\partial z} = 2 x y, \end{equation*}

and

\begin{equation*} \frac{\partial f_2}{\partial x} = 2 y z, \qquad \frac{\partial f_3}{\partial x} = y^2, \qquad \frac{\partial f_3}{\partial y} = 2 x y. \end{equation*}

The statement that \(\omega\) is closed is just that the partial derivatives in the first line are equal to the partial derivatives in the second line, which is indeed true. Thus \(\omega\) is closed, and by Poincare's lemma we can conclude that it must be exact.
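The six partial derivatives above are easy to compute by hand, but for more complicated coefficient functions a computer algebra system can run the same check for us. Here is a short illustrative sketch in Python with SymPy (not part of the text):

import sympy as sp

x, y, z = sp.symbols('x y z')
f1, f2, f3 = y**2*z, 2*x*y*z, x*y**2

# omega = f1 dx + f2 dy + f3 dz is closed iff the three pairs of
# mixed partial derivatives computed above agree.
closed = (sp.simplify(sp.diff(f1, y) - sp.diff(f2, x)) == 0 and
          sp.simplify(sp.diff(f1, z) - sp.diff(f3, x)) == 0 and
          sp.simplify(sp.diff(f2, z) - sp.diff(f3, y)) == 0)
print(closed)  # True: omega is closed, hence exact by Poincare's lemma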

That doesn't tell us how to find the potential function \(f\) though. To find \(f\text{,}\) we proceed as usual. Let us do it here for completeness.

We want to find a function \(f=f(x,y,z)\) such that \(df = \frac{\partial f}{\partial x}\ dx + \frac{\partial f}{\partial y}\ dy + \frac{\partial f}{\partial z}\ dz = y^2 z\ dx + 2 x y z \ dy + x y^2\ dz\text{.}\) First, we want:

\begin{equation*} \frac{\partial f}{\partial x} = y^2 z. \end{equation*}

We can integrate the partial derivative -- the “constant of integration” here will be any function \(g(y,z)\) that is independent of \(x\text{.}\) Thus we get:

\begin{equation*} f = \int_0^x y^2 z\ dt + g(y,z) = x y^2 z + g(y,z). \end{equation*}

Next, we want

\begin{equation*} \frac{\partial f}{\partial y} = 2 x y z. \end{equation*}

Using the fact that \(f = x y^2 z + g(y,z)\text{,}\) this equation reads

\begin{equation*} 2 x y z + \frac{\partial g}{\partial y} = 2 x y z \qquad \Leftrightarrow \qquad \frac{\partial g}{\partial y} = 0. \end{equation*}

Integrating, we get:

\begin{equation*} g(y,z) = h(z), \end{equation*}

where \(h(z)\) is a function of \(z\) alone. Putting this together, we have \(f = x y^2 z + h(z)\text{.}\) Finally, we must satisfy the remaining equation:

\begin{equation*} \frac{\partial f}{\partial z} = x y^2. \end{equation*}

Using the fact that \(f = x y^2 z + h(z)\text{,}\) this becomes

\begin{equation*} x y^2 + \frac{dh}{dz} = x y^2 \qquad \Leftrightarrow \qquad \frac{dh}{dz} = 0 \qquad \Leftrightarrow \qquad h = C, \end{equation*}

for some constant \(C\text{.}\) As we are only interested in one function \(f\) such that \(d f = \omega\text{,}\) we can set the constant \(C = 0\text{.}\) We obtain that \(\omega = df \) with \(f(x,y,z) = x y^2 z\text{,}\) as stated in Example 3.4.4.
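As a side remark, the constructive formula from the proof of Theorem 3.6.1 has a natural analogue in \(\mathbb{R}^3\) (this is the “similar proof in \(\mathbb{R}^n\)” mentioned earlier, not a formula stated explicitly in the text): integrate \(\omega\) along a broken path from the origin that moves first parallel to the \(x\)-axis, then parallel to the \(y\)-axis, then parallel to the \(z\)-axis. The following SymPy sketch carries out that computation for this example; it is illustrative only and recovers the same potential.

import sympy as sp

x, y, z, t = sp.symbols('x y z t')
f1, f2, f3 = y**2*z, 2*x*y*z, x*y**2  # omega = f1 dx + f2 dy + f3 dz

# Integrate omega along (0,0,0) -> (x,0,0) -> (x,y,0) -> (x,y,z).
f = (sp.integrate(f1.subs({x: t, y: 0, z: 0}), (t, 0, x))
     + sp.integrate(f2.subs({y: t, z: 0}), (t, 0, y))
     + sp.integrate(f3.subs(z, t), (t, 0, z)))
print(f)  # x*y**2*z, matching the potential found above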

In fact, we can go a little further and state the following theorem, which gives equivalent formulations of what it means for a one-form to be exact (or a vector field to be conservative) on all of \(\mathbb{R}^n\text{:}\) for a smooth one-form \(\omega\) defined on all of \(\mathbb{R}^n\text{,}\) the four statements (1) \(\omega\) is exact, (2) \(\omega\) is closed, (3) the line integral of \(\omega\) along any closed curve vanishes, and (4) the line integrals of \(\omega\) are path independent, are all equivalent.

We want to prove the equivalence of the four statements. To do so, it is sufficient to prove that \((1) \Rightarrow (2)\text{,}\) \((2) \Rightarrow (3)\text{,}\) \((3) \Rightarrow (4)\text{,}\) and \((4) \Rightarrow (1)\text{.}\) We will write a proof only for \(\mathbb{R}^2\text{.}\)

\((1) \Rightarrow (2)\text{.}\) All exact one-forms are closed, see Lemma 2.2.10.

\((2) \Rightarrow (3)\text{.}\) By Poincare's lemma, Theorem 3.6.1, we know that closed one-forms defined on all of \(\mathbb{R}^n\) are exact, so \((1) \Leftrightarrow (2)\text{.}\) We also know that if \(\omega\) is exact, then its line integral along closed curves always vanishes: this is Corollary 3.4.3, which follows from the Fundamental Theorem of line integrals. So \((2) \Rightarrow (3)\text{.}\)

\((3) \Rightarrow (4)\text{.}\) This follows from Exercise 3.4.3.5. Indeed, suppose that \(P_0\) is on the closed curve that you are integrating along. Pick \(P_1 = P_0\text{.}\) Then we know that the line integral of \(\omega\) is path independent for all curves starting at \(P_0\) and ending at \(P_1 = P_0\text{,}\) since by (3) the line integrals all vanish. It then follows from Exercise 3.4.3.5 that the line integrals are path independent everywhere, which is (4).

\((4) \Rightarrow (1)\text{.}\) For this one we need to do a bit more work. We want to show that if the line integrals of \(\omega = f\ dx + g\ dy\) are path independent, then \(\omega\) is exact. We proceed as in the proof of Theorem 3.6.1. First, consider a curve \(C_1\) which consists of a horizontal line segment from \((0,0)\) to a fixed point \((x_0,0)\text{,}\) followed by a vertical line segment from \((x_0,0)\) to a fixed point \((x_0,y_0)\text{,}\) with \(x_0,y_0 > 0\text{.}\) We parametrize it by \(\alpha_1: [0,x_0] \to \mathbb{R}^2\) with \(\alpha_1(t) = (t,0)\text{,}\) and \(\alpha_2: [0,y_0] \to \mathbb{R}^2\) with \(\alpha_2(t) = (x_0,t)\text{.}\) The pullbacks are \(\alpha_1^* \omega = f(t,0)\ dt\text{,}\) \(\alpha_2^*\omega = g(x_0,t)\ dt\text{.}\) The line integral then reads

\begin{equation*} q(x_0,y_0) := \int_{C_1} \omega = \int_0^{x_0} f(t,0)\ dt + \int_0^{y_0} g(x_0,t)\ dt. \end{equation*}

Next, we consider a second curve \(C_2\) which consists of a vertical line segment from \((0,0)\) to \((0,y_0)\text{,}\) followed by a horizontal line segment from \((0,y_0)\) to \((x_0,y_0)\text{.}\) A parametrization is \(\beta_1: [0,y_0] \to \mathbb{R}^2\) with \(\beta_1(t) = (0,t)\text{,}\) and \(\beta_2: [0,x_0] \to \mathbb{R}^2\) with \(\beta_2(t) = (t,y_0)\text{.}\) The pullbacks are \(\beta_1^* \omega = g(0,t)\ dt\text{,}\) and \(\beta_2^* \omega = f(t,y_0)\ dt\text{.}\) The line integral reads

\begin{equation*} p(x_0,y_0) := \int_{C_2}\omega = \int_0^{y_0} g(0,t)\ dt + \int_0^{x_0} f(t,y_0)\ dt. \end{equation*}

Note that the two curves \(C_1\) and \(C_2\) start at \((0,0)\) and end at the same point \((x_0,y_0)\text{.}\) Since by (4) we know that the line integrals are path independent, we know that

\begin{equation*} q(x_0,y_0) = p(x_0,y_0). \end{equation*}

As in the proof of Theorem 3.6.1, we then rename the variables \((x_0,y_0) \to (x,y)\) and extend the domain of definition of the function \(q(x,y) = p(x,y)\) to all \((x,y) \in \mathbb{R}^2\text{,}\) since the integrals on the right-hand side remain well defined.

Since \(q(x,y) = p(x,y)\text{,}\) the partial derivatives of \(q\) and \(p\) are also equal. In particular,

\begin{equation*} \frac{\partial q}{\partial y} = \frac{\partial}{\partial y} \int_0^y g(x,t)\ dt = g(x,y), \end{equation*}

and

\begin{equation*} \frac{\partial q}{\partial x} = \frac{\partial p}{\partial x} = \frac{\partial}{\partial x} \int_0^x f(t,y)\ dt = f(x,y), \end{equation*}

where in both cases we used FTC part 1, together with the fact that the first term of \(q\) does not depend on \(y\) and the first term of \(p\) does not depend on \(x\text{.}\) We conclude that

\begin{equation*} d q = \frac{\partial q}{\partial x}\ dx + \frac{\partial q}{\partial y}\ dy = f(x,y)\ dx + g(x,y)\ dy = \omega, \end{equation*}

and hence \(\omega\) is exact, which is (1).
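As an illustration of the equivalence we just proved, one can take an exact one-form and check by direct computation that its line integrals along two different curves with the same endpoints agree. The following Python/SymPy sketch does this for the sample exact one-form \(\omega = 2xy\ dx + (x^2+\cos y)\ dy = d(x^2 y + \sin y)\text{;}\) the example and the helper function line_integral are chosen here for illustration and are not part of the text.

import sympy as sp

x, y, t = sp.symbols('x y t')
f, g = 2*x*y, x**2 + sp.cos(y)   # omega = f dx + g dy, an exact one-form

def line_integral(cx, cy, a, b):
    # Integrate omega along the curve t -> (cx, cy) for a <= t <= b,
    # by pulling omega back to the parameter t.
    pullback = (f.subs({x: cx, y: cy})*sp.diff(cx, t)
                + g.subs({x: cx, y: cy})*sp.diff(cy, t))
    return sp.integrate(pullback, (t, a, b))

# Two different curves from (0,0) to (1,1):
I1 = line_integral(t, t, 0, 1)      # straight line
I2 = line_integral(t, t**2, 0, 1)   # parabolic arc
print(sp.simplify(I1 - I2))         # 0: the two line integrals agree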

Subsection 3.6.2 One-forms on simply connected subsets of \(\mathbb{R}^n\)

Going back to Poincare's lemma, the proof of Theorem 3.6.1 relied on the fact that we could take \(x\) and \(y\) to be any two real numbers, which was possible because \(\omega\) was assumed to be a (smooth) one-form on all of \(\mathbb{R}^2\text{.}\) But if the one-form is defined on an open subset \(U \subseteq \mathbb{R}^2\text{,}\) does the proof still work? The answer is: not always. For instance, it would work if \(U\) is an open rectangle in \(\mathbb{R}^2\text{;}\) however, it wouldn't work if \(U\) is \(\mathbb{R}^2 \setminus \{(0,0)\}\text{,}\) that is \(\mathbb{R}^2\) minus the origin.

In other words, it isn't always true that closed one-forms are exact. The precise statement is that it is true if \(\omega\) is a one-form defined on an open subset \(U \subseteq \mathbb{R}^n\) that is “simply connected”. What does this mean?

We say that a set \(U\) is path connected if any two points in \(U\) can be connected by a path (or a parametric curve) lying in \(U\text{.}\) In other words, the set consists of only one piece. Then, we say that it is simply connected if it is path connected, with the extra property that any simple closed curve (loop) in \(U\) can be continuously contracted to a point within \(U\text{.}\) Intuitively, a simply connected region in \(\mathbb{R}^2\) consists of only one piece and has no holes.

For instance, open rectangles and open disks in \(\mathbb{R}^2\) are simply connected. However, if you consider \(U = \mathbb{R}^2 \setminus \{ (0,0) \}\text{,}\) while it is path connected as you can connect any two points by a path, it is not simply connected, since loops around the origin cannot be contracted to a point within \(U\) (there is a hole at the origin).

We will state the more general version of Poincare's lemma here for completeness, but without a proof, as it would go beyond the scope of this course: if \(\omega\) is a closed one-form defined on an open, simply connected subset \(U \subseteq \mathbb{R}^n\text{,}\) then \(\omega\) is exact on \(U\text{.}\)

We saw in Example 2.2.13 an example of a one-form that is closed but not exact. The one-form was

\begin{equation*} \omega = - \frac{y}{x^2+y^2}\ dx + \frac{x}{x^2+y^2}\ dy. \end{equation*}

We showed that it is closed. But we also showed in Exercise 3.4.3.2 that it is not exact, since its line integral around a closed curve is non-vanishing. Does that contradict Poincare's lemma? No. The reason is that \(\omega\) is not defined on all of \(\mathbb{R}^2\text{.}\) Indeed, its coefficient functions are only defined for \((x,y) \in \mathbb{R}^2 \setminus \{ (0,0) \}\text{,}\) as at the origin one would be dividing by zero. As \(\mathbb{R}^2 \setminus \{ (0,0) \}\) is not simply connected, Poincare's lemma does not apply.
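One can also verify the non-vanishing line integral explicitly: pulling \(\omega\) back along the unit circle \(\alpha(t) = (\cos t, \sin t)\text{,}\) \(0 \leq t \leq 2\pi\text{,}\) and integrating gives \(2\pi \neq 0\text{.}\) A short SymPy check (illustrative only, not part of the text):

import sympy as sp

x, y, t = sp.symbols('x y t')
f = -y/(x**2 + y**2)
g = x/(x**2 + y**2)

# Pull omega back along the unit circle (cos t, sin t), 0 <= t <= 2*pi.
cx, cy = sp.cos(t), sp.sin(t)
pullback = f.subs({x: cx, y: cy})*sp.diff(cx, t) + g.subs({x: cx, y: cy})*sp.diff(cy, t)
print(sp.integrate(sp.simplify(pullback), (t, 0, 2*sp.pi)))  # 2*pi: omega is not exact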

Exercises 3.6.3 Exercises

1.

Determine whether the one-form \(\omega = y^2 e^{x y}\ dx + (1+x y) e^{x y}\ dy\) is exact. If it is, find a function \(f\) such that \(\omega = df\text{.}\)

Solution.

First, we notice that the component functions are smooth on \(\mathbb{R}^2\text{,}\) so we know that \(\omega\) is exact if and only if it is closed. We calculate the partial derivatives of the component functions:

\begin{equation*} \frac{\partial}{\partial y}(y^2 e^{x y}) = 2 y e^{x y} + x y^2 e^{x y}, \qquad \frac{\partial}{\partial x} \left( (1+x y) e^{x y} \right)= y e^{x y} + y(1+x y) e^{x y}. \end{equation*}

Expanding the second expression, \(y e^{x y} + y(1+x y) e^{x y} = 2 y e^{x y} + x y^2 e^{x y}\text{,}\) we see that the two expressions are equal. Thus \(\omega\) is closed, and hence it is also exact.

We are looking for a function \(f\) such that

\begin{equation*} d f = \frac{\partial f}{\partial x}\ dx + \frac{\partial f}{\partial y}\ dy = y^2 e^{x y}\ dx + (1+x y) e^{x y}\ dy. \end{equation*}

Integrating the partial derivative in \(x\text{,}\) we get

\begin{equation*} f = y e^{x y} + g(y). \end{equation*}

Substituting in the partial derivative for \(y\text{,}\) we get

\begin{equation*} e^{x y} + x y e^{x y} + g'(y) = (1+xy)e^{x y}, \end{equation*}

from which we conclude that \(g'(y) = 0\text{,}\) i.e. \(g(y) = C\text{,}\) which we set to zero. We thus have found a function \(f\) such that \(\omega = d f\text{:}\)

\begin{equation*} f(x,y) = y e^{x y}. \end{equation*}
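As a quick sanity check (illustrative only), the calculation can be confirmed with SymPy: the screening test passes and \(f(x,y) = y e^{x y}\) satisfies \(df = \omega\text{.}\)

import sympy as sp

x, y = sp.symbols('x y')
f_coef = y**2*sp.exp(x*y)          # coefficient of dx
g_coef = (1 + x*y)*sp.exp(x*y)     # coefficient of dy

# Screening test: omega is closed.
assert sp.simplify(sp.diff(f_coef, y) - sp.diff(g_coef, x)) == 0

# The potential found above satisfies df = omega.
F = y*sp.exp(x*y)
assert sp.simplify(sp.diff(F, x) - f_coef) == 0
assert sp.simplify(sp.diff(F, y) - g_coef) == 0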

2.

Determine whether the field \(\mathbf{F}(x,y) = (e^x + e^y, x e^y + x)\) is conservative. If it is, find a potential function.

Solution.

The component functions are smooth on \(\mathbb{R}^2\text{,}\) so the vector field is conservative if and only if it passes the screening test. We calculate the partial derivatives:

\begin{equation*} \frac{\partial}{\partial y} (e^x + e^y) = e^y, \qquad \frac{\partial}{\partial x}(x e^y + x) = e^y + 1. \end{equation*}

As these two expressions are not equal, we conclude that the vector field is not conservative on \(\mathbb{R}^2\text{.}\)
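Again, the screening test can be confirmed with a short SymPy computation (illustrative only):

import sympy as sp

x, y = sp.symbols('x y')
P = sp.exp(x) + sp.exp(y)   # first component of F
Q = x*sp.exp(y) + x         # second component of F

# For a conservative field on R^2 we would need dP/dy = dQ/dx.
print(sp.diff(P, y))   # exp(y)
print(sp.diff(Q, x))   # exp(y) + 1, so F fails the screening test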

3.

Determine whether or not the following sets are (a) open, (b) path connected, and (c) simply connected:

  1. \(\displaystyle S = \{(x,y)\in \mathbb{R}^2 ~|~ y \geq 0 \}\)

  2. \(\displaystyle U = \{(x,y) \in \mathbb{R}^2 ~|~ (x,y) \neq (1,1) \}\)

  3. \(\displaystyle T = \{(x,y) \in \mathbb{R}^2 ~|~ y \neq 0 \}\)

  4. The unit circle in \(\mathbb{R}^2\)

  5. The unit sphere in \(\mathbb{R}^3\)

  6. \(\displaystyle V = \mathbb{R}^3 \setminus \{ (0,0,0) \}\)

Solution.

Recall that a set is open if for all points in the set, there is an open ball centered at that point that lies within the set. It is path connected if any two points in the set can be connected by a path. It is simply connected if it is path connected, and all closed curves can be contracted to a point within the set.

  1. \(S\) consists of the upper half of the \(xy\)-plane, including the \(x\)-axis. First, it is not open, since no point on the \(x\)-axis can be the centre of an open disk contained in \(S\) (as points below the \(x\)-axis are not in \(S\)). It is however path connected, as any two points can be connected by a path, and it is simply connected, as all closed curves can be contracted to a point within \(S\text{.}\)

  2. \(U\) is the \(xy\)-plane minus the point \((1,1)\text{.}\) It is certainly open and path connected, but it is not simply connected as any closed curve surrounding \((1,1)\) cannot be contracted to a point in \(U\) (as there is a hole at \((1,1)\)).

  3. \(T\) is the \(xy\)-plane with the \(x\)-axis removed. It is an open set. However, it is not path connected, since two points on both sides of the \(x\)-axis cannot be connected by a path within \(T\text{.}\) It then follows that it is also not simply connected.

  4. The unit circle in \(\mathbb{R}^2\text{,}\) i.e. the solutions to the equation \(x^2+y^2=1\text{,}\) is not open in \(\mathbb{R}^2\text{.}\) Indeed, there is no point on the unit circle that can be surrounded by an open disk within the circle itself (note that we are considering only the circle itself here, not the disk). It is path connected, as you can connect any two points on the circle by a path on the circle (the arc between the two points). But it is not simply connected, as the circle itself (which is a closed loop) cannot be contracted to a point (recall that the interior of the circle is not part of our set).

  5. The unit sphere in \(\mathbb{R}^3\) is not open, just as for the circle in \(\mathbb{R}^2\text{.}\) It is path connected, and in this case it is also simply connected, as all closed loops on the sphere can be contracted to a point within the sphere.

  6. \(V\) is \(\mathbb{R}^3\) minus the origin. It is certainly open, as we are just removing one point from \(\mathbb{R}^3\text{,}\) and it is path connected, as any two points can be connected by a path within the set \(V\text{.}\) Is it also simply connected? The answer is yes, it is simply connected. Indeed, pick any closed curve within \(V\text{;}\) you can always contract it to a point in \(V\text{.}\) The hole at the origin does not create any issue here, because we are in \(\mathbb{R}^3\text{;}\) informally, the point is that even if you pick, say, a closed curve in the \(xy\)-plane that surrounds the origin, then you can still contract it to a point within the set \(V\text{,}\) since you can move up in the \(z\)-direction while you contract (you don't have to stick to the \(xy\)-plane in the contraction process, as \(V\) is a subset in \(\mathbb{R}^3\)). The upshot here is that it's important to keep in mind that the interpretation of simply-connectedness as meaning “no holes” is only true in \(\mathbb{R}^2\text{.}\) For instance, in \(\mathbb{R}^3\text{,}\) you can convince yourself that the requirement that all closed curves can be contracted to a point within the set could instead be interpreted as meaning that there are no “missing lines” in the set. Ultimately, it is easier to just use the definition of simply connectedness, i.e. that the set is path connected and that all closed curves can be contracted to a point within the set, to check whether a set is simply connected.

4.

Consider the one-form \(\omega = 2 \frac{x}{|y|}\ dx - \frac{x^2}{y^2}\ dy\) on the open subset \(U = \{(x,y) \in \mathbb{R}^2~|~y \lt 0\}\text{.}\) Determine whether \(\omega\) is exact, and if it is find a function \(f\) such that \(\omega = d f\text{.}\)

Solution.

Since we are restricting to \(y \lt 0\text{,}\) we can replace \(|y|\) by \(-y\text{.}\) The one-form is then

\begin{equation*} \omega = - 2 \frac{x}{y}\ dx - \frac{x^2}{y^2}\ dy\text{.} \end{equation*}

Since \(U\) is simply connected, Poincare's lemma applies. We calculate partial derivatives:

\begin{equation*} \frac{\partial}{\partial y}\left(- 2 \frac{x}{y} \right) = 2 \frac{x}{y^2}, \qquad \frac{\partial}{\partial x}\left( - \frac{x^2}{y^2} \right) = - 2 \frac{x}{y^2}. \end{equation*}

As the two expressions are not equal, we conclude that \(\omega\) is not closed on \(U\text{,}\) and thus it cannot be exact.

We remark here that the choice of \(U\) was very important. If we had instead defined the one-form on the open subset \(V = \{(x,y) \in \mathbb{R}^2~|~y > 0\}\text{,}\) then we would have obtained a different result! Indeed, on \(V\text{,}\) \(|y| = y\text{.}\) It then follows that \(\omega\) is closed, and since Poincare's lemma applies as \(V\) is simply connected, it is also exact on \(V\text{.}\) Indeed, one can check that \(\omega = d f\) with \(f(x,y) = \frac{x^2}{y}\) on \(V\text{.}\)

In fact, the largest domain of definition for \(\omega\) would be \(U \cup V\text{,}\) i.e. the set \(\{(x,y)\in\mathbb{R}^2~|~y\neq 0\}\text{.}\) However, this is not a simply connected set, so we cannot apply Poincare's lemma. In any case, \(\omega\) is not exact on this larger set, as it cannot be written as \(\omega = d f\) on all of \(U \cup V\text{.}\)
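For completeness, here is a short SymPy check of the computations in this solution on both half-planes (illustrative only; fU, gU and fV, gV denote the coefficient functions of \(\omega\) on \(U\) and on \(V\text{,}\) respectively).

import sympy as sp

x, y = sp.symbols('x y')

# On U (y < 0): |y| = -y, so omega = -2*x/y dx - x**2/y**2 dy.
fU, gU = -2*x/y, -x**2/y**2
print(sp.simplify(sp.diff(fU, y) - sp.diff(gU, x)))  # 4*x/y**2: not closed on U

# On V (y > 0): |y| = y, so omega = 2*x/y dx - x**2/y**2 dy.
fV, gV = 2*x/y, -x**2/y**2
print(sp.simplify(sp.diff(fV, y) - sp.diff(gV, x)))  # 0: closed, hence exact on V
F = x**2/y
print(sp.simplify(sp.diff(F, x) - fV), sp.simplify(sp.diff(F, y) - gV))  # 0 0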