Is it possible to attach a number to the “area” of the region bounded between the graph of \(f(t) = 1/t^2\) and the horizontal axis from \(t = 1\) onwards? Or is the area simply infinite because the domain being considered is unbounded?
As usual in calculus, our strategy is to approximate in a systematic way and then look for a trend in these approximations. Here we do not want to approximate with a Riemann sum because that would require adding up the areas of an infinite number of rectangles. Such a strategy might be feasible after we study infinite series in Chapter 25, but there is a simpler approach: we simply compute some of the area over a bounded domain \(1 \le t \le b\text{,}\) then compute some more of the area over a larger bounded domain (that is, over \(1 \le t \le b\) for a larger value of \(b\)), and so on, and look for a trend in these partial areas.
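Before making this precise, here is a rough numerical illustration of the strategy (a minimal Sage sketch, using commands like those in the Sage cells appearing later in this chapter): compute the area over \(1 \le t \le b\) for a few increasing values of \(b\) and watch for a trend.

# Partial area over the bounded domain 1 <= t <= b, for increasing b.
f(t) = 1/t^2
for b in [10, 100, 1000, 10000]:
    area, error = numerical_integral(f, 1, b)   # numerical estimate and error bound
    print(b, area)

The computed areas appear to approach a single value, which is exactly the sort of trend we are looking for.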
However, rather than the step-by-step approach described above, we will take a more continuous approach. As usual, we can consider the domain endpoint as a variable, and then the partial area values can all be collected together as an integral function:
\begin{equation*}
\text{Total area } \approx A(t) = \ccmint{1}{t}{\frac{1}{u^2}}{u} \text{.}
\end{equation*}
A larger value of \(t\) will account for more of the area over the full, unbounded domain \(t \ge 1\text{,}\) and so should be a better approximation. Is there any trend to these increasingly better approximations? That is, what is the long-term behaviour of the function \(A(t)\) as \(t \to \infty\text{?}\)
To investigate further, it would be nice to have a different expression for \(A(t)\text{,}\) one that better fits our previous strategies for investigating long-term behaviour. Using Formula 2 from Pattern 7.4.2, we have
and it seems reasonable to say that the area of the region bounded between the graph of \(f(t) = 1/t^2\) and the horizontal axis from \(t = 1\) onwards is equal to \(1\text{.}\)
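As a quick check (a sketch that assumes Sage's symbolic integrator, as in the Sage cells used elsewhere in this chapter), we can ask Sage to evaluate the improper integral directly:

u = var('u')                  # symbolic integration variable
integrate(1/u^2, u, 1, oo)    # Sage reports the value 1

which agrees with the trend in the partial areas above.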
In the definition of the “two-sided” improper integral over the whole real number line, any “break” value can be used in place of \(0\text{.}\) If both
for every value of \(a\text{,}\) and the sum of any one such pair of integrals should always equal the sum of any other such pair.
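To see why the break value does not matter, suppose \(a \lt b\) are two different break values and that all of the one-sided improper integrals involved converge (writing \(f\) for the function being integrated). Splitting the integrals over subintervals (a sketch of the reasoning) gives
\begin{equation*}
\ccmint{-\infty}{a}{f(t)}{t} + \ccmint{a}{\infty}{f(t)}{t}
= \ccmint{-\infty}{a}{f(t)}{t} + \ccmint{a}{b}{f(t)}{t} + \ccmint{b}{\infty}{f(t)}{t}
= \ccmint{-\infty}{b}{f(t)}{t} + \ccmint{b}{\infty}{f(t)}{t} \text{,}
\end{equation*}
so the pair of integrals that breaks at \(a\) has the same sum as the pair that breaks at \(b\text{.}\)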
should be equal to zero because, just like the last example in Example 24.1.5, the function being integrated exhibits odd symmetry, and the negative area over domain \(t \le 0\) should “cancel” with the positive area over domain \(t \ge 0\text{.}\)
and we have no conceptual basis we can use to justify cancelling a negatively-oriented area of infinite magnitude with a positively-oriented area of infinite magnitude. Instead, it’s best to say that (✶) diverges because the two integrals in (✶✶) do.
Whether this limit exists depends on whether the exponent on \(t\) is negative or positive. If \(p \gt 1\) then the exponent is negative and \(t^{1 - p} \to 0\) as \(t \to \infty\text{,}\) so that
The following records the fact that if we are able to integrate as \(t \to \infty\) (or as \(t \to -\infty\)), then we should be able to do so from any starting point.
Note that the relationships between improper integrals with different starting/ending points in Fact 24.2.1 agree with the pattern in Property 5 of Pattern 6.3.11. Here are some other familiar patterns from Pattern 6.3.11.
Do not get Fact 24.2.3 backwards! Example 24.1.8 demonstrates that knowing \(f(t) \to 0\) as \(t \to \infty\) is not sufficient for an improper integral to converge as \(t \to \infty\text{.}\)
While Fact 24.2.3 says that we need \(f(t) \to 0\) as \(t \to \infty\) for an improper integral to converge as \(t \to \infty\text{,}\) if \(f(t)\) converges to \(0\) too slowly, then the integral will still not converge. This is precisely what is happening with the two integrals
The function \(f(t) = 1/t\) converges to zero as \(t \to \infty\text{,}\) but it does so too slowly, and so the accumulating area grows too quickly for it to converge to a specific value. But the square in \(g(t) = 1/t^2\) allows this function to converge to zero more quickly, limiting the growth of the area as \(t \to \infty\) and allowing the trend of partial areas to stay bounded.
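Concretely, the partial areas in question are
\begin{equation*}
\ccmint{1}{t}{\frac{1}{u}}{u} = \ln t \text{,} \qquad \ccmint{1}{t}{\frac{1}{u^2}}{u} = 1 - \frac{1}{t} \text{,}
\end{equation*}
and as \(t \to \infty\) the first grows without bound while the second stays below \(1\) and tends to \(1\text{.}\)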
We saw that an improper integral of exponential decay converges in Integral 2 of Example 24.1.5, which is not surprising because exponential decay tends to zero very quickly. And this integral squares \(t\) before inputting it into exponential decay, so the integrand decays even faster. Because of this, we should expect this integral to converge. But how could we be sure? The function \(f(t) = e^{-t^2}\) is (famously) an example of a function for which it is impossible to write down an antiderivative except as an integral function (that is, by using the Fundamental Theorem of Calculus).
The comparison tests in this section allow us to determine convergence without actually computing the integral value, by comparing the integral to one whose convergence is already known.
In this section, we will only concern ourselves with integrals where \(t \to \infty\text{,}\) but similar facts/patterns hold for integrals where \(t \to -\infty\text{.}\)
If one improper integral converges, and the partial areas of a second improper integral are all smaller in magnitude than those of the first, then it stands to reason that the second improper integral should converge as well.
If the improper integral of \(g\) converges, then \(G(t)\) must be bounded, hence \(F(t)\) must be bounded as well. Since \(F(t)\) is non-decreasing, it must converge as \(t \to \infty\) (Monotone Convergence Theorem). Hence the improper integral of \(f\) converges as well.
On the other hand, suppose we start with the assumption that the improper integral of \(f\) diverges. As the partial area function \(F(t)\) is non-decreasing, we can conclude that it is unbounded. Since \(G(t) \ge F(t)\) for \(t \ge a\text{,}\) \(G(t)\) must be unbounded as well, in which case the improper integral of \(g\) diverges as well.
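As a small illustration of the test before the main example, compare \(f(t) = \frac{1}{1 + t^2}\) with \(g(t) = \frac{1}{t^2}\) (a quick sketch using the convergent integral from earlier in this chapter). For \(t \ge 1\) we have
\begin{equation*}
0 \lt \frac{1}{1 + t^2} \lt \frac{1}{t^2} \text{,}
\end{equation*}
and we computed earlier that the partial areas \(\ccmint{1}{t}{\frac{1}{u^2}}{u} = 1 - \frac{1}{t}\) remain bounded (the improper integral converges to \(1\)), so the improper integral of \(1/(1 + t^2)\) from \(t = 1\) onwards converges by comparison.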
hence this integral converges. For the comparison, first consider that exponential values are always positive. Furthermore, for \(t \ge 1\) we have \(t^2 \ge t\text{,}\) and so \(e^{t^2}\) will be further out on the exponential growth graph than \(e^t\text{.}\) Thus,
If we wanted some idea of the value of the improper integral of \(e^{-t^2}\text{,}\) we could approximate it numerically with partial integrals. The Sage cell below will allow you to do that by increasing the upper bound of integration. What do you notice as you increase the upper bound \(t\text{?}\) What does this say about the graph of \(f(t) = e^{-t^2}\text{?}\)
Sometimes an improper integral of a function that is oscillating between positive and negative values is difficult to analyze; the following fact sometimes allows us to convert the question of convergence to one about a non-negative function.
and so the improper integral of \(\abs{e^{-t} \sin(t)}\) converges by comparison with the improper integral of \(e^{-t}\text{.}\) And then the improper integral of \(e^{-t} \sin(t)\) converges by Fact 24.3.6.
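In this particular case the convergence can even be matched with an exact value; as a check (a sketch assuming the integral starts at \(t = 0\text{,}\) which may differ from the bounds used above), Sage reports

t = var('t')                            # symbolic variable
integrate(exp(-t) * sin(t), t, 0, oo)   # Sage returns 1/2

so the improper integral not only converges but has a computable value.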
An improper integral converges if it accumulates area slowly enough that the area doesn’t become unbounded. But this is directly related to how quickly the height of the graph tends to \(0\text{.}\) So we should be able to compare improper integrals based on how quickly the functions being integrated tend to \(0\text{.}\)
either both converge or both diverge. But we know that the second integral diverges by combining Example 24.1.8 and Pattern 24.2.2, and therefore so does the first integral.
We can think of vertical asymptotes as “inverses” of horizontal asymptotes. If we can sometimes attach a number to the area squeezed between a graph and a horizontal asymptote at \(y = 0\text{,}\) we should be able to sometimes attach a number to the area squeezed between a graph and a vertical asymptote. To compute such an improper integral, we take a limit as one of the boundaries approaches the location of the vertical asymptote.
(a) A left-handed improper integral at a vertical asymptote. The partial area \(\ccmint{a}{t}{f(u)}{u}\) approximates the total area \(\ccmint{a}{c}{f(t)}{t}\text{.}\)
(b) A right-handed improper integral at a vertical asymptote. The partial area \(\ccmint{t}{b}{f(u)}{u}\) approximates the total area \(\ccmint{c}{b}{f(t)}{t}\text{.}\)
The divergence of this integral should not be a surprise: the function is self-inverse (that is, \(\inv{f}(t) = f(t)\)) and so its graph is symmetric about the line \(y = t\text{.}\) Thus the accumulation of area as \(t \to 0^{+}\) should mirror the accumulation of area as \(t \to \infty\text{.}\)
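This mirroring can be made precise with the partial areas on either side of \(t = 1\) (a quick sketch): for \(0 \lt s \lt 1\text{,}\)
\begin{equation*}
\ccmint{s}{1}{\frac{1}{t}}{t} = \ln\left(\frac{1}{s}\right) = \ccmint{1}{1/s}{\frac{1}{t}}{t} \text{,}
\end{equation*}
so the area accumulated over \(s \le t \le 1\) as \(s \to 0^{+}\) is exactly the area accumulated over \(1 \le t \le 1/s\text{,}\) and both grow without bound.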
It might be a surprise that this integral also diverges, since the integral of \(g\) converges as \(t \to \infty\text{.}\) But this function's graph is not symmetric about the line \(y = t\text{,}\) so the two improper integrals (as \(t \to 0^{+}\) versus as \(t \to \infty\)) are not related. In fact, \(\inv{g}(t) = 1/\sqrt{t}\text{,}\) the integral of which diverges as \(t \to \infty\) (see Fact 24.1.9), which coincides with the divergence of the integral of \(g\) as \(t \to 0^{+}\text{.}\)
A function with a non-zero horizontal asymptote will not have a convergent integral as \(t \to \infty\text{,}\) because the area bounded between the asymptote and the horizontal axis will be infinite. That is, if we take a function whose integral converges as \(t \to \infty\) (which necessarily means the function has a horizontal asymptote at \(y = 0\text{;}\) see Fact 24.2.3), we cannot shift that function up or down without causing the integral of the shifted function to diverge as \(t \to \infty\) (see Corollary 24.2.5).
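To see this concretely, shift the convergent example from earlier in this chapter up by a constant \(c \neq 0\) (a quick sketch of the computation). The partial areas of the shifted function satisfy
\begin{equation*}
\ccmint{1}{t}{\left(\frac{1}{u^2} + c\right)}{u} = 1 - \frac{1}{t} + c(t - 1) \text{,}
\end{equation*}
which is unbounded as \(t \to \infty\) whenever \(c \neq 0\text{,}\) so the shifted integral diverges even though the integral of \(1/t^2\) itself converges. In contrast, we can shift vertical asymptotes around without affecting convergence of improper integrals near that asymptote.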
The function \(f(t) = 1/\sqrt{t - 2}\) is a shift to the right of a previous example we have computed, and has a vertical asymptote at \(t = 2\text{.}\) Let’s verify that the shifted integral converges as well: