Chapter 26 Power and Taylor series
Section 26.1 Motivation
In Section 22.1, we looked at approximating a function value with Taylor polynomials of increasing degree. We now have the framework to recognize those approximate values as a sequence of partial sums for an infinite series.
Using Taylor polynomials of higher degree should yield better approximations. This corresponds to taking partial sums that include more terms to better approximate the sum value of the infinite series. Will the full infinite series recover the exact value being approximated?
Before we explore that question, also consider that we may use Taylor polynomials based at a specific point to approximate output values of the function at all input values sufficiently near that point. In other words, the infinite series in the approximation example above is only one member of an infinite family of such approximation examples. Using the pattern of Taylor polynomials arrived at in Example 21.3.9, for each input value sufficiently near the base point we could write down a corresponding infinite series.
There are several variables involved in the expressions above. There is the index variable for the summation notation. There is also the degree of the Taylor polynomial used, but as above we are going to let that tend to infinity as we consider partial sums of an infinite series. And finally there is the input variable $x$ that makes a Taylor polynomial a function. In this chapter we will study infinite series that involve such an “input variable”.
Section 26.2 Power series
Subsection 26.2.1 Basics
Example 26.2.2.
The power series $\sum_{n=0}^{\infty} \frac{x^n}{n!}$ has partial sum functions
$$s_N(x) = \sum_{n=0}^{N} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \dotsb + \frac{x^N}{N!}.$$
If you completed Checkpoint 21.3.8, then you should recognize these partial sums as precisely the Maclaurin polynomials for $e^x$.
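These partial sums are easy to probe numerically. Below is a minimal Python sketch (the helper name is our own), assuming the exponential series identified above:

```python
from math import exp, factorial

def maclaurin_exp(x, N):
    """Degree-N partial sum of the power series sum x**n / n!."""
    return sum(x**n / factorial(n) for n in range(N + 1))

# Partial sums (Maclaurin polynomials) closing in on exp(2):
for N in (2, 5, 10, 15):
    print(N, maclaurin_exp(2.0, N), exp(2.0))
```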
If we attempt to use a power series as a function, we may run into trouble.
Example 26.2.3.
Define
$$f(x) = \sum_{n=1}^{\infty} \frac{x^n}{n}.$$
For $x = -1$ we have
$$f(-1) = \sum_{n=1}^{\infty} \frac{(-1)^n}{n},$$
which is the alternating harmonic series (up to an overall negative sign). We saw in Example 25.5.12 that this series converges conditionally, and in Example 25.1.9 we investigated its sum value numerically, determining that the alternating harmonic series appears to sum to $\ln(2)$. On the other hand, for $x = 1$ we have
$$f(1) = \sum_{n=1}^{\infty} \frac{1}{n},$$
the regular harmonic series, which we know diverges.
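A quick numerical look at these two substitutions (a minimal sketch, assuming the series $\sum_{n \ge 1} x^n/n$ as reconstructed above) shows the contrast: partial sums at $x = -1$ settle down, while partial sums at $x = 1$ keep growing.

```python
def partial_sum(x, N):
    """Partial sum of sum_{n=1}^{N} x**n / n."""
    return sum(x**n / n for n in range(1, N + 1))

for N in (10, 100, 1000, 10000):
    print(N, partial_sum(-1.0, N), partial_sum(1.0, N))
# The x = -1 column approaches -ln(2) = -0.6931...;
# the x = 1 column grows without bound, like ln(N).
```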
Example 26.2.3 demonstrates that, viewed as a function in the indeterminate variable $x$, the domain of a power series is tied to convergence/divergence of the infinite series that results from substituting particular values for that variable. We will say that a power series converges at $x = x_0$ if the infinite series that results from that substitution converges. For a domain set $D$ of input values, we will say that the power series converges on $D$ if it converges at each $x_0$ in $D$. And similarly with “converges” replaced by “diverges.”
Example 26.2.4.
The power series $\sum_{n=0}^{\infty} x^n$ is a geometric series for each value of $x$, and we know the base values for which a geometric series converges. So we can say that this power series converges on domain $-1 < x < 1$ and diverges on domains $x \le -1$ and $x \ge 1$.
We know that convergence of an infinite series depends on the terms being added trending to $0$ sufficiently quickly. So if a power series converges at a particular $x$-value, it should also converge at all $x$-values of smaller magnitude.
Fact 26.2.5.
- If $\sum_{n=0}^{\infty} c_n x^n$ converges at $x = x_1 \neq 0$, then it also converges absolutely on domain $-|x_1| < x < |x_1|$.
- If $\sum_{n=0}^{\infty} c_n x^n$ diverges at $x = x_1$, then it also diverges on both $x < -|x_1|$ and $x > |x_1|$.
Justification for Statement 1.
If $\sum c_n x^n$ converges at $x = x_1$, this means that the infinite series $\sum c_n x_1^n$ converges. We must then have $c_n x_1^n \to 0$, and so there is some tail of the term sequence (starting at index $N$, say) completely contained in the range $-1 < c_n x_1^n < 1$. For every value of $x$ in the domain $-|x_1| < x < |x_1|$ we have $|x/x_1| < 1$, and so the geometric series $\sum |x/x_1|^n$ converges (absolutely). But with $|c_n x_1^n| < 1$ for $n \ge N$, we have
$$|c_n x^n| = |c_n x_1^n| \left| \frac{x}{x_1} \right|^n \le \left| \frac{x}{x_1} \right|^n,$$
and so $\sum |c_n x^n|$ converges by comparison.
Justification for Statement 2.
This is effectively the contrapositive of Statement 1. If $\sum c_n x_1^n$ diverges, then it is not possible for $\sum c_n x^n$ to converge at any $x > |x_1|$, because if it did then Statement 1 says that the series would also converge at all values of $x$ of smaller magnitude, which includes $x = x_1$. But $\sum c_n x_1^n$ diverges. And similarly for $x < -|x_1|$.
Subsection 26.2.2 Radius of convergence
For a specific power series, Fact 26.2.5 gives us a pretty clear picture of the pattern of convergence at different $x$-values:
- If the series converges at a particular non-zero $x$-value, then it also converges at every $x$-value of smaller magnitude, positive and negative. This creates a symmetric interval of convergence on the real number line, centred at $x = 0$.
- If the series diverges at a particular non-zero $x$-value, then it also diverges at every $x$-value of larger magnitude, positive and negative. This creates a symmetric “void” region on the real number line, centred at $x = 0$, outside of which the series diverges.
It follows that there must be a boundary between the two patterns above: a boundary between $x$-values with magnitude small enough for the series to converge, and $x$-values with magnitude too large for convergence. That is, there is a symmetric pair of points on either side of $x = 0$, with convergence at all points in between and divergence at all points outside.
Pattern 26.2.6.
For a power series $\sum_{n=0}^{\infty} c_n x^n$, there exists a non-negative (possibly infinite) value $R$, called the radius of convergence, for which the series converges for all $x$-values satisfying $|x| < R$ and diverges for all $x$-values satisfying $x < -R$ or $x > R$.
Justification.
Consider the set $S$ of non-negative $x$-values for which the series converges. Note that $S$ is non-empty, since every power series of the form $\sum c_n x^n$ converges to $c_0$ at $x = 0$. If set $S$ is unbounded, then Fact 26.2.5 implies that the series must converge at all $x$-values, so take $R = \infty$. Otherwise, if set $S$ is bounded then it must have a least upper bound. Take $R$ to be that least upper bound value.
Remark 26.2.7.
Pattern 26.2.6 does not say anything about convergence/divergence at $x = \pm R$ for a reason. In the examples below, we will see that different power series can exhibit different behaviours at the boundary points of their intervals of convergence: some series converge at both points, some diverge at both points, and some converge at one and diverge at the other.
In the examples below, we will most often use the Ratio test, as two consecutive powers of $x$ will cancel to a single $x$ in a ratio:
$$\left| \frac{c_{n+1} x^{n+1}}{c_n x^n} \right| = \left| \frac{c_{n+1}}{c_n} \right| |x|.$$
Also, we will generally test for absolute convergence, which implies convergence (Fact 25.5.1). Because we know that convergence depends on the magnitude of the $x$-values and not their sign (except possibly right at $x = \pm R$), and that the interval of convergence is symmetric on the number line about $x = 0$, introducing an absolute value will help measure the maximum magnitude of $x$ for convergence.
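Since the Ratio test boils down to a limit of coefficient ratios, the radius of convergence can also be sketched numerically. The following illustration is our own (the helper and the sample coefficient patterns are assumptions, not part of the text's method); exact fractions avoid floating-point overflow:

```python
from fractions import Fraction
from math import factorial

def estimate_radius(c, n=1000):
    """Estimate R as |c(n) / c(n+1)| for large n, per the Ratio test."""
    return abs(c(n) / c(n + 1))

print(estimate_radius(lambda n: Fraction(1)))                # geometric series: R = 1
print(estimate_radius(lambda n: Fraction(1, n + 1)))         # harmonic-style coefficients: R = 1
print(estimate_radius(lambda n: Fraction(1, factorial(n))))  # exponential series: estimate grows without bound
```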
Example 26.2.8. A power series that only converges within its radius of convergence.
We know that the Geometric series $\sum_{n=0}^{\infty} x^n$ converges only for $-1 < x < 1$, and not at either endpoint $x = \pm 1$ or at any $x$-value of larger magnitude.
Example 26.2.9. Two series that each converge at only one endpoint of the interval of convergence.
- Let's apply the Ratio test to test the power series $\sum_{n=1}^{\infty} \frac{x^n}{n}$ for absolute convergence. We have
$$\left| \frac{x^{n+1}/(n+1)}{x^n/n} \right| = \frac{n}{n + 1}\, |x| \to |x| \quad \text{as } n \to \infty.$$
Recall that the Ratio test says that the series converges if this limit is less than $1$, diverges if it is greater than $1$, and the test provides no information if the limit is exactly $1$. So we have (absolute) convergence when $|x| < 1$, which corresponds to the interval $-1 < x < 1$, and have divergence when $|x| > 1$, which corresponds to the regions $x < -1$ and $x > 1$. Our radius of convergence is $R = 1$. But what about at the endpoints $x = \pm 1$? At $x = 1$ we get the harmonic series, which we know diverges, but at $x = -1$ we get the alternating harmonic series, which we know (conditionally) converges. So the complete interval of convergence is $-1 \le x < 1$.
- Analyzing absolute convergence of the power series $\sum_{n=1}^{\infty} \frac{(-1)^n x^n}{n}$ leads to the same calculation and results as above, since the absolute value brackets will nullify the alternating $(-1)^n$ factor in the terms. So again we have $R = 1$, with (absolute) convergence on domain $-1 < x < 1$ and divergence on domains $x < -1$ and $x > 1$. But this time the behaviour at the endpoints has flipped, since substituting $x = -1$ will nullify the alternating sign pattern in the coefficient sequence and leave us with the divergent harmonic series, whereas substituting $x = 1$ will maintain the alternating pattern and leave us with the convergent alternating harmonic series. So the complete interval of convergence is $-1 < x \le 1$.
Example 26.2.10. A series that converges at both endpoints.
Again we use the Ratio test to test the power series
$$\sum_{n=1}^{\infty} \frac{x^n}{n^2}$$
for absolute convergence:
$$\left| \frac{x^{n+1}/(n+1)^2}{x^n/n^2} \right| = \left( \frac{n}{n + 1} \right)^2 |x| \to |x| \quad \text{as } n \to \infty.$$
So the radius of convergence is $R = 1$. At one endpoint of the interval of convergence, when $x = 1$, we have the convergent $p$-series $\sum \frac{1}{n^2}$. At the other endpoint, when $x = -1$, we have the alternating $p$-series $\sum \frac{(-1)^n}{n^2}$. But this series converges absolutely, so it converges as well, and our complete interval of convergence is $-1 \le x \le 1$.
Example 26.2.11. A series with $R = \infty$.
The power series $\sum_{n=0}^{\infty} \frac{x^n}{n!}$ from Example 26.2.2 converges for every value of $x$ (the Ratio test computation appears in Example 26.3.2), so its radius of convergence is $R = \infty$.
Example 26.2.12. A series with $R = 0$.
For the power series $\sum_{n=0}^{\infty} n!\, x^n$, the Ratio test gives
$$\left| \frac{(n+1)!\, x^{n+1}}{n!\, x^n} \right| = (n + 1)|x| \to \infty$$
for every $x \neq 0$, so the series converges only at $x = 0$ and $R = 0$.
From our examples so far you might think that $0$, $1$, and $\infty$ are the only possibilities for a radius of convergence. Here is an example with radius of convergence $\frac{1}{5}$. But if you replace the occurrences of the number $5$ in this example by any other positive number, you can generate an example with that radius of convergence, demonstrating that any positive number is possible as a radius of convergence of some power series.
Example 26.2.13. A series with $R = \frac{1}{5}$.
Consider the power series $\sum_{n=0}^{\infty} 5^n x^n$, and apply the Root test for absolute convergence:
$$\sqrt[n]{\left| 5^n x^n \right|} = 5|x|.$$
No limit is necessary, as this result is constant relative to the index variable $n$. But as with the Ratio test we need this “limit” result to be less than $1$ to have convergence, and $5|x| < 1$ occurs when $|x| < \frac{1}{5}$, so our radius of convergence is $R = \frac{1}{5}$.
Warning 26.2.14.
When applying the Ratio or Root tests to a power series, make sure you are considering the limit as $n \to \infty$, not as $x \to \infty$.
Subsection 26.2.3 Shifted power series
With inspiration from Taylor series, we may also wish to consider horizontal shifts of power series:
$$\sum_{n=0}^{\infty} c_n (x - a)^n.$$
Using a change of variables $u = x - a$, we can express this as a “normal” power series,
$$\sum_{n=0}^{\infty} c_n u^n,$$
which we know will have a radius of convergence $R$ so that the series converges for all values of $u$ satisfying $|u| < R$. In terms of the original indeterminate $x$, we (unsurprisingly) end up with a shifted interval of convergence as well:
$$|x - a| < R, \qquad a - R < x < a + R.$$
So we still have a radius of convergence $R$, but the interval of convergence is now centred at $x = a$ instead of at $x = 0$. As before, the series will diverge outside of that interval, and convergence/divergence at the endpoints needs to be separately investigated.
Example 26.2.15.
The shifted power series
$$\sum_{n=1}^{\infty} n (x - 2)^n$$
will have an interval of convergence centred at $x = 2$. As usual, we use the Ratio test to test for absolute convergence:
$$\left| \frac{(n + 1)(x - 2)^{n+1}}{n (x - 2)^n} \right| = \frac{n + 1}{n}\, |x - 2| \to |x - 2|$$
as $n \to \infty$. So the series converges for
$$|x - 2| < 1, \qquad -1 < x - 2 < 1, \qquad 1 < x < 3.$$
From the second inequality above, our radius of convergence is $R = 1$, and the interval of convergence has endpoints $x = 1$ and $x = 3$. At those endpoints, we know from the third inequality above that $x - 2 = \pm 1$, and the series becomes
$$\sum_{n=1}^{\infty} n\, (\pm 1)^n.$$
Whether the series is alternating (at $x = 1$) or not (at $x = 3$), it diverges because the magnitude of the terms diverges to infinity. Thus the interval of convergence is $1 < x < 3$, where both endpoints are excluded.
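As a numerical sanity check on this example (a sketch using the series as reconstructed above, with sample points chosen for illustration), partial sums settle down inside the interval and blow up at or beyond an endpoint:

```python
def partial_sum(x, N):
    """Partial sum of the shifted series sum_{n=1}^{N} n * (x - 2)**n."""
    return sum(n * (x - 2) ** n for n in range(1, N + 1))

for x in (2.5, 3.0, 3.5):   # inside, at an endpoint, and outside the interval
    print(x, [round(partial_sum(x, N), 2) for N in (10, 20, 40)])
```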
Section 26.3 Taylor series
In this section our analysis will focus primarily on Maclaurin series, as Taylor series are just shifts of Maclaurin series. But since Maclaurin series are just special cases of Taylor series, we will refer to them all as Taylor series.
Subsection 26.3.1 Creating Taylor series
Recall that the general pattern for a Taylor polynomial based at $x = 0$ (that is, a Maclaurin polynomial) for a function $f$ is
$$T_N(x) = \sum_{n=0}^{N} \frac{f^{(n)}(0)}{n!}\, x^n,$$
where $f^{(n)}$ represents the $n$th derivative of $f$ (and $f^{(0)}$ is just $f$ itself). If $f^{(n)}(0)$ exists for all $n$, then we can consider $T_N(x)$ as a partial sum of a power series, where we just allow the coefficient pattern to continue indefinitely.
Definition 26.3.1. Taylor series at $x = 0$.
For a function $f$ whose derivatives of all orders exist at $x = 0$, the Taylor series at $x = 0$ for $f$ is the power series
$$\sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\, x^n.$$
Taylor series for different functions will have different radii of convergence, depending on the pattern of derivative values at $x = 0$. On its interval of convergence, we can in some sense think of the sequence of Taylor polynomials as “converging” to the Taylor series.
Example 26.3.2. Taylor series for the natural exponential function.
Since every derivative of $f(x) = e^x$ is again $e^x$, and $e^0 = 1$, the Taylor series at $x = 0$ for the natural exponential function is
$$\sum_{n=0}^{\infty} \frac{x^n}{n!}.$$
Using the Ratio test we have
$$\left| \frac{x^{n+1}/(n+1)!}{x^n/n!} \right| = \frac{|x|}{n + 1} \to 0 \quad \text{as } n \to \infty.$$
Since this limit is always less than $1$, the Taylor series for the natural exponential function converges for all values of $x$.
Example 26.3.3. Taylor series for shifted logarithm.
Consider $f(x) = \ln(1 + x)$. Since $\frac{d}{dx}(1 + x) = 1$, applying the Chain Rule to the derivatives of $f$ is simple. Let's look for the pattern in the derivatives:
$$f'(x) = \frac{1}{1 + x}, \qquad f''(x) = \frac{-1}{(1 + x)^2}, \qquad f'''(x) = \frac{2}{(1 + x)^3}, \qquad f^{(4)}(x) = \frac{-(2 \cdot 3)}{(1 + x)^4}, \qquad \dotsc$$
So for $n \ge 1$ the Taylor series coefficient pattern is
$$\frac{f^{(n)}(0)}{n!} = \frac{(-1)^{n+1} (n - 1)!}{n!} = \frac{(-1)^{n+1}}{n}.$$
The initial term $f(0)$ does not fit into this pattern, but in fact $f(0) = \ln(1) = 0$, so it doesn't have to. Thus the Taylor series is
$$\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\, x^n,$$
where we begin the sum at index $n = 1$ because $f(0) = 0$. The calculations in Example 26.2.9 demonstrate that this series converges absolutely for $-1 < x < 1$. At $x = 1$ this is an alternating harmonic series, which we know converges. At $x = -1$ this is the negative of the harmonic series, and so diverges. (But $x = -1$ is not in the domain of $f$ anyway.) So the Taylor series for $\ln(1 + x)$ at $x = 0$ converges on interval $-1 < x \le 1$.
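A numerical spot-check of this series against the logarithm itself (an illustration only; the helper name is our own):

```python
from math import log

def ln_series(x, N):
    """Partial sum of sum_{n=1}^{N} (-1)**(n+1) * x**n / n."""
    return sum((-1) ** (n + 1) * x**n / n for n in range(1, N + 1))

for x in (0.5, 1.0):
    print(x, ln_series(x, 50), log(1 + x))
# Agreement is excellent at x = 0.5; at the endpoint x = 1 the
# alternating harmonic series converges to log(2), but only slowly.
```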
Example 26.3.4. Taylor series for cosine.
In Example 21.3.6 we determined the general pattern for a Maclaurin polynomial for cosine to be
$$1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \dotsb + \frac{(-1)^k x^{2k}}{(2k)!}.$$
Recall that the polynomial only involves even powers of the indeterminate $x$, because every second higher derivative of cosine evaluates to $0$ at $x = 0$. To obtain the Taylor series at $x = 0$ for cosine we use the same pattern:
$$\sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!}\, x^{2k}.$$
Using the Ratio test we have
$$\left| \frac{x^{2(k+1)}/(2(k+1))!}{x^{2k}/(2k)!} \right| = \frac{x^2}{(2k + 1)(2k + 2)} \to 0 \quad \text{as } k \to \infty,$$
so this Taylor series converges for all values of $x$.
Checkpoint 26.3.5. Taylor series for sine.
Use the Maclaurin polynomial pattern you developed in Checkpoint 21.3.7 to create the Taylor series at $x = 0$ for the sine function, and investigate its convergence.
Warning 26.3.6.
When computing a Taylor series, do not substitute $x = 0$ into your derivative calculations until you can see the pattern of all higher derivatives of $f$.
Subsection 26.3.2 Limits of Taylor series
We have looked at a number of examples of Taylor series in Subsection 26.3.1 and investigated their convergence. But to what do they converge? When we created Taylor polynomials, our goal was to approximate functions with simpler polynomial functions, and we reasoned that higher-degree polynomials should be better approximations because they capture more of the behaviour of the original function. Being an “infinite-degree” polynomial, does the Taylor series for a function become an “infinitely good” approximation of the original function?
To answer this question, we should first remember what it means for a series to converge to a sum value: the sequence of partial sums should approach that value. Partial sums for a Taylor series are simply Taylor polynomials, and to measure how close a Taylor polynomial value (evaluated at some particular $x$-value in the interval of convergence for the Taylor series) is to a potential limiting value, we can consider the difference between the partial sum value and the potential limiting value. (See Fact 22.3.31.) And the “potential limiting value” we propose is the original function value $f(x)$.
Definition 26.3.7. Remainder sequence for a Taylor series.
For a function $f$ with Taylor polynomials $T_N(x)$ at $x = 0$, the remainder sequence at a particular $x$-value is
$$R_N(x) = f(x) - T_N(x).$$
Fact 26.3.8.
At a particular $x$-value, the Taylor series for $f$ converges to the value $f(x)$ if and only if $R_N(x) \to 0$ as $N \to \infty$.
Justification.
By definition, at a particular $x$-value the series
$$\sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\, x^n$$
converges to $f(x)$ if the sequence of partial sum values $T_N(x)$ converges to $f(x)$ for that value of $x$. But since $R_N(x) = f(x) - T_N(x)$, Fact 22.3.31 says that $T_N(x) \to f(x)$ is equivalent to $R_N(x) \to 0$.
Remark 26.3.9.
It is tempting to think of the Taylor polynomials $T_N$ as a sequence of functions, converging to the function $f$. But in Fact 26.3.8, the reality is that $T_N(x)$ is a sequence of numbers at a specific value of $x$, converging to a specific function output value $f(x)$. We have not developed the conceptual tools necessary to discuss sequences of functions and their convergence. At this point, the correct thing to say about $T_N \to f$ is that this convergence occurs pointwise. (That is, this convergence may be valid or invalid at different points on the number line.)
Recall that we have already stated a bound on the error in approximating $f(x)$ by the Taylor polynomial value $T_N(x)$ for a specific value of $x$ (Fact 21.3.10). Since $R_N(x)$ measures that error, we have
$$|R_N(x)| \le \frac{M}{(N + 1)!}\, |x|^{N+1},$$
where $M$ is a bound on the magnitude of $f^{(N+1)}$ between $0$ and $x$.
Example 26.3.10. Remainder of a Taylor approximation to the natural exponential function.
In Example 26.3.2 we determined that the Taylor series for the exponential function at $x = 0$ converges for all values of $x$. For all $N$ we have $f^{(N+1)}(x) = e^x$, and since the exponential function is an increasing function, on any given domain $0 \le x \le b$ we have $e^x \le e^b$. Thus
$$|R_N(x)| \le \frac{e^b}{(N + 1)!}\, x^{N+1}.$$
In the ratio
$$\frac{x^{N+1}}{(N + 1)!},$$
each of the numerator and denominator is a product of $N + 1$ factors. But the factors in the numerator all have the same magnitude, while those in the denominator become arbitrarily large, and we conclude that
$$\frac{x^{N+1}}{(N + 1)!} \to 0 \quad \text{as } N \to \infty.$$
By the Squeeze Theorem, we must also have $R_N(x) \to 0$. And a similar argument can be made to demonstrate the same for negative $x$. We conclude that the Taylor series for the exponential function converges to the value of the exponential function at all points; that is,
$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$
is always true. In particular, using $x = 1$ gives us an alternative definition of Euler's number:
$$e = \sum_{n=0}^{\infty} \frac{1}{n!} = 1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \dotsb.$$
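The factorials in the denominators make this series converge very quickly, as a short computation shows:

```python
from math import e, factorial

# Partial sums of sum 1/n! racing toward Euler's number.
for N in (5, 10, 15):
    approx = sum(1 / factorial(n) for n in range(N + 1))
    print(N, approx, abs(approx - e))
```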
Checkpoint 26.3.11. Evaluating Taylor series for sine and cosine.
Demonstrate that the Taylor series for sine and cosine at $x = 0$ both converge to the value of their respective originating function for all values of $x$.
Section 26.4 Functions defined by power series
Any input-output process defines a function. And any power series defines such a process: if the input value is in the interval of convergence of the series, then the output value is the limiting value of the sequence of partial sums for that value of $x$. For example, we have already seen that the natural exponential function is equivalent to the power-series-defined function
$$f(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!}.$$
If we are to view power series as functions, a number of questions arise.
- How do we determine where power series functions are continuous, hence integrable? How do we integrate power series functions?
- How do we determine where power series functions are differentiable, and how do we compute their derivatives?
Throughout this chapter we will explore and state results for Taylor series based at $x = 0$ and for power series of the form $\sum_{n=0}^{\infty} c_n x^n$, but all results are true for Taylor series at other base points and for “shifted” power series.
Subsection 26.4.1 Term-by-term integration/differentiation
Let's first consider a power series that is a Taylor series at $x = 0$ for a function $f$ for which $f^{(n)}(0)$ is defined for all $n$, and let $F$ be an antiderivative of $f$, so that $F^{(n+1)} = f^{(n)}$ for every $n$. The Taylor series at $x = 0$ for $F$ is then
$$F(0) + \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{(n + 1)!}\, x^{n+1}.$$
Let's line this Taylor series up with the one for $f$:
$$f:\ \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\, x^n, \qquad F:\ F(0) + \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} \cdot \frac{x^{n+1}}{n + 1}.$$
It appears that each term of the series for $F$ can be obtained by term-by-term integration of the series for $f$ (see Formula 1 in Pattern 7.4.2), along with the introduction of an arbitrary constant term $F(0)$.
Now consider the Taylor series based at $x = 0$ for the derivative of $f$. Let $g$ represent the derivative of $f$, so that $g^{(n)} = f^{(n+1)}$ for every $n$, and the Taylor series at $x = 0$ for $g$ is
$$\sum_{n=0}^{\infty} \frac{f^{(n+1)}(0)}{n!}\, x^n.$$
Again, let's line this Taylor series up with the one for $f$:
$$f:\ \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\, x^n, \qquad g:\ \sum_{n=1}^{\infty} \frac{f^{(n)}(0)}{n!} \cdot n x^{n-1}.$$
This time it appears that each term of the series for $g = f'$ can be obtained by term-by-term differentiation of the series for $f$ (see Pattern 12.4.3).
It's important to note at this point that the Taylor series for $F$ and $g$ above don't necessarily converge to those respective functions for all $x$. We'll address questions of convergence in Subsection 26.4.2, but for now the next two examples give us hope that term-by-term operations yield the desired results.
Example 26.4.1. Integrating/differentiating the exponential Taylor series.
Recall that the Taylor series for the natural exponential function is
$$1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dotsb = \sum_{n=0}^{\infty} \frac{x^n}{n!}. \tag{✶}$$
Applying term-by-term integration to this series we obtain
$$C + x + \frac{x^2}{2 \cdot 1!} + \frac{x^3}{3 \cdot 2!} + \frac{x^4}{4 \cdot 3!} + \dotsb.$$
However, consolidating denominators in each term, we see that this series is identical to (✶), except that it is missing the constant term $1$ (and includes an arbitrary constant $C$ instead). So term-by-term integration has resulted in a power series that converges to $e^x - 1 + C$, which conveniently agrees with Pattern 9.6.1. Applying term-by-term differentiation to (✶) instead, we obtain
$$0 + 1 + \frac{2x}{2!} + \frac{3x^2}{3!} + \dotsb = 1 + x + \frac{x^2}{2!} + \dotsb,$$
which simplifies to the Taylor series for the exponential function, agreeing with Pattern 12.5.7.
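We can let a computer algebra system replay this bookkeeping. Here is a small sympy sketch of our own, applied to a truncated stand-in for the full series:

```python
import sympy as sp

x = sp.symbols('x')
# Degree-8 Taylor polynomial of exp(x), standing in for the full series.
poly = sum(x**n / sp.factorial(n) for n in range(9))

print(sp.expand(sp.integrate(poly, x)))  # same terms, shifted up one degree, no constant term
print(sp.expand(sp.diff(poly, x)))       # the degree-7 Taylor polynomial of exp(x) again
```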
Example 26.4.2. Integrating/differentiating the Taylor series for cosine.
The Taylor series at $x = 0$ for cosine is
$$\sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!}\, x^{2k} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \dotsb.$$
Let's integrate each term of this series:
$$C + x - \frac{x^3}{3 \cdot 2!} + \frac{x^5}{5 \cdot 4!} - \dotsb = C + \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k + 1)!}\, x^{2k+1}.$$
If you completed Checkpoint 26.3.5, you will recognize this result as precisely the Taylor series at $x = 0$ for the sine function (along with an arbitrary constant $C$), conveniently agreeing with Pattern 10.5.2.
Now let's differentiate each term of the Taylor series for cosine. Since the first term in the series is a constant whose derivative is $0$, we will simply shift indices in the term-by-term differentiated series:
$$\frac{d}{dx}\left[ \frac{(-1)^k}{(2k)!}\, x^{2k} \right] = \frac{(-1)^k}{(2k - 1)!}\, x^{2k-1},$$
and our differentiated sum becomes
$$\sum_{k=1}^{\infty} \frac{(-1)^k}{(2k - 1)!}\, x^{2k-1} = -\left( x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dotsb \right).$$
Again, this new series is the Taylor series at $x = 0$ for the sine function, except for the extra negative sign out front, and so this result agrees with Pattern 12.6.7.
Subsection 26.4.2 Convergence of integrated/differentiated power series
Naturally, questions of convergence arise.
- For $F$ an antiderivative of $f$, do the Taylor series for $f'$, $f$, and $F$ all have the same interval of convergence?
- For what values of $x$ does each series converge to the value of the function from which it was created?
We’ll address the question of convergence more generally.
Fact 26.4.3. Convergence of differentiated/integrated power series.
If a power series $\sum c_n x^n$ has radius of convergence $R$, then the term-by-term-differentiated series $\sum n c_n x^{n-1}$ and the term-by-term-integrated series $\sum \frac{c_n}{n+1} x^{n+1}$ both have radius of convergence $R$ as well.
Justification for the differentiated series.
First suppose $x$ lies in $0 < x < R$. (A similar argument to the one provided below can be made for negative $x$.) We know from Fact 26.2.5 that $\sum c_n x^n$ converges absolutely, and in fact so does $\sum c_n x_1^n$ for any value of $x_1$ between $x$ and $R$. Choose such a value of $x_1$ (for example, $x_1$ could be the average of $x$ and $R$). We will compare the absolute convergence of the differentiated series with that of $\sum c_n x_1^n$.
One may easily verify with the Ratio test that the series $\sum n t^n$ has radius of convergence $1$. Since $0 < x/x_1 < 1$, the series $\sum n (x/x_1)^n$ converges absolutely, and the sequence of terms must converge to $0$. Therefore, any constant multiple of this sequence must converge to $0$ as well. In particular, if we divide each term by $x$ we have
$$\frac{n}{x} \left( \frac{x}{x_1} \right)^n \to 0.$$
As such, there must exist some tail of this sequence (starting at index $N$, say) completely contained in the open range $(0, 1)$. As both $x$ and $x_1$ are positive, for all $n \ge N$ we have
$$0 < \frac{n}{x} \left( \frac{x}{x_1} \right)^n < 1.$$
We may multiply this inequality through by the non-negative value $|c_n x_1^n|$, and take absolute values of everything since these are all positive values, to obtain
$$\left| n c_n x^{n-1} \right| = \frac{n}{x} \left( \frac{x}{x_1} \right)^n \left| c_n x_1^n \right| \le \left| c_n x_1^n \right|$$
for all $n \ge N$. Comparison with the absolutely convergent series $\sum c_n x_1^n$ now shows that the differentiated series converges absolutely at $x$.
Now suppose $x$ is outside of $-R \le x \le R$. Again we will assume that $x$ is positive, as a similar argument will work in the case that $x$ is negative. We essentially make a “reflected” argument of the one above, reflected at $x = R$. Again choose $x_1$ between $R$ and $x$, which will also lie outside of the radius of convergence of $\sum c_n x^n$. Then $x/x_1 > 1$, and so
$$n \left( \frac{x}{x_1} \right)^n \to \infty.$$
Any constant multiple of this sequence will also diverge to infinity. In particular, dividing every term by $x$ gives us
$$\frac{n}{x} \left( \frac{x}{x_1} \right)^n \to \infty,$$
and so there is some tail of this sequence where the terms become and stay greater than $1$. Performing similar manipulations as in the convergent case above, we may say that there is an $N$ where
$$\left| n c_n x^{n-1} \right| \ge \left| c_n x_1^n \right|$$
for all $n \ge N$. Since $x_1$ lies outside of the radius of convergence, the terms $c_n x_1^n$ cannot converge to $0$ (otherwise, as in the justification of Fact 26.2.5, the series would converge at points between $R$ and $x_1$), and so neither can the terms $n c_n x^{n-1}$. Therefore the differentiated series diverges at $x$.
Justification for the integrated series.
First suppose $x$ lies within the radius of convergence, $-R < x < R$, so that $\sum |c_n x^n|$ converges. For each $n$ we have
$$\left| \frac{c_n}{n + 1}\, x^{n+1} \right| = \frac{|x|}{n + 1} \left| c_n x^n \right| \le |x| \left| c_n x^n \right|.$$
Therefore, the integrated series $\sum \frac{c_n}{n+1} x^{n+1}$ converges absolutely, by comparison.
Now consider a value of $x$ outside of $-R \le x \le R$. If the integrated series converged at $x$, then $x$ would be within the interval of convergence of the series $\sum \frac{c_n}{n+1} x^{n+1}$. If we differentiate this series term-by-term, the resulting series would converge on (at least) the same open interval, by the first part of this justification. But differentiating that series term-by-term yields
$$\sum_{n=0}^{\infty} c_n x^n,$$
which is simply the original series, and hence has radius of convergence $R$. Since we assumed $x$ to be outside of this radius of convergence, we have a contradiction, and we conclude that the integrated series must diverge at $x$.
So integrated/differentiated power series converge on the same interval that the original power series does (except possibly at the endpoints of that interval). But to what do they converge? Thankfully, the results of Example 26.4.1 and Example 26.4.2 are not coincidences or special cases. But justifying this is beyond the scope of this text.
Theorem 26.4.4. Derivatives/antiderivatives of power series.
Suppose the power series $\sum_{n=0}^{\infty} c_n x^n$ has positive (possibly infinite) radius of convergence $R$, and let $f$ be the function defined by this power series on its (open) interval of convergence $-R < x < R$.
- The function $f$ is continuous on $-R < x < R$, hence integrable, and the term-by-term-integrated series
$$C + \sum_{n=0}^{\infty} \frac{c_n}{n + 1}\, x^{n+1}$$
determines the antiderivatives of $f$ on that domain.
- The function $f$ is differentiable on $-R < x < R$, and its derivative is equal to the function determined by the term-by-term-differentiated power series
$$\sum_{n=1}^{\infty} n c_n x^{n-1}.$$
While the justification for Statement 2 of Theorem 26.4.4 is beyond the scope of this text, it can be used to justify Statement 1. Every differentiable function is continuous (Pattern 11.7.11), and we can use term-by-term differentiation of the power series expression for the integrated series to arrive back at the series for $f$, demonstrating that the integrated series represents an antiderivative of $f$.
We can also extend Theorem 26.4.4 to higher derivatives.
Corollary 26.4.5.
On its open interval of convergence, a power series function $f(x) = \sum_{n=0}^{\infty} c_n x^n$ has derivatives of all orders, and each may be computed by repeated term-by-term differentiation:
$$f^{(k)}(x) = \sum_{n=k}^{\infty} n (n - 1) \dotsm (n - k + 1)\, c_n x^{n-k}.$$
Subsection 26.4.3 Using differentiation/integration to compute power series
Instead of computing higher and higher derivatives of a function in order to compute its Taylor series, an alternative is to use term-by-term integration or differentiation of a known power series representation for a function to obtain a power series representation for its antiderivatives or its derivatives.
Example 26.4.6. Derivative of the geometric series.
We know that
$$\frac{1}{1 - x} = \sum_{n=0}^{\infty} x^n, \qquad -1 < x < 1.$$
Differentiating both sides term-by-term yields
$$\frac{1}{(1 - x)^2} = \sum_{n=1}^{\infty} n x^{n-1}$$
on the same domain.
Checkpoint 26.4.7.
Use Corollary 26.4.5 to continue the pattern of Example 26.4.6, obtaining a general power series representation for every power $\frac{1}{(1 - x)^k}$.
Example 26.4.8. Integral of the geometric series.
We know that
$$\frac{1}{1 - x} = \sum_{n=0}^{\infty} x^n, \qquad -1 < x < 1.$$
Integrating both sides, we obtain
$$-\ln(1 - x) = C + \sum_{n=0}^{\infty} \frac{x^{n+1}}{n + 1}.$$
(On domain $-1 < x < 1$ there is no need to include absolute value brackets inside the logarithm.) Both the infinite series and the logarithm evaluate to $0$ at $x = 0$, so we conclude that $C = 0$. Shifting indices in the series using the substitution $m = n + 1$, we arrive at
$$-\ln(1 - x) = \sum_{m=1}^{\infty} \frac{x^m}{m}.$$
In particular, for $x = \frac{1}{2}$ we have
$$-\ln\left(\tfrac{1}{2}\right) = \sum_{m=1}^{\infty} \frac{1}{m\, 2^m}.$$
This provides us with an interesting representation of $\ln(2)$ as an infinite series:
$$\ln(2) = \sum_{m=1}^{\infty} \frac{1}{m\, 2^m}.$$
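This representation converges quickly, since the terms shrink geometrically; a short computation makes that concrete:

```python
from math import log

# Partial sum of sum_{m>=1} 1 / (m * 2**m) versus log(2).
s = sum(1 / (m * 2**m) for m in range(1, 31))
print(s, log(2), abs(s - log(2)))
```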
Example 26.4.9. Integral of the natural logarithm.
First, we know that $\ln(1 + x)$ is an antiderivative of $\frac{1}{1 + x}$, and
$$\frac{1}{1 + x} = \frac{1}{1 - (-x)}.$$
So we can again use the Geometric series to obtain a power series representation:
$$\frac{1}{1 + x} = \sum_{n=0}^{\infty} (-x)^n = \sum_{n=0}^{\infty} (-1)^n x^n, \qquad -1 < x < 1.$$
Now integrate both sides of
$$\frac{1}{1 + x} = \sum_{n=0}^{\infty} (-1)^n x^n$$
to arrive at
$$\ln(1 + x) = C + \sum_{n=0}^{\infty} \frac{(-1)^n}{n + 1}\, x^{n+1}.$$
Both $\ln(1 + x)$ and the infinite series evaluate to $0$ at $x = 0$, so we conclude that $C = 0$. Shifting indices in the series using the substitution $m = n + 1$, we arrive at
$$\ln(1 + x) = \sum_{m=1}^{\infty} \frac{(-1)^{m+1}}{m}\, x^m.$$
Finally, we will integrate term-by-term once again to obtain an antiderivative of $\ln(1 + x)$:
$$\int \ln(1 + x)\, dx = C + \sum_{m=1}^{\infty} \frac{(-1)^{m+1}}{m (m + 1)}\, x^{m+1}.$$
Using
$$\frac{1}{m (m + 1)} = \frac{1}{m} - \frac{1}{m + 1},$$
we can rearrange:
$$\sum_{m=1}^{\infty} \frac{(-1)^{m+1}}{m (m + 1)}\, x^{m+1} = \sum_{m=1}^{\infty} (-1)^{m+1} \left( \frac{1}{m} - \frac{1}{m + 1} \right) x^{m+1}.$$
We can then split this into two series:
$$\sum_{m=1}^{\infty} \frac{(-1)^{m+1}}{m}\, x^{m+1} - \sum_{m=1}^{\infty} \frac{(-1)^{m+1}}{m + 1}\, x^{m+1},$$
where the first series is $x \ln(1 + x)$, and (after shifting indices) the second series is $-\bigl( \ln(1 + x) - x \bigr)$. Adding these back together yields
$$x \ln(1 + x) + \ln(1 + x) - x = (1 + x) \ln(1 + x) - x.$$
For an indefinite integral we should really have an arbitrary constant of integration, so we conclude that
$$\int \ln(1 + x)\, dx = (1 + x) \ln(1 + x) - x + C.$$
We can also use term-by-term integration to help estimate definite integral values.
Example 26.4.10. Estimating a definite integral with partial sums.
Suppose we want to estimate a definite integral of the form $\int_0^b f(x)\, dx$ to within a given error tolerance.
Recall that we can use any choice of antiderivative for the integrand function to compute a definite integral (Pattern 13.6.2). In particular, we can create an antiderivative through term-by-term integration of a power series representation of the integrand function. Conveniently, our integrand can be interpreted as the sum of a geometric series whose ratio involves a power of $x$.
Our power series representation of the integrand is valid where the magnitude of that ratio is less than $1$, a domain which contains our domain of integration. And by Theorem 26.4.4, our antiderivative power series is also valid on that domain.
We have
$$\int_0^b f(x)\, dx = F(b) - F(0),$$
but every term in our power series for $F$ involves a non-trivial power of $x$. So there is no constant term and $F(0) = 0$, which means that we only need to estimate $F(b)$.
The result is an alternating series with terms whose absolute values are decreasing, so we may apply Fact 25.5.13 to estimate its sum by a partial sum,
where the error bound is the magnitude of the first omitted term in the sum. What is the smallest value of $n$ so that this error bound is less than the required tolerance? The term denominator is first larger than the reciprocal of the tolerance at $n = 4$, so we only need the first four terms of the sum (indices $0$ to $3$ inclusive).
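Since the specific integral and tolerance of this example are not reproduced here, the following sketch applies the same technique to a hypothetical stand-in: estimating $\int_0^{1/2} \frac{1}{1 + x^2}\, dx$ to within $10^{-6}$ using the alternating-series error bound.

```python
from math import atan

# Term-by-term integration of 1/(1 + x**2) = sum (-1)**n * x**(2n) gives
# F(x) = sum (-1)**n * x**(2n+1) / (2n+1); the truncation error of this
# alternating series is bounded by the first omitted term.
tol = 1e-6
estimate, n = 0.0, 0
while True:
    term = (-1) ** n * 0.5 ** (2 * n + 1) / (2 * n + 1)
    if abs(term) < tol:        # first omitted term already below tolerance
        break
    estimate += term
    n += 1

print(estimate, atan(0.5))     # the exact value is arctan(1/2)
```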
Subsection 26.4.4 Convergence at endpoints: Abel’s Theorem
Example 26.4.11. Endpoint convergence of a shifted logarithm.
Suppose we would like to have a power series representation of the shifted logarithm function $f(x) = \ln(1 + x)$. We have
$$f'(x) = \frac{1}{1 + x} = \frac{1}{1 - (-x)},$$
and we may interpret this as the sum of a geometric series with ratio $-x$:
$$\frac{1}{1 + x} = \sum_{n=0}^{\infty} (-1)^n x^n.$$
This power series representation is valid for $|-x| < 1$, which for this ratio is equivalent to the domain $-1 < x < 1$. Integrating this power series term-by-term will result in an antiderivative of $f'$, which must then be a vertical shift of $f$. Evaluating at $x = 0$ shows that shift to be $0$, and shifting indices as before gives
$$\ln(1 + x) = \sum_{m=1}^{\infty} \frac{(-1)^{m+1}}{m}\, x^m.$$
(Compare with the Taylor series for this function that we computed in Example 26.3.3.)
Since our series for $f'$ converges on $-1 < x < 1$, Theorem 26.4.4 implies that our series for $f$ converges on that same domain, and for each value of $x$ in that domain the series converges to the value of $\ln(1 + x)$. For $x = -1$ the series is a (negative) harmonic series, and hence diverges, which is to be expected since $\ln(0)$ is undefined. For $x = 1$ the series is an alternating harmonic series, which we know converges. But what does it converge to? We hope that it converges to $\ln(2)$.
Theorem 26.4.12. Abel's Theorem.
Suppose a power series has radius of convergence $R$ and converges to a function $f$ on $-R < x < R$. If the series also converges at the endpoint $x = R$, and $f$ can be extended to a function that is continuous from the left at $x = R$, then the series at $x = R$ converges to that limiting value:
$$\sum_{n=0}^{\infty} c_n R^n = \lim_{x \to R^-} f(x).$$
A similar statement holds at the endpoint $x = -R$, with continuity from the right.
Example 26.4.13. Abel’s Theorem for a shifted logarithm.
Returning to Example 26.4.11, the function $\ln(1 + x)$ is continuous on its entire domain, so we may say that it is continuous from the left at $x = 1$. Abel's Theorem lets us conclude that the alternating harmonic series converges to $\ln(2)$:
$$\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = \lim_{x \to 1^-} \ln(1 + x) = \ln(2),$$
as claimed in Example 25.1.9.
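Numerically, the partial sums of the alternating harmonic series do creep toward $\ln(2)$, though slowly:

```python
from math import log

for N in (10, 100, 1000, 10000):
    s = sum((-1) ** (n + 1) / n for n in range(1, N + 1))
    print(N, s, abs(s - log(2)))
```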
Section 26.5 Power series versus Taylor series
In the examples in this chapter we have often used the known sum of the Geometric series to quickly obtain a power series representation of a function. How does such a power series compare to the Taylor series for that function based at $x = 0$? Is it possible to have two different power series representations for a given function, both “based” at the same point?
Example 26.5.1. Compare the geometric series with the Taylor series of its sum.
On the one hand, the Geometric series gives the power series representation
$$\frac{1}{1 - x} = \sum_{n=0}^{\infty} x^n, \qquad -1 < x < 1.$$
On the other hand, the derivatives of $f(x) = \frac{1}{1 - x}$ follow the pattern
$$f^{(n)}(x) = \frac{n!}{(1 - x)^{n+1}},$$
so that $\frac{f^{(n)}(0)}{n!} = 1$ for every $n$, and the Taylor series for $f$ based at $x = 0$ is again $\sum_{n=0}^{\infty} x^n$. The two power series representations are identical.
Checkpoint 26.5.2. Taylor series of the reflected geometric series.
Perform the same analysis as in Example 26.5.1 for the function
$$g(x) = \frac{1}{1 + x}.$$
On the one hand, obtain a power series representation for $g$ by interpreting its formula as the sum of a geometric series, and on the other hand compute the Taylor series for $g$ based at $x = 0$. Compare.
Fact 26.5.3. Taylor series for a power series function.
Suppose $f$ is a function represented by a power series,
$$f(x) = \sum_{n=0}^{\infty} c_n x^n,$$
on an interval of convergence with positive radius. Then this power series is precisely the Taylor series for $f$ based at $x = 0$; that is, $c_n = \frac{f^{(n)}(0)}{n!}$ for each $n$.
Idea of justification.
To compute the Taylor series, apply the differentiation formula for power series in Corollary 26.4.5: evaluating the $k$-times-differentiated series at $x = 0$ leaves only its constant term, giving $f^{(k)}(0) = k!\, c_k$, and hence $c_k = \frac{f^{(k)}(0)}{k!}$.
Corollary 26.5.4. Taylor series for a derivative/antiderivative function.
- For a continuous function $f$, one may obtain the Taylor series for an antiderivative $F$ by term-by-term integration of the Taylor series for $f$ (with the possible addition of a vertical shift).
- For a differentiable function $f$, one may obtain the Taylor series for $f'$ by term-by-term differentiation of the Taylor series for $f$.
Example 26.5.5. Determining the Taylor series for an antiderivative of a continuous function.
From the Fundamental Theorem of Calculus, we may conclude that every continuous function has an antiderivative. (See Fact 13.4.4.) For example, the continuous function $f(x) = e^{x^2}$ must have an antiderivative, but you are unlikely to guess a formula for it without using an integral sign. However, if we are willing to use an integral sign, we can construct an antiderivative of $f$:
$$F(x) = \int_0^x e^{t^2}\, dt.$$
What is the Taylor series for $F$? We know that the Taylor series for the natural exponential function is $\sum_{n=0}^{\infty} \frac{x^n}{n!}$, with infinite radius of convergence. (See Example 26.3.2.) Substituting $x^2$ into this series yields
$$e^{x^2} = \sum_{n=0}^{\infty} \frac{x^{2n}}{n!},$$
and by integrating we obtain
$$\int e^{x^2}\, dx = C + \sum_{n=0}^{\infty} \frac{x^{2n+1}}{n!\,(2n + 1)},$$
where $C$ is an arbitrary constant of integration. The specific antiderivative $F$ must be one of the infinite family of antiderivatives above, and from $F(0) = 0$ we conclude that it is the antiderivative with $C = 0$.
As we know that integrating a power series does not affect the radius of convergence, this power series representation of $F$ is valid for all $x$.
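A symbolic check of this example (an illustration only, assuming the integrand $e^{x^2}$ as reconstructed above; sympy expresses the antiderivative via the imaginary error function):

```python
import sympy as sp

x, t = sp.symbols('x t')
F = sp.integrate(sp.exp(t**2), (t, 0, x))   # sympy returns sqrt(pi)*erfi(x)/2

# First few terms of the term-by-term-integrated series.
series_terms = sum(x**(2*n + 1) / (sp.factorial(n) * (2*n + 1)) for n in range(4))

# The two expansions agree through degree 8 (difference prints as 0).
print(sp.simplify(sp.series(F, x, 0, 9).removeO() - series_terms))
```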