
Chapter 26 Power and Taylor series

Section 26.1 Motivation

In Section 22.1, we looked at approximating \(\ln(1.001)\) with Taylor polynomials of increasing degree. We now have the framework to recognize these approximate values as a sequence of partial sums for the infinite series
\begin{equation*} \sum_{k = 1}^\infty \frac{{(-1)}^{k - 1} {0.001}^k}{k} \text{.} \end{equation*}
Using Taylor polynomials of higher degree should yield better approximations. This corresponds to taking partial sums including more terms to better approximate the sum value of the infinite series. Will the full infinite series recover the exact value of \(\ln(1.001)\text{?}\)
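As an informal numerical check (a short Python sketch, not part of the text's development), we can watch these partial sums close in on \(\ln(1.001)\text{:}\)

import math

# Partial sums of the series for ln(1.001): each additional term
# shrinks the distance from math.log(1.001).
x = 0.001
partial = 0.0
for k in range(1, 5):
    partial += (-1) ** (k - 1) * x ** k / k
    print(k, partial, abs(partial - math.log(1.001)))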
Before we explore that question, also consider that we may use Taylor polynomials based at a specific point to approximate output values of the function at all input values sufficiently near that point. In other words, the infinite series in the example of approximating \(\ln(1.001)\) above is only one example in an infinite family of such approximation examples. Using the pattern of Taylor polynomials based at \(t = 1\) arrived at in Example 21.3.9, for any input value \(t\) with \(t \approx 1\text{,}\) we could say
\begin{equation*} \ln(t) \approx \nthtaylor{1}(t) = \sum_{k = 1}^n \frac{{(-1)}^{k - 1} {(t - 1)}^k}{k} \text{.} \end{equation*}
There are several variables involved in the expressions above. There is index variable \(k\) for the summation notation. There is also the degree \(n\) of the Taylor polynomial used, but as above we are going to let that trend to \(\infty\) as we consider partial sums of an infinite series. And finally there is the input variable \(t\) that makes a Taylor polynomial a function. In this chapter we will study infinite series that involve such an “input variable”.

Section 26.2 Power series

Subsection 26.2.1 Basics

Definition 26.2.1. Power series.
An infinite series involving non-negative powers of an indeterminate \(t\text{:}\)
\begin{equation*} \sum_{k = 0}^\infty a_k t^k \text{,} \end{equation*}
using the convention \(t^0 = 1\) for all values of \(t\) (including \(t = 0\)). The sequence \(\nseq{a}\) is called the coefficient sequence for the power series.
Example 26.2.2.
The power series \(\sum t^k / k!\) has partial sum functions
\begin{equation*} s_n(t) = \sum_{k = 0}^n \frac{t^k}{k!} \text{.} \end{equation*}
If you completed Checkpoint 21.3.8, then you should recognize these partial sums as precisely the Maclaurin polynomials for \(\exp(t)\text{.}\)
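For readers who like to experiment, here is a small Python illustration (the function name is our own choice) that evaluates these partial sums at \(t = 1\) and compares them against \(\exp(1)\text{:}\)

import math

def exp_partial_sum(t, n):
    # Partial sum s_n(t) of sum t^k / k!, i.e. the degree-n
    # Maclaurin polynomial for exp.
    return sum(t ** k / math.factorial(k) for k in range(n + 1))

for n in (2, 5, 10):
    print(n, exp_partial_sum(1.0, n), math.exp(1.0))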
If we attempt to use a power series as a function we may run into trouble.
Example 26.2.3.
Define
\begin{equation*} f(t) = \sum_{k = 1}^\infty \frac{t^k}{k} \text{.} \end{equation*}
For \(t = -1\text{,}\) we have
\begin{equation*} f(-1) = \sum_{k = 1}^\infty \frac{{(-1)}^k}{k} \text{,} \end{equation*}
which is the negative of the alternating harmonic series. We saw in Example 25.5.12 that this series converges conditionally, and in Example 25.1.9 we investigated the sum value of the alternating harmonic series numerically, determining that it sums to \(\ln(2)\text{;}\) hence \(f(-1) = -\ln(2)\text{.}\) On the other hand, for \(t = 1\) we have
\begin{equation*} f(1) = \sum_{k = 1}^\infty \frac{1}{k} \text{,} \end{equation*}
the regular harmonic series, which we know diverges.
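A brief Python experiment (illustrative only) makes the contrast at \(t = \pm 1\) visible: the partial sums settle toward \(-\ln(2)\) at \(t = -1\) but grow without bound at \(t = 1\text{.}\)

import math

def f_partial(t, n):
    # n-th partial sum of sum_{k >= 1} t^k / k.
    return sum(t ** k / k for k in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, f_partial(-1, n), -math.log(2), f_partial(1, n))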
Example 26.2.3 demonstrates that, viewed as a function in the indeterminate variable, the domain of a power series is tied to convergence/divergence of the infinite series that results from substituting particular values for that variable. We will say that a power series \(\sum a_k t^k\) converges at \(t = c\) if the infinite series \(\sum a_k c^k\) converges. For a domain set \(D\) of input values, we will say that the power series converges on \(D\) if it converges for all \(c\) in \(D\text{.}\) And similarly with “converges” replaced by “diverges.”
Example 26.2.4.
The power series \(\sum t^k\) is a geometric series for each value of \(t\text{,}\) and we know the base values for which a geometric series converges. So we can say that this power series converges on domain \(-1 \lt t \lt 1\) and diverges on domains \(t \le -1\) and \(t \ge 1\text{.}\)
We know that convergence of an infinite series depends on the terms being added trending to \(0\) sufficiently quickly. So if a power series converges at a particular \(t\)-value, it should also converge at all \(t\)-values of smaller magnitude. This is the content of the following fact.
Fact 26.2.5.
Let \(\sum a_k t^k\) be a power series and suppose \(c \ne 0\text{.}\)
  1. If the series converges at \(t = c\text{,}\) then it converges absolutely on the domain \(-\abs{c} \lt t \lt \abs{c}\text{.}\)
  2. If the series diverges at \(t = c\text{,}\) then it diverges on the domains \(t \lt -\abs{c}\) and \(t \gt \abs{c}\text{.}\)
Justification for Statement 1.
If \(\sum a_k t^k\) converges at \(t = c\text{,}\) this means that the infinite series \(\sum a_k c^k\) converges. We must then have \(\seq{a_n c^n} \to 0\text{,}\) and so there is some tail \(\seqtail{a_n c^n}\) completely contained in the range \((-1, 1)\text{.}\) For every value of \(t\) in the domain \(-\abs{c} \lt t \lt \abs{c}\text{,}\) we have \(\abs{t / c} \lt 1\text{,}\) and so the geometric series \(\sum {\abs{t / c}}^k\) converges (absolutely). But with \(\abs{a_n c^n} \lt 1\) for \(n \ge N\text{,}\) we have
\begin{align*} \abs{a_n t^n} \amp = \abs{a_n t^n} \cdot \abs{\frac{c^n}{c^n}} \\ \amp = \abs{a_n c^n} \cdot \abs{\frac{t^n}{c^n}} \\ \amp \lt {\abs{\frac{t}{c}}}^n \end{align*}
for all \(n \ge N\text{.}\) So \(\sum \abs{a_k t^k}\) converges by comparison with the geometric series \(\sum {\abs{t / c}}^k\text{.}\)
Justification for Statement 2.
This is effectively the contrapositive of Statement 1. If \(\sum a_k c^k\) diverges, then it is not possible for \(\sum a_k t^k\) to converge at any \(t\)-value with \(t \gt \abs{c}\text{,}\) because if it did, then Statement 1 says that the series would also converge at all values of smaller magnitude, which includes \(t = c\text{.}\) But \(\sum a_k c^k\) diverges. And similarly for \(t \lt - \abs{c}\text{.}\)

Subsection 26.2.2 Radius of convergence

For a specific power series \(\sum a_k t^k\text{,}\) Fact 26.2.5 gives us a pretty clear picture of the pattern of convergence at different \(t\)-values:
  • If the series converges at a particular non-zero \(t\)-value, then it also converges at every \(t\)-value of smaller magnitude, positive and negative. This creates a symmetric interval of convergence on the real number line, centred at \(0\text{.}\)
  • If the series diverges at a particular non-zero \(t\)-value, then it also diverges at every \(t\)-value of larger magnitude, positive and negative. This creates a symmetric “void” region on the real number line, centred at \(0\text{,}\) outside of which the series diverges.
It follows that there must be a boundary between the two patterns above: a boundary between \(t\)-values with magnitude small enough for the series to converge, and \(t\)-values with magnitude too large for convergence. A symmetric pair of points \(t = \pm R\) on either side of \(0\text{,}\) with convergence at all points in between and divergence at all points outside.
Justification.
Consider the set \(\convset{C}\) of non-negative \(t\)-values for which the series converges. Note that \(\convset{C}\) is non-empty, since every power series of the form \(\sum a_k t^k\) converges to \(a_0\) at \(t = 0\text{.}\) If set \(\convset{C}\) is unbounded, then Fact 26.2.5 implies that the series must converge at all \(t\)-values, so take \(R = \infty\text{.}\) Otherwise, if set \(\convset{C}\) is bounded then it must have a least upper bound. Take \(R\) to be that least upper bound value.
Remark 26.2.7.
Pattern 26.2.6 does not say anything about convergence/divergence at \(t = \pm R\) for a reason. In the examples below, we will see that different power series can exhibit different behaviours at the boundaries of their intervals of convergence: some series converge at both \(t = \pm R\text{,}\) some diverge at both points, and some converge at one and diverge at the other.
In the examples below, we will most often use the Ratio test, as two consecutive powers of \(t\) will cancel to a single \(t\) in a ratio:
\begin{equation*} \frac{a_{k + 1} t^{k + 1}}{a_k t^k} = \frac{a_{k + 1}}{a_k} \; t \text{.} \end{equation*}
However, sometimes the Root test is useful, depending on the form of the coefficient sequence \(\nseq{a}\text{.}\)
Also, we will generally test for absolute convergence, which implies convergence (Fact 25.5.1). Because we know that convergence is dependent on the magnitude of the \(t\)-values and not the sign (except possibly right at \(t = \pm R\)), and that the interval of convergence is symmetric on the number line about \(t = 0\text{,}\) introducing an absolute value will help measure the maximum magnitude of \(t\) for convergence.
Example 26.2.8. A power series that only converges within its radius of convergence.
We know that the Geometric Series \(\sum t^k\) converges only for \(-1 \lt t \lt 1\text{,}\) and not at either \(t = \pm 1\text{,}\) or at any \(t\)-value of larger magnitude.
Example 26.2.9. Two series that converge only at one endpoint of the interval of convergence.
  1. Let’s apply the Ratio test to test the power series
    \begin{equation*} \sum_{k = 1}^\infty \frac{t^k}{k} \end{equation*}
    for absolute convergence. We have
    \begin{equation*} \abs{\frac{t^{n + 1} / (n + 1)}{t^n / n}} = \frac{n}{n + 1} \cdot \abs{t} \to 1 \cdot \abs{t} \quad \text{as } n \to \infty \text{.} \end{equation*}
    Recall that the Ratio Test says that the series converges if this limit is less than \(1\text{,}\) diverges if it is greater than \(1\text{,}\) and the test provides no information if the limit is exactly \(1\text{.}\) So we have (absolute) convergence when \(\abs{t} \lt 1\text{,}\) which corresponds to the interval \(-1 \lt t \lt 1\text{,}\) and have divergence when \(\abs{t} \gt 1\text{,}\) which corresponds to the regions \(t \lt -1\) and \(t \gt 1\text{.}\) Our radius of convergence is \(R = 1\text{.}\)
    But what about at the endpoints \(t = \pm 1\text{?}\) At \(t = 1\) we get the harmonic series, which we know diverges, but at \(t = -1\) we get the alternating harmonic series, which we know (conditionally) converges. So the complete interval of convergence is \(-1 \le t \lt 1\text{.}\)
  2. Analyzing absolute convergence of the power series
    \begin{equation*} \sum_{k = 1}^\infty \frac{{(-1)}^k t^k}{k} \end{equation*}
    leads to the same calculation and results as above, since the absolute value brackets nullify the alternating \({(-1)}^k\) factor in the terms. So again we have \(R = 1\text{,}\) with (absolute) convergence on the domain \(-1 \lt t \lt 1\) and divergence on the domains \(t \lt -1\) and \(t \gt 1\text{.}\) But this time the behaviour at the endpoints has flipped, since substituting \(t = -1\) will again nullify the alternating sign pattern in the coefficient sequence and leave us with the divergent harmonic series, whereas substituting \(t = 1\) will maintain the alternating pattern and leave us with the convergent alternating harmonic series. So the complete interval of convergence is \(-1 \lt t \le 1\text{.}\) (A quick numerical check of the common radius \(R = 1\) appears below.)
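Here is the promised numerical check: a Python sketch estimating the common radius \(R = 1\) from the coefficient ratios \(\abs{a_n / a_{n + 1}}\text{,}\) the same quantity the Ratio test computation above tracks.

# The coefficients of sum t^k / k are a_k = 1/k, and the ratios
# |a_n / a_(n+1)| = (n + 1) / n tend to the radius R = 1.
a = lambda k: 1 / k
for n in (10, 100, 1000):
    print(n, a(n) / a(n + 1))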
Example 26.2.10. A series that converges at both endpoints.
Again we use the Ratio test to test the power series
\begin{equation*} \sum_{k = 1}^\infty \frac{t^k}{k^2} \end{equation*}
for absolute convergence:
\begin{equation*} \abs{\frac{t^{n + 1} / (n + 1)^2}{t^n / n^2}} = {\left(\frac{n}{n + 1}\right)}^2 \; \abs{t} \to 1^2 \cdot \abs{t} \quad \text{as } n \to \infty \text{.} \end{equation*}
So the radius of convergence is \(R = 1\text{.}\) At one endpoint of the interval of convergence, when \(t = 1\) we have the convergent \(p\)-series \(\sum 1 / k^2\text{.}\) At the other endpoint, when \(t = -1\) we have the alternating \(p\)-series \(\sum {(-1)}^k / k^2\text{.}\) But this series converges absolutely, so it converges as well, and our complete interval of convergence is \(-1 \le t \le 1\text{.}\)
Example 26.2.11. A series with \(R = 0\).
For \(\sum (k!) t^k\) we have
\begin{equation*} \abs{\frac{(n + 1)! \cdot t^{n + 1}}{n! \cdot t^n}} = (n + 1) \; \abs{t} \to \infty \quad \text{as } n \to \infty \text{.} \end{equation*}
In particular, for every non-zero value of \(t\text{,}\) no matter how small, the factorial will eventually overwhelm the powers of \(t\) and so \(a_k t^k \not\to 0\text{,}\) which means that the sum cannot converge. So \(R = 0\) and the sum only converges (to \(0! = 1\)) for \(t = 0\text{.}\)
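A short Python computation (illustrative, with \(t = 0.01\) chosen arbitrarily) shows the terms \(k! \, t^k\) shrinking at first, then blowing up once \(k\) exceeds \(1 / t\text{:}\)

# Track the terms k! * t^k iteratively: each step multiplies the
# previous term by k * t, which exceeds 1 as soon as k > 1/t.
t = 0.01
term = 1.0  # the k = 0 term: 0! * t^0 = 1
for k in range(1, 501):
    term *= k * t
    if k in (50, 100, 300, 500):
        print(k, term)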
Example 26.2.12. A series with \(R = \infty\).
For
\begin{equation*} \sum_{k = 0}^\infty \frac{{(-1)}^k t^k}{2^k \cdot k!} \end{equation*}
we have
\begin{equation*} \abs{\frac{{(-1)}^{n+1} t^{n+1} / 2^{n + 1} (n+1)!}{{(-1)}^n t^n / 2^n n!}} = \frac{\abs{t}}{2 (n + 1)} \to 0 \quad \text{as } n \to \infty\text{.} \end{equation*}
Thus this series converges for all values of \(t\text{.}\)
From our examples so far you might think that \(0\text{,}\) \(1\text{,}\) and \(\infty\) are the only possibilities for a radius of convergence. Here is an example with radius of convergence \(R = 2\text{.}\) But if you replace the occurrences of the number \(2\) in this example by any other positive number, you can generate an example with that radius of convergence, demonstrating that any positive number is possible as a radius of convergence of some power series.
Example 26.2.13. A series with \(R = 2\).
Consider \(\sum t^k / 2^k\text{.}\) This time we will use the Root test to test absolute convergence:
\begin{equation*} \sqrt[n]{\abs{\frac{t^n}{2^n}}} = \frac{\abs{t}}{2} \text{.} \end{equation*}
No limit is necessary, as this result is constant relative to the index variable \(n\text{.}\) But as with the Ratio Test we need this “limit” result to be less than \(1\) to have convergence, and \(\abs{t} / 2 \lt 1\) occurs when \(\abs{t} \lt 2\text{,}\) so our radius of convergence is \(R = 2\text{.}\)
At endpoint \(t = -2\) the power series becomes
\begin{equation*} \sum_{k = 0}^\infty \frac{{(-2)}^k}{2^k} = \sum_{k = 0}^\infty {(-1)}^k \text{,} \end{equation*}
whose terms do not trend to \(0\text{,}\) so it diverges. At the other endpoint \(t = 2\) we have
\begin{equation*} \sum_{k = 0}^\infty \frac{2^k}{2^k} = \sum_{k = 0}^\infty 1 \text{,} \end{equation*}
which diverges for the same reason. So the complete interval of convergence is \(-2 \lt t \lt 2\text{.}\)
Warning 26.2.14.
When applying the Ratio or Root tests to a power series, make sure you are considering the limit as \(n \to \infty\text{,}\) not as \(t \to \infty\text{.}\)

Subsection 26.2.3 Shifted power series

With inspiration from Taylor series, we may also wish to consider horizontal shifts of power series:
\begin{equation*} \sum a_k {(t - c)}^k \text{.} \end{equation*}
Using a change of variables \(u = t - c\) we can express this as a “normal” power series,
\begin{equation*} \sum a_k u^k \text{,} \end{equation*}
which we know will have a radius of convergence \(R\text{,}\) so that the series converges for all values of \(u\) satisfying \(-R \lt u \lt R\text{.}\) In terms of the original indeterminate \(t\text{,}\) we (unsurprisingly) end up with a shifted interval of convergence as well:
\begin{gather*} -R \lt t - c \lt R \\ c - R \lt t \lt c + R \text{.} \end{gather*}
So we still have a radius of convergence \(R\text{,}\) but the interval of convergence is now centred at \(t = c\) instead of at \(t = 0\text{.}\) As before, the series will diverge outside of that interval, and convergence/divergence at the endpoints needs to be separately investigated.
Example 26.2.15.
The shifted power series
\begin{equation*} \sum_{k = 0}^\infty \frac{k {(t + 2)}^k}{3^{k + 1}} \end{equation*}
will have an interval of convergence centred at \(t = -2\text{.}\) As usual, we use the Ratio test to test the absolute convergence:
\begin{align*} \abs{ \frac{(n + 1) {(t + 2)}^{n + 1} / 3^{n + 2}}{n {(t + 2)}^n / 3^{n + 1}} } \amp = \abs{ \frac{(n + 1) (t + 2) / 3}{n} }\\ \amp = \frac{n + 1}{3 n} \abs{t + 2} \\ \amp \to \frac{1}{3} \abs{t + 2} \end{align*}
as \(n \to \infty\text{.}\) So the series converges for
\begin{gather*} \frac{1}{3} \abs{t + 2} \lt 1 \\ \abs{t + 2} \lt 3 \\ -3 \lt t + 2 \lt 3 \\ -5 \lt t \lt 1 \text{.} \end{gather*}
From the second inequality above, our radius of convergence is \(R = 3\text{,}\) and the interval of convergence has endpoints \(t = -5\) and \(t = 1\text{.}\) At those endpoints, we know from the third inequality above that \(t + 2 = \pm 3\text{,}\) and the series becomes
\begin{equation*} \sum_{k = 0}^\infty \frac{k {(\pm 3)}^k}{3^{k + 1}} = \sum_{k = 0}^\infty \frac{k {(\pm 1)}^k}{3} \text{.} \end{equation*}
Whether the series is alternating (at \(t = -5\)) or not (at \(t = 1\)), it diverges because the magnitudes of its terms grow without bound. Thus the interval of convergence is \(-5 \lt t \lt 1\text{,}\) with both endpoints excluded.
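As a sanity check, the following Python sketch (not part of the formal development) evaluates partial sums at sample points: they settle for \(t\)-values inside \(-5 \lt t \lt 1\) and grow for \(t\)-values outside.

def partial(t, n):
    # Partial sum of sum_{k >= 0} k (t + 2)^k / 3^(k + 1).
    return sum(k * (t + 2) ** k / 3 ** (k + 1) for k in range(n + 1))

for t in (-1.0, 0.5, 1.5):  # the first two lie inside the interval
    print(t, partial(t, 20), partial(t, 40))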

Section 26.3 Taylor series

In this section our analysis will focus primarily on Maclaurin series, as Taylor series are just shifts of Maclaurin series. But since Maclaurin series are just special cases of Taylor series, we will refer to all as Taylor series.

Subsection 26.3.1 Creating Taylor series

Recall that the general pattern for a Taylor polynomial based at \(t = 0\) (that is, a Maclaurin polynomial) for a function \(f\) is
\begin{equation*} \nthtaylor{0}(t) = \sum_{k = 0}^n \frac{\nthderiv[k]{f}(0)}{k!} \; t^k \text{,} \end{equation*}
where \(\nthderiv{f}\) represents the \(\nth\) derivative of \(f\) (and \(\nthderiv[0]{f}\) is just \(f\) itself). If \(\nthderiv{f}(0)\) exists for all \(n\text{,}\) then we can consider \(\nthtaylor{0}\) as a partial sum of a power series, where we just allow the coefficient pattern to continue indefinitely.
Definition 26.3.1. Taylor series at \(0\).
If all orders of derivative value for function \(f\) exist at \(t =0\text{,}\) then the power series
\begin{equation*} T_0(t) = \sum_{k = 0}^\infty \frac{\nthderiv[k]{f}(0)}{k!} \; t^k \end{equation*}
is called the Taylor series at \(0\) for \(f\).
Taylor series for different functions will have different radii of convergence, depending on the pattern of derivative values at \(t = 0\text{.}\) On its interval of convergence, we can in some sense think of the sequence of Taylor polynomials “converging” to the Taylor series:
\begin{equation*} \nthtaylor{0} \overset{?}{\to} T_0 \quad\text{as}\quad n \to \infty \text{.} \end{equation*}
Example 26.3.2. Taylor series for the natural exponential function.
We know that \(\nthderiv{\exp}(t) = \exp(t)\) for all \(n\text{,}\) so \(\nthderiv{\exp}(0) = 1\) for all \(n\text{.}\) Thus the Taylor series at \(0\) is
\begin{equation*} \sum_{k = 0}^\infty \frac{t^k}{k!} = 1 + t + \frac{t^2}{2} + \frac{t^3}{3!} + \frac{t^4}{4!} + \dotsb \text{.} \end{equation*}
Using the Ratio test we have
\begin{equation*} \abs{ \frac{t^{n + 1} / (n + 1)!}{t^n / n!} } = \frac{\abs{t}}{n + 1} \to 0 \quad\text{as } n \to \infty \text{.} \end{equation*}
Since this limit is always less than \(1\text{,}\) the Taylor series for the natural exponential function converges for all values of \(t\text{.}\)
Example 26.3.3. Taylor series for shifted logarithm.
Consider \(f(t) = \ln(1 + t)\text{.}\) Since \(d(1 + t)/dt = 1\text{,}\) applying the Chain Rule to the derivatives of \(f(t)\) is simple. Let’s look for the pattern in the derivatives.
\begin{align*} f(t) \amp = \ln(1 + t) \amp \nthderiv[4]{f}(t) \amp = - \frac{3 \cdot 2}{{(1 + t)}^4}\\ f'(t) \amp = \frac{1}{1 + t} \amp \nthderiv[5]{f}(t) \amp = \frac{4 \cdot 3 \cdot 2}{{(1 + t)}^5}\\ f''(t) \amp = - \frac{1}{{(1 + t)}^2} \amp \amp \vdots\\ f'''(t) \amp = \frac{2}{{(1 + t)}^3} \amp \nthderiv{f}(t) \amp = {(-1)}^{n - 1} \, \frac{(n - 1) \cdot \dotsm \cdot 3 \cdot 2}{{(1 + t)}^n}\\ \amp \amp \amp \vdots \end{align*}
At \(t = 0\text{,}\) for \(n \ge 1\) this simplifies to
\begin{equation*} \nthderiv{f}(0) = {(-1)}^{n - 1} \; (n - 1)! \text{.} \end{equation*}
So for \(n \ge 1\) the Taylor series coefficient pattern is
\begin{equation*} \frac{\nthderiv{f}(0)}{n!} = \frac{{(-1)}^{n - 1} \; (n - 1)!}{n!} = \frac{{(-1)}^{n - 1}}{n} \text{.} \end{equation*}
The initial term \(f(0)\) does not fit into this pattern, but in fact \(f(0) = \ln(1) = 0\) so it doesn’t have to. Thus the Taylor series is
\begin{equation*} T_0(t) = \sum_{k = 1}^\infty \frac{{(-1)}^{k - 1}}{k} \; t^k = t - \frac{t^2}{2} + \frac{t^3}{3} - \frac{t^4}{4} + \dotsb \text{,} \end{equation*}
where we begin the sum at index \(1\) because \(f(0) = 0\text{.}\) The calculations in Example 26.2.9 show that this series has radius of convergence \(R = 1\text{.}\) At \(t = 1\) it is the alternating harmonic series, which we know converges. At \(t = -1\) it is the negative of the harmonic series, and so diverges. (But \(t = -1\) is not in the domain of \(\ln(1 + t)\) anyway.) So the Taylor series for \(\ln(1 + t)\) at \(t = 0\) converges on the interval \(-1 \lt t \le 1\text{.}\)
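A quick Python comparison (illustrative only) shows the partial sums approaching \(\ln(1 + t)\) rapidly inside the interval of convergence, and only very slowly at the endpoint \(t = 1\text{:}\)

import math

def T(t, n):
    # Degree-n Taylor polynomial at 0 for ln(1 + t).
    return sum((-1) ** (k - 1) * t ** k / k for k in range(1, n + 1))

print(T(0.5, 30), math.log(1.5))    # fast convergence inside the interval
print(T(1.0, 1000), math.log(2.0))  # much slower at the endpoint t = 1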
Example 26.3.4. Taylor series for cosine.
In Example 21.3.6 we determined the general pattern for a Maclaurin polynomial for cosine to be
\begin{equation*} \nthmaclaurin (t) = \sum_{m = 0}^{\floor{n/2}} \frac{{(-1)}^m}{(2m)!} \; t^{2 m} \text{.} \end{equation*}
Recall that the polynomial only involves even powers of the indeterminate because every second higher derivative value evaluates to \(0\) at \(t = 0\text{.}\) To obtain the Taylor series at \(t = 0\) for cosine we use the same pattern:
\begin{equation*} T_0(t) = \sum_{m = 0}^{\infty} \frac{{(-1)}^m}{(2m)!} \; t^{2 m} = 1 - \frac{t^2}{2} + \frac{t^4}{4!} - \frac{t^6}{6!} + \dotsb \text{.} \end{equation*}
Using the Ratio test we have
\begin{equation*} \abs{\frac{{(-1)}^{m + 1} t^{2 (m + 1)} / \bbrac{2(m+1)}!}{{(-1)}^m t^{2 m} / (2 m)!}} = \frac{t^2}{(2m+2)(2m+1)} \to 0 \quad\text{as } m \to \infty\text{.} \end{equation*}
So the Taylor series for cosine at \(t = 0\) converges for all values of \(t\text{.}\)
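For the curious, a small Python sketch (an illustration, not part of the text) compares a partial sum of this series against the built-in cosine:

import math

def cos_partial(t, terms):
    # Sum of the first `terms` terms of the cosine Taylor series.
    return sum((-1) ** m * t ** (2 * m) / math.factorial(2 * m)
               for m in range(terms))

print(cos_partial(math.pi / 3, 10), math.cos(math.pi / 3))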
Checkpoint 26.3.5. Taylor series for sine.
Use the Maclaurin polynomial pattern you developed in Checkpoint 21.3.7 to create the Taylor series at \(t = 0\) for the sine function, and investigate its convergence.
Warning 26.3.6.
When computing a Taylor series, do not substitute \(t = 0\) until you see the pattern of all higher derivatives of \(f\text{.}\)

Subsection 26.3.2 Limits of Taylor series

We have looked at a number of examples of Taylor series in Subsection 26.3.1 and investigated their convergence. But to what do they converge? When we created Taylor polynomials, our goal was to approximate functions with simpler polynomial functions, and we reasoned that higher-degree polynomials should be better approximations by capturing more of the behaviour of the original function. Being an “infinite-degree” polynomial, does the Taylor series for a function become an “infinitely-good” approximation of the original function?
To answer this question, we should first remember what it means for a series to converge to a sum value: the sequence of partial sums should approach that value. Partial sums for a Taylor series are simply Taylor polynomials, and to measure how close a Taylor polynomial value (evaluated at some particular \(t\)-value in the interval of convergence for the Taylor series) is to a potential limiting value, we can consider the difference between the partial sum value and the potential limiting value. (See Fact 22.3.31.) And the “potential limiting value” we propose is the original function value at \(t\text{.}\)
Definition 26.3.7. Remainder sequence for a Taylor series.
Given a function \(f\) for which the derivative values \(\nthderiv{f}(t^\ast)\) exist for all \(n\text{,}\) for each \(n\) define remainder function \(\nthremainder{t^\ast}(t) = f(t) - \nthtaylor{t^\ast}(t)\text{.}\)
Justification.
By definition, at a particular \(t\)-value the series
\begin{equation*} T_{t^\ast}(t) = \sum_{k = 0}^\infty \frac{\nthderiv[k]{f}(t^\ast)}{k!} \, {(t - t^\ast)}^k \end{equation*}
converges if the sequence of partial sum values \(\bbseq{\nthtaylor{t^\ast}(t)}\) converges for that value of \(t\text{.}\) But since \(\nthremainder{t^\ast}(t) = f(t) - \nthtaylor{a}(t)\text{,}\) Fact 22.3.31 says that \(\bbseq{\nthremainder{t^\ast}(t)} \to 0\) is equivalent to \(\bbseq{\nthtaylor{t^\ast}(t)} \to f(t)\text{.}\)
Remark 26.3.9.
It is tempting to think of \(\bbseq{\nthtaylor{t^\ast}(t)}\) as a sequence of functions, converging to the function \(f(t)\text{.}\) But in Fact 26.3.8, the reality is that \(\bbseq{\nthtaylor{t^\ast}(t)}\) is a sequence of numbers at a specific value of \(t\text{,}\) converging to a specific function output value \(f(t)\text{.}\) We have not developed the conceptual tools necessary to discuss sequences of functions and their convergence. At this point, the correct thing to say about \(\bbseq{\nthtaylor{t^\ast}(t)} \to f(t)\) is that this convergence occurs pointwise. (That is, this convergence may be valid or invalid at different points on the \(t\) number line.)
Recall that we have already stated a bound on the error in approximating \(f(t) \approx \nthtaylor{t^\ast}(t)\) for a specific value of \(t\) (Fact 21.3.10). Since \(\nthremainder{t^\ast}(t)\) measures that error, we have
\begin{equation*} \abs{\nthremainder{t^\ast}(c)} \le \frac{M}{(n + 1)!} \, {\abs{c - t^\ast}}^{n + 1} \text{,} \end{equation*}
where \(M\) is the maximum value of \(\abs{\nthderiv[n+1]{f}(t)}\) on the domain between \(c\) and \(t^\ast\text{,}\) inclusive.
Example 26.3.10. Remainder of a Taylor approximation to the natural exponential function.
In Example 26.3.2 we determined that the Taylor series for the exponential function at \(t = 0\) converges for all values of \(t\text{.}\) For all \(n\) we have \(\nthderiv{\exp}(t) = \exp(t)\text{,}\) and since the exponential function is an increasing function, on any given domain \(0 \le t \le c\) we have \(\abs{\nthderiv[n+1]{\exp}(t)} \le e^c\text{.}\) Thus
\begin{equation*} \abs{\nthremainder{0}(c)} \le \frac{e^c}{(n + 1)!} \, {\abs{c - 0}}^{n + 1} \text{.} \end{equation*}
In the ratio
\begin{equation*} \frac{{\abs{c}}^{n+1}}{(n + 1)!} \text{,} \end{equation*}
each of the numerator and denominator is a product of \(n + 1\) factors. But the factors in the numerator all have the same magnitude, while those in the denominator become arbitrarily large. (Once \(n + 1 \gt 2 \abs{c}\text{,}\) each additional factor in the denominator is more than twice the fixed factor \(\abs{c}\) in the numerator, so the ratio shrinks at least geometrically.) We conclude that
\begin{equation*} \frac{{\abs{c}}^{n+1}}{(n + 1)!} \to 0 \quad \text{as } n \to \infty \text{.} \end{equation*}
By the Squeeze Theorem, we must also have \(\abs{\nthremainder{0}(c)} \to 0\text{.}\) And a similar argument can be made to demonstrate the same for negative \(c\text{.}\) We conclude that the Taylor series for the exponential function converges to the value of the exponential function at all points; that is,
\begin{equation*} e^t = 1 + t + \frac{t^2}{2} + \frac{t^3}{3!} + \frac{t^4}{4!} + \dotsb \end{equation*}
is always true. In particular, using \(t = 1\) gives us an alternative definition of Euler’s number:
\begin{equation*} e = \sum_{k = 0}^\infty \frac{1}{k!} \text{.} \end{equation*}
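To see the remainder bound in action, here is an illustrative Python sketch comparing the actual approximation error at \(c = 1\) with the bound \(e^c \, {\abs{c}}^{n + 1} / (n + 1)!\text{:}\)

import math

c = 1.0
for n in (5, 10, 15):
    # Degree-n Maclaurin polynomial for exp, evaluated at c.
    poly = sum(c ** k / math.factorial(k) for k in range(n + 1))
    bound = math.exp(c) * c ** (n + 1) / math.factorial(n + 1)
    print(n, abs(math.exp(c) - poly), bound)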
Checkpoint 26.3.11. Evaluating Taylor series for sine and cosine.
Demonstrate that the Taylor series for sine and cosine at \(t = 0\) both converge to the value of their respective originating function for all values of \(t\text{.}\)

Section 26.4 Functions defined by power series

Any input-output process defines a function. And any power series \(\sum a_k t^k\) defines such a process: if the input value \(t\) is in the interval of convergence of the series, then the output value is the limiting value of the sequence of partial sums for that value of \(t\text{.}\) For example, we have already seen that the natural exponential function is equivalent to the power-series-defined function \(f(t) = \sum t^k/k!\text{.}\)
If we are to view power series as functions, a number of questions arise.
  • How do we determine where power series functions are continuous, hence integrable? How do we integrate power series functions?
  • How do we determine where power series functions are differentiable, and how do we compute their derivatives?
Throughout this chapter we will explore and state results for Taylor series based at \(t = 0\) and for power series of the form \(\sum a_k t^k\text{,}\) but all results are true for Taylor series at other base points and for “shifted” power series.

Subsection 26.4.1 Term-by-term integration/differentiation

Let’s first consider a power series that is a Taylor series at \(t = 0\) for a function for which \(\nthderiv{f}(0)\) is defined for all \(n\text{:}\)
\begin{equation*} f(0) + f'(0) \, t + \frac{f''(0)}{2} \, t^2 + \frac{f'''(0)}{3!} \, t^3 + \frac{\nthderiv[4]{f}(0)}{4!} \, t^4 + \dotsb \text{.} \end{equation*}
Suppose \(F\) is an antiderivative for \(f\text{.}\) By definition, this means that \(F'(t) = f(t)\text{.}\) But then
\begin{align*} F''(t) \amp = f'(t) \amp F'''(t) \amp = f''(t) \amp \amp \cdots \amp \nthderiv[k]{F}(t) \amp = \nthderiv[k - 1]{f}(t) \amp \amp \cdots \text{,} \end{align*}
and so the Taylor series for \(F\) based at \(t = 0\) is
\begin{align*} \amp F(0) + F'(0) \, t + \frac{F''(0)}{2} \, t^2 + \frac{F'''(0)}{3!} \, t^3 + \frac{\nthderiv[4]{F}(0)}{4!} \, t^4 + \dotsb \\ = \;\; \amp F(0) + f(0) \, t + \frac{f'(0)}{2} \, t^2 + \frac{f''(0)}{3!} \, t^3 + \frac{f'''(0)}{4!} \, t^4 + \dotsb \\ = \;\; \amp F(0) + f(0) \, t + f'(0) \cdot \frac{t^2}{2} + \frac{f''(0)}{2!} \cdot \frac{t^3}{3} + \frac{f'''(0)}{3!} \cdot \frac{t^4}{4} + \dotsb \text{.} \end{align*}
Let’s line this Taylor series up with the one for \(f\text{.}\)
\begin{equation*} \begin{array}{cccccccccccc} f\colon \amp \amp \amp f(0) \amp + \amp f'(0) \, t \amp + \amp {\displaystyle \frac{f''(0)}{2} \, t^2} \amp + \amp {\displaystyle \frac{f'''(0)}{3!} \, t^3} \amp + \amp \dotsb \\ F\colon \amp F(0) \amp + \amp f(0) \, t \amp + \amp {\displaystyle f'(0) \cdot \frac{t^2}{2}} \amp + \amp {\displaystyle \frac{f''(0)}{2!} \cdot \frac{t^3}{3}} \amp + \amp {\displaystyle \frac{f'''(0)}{3!} \cdot \frac{t^4}{4}} \amp + \amp \dotsb \end{array} \end{equation*}
It appears that each term of the series for \(F\) can be obtained by term-by-term integration of the series for \(f\) (see Formula 1 in Pattern 7.4.2), along with the introduction of an arbitrary constant term \(F(0)\text{.}\)
Now consider the Taylor series based at \(t = 0\) for the derivative of \(f\text{.}\) Let \(g\) represent the derivative of \(f\text{,}\) so that
\begin{align*} g(t) \amp = f'(t) \amp g'(t) \amp = f''(t) \amp \amp \cdots \amp \nthderiv[k]{g}(t) \amp = \nthderiv[k + 1]{f}(t) \amp \amp \cdots \text{.} \end{align*}
So the Taylor series for \(g\) based at \(t = 0\) is
\begin{align*} \amp g(0) + g'(0) \, t + \frac{g''(0)}{2} \, t^2 + \frac{g'''(0)}{3!} \, t^3 + \frac{\nthderiv[4]{g}(0)}{4!} \, t^4 + \dotsb \\ = \;\; \amp f'(0) + f''(0) \, t + \frac{f'''(0)}{2} \, t^2 + \frac{\nthderiv[4]{f}(0)}{3!} \, t^3 + \frac{\nthderiv[5]{f}(0)}{4!} \, t^4 + \dotsb \\ = \;\; \amp f'(0) + \frac{f''(0)}{2} \cdot (2 t) + \frac{f'''(0)}{3!} \cdot (3 t^2) + \frac{\nthderiv[4]{f}(0)}{4!} \cdot (4 t^3) + \frac{\nthderiv[5]{f}(0)}{5!} \cdot \bbrac{5 t^4} + \dotsb \text{.} \end{align*}
Again, let’s line this Taylor series up with the one for \(f\text{.}\)
\begin{equation*} \begin{array}{cccccccccc} f\colon \amp f(0) \amp + \amp f'(0) \, t \amp + \amp {\displaystyle \frac{f''(0)}{2} \, t^2} \amp + \amp {\displaystyle \frac{f'''(0)}{3!} \, t^3} \amp + \amp \dotsb \\ g\colon \amp \amp \amp f'(0) \amp + \amp {\displaystyle \frac{f''(0)}{2} \cdot (2 t)} \amp + \amp {\displaystyle \frac{f'''(0)}{3!} \cdot (3 t^2)} \amp + \amp \dotsb \end{array} \end{equation*}
This time it appears that each term of the series for \(g = f'\) can be obtained by term-by-term differentiation of the series for \(f\) (see Pattern 12.4.3).
It’s important to note at this point that the Taylor series for \(F\) and \(f'\) above don’t necessarily converge to those respective functions for all \(t\text{.}\) We’ll address questions of convergence in Subsection 26.4.2, but for now the next two examples give us hope that term-by-term operations yield the desired results.
Example 26.4.1. Integrating/differentiating the exponential Taylor series.
From Example 26.3.10 we know that for all \(t\) we have
\begin{gather} e^t = 1 + t + \frac{t^2}{2} + \frac{t^3}{3!} + \frac{t^4}{4!} + \dotsb \text{.}\tag{✶} \end{gather}
Applying term-by-term integration to this series we obtain
\begin{equation*} t + \frac{t^2}{2} + \frac{t^3 / 3}{2} + \frac{t^4 / 4}{3!} + \frac{t^5 / 5}{4!} + \dotsb \text{.} \end{equation*}
However, consolidating denominators in each term, we see that this series is identical to (✶), except that it is missing the constant term. So term-by-term integration has resulted in a power series that converges to \(e^t - 1\text{,}\) which conveniently agrees with Pattern 9.6.1.
Applying term-by-term differentiation to (✶) yields
\begin{equation*} 0 + 1 + \frac{2 t}{2} + \frac{3 t^2}{3!} + \frac{4 t^3}{4!} + \dotsb \text{,} \end{equation*}
which simplifies to the Taylor series for the exponential function, agreeing with Pattern 12.5.7.
Example 26.4.2. Integrating/differentiating the Taylor series for cosine.
From Checkpoint 26.3.11 we know that for all \(t\) we have
\begin{equation*} \cos(t) = \sum_{m = 0}^{\infty} \frac{{(-1)}^m}{(2m)!} \; t^{2 m} \text{.} \end{equation*}
Let’s integrate each term of this series:
\begin{align*} \sum_{m = 0}^{\infty} \left( \ccmiint{\frac{{(-1)}^m}{(2m)!} \; t^{2 m}}{t} \right) \amp = \sum_{m = 0}^{\infty} \left( \frac{{(-1)}^m}{(2m)!} \; \ccmiint{t^{2 m}}{t} \right)\\ \amp = \sum_{m = 0}^{\infty} \frac{{(-1)}^m}{(2m)!} \; \frac{t^{2 m + 1}}{2 m + 1} \\ \amp = \sum_{m = 0}^{\infty} \frac{{(-1)}^m}{(2m)! \cdot (2 m + 1)} \; t^{2 m + 1} \\ \amp = \sum_{m = 0}^{\infty} \frac{{(-1)}^m}{(2 m + 1)!} \; t^{2 m + 1} \text{.} \end{align*}
If you completed Checkpoint 26.3.5, you will recognize this result as precisely the Taylor series at \(t = 0\) for the sine function, conveniently agreeing with Pattern 10.5.2.
Now let’s differentiate each term of the Taylor series for cosine. Since the first term in the series is a constant whose derivative is \(0\text{,}\) we will simply shift indices in the term-by-term differentiated series.
\begin{align*} \sum_{m = 0}^{\infty} \opddt \left( \frac{{(-1)}^m}{(2m)!} \; t^{2 m} \right) \amp = \sum_{m = 0}^{\infty} \frac{{(-1)}^m}{(2m)!} \; \ddt{(t^{2 m})}\\ \amp = \sum_{m = 1}^{\infty} \frac{{(-1)}^m}{(2m)!} \cdot (2 m t^{2 m - 1}) \\ \amp = \sum_{m = 1}^{\infty} \frac{{(-1)}^m (2 m)}{(2m)!} \; t^{2 m - 1} \\ \amp = \sum_{m = 1}^{\infty} \frac{{(-1)}^m}{(2 m - 1)!} \; t^{2 m - 1} \end{align*}
Let’s shift the index so that our sum begins at index \(0\) by substituting \(j = m - 1\text{.}\) Under this substitution,
\begin{align*} m \amp = j + 1 \text{,} \amp 2 m - 1 \amp = 2 j + 1 \text{,} \end{align*}
and our differentiated sum becomes
\begin{align*} \sum_{m = 1}^{\infty} \frac{{(-1)}^m}{(2 m - 1)!} \; t^{2 m - 1} \amp = \sum_{j = 0}^{\infty} \frac{{(-1)}^{j + 1}}{(2 j + 1)!} \; t^{2 j + 1}\\ \amp = \sum_{j = 0}^{\infty} \frac{(-1) \cdot {(-1)}^j}{(2 j + 1)!} \; t^{2 j + 1} \\ \amp = - \sum_{j = 0}^{\infty} \frac{{(-1)}^j}{(2 j + 1)!} \; t^{2 j + 1} \text{.} \end{align*}
Again, this new series is the Taylor series at \(t = 0\) for the sine function, except for the extra negative sign out front, and so this result agrees with Pattern 12.6.7.
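A short numerical check in Python (illustrative only) confirms that partial sums of the differentiated series track \(-\sin(t)\text{:}\)

import math

def dcos_partial(t, terms):
    # Term-by-term derivative of the cosine series:
    # sum_{m >= 1} (-1)^m t^(2m - 1) / (2m - 1)!
    return sum((-1) ** m * t ** (2 * m - 1) / math.factorial(2 * m - 1)
               for m in range(1, terms + 1))

print(dcos_partial(1.2, 10), -math.sin(1.2))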

Subsection 26.4.2 Convergence of integrated/differentiated power series

Naturally, questions of convergence arise.
  • For \(F\) an antiderivative of \(f\text{,}\) do the Taylor series for \(f\text{,}\) \(F\text{,}\) and \(f'\) all have the same interval of convergence?
  • For what values of \(t\) does each series converge to the value of the function from which it was created?
We’ll address the question of convergence more generally: if the power series \(\sum a_k t^k\) has radius of convergence \(R\text{,}\) then both the term-by-term differentiated series \(\sum k a_k t^{k - 1}\) and the term-by-term integrated series \(\sum a_k t^{k + 1} / (k + 1)\) also have radius of convergence \(R\text{.}\)
Justification for the differentiated series.
First suppose \(c\) lies in \(0 \lt t \lt R\text{.}\) (A similar argument to the one provided below can be made for negative \(c\text{.}\)) We know from Fact 26.2.5 that \(\sum a_k c^k\) converges absolutely, and in fact so does \(\sum a_k d^k\) for any value of \(d\) between \(c\) and \(R\text{.}\) Choose such a value of \(d\) (for example, \(d\) could be the average of \(c\) and \(R\)). We will compare the absolute convergence of the differentiated series \(\sum k a_k c^{k - 1}\) with \(\sum a_k d^k\text{.}\)
One may easily verify with the Ratio Test that the series \(\sum k t^{k - 1}\) has radius of convergence \(1\text{.}\) Since \(c / d \lt 1\text{,}\) the series \(\sum k {(c/d)}^{k - 1}\) converges absolutely, and the sequence of terms \(\bbseq{n {(c/d)}^{n - 1}}\) must converge to \(0\text{.}\) Therefore, any constant multiple of this sequence must converge to \(0\text{.}\) In particular, if we divide each term by \(d\text{,}\) we have
\begin{equation*} \lrseq{ \frac{n c^{n - 1}}{d^n} } \to 0 \text{.} \end{equation*}
As such, there must exist some tail of this sequence (starting at index \(N\text{,}\) say) completely contained in the open range \((-1, 1)\text{.}\) As both \(c\) and \(d\) are positive, for all \(n \ge N\) we have
\begin{gather*} \frac{n c^{n - 1}}{d^n} \lt 1 \\ n c^{n - 1} \lt d^n \text{.} \end{gather*}
Multiplying this inequality by the non-negative value \(\abs{a_n}\) (all quantities involved are non-negative, so we may freely insert absolute value brackets), we obtain
\begin{equation*} \abs{n a_n c^{n - 1}} \lt \abs{a_n d^n} \text{.} \end{equation*}
This is the comparison we were seeking: since \(\sum a_k d^k\) converges absolutely, so must \(\sum n a_n c^{n - 1}\text{.}\)
Now suppose \(c\) is outside of \(-R \le t \le R\text{.}\) Again we will assume that \(c\) is positive, as a similar argument will work in the case that \(c\) is negative. We essentially make a “reflected” argument of the one above, reflected at \(R\text{.}\) Again choose \(d\) between \(R\) and \(c\text{,}\) which will also lie outside of the radius of convergence of \(\sum a_k t^k\text{.}\) Then \(c/d \gt 1\) and so
\begin{equation*} \lrseq{ n {\left(\frac{c}{d}\right)}^{n - 1} } \to \infty \text{.} \end{equation*}
Any constant multiple of this sequence will also diverge to infinity. In particular, dividing every term by \(d\) gives us
\begin{equation*} \lrseq{ n \frac{c^{n - 1}}{d^n} } \to \infty \text{,} \end{equation*}
and so there is some tail of this sequence where the terms become and stay greater than \(1\text{.}\) Performing similar manipulations as in the convergent case above, we may say that there is an \(N\) where
\begin{equation*} \abs{n a_n c^{n - 1}} \gt \abs{a_n d^n} \end{equation*}
for all \(n \ge N\text{,}\) hence \(\sum n a_n c^{n - 1}\) cannot converge absolutely, by comparison with \(\sum a_k d^k\) (which, lying outside the radius of convergence, does not converge absolutely). And since convergence at \(t = c\) would force absolute convergence at every point of smaller magnitude (Fact 26.2.5), where the same comparison argument applies, the differentiated series must in fact diverge at \(t = c\text{.}\)
Justification for the integrated series.
First suppose \(c\) lies in \(-R \lt t \lt R\text{.}\) We know from Fact 26.2.5 that \(\sum a_k c^k\) converges absolutely, and clearly
\begin{equation*} \abs{\frac{a_n c^n}{n + 1}} \lt \abs{a_n c^n} \end{equation*}
for all \(n\text{,}\) so that \(\sum \abs{a_k c^k / (k + 1)}\) converges by comparison. But then so does
\begin{equation*} \abs{c} \cdot \sum_{k = 0}^\infty \abs{\frac{a_k}{k + 1} \, c^k} = \sum_{k = 0}^\infty \abs{\frac{a_k}{k + 1} \, c^{k + 1}}\text{.} \end{equation*}
Therefore, \(\sum a_k c^{k + 1} / (k + 1)\) converges absolutely.
Now consider a value of \(c\) outside of \(-R \le t \le R\text{.}\) If \(\sum a_k c^{k + 1} / (k + 1)\) converged, then \(c\) would be within the interval of convergence of the series \(\sum a_k t^{k + 1} / (k + 1)\text{.}\) If we differentiate this series term-by-term, the resulting series would have the same radius of convergence. But differentiating that series term-by-term yields
\begin{equation*} \sum_{k = 0}^\infty a_k \cdot \frac{(k + 1) t^k}{k + 1} = \sum_{k = 0}^\infty a_k t^k \text{,} \end{equation*}
which is simply the original series \(\sum a_k t^k\text{,}\) and hence has radius of convergence \(R\text{.}\) Since we assumed \(c\) to be outside of this radius of convergence, we have a contradiction, and we conclude that \(\sum a_k c^{k + 1} / (k + 1)\) must diverge.
So integrated/differentiated power series converge on the same interval that the original power series does. But to what do they converge? Thankfully, the results of Example 26.4.1 and Example 26.4.2 are not coincidences or special cases: on the interval of convergence, the term-by-term integrated series converges to an antiderivative of the sum function (Statement 1 of Theorem 26.4.4), and the term-by-term differentiated series converges to its derivative (Statement 2). But justifying this fully is beyond the scope of this text.
While the justification for Statement 2 of Theorem 26.4.4 is beyond the scope of this text, it can be used to justify Statement 1. Every differentiable function is continuous (Pattern 11.7.11), and we can use term-by-term differentiation of the power series expression for \(F(t)\) to arrive back at \(f\text{,}\) demonstrating that \(F'(t) = f(t)\text{.}\)
We can also extend Theorem 26.4.4 to higher derivatives.

Subsection 26.4.3 Using differentiation/integration to compute power series

Instead of computing higher and higher derivatives of a function in order to compute its Taylor series, an alternative is to use term-by-term integration of a known power series representation for a function to obtain a power series representation for its antiderivative or its derivatives.
Example 26.4.6. Derivative of the geometric series.
We know that the geometric series \(\sum r^k\) converges to \(1 / (1 - r)\) for \(-1 \lt r \lt 1\) (Pattern 25.1.11). Trading \(r\) for our preferred variable \(t\text{,}\) we get a power series representation of the function \(f(t) = 1 / (1 - t)\) with radius of convergence \(R = 1\text{:}\)
\begin{equation*} \frac{1}{1 - t} = 1 + t + t^2 + t^3 + \dotsb \text{.} \end{equation*}
Now
\begin{equation*} \opddt\left( \frac{1}{1 - t} \right) = \frac{1}{{(1 - t)}^2} \text{,} \end{equation*}
so we may differentiate the series for \(f\) term-by-term to obtain
\begin{equation*} \frac{1}{{(1 - t)}^2} = 1 + 2 t + 3 t^2 + 4 t^3 + \dotsb \text{,} \end{equation*}
valid on domain \(-1 \lt t \lt 1\text{.}\)
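A one-line Python check (illustrative, at an arbitrarily chosen \(t = 0.3\)) compares a partial sum of this differentiated series with \(1 / {(1 - t)}^2\text{:}\)

# Partial sum of 1 + 2t + 3t^2 + ... versus the closed form.
t = 0.3
print(sum((k + 1) * t ** k for k in range(60)), 1 / (1 - t) ** 2)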
Checkpoint 26.4.7.
Use Corollary 26.4.5 to continue the pattern of Example 26.4.6, obtaining a general power series representation for every power \(1 / {(1 - t)}^{m + 1}\text{,}\) \(m \ge 0\text{.}\)
Example 26.4.8. Integral of the geometric series.
We know that
\begin{equation*} \sum_{k = 0}^\infty t^k = \frac{1}{1 - t} \text{,} \qquad -1 \lt t \lt 1 \text{.} \end{equation*}
Integrating both sides, we obtain
\begin{equation*} \sum_{k = 0}^\infty \frac{t^{k + 1}}{k + 1} = - \ln(1 - t) + C \text{.} \end{equation*}
(On domain \(-1 \lt t \lt 1\) there is no need to include absolute value brackets inside the logarithm.) Both the infinite series and the logarithm evaluate to \(0\) at \(t = 0\text{,}\) so we conclude that \(C = 0\text{.}\) Shifting indices in the series using the substitution \(m = k + 1\text{,}\) we arrive at
\begin{equation*} \sum_{m = 1}^\infty \frac{t^m}{m} = - \ln(1 - t) \text{.} \end{equation*}
In particular, for \(t = 1/2\) we have
\begin{equation*} \sum_{m = 1}^\infty \frac{{(1/2)}^m}{m} = - \ln\left(\frac{1}{2}\right) = \ln(2) \text{.} \end{equation*}
This provides us with an interesting representation of \(\ln(2)\) as an infinite series:
\begin{equation*} \ln(2) = \sum_{m = 1}^\infty \frac{1}{m 2^m} \text{.} \end{equation*}
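This series converges quickly; a brief Python sketch (illustrative only) compares a partial sum with \(\ln(2)\text{:}\)

import math

# Forty terms of sum 1 / (m * 2^m) already agree with ln(2)
# to many decimal places.
print(sum(1 / (m * 2 ** m) for m in range(1, 41)), math.log(2))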
Example 26.4.9. Integral of the natural logarithm.
First, we know that \(\ln'(t) = 1 / t\text{,}\) and
\begin{equation*} \frac{1}{t} = \frac{1}{1 - r} \quad\text{for}\quad r = 1 - t \text{.} \end{equation*}
So we can again use the Geometric series to obtain a power series representation:
\begin{equation*} \frac{1}{t} = \frac{1}{1 - r} = \sum_{k = 0}^\infty r^k = \sum_{k = 0}^\infty {(1 - t)}^k = \sum_{k = 0}^\infty {(-1)}^k {(t - 1)}^k\text{.} \end{equation*}
Note that this power series expression converges for \(-1 \lt r \lt 1\text{,}\) which translates into \(0 \lt t \lt 2\text{.}\)
Now integrate both sides of
\begin{equation*} \frac{1}{t} = \sum_{k = 0}^\infty {(-1)}^k {(t - 1)}^k \end{equation*}
to arrive at
\begin{equation*} \ln(t) + C = \sum_{k = 0}^\infty \frac{{(-1)}^k}{k + 1} {(t - 1)}^{k + 1} \text{.} \end{equation*}
Both \(\ln(t)\) and the infinite series evaluate to \(0\) at \(t = 1\text{,}\) so we conclude that \(C = 0\text{.}\) Shifting indices in the series using the substitution \(m = k + 1\text{,}\) we arrive at
\begin{gather} \ln(t) = \sum_{m = 1}^\infty \frac{{(-1)}^{m - 1}}{m} {(t - 1)}^m\text{.}\tag{†} \end{gather}
Finally, we will integrate term-by-term once again to obtain an antiderivative of \(\ln(t)\text{:}\)
\begin{align*} \ccmiint{\ln(t)}{t} \amp = \sum_{m = 1}^\infty \left( \ccmiint{\frac{{(-1)}^{m - 1}}{m} {(t - 1)}^m}{t} \right)\\ \amp = \sum_{m = 1}^\infty \left( \frac{{(-1)}^{m - 1}}{m} \ccmiint{{(t - 1)}^m}{t} \right) \\ \amp = \sum_{m = 1}^\infty \frac{{(-1)}^{m - 1}}{m} \frac{{(t - 1)}^{m + 1}}{m + 1} \\ \amp = \sum_{m = 1}^\infty \frac{{(-1)}^{m - 1}}{m (m + 1)} \; {(t - 1)}^{m + 1} \text{.} \end{align*}
Using
\begin{equation*} \frac{1}{m (m + 1)} = \frac{1}{m} - \frac{1}{m + 1} \end{equation*}
we can rearrange:
\begin{align*} \ccmiint{\ln(t)}{t} \amp = \sum_{m = 1}^\infty \left( \frac{{(-1)}^{m - 1}}{m} - \frac{{(-1)}^{m - 1}}{(m + 1)} \right) \; {(t - 1)}^{m + 1}\\ \amp = \sum_{m = 1}^\infty \left( \frac{{(-1)}^{m - 1}}{m} + \frac{{(-1)}^m}{(m + 1)} \right) \; {(t - 1)}^{m + 1} \text{.} \end{align*}
We can then split this into two series:
\begin{gather} \ccmiint{\ln(t)}{t} = \sum_{m = 1}^\infty \frac{{(-1)}^{m - 1}}{m} \; {(t - 1)}^{m + 1} + \sum_{m = 1}^\infty \frac{{(-1)}^m}{(m + 1)} \; {(t - 1)}^{m + 1}\text{.}\tag{††} \end{gather}
Now, the first series in (††) resembles the power series representation (†) for \(\ln(t)\text{:}\)
\begin{equation*} \sum_{m = 1}^\infty \frac{{(-1)}^{m - 1}}{m} \; {(t - 1)}^{m + 1} = (t - 1) \sum_{m = 1}^\infty \frac{{(-1)}^{m - 1}}{m} \; {(t - 1)}^m = (t - 1) \ln(t)\text{.} \end{equation*}
And the second series in (††) resembles the earlier power series representation for \(\ln(t)\text{,}\) indexed from \(k = 0\text{,}\) from which we obtained (†) by shifting indices:
\begin{align*} \sum_{m = 1}^\infty \frac{{(-1)}^m}{(m + 1)} \; {(t - 1)}^{m + 1} \amp = \sum_{k = 1}^\infty \frac{{(-1)}^k}{(k + 1)} \; {(t - 1)}^{k + 1}\\ \amp = -(t - 1) + (t - 1) + \sum_{k = 1}^\infty \frac{{(-1)}^k}{(k + 1)} \; {(t - 1)}^{k + 1} \\ \amp = -(t - 1) + \frac{{(-1)}^0}{(0 + 1)} \; {(t - 1)}^{0 + 1} + \sum_{k = 1}^\infty \frac{{(-1)}^k}{(k + 1)} \; {(t - 1)}^{k + 1} \\ \amp = -(t - 1) + \sum_{k = 0}^\infty \frac{{(-1)}^k}{(k + 1)} \; {(t - 1)}^{k + 1} \\ \amp = 1 - t + \ln(t) \text{.} \end{align*}
Adding these back together yields
\begin{equation*} \ccmiint{\ln(t)}{t} = (t - 1) \ln(t) + 1 - t + \ln(t) = t \ln(t) - t + 1\text{.} \end{equation*}
For an indefinite integral we should really have an arbitrary constant of integration, so we conclude that
\begin{equation*} \ccmiint{\ln(t)}{t} = t \ln(t) - t + C \text{.} \end{equation*}
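As an informal verification, a central-difference approximation in Python (a sketch, with the evaluation point and step size chosen arbitrarily) checks that \(t \ln(t) - t\) differentiates back to \(\ln(t)\text{:}\)

import math

# Central difference (F(t + h) - F(t - h)) / (2h) approximates F'(t).
F = lambda t: t * math.log(t) - t
t, h = 1.7, 1e-6
print((F(t + h) - F(t - h)) / (2 * h), math.log(t))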
We can also use term-by-term integration to help estimate definite integral values.
Example 26.4.10. Estimating a definite integral with partial sums.
Suppose we want to estimate the definite integral below to within \(\pm {10}^{-10}\text{.}\)
\begin{equation*} \ccmint{0}{1/2}{\frac{1}{1 + t^7}}{t} \end{equation*}
Recall that we can use any choice of antiderivative for the integrand function to compute a definite integral (Pattern 13.6.2). In particular, we can create an antiderivative through term-by-term integration of a power series representation of the integrand function. Conveniently, our integrand can be interpreted as the sum of a geometric series with \(r = -t^7\text{:}\)
\begin{gather*} f(t) = \frac{1}{1 + t^7} = \frac{1}{1 - (-t^7)} = \sum_{k = 0}^\infty {(-t^7)}^k = \sum_{k = 0}^\infty {(-1)}^k t^{7 k} \\ F(t) = \ccmiint{f(t)}{t} = \sum_{k = 0}^\infty {(-1)}^k \frac{t^{7 k + 1}}{7 k + 1} \end{gather*}
Our power series representation of \(f\) is valid for \(-1 \lt r \lt 1\text{,}\) which for \(r = -t^7\) is equivalent to the domain \(-1 \lt t \lt 1\) and contains our domain of integration. And by Theorem 26.4.4, our antiderivative power series is also valid on that domain.
We have
\begin{equation*} \ccmint{0}{1/2}{\frac{1}{1 + t^7}}{t} = F(1/2) - F(0) \text{,} \end{equation*}
but every term in our power series for \(F\) involves a non-trivial power of \(t\text{.}\) So there is no constant term and \(F(0) = 0\text{,}\) which means that we only need to estimate
\begin{equation*} F(1/2) = \sum_{k = 0}^\infty \frac{{(-1)}^k}{(7 k + 1) 2^{7 k + 1}} \text{.} \end{equation*}
This is an alternating series with terms whose absolute values are decreasing, so we may apply Fact 25.5.13 to estimate
\begin{equation*} F(1/2) \approx \left( \sum_{k = 0}^n \frac{{(-1)}^k}{(7 k + 1) 2^{7 k + 1}} \right) \pm \frac{1}{(7 n + 8) 2^{7 n + 8}} \text{,} \end{equation*}
where the error bound is the magnitude of the \(\nth[(n + 1)]\) term in the sum. What is the smallest value of \(n\) so that this error bound is less than \({10}^{-10}\text{?}\) The denominator first exceeds \({10}^{10}\) at \(n = 3\text{,}\) so we only need the first four terms of the sum (indices \(k = 0\) to \(k = 3\text{,}\) inclusive):
\begin{equation*} \ccmint{0}{1/2}{\frac{1}{1 + t^7}}{t} = F(1/2) \approx \frac{1}{2} - \frac{1}{8 \cdot 2^8} + \frac{1}{15 \cdot 2^{15}} - \frac{1}{22 \cdot 2^{22}}\text{.} \end{equation*}
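A Python sketch (illustrative; the midpoint Riemann sum below is our own crude stand-in for the exact value) confirms the accuracy of the four-term estimate:

# Four-term alternating-series estimate of the integral.
est = sum((-1) ** k / ((7 * k + 1) * 2 ** (7 * k + 1)) for k in range(4))

# Midpoint Riemann sum for the integral of 1 / (1 + t^7) over [0, 1/2].
N = 200000
dx = 0.5 / N
riemann = sum(dx / (1 + ((i + 0.5) * dx) ** 7) for i in range(N))
print(est, riemann)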

Subsection 26.4.4 Convergence at endpoints: Abel’s Theorem

Example 26.4.11. Endpoint convergence of a shifted logarithm.
Suppose we would like to have a power series representation of the shifted logarithm function \(f(t) = \ln(1 + t)\text{.}\) We have \(f'(t) = 1/(1 + t)\text{,}\) and we may interpret this as the sum of a geometric series with \(r = -t\text{:}\)
\begin{equation*} f'(t) = \frac{1}{1 - (-t)} = \sum_{k = 0}^\infty {(-t)}^k = 1 - t + t^2 - t^3 + \dotsb \text{.} \end{equation*}
This power series representation is valid for \(-1 \lt r \lt 1\text{,}\) which for \(r = -t\) is equivalent to the domain \(-1 \lt t \lt 1\text{.}\) Integrating this power series term-by-term will result in an antiderivative of \(f'(t)\text{,}\) which must then be a vertical shift of \(f(t)\text{:}\)
\begin{equation*} f(t) + C = \sum_{k = 0}^\infty \left( \ccmiint{{(-1)}^k t^k}{t} \right) = \sum_{k = 0}^\infty \frac{{(-1)}^k t^{k + 1}}{k + 1} = \sum_{m = 1}^\infty \frac{{(-1)}^{m - 1} t^m}{m}\text{,} \end{equation*}
for some constant \(C\text{.}\) We have \(f(0) = 0\text{,}\) and the power series has no constant term, so we must have \(C = 0\) and
\begin{equation*} \ln(1 + t) = \sum_{m = 1}^\infty \frac{{(-1)}^{m - 1}}{m} \; t^m \text{.} \end{equation*}
(Compare with the Taylor series for this function that we computed in Example 26.3.3.)
Since our series for \(f'(t)\) converges on \(-1 \lt t \lt 1\text{,}\) Theorem 26.4.4 implies that our series for \(\ln(1 + t)\) converges on that same domain, and for each value of \(t\) in that domain the series converges to the value of \(\ln(1 + t)\text{.}\) For \(t = -1\) the series is the negative of the harmonic series, and hence diverges, which is to be expected since \(f(-1)\) is undefined. For \(t = 1\) the series is an alternating harmonic series, which we know converges. But what does it converge to? We hope that it converges to \(f(1) = \ln(2)\text{,}\) and Abel’s Theorem confirms this: if a power series converges to a function on the interior of its interval of convergence, also converges at an endpoint of that interval, and the function is continuous (from the appropriate side) at that endpoint, then the series converges to the function’s value at that endpoint as well.
Example 26.4.13. Abel’s Theorem for a shifted logarithm.
Returning to Example 26.4.11, the function \(f(t) = \ln(1 + t)\) is continuous on its entire domain \(t \gt -1\text{,}\) so we may say that it is continuous from the left at \(t = 1\text{.}\) Abel’s Theorem lets us conclude that the alternating harmonic series \(\sum {(-1)}^{k - 1} / k\) converges to \(\ln(2)\text{,}\) as claimed in Example 25.1.9.
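A brief Python computation (illustrative only; convergence at this endpoint is notoriously slow) shows the partial sums of the alternating harmonic series creeping toward \(\ln(2)\text{:}\)

import math

print(sum((-1) ** (k - 1) / k for k in range(1, 100001)), math.log(2))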

Section 26.5 Power series versus Taylor series

In the examples in this chapter we have often used the known sum of the geometric series to quickly obtain a power series representation of a function. How does such a power series compare to the Taylor series based at \(t = 0\text{?}\) Is it possible to have two different power series representations for a given function, both “based” at the same point?

Example 26.5.1. Compare the geometric series with the Taylor series of its sum.

We know
\begin{equation*} \frac{1}{1 - t} = \sum_{k = 0}^\infty t^k \text{,} \qquad -1 \lt t \lt 1 \text{.} \end{equation*}
Write \(f(t) = 1 / (1 - t)\text{.}\) Try computing the first four derivatives of \(f\text{;}\) you should observe the pattern
\begin{equation*} \nthderiv{f}(t) = \frac{n!}{{(1 - t)}^{n + 1}} \text{,} \end{equation*}
from which we obtain \(\nthderiv{f}(0) = n!\text{.}\) Using this in the Taylor series formula, we have
\begin{align*} \frac{1}{1 - t} \amp = \sum_{k = 0}^\infty \frac{\nthderiv[k]{f}(0)}{k!} \; t^k \\ \amp = \sum_{k = 0}^\infty \frac{k!}{k!} \; t^k \\ \amp = \sum_{k = 0}^\infty t^k \text{,} \end{align*}
which is simply the geometric series again.

Checkpoint 26.5.2. Taylor series of the reflected geometric series.

Perform the same analysis as in Example 26.5.1 for the function
\begin{equation*} f(t) = \frac{1}{1 + t} = \frac{1}{1 - (-t)} \text{.} \end{equation*}
On the one hand, obtain a power series representation for \(f\) by interpreting its formula as the sum of a geometric series, and on the other hand compute the Taylor series for \(f\) based at \(t = 0\text{.}\) Compare.

In fact, a power series representation based at a given point is unique: if \(f(t) = \sum a_k t^k\) on an interval of positive radius around \(t = 0\text{,}\) then the coefficients must satisfy \(a_k = \nthderiv[k]{f}(0) / k!\text{,}\) so the power series is precisely the Taylor series for \(f\) based at \(t = 0\text{.}\)

Idea of justification.

To compute the Taylor series, apply the differentiation formula for power series in Corollary 26.4.5 repeatedly: differentiating \(\sum a_k t^k\) term-by-term \(n\) times and then evaluating at \(t = 0\) leaves only the constant term, which gives \(\nthderiv{f}(0) = n! \, a_n\text{.}\)

Example 26.5.5. Determining the Taylor series for an antiderivative of a continuous function.

From the Fundamental Theorem of Calculus, we may conclude that every continuous function has an antiderivative. (See Fact 13.4.4.) For example, the continuous function \(f(t) = e^{t^2}\) must have an antiderivative, but you are unlikely to guess a formula for it without using an integral sign. However, if we are willing to use an integral sign, we can construct an antiderivative of \(f\text{:}\)
\begin{equation*} F(t) = \ccmint{0}{t}{e^{u^2}}{u} \text{.} \end{equation*}
What is the Taylor series for \(F\text{?}\) We know that the Taylor series for the natural exponential function is \(\exp(t) = \sum t^k / k!\text{,}\) with infinite radius of convergence. (See Example 26.3.2.) Substituting \(t^2\) into this series yields \(e^{t^2} = \sum t^{2 k} / k!\text{,}\) and by integrating we obtain
\begin{equation*} \ccmiint{e^{t^2}}{t} = C + \sum_{k = 0}^\infty \frac{t^{2 k + 1}}{k! (2 k + 1)} \text{,} \end{equation*}
where \(C\) is an arbitrary constant of integration. The specific antiderivative \(F(t)\) must be one of the infinite family of antiderivatives above, and from \(F(0) = 0\) we conclude that it is the antiderivative where \(C = 0\text{:}\)
\begin{equation*} \ccmint{0}{t}{e^{u^2}}{u} = \sum_{k = 0}^\infty \frac{t^{2 k + 1}}{k! (2 k + 1)} \text{.} \end{equation*}
As we know that integrating a power series does not affect the radius of convergence, this power series representation of \(F\) is valid for all \(t\text{.}\)
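To close, an illustrative Python sketch compares a partial sum of this series at \(t = 1/2\) with a midpoint Riemann sum for \(F(1/2)\) (the Riemann sum being our own stand-in check):

import math

t = 0.5
# Partial sum of sum t^(2k + 1) / (k! (2k + 1)).
series = sum(t ** (2 * k + 1) / (math.factorial(k) * (2 * k + 1))
             for k in range(20))

# Midpoint Riemann sum for the integral of e^(u^2) over [0, t].
N = 100000
du = t / N
riemann = sum(math.exp(((i + 0.5) * du) ** 2) * du for i in range(N))
print(series, riemann)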