
Chapter 25 Series

Section 25.1 Definitions and examples

Recall that our motivation for studying sequences was the sequence of approximation values one could obtain by approximating a specific function value using Taylor polynomials of increasing degree. Consider the example of approximating \(\ln (1.001)\) from Section 22.1 once more. Using Taylor polynomials based at \(t = 1\text{,}\) we have
\begin{align*} \ln(1.001) \amp \approx T_{0,1}(1.001) = 0 \\ \ln(1.001) \amp \approx T_{1,1}(1.001) = 0 + 0.001 \\ \ln(1.001) \amp \approx T_{2,1}(1.001) = 0 + 0.001 - \frac{{0.001}^2}{2} \\ \ln(1.001) \amp \approx T_{3,1}(1.001) = 0 + 0.001 - \frac{{0.001}^2}{2} + \frac{{0.001}^3}{3} \\ \amp \quad \vdots \text{.} \end{align*}
(This time we have included the “zero-degree” Taylor polynomial approximation value.) This creates a sequence \(\nseq{t}\) where
\begin{equation*} t_n = T_{n,1}(1.001) = 0 + 0.001 - \frac{{0.001}^2}{2} + \dotsc + \frac{{(-1)}^{n - 1} {0.001}^n}{n} \text{.} \end{equation*}
If we allow the possibility of an “infinite-degree” Taylor polynomial \(T_{\infty,1}\text{,}\) computing a value like \(T_{\infty,1}(1.001)\) of such a “polynomial” would involve adding up an infinite number of summands, while each term \(t_n\) from the sequence above represents a partial summation of only a finite number of summands.
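These partial summations are easy to tabulate numerically. Here is a small Python sketch (the helper name `taylor_ln_at_1` is ours, not the text's), using the fact that the degree-\(n\) Taylor polynomial for \(\ln\) based at \(t = 1\) is \(\sum_{k=1}^n (-1)^{k-1} (t-1)^k / k\text{:}\)

```python
import math

def taylor_ln_at_1(x, n):
    """Value T_{n,1}(x) of the degree-n Taylor polynomial for ln based
    at t = 1: the partial sum of (-1)**(k-1) * (x - 1)**k / k for k = 1..n."""
    return sum((-1) ** (k - 1) * (x - 1) ** k / k for k in range(1, n + 1))

for n in range(4):
    print(n, taylor_ln_at_1(1.001, n))
print("ln(1.001) =", math.log(1.001))
```

Already at degree 3 the approximation agrees with \(\ln(1.001)\) to well beyond ten decimal places.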

Definition 25.1.1. Partial sum.

For an infinite sequence \(\nseq{a}\) and a specific index \(n\text{,}\) the sum
\begin{equation*} s_n = \sum_{k = 0}^n a_k \end{equation*}
is called the \(\nth\) partial sum of \(\nseq{a}\). The sequence \(\nseq{s}\) that indexes all such sums is called the sequence of partial sums for \(\nseq{a}\).

Example 25.1.2.

  1. Consider the sequence with terms \(a_n = 1 / {10}^n\text{:}\)
    \begin{align*} a_0 \amp = 1 \text{,} \amp a_1 \amp = 0.1 \text{,} \amp a_2 \amp = 0.01 \text{,} \amp \amp \dots \text{.} \end{align*}
    The associated partial sums are:
    \begin{align*} s_0 \amp = 1 \text{,} \\ s_1 \amp = 1 + 0.1 = 1.1 \text{,} \\ s_2 \amp = 1 + 0.1 + 0.01 = 1.11 \text{,} \\ s_3 \amp = 1 + 0.1 + 0.01 + 0.001 = 1.111 \text{,} \\ \amp \vdots \text{.} \end{align*}
    It certainly seems that if we let this process continue indefinitely, these partial sums would trend toward \(1.\bar{1}\text{.}\)
  2. Consider the sequence \(\nseq{b}\) whose terms are the digits of \(\pi\text{:}\)
    \begin{align*} b_0 \amp = 3 \text{,} \amp b_1 \amp = 1 \text{,} \amp b_2 \amp = 4 \text{,} \amp b_3 \amp = 1 \text{,} \amp b_4 \amp = 5 \text{,} \amp b_5 \amp = 9 \text{,} \amp \amp \dots \text{.} \end{align*}
    Then create a new sequence with terms \(a_n = b_n / {10}^n\text{.}\) The partial sums for \(\nseq{a}\) are:
    \begin{align*} s_0 \amp = 3 \text{,} \\ s_1 \amp = 3 + 0.1 = 3.1 \text{,} \\ s_2 \amp = 3 + 0.1 + 0.04 = 3.14 \text{,} \\ s_3 \amp = 3 + 0.1 + 0.04 + 0.001 = 3.141 \text{,} \\ s_4 \amp = 3 + 0.1 + 0.04 + 0.001 + 0.0005 = 3.1415 \text{,} \\ s_5 \amp = 3 + 0.1 + 0.04 + 0.001 + 0.0005 + 0.00009 = 3.14159 \text{,} \\ \amp \vdots \text{.} \end{align*}
    It certainly seems that if we let this process continue indefinitely, these partial sums would trend toward \(\pi\text{.}\)
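Both sequences of partial sums in this example are easy to generate numerically. Here is a small Python sketch for the second one (the digit list is hard-coded for illustration):

```python
digits_of_pi = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]  # b_0, b_1, b_2, ...

partial_sums = []
total = 0.0
for n, b in enumerate(digits_of_pi):
    total += b / 10 ** n   # add the term a_n = b_n / 10**n
    partial_sums.append(total)

print(partial_sums)  # 3.0, 3.1, 3.14, 3.141, ... trending toward pi
```

Each new term shifts the next digit of \(\pi\) into place, so the partial sums agree with \(\pi\) to one more decimal place at every step.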
The preceding example demonstrates that sometimes partial sums can exhibit a specific pattern. But sometimes they do not.

Example 25.1.3.

Consider the sequence \(\nseq{a}\) with terms \(a_n = {(-1)}^n n^2\text{:}\)
\begin{align*} a_0 \amp = 0 \text{,} \amp a_1 \amp = -1 \text{,} \amp a_2 \amp = 4 \text{,} \amp a_3 \amp = -9 \text{,} \amp a_4 \amp = 16 \text{,} \amp \amp \dots \text{.} \end{align*}
The associated partial sums are:
\begin{align*} s_0 \amp = 0 \text{,} \\ s_1 \amp = 0 + (-1) = -1 \text{,} \\ s_2 \amp = 0 + (-1) + 4 = 3 \text{,} \\ s_3 \amp = 0 + (-1) + 4 + (-9) = -6 \text{,} \\ s_4 \amp = 0 + (-1) + 4 + (-9) + 16 = 10 \text{,} \\ s_5 \amp = 0 + (-1) + 4 + (-9) + 16 + (-25) = -15 \text{,} \\ \amp \vdots \text{.} \end{align*}
These partial sums are oscillating between increasingly large positive and negative numbers, so there is no trend toward any specific number.

Definition 25.1.4. Infinite series.

Write
\begin{equation*} \sum_{k = 0}^\infty a_k \end{equation*}
(or sometimes just \(\sum a_k\)) to represent the limit of the sequence of partial sums for the sequence \(\nseq{a} \text{,}\) called an infinite series. We say that the series converges/diverges if the sequence of partial sums does.

Example 25.1.5.

We have already seen in Example 25.1.2 that
\begin{align*} \sum_{k = 0}^\infty {10}^{-k} \amp = \frac{10}{9} \text{,} \amp \sum_{k = 0}^\infty a_k \amp = \pi \text{,} \end{align*}
where \(\nseq{a}\) is the sequence of digits of \(\pi\text{.}\)

Remark 25.1.6.

Every real number \(r\) is equal to the series formed by the digits in its decimal expansion:
\begin{equation*} r = \sum_{k = -N}^\infty \frac{r_k}{{10}^k} \text{,} \end{equation*}
where each \(r_k\) is an integer between \(0\) and \(9\) (inclusive) and \(N\) is the highest “place” at which a non-zero digit appears. For example, if \(r = 9876\) then \(N = 3\text{,}\) because the first digit \(r_{-3} = 9\) appears in the thousands (\({10}^3\)) place; but if \(r = 0.009876\) then \(N = -3\text{,}\) because the first digit \(r_3 = 9\) appears in the thousandths (\({10}^{-3}\)) place.
In a sense, this is what decimal expansion means as an expression of a real number.

Warning 25.1.7.

The usual associative and commutative properties of addition DO NOT apply to the terms in an infinite series.

Example 25.1.8. Grouping terms can produce an incorrect result.

Consider the “alternating” series
\begin{equation*} \sum_{k = 0}^\infty {(-1)}^k = 1 + (-1) + 1 + (-1) + 1 + (-1) + \dotsb \text{.} \end{equation*}
It is tempting to use the associative property of addition and group terms in this sum as follows:
\begin{align*} \sum_{k = 0}^\infty {(-1)}^k \amp \overset{\text{?}}{=} \bbrac{1 + (-1)} + \bbrac{1 + (-1)} + \bbrac{1 + (-1)} + \dotsb\\ \amp = 0 + 0 + 0 + \dotsb \\ \amp = 0 \text{.} \end{align*}
However, this would not be the correct result based on the definition of Infinite series. Instead of performing this dubious manipulation of the terms of the sequence, we should consider the sequence of partial sums:
\begin{align*} s_0 \amp = 1 \\ s_1 \amp = 1 + (-1) = 0 \\ s_2 \amp = 1 + (-1) + 1 = 1 \\ s_3 \amp = 1 + (-1) + 1 + (-1) = 0 \\ \amp \vdots \text{.} \end{align*}
As you can see, the sequence \(\nseq{s}\) oscillates between \(1\) and \(0\text{,}\) and so cannot converge to any specific value. In fact, the infinite series \(\sum {(-1)}^k\) diverges.

Example 25.1.9. Re-ordering terms can produce different results.

It is possible to demonstrate that the alternating harmonic series
\begin{equation*} \sum_{k = 1}^\infty \frac{{(-1)}^{k - 1}}{k} = 1 + \left(- \frac{1}{2}\right) + \frac{1}{3} + \left(-\frac{1}{4}\right) + \dotsb \end{equation*}
converges to \(\ln(2)\text{.}\) (See Subsection 26.4.4.) You can investigate the behaviour of the partial sums numerically: as the partial sum upper bound \(n\) increases, the partial sum values appear to approach \(\ln(2)\text{.}\)
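Such a numerical investigation can be sketched in a few lines of Python (the helper name is ours):

```python
import math

def alt_harmonic_partial(n):
    """n-th partial sum of the alternating harmonic series:
    sum of (-1)**(k-1) / k for k = 1..n."""
    return sum((-1) ** (k - 1) / k for k in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    print(n, alt_harmonic_partial(n))
print("ln(2) =", math.log(2))
```

The partial sums alternate above and below \(\ln(2)\text{,}\) closing in on it as \(n\) grows.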
However, if we rearrange the terms of the sequence we actually get a different result. It is possible to demonstrate that
\begin{equation*} 1 + \frac{1}{3} + \left(- \frac{1}{2}\right) + \frac{1}{5} + \frac{1}{7} + \left(-\frac{1}{4}\right) + \dotsb = \frac{3}{2} \ln(2) \text{.} \end{equation*}
Again, you can investigate this convergence numerically.
In fact, it is a theorem of Riemann that the terms of this sequence can be rearranged to add up to any value you like! Suppose you would like the rearranged series to add up to a specific number \(s\text{.}\) Just add up positive terms until your partial sum is greater than \(s\text{,}\) then add on negative terms until your partial sum is less than \(s\text{,}\) then more positive terms until the partial sum is greater than \(s\) again, and so on. Try it with \(s\) equal to your favourite number!
While you shouldn’t rearrange/simplify an infinite sum, you certainly can rearrange/simplify the partial sums as you investigate the convergence of the infinite sum.
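The greedy rearrangement procedure described in this example can be sketched in Python (a hypothetical `rearranged_partial_sums` helper, not from the text):

```python
def rearranged_partial_sums(target, num_terms):
    """Greedily rearrange the alternating harmonic series toward `target`:
    add positive terms 1, 1/3, 1/5, ... while the running total is at or
    below the target, otherwise add negative terms -1/2, -1/4, -1/6, ...."""
    pos, neg = 1, 2   # next odd (positive) and even (negative) denominators
    total = 0.0
    sums = []
    for _ in range(num_terms):
        if total <= target:
            total += 1 / pos
            pos += 2
        else:
            total -= 1 / neg
            neg += 2
        sums.append(total)
    return sums

# The running totals oscillate around the target with ever-smaller swings,
# since each overshoot is bounded by the size of the term just added.
print(rearranged_partial_sums(0.5, 10000)[-1])
```

Because the overshoot at each crossing is at most the last term used, and the terms shrink to \(0\text{,}\) the rearranged partial sums are forced toward the target.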

Example 25.1.10. A telescoping sum.

Consider
\begin{equation*} \sum_{k = 1}^\infty \frac{2}{k (k + 2)} \text{.} \end{equation*}
The terms can be rewritten as
\begin{equation*} \frac{2}{k (k + 2)} = \frac{1}{k} - \frac{1}{k + 2} \text{,} \end{equation*}
and so the \(\nth\) partial sum is
\begin{equation*} s_n = \sum_{k = 1}^n \left( \frac{1}{k} - \frac{1}{k + 2} \right) = \sum_{k = 1}^n \frac{1}{k} - \sum_{k = 1}^n \frac{1}{k + 2} \text{.} \end{equation*}
More explicitly,
\begin{align*} s_n = 1 + \frac{1}{2} \amp + \frac{1}{3} + \frac{1}{4} + \dotsb + \frac{1}{n} \\ \amp - \frac{1}{3} - \frac{1}{4} - \dotsb - \frac{1}{n} - \frac{1}{n + 1} - \frac{1}{n + 2} \text{.} \end{align*}
You can see that most of the terms in the two finite sums will cancel, leaving only
\begin{equation*} s_n = 1 + \frac{1}{2} - \frac{1}{n + 1} - \frac{1}{n + 2} \text{.} \end{equation*}
But as \(n \to \infty\text{,}\) the terms with \(n\) in the denominator will become negligibly small, and so \(\nseq{s} \to 1 + \frac{1}{2}\text{.}\) That is,
\begin{equation*} \sum_{k = 1}^\infty \frac{2}{k (k + 2)} = \frac{3}{2} \text{.} \end{equation*}
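We can sanity-check the telescoping computation numerically. This Python sketch compares the direct partial sums against the closed form obtained from the cancellation:

```python
def telescoping_partial(n):
    """Direct n-th partial sum of the series with terms 2 / (k * (k + 2))."""
    return sum(2 / (k * (k + 2)) for k in range(1, n + 1))

def telescoping_closed_form(n):
    """Closed form after cancellation: s_n = 1 + 1/2 - 1/(n+1) - 1/(n+2)."""
    return 1 + 1 / 2 - 1 / (n + 1) - 1 / (n + 2)

for n in (10, 100, 1000):
    assert abs(telescoping_partial(n) - telescoping_closed_form(n)) < 1e-12

print(telescoping_partial(100000))  # approaching 3/2
```

The agreement of the two computations confirms the cancellation pattern, and the large-\(n\) value illustrates the limit \(3/2\text{.}\)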
The following is an extremely important general example: the geometric series
\begin{equation*} \sum_{k = 0}^\infty r^k = 1 + r + r^2 + r^3 + \dotsb \text{.} \end{equation*}
Its behaviour is completely described by three statements: first, for \(r \neq 1\) the \(\nth\) partial sum is \(s_n = (1 - r^{n + 1}) / (1 - r)\text{;}\) second, for \(-1 \lt r \lt 1\) the series converges to \(1 / (1 - r)\text{;}\) and third, for \(r \le -1\) or \(r \ge 1\) the series diverges.

Justification of Statement 1.

For all values of \(r\text{,}\) the partial sum \(s_n\) is
\begin{equation*} s_n = \sum_{k = 0}^n r^k = 1 + r + r^2 + \dotsb + r^n \text{.} \end{equation*}
Notice what happens if we multiply a partial sum by \(r\) and subtract:
\begin{equation*} \begin{array}{rcccccccccccc} s_n \amp = \amp 1 \amp + \amp r \amp + \amp r^2 \amp + \amp \dotsb \amp + \amp r^n \\ - \quad r s_n \amp = \amp \amp \amp r \amp + \amp r^2 \amp + \amp \dotsb \amp + \amp r^n \amp + \amp r^{n + 1} \\ \hline s_n - r s_n \amp = \amp 1 \amp - \amp r^{n + 1} \end{array}\text{.} \end{equation*}
Factoring the common \(s_n\) on the left, we have
\begin{equation*} s_n (1 - r) = 1 - r^{n + 1} \text{.} \end{equation*}
If \(r \neq 1\text{,}\) then \(1 - r \neq 0\) and we may divide both sides to obtain
\begin{equation*} s_n = \frac{1 - r^{n + 1}}{1 - r} \text{.} \end{equation*}
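The closed form can be checked against the direct sum for a few sample ratios; a brief Python sketch:

```python
import math

def geometric_partial(r, n):
    """Direct partial sum 1 + r + r**2 + ... + r**n."""
    return sum(r ** k for k in range(n + 1))

def geometric_formula(r, n):
    """Closed form (1 - r**(n + 1)) / (1 - r), valid for r != 1."""
    return (1 - r ** (n + 1)) / (1 - r)

# The formula holds for every r != 1, whether or not the series converges.
for r in (0.5, -0.9, 2.0, -3.0):
    assert math.isclose(geometric_partial(r, 20), geometric_formula(r, 20),
                        rel_tol=1e-12)
```

Note that the formula is an identity about *partial* sums, so it holds even for ratios like \(r = 2\) where the infinite series diverges.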

Justification of Statement 2.

Convergence of an infinite series is defined by the convergence of its sequence of partial sums. We will break the assumption \(-1 \lt r \lt 1\) into several cases.
Case \(r = 0\).
Recall that in this case we are taking the first term in the series to be \(1\text{.}\) All other terms in the series are \(0^k = 0\) for \(k \ge 1\text{,}\) so that \(s_n = 1\) for all \(n\text{.}\) Thus \(\nseq{s}\) is a constant sequence that converges to \(1\text{,}\) which agrees with the value of the conjectured formula \(1 / (1 - r)\) when \(r = 0\text{.}\)
Case \(0 \lt r \lt 1\).
In this case the exponential function \(f(t) = r^t\) represents exponential decay, and the terms of the sequence \(\seq{r^{n + 1}}\) are discrete values along that decay curve. So we conclude that \(\seq{r^{n + 1}} \to 0\) and
\begin{equation*} s_n = \frac{1 - r^{n + 1}}{1 - r} \to \frac{1}{1 - r} \text{,} \end{equation*}
as conjectured.
Case \(-1 \lt r \lt 0\).
In this case \(0 \lt \abs{r} \lt 1\text{,}\) and both the exponential function \(f(t) = {\abs{r}}^t\) and its vertical reflection \(g(t) = - {\abs{r}}^t\) have a horizontal asymptote at \(y = 0\text{.}\) With a negative base \(r\text{,}\) the terms of the sequence \(\seq{r^{n + 1}}\) bounce between the two curves, but still trend to \(0\) along with the curves. So again \(\seq{r^{n + 1}} \to 0\) and
\begin{equation*} s_n = \frac{1 - r^{n + 1}}{1 - r} \to \frac{1}{1 - r} \text{,} \end{equation*}
as conjectured.

Justification of Statement 3.

Again we will consider different cases.
Case \(r = 1\).
In this case we have
\begin{equation*} s_n = 1^0 + 1^1 + 1^2 + \dotsb + 1^n = n + 1\text{,} \end{equation*}
and this sequence clearly diverges to infinity.
Case \(r = -1\).
In this case the partial sums alternate between \(1\) and \(0\text{,}\) and so diverge.
Case \(r \gt 1\).
In this case the exponential function \(f(t) = r^t\) represents exponential growth, and the sequence \(\seq{r^{n + 1}}\) diverges to infinity along with that function. Therefore, the formula for the partial sums provided by Statement 1 also diverges.
Case \(r \lt -1\).
As in the case \(-1 \lt r \lt 0\) in the justification for Statement 2, when \(r\) is negative the terms of the sequence \(\seq{r^{n + 1}}\) are bouncing between values on the graphs of the exponential function \(f(t) = {\abs{r}}^t\) and its vertical reflection. But this time, if \(r \lt -1\) then both these exponential functions experience unbounded growth, one in a positive direction and the other in a negative direction, so \(\seq{r^{n + 1}}\) diverges. Therefore, the formula for the partial sums provided by Statement 1 also diverges.

Example 25.1.12. Two important geometric series.

  1. The infinite series of the sequence \(\seq{1/2^k}\) is geometric with \(r = 1/2\text{,}\) which converges since \(1/2\) is less than \(1\text{,}\) and we have
    \begin{equation*} \sum_{k = 0}^\infty \frac{1}{2^k} = \frac{1}{1 - 1/2} = 2 \text{.} \end{equation*}
  2. The infinite series of the sequence \(\seq{1/{10}^k}\) is geometric with \(r = 1/10\text{,}\) which converges since \(1/10\) is less than \(1\text{,}\) and we have
    \begin{equation*} \sum_{k = 0}^\infty \frac{1}{{10}^k} = \frac{1}{1 - 1/10} = \frac{10}{9} \text{.} \end{equation*}
    We already looked at this series in Series 1 of Example 25.1.2 — you may wish to go back and compare the conjectured sum result expressed as a decimal with the result above.

Section 25.2 Properties

An infinite series can be thought of as an improper integral of a step function:
\begin{equation*} \sum_{k = 0}^\infty a_k = \ccmint{0}{\infty}{a(t)}{t} \text{,} \end{equation*}
where \(a(t)\) is the step function that takes on value \(a(t) = a_k\) on the subdomain \(k \le t \lt k + 1\text{.}\) Effectively we are viewing the infinite series as a regular Riemann sum with \(\change{t} = 1\text{,}\) but with an unbounded domain (hence an infinite number of rectangles). From this point of view, a partial sum effectively becomes the same thing as a partial area. Usually a Riemann sum is only an approximation of a definite integral, but in this case we have equality because the function being integrated is already a step function.
With this in mind, the properties of infinite series mirror the properties of improper integrals.

Justification.

Suppose \(k_1 \lt k_2\text{.}\) Then
\begin{align*} \sum_{k = k_1}^\infty a_k \amp = \lim_{n \to \infty} \sum_{k = k_1}^n a_k \\ \amp = \lim_{n \to \infty} \left( \sum_{k = k_1}^{k_2 - 1} a_k + \sum_{k = k_2}^n a_k \right) \\ \amp = \sum_{k = k_1}^{k_2 - 1} a_k + \lim_{n \to \infty} \sum_{k = k_2}^n a_k \\ \amp = \sum_{k = k_1}^{k_2 - 1} a_k + \sum_{k = k_2}^\infty a_k \text{.} \end{align*}
Since the sum over indices \(k_1 \le k \le k_2 - 1\) is just a number, the two series must both converge or both diverge.

Example 25.2.3.

We’ve previously seen that \(\sum (1/2^k) = 2\text{,}\) so
\begin{align*} \sum_{k = 3}^\infty \frac{1}{2^k} \amp = \sum_{k = 0}^\infty \frac{1}{2^k} - \sum_{k = 0}^2 \frac{1}{2^k} \\ \amp = 2 - \left( 1 + \frac{1}{2} + \frac{1}{4} \right) \\ \amp = \frac{1}{4} \text{.} \end{align*}
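The tail computation can be checked numerically. This sketch approximates the tails by long partial sums (60 terms, after which the remaining terms are far below floating-point precision):

```python
def tail_of_halves(start, num_terms=60):
    """Approximate the sum of 1/2**k for k from `start` to infinity
    by a long partial sum; the geometric decay makes 60 terms plenty."""
    return sum(1 / 2 ** k for k in range(start, start + num_terms))

print(tail_of_halves(0))  # close to 2
print(tail_of_halves(3))  # close to 1/4
```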
We may extend some of the patterns of finite sums (Subsection 5.3.2) to infinite series.

Example 25.2.5. Splitting an infinite series.

The series
\begin{equation*} \sum_{k = 0}^\infty \frac{5^k + 2}{10^k} \end{equation*}
looks complicated at first glance, but we can split it into two geometric series that we’ve already computed:
\begin{align*} \sum_{k = 0}^\infty \frac{5^k + 2}{10^k} \amp = \sum_{k = 0}^\infty \left( \frac{5^k}{10^k} + \frac{2}{10^k} \right) \\ \amp = \sum_{k = 0}^\infty \frac{5^k}{10^k} + \sum_{k = 0}^\infty \frac{2}{10^k} \\ \amp = \sum_{k = 0}^\infty \frac{1}{2^k} + 2 \sum_{k = 0}^\infty \frac{1}{10^k} \\ \amp = 2 + 2 \cdot \frac{10}{9} \\ \amp = \frac{38}{9} \text{.} \end{align*}
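A quick numerical check of the splitting, summing enough terms that the remaining tail is negligible:

```python
# Direct partial sum of the combined series; 200 terms leave a tail
# far below floating-point precision, since both pieces decay geometrically.
combined = sum((5 ** k + 2) / 10 ** k for k in range(200))
print(combined, 38 / 9)  # the two values agree closely
```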
You may have heard that if a decimal number exhibits some repeating pattern in its decimal expansion, then it must be a rational number. Turning a rational number into a decimal one is not difficult: just use long division. Here is an example of turning a decimal number with a repeating pattern back into a rational number, using geometric series along with Property 1 of Pattern 25.2.4.

Example 25.2.6. Expressing a decimal as a fraction.

Let’s turn \(2.3\overline{17}\) back into a fraction. Because the repeating pattern is two digits long, instead of grouping digit-by-digit we’ll group them in pairs:
\begin{align*} 2.3171717\dots \amp = \frac{23}{10} + \frac{17}{10^3} + \frac{17}{10^5} + \dotsb \\ \amp = \frac{23}{10} + \frac{17}{10^3} \left(1 + \frac{1}{10^2} + \frac{1}{10^4} + \dotsb \right) \\ \amp = \frac{23}{10} + \frac{17}{1000} \sum_{k = 0}^\infty \frac{1}{100^k} \text{.} \end{align*}
The series \(\sum (1/{100}^k)\) is geometric with \(r = 1/100\text{,}\) so
\begin{align*} 2.3\overline{17} \amp = \frac{23}{10} + \frac{17}{1000} \sum_{k = 0}^\infty \frac{1}{100^k} \\ \amp = \frac{23}{10} + \frac{17}{1000} \cdot \frac{1}{1 - 1/100} \\ \amp = \frac{23}{10} + \frac{17}{10\cancel{00}} \cdot \frac{\cancel{100}}{99} \\ \amp = \frac{23}{10} + \frac{17}{990} \\ \amp = \frac{23 \cdot 99 + 17}{990} \\ \amp = \frac{2294}{990} \\ \amp = \frac{1147}{495} \text{.} \end{align*}
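The same computation can be carried out exactly with Python's `fractions` module:

```python
from fractions import Fraction

# 2.3(17): the non-repeating part is 23/10, then the repeating block 17
# is scaled by the geometric series sum of (1/100)**k, which is 1/(1 - 1/100).
value = Fraction(23, 10) + Fraction(17, 1000) / (1 - Fraction(1, 100))
print(value)  # 1147/495
```

Exact rational arithmetic sidesteps any rounding, and `Fraction` reduces the result to lowest terms automatically.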
Just as with Fact 24.2.3, to be able to add an infinite number of terms those terms must tend to \(0\text{.}\)

Justification.

Let \(\nseq{s}\) be the sequence of partial sums. As \(\sum a_k\) converges, we have \(\nseq{s} \to L\) for some limit value \(L\text{.}\) Let \(\nseq{t}\) be the sequence of “delayed” partial sums: for \(n \ge 1\) set
\begin{equation*} t_n = \sum_{k = 0}^{n - 1} a_k \end{equation*}
(and we’ll set \(t_0 = 0\)). Then also \(\nseq{t} \to L\text{.}\) By the Sequence limit laws, the difference sequence \(\seq{s_n - t_n}\) converges to \(L - L = 0\text{.}\) But the terms of the difference sequence for \(n \ge 1\) are
\begin{align*} s_n - t_n \amp = \sum_{k = 0}^n a_k - \sum_{k = 0}^{n - 1} a_k \\ \amp = \left[ \cancel{\left( \sum_{k = 0}^{n - 1} a_k \right)} + a_n \right] - \cancel{\sum_{k = 0}^{n - 1} a_k} \\ \amp = a_n \end{align*}
(and also \(s_0 - t_0 = a_0 - 0 = a_0\)). So the difference sequence \(\seq{s_n - t_n}\) is identical to the sequence \(\nseq{a}\text{.}\) Since we have determined that \(\seq{s_n - t_n} \to 0\text{,}\) then also \(\nseq{a} \to 0\text{.}\)

Warning 25.2.8.

Do not get Fact 25.2.7 backwards! The next (important!) example will demonstrate that knowing \(\nseq{a} \to 0\) is not sufficient for \(\sum a_k\) to converge.

Example 25.2.9. Harmonic Series.

The infinite series \(\sum 1/k\) (beginning at index \(k = 1\)) has \(\seq{1/n} \to 0\text{,}\) but does not converge. To see this, we’ll consider a particular subsequence of the sequence of partial sums. But first, we analyze some “partial” partial sums. Because the sequence \(\seq{1/n}\) is decreasing, in any sum of consecutive terms we can replace every term by the last term and end up with a smaller result:
\begin{align*} \sum_{k = m}^n \frac{1}{k} \amp = \frac{1}{m} + \frac{1}{m+1} + \dotsb + \frac{1}{n} \\ \amp \gt \frac{1}{n} + \frac{1}{n} + \dotsb + \frac{1}{n} \\ \amp = \frac{n - m + 1}{n} \text{.} \end{align*}
In particular, a “partial” partial sum between two indices that are consecutive powers of \(2\) exhibits the following pattern:
\begin{align*} \sum_{k = 2^{m - 1} + 1}^{2^m} \frac{1}{k} \amp \gt \frac{2^m - (2^{m - 1} + 1) + 1}{2^m} \\ \amp = \frac{2^m - 2^{m - 1}}{2^m} \\ \amp = \frac{2 \cdot 2^{m - 1} - 1 \cdot 2^{m - 1}}{2^m} \\ \amp = \frac{2^{m - 1}}{2^m} \\ \amp = \frac{1}{2} \text{.} \end{align*}
Now let’s use the inequality above to look at the subsequence of partial sums where the index is a power of \(2\text{:}\)
\begin{align*} s_1 \amp = 1 \\ s_2 \amp = 1 + \frac{1}{2} = \frac{3}{2} \\ s_4 \amp = s_2 + \sum_{k = 3}^{ 4} \frac{1}{k} \gt \frac{3}{2} + \frac{1}{2} = \frac{4}{2} \\ s_8 \amp = s_4 + \sum_{k = 5}^{ 8} \frac{1}{k} \gt \frac{4}{2} + \frac{1}{2} = \frac{5}{2} \\ s_{16} \amp = s_8 + \sum_{k = 9}^{16} \frac{1}{k} \gt \frac{5}{2} + \frac{1}{2} = \frac{6}{2} \text{.} \end{align*}
The pattern continues, so that
\begin{equation*} s_{2^n} \ge \frac{n + 2}{2} \text{.} \end{equation*}
(We have used “or equal” here so that \(s_1\) and \(s_2\) can be included.) This particular subsequence of the sequence of partial sums diverges to infinity, hence it is not possible for the full sequence of partial sums to converge (see Pattern 22.3.13).
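The doubling bound can be verified numerically for small powers of \(2\) (a Python sketch; it only tests finitely many cases, of course):

```python
def harmonic_partial(n):
    """n-th partial sum of the harmonic series: sum of 1/k for k = 1..n."""
    return sum(1 / k for k in range(1, n + 1))

# Check s_{2**n} >= (n + 2) / 2 for the first several powers of 2;
# this bound forces the partial sums to grow without bound.
for n in range(13):
    assert harmonic_partial(2 ** n) >= (n + 2) / 2
print(harmonic_partial(2 ** 12), (12 + 2) / 2)
```

The partial sums grow very slowly (roughly like a logarithm), which is why the divergence is easy to miss in a casual numerical experiment.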

Example 25.2.10. Changing variables in a series.

We would expect the series \(\sum 1/(k+1)\) to diverge because the terms are so similar to the harmonic series. In fact, by changing variables we see that this series is exactly the harmonic series. Let \(m = k + 1\text{.}\) As \(k\) increments through the values \(0,1,2,3,\dotsc\text{,}\) \(m\) will increment through the values \(1,2,3,4,\dotsc\text{.}\) Then we have
\begin{equation*} \sum_{k = 0}^\infty \frac{1}{k + 1} = \sum_{m = 1}^\infty \frac{1}{m} \text{,} \end{equation*}
and so the series \(\sum 1/(k+1)\) must diverge, being equivalent to the divergent harmonic series.
The contrapositive of Fact 25.2.7 is often useful to avoid unnecessary work.

Example 25.2.12.

The infinite series
\begin{equation*} \sum_{k = 0}^\infty \frac{2 k^3 - 3 k + 1}{k^3 + k^2 - 5} \end{equation*}
must diverge because
\begin{equation*} \lrseq{\frac{2 k^3 - 3 k + 1}{k^3 + k^2 - 5}} \to 2 \text{.} \end{equation*}
In the case that the terms of \(\nseq{a}\) are non-negative, then the partial sums of \(\sum a_k\) are increasing (or, at least, non-decreasing), so their convergence is dependent on being bounded.

Example 25.2.14.

We have seen that the Harmonic Series \(\sum 1/k\) diverges, not unlike the fact that the improper integral of \(1/t\) diverges as \(t \to \infty\text{.}\) Since the improper integral of \(1/t^2\) converges, might we expect the infinite series \(\sum 1/k^2\) to converge?
The terms of the sequence \(\seq{1/n^2}\) are all positive, so let’s try to bound the partial sums. Here we will take the opposite approach as in Example 25.2.9: we’ll compare “partial” partial sums to a larger value by replacing each term by the first. Since \(\seq{1/k^2}\) is a decreasing sequence, we have
\begin{align*} \sum_{k = m}^n \frac{1}{k^2} \amp = \frac{1}{m^2} + \frac{1}{{(m+1)}^2} + \dotsb + \frac{1}{n^2} \\ \amp \lt \frac{1}{m^2} + \frac{1}{m^2} + \dotsb + \frac{1}{m^2} \\ \amp = \frac{n - m + 1}{m^2} \text{.} \end{align*}
Similarly to Example 25.2.9, apply this to “partial” partial sums between indices determined by consecutive powers of \(2\text{:}\)
\begin{align*} \sum_{k = 2^m}^{2^{m + 1} - 1} \frac{1}{k^2} \amp \lt \frac{(2^{m + 1} - 1) - 2^m + 1}{{(2^m)}^2} \\ \amp = \frac{2 \cdot 2^m - 1 \cdot 2^m}{{(2^m)}^2} \\ \amp = \frac{2^m}{{(2^m)}^2} \\ \amp = \frac{1}{2^m} \text{.} \end{align*}
Now let’s use the inequality above to look at the subsequence of partial sums where the index is one less than a power of \(2\text{:}\)
\begin{align*} s_1 \amp = 1 \\ s_3 \amp = 1 + \sum_{k = 2}^{ 3} \frac{1}{k^2} \lt 1 + \frac{1}{2} = \frac{ 3}{2} \\ s_7 \amp = s_3 + \sum_{k = 4}^{ 7} \frac{1}{k^2} \lt \frac{3}{2} + \frac{1}{4} = \frac{ 7}{4} \\ s_{15} \amp = s_7 + \sum_{k = 8}^{15} \frac{1}{k^2} \lt \frac{7}{4} + \frac{1}{8} = \frac{15}{8} \text{.} \end{align*}
The pattern continues, so that
\begin{equation*} s_{2^n - 1} \le \frac{2^n - 1}{2^{n - 1}} \lt \frac{2^n}{2^{n - 1}} = 2 \text{.} \end{equation*}
But since the terms of \(\seq{1/n^2}\) are positive, the sequence of partial sums is increasing. And for all \(n \ge 1\text{,}\) we have \(n \le 2^n - 1\text{,}\) so that
\begin{equation*} s_n \le s_{2^n - 1} \lt 2 \text{.} \end{equation*}
As the sequence of partial sums is monotonic and bounded, it must converge.
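Numerically, the boundedness (and the convergence it guarantees) is easy to observe. The limit is in fact \(\pi^2 / 6\text{,}\) a classical result of Euler (not needed for the argument above):

```python
import math

def inverse_squares_partial(n):
    """n-th partial sum of the series with terms 1/k**2, for k = 1..n."""
    return sum(1 / k ** 2 for k in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    s = inverse_squares_partial(n)
    assert s < 2            # the bound established above
    print(n, s)             # increasing, trending toward pi**2 / 6
print(math.pi ** 2 / 6)
```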
We know that for an infinite series \(\sum a_k\) to converge, it is necessary (but not sufficient) for \(\nseq{a} \to 0\text{.}\) But even more, when the series does converge, the “amount left to add” must be getting small as well.

Justification.

Let \(\nseq{s}\) be the sequence of partial sums of the full infinite series \(\sum a_k\) beginning at index \(k = 0\text{,}\) and let \(S\) represent the sum value of that convergent series. Because \(\nseq{s} \to S\text{,}\) applying Fact 22.3.31 tells us that \(\seq{S - s_n} \to 0\text{.}\) Now, note that every term of \(\nseq{r}\) is actually defined, because if \(\sum a_k\) converges beginning at index \(0\text{,}\) then it converges beginning at any index, and in fact we have
\begin{equation*} r_n = \sum_{k = 0}^\infty a_k - \sum_{k = 0}^n a_k = S - s_n \end{equation*}
from Fact 25.2.1. Thus \(\nseq{r} = \seq{S - s_n} \to 0\text{.}\)

Section 25.3 Comparison tests

Here we repeat the introductory remarks we made for comparison tests for improper integrals (see the beginning of Section 24.3) in the context of series. Fact 25.2.7 says that we need \(\nseq{a} \to 0\) for \(\sum a_k\) to converge, but Example 25.2.9 demonstrates that this is not sufficient; if \(\nseq{a} \to 0\) too slowly, then the series will diverge. The comparison tests in this section allow us to determine convergence without actually computing the sum value, by comparing the series to one whose convergence is already known. But we will also add a new technique: considering the remarks made at the beginning of Section 25.2 concerning the analogy between infinite series and improper integrals, we can also compare convergence across this analogy.

Subsection 25.3.1 Integral comparison test

Justification.
For simplicity, we will prove this for \(k_0 = 0\text{;}\) every other starting index/domain boundary then just represents a “shift” of the \(k_0 = 0\) case.
Define the integral function
\begin{equation*} F(t) = \ccmint{0}{t}{f(u)}{u} \text{,} \end{equation*}
which is continuous on domain \(t \ge 0\) by the Fundamental Theorem of Calculus. By definition, we have
\begin{equation*} \ccmint{0}{\infty}{f(t)}{t} = \lim_{t \to \infty} F(t) \text{.} \end{equation*}
Note that \(F(t)\) is an increasing function, as we are accumulating more area as the integration domain endpoint \(t\) increases. (And the derivative function \(F'(t) = f(t)\) is positive, as we have assumed that \(f(t)\) is a positive function.)
As usual, let \(\nseq{s}\) be the sequence of partial sums for the infinite series \(\sum f(k)\text{.}\) Let \(\nseq{\ell}\) and \(\nseq{u}\) be the sequences with terms
\begin{align*} \ell_n \amp = s_n - s_0 \amp u_n \amp = s_{n - 1} \\ \amp = \sum_{k = 1}^n f(k) \amp \amp = \sum_{k = 0}^{n - 1} f(k) \text{.} \end{align*}
(Take \(u_0 = 0\text{.}\)) Then for \(n \ge 1\text{,}\) each \(\ell_n\) can be considered as a right Riemann sum and each \(u_n\) can be considered as a left Riemann sum for the integral \(F(n)\text{,}\) with \(n\) steps and regular with \(\change{t} = 1\) in both cases.
Figure 25.3.2. (a) Partial sum as a lower Riemann sum for a partial area of an improper integral. (b) Partial sum as an upper Riemann sum for a partial area of an improper integral.
As \(f(t)\) is decreasing, each right sum \(\ell_n\) is a lower sum and each left sum \(u_n\) is an upper sum:
\begin{equation*} \ell_n \le F(n) \le u_n \qquad \text{for all } n \text{.} \end{equation*}
Convergent integral.
Suppose the improper integral of \(f(t)\) as \(t \to \infty\) converges. By definition, this means that the limit of \(F(t)\) exists as \(t \to \infty\text{.}\) But then the sequence \(\bbseq{F(n)}\) also must converge (see Fact 23.5.10), and so is bounded. But since \(\ell_n \le F(n)\) for all \(n\text{,}\) then also \(\nseq{\ell}\) is bounded above. And since \(\nseq{\ell}\) is simply a constant shift of \(\nseq{s}\text{,}\) that too must be bounded above (and also bounded below by \(0\) because the terms \(f(n)\) are all positive.) We are now in the situation of Fact 25.2.13 and may conclude that \(\sum f(k)\) converges.
Divergent integral.
Suppose the improper integral of \(f(t)\) as \(t \to \infty\) diverges. This means that the limit of \(F(t)\) as \(t \to \infty\) does not exist. In particular, as \(F(t)\) is an increasing function, we must have \(F(t) \to \infty\) as \(t \to \infty\text{,}\) which in turn implies that \(\bbseq{F(n)}\) diverges to infinity. But since \(u_n \ge F(n)\) for all \(n\text{,}\) then also \(\nseq{u} \to \infty\) as well. And as the terms of \(\nseq{u}\) are the same as the terms of \(\nseq{s}\) with shifted indexing, we also have \(\nseq{s} \to \infty\text{,}\) so that \(\sum f(k)\) diverges.
Convergent series.
If \(\sum f(k)\) converges, then it cannot be that the improper integral of \(f(t)\) diverges, since if it did the “Divergent integral” case would imply that the series should diverge as well. Hence the integral converges when the series does.
Divergent series.
If \(\sum f(k)\) diverges, then it cannot be that the improper integral of \(f(t)\) converges, since if it did the “Convergent integral” case would imply that the series should converge as well. Hence the integral diverges when the series does.
Example 25.3.3.
Consider the series \(\sum \ln(k) / k\text{,}\) starting at \(k = 1\text{.}\) Set \(f(t) = \ln(t) / t\text{,}\) a function that is continuous for \(t \gt 0\) and positive for \(t \gt 1\text{.}\) Compute
\begin{equation*} f'(t) = \frac{1 - \ln(t)}{t^2} \text{.} \end{equation*}
This derivative is negative as long as \(\ln(t) \gt 1\text{,}\) which occurs for \(t \gt e\text{.}\) So we may say that \(f\) is decreasing on domain \(t \ge 3\text{,}\) and we have verified that \(f\) satisfies the conditions of the Integral Comparison Test for \(k_0 = 3\text{.}\) Because of this, we can say that the convergence/divergence of the sum and integral below are equivalent:
\begin{align*} \amp \sum_{k = 3}^\infty \frac{\ln(k)}{k} \text{,} \amp \amp \ccmint{3}{\infty}{\frac{\ln(t)}{t}}{t} \text{.} \end{align*}
Also note that the convergence of the sum above is equivalent to the convergence of the full sum that begins at index \(1\) (Fact 25.2.1), so investigating the integral above is sufficient to determine the convergence/divergence of \(\sum \ln(k) / k\text{.}\)
For the integral, we have already noted that \(\ln(t) \gt 1\) for \(t \gt e\) and so also for \(t \ge 3\text{.}\) Hence
\begin{equation*} \frac{\ln(t)}{t} \gt \frac{1}{t} \end{equation*}
on domain \(t \ge 3\text{.}\) And we know that an improper integral of \(1/t\) diverges (Fact 24.1.9), hence our improper integral of \(\ln(t) / t\) diverges by the Comparison test for integrals.
From this we conclude that \(\sum \ln(k) / k\) diverges.
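We can watch the partial sums and the corresponding integrals grow together. The antiderivative of \(\ln(t)/t\) is \({\ln(t)}^2 / 2\text{,}\) so the integrals are easy to compute exactly (a Python sketch with hypothetical helper names):

```python
import math

def log_over_k_partial(n):
    """Partial sum of ln(k)/k for k = 3..n."""
    return sum(math.log(k) / k for k in range(3, n + 1))

def log_over_t_integral(n):
    """Integral of ln(t)/t from 3 to n, via the antiderivative ln(t)**2 / 2."""
    return (math.log(n) ** 2 - math.log(3) ** 2) / 2

for n in (10, 100, 1000, 10000):
    print(n, log_over_k_partial(n), log_over_t_integral(n))
# Both columns grow without bound, reflecting the divergence of the series.
```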
We have seen that \(\sum 1/k^2\) converges but \(\sum 1/k\) diverges, just as with improper integrals. Using the Integral Comparison Test, we can transfer all of Fact 24.1.9 to the analogous facts about infinite series.

Subsection 25.3.2 Comparing two series

We may compare two series by comparing their sequences of partial sums or by comparing their sequences of terms.
Justification for Statement 1.
Since \(\nseq{a}\) and \(\nseq{b}\) are non-negative, \(\nseq{s}\) and \(\nseq{t}\) are increasing sequences (or, at least, non-decreasing). If \(\sum b_k\) converges, then \(\nseq{t}\) must be bounded, and from \(s_n \le t_n\) we conclude that \(\nseq{s}\) is bounded as well. The Monotone Convergence Theorem now implies that \(\nseq{s}\) converges.
Justification for Statement 2.
As in the justification of Statement 1, \(\nseq{s}\) and \(\nseq{t}\) must be non-decreasing. But if \(\sum a_k\) diverges then \(\nseq{s}\) cannot be bounded and must diverge to infinity. Then from \(s_n \le t_n\) we conclude that \(\nseq{t} \to \infty\) as well.
Justification.
Let \(\nseq{s}\) and \(\nseq{t}\) represent the sequences of partial sums for \(\sum a_k\) and \(\sum b_k\text{,}\) respectively. Since \(\nseq{a}\) and \(\nseq{b}\) are non-negative, our assumption \(a_n \le b_n\) for all \(n\) implies that \(s_n \le t_n\) for all \(n\) as well. So the patterns of convergence/divergence of \(\sum a_k\) and \(\sum b_k\) follow Pattern 25.3.5.
Remark 25.3.7.
With Fact 25.2.1 in mind, we see that the “for all \(n\)” conditions in the statements of both Pattern 25.3.5 and Pattern 25.3.6 may be relaxed. As long as the necessary conditions hold for tails of the sequences involved, then the conclusions of these two facts will hold for the “tails” of the infinite series involved, which is sufficient.
Example 25.3.8.
  1. Since
    \begin{equation*} \frac{1}{2^n + 1} \lt \frac{1}{2^n} \end{equation*}
    for all \(n\text{,}\) \(\sum 1 / (2^k + 1) \) converges by comparison with the geometric series \(\sum 1 / 2^k \text{.}\)
  2. For
    \begin{gather} \sum \frac{5}{2 k^2 + 4 k + 3} \tag{✶} \end{gather}
    we have
    \begin{equation*} \frac{5}{2 n^2 + 4 n + 3} \lt \frac{5}{2} \cdot \frac{1}{n^2} \end{equation*}
    for all \(n\text{,}\) and \(\sum 5 / 2 k^2\) converges because \(\sum 1 / k^2\) does (see Pattern 25.2.4). So by comparison (✶) must converge.
  3. Since
    \begin{equation*} \frac{6}{n - 1} \gt \frac{1}{n} \end{equation*}
    for all \(n \ge 2\text{,}\) the series
    \begin{equation*} \sum_{k = 2}^\infty \frac{6}{k - 1} \end{equation*}
    diverges by comparison with the harmonic series \(\sum 1/k\text{.}\)
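A brief numerical illustration of the first comparison (a Python sketch, not part of the argument): the partial sums of \(\sum 1 / (2^k + 1)\) are trapped below those of the geometric series \(\sum 1 / 2^k\text{,}\) which never exceed \(2\text{.}\)

```python
# Compare partial sums of sum 1/(2^k + 1) against partial sums of the
# geometric series sum 1/2^k, which are bounded above by 2.
N = 60
s_compare = sum(1 / (2**k + 1) for k in range(N))
s_geom = sum(1 / 2**k for k in range(N))
print(s_compare, s_geom)
```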

Subsection 25.3.3 Comparing rates

Just as with improper integrals, for a series \(\sum a_k\) to converge we need \(\nseq{a} \to 0\text{,}\) but that on its own is not enough: if \(\nseq{a}\) doesn't converge to zero fast enough, then the series will diverge (the harmonic series, for example). So we can test convergence/divergence by comparing the rate at which the terms trend to \(0\) against the terms of a series whose convergence/divergence is known.
Justification.
Suppose \(\seq{a_n / b_n} \to L\text{.}\) Recall that we assume \(L\) is positive. Take \((c,d)\) to be some open range around \(L\) with \(c \gt 0\text{.}\) (For example, we could take \(c = L/2\) and \(d = 2 L\text{.}\)) Then there is some tail \(\seqtail{a_n / b_n}\) that is completely contained in this range:
\begin{gather*} c \lt \frac{a_n}{b_n} \lt d \\ c b_n \lt a_n \lt d b_n \text{.} \end{gather*}
(Recall that all terms \(b_n\) are positive, so multiplying by \(b_n\) does not affect the inequalities.)
We may now use the above inequalities to analyze convergence/divergence.
Case that \(\sum b_k\) converges.
Then \(\sum d b_k\) also converges, and \(\sum a_k\) converges by comparison with that series.
Case that \(\sum b_k\) diverges.
Then \(\sum c b_k\) also diverges, and \(\sum a_k\) diverges by comparison with that series.
Case that \(\sum a_k\) converges.
Then \(\sum b_k\) cannot diverge, because if it did we have already demonstrated that \(\sum a_k\) would diverge in that case.
Case that \(\sum a_k\) diverges.
Then \(\sum b_k\) cannot converge, because if it did we have already demonstrated that \(\sum a_k\) would converge in that case.
Example 25.3.10.
  1. Consider \(\sum 6 / (k + 1)\text{.}\) We have already verified that \(\sum 1 / (k + 1)\) diverges (Example 25.2.10), so \(\sum 6 / (k + 1)\) must diverge as well. But let’s check that the Limit comparison test for series says the same:
    \begin{equation*} \frac{6 / (n + 1)}{1 / n} = \frac{6 n}{n + 1} \to 6 \qquad \text{as } n \to \infty \text{.} \end{equation*}
    By comparison with the divergent harmonic series \(\sum 1/k\text{,}\) the series \(\sum 6 / (k + 1)\) diverges.
  2. Consider \(\sum 1 / (2^k - 1)\text{.}\) Compare with the geometric series \(\sum 1 / 2^k\text{:}\)
    \begin{equation*} \frac{1 / (2^n - 1)}{ 1 / 2^n} = \frac{2^n}{2^n - 1} \to 1 \qquad \text{as } n \to \infty \text{.} \end{equation*}
    So \(\sum 1 / (2^k - 1)\) converges.
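Evaluating these two limit-comparison ratios at large indices (a numerical sketch only; it suggests but does not prove the limits) gives values near \(6\) and \(1\text{:}\)

```python
# Ratio from item 1, simplified to 6n/(n+1), at a large index.
n = 10**6
ratio1 = (6 * n) / (n + 1)   # approaches 6

# Ratio from item 2, simplified to 2^n / (2^n - 1); a moderate index
# already gets extremely close to 1.
m = 50
ratio2 = 2**m / (2**m - 1)   # approaches 1
print(ratio1, ratio2)
```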
Example 25.3.12. Choosing a comparator series.
Consider
\begin{gather} \sum \frac{2 k^2 + 3 k}{\sqrt{5 + k^7}} \text{.}\tag{✶✶} \end{gather}
For large \(k\) we have
\begin{equation*} \frac{2 k^2 + 3 k}{\sqrt{5 + k^7}} \approx \frac{2 k^2}{\sqrt{k^7}} = \frac{2}{k^{3/2}} \text{.} \end{equation*}
So we’ll attempt to compare with the convergent \(p\)-series \(\sum 1 / k^{3/2}\text{:}\)
\begin{align*} \frac{(2 n^2 + 3 n) / \sqrt{5 + n^7}}{1 / n^{3/2}} \amp = \frac{2 n^{7/2} + 3 n^{5/2}}{\sqrt{5 + n^7}}\\ \amp = \frac{2 + 3 / n}{\sqrt{5 + n^7} / n^{7/2}} \\ \amp = \frac{2 + 3 / n}{\sqrt{5 / n^7 + 1}} \\ \amp \to \frac{2 + 0}{\sqrt{0 + 1}} \\ \amp = 2 \end{align*}
as \(n \to \infty\text{.}\) From this limit result we conclude that (✶✶) converges by comparison with the convergent \(p\)-series \(\sum 1 / k^{3/2}\text{.}\)
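As a quick numerical check of the algebra above (a Python sketch, not a proof), evaluating the limit-comparison ratio at a large index gives a value near the limit \(2\text{:}\)

```python
import math

# The limit-comparison ratio of the terms of (✶✶) against 1/n^(3/2),
# evaluated at a large index; the value should be near the limit 2.
n = 10**6
ratio = ((2 * n**2 + 3 * n) / math.sqrt(5 + n**7)) / (1 / n**1.5)
print(ratio)
```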
The Limit comparison test for series states that when the trends of the two sequences of terms are progressing at the same rate (give or take a constant multiple), then the convergence/divergence of the associated series are equivalent. When the trends of the two sequences progress at different rates, then their convergence is not equivalent, but we can still draw conclusions based on knowledge of the convergence/divergence of one of the sequences.
Remark 25.3.14.
In light of Pattern 25.3.13, it’s best to use the denominator sequence \(\nseq{b}\) to represent the one whose convergence/divergence of the associated series is known when attempting a limit comparison.

Section 25.4 More convergence tests

Subsection 25.4.1 Root test

If the terms of a series involve the index variable as an exponent, we can determine the convergence of the series by examining the trend of the base sequence.
Justification for Statement 1.
We will attempt to compare \(\sum a_k\) with a convergent geometric series \(\sum r^k\text{.}\) We assume \(L \lt 1\text{,}\) and note that we must also have \(L \ge 0\) since the terms of \(\seq{\sqrt[n]{a_n}}\) are non-negative. We can create an open range \((c,d)\) around \(L\) with \(d \lt 1\text{.}\) (For example, we could take \(c = L/2\) and \(d = (1 + L)/2\text{.}\)) Then there is some tail \(\seqtail{\sqrt[n]{a_n}}\) that is completely contained in this range:
\begin{equation*} c \lt \sqrt[n]{a_n} \lt d \text{.} \end{equation*}
Since power functions are always increasing functions on non-negative domains, we may raise each “side” of the above inequality to the \(\nth\) power and the inequalities will be maintained:
\begin{equation*} c^n \lt a_n \lt d^n \text{.} \end{equation*}
We now have our base \(r = d\) for a geometric series, and \(\sum d^k\) converges because \(0 \lt d \lt 1\text{.}\) Since \(a_n \lt d^n\) for all terms in the tail \(\nseqtail{a}\text{,}\) the series \(\sum a_k\) converges as well by comparison.
Justification for Statement 2.
Case of infinite \(L\).
If the terms of \(\seq{\sqrt[n]{a_n}}\) diverge to infinity, then the sequence of \(\nth\) powers of those terms (which is just \(\nseq{a}\)) will also diverge to infinity, and so the series \(\sum a_k\) will diverge in this case.
Case of finite \(L\).
The argument in this case is identical to the one above, except this time we compare to a divergent geometric series. Since \(L \gt 1\) we can create an open range \((c,d)\) around \(L\) with \(c \gt 1\text{.}\) Then similarly to the previous argument we can determine some tail \(\nseqtail{a}\) with \(a_n \gt c^n\) for all terms in that tail, and so \(\sum a_k\) diverges by comparison with the divergent geometric series \(\sum c^k\text{.}\)
Example 25.4.2.
  1. For series \(\sum a_k\) with terms
    \begin{equation*} a_n = {\left(\frac{2 n + 3}{3 n + 2}\right)}^n \end{equation*}
    we have
    \begin{equation*} \sqrt[n]{a_n} = \frac{2 n + 3}{3 n + 2} \to \frac{2}{3} \qquad \text{as } n \to \infty \end{equation*}
    so we conclude that \(\sum a_k\) converges.
  2. For series \(\sum a_k\) with terms
    \begin{equation*} a_n = \frac{2^n}{(n + 1) 3^{n + 1}} \text{,} \end{equation*}
    we appear to have a mix of geometric behaviour and harmonic behaviour, but likely the geometric behaviour will win (exponentials beat polynomials) and the series should converge. Let’s verify with the Root test:
    \begin{align*} \sqrt[n]{a_n} \amp = \frac{2}{\sqrt[n]{(n + 1) 3^{n + 1}}} \\ \amp = \frac{2}{\sqrt[n]{n + 1} \sqrt[n]{3^n \cdot 3}} \\ \amp = \frac{2}{\sqrt[n]{n + 1} \cdot 3 \cdot \sqrt[n]{3}} \text{.} \end{align*}
    We have already seen that \((1 + t)^{1/t} \to 1\) as \(t \to \infty\) (Example 23.6.22), so also \(\seq{\sqrt[n]{n + 1}} \to 1\text{.}\) One may also verify with a logarithmic limit that \(\seq{3^{1/n}} \to 1\text{.}\) So we have \(\seq{\sqrt[n]{a_n}} \to 2/3\text{,}\) and conclude that \(\sum a_k\) converges.
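A numerical sketch of the two root-test limits in this example (Python, for illustration only; the second converges slowly, so any tolerance must be loose):

```python
# The two n-th roots computed in this example, evaluated at a large
# index.  Both approach 2/3; the second is computed using the
# factored form derived above to avoid overflow.
n = 10**4
root1 = (2 * n + 3) / (3 * n + 2)
root2 = 2 / ((n + 1)**(1 / n) * 3 * 3**(1 / n))
print(root1, root2)
```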
Checkpoint 25.4.3.
Verify that the case of Statement 3 of the Root test really is inconclusive by testing both the \(p\)-series \(\sum 1/k^2\) and the harmonic series \(\sum 1/k\text{.}\) You should find that the terms of both series satisfy \(\seq{\sqrt[n]{a_n}} \to 1\text{,}\) but one series converges and one diverges.

Subsection 25.4.2 Ratio test

Again, we need \(\nseq{a} \to 0\) for \(\sum a_k\) to converge, but that’s not sufficient: we need the terms to get small fast enough that the partial sums don’t become unbounded. Another way to measure whether this is happening is to compare each term to the next in a ratio.
Justification for Statement 1.
Again we will attempt to compare \(\sum a_k\) with a convergent geometric series. And again we can create an open range \((c,d)\) around \(L\) with \(d \lt 1\text{,}\) and there will be some tail \(\seqtail{a_{n + 1} / a_n}\) entirely contained in that range. But we also assume the terms of \(\nseq{a}\) to be positive, so all the ratios of those terms are positive as well. So for \(n \ge N\) we have
\begin{gather*} 0 \lt \frac{a_{n + 1}}{a_n} \lt d \\ 0 \lt a_{n + 1} \lt d a_n \text{.} \end{gather*}
(Recall that the terms of \(\nseq{a}\) are positive, so multiplying by \(a_n\) does not affect the inequalities.) Now we proceed inductively, starting at \(n = N\text{:}\)
\begin{gather*} 0 \lt a_{N + 1} \lt d a_N \\ 0 \lt a_{N + 2} \lt d a_{N + 1} \lt d \cdot d a_N = d^2 a_N \\ 0 \lt a_{N + 3} \lt d a_{N + 2} \lt d \cdot d^2 a_N = d^3 a_N \\ \vdots \text{.} \end{gather*}
The pattern continues, and we can say that
\begin{equation*} 0 \lt a_{N + k} \lt d^k a_N \end{equation*}
for all \(k \ge 1\text{.}\) Now, since \(0 \lt d \lt 1\text{,}\) the geometric series \(\sum d^k\) converges, and by linearity so does \(\sum d^k a_N\text{.}\) (Note that \(a_N\) is just one specific term in the sequence \(\nseq{a}\text{,}\) and so we may treat it as a constant.) Then by comparison, the “shifted” series \(\sum b_k\) with terms \(b_n = a_{N + n}\) converges. But since
\begin{equation*} \sum_{k = 0}^\infty a_{N + k} = a_N + a_{N + 1} + a_{N + 2} + \dotsb \end{equation*}
we may rewrite this (convergent) series as
\begin{equation*} \sum_{k = N}^\infty a_k\text{.} \end{equation*}
Applying Fact 25.2.1 now tells us that the full series \(\sum a_k\) also converges.
Justification for Statement 2.
Here we may make a simpler argument. There is some tail \(\seqtail{a_{n + 1} / a_n}\) where the terms are all greater than \(1\text{.}\) But then \(a_{n + 1} \gt a_n\) for all \(n \ge N\text{,}\) so in fact the sequence of terms is increasing and cannot trend to \(0\) as it needs to in order for the sum to converge. (Remember that we assume the terms are a positive sequence.)
Remark 25.4.5.
  • The ratio test is also a good choice for series where the terms involve variable exponents but the root test isn’t practical, because often there will be significant cancellation in the ratio of consecutive terms.
  • The justification for Statement 2 of Pattern 25.4.4 implies something interesting: if we have a sequence \(\nseq{a}\) where
    • \(\displaystyle \nseq{a} \to 0\)
    • \(\seq{a_{n + 1} / a_n}\) converges
    • \(\sum a_k\) diverges
    then it must be that \(\seq{a_{n + 1} / a_n} \to 1\text{.}\) Because if \(\seq{a_{n + 1} / a_n}\) converged to a limit value less than \(1\) then the series would converge, and if it converged to a limit value greater than \(1\) then \(\nseq{a} \not\to 0\text{.}\) So in this divergent case the terms are trending to \(0\) but not very fast, because we eventually always have
    \begin{equation*} \frac{a_{n + 1}}{a_n} \approx 1 \text{,} \end{equation*}
    which implies that we eventually always have \(a_{n + 1} \approx a_n\text{,}\) and this approximation becomes better and better as we go further out in the sequence.
Example 25.4.6.
  1. For series \(\sum k^3 / 3^k\text{,}\) the root test will not play nicely with the numerator, so let’s apply the ratio test instead:
    \begin{align*} \frac{a_{n + 1}}{a_n} \amp = \frac{(n + 1)^3 / 3^{n + 1}}{n^3 / 3^n} \\ \amp = \frac{(n + 1)^3}{\cancel{3^n} \cdot 3} \cdot \frac{\cancel{3^n}}{n^3} \\ \amp = \frac{1}{3} \cdot {\left(\frac{n + 1}{n}\right)}^3 \text{.} \end{align*}
    This final expression converges to \(1/3\) as \(n \to \infty\text{,}\) so we conclude that the series converges.
  2. For series \(\sum k! / {10}^k\text{,}\) again the root test will not play nicely with the factorial in the numerator, so let’s apply the ratio test:
    \begin{align*} \frac{a_{n + 1}}{a_n} \amp = \frac{(n + 1)! / {10}^{n + 1}}{n! / {10}^n} \\ \amp = \frac{(n + 1)!}{{10}^{n + 1}} \cdot \frac{{10}^n}{n!} \\ \amp = \frac{(n + 1) \cdot \bcancel{n!}}{\cancel{{10}^n} \cdot 10} \cdot \frac{\cancel{{10}^n}}{\bcancel{n!}} \\ \amp = \frac{n + 1}{10} \text{.} \end{align*}
    This final expression diverges to \(\infty\) as \(n \to \infty\text{,}\) so we conclude that the series diverges. (Can you verify that the sequence \(\seq{n! / {10}^n}\) is eventually increasing? Hint: Consider each of the numerator and denominator as a product of \(n\) factors, and compare the factors.)
  3. For series \(\sum k^k / k!\text{,}\) again the root test will not play nicely with the factorial, so let’s apply the ratio test:
    \begin{align*} \frac{a_{n + 1}}{a_n} \amp = \frac{{(n + 1)}^{n + 1} / (n + 1)!}{n^n / n!} \\ \amp = \frac{{(n + 1)}^{n + 1}}{(n + 1)!} \cdot \frac{n!}{n^n} \\ \amp = \frac{{(n + 1)}^n \cdot \bcancel{(n + 1)}}{\bcancel{(n + 1)} \cdot \cancel{n!}} \cdot \frac{\cancel{n!}}{n^n} \\ \amp = {\left(\frac{n + 1}{n}\right)}^n \\ \amp = {\left(1 + \frac{1}{n}\right)}^n \text{.} \end{align*}
    As \(n \to \infty\text{,}\) this final expression behaves exactly as the limit in Example 23.6.23, and so we can say \(\seq{a_{n + 1} / a_n} \to e\text{.}\) As this limit value is greater than \(1\text{,}\) we conclude that the series diverges. (Again, can you verify that the sequence \(\seq{n^n / n!}\) is eventually increasing?)
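The three ratio limits above can be checked numerically (a Python sketch, using the simplified ratio forms derived in each item so that no factorials or large powers need to be computed):

```python
import math

# The three ratios a_{n+1}/a_n in their simplified forms, at n = 1000:
# the first tends to 1/3 (series converges), the second grows without
# bound, and the third tends to e > 1 (series diverges).
n = 1000
r1 = (1 / 3) * ((n + 1) / n)**3
r2 = (n + 1) / 10
r3 = (1 + 1 / n)**n
print(r1, r2, r3)
```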
Checkpoint 25.4.7.
Verify that the case of Statement 3 of the Ratio test really is inconclusive by testing both the \(p\)-series \(\sum 1/k^2\) and the harmonic series \(\sum 1/k\text{.}\) You should find that the terms of both series satisfy \(\seq{a_{n + 1} / a_n} \to 1\text{,}\) but one series converges and one diverges.

Section 25.5 Absolute convergence

So far in this chapter, the convergence tests have all been for infinite series with positive or non-negative terms. If \(\nseq{a}\) is a sequence with all non-positive terms, then it should be clear that the convergence of \(\sum a_k\) is equivalent to the convergence of \(\sum \abs{a_k}\text{.}\) But what if \(\nseq{a}\) has some positive and some negative terms? Can the convergence/divergence of \(\sum \abs{a_k}\) tell us anything about \(\sum a_k\) in that case?

Subsection 25.5.1 Absolute versus conditional convergence

The answer to the question above is “yes.”
Justification.
The argument is effectively the same as for Fact 24.3.6, just with sums and terms instead of integrals and function values. Assume \(\sum \abs{a_k}\) converges. Then from Pattern 25.2.4, so do both of
\begin{align*} \amp \sum 2 \abs{a_k} \text{,} \amp \amp \sum \bbrac{-\abs{a_k}} \text{.} \end{align*}
For all \(n\) we have
\begin{gather*} \phantom{\implies} \quad -\abs{a_n} \le a_n \le \abs{a_n} \quad \phantom{\implies} \\ \implies \quad 0 \le a_n + \abs{a_n} \le 2 \abs{a_n} \text{.} \quad \phantom{\implies} \end{gather*}
So by comparison, \(\sum \bbrac{a_k + \abs{a_k}}\) must converge. But then adding this convergent series to another will yield a convergent series (Pattern 25.2.4), so
\begin{equation*} \sum \Bbrac{a_k + \abs{a_k} + \bbrac{-\abs{a_k}}} = \sum a_k \end{equation*}
converges.
Definition 25.5.2. Absolute and conditional convergence.
  1. If \(\sum \abs{a_k}\) converges then we say that the infinite series \(\sum a_k\) is absolutely convergent.
  2. If \(\sum a_k\) converges but \(\sum \abs{a_k}\) does not, then we say that the infinite series \(\sum a_k\) is conditionally convergent.
Remark 25.5.3.
Note that \(\sum a_k\) is convergent in both parts of Definition 25.5.2, by Fact 25.5.1, so it is not a misnomer to call \(\sum a_k\) absolutely convergent when it is known that \(\sum \abs{a_k}\) is convergent.
Example 25.5.4.
  1. For the series \(\sum {(-1)}^{k - 1} / k^2\) (beginning at index \(k = 1\)), we have
    \begin{equation*} \sum_{k = 1}^\infty \abs{\frac{{(-1)}^{k - 1}}{k^2}} = \sum_{k = 1}^\infty \frac{1}{k^2} \text{,} \end{equation*}
    which is a convergent \(p\)-series. So \(\sum {(-1)}^{k - 1} / k^2\) converges absolutely, hence converges.
  2. For the series \(\sum \cos(k) / k^2\) (beginning at index \(k = 1\)), we have
    \begin{equation*} \sum_{k = 1}^\infty \abs{\frac{\cos(k)}{k^2}} = \sum_{k = 1}^\infty \frac{\abs{\cos(k)}}{k^2} \text{.} \end{equation*}
    Since \(0 \le \abs{\cos(k)} \le 1\) for all \(k\text{,}\) we have \(\abs{\cos(k)} / k^2 \le 1/k^2\text{,}\) and so \(\sum \cos(k) / k^2\) converges absolutely by comparison with the convergent \(p\)-series \(\sum 1/k^2\text{.}\)
  3. For the series \(\sum {(-1)}^{k - 1} / k\) (beginning at index \(k = 1\)), we have
    \begin{equation*} \sum_{k = 1}^\infty \abs{\frac{{(-1)}^{k - 1}}{k}} = \sum_{k = 1}^\infty \frac{1}{k} \text{,} \end{equation*}
    which is the divergent harmonic series. So \(\sum {(-1)}^{k - 1} / k\) does not converge absolutely. However, we have investigated the partial sums of \(\sum {(-1)}^{k - 1} / k\) numerically in Example 25.1.9 and it would appear that it converges to a value of \(\ln(2)\text{.}\) In Example 25.5.12 we will verify that this series does converge conditionally.
We have seen in Example 25.1.9 that it is possible to rearrange the terms in a conditionally convergent series and get a different sum result. But if a series converges absolutely then this is not possible.
Justification.
Let \(\nseq{s}\) and \(\nseq{t}\) represent the sequences of partial sums of \(\sum a_k\) and \(\sum a_{\varphi(k)}\text{,}\) respectively, and let \(S\) represent the value of \(\sum a_k\text{,}\) so that \(\nseq{s} \to S\text{.}\) We would like to verify that \(\nseq{t} \to S\) as well. Using Fact 22.3.31, we will instead verify that \(\seq{t_n - S} \to 0\text{.}\) Following Corollary 22.3.32, consider an arbitrary half-open range \([0,c)\text{.}\) Our goal is to verify that some tail of the absolute sequence \(\seq{\abs{t_n - S}}\) is completely contained in this range.
As \(\nseq{s} \to S\text{,}\) then also \(\seq{s_n - S} \to 0\text{,}\) and there exists some tail \(\absseqtail[n \ge N_1]{s_n - S}\) completely contained in the half-open range \([0,c/2)\text{.}\) But we not only assume that \(\sum a_k\) converges, we assume it converges absolutely. That is, \(\sum \abs{a_k}\) converges. Let \(\nseq{r}\) be the sequence of “remainders” for this convergent series:
\begin{equation*} r_n = \sum_{k = n + 1}^\infty \abs{a_k} \text{.} \end{equation*}
Fact 25.2.15 tells us that \(\nseq{r} \to 0\text{,}\) so there exists some tail \(\nseqtail[N_2]{r}\) also completely contained in the half-open range \([0,c/2)\text{.}\) (Note that \(\nseq{r}\) is a sequence of non-negative terms, because a convergent series of non-negative terms must have a non-negative sum value.)
Now let \(N\) be the greater of \(N_1\) and \(N_2\text{,}\) so that both the tails \(\absseqtail{s_n - S}\) and \(\nseqtail{r}\) are completely contained in \([0,c/2)\text{.}\) In particular, the first terms of these tails are both contained in this range:
\begin{align} \abs{s_N - S} \amp \lt c/2 \text{,} \amp r_N \amp \lt c/2 \text{.}\tag{†} \end{align}
Remember that the series we would like to investigate is \(\sum a_{\varphi(k)}\text{,}\) but the terms of this sequence are all mixed up. However, from the original series \(\sum a_k\text{,}\) the specific partial sum of the original series
\begin{equation*} s_N = \sum_{k = 0}^N a_k = a_0 + a_1 + a_2 + \dotsb + a_N \end{equation*}
involves only a finite number of terms, and all of these terms must appear eventually in the mixed-up sequence \(\seq{a_{\varphi(n)}}\text{.}\) Choose \(M\) large enough that this happens. (For example, we could choose \(M\) to be the largest index so that the “shuffled” index \(\varphi(M)\) is between \(0\) and \(N\text{,}\) inclusive.) The two partial sums \(s_N\) and \(t_M\) involve only a finite number of terms each, and so we can subtract them and manipulate them without worrying about convergence. But we have chosen \(M\) so that in the partial sum
\begin{equation*} t_M = \sum_{k = 0}^M a_{\varphi(k)} \text{,} \end{equation*}
all the terms of \(s_N\) appear, just in a mixed-up order. So if we subtract \(t_M - s_N\text{,}\) all those common terms will cancel and we will only be left with those terms from \(t_M\) that did not appear in \(s_N\text{.}\) Because \(s_N\) involves all terms up to and including \(a_N\text{,}\) the terms remaining must all have indices that are larger than \(N\) (as part of the original sequence of terms \(\nseq{a}\)):
\begin{equation*} \abs{t_M - s_N} = \abs{a_{j_1} + a_{j_2} + \dotsb + a_{j_\ell}} \end{equation*}
for some collection of indices \(j_1,\dotsc,j_\ell\) that are all greater than \(N\text{.}\) Using the extended Triangle Inequality, we can say that
\begin{gather} \abs{t_M - s_N} \le \abs{a_{j_1}} + \abs{a_{j_2}} + \dotsb + \abs{a_{j_\ell}}\text{.}\tag{††} \end{gather}
In this form, because the indices are all larger than \(N\text{,}\) each of these absolute terms appears in the \(\nth[N]\) absolute remainder
\begin{equation*} r_N = \sum_{k = N + 1}^\infty \abs{a_k} \text{.} \end{equation*}
We know the series defining \(r_N\) converges, and since it is an infinite sum of non-negative terms, it must add up to a larger value than the finite sum in (††). So we can say
\begin{equation*} \abs{t_M - s_N} \le r_N \text{.} \end{equation*}
Now the partial sum \(t_M\) already contains all of the terms from \(s_N\) (and likely more), so if we go even further out into the partial sums \(\nseq{t}\) that will still be true. In other words, all of the analysis above is actually true for every difference \(\abs{t_n - s_N}\) for \(n \ge M\text{,}\) so that for such \(n\) it is always true that \(\abs{t_n - s_N} \le r_N\text{.}\)
We’re now ready to confirm that the tail \(\absseqtail[n \ge M]{t_n - S}\) is completely contained in the range \([0,c)\text{.}\) Because of the absolute values, we know that the terms of this tail will all be greater than or equal to zero; we just need to confirm that they are all less than \(c\text{.}\) So consider:
\begin{align*} \abs{t_n - S} \amp = \abs{t_n - s_N + s_N - S} \\ \amp \le \abs{t_n - s_N} + \abs{s_N - S} \\ \amp \le r_N + \abs{s_N - S} \\ \amp \lt \frac{c}{2} + \frac{c}{2} \\ \amp = c \text{,} \end{align*}
as required, where the first inequality above is the Triangle Inequality, the second is from our analysis of the terms of \(\nseqtail[M]{t}\) compared to the specific partial sum \(s_N\text{,}\) and the third is from (†).

Subsection 25.5.2 Alternating series

Notice that the terms in Series 1 and Series 3 in Example 25.5.4 alternate between positive and negative:
\begin{align*} \sum_{k = 1}^\infty \frac{{(-1)}^{k - 1}}{k^2} \amp = 1 + \left(-\frac{1}{4}\right) + \frac{1}{9} + \left(-\frac{1}{16}\right) + \dotsb \\ \amp = 1 - \frac{1}{4} + \frac{1}{9} - \frac{1}{16} + \dotsb \end{align*}
\begin{align*} \sum_{k = 1}^\infty \frac{{(-1)}^{k - 1}}{k} \amp = 1 + \left(-\frac{1}{2}\right) + \frac{1}{3} + \left(-\frac{1}{ 4}\right) + \dotsb \\ \amp = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{ 4} + \dotsb \end{align*}
Definition 25.5.6. Alternating series.
An infinite series where the terms alternate between positive and negative values.
Example 25.5.7.
  1. \(\displaystyle \sum_{k = 0}^\infty \frac{\cos(k \pi)}{2^k} = 1 - \frac{1}{2} + \frac{1}{4} - \frac{1}{8} + \frac{1}{16} - \dotsb \)
  2. \(\displaystyle \sum_{k = 0}^\infty \frac{{(-1)}^{k + 1}}{3^k} = - 1 + \frac{1}{3} - \frac{1}{9} + \frac{1}{27} - \frac{1}{81} + \dotsb \)
Note 25.5.9.
For every sequence \(\nseq{a}\text{,}\) the convergence/divergence of \(\sum a_k\) and \(\sum (-a_k)\) are equivalent. And if \(\sum a_k\) is alternating beginning with a negative term, then \(\sum (-a_k)\) is alternating beginning with a positive term. So in the remainder of this subsection, we will generally assume that alternating series begin with a positive term (though in examples they might not).
Justification.
(⇒) 
If \(\sum {(-1)}^k a_k\) converges then we must have \(\seq{{(-1)}^n a_n} \to 0\) (Fact 25.2.7). But this implies \(\nseq{a} \to 0\text{.}\)
(⇐) 
Assume \(\nseq{a} \to 0\text{.}\) Let \(\nseq{s}\) represent the sequence of partial sums as usual. Because of the alternating nature of the series, the partial sums also alternate, not between positive and negative, but between growing and decaying, since we alternately add a positive or a negative term onto the previous partial sum. So we will apply Fact 22.3.20 and split \(\nseq{s}\) into subsequences \(\nseq{t}\) and \(\nseq{u}\) with terms
\begin{align*} t_n \amp = s_{2 n} \text{,} \amp u_n \amp = s_{2 n + 1} \text{.} \end{align*}
The subsequence \(\nseq{t}\) represents the even-indexed terms in \(\nseq{s}\) where the last term from \(\nseq{a}\) added on is positive, and the subsequence \(\nseq{u}\) represents the odd-indexed terms in \(\nseq{s}\) where the last term from \(\nseq{a}\) added on is negative. Because of this even-odd index split, collectively \(\nseq{t}\) and \(\nseq{u}\) account for every term from \(\nseq{s}\text{,}\) and so if we can demonstrate that these two subsequences converge to the same limit, then Fact 22.3.20 says that \(\nseq{s}\) must converge to that limit as well.
We’ll begin with \(\nseq{t}\text{:}\)
\begin{align*} t_n = s_{2 n} \amp = \sum_{k = 0}^{2 n} {(-1)}^k a_k \\ \amp = a_0 - a_1 + a_2 - a_3 + \dotsb + a_{2 n - 2} - a_{2 n - 1} + a_{2 n} \\ \amp = (a_0 - a_1) + (a_2 - a_3) + \dotsb + (a_{2 n - 2} - a_{2 n - 1}) + a_{2 n} \text{.} \end{align*}
Each of the bracketed differences above is positive because we have assumed that \(\nseq{a}\) is positive and decreasing, so that \(a_k \gt a_{k + 1}\) for all \(k\text{.}\) And the final term \(a_{2 n}\) is positive as well, so we can say that \(\nseq{t}\) is a positive sequence. But \(\nseq{t}\) is decreasing as well:
\begin{align*} t_{n + 1} \amp = s_{2(n + 1)} \\ \amp = s_{2n + 2} \\ \amp = s_{2n} + {(-1)}^{2 n + 1} a_{2 n + 1} + {(-1)}^{2 n + 2} a_{2 n + 2} \\ \amp = t_n + (-1) \cdot a_{2 n + 1} + a_{2 n + 2} \\ \amp = t_n + (a_{2 n + 2} - a_{2 n + 1}) \text{.} \end{align*}
This time the bracketed difference above is negative, because \(a_{2 n + 2} \lt a_{2 n + 1}\text{,}\) and so \(t_{n + 1} \lt t_n\text{.}\) We have now established that \(\nseq{t}\) is positive and decreasing, which means that it is bounded above by \(t_0\) and below by \(0\text{.}\) The Monotone Convergence Theorem tells us that \(\nseq{t}\) converges; let \(L\) represent its limit value.
Now consider \(\nseq{u}\text{.}\) Let \(\nseq{b}\) represent the subsequence of \(\nseq{a}\) with \(b_n = a_{2 n + 1}\text{.}\) (That is, \(\nseq{b}\) is all the odd-indexed terms of \(\nseq{a}\text{.}\)) We have
\begin{align*} u_n \amp = s_{2 n + 1} \\ u_n \amp = s_{2 n} + {(-1)}^{2 n + 1} a_{2 n + 1} \\ u_n \amp = t_n - b_n \text{.} \end{align*}
This says that \(\nseq{u}\) is the difference of two convergent sequences: \(\nseq{t} \to L\) and \(\nseq{b} \to 0\) (since \(\nseq{a} \to 0\)). The Sequence limit laws tell us that
\begin{equation*} \nseq{u} \to L - 0 = L \text{.} \end{equation*}
We have now established that both partial sum subsequences \(\nseq{t}\) and \(\nseq{u}\) converge to the same limit, so Fact 22.3.20 says that \(\nseq{s}\) must also converge to that limit.
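A quick numerical check of the structure used in this justification (a Python sketch, for illustration only), applied to the alternating series \(\sum {(-1)}^k / (k + 1)\text{:}\) the even-indexed partial sums decrease, the odd-indexed partial sums increase, and the odd-indexed ones stay below the even-indexed ones.

```python
# Even- and odd-indexed partial sums of sum (-1)^k / (k+1):
# t_n = s_{2n} should be decreasing, u_n = s_{2n+1} increasing,
# with u_n < t_n throughout, as in the justification above.
def s(n):
    return sum((-1)**k / (k + 1) for k in range(n + 1))

t = [s(2 * n) for n in range(50)]
u = [s(2 * n + 1) for n in range(50)]
print(t[:3], u[:3])
```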
Example 25.5.11.
  1. Consider the alternating series
    \begin{equation*} \sum_{k = 1}^\infty \frac{{(-1)}^{k - 1} k^2}{k^3 + 1} \text{.} \end{equation*}
    This series is unlikely to converge absolutely, as
    \begin{equation*} \abs{\frac{{(-1)}^{k - 1} k^2}{k^3 + 1}} = \frac{k^2}{k^3 + 1} \approx \frac{1}{k} \quad \text{for large } k \text{,} \end{equation*}
    and we know that the harmonic series diverges. Consider \(\nseq{a}\) with \(a_n = n^2 / (n^3 + 1)\text{,}\) beginning at index \(n = 1\text{,}\) like our sum. This is a positive sequence, but is it decreasing? Let \(f(t) = t^2 / (t^3 + 1) \text{.}\) Then
    \begin{equation*} f'(t) = \frac{t (2 - t^3)}{{(t^3 + 1)}^2} \text{,} \end{equation*}
    and this derivative is negative on domain \(t \ge \sqrt[3]{2}\text{.}\) So we can say that \(\nseq{a}\) is decreasing for \(n \ge 3\text{.}\) And from examining dominant terms it is clear that \(\nseqtail[3]{a} \to 0\text{,}\) so the Alternating Series Test says that
    \begin{equation*} \sum_{k = 3}^\infty \frac{{(-1)}^{k + 1} k^2}{k^3 + 1} \end{equation*}
    converges. And since this “tail” converges, our original series must converge as well.
  2. Consider the alternating series
    \begin{equation*} \sum_{k = 1}^\infty \frac{{(-1)}^{k - 1} \cdot k}{4 k - 1} \text{.} \end{equation*}
    Let \(\nseq{a}\) represent the sequence of absolute terms: \(a_n = n / (4 n - 1)\text{,}\) beginning at index \(n = 1\text{,}\) like our sum. This is a positive sequence, and you can verify via a derivative calculation as in the example above that it is a decreasing sequence. However, \(\nseq{a} \to 1/4 \neq 0\text{,}\) so the Alternating Series Test says that our series diverges. In fact, the limit value for \(\nseq{a}\) tells us in what way it diverges: if \(n\) is large enough that \(a_n \approx 1/4\) and \(S = s_n\) is that particular partial sum value, then subsequent partial sums will bounce back and forth between approximately \(S\) and \(S \pm 1/4\) (where the sign depends on whether \(n\) is even or odd).
Example 25.5.12. Conditional convergence of the Alternating Harmonic Series.
Consider the alternating harmonic series
\begin{equation*} \sum_{k = 1}^\infty \frac{{(-1)}^{k - 1}}{k} \text{.} \end{equation*}
Then the sequence of absolute terms \(\seq{1/n}\) is positive, decreasing, and trends to \(0\text{.}\) So the Alternating Series Test says that the series converges. As we have seen previously that this series does not converge absolutely, we instead say that \(\sum {(-1)}^{k - 1} / k\) converges conditionally.

Subsection 25.5.3 Estimating alternating sums

Suppose \(\nseq{a}\) is positive, decreasing, and \(\nseq{a} \to 0\text{.}\) Then the Alternating Series Test tells us that the alternating series \(\sum {(-1)}^k a_k\) converges; let \(S\) represent its sum. As in the justification of the Alternating Series Test, let \(\nseq{t}\) and \(\nseq{u}\) represent the sequences of even-indexed and odd-indexed partial sums:
\begin{align*} t_n \amp = s_{2 n} \text{,} \amp u_n \amp = s_{2 n + 1} \text{.} \end{align*}
As \(\nseq{s} \to S\text{,}\) then also \(\nseq{t} \to S\) and \(\nseq{u} \to S\text{.}\) We saw that \(\nseq{t}\) decreases to \(S\text{.}\) What about \(\nseq{u}\text{?}\) We have
\begin{align*} u_{n + 1} \amp = s_{2(n + 1) + 1} \\ \amp = s_{2n + 3} \\ \amp = s_{2n + 1} + {(-1)}^{2 n + 2} a_{2 n + 2} + {(-1)}^{2 n + 3} a_{2 n + 3} \\ \amp = u_n + (a_{2 n + 2} - a_{2 n + 3}) \text{.} \end{align*}
This time \(u_{n + 1} \gt u_n\text{,}\) as \(a_{2 n + 2} - a_{2 n + 3} \gt 0\) because \(\nseq{a}\) is decreasing. So \(\nseq{u}\) increases to \(S\text{.}\)
To summarize, the terms of the partial sum sequence \(\nseq{s}\) alternate between values of \(\nseq{t}\text{,}\) a sequence whose terms are greater than and decreasing down to \(S\text{,}\) and values of \(\nseq{u}\text{,}\) a sequence whose terms are less than and increasing up to \(S\text{.}\) The sum value \(S\) is therefore squeezed between \(\nseq{t}\) and \(\nseq{u}\text{,}\) and in particular \(S\) always lies between consecutive terms of \(\nseq{s}\text{:}\)
\begin{align*} n \text{ even:} \amp \qquad s_{n + 1} \lt S \lt s_n \text{,} \\ n \text{ odd:} \amp \qquad s_n \lt S \lt s_{n + 1} \text{.} \end{align*}
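To see the bracketing numerically, here is a sketch using the alternating harmonic series, whose sum is \(\ln 2\text{.}\) (Because that series starts at \(k = 1\) rather than \(k = 0\text{,}\) the parity is shifted: a partial sum with an even number of terms under-shoots, and one with an odd number of terms over-shoots.)

```python
import math

# Partial sums of sum_{k=1}^infty (-1)^(k-1)/k, which converges to ln(2).
def s(n):
    return sum((-1) ** (k - 1) / k for k in range(1, n + 1))

S = math.log(2)
for n in (2, 4, 6, 8):
    # an even number of terms under-shoots, an odd number over-shoots
    print(f"s_{n} = {s(n):.4f}  <  S = {S:.4f}  <  s_{n + 1} = {s(n + 1):.4f}")
```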
And in all cases, the distance between consecutive terms \(s_n\) and \(s_{n + 1}\) is \(a_{n + 1}\text{,}\) the term of \(\nseq{a}\) next to be added onto the sum. Since \(S\) lies between \(s_n\) and \(s_{n + 1}\text{,}\) the error in the approximation \(S \approx s_n\) satisfies \(\lvert S - s_n \rvert \lt a_{n + 1}\text{.}\)
Recall that under the conditions in the fact above, the alternating sum converges if and only if \(\nseq{a} \to 0\text{.}\) So the fact says that if we would like to estimate the alternating sum to within a specific tolerance, we simply need to determine the index at which the terms of \(\nseq{a}\) get within that tolerance of \(0\text{.}\)
Example 25.5.14.
Suppose we would like to estimate the sum
\begin{equation*} \sum_{k = 0}^\infty {(-1)}^k \frac{k + 1}{4^k} \end{equation*}
to within a tolerance of \(\pm 5 \times {10}^{-5}\text{.}\)
First let’s verify that the sequence \(\nseq{a}\) with terms \(a_n = (n + 1) / 4^n\) satisfies the conditions of the Alternating Series Test. Clearly this sequence is positive. To investigate whether the sequence decreases to \(0\text{,}\) we will instead consider the corresponding continuous function \(f(t) = (t + 1) / 4^t\text{.}\) We have
\begin{equation*} f'(t) = \frac{1 - (t + 1) \ln(4)}{4^t} \text{,} \end{equation*}
which is negative for \(t \gt 1/\ln(4) - 1 \approx -0.28\text{,}\) so we can say that \(\bbseq{(n + 1) / 4^n}\) is a decreasing sequence for all \(n \ge 0\text{.}\) Furthermore, \(f(t)\) has \(\infty/\infty\) form as \(t \to \infty\text{,}\) so
\begin{equation*} \lim_{t \to \infty} \frac{t + 1}{4^t} \lhopeq \lim_{t \to \infty} \frac{1}{4^t \ln(4)} = 0 \text{,} \end{equation*}
and we conclude that \(\nseq{a} \to 0\) as well. All of this confirms that \(\sum {(-1)}^k a_k\) converges, and we can use partial sums to estimate its value.
Per Fact 25.5.13, we need to determine the index when \(a_n \lt 5 \times {10}^{-5}\text{,}\) and then we can backtrack one index from that. You may calculate that the denominator of the terms of \(\nseq{a}\) becomes greater than \({10}^5\) at \(n = 9\text{.}\) From \(4^9 \gt 2 \times {10}^5\) we can compute:
\begin{equation*} a_9 = \frac{9 + 1}{4^9} \lt \frac{10}{2 \times {10}^5} = \frac{5}{10^5} \text{.} \end{equation*}
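The hand estimate can be double-checked by brute force; this short loop (a plain-Python sketch) searches for the first index at which \(a_n\) drops below the tolerance.

```python
# Find the first index n with a_n = (n + 1)/4^n below the tolerance.
tol = 5e-5
n = 0
while (n + 1) / 4 ** n >= tol:
    n += 1
print(n, (n + 1) / 4 ** n)  # n = 9, and a_9 = 10/4^9 is about 3.8e-5
```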
This means that \(s_8\) will estimate the infinite sum to within \(\pm 5 \times {10}^{-5}\text{:}\)
\begin{align*} \sum_{k = 0}^\infty {(-1)}^k \frac{k + 1}{4^k} \amp \approx \sum_{k = 0}^8 {(-1)}^k \frac{k + 1}{4^k} \\ \amp = 1 - \frac{2}{4} + \frac{3}{16} - \frac{4}{64} + \frac{5}{256} - \frac{6}{1024} + \frac{7}{4096} - \frac{8}{4^7} + \frac{9}{4^8} \text{.} \end{align*}
We can have Sage add this up for us.
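The Sage cell is not reproduced here, but the computation can be sketched in plain Python (which also runs in Sage); using exact fractions defers all rounding to the final step.

```python
from fractions import Fraction

# Exact partial sum s_8 = sum_{k=0}^{8} (-1)^k (k + 1)/4^k.
s8 = sum(Fraction((-1) ** k * (k + 1), 4 ** k) for k in range(9))
print(s8)         # 41945/65536
print(float(s8))  # approximately 0.6400299
```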
As \(s_8\) is an even-indexed partial sum, we know it is an over-estimate. And if you compute additional decimal places, you’ll see that \(0.640030\) is already rounded up from the exact value. So we can be confident that
\begin{equation*} 0.639979 \lt \sum_{k = 0}^\infty {(-1)}^k \frac{k + 1}{4^k} \lt 0.640030 \end{equation*}
(where we have subtracted off \(5.1 \times {10}^{-5}\) to account for the fact that \(0.640030\) is a rounded-up value).
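As an independent sanity check (not part of the argument above): this series is \(\sum_{k = 0}^\infty (k + 1) x^k\) evaluated at \(x = -1/4\text{,}\) and the known power series \(\sum_{k = 0}^\infty (k + 1) x^k = 1 / {(1 - x)}^2\) gives the exact sum \(16/25 = 0.64\text{,}\) which indeed lies inside the interval. A quick numerical summation confirms this.

```python
# Sum enough terms that the tail is negligible; the series converges
# rapidly, so 60 terms is far more than needed at double precision.
S = sum((-1) ** k * (k + 1) / 4 ** k for k in range(60))
print(S)  # approximately 0.64, inside (0.639979, 0.640030)
```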