Section 9.2 Definition and Convergence of Infinite Series

Subsection 9.2.1

Now that I have introduced sequences and their limits, I can define infinite series.

Definition 9.2.1.

If \(\{a_n\}\) is a sequence, then the sum of all infinitely many terms \(a_n\) is called an infinite series. Infinite series are usually written with sigma notation.
\begin{equation*} \sum_{n=1}^\infty a_n \end{equation*}
The number \(n\) is called the index and the numbers \(a_n\) are called the terms. Forgetting the sum produces the sequence of terms \(\{a_n\}_{n=1}^\infty\text{.}\) Though the above notation started with \(n=1\text{,}\) the series can have a starting index of any integer.

Subsection 9.2.2 Partial-Sums and Convergence

Unlike finite sums, there is no guarantee that an infinite series evaluates to anything. The problem of infinite series is precisely this: how can I add up infinitely many things? This isn’t a problem that algebra can solve, but calculus, with the use of limits, can give a reasonable answer. I need to set up an approximation process and take the limit, just as I did for derivatives and integrals. The approximation process is called partial sums. Instead of taking the entire sum to infinity, I just take a sum of finitely many pieces.

Definition 9.2.2.

The \(n\)th partial sum of an infinite series is the sum of the first \(n\) terms.
\begin{equation*} s_n := \sum_{k=1}^n a_k \end{equation*}
Since these are finite sums, I can actually calculate them. They serve as approximations to the total infinite sum. Moreover, these partial sums \(\{s_n\}_{n=1}^\infty\) define a sequence. I can take the limit of the sequence of partial sums. This is the limit of the approximation process, so it should calculate the value of the series.
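Partial sums are finite, so a computer can evaluate them directly. Here is a minimal Python sketch (the helper name `partial_sums` is my own, not from the text) that uses exact rational arithmetic to list the first few partial sums of the harmonic sequence \(a_n = \frac{1}{n}\text{:}\)

```python
from fractions import Fraction

def partial_sums(terms):
    """Yield the running partial sums s_1, s_2, ... of a sequence of terms."""
    total = Fraction(0)
    for a in terms:
        total += a
        yield total

# The first five partial sums of the sequence a_n = 1/n.
print(list(partial_sums(Fraction(1, n) for n in range(1, 6))))
# [Fraction(1, 1), Fraction(3, 2), Fraction(11, 6), Fraction(25, 12), Fraction(137, 60)]
```

Watching these running totals is exactly the approximation process the definition describes; the question of convergence is whether they settle down as more terms are included.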

Definition 9.2.3.

The value of an infinite series is the limit of the sequence of partial sums, if the limit exists.
\begin{equation*} \sum_{n=1}^\infty a_n := \lim_{n \rightarrow \infty} s_n = \lim_{n \rightarrow \infty} \sum_{k=1}^n a_k \end{equation*}
If this limit exists, the series is convergent. Otherwise, the series is divergent.
With limits, writing
\begin{equation*} \lim_{n \rightarrow \infty} a_n = \infty \end{equation*}
was a shorthand. The symbol \(\infty\) was a way to write ``getting larger and larger without bound.'' Something similar happens for infinite series. Some divergent series diverge because the sum grows larger and larger without bound (as in Example 9.2.5 below). For these series, it is common to use a very similar shorthand notation.
\begin{equation*} \sum_{n=1}^\infty a_n = \infty \end{equation*}
Sometimes, mathematicians also use the infinity symbol for convergence. This takes some liberties with the notation, stretching the intention of the symbols, but it is nonetheless common practice. For a convergent series, I would indicate this convergence by the following notation.
\begin{equation*} \sum_{n=1}^\infty a_n \lt \infty \end{equation*}
I would probably use this notation only for sums with all positive terms, where being ``less than infinity’’ would clearly indicate that it sums to a finite number.

Subsection 9.2.3 First Convergence Examples

Example 9.2.4.

I’ll start with Zeno’s paradox. I can phrase this problem as the sum of a geometric series with common ratio \(\frac{1}{2}\text{.}\)
\begin{equation*} \sum_{n=1}^\infty \frac{1}{2^n} \end{equation*}
I’ll calculate the first few partial sums.
\begin{align*} s_1 \amp = \frac{1}{2}\\ s_2 \amp = \frac{1}{2} + \frac{1}{4} = \frac{3}{4}\\ s_3 \amp = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} = \frac{7}{8}\\ s_4 \amp = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} = \frac{15}{16}\\ s_5 \amp = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \frac{1}{32} = \frac{31}{32}\\ s_6 \amp = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \frac{1}{32} + \frac{1}{64} = \frac{63}{64} \end{align*}
From these first few partial sums, I can claim that there is a pattern of partial sums.
\begin{equation*} s_n = \frac{2^n-1}{2^n} \end{equation*}
I haven’t proved this pattern of partial sums, but there is still a strong informal argument for it. (Proving the patterns of partial sums is often done through a proof technique called mathematical induction. This is an important proof technique and necessary for a more formal presentation of infinite series, but I’m not going to cover it in this course.) Now that I have a pattern for the partial sums, I can take the limit of the pattern to calculate the value of the infinite series.
\begin{equation*} \lim_{n \rightarrow \infty} s_n = \lim_{n \rightarrow \infty} \frac{2^n - 1}{2^n} = 1 \end{equation*}
Unsurprisingly, I conclude that the total distance travelled from \(0\) to \(1\) is simply \(1\) unit. This gives a justification for saying that I can travel an infinite number of smaller and smaller intervals, since all those infinitely many intervals add up to a finite distance. (Whether this actually solves Zeno’s paradox is a question left for the philosophers.)
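The claimed pattern \(s_n = \frac{2^n-1}{2^n}\) can be checked numerically, even though the check is not a proof. A small Python sketch (the helper name `s` is my own) compares the pattern against partial sums computed term by term, using exact fractions so no rounding intrudes:

```python
from fractions import Fraction

def s(n):
    """n-th partial sum of sum_{k=1}^infty 1/2^k, computed directly."""
    return sum(Fraction(1, 2**k) for k in range(1, n + 1))

# Check the claimed pattern s_n = (2^n - 1)/2^n for the first several n.
for n in range(1, 11):
    assert s(n) == Fraction(2**n - 1, 2**n)

print(float(s(30)))  # prints a value just below 1
```

The partial sums climb toward \(1\) exactly as the limit calculation predicts.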

Example 9.2.5.

Now consider the sum of the harmonic series. I am going to analyze the partial sums. I will not produce a general formula, but I can define some lower bounds for these partial sums.
\begin{align*} \sum_{n=1}^\infty \frac{1}{n} \amp\\ s_1 \amp = 1\\ s_2 \amp = 1 + \frac{1}{2} = \frac{3}{2}\\ s_3 \amp = 1 + \frac{1}{2} + \frac{1}{3}\\ s_4 \amp = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} > 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} = 2 \end{align*}
The inequality holds since \(\frac{1}{3} > \frac{1}{4}\) and all other terms remain the same.
\begin{align*} s_8 \amp = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8}\\ \amp \hspace{1cm} > 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} = \frac{5}{2} \end{align*}
This time, I replace all the fractions without powers of 2 in the denominator with smaller terms to satisfy the inequality.
\begin{align*} s_{16} \amp > 3 \end{align*}
I can generate a lower bound for any \(s_{2^n}\) in this way, producing a pattern of lower bounds.
\begin{align*} s_{32} \amp > \frac{7}{2}\\ s_{64} \amp > 4\\ s_{128} \amp > \frac{9}{2}\\ s_{256} \amp > 5 \end{align*}
Taking every second power of two gives partial sums larger than the sequence of positive integers.
\begin{equation*} s_{2^{2k-2}} > k \hspace{2cm} \forall k \geq 2 \end{equation*}
The lower bounds get larger and larger. The limit of the sequence of partial sums is larger than the limit of these lower bounds.
\begin{equation*} \lim_{n \rightarrow \infty} s_n = \lim_{k \rightarrow \infty} s_{2^{2k-2}} \geq \lim_{k \rightarrow \infty} k = \infty \end{equation*}
The harmonic series is divergent. This is something of a surprising result, since the harmonic series looks similar to the series defining Zeno’s paradox. However, the terms of the harmonic series are large enough to eventually add up to something larger than any finite number.
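The lower bounds from this argument can also be verified computationally. This Python sketch (the helper name `harmonic` is my own) confirms \(s_{2^{2k-2}} > k\) for the first few values of \(k\text{,}\) again with exact rational arithmetic:

```python
from fractions import Fraction

def harmonic(n):
    """n-th partial sum of the harmonic series 1 + 1/2 + ... + 1/n."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

# Verify the lower bound s_{2^(2k-2)} > k for k = 2, 3, 4, 5,
# i.e. s_4 > 2, s_16 > 3, s_64 > 4, s_256 > 5.
for k in range(2, 6):
    n = 2 ** (2 * k - 2)
    assert harmonic(n) > k
```

The growth is extremely slow (it takes \(256\) terms just to pass \(5\)), which is why the divergence of the harmonic series is so easy to miss numerically.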

Example 9.2.6.

Another important example is an alternating series of positive and negative ones. I’ll calculate the first few partial sums.
\begin{align*} \sum_{n=0}^\infty (-1)^n \amp\\ s_0 \amp = 1\\ s_1 \amp = 1-1 = 0\\ s_2 \amp = 1-1+1 = 1\\ s_3 \amp = 1-1+1-1 = 0\\ s_4 \amp = 1-1+1-1+1 = 1\\ s_5 \amp = 1-1+1-1+1-1 = 0 \end{align*}
There is a pattern here. The sequence of partial sums is a sequence that switches back and forth between 0 and 1.
\begin{align*} s_{2n+1} \amp = 0 \ \ \forall n \in \NN\\ s_{2n} \amp = 1 \ \ \forall n \in \NN\\ \lim_{n \rightarrow \infty} s_n \amp \hspace{1cm} DNE \end{align*}
This series does not converge, even though it doesn’t grow to infinity. There is simply no way to settle on a value when the partial sums keep switching back and forth from \(0\) to \(1\text{.}\) Unlike the geometric series behind Zeno’s paradox, this is a series where I simply can’t reckon with the infinite number of steps. Since there is no last step in an infinite process, I can never decide if the last addition is \((-1)\text{,}\) giving a value of \(0\text{,}\) or if the last addition is \((+1)\text{,}\) giving a value of \(1\text{.}\)
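The oscillation of the partial sums is easy to see by direct computation. A short Python sketch:

```python
# Partial sums of sum_{n=0}^infty (-1)^n oscillate between 1 and 0 forever.
total = 0
sums = []
for n in range(8):
    total += (-1) ** n
    sums.append(total)

print(sums)  # [1, 0, 1, 0, 1, 0, 1, 0]
```

No matter how many terms are added, the running total never settles down, so the limit of the partial sums does not exist.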

Example 9.2.7.

There are some nice examples where algebraic manipulation leads to reasonable partial sums. In this example (and similar series), the middle terms in each successive partial sum cancel; these series are called telescoping series.
\begin{align*} \sum_{n=1}^\infty \frac{1}{n(n+1)} \amp\\ s_n \amp = \sum_{k=1}^n \frac{1}{k(k+1)} = \sum_{k=1}^n \left( \frac{1}{k} - \frac{1}{k+1} \right)\\ \amp = \frac{1}{1} - \frac{1}{2} + \frac{1}{2} - \frac{1}{3} + \frac{1}{3} - \frac{1}{4} + \frac{1}{4} \ldots - \frac{1}{n+1} \end{align*}
Almost all the terms cancel out, leaving only the first and last term. This lets me determine a pattern for the partial sums and take the limit of that pattern to determine the value of the series.
\begin{align*} \amp = 1 - \frac{1}{n+1}\\ \sum_{n=1}^\infty \frac{1}{n(n+1)} \amp = \lim_{n \rightarrow \infty} 1 - \frac{1}{n+1} = 1 \end{align*}
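The telescoping pattern \(s_n = 1 - \frac{1}{n+1}\) can be confirmed computationally. This Python sketch (the helper name `telescoping_sum` is my own) sums the series term by term and compares against the closed form:

```python
from fractions import Fraction

def telescoping_sum(n):
    """n-th partial sum of sum 1/(k(k+1)), computed term by term."""
    return sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1))

# Verify the closed form s_n = 1 - 1/(n+1) for the first several n.
for n in range(1, 20):
    assert telescoping_sum(n) == 1 - Fraction(1, n + 1)
```

Since \(\frac{1}{n+1} \rightarrow 0\text{,}\) the closed form makes the value of \(1\) for the whole series immediate.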
I need a new definition to present the last two examples.

Definition 9.2.8.

The factorial of a natural number \(n\) is written \(n!\text{.}\) It is defined to be the product of all natural numbers up to and including \(n\text{.}\)
\begin{equation*} n! = (1)(2)(3)(4)(5)\ldots(n-2)(n-1)(n) \end{equation*}
In addition, \(0! = 1\text{.}\)
The factorial grows very rapidly. Even by \(n=40\text{,}\) the factorial is already a ridiculously large number.
\begin{equation*} 40! =815915283247897734345611269596115894272000000000 \end{equation*}
Asymptotically, the factorial grows even faster than the exponential. In particular, I can use this fact in asymptotic analysis of limits.
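The size of the factorial is easy to check with Python’s arbitrary-precision integers. This sketch confirms the value of \(40!\) quoted above and compares factorial growth against an exponential:

```python
import math

# 40! really is the 48-digit number quoted above.
value = math.factorial(40)
print(value)
print(len(str(value)))  # 48

# The factorial outgrows the exponential: n! > 2^n already for small n.
for n in [5, 10, 20]:
    print(n, math.factorial(n) > 2 ** n)
```

The comparison with \(2^n\) is only a spot check, of course, not a proof of the asymptotic claim, but it shows how quickly the factorial dominates.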

Example 9.2.9.

Here’s a series example using the factorial. I’ll calculate the first few partial sums.
\begin{align*} \sum_{n=0}^\infty \frac{1}{n!} \amp \\ s_0 \amp = 1 \\ s_1 \amp = 1 + 1 = 2 \amp \\ s_2 \amp = 1 + 1 + \frac{1}{2} = \frac{5}{2} \\ s_3 \amp = \frac{5}{2} + \frac{1}{6} = \frac{16}{6} \\ s_4 \amp = \frac{16}{6} + \frac{1}{24} = \frac{61}{24} \\ s_5 \amp = \frac{61}{24} + \frac{1}{120} = \frac{51}{20} \\ s_6 \amp = \frac{51}{20} + \frac{1}{720} = \frac{1837}{720} \end{align*}
It looks like these partial sums are growing slowly and possibly leveling off at some value, perhaps less than 3. I can’t prove it now, but the value of this series is surprising.
\begin{equation*} \sum_{n=0}^{\infty} \frac{1}{n!} = \lim_{n \rightarrow \infty} s_n = e \end{equation*}
This is another definition for the number \(e\text{.}\) I’ll prove that this definition is equivalent to the previous definitions in Subsection 11.2.3.
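Though the proof is postponed, the claimed value can be checked numerically. A few terms of the series already agree with \(e\) to many decimal places, since the factorial in the denominator grows so quickly:

```python
import math

# Partial sums of sum_{n=0}^infty 1/n! approach e very rapidly:
# fifteen terms already agree with math.e to ten decimal places.
s = sum(1 / math.factorial(n) for n in range(15))
print(abs(s - math.e) < 1e-10)  # True
```

This rapid convergence makes the series a practical way to compute \(e\text{,}\) in contrast to slowly converging series like the one for \(\pi\) in the next example.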

Example 9.2.10.

The study of values of particular infinite series is a major project in the history of mathematics. There are many interesting results, some of which are listed here for your curiosity.
\begin{align*} \pi \amp = 4 \sum_{n=0}^\infty \frac{(-1)^n}{2n+1} = \frac{4}{1} - \frac{4}{3} + \frac{4}{5} - \frac{4}{7} + \frac{4}{9} - \ldots\\ \pi \amp = \sqrt{12} \sum_{n=0}^\infty \frac{(-1)^n}{3^n (2n+1)} = \sqrt{12} \left( 1 - \frac{1}{9} + \frac{1}{45} - \ldots \right)\\ \frac{1}{\pi} \amp = \frac{2\sqrt{2}}{9801} \sum_{n=0}^\infty \frac{ (4n)! (1103 + 26390n)}{(n!)^4 (396)^{4n}}\\ \frac{1}{\pi} \amp = \frac{1}{426880 \sqrt{10005}} \sum_{n=0}^\infty \frac{(6n)! (13591409 + 545140134n) (-1)^n}{(3n)! (n!)^3 (640320)^{3n}}\\ \frac{\pi^4}{90} \amp = \sum_{n=1}^\infty \frac{1}{n^4}\\ e \amp = \sum_{n=0}^\infty \frac{(3n)^2+1}{(3n)!}\\ e \amp = \sum_{n=0}^\infty \frac{n^7}{877 \, n!} \end{align*}

Subsection 9.2.4 The Test for Divergence

As several examples have illustrated, it is possible for a divergent series to have terms which decrease to zero. The comparison between the geometric series with common ratio \(\frac{1}{2}\) and the harmonic series is particularly important. The geometric series (which addresses Zeno’s paradox) has terms \(\frac{1}{2^n}\text{;}\) these terms approach zero and the whole series adds up to exactly \(1\text{.}\) The harmonic series also has terms \(\frac{1}{n}\) which approach zero, but the series diverges. This leads to a very important observation: the fact that the terms of a series decay to zero is not sufficient to conclude that the series converges.
The converse statement, however, does hold: if a series converges, then its terms must approach zero. Equivalently, if the terms of a series do not approach zero, then the series diverges.
This is called the Test for Divergence because the logic here can only be used to prove divergence. If the limit of the terms is zero, I don’t know if the series converges. If might be like the geometric series with common ratio \(\frac{1}{2}\) and converge. It might be like the harmonic series and diverge. I just don’t know. However, if the limit of the terms is not zero, I know for certain that the series will diverge.
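The two directions of this logic can be illustrated numerically. In the sketch below, the terms of \(\sum \frac{n}{n+1}\) approach \(1\text{,}\) not \(0\text{,}\) so the Test for Divergence applies; the terms of the harmonic series approach \(0\text{,}\) so the test is silent even though that series also diverges:

```python
# Terms of sum n/(n+1): they approach 1, not 0, so the series diverges
# by the Test for Divergence.
terms = [n / (n + 1) for n in [10, 100, 1000, 10000]]
print(terms)  # values approaching 1

# Terms of the harmonic series approach 0, yet that series still
# diverges: the Test for Divergence says nothing in this case.
print([1 / n for n in [10, 100, 1000]])  # [0.1, 0.01, 0.001]
```

Looking at the limit of the terms is always the cheapest first check on a series, as long as its one-sided logic is kept in mind.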

Subsection 9.2.5 Two Central Examples

There are two important classes of convergent series which I will use frequently. These two examples will be particularly important for future comparison results. The first of the two examples has already been mentioned in connection with Zeno’s paradox: the geometric series. I’ll now give a full definition of it.

Definition 9.2.12.

Let \(r \in \RR\text{.}\) The geometric series with common ratio \(r\) is this series.
\begin{equation*} \sum_{n=0}^\infty r^n \end{equation*}

Proposition 9.2.13.

If \(|r| \lt 1\text{,}\) then the geometric series with common ratio \(r\) converges and its value is \(\frac{1}{1-r}\text{.}\)

Proof.

For \(|r| \lt 1\text{,}\) I will calculate the \(k\)th partial sums. In the calculation, I multiply by \(\frac{1-r}{1-r}\text{.}\) In the numerator, when I distribute the multiplication of the two terms, all but the very first and the very last products will cancel out, leaving a relatively simple expression.
\begin{equation*} s_k = 1 + r + r^2 + r^3 + \ldots + r^k = \frac{1-r}{1-r} \left( 1 + r + r^2 + r^3 + \ldots + r^k \right) = \frac{1-r^{k+1}}{1-r} \end{equation*}
The convergence of the series is shown by the limit of these partial sums.
\begin{equation*} \sum_{n=0}^\infty r^n = \lim_{k \rightarrow \infty} \frac{1-r^{k+1}}{1-r} = \frac{1}{1-r} \end{equation*}
This limit is \(\frac{1}{1-r}\) only because of the assumption that \(|r| \lt 1\text{.}\) Under that assumption, the powers of \(r\) in the numerator of the limit decay to zero, leaving the result as stated.
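Both the closed form for the partial sums and the limiting value \(\frac{1}{1-r}\) can be checked computationally. A Python sketch (the helper name `geometric_partial` is my own), using \(r = \frac{1}{3}\) as a sample ratio:

```python
from fractions import Fraction

def geometric_partial(r, k):
    """Partial sum 1 + r + r^2 + ... + r^k of the geometric series."""
    return sum(r ** n for n in range(k + 1))

r = Fraction(1, 3)

# The closed form (1 - r^(k+1)) / (1 - r) matches the direct sum ...
for k in range(10):
    assert geometric_partial(r, k) == (1 - r ** (k + 1)) / (1 - r)

# ... and the partial sums approach 1/(1-r) = 3/2.
print(float(geometric_partial(r, 30)))  # very close to 1.5
```

The same check works for any rational \(r\) with \(|r| \lt 1\text{;}\) for \(|r| \geq 1\) the partial sums visibly fail to settle down.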
The second important series is the \(\zeta\) (zeta) series, often called the \(p\)-series in other texts. I call this the \(\zeta\) series since this definition, in a broader context, is the definition of the famous Riemann \(\zeta\) function.

Definition 9.2.14.

The \(\zeta\) series is the infinite series with terms \(\frac{1}{n^p}\text{.}\)
\begin{equation*} \zeta(p) = \sum_{n=1}^\infty \frac{1}{n^p} \end{equation*}
The \(\zeta\) series converges when \(p > 1\) and diverges when \(p \leq 1\text{.}\) I state this without proof for now; the convergence of the \(\zeta\) series can be proved with the Integral Test in Subsection 10.3.2. Unlike the geometric series, where I can easily write the value, the actual value of \(\zeta(p)\) is not easy to express in conventional algebraic terms.
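Even without a general formula, particular values of \(\zeta\) can be approximated by large partial sums. Euler famously showed that \(\zeta(2) = \frac{\pi^2}{6}\) (a known result, stated here without proof), and a numeric check agrees:

```python
import math

# Approximate zeta(2) = sum 1/n^2 by a large partial sum; the exact
# value is pi^2/6, one of Euler's famous results.
s = sum(1 / n**2 for n in range(1, 200001))
print(abs(s - math.pi**2 / 6) < 1e-4)  # True
```

The tail of \(\sum \frac{1}{n^2}\) after \(N\) terms is roughly \(\frac{1}{N}\text{,}\) so this convergence is much slower than the factorial series for \(e\) above, but it still clearly approaches the predicted value.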