Section 1.1 Infinite Series
Subsection 1.1.1 Zeno's Paradox
The first two chapters of these notes, covering Infinite Series and Taylor Series, cover the same material as the final two chapters of the Calculus II notes. Much of this material is review, and parts of the text are taken directly from the Calculus II notes, but there is a new focus on Taylor series, along with new material on approximation and error analysis.
There are three classical branches of calculus. The first two, derivatives and integrals, command the vast majority of the time and energy in most first year calculus classes. In many universities, these two topics are the entire course. However, there is a third branch of the calculus which deserves equal attention: infinite series.
In some ways, the problem of infinite series is older than the problems motivating derivatives and integrals. Issues of infinite series go back to at least early Greek mathematics, where thinkers struggled with the puzzle known as Zeno's Paradox.
There are many forms of Zeno's Paradox; I will present one relatively common version. If you wish to travel from point \(a\) to point \(b\text{,}\) then first you must travel half-way. Having gone halfway to \(b\text{,}\) you must again cover half the remaining distance. Having gone \(3/4\) of the way to \(b\text{,}\) there is still a distance remaining, and you still must first cover half that distance. Repeating this process gives an infinite series of halves, all of which must be traversed to travel from \(a\) to \(b\text{.}\) Since doing an infinite number of things is not humanly possible, you will never be able to reach \(b\text{.}\) Finally, since this holds for any two points \(a\) and \(b\text{,}\) movement is impossible.
Obviously, Zeno's paradox doesn't hold, since we are able to move from one place to another. But Zeno's paradox has commanded the attention and imagination of philosophers and mathematicians for over 2000 years, as they struggled to deal with the infinity implicit in even the smallest movement. Infinite series is one way (though, some would argue, an incomplete way) of dealing with Zeno's paradox.
Subsection 1.1.2 Definition of Infinite Series
Definition 1.1.1.
If \(\{a_n\}\) is a sequence, then the sum of all its infinitely many terms is called an infinite series. We write infinite series with sigma notation.
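\begin{equation*} \sum_{n=1}^{\infty} a_n = a_1 + a_2 + a_3 + a_4 + \ldots \end{equation*}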
The number \(n\) is called the index and the numbers \(a_n\) are called the terms. If we want to forget the sum, we can talk about the sequence of terms \(\{a_n\}_{n=1}^\infty\text{.}\) Though we started with \(n=1\) in this definition, we could start with any integer.
Subsection 1.1.3 Partial Sums and Convergence
Unlike finite sums, this expression comes with no guarantee that it evaluates to anything. The problem of infinite series is precisely this: how do we add up infinitely many things? This isn't a problem that algebra can solve, but calculus, with the use of limits, can give a reasonable answer. We need to set up an approximation process and take its limit, just as we did for derivatives and integrals. The approximation process is called partial sums. Instead of taking the entire sum to infinity, let's just take a piece of finite length.
Definition 1.1.2.
The \(n\)th partial sum of an infinite series is the sum of the first \(n\) terms.
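\begin{equation*} s_n = \sum_{k=1}^{n} a_k = a_1 + a_2 + \ldots + a_n \end{equation*}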
Since these are finite sums, we can actually calculate them. They serve as approximations to the total infinite sum. Moreover, these partial sums \(\{s_n\}_{n=1}^\infty\) define a sequence. We can take the limit of the sequence of partial sums. This is the limit of the approximation process, so it should calculate the value of the series.
Definition 1.1.3.
The value of an infinite series is the limit of the sequence of partial sums, if the limit exists.
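\begin{equation*} \sum_{n=1}^{\infty} a_n = \lim_{n \rightarrow \infty} s_n \end{equation*}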
If this limit exists, we call the series convergent. Otherwise, we call the series divergent.
Example 1.1.4.
The first and most classical example is simply Zeno's paradox. If we are trying to go from \(0\) to \(1\text{,}\) first we travel \(\frac{1}{2}\text{,}\) then \(\frac{1}{4}\text{,}\) then \(\frac{1}{8}\text{,}\) and so on. We represent this paradox as an infinite sum.
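\begin{equation*} \sum_{n=1}^{\infty} \frac{1}{2^n} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \ldots \end{equation*}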
Let's look at the partial sums.
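\begin{align*} s_1 \amp = \frac{1}{2}\\ s_2 \amp = \frac{1}{2} + \frac{1}{4} = \frac{3}{4}\\ s_3 \amp = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} = \frac{7}{8}\\ s_n \amp = \frac{2^n - 1}{2^n} = 1 - \frac{1}{2^n} \end{align*}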
Since we have a general expression for the partial sums, we can take the limit.
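\begin{equation*} \lim_{n \rightarrow \infty} s_n = \lim_{n \rightarrow \infty} \left( 1 - \frac{1}{2^n} \right) = 1 \end{equation*}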
Unsurprisingly, we get that the total distance travelled from \(0\) to \(1\) is simply \(1\) unit. This gives a justification for saying that we can travel an infinite number of smaller and smaller intervals, since all those infinitely many intervals add up to a finite distance. (Whether this actually solves Zeno's paradox is a question left for the philosophers.)
Example 1.1.5.
Now consider the sum of the harmonic series. We are going to analyze the partial sums. We don't get a general formula, but we can establish some lower bounds for these partial sums.
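\begin{equation*} \sum_{n=1}^{\infty} \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \ldots \end{equation*}
\begin{align*} s_4 \amp = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4}\\ \amp > 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} = 2 \end{align*}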
The inequality holds since \(\frac{1}{3} > \frac{1}{4}\) and all other terms remain the same.
\begin{align*} s_8 \amp = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8}\\ \amp > 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} = \frac{5}{2} \end{align*}We replace all the fractions without a power of \(2\) in the denominator with smaller terms to satisfy the inequality.
\begin{align*} s_{16} \amp > 3\\ \end{align*}We can generate a lower bound for \(s_{2^n}\) in this pattern.
\begin{align*} s_{32} \amp > \frac{7}{2}\\ s_{64} \amp > 4\\ s_{128} \amp > \frac{9}{2}\\ s_{256} \amp > 5 \end{align*}Taking every second power of two gives us partial sums larger than each successive positive integer: \(s_4 > 2\text{,}\) \(s_{16} > 3\text{,}\) \(s_{64} > 4\text{,}\) \(s_{256} > 5\text{,}\) and so on.
The lower bounds get larger and larger: this pattern gives \(s_{2^n} \geq 1 + \frac{n}{2}\text{.}\) The limit of the sequence of partial sums must be at least as large as the limit of these lower bounds.
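\begin{equation*} \lim_{n \rightarrow \infty} s_n \geq \lim_{n \rightarrow \infty} \left( 1 + \frac{n}{2} \right) = \infty \end{equation*}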
The harmonic series is divergent. This is something of a surprising result, since the harmonic series looks similar to the series defining Zeno's paradox. However, the terms of the harmonic series are large enough to eventually add up to something larger than any finite number.
The harmonic series in the previous example diverged and the limit of its partial sums was \(\infty\text{.}\) For such series, it is not uncommon to write
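\begin{equation*} \sum_{n=1}^{\infty} \frac{1}{n} = \infty\text{.} \end{equation*}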
As with the use of infinity in limits, this does not imply that infinity is a number. Rather, \(= \infty\) in this course is a symbol meaning larger and larger without bound. In a similar notation, when a series converges, it's also common to see this written as
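\begin{equation*} \sum_{n=1}^{\infty} a_n \lt \infty\text{,} \end{equation*}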
particularly when the series has positive terms. This again does not imply that infinity is a number which can be compared, but is just a short-hand symbol for convergence.
Studying the harmonic series in Example 1.1.5 leads to an important observation: it is possible to build a series whose terms get smaller and smaller but whose value is still infinite. This lends some credence to the initial concern of Zeno's paradox, since all these smaller and smaller pieces may eventually add up to something infinite. One might have the intuition that if the terms become very small (as with the harmonic series), the series should have a finite sum; the harmonic series is a counter-example to this intuition. However, the reverse intuition holds, as seen in the following result.
Proposition 1.1.6.
(The Test for Divergence) If we have an infinite series
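\begin{equation*} \sum_{n=1}^{\infty} a_n \end{equation*}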
such that
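\begin{equation*} \lim_{n \rightarrow \infty} a_n \neq 0\text{,} \end{equation*}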
then the series must diverge.
Using the test for divergence and the harmonic series example, we can rephrase the important relationship between the convergence of a series and the limit of its terms. For an infinite series, the fact that the terms tend to zero is necessary for convergence, but it is not sufficient. It is very tempting to assume that the terms tending to zero is sufficient; be careful not to fall into this trap.
Example 1.1.7.
Another important example is an alternating series of positive and negative ones.
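\begin{equation*} \sum_{n=1}^{\infty} (-1)^{n+1} = 1 - 1 + 1 - 1 + 1 - \ldots \end{equation*}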
We can determine a pattern for the even and odd partial sums.
\begin{align*} s_{2n} \amp = 0 \ \ \forall n \in \NN\\ s_{2n+1} \amp = 1 \ \ \forall n \in \NN\\ \lim_{n \rightarrow \infty} s_n \amp \ \text{DNE} \end{align*}This series does not converge, even though it doesn't grow to infinity. There is simply no way to settle on a value when the partial sums keep switching back and forth from \(0\) to \(1\text{.}\)
Example 1.1.8.
There are some nice examples where algebraic manipulation leads to reasonable partial sums. In this example (and similar series), the middle terms in each successive partial sum cancel; these series are called telescoping series.
Almost all the terms conveniently cancel out, leaving only the first and the last.
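A standard example of a telescoping series comes from a partial fraction decomposition.
\begin{align*} \sum_{n=1}^{\infty} \frac{1}{n(n+1)} \amp = \sum_{n=1}^{\infty} \left( \frac{1}{n} - \frac{1}{n+1} \right)\\ s_n \amp = \left( 1 - \frac{1}{2} \right) + \left( \frac{1}{2} - \frac{1}{3} \right) + \ldots + \left( \frac{1}{n} - \frac{1}{n+1} \right) = 1 - \frac{1}{n+1}\\ \lim_{n \rightarrow \infty} s_n \amp = \lim_{n \rightarrow \infty} \left( 1 - \frac{1}{n+1} \right) = 1 \end{align*}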
Definition 1.1.9.
The factorial of a natural number \(n\) is written \(n!\text{.}\) It is defined to be the product of all natural numbers up to and including \(n\text{.}\)
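\begin{equation*} n! = n \cdot (n-1) \cdot (n-2) \cdots 3 \cdot 2 \cdot 1 \end{equation*}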
In addition, we define \(0! = 1\text{.}\) (Why? There are good reasons!)
The factorial grows very rapidly. Even by the time we get to \(n=40\text{,}\) the factorial is already a ridiculously large number.
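\begin{equation*} 40! \approx 8.16 \times 10^{47} \end{equation*}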
Asymptotically, the factorial grows even faster than the exponential.
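That is, for any fixed base \(c\text{,}\)
\begin{equation*} \lim_{n \rightarrow \infty} \frac{c^n}{n!} = 0\text{.} \end{equation*}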
Example 1.1.10.
Here's a series example using the factorial.
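\begin{equation*} \sum_{n=0}^{\infty} \frac{1}{n!} = 1 + 1 + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} + \frac{1}{120} + \ldots \end{equation*}
The first few partial sums are
\begin{align*} s_1 \amp = 1\\ s_2 \amp = 2\\ s_3 \amp = \frac{5}{2} = 2.5\\ s_4 \amp = \frac{8}{3} \approx 2.667\\ s_5 \amp = \frac{65}{24} \approx 2.708 \end{align*}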
It looks like these partial sums are growing slowly and possibly leveling off at some value, perhaps less than \(3\text{.}\) We can't prove it now, but the value of this series is surprising.
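\begin{equation*} \sum_{n=0}^{\infty} \frac{1}{n!} = e \end{equation*}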
This is another definition of the number \(e\text{.}\) We'll prove that this definition is equivalent to our existing definitions in Section 2.2.
Example 1.1.11.
The study of values of particular infinite series is a major project in the history of mathematics. There are many interesting results, some of which are listed here for your curiosity.
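\begin{align*} \sum_{n=1}^{\infty} \frac{1}{n^2} \amp = \frac{\pi^2}{6}\\ \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} \amp = \ln 2\\ \sum_{n=0}^{\infty} \frac{(-1)^n}{2n+1} \amp = \frac{\pi}{4} \end{align*}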
Subsection 1.1.4 Geometric and \(\zeta\) Series
There are two important classes of convergent series which we will use throughout this chapter. The first is the geometric series.
Definition 1.1.12.
For \(|r| \lt 1\text{,}\) the geometric series with common ratio \(r\) is this series.
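\begin{equation*} \sum_{n=0}^{\infty} r^n = 1 + r + r^2 + r^3 + \ldots \end{equation*}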
Proposition 1.1.13.
The geometric series with common ratio \(r\) converges to \(\frac{1}{1-r}\) as long as \(|r|\lt 1\text{.}\)
Proof.
For \(|r| \lt 1\text{,}\) we look at the partial sums. We multiply by \(\frac{1-r}{1-r}\text{.}\) In the expansion in the numerator, most of the terms cancel and we are left with a simple expression.
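\begin{align*} s_n \amp = 1 + r + r^2 + \ldots + r^{n-1}\\ \amp = \frac{(1-r)\left( 1 + r + r^2 + \ldots + r^{n-1} \right)}{1-r}\\ \amp = \frac{1 - r^n}{1-r} \end{align*}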
The convergence of the series is shown by the limit of these partial sums.
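\begin{equation*} \lim_{n \rightarrow \infty} s_n = \lim_{n \rightarrow \infty} \frac{1 - r^n}{1-r} = \frac{1}{1-r}\text{,} \end{equation*}
since \(r^n \rightarrow 0\) when \(|r| \lt 1\text{.}\)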
The second class of convergent series is the \(\zeta\) (zeta) series. These are often called \(p\)-series in standard texts. We are calling them \(\zeta\) series since this definition, in a broader context, gives the famous Riemann \(\zeta\) function.
Definition 1.1.14.
The \(\zeta\) series is the infinite series with terms \(\frac{1}{n^p}\text{.}\)
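\begin{equation*} \zeta(p) = \sum_{n=1}^{\infty} \frac{1}{n^p} \end{equation*}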
Proposition 1.1.15.
The \(\zeta\) series converges when \(p > 1\text{.}\)
We give this without proof for now. The convergence of the \(\zeta\) series can be proved with the Integral Test in Section 1.3. Unlike the geometric series, where we can easily write the value, the actual value of \(\zeta(p)\) is not easy to express in conventional algebraic terms.
Subsection 1.1.5 Manipulation of Series
Once we have defined convergent series, we want to be able to work with them algebraically. There are several important manipulations and techniques.
First, series are linear as long as the indices match up. This means we can bring out constants and split series over sums.
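\begin{align*} \sum_{n=1}^{\infty} c a_n \amp = c \sum_{n=1}^{\infty} a_n\\ \sum_{n=1}^{\infty} (a_n + b_n) \amp = \sum_{n=1}^{\infty} a_n + \sum_{n=1}^{\infty} b_n \end{align*}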
Second, we can remove terms. Since a series is just notation for a sum, we can take out leading terms and write them in conventional notation.
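\begin{equation*} \sum_{n=1}^{\infty} a_n = a_1 + a_2 + a_3 + \sum_{n=4}^{\infty} a_n \end{equation*}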
Third, we can shift the indices. The key idea here is balance: whatever we do to the index in the bounds, we do the opposite to the index in the terms to balance it out.
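\begin{equation*} \sum_{n=1}^{\infty} a_n = \sum_{n=0}^{\infty} a_{n+1} \end{equation*}
Here we lowered the starting bound by one, so we raised the index inside the terms by one to compensate.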
These techniques are all very useful, particularly for combining series.
Example 1.1.16.
In this example, we want to add two series which don't have matching indices. We shift the first series to make the indices match and allow the addition.
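As an illustration (the particular series here are chosen simply to demonstrate the technique), suppose we want to add a series starting at \(n=1\) to a series starting at \(n=0\text{.}\) Shifting the first series down to start at \(n=0\) lets us combine the two sums by linearity.
\begin{align*} \sum_{n=1}^{\infty} \frac{1}{2^n} + \sum_{n=0}^{\infty} \frac{1}{3^n} \amp = \sum_{n=0}^{\infty} \frac{1}{2^{n+1}} + \sum_{n=0}^{\infty} \frac{1}{3^n}\\ \amp = \sum_{n=0}^{\infty} \left( \frac{1}{2^{n+1}} + \frac{1}{3^n} \right) \end{align*}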