Section 1.1 Infinite Series
Subsection 1.1.1 Zeno's Paradox
The first two chapters of these notes, on Infinite Series and Taylor Series, cover the same material as the final two chapters of the Calculus II notes. Much of this is review, and many passages are taken directly from the Calculus II notes, but there is a new focus on Taylor series and new material on approximation and error analysis.
There are three classical branches of calculus. The first two, derivatives and integrals, command the vast majority of the time and energy in most first year calculus classes. In many universities, these two topics are the entire course. However, there is a third branch of the calculus which deserves equal attention: infinite series.
In some ways, the problem of infinite series is older than the problems motivating derivatives and integrals. Infinite series go back at least to early Greek mathematics, where thinkers struggled with the puzzles of infinity. The most famous of those ancient puzzles is known as Zeno's Paradox.
There are many forms of Zeno's Paradox; I will present one relatively common version. If you wish to travel from point \(a\) to point \(b\text{,}\) then first you must travel half-way. Having gone halfway to \(b\text{,}\) you must again cover half the remaining distance. Having gone \(\frac{3}{4}\) of the way to \(b\text{,}\) there is still a distance remaining, and you still must first cover half that distance. Repeating this process gives an infinite series of halves, all of which must be traversed to travel from \(a\) to \(b\text{.}\) Since doing an infinite number of things is not humanly possible, you will never be able to reach \(b\text{.}\) Finally, since this holds for any two points \(a\) and \(b\text{,}\) movement is impossible.
Obviously, Zeno's paradox doesn't hold, since movement is, indeed, possible. But Zeno's paradox has commanded the attention and imagination of philosophers and mathematicians for over 2000 years, as they struggled to deal with the infinity implicit in even the smallest movement. Infinite series is one way (though, some would argue, an incomplete way) of dealing with Zeno's paradox.
Subsection 1.1.2
Definition 1.1.1.
If \(\{a_n\}\) is a sequence, then the sum of all of its infinitely many terms \(a_n\) is called an infinite series. Infinite series are usually written with sigma notation.
\[ \sum_{n=1}^{\infty} a_n \]
The number \(n\) is called the index and the numbers \(a_n\) are called the terms. Forgetting the sum produces the sequence of terms \(\{a_n\}_{n=1}^\infty\text{.}\) Though the above notation started with \(n=1\text{,}\) the series can have a starting index of any integer.
Subsection 1.1.3 Partial-Sums and Convergence
Unlike finite sums, there is no guarantee that an infinite series evaluates to anything. The problem of infinite series is precisely this: how can I add up infinitely many things? This isn't a problem that algebra can solve, but calculus, with the use of limits, can give a reasonable answer. I need to set up an approximation process and take the limit, just as I did for derivatives and integrals. The approximation process is called partial sums. Instead of taking the entire sum to infinity, I just take a sum of finitely many pieces.
Definition 1.1.2.
The \(n\)th partial sum of an infinite series is the sum of its first \(n\) terms.
\[ s_n = \sum_{k=1}^{n} a_k \]
Since these are finite sums, I can actually calculate them. They serve as approximations to the total infinite sum. Moreover, these partial sums \(\{s_n\}_{n=1}^\infty\) define a sequence. I can take the limit of the sequence of partial sums. This is the limit of the approximation process, so it should calculate the value of the series.
Definition 1.1.3.
The value of an infinite series is the limit of the sequence of partial sums, if the limit exists.
If this limit exists, the series is convergent. Otherwise, the series is divergent.
With limits, writing
\[ \lim_{n \to \infty} a_n = \infty \]
was a shorthand. The symbol ``\(\infty\)'' was a way to write ``getting larger and larger without bound.'' Something similar happens for infinite series. Some divergent series diverge because the sum grows larger and larger without bound (as in Example 1.1.5 below). For these series, it is common to use a very similar shorthand notation.
\[ \sum_{n=1}^{\infty} a_n = \infty \]
Sometimes, mathematicians also use the infinity symbol for convergence. This takes some liberties with the notation, stretching the intention of the symbols, but it is nonetheless common practice. For a convergent series, I would indicate the convergence with the following notation.
\[ \sum_{n=1}^{\infty} a_n \lt \infty \]
I would probably use this notation only for sums with all positive terms, where being ``less than infinity'' would clearly indicate that it sums to a finite number.
Subsection 1.1.4 First Convergence Examples
Example 1.1.4.
I'll start with Zeno's paradox. I can express this problem as the sum of a geometric series with common ratio \(\frac{1}{2}\text{.}\)
\[ \sum_{n=1}^{\infty} \frac{1}{2^n} \]
I'll calculate the first few partial sums.
From these first few partial sums, I can claim that there is a pattern of partial sums: \(s_n = 1 - \frac{1}{2^n}\text{.}\)
I haven't proved this pattern of partial sums, but there is still a strong informal argument for it. (Proving the patterns of partial sums is often done through a proof technique called mathematical induction. This is an important proof technique and necessary for a more formal presentation of infinite series, but I'm not going to cover it in this course.) Now that I have a pattern for the partial sums, I can take the limit of the pattern to calculate the value of the infinite series.
Unsurprisingly, I conclude that the total distance travelled from \(0\) to \(1\) is simply \(1\) unit. This gives a justification for saying that I can travel an infinite number of smaller and smaller intervals, since all those infinitely many intervals add up to a finite distance. (Whether this actually solves Zeno's paradox is a question left for the philosophers.)
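The partial-sum pattern from this example is easy to check numerically. Here is a small sketch (my own illustration, not from the notes) that computes the first few partial sums of \(\sum \frac{1}{2^n}\) and compares them against the claimed pattern \(s_n = 1 - \frac{1}{2^n}\text{.}\)

```python
# Partial sums of the geometric series 1/2 + 1/4 + 1/8 + ...
# Each partial sum should match the pattern s_n = 1 - 1/2^n.
def partial_sums(terms):
    total = 0.0
    sums = []
    for t in terms:
        total += t
        sums.append(total)
    return sums

n_max = 10
sums = partial_sums(1 / 2**n for n in range(1, n_max + 1))
for n, s in enumerate(sums, start=1):
    assert abs(s - (1 - 1 / 2**n)) < 1e-12  # the conjectured pattern holds
print(sums[-1])  # close to 1, the value of the series
```

The partial sums climb toward \(1\) but never exceed it, which is exactly the behaviour the limit calculation predicts.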
Example 1.1.5.
Now consider the sum of the harmonic series \(\sum_{n=1}^{\infty} \frac{1}{n}\text{.}\) I am going to analyze the partial sums. I will not produce a general formula, but I can define some lower bounds for these partial sums.
The inequality holds since \(\frac{1}{3} \gt \frac{1}{4}\) and all other terms remain the same.
This time, I replace all the fractions without powers of \(2\) in the denominator with smaller terms to satisfy the inequality.
I can generate a lower bound for any \(s_{2^n}\) in this way, producing a pattern of lower bounds.
Taking every second power of two gives partial sums larger than the sequence of positive integers.
The lower bounds get larger and larger. The limit of the sequence of partial sums is larger than the limit of these lower bounds.
The harmonic series is divergent. This is something of a surprising result, since the harmonic series looks similar to the series defining Zeno's paradox. However, the terms of the harmonic series are large enough to eventually add up to something larger than any finite number.
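The grouping argument above gives the lower bound \(s_{2^n} \geq 1 + \frac{n}{2}\text{.}\) The following sketch (my own check, not part of the notes) verifies this bound numerically and shows the partial sums drifting past any fixed value.

```python
# Lower bound for the harmonic series: the partial sum of the first
# 2^n terms is at least 1 + n/2, so the partial sums grow without bound.
def harmonic_partial(N):
    return sum(1 / k for k in range(1, N + 1))

for n in range(1, 15):
    s = harmonic_partial(2**n)
    assert s >= 1 + n / 2  # the bound from the grouping argument

print(harmonic_partial(2**14))  # already larger than 10, and still growing
```

The growth is extremely slow (roughly logarithmic in the number of terms), which is why the divergence is not obvious from the first few partial sums.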
Example 1.1.6.
Another important example is an alternating series of positive and negative ones. I'll calculate the first few partial sums.
There is a pattern here. The sequence of partial sums is a sequence that switches back and forth between \(1\) and \(0\text{.}\)
This series does not converge, even though it doesn't grow to infinity. There is simply no way to settle on a value when the partial sums keep switching back and forth from \(0\) to \(1\text{.}\) Unlike the geometric series behind Zeno's paradox, this is a series where I simply can't reckon with the infinite number of steps. Since there is no last step in an infinite process, I can never decide if the last addition is \((-1)\text{,}\) giving a value of \(0\text{,}\) or if the last addition is \((+1)\text{,}\) giving a value of \(1\text{.}\)
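The oscillation of the partial sums is easy to see directly. This short sketch (my own illustration) lists the first few partial sums of \(\sum_{n=1}^{\infty} (-1)^{n+1}\text{.}\)

```python
# Partial sums of 1 - 1 + 1 - 1 + ... oscillate between 1 and 0,
# so the sequence of partial sums has no limit.
def partial_sums(terms):
    total, sums = 0, []
    for t in terms:
        total += t
        sums.append(total)
    return sums

sums = partial_sums((-1) ** (n + 1) for n in range(1, 9))
print(sums)  # [1, 0, 1, 0, 1, 0, 1, 0]
```

A sequence that keeps alternating between two distinct values has no limit, so the series is divergent by the definition of the value of a series.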
Example 1.1.7.
There are some nice examples where algebraic manipulation leads to reasonable partial sums. In this example (and similar series), the middle terms in each successive partial sum cancel; these series are called telescoping series.
Almost all the terms cancel out, leaving only the first and last term. This lets me determine a pattern for the partial sums and take the limit of that pattern to determine the value of the series.
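The notes don't reproduce the specific series here, so as a representative example (my choice, not necessarily the series intended) take \(\sum_{n=1}^{\infty} \frac{1}{n(n+1)}\text{.}\) Partial fractions give \(\frac{1}{n(n+1)} = \frac{1}{n} - \frac{1}{n+1}\text{,}\) the middle terms cancel, and the \(k\)th partial sum collapses to \(1 - \frac{1}{k+1}\text{.}\)

```python
# Telescoping example: 1/(n(n+1)) = 1/n - 1/(n+1), so the k-th partial
# sum collapses to 1 - 1/(k+1), and the series converges to 1.
def partial_sum(k):
    return sum(1 / (n * (n + 1)) for n in range(1, k + 1))

for k in [1, 5, 50]:
    assert abs(partial_sum(k) - (1 - 1 / (k + 1))) < 1e-12  # telescoped form

print(partial_sum(10**4))  # very close to 1
```

The closed form for the partial sums is what makes telescoping series so pleasant: the limit can be taken directly, with no estimates needed.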
I need a new definition to present the last two examples.
Definition 1.1.8.
The factorial of a natural number \(n\) is written \(n!\text{.}\) It is defined to be the product of all natural numbers up to and including \(n\text{.}\)
In addition, \(0! = 1\text{.}\)
The factorial grows very rapidly. Even by \(n=40\text{,}\) the factorial is already a ridiculously large number.
Asymptotically, the factorial grows even faster than the exponential. In particular, I can use this fact in asymptotic analysis of limits.
Example 1.1.9.
Here's a series example using the factorial. I'll calculate the first few partial sums.
It looks like these partial sums are growing slowly and possibly leveling off at some value, perhaps less than 3. I can't prove it now, but the value of this series is surprising.
This is another definition for the number \(e\text{.}\) I'll prove that this definition is equivalent to the previous definitions in Subsection 2.2.3.
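Assuming the series in question is \(\sum_{n=0}^{\infty} \frac{1}{n!}\) (the standard factorial series for \(e\)), the convergence is very fast and easy to observe numerically. This sketch is my own check, not part of the notes.

```python
import math

# Partial sums of sum_{n=0}^infty 1/n! approach e = 2.71828...
# The factorial in the denominator makes the tail shrink very quickly.
total = 0.0
for n in range(0, 15):
    total += 1 / math.factorial(n)

print(total)  # approximately e
assert abs(total - math.e) < 1e-10
```

Only fifteen terms already agree with \(e\) to ten decimal places, consistent with the observation that the partial sums level off at a value just under \(3\text{.}\)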
Example 1.1.10.
The study of values of particular infinite series is a major project in the history of mathematics. There are many interesting results, some of which are listed here for your curiosity.
Subsection 1.1.5 The Test for Divergence
As several examples have illustrated, it is possible for a divergent series to have terms which decrease to zero. The comparison between the geometric series with common ratio \(\frac{1}{2}\) and the harmonic series is an important one. The geometric series (which addresses Zeno's paradox) has terms \(\frac{1}{2^n}\text{;}\) these terms approach zero and the whole series adds up to exactly \(1\text{.}\) The harmonic series also has terms \(\frac{1}{n}\) which approach zero, but the series diverges. This leads to a very important observation: the fact that the terms of a series decay to zero is not sufficient to conclude that the series converges.
The inverse logical statement, however, does hold.
Proposition 1.1.11.
(The Test for Divergence) Consider an infinite series \(\sum_{n=1}^{\infty} a_n\text{.}\) Assume that the limit of the terms of the series is nonzero.
\[ \lim_{n \to \infty} a_n \neq 0 \]
Under this assumption, the series must diverge.
This is called the Test for Divergence because the logic here can only be used to prove divergence. If the limit of the terms is zero, I don't know if the series converges. It might be like the geometric series with common ratio \(\frac{1}{2}\) and converge. It might be like the harmonic series and diverge. I just don't know. However, if the limit of the terms is not zero, I know for certain that the series will diverge.
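As a quick illustration of the test (my own example, not from the notes): the series \(\sum_{n=1}^{\infty} \frac{n}{n+1}\) has terms tending to \(1\text{,}\) not \(0\text{,}\) so it must diverge.

```python
# Test for Divergence in action: the terms n/(n+1) tend to 1, not 0,
# so the series sum n/(n+1) diverges. Numerically, the terms settle
# near 1 while the partial sums keep growing roughly like n itself.
terms = [n / (n + 1) for n in range(1, 10**4 + 1)]
print(terms[-1])     # very close to 1 -- the terms do not go to 0
partial = sum(terms)
print(partial)       # near 10^4 and still growing with more terms

assert terms[-1] > 0.99
assert partial > 9000
```

Each term contributes almost a full unit to the sum, so the partial sums clearly cannot settle down to a finite value.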
Subsection 1.1.6 Two Central Examples
There are two important classes of convergent series which I will use frequently. These two examples will be particularly important for future comparison results. The first of the two examples has already been mentioned in connection with Zeno's paradox: the geometric series. I'll now give a full definition of it.
Definition 1.1.12.
Let \(r \in \RR\text{.}\) The geometric series with common ratio \(r\) is this series.
\[ \sum_{n=0}^{\infty} r^n \]
Proposition 1.1.13.
The geometric series with common ratio \(r\) converges to \(\frac{1}{1-r}\) as long as \(|r| \lt 1\text{.}\)
Proof.
For \(|r| \lt 1\text{,}\) I will calculate the \(k\)th partial sums. In the calculation, I multiply by \(\frac{1-r}{1-r}\text{.}\) In the numerator, when I distribute the multiplication of the two terms, all but the very first and the very last products will cancel out, leaving a relatively simple expression.
\[ s_k = \sum_{n=0}^{k} r^n = \frac{(1-r) \sum_{n=0}^{k} r^n}{1-r} = \frac{1 - r^{k+1}}{1-r} \]
The convergence of the series is shown by the limit of these partial sums.
\[ \lim_{k \to \infty} s_k = \lim_{k \to \infty} \frac{1 - r^{k+1}}{1-r} = \frac{1}{1-r} \]
This limit is \(\frac{1}{1-r}\) only because of the assumption that \(|r| \lt 1\text{.}\) Under that assumption, the powers of \(r\) in the numerator of the limit decay to zero, leaving the result as stated.
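The closed form for the partial sums and the limiting value can both be checked numerically. This is a small verification sketch of my own, run for a few values of \(r\) with \(|r| \lt 1\text{.}\)

```python
# Check the geometric series formula: for |r| < 1, the partial sums
# s_k = (1 - r^(k+1)) / (1 - r) approach the value 1 / (1 - r).
def geometric_partial(r, k):
    return sum(r**n for n in range(0, k + 1))

for r in [0.5, -0.3, 0.9]:
    s = geometric_partial(r, 500)
    assert abs(s - (1 - r**501) / (1 - r)) < 1e-9  # closed form for s_k
    assert abs(s - 1 / (1 - r)) < 1e-9             # limiting value
```

For \(r\) close to \(1\) (like \(0.9\)) the convergence is slower, since the powers \(r^{k+1}\) decay to zero less quickly; that is why many terms are used here.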
The second important series is the \(\zeta\) (zeta) series. These are often called \(p\)-series in some texts. I call this the \(\zeta\) series since this definition, in a broader context, is the definition of the famous Riemann \(\zeta\) function.
Definition 1.1.14.
The \(\zeta\) series is the infinite series with terms \(\frac{1}{n^p}\text{.}\)
\[ \zeta(p) = \sum_{n=1}^{\infty} \frac{1}{n^p} \]
Proposition 1.1.15.
The \(\zeta\) series converges when \(p \gt 1\text{.}\)
I state this without proof for now. The convergence of the \(\zeta\) series can be proved with the Integral Test in Subsection 1.3.2. Unlike the geometric series, where I can easily write the value, the actual value of \(\zeta(p)\) is not easy to express in conventional algebraic terms.
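One famous exception is \(p = 2\text{:}\) Euler showed that \(\zeta(2) = \frac{\pi^2}{6}\text{.}\) This sketch (my own check, not part of the notes) compares the partial sums against that value.

```python
import math

# The zeta series with p = 2 converges; Euler showed its value is pi^2/6.
# The partial sums approach this value slowly: the tail after N terms
# behaves roughly like 1/N.
N = 10**5
partial = sum(1 / n**2 for n in range(1, N + 1))

print(partial)         # about 1.64492...
print(math.pi**2 / 6)  # 1.6449340668...
assert abs(partial - math.pi**2 / 6) < 1e-4
```

The slow decay of the tail is typical of \(\zeta\) series: unlike the geometric series, each extra term improves the approximation only a little.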
Subsection 1.1.7 Manipulation of Series
This short section covers a couple useful algebraic manipulations of sigma notation and infinite series. These algebraic tools are useful for understanding series, determining convergence, and combining different series together.
First, series are linear, as long as the indices match up. Series split up over sums and differences in the terms. Constants present in the terms can be factored out of the sigma notation.
Second, since sigma notation is just notation for a sum, pieces of the sum can be written explicitly using the ordinary notation for sums. This is called removing terms from the series.
Third, I can shift the indices. The key idea here is balance: whatever I do to the index in the bounds, I must do the opposite to the index in the terms to balance it out.
Note that the linearity properties required the bounds of the series to match up. Shifting and/or removing terms is very useful for adjusting series so that they have the same starting bounds, and thus can be combined into a single series.
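The three manipulations can be summarized in symbols (my notation, consistent with the conventions above, with \(k\) a fixed positive integer and \(c\) a constant):

```latex
% Linearity: series split over sums, and constants factor out.
\sum_{n=1}^{\infty} (c \, a_n + b_n) = c \sum_{n=1}^{\infty} a_n + \sum_{n=1}^{\infty} b_n

% Removing terms: write the first terms explicitly.
\sum_{n=1}^{\infty} a_n = a_1 + a_2 + \sum_{n=3}^{\infty} a_n

% Shifting the index: raise the starting bound by k, lower the index
% inside the term by k, so every summand is unchanged.
\sum_{n=1}^{\infty} a_n = \sum_{n=1+k}^{\infty} a_{n-k}
```

In the shift, substituting \(m = n + k\) shows the two sides run over exactly the same terms, which is the "balance" described above.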
Example 1.1.16.
In this example, I want to add two series which don't have matching indices. I shift the first series to make the indices match and allow the addition. In the last step below, I also show an example of finding a common denominator with factorials, which is another frequently useful bit of algebra. In the second term, I need to multiply by all the missing factors to build up the factorial in the denominator.
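The notes don't reproduce the specific pair of series, so here is an illustration of the factorial common-denominator step with terms of my own choosing: since \((n+2)! = (n+2)(n+1)\,n!\text{,}\) writing \(\frac{1}{n!}\) over the denominator \((n+2)!\) requires multiplying by the missing factors \((n+1)(n+2)\text{.}\)

```python
import math

# Common denominators with factorials: (n+2)! = (n+2)(n+1) n!, so
#   1/n! + 1/(n+2)! = ((n+2)(n+1) + 1) / (n+2)!
# The first fraction is scaled by the "missing" factors (n+1)(n+2).
for n in range(0, 10):
    lhs = 1 / math.factorial(n) + 1 / math.factorial(n + 2)
    rhs = ((n + 2) * (n + 1) + 1) / math.factorial(n + 2)
    assert abs(lhs - rhs) < 1e-12  # the identity holds term by term
```

The same pattern works for any gap between the factorials: the numerator picks up the product of all the skipped integers.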