Section 1.3 Convergence Tests
When working with series, the most pressing problem is convergence. I need to know whether a series converges before I can do anything else with it. Therefore, mathematicians have devised many ways to test a series for convergence. I have already presented the test for divergence in Proposition 1.1.11. That test was the first of many. In this section, I will present several more ways to test a series for convergence.
Subsection 1.3.1 Comparison of Series
The first couple of tests are comparison tests. For these tests, I can determine the convergence of a series by comparing it to a series that is already known to converge or diverge. The most important series for comparison are the geometric series (defined in Definition 1.1.12) and the \(\zeta\) series (defined in Definition 1.1.14). There are two main types of comparison: direct and asymptotic.
In these comparisons, and for series in general, there are a couple of conventions for using infinity in inequalities. In these conventions, please remember that infinity is not a number and I don't do arithmetic with it. The conventions are, nonetheless, useful for stating the comparison theorems. They simply say that for any real number \(\alpha\text{,}\) it is permissible to write \(-\infty \lt \alpha \lt \infty\text{.}\)
Proposition 1.3.1.
(Direct Comparison) Let \(\{a_n\}\) and \(\{b_n\}\) be the terms of two infinite series and assume \(a_n \leq b_n\) for all \(n \in \NN\text{.}\) Then this inequality of the terms implies an inequality of the series.
In addition, let \(a_n\) and \(b_n\) be positive for all \(n \in \NN\) (and still assume \(a_n \leq b_n\)).
If \(\sum b_n\) is convergent, since the sum \(\sum a_n\) is smaller, it must also be convergent.
If \(\sum a_n\) is divergent, since the sum \(\sum b_n\) is larger, it must also be divergent.
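Stated in symbols (writing the series from \(n = 1\text{,}\) though any starting index behaves the same way), the comparison says that
\begin{equation*}
a_n \leq b_n \text{ for all } n \implies \sum_{n=1}^{\infty} a_n \leq \sum_{n=1}^{\infty} b_n
\end{equation*}
whenever both series converge, and, when all the terms are positive,
\begin{equation*}
\sum_{n=1}^{\infty} b_n \text{ convergent} \implies \sum_{n=1}^{\infty} a_n \text{ convergent}
\qquad \text{and} \qquad
\sum_{n=1}^{\infty} a_n \text{ divergent} \implies \sum_{n=1}^{\infty} b_n \text{ divergent.}
\end{equation*}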
Here are some comparison examples using direct comparison.
Example 1.3.2.
The terms \(\frac{1}{n-2}\) are larger than \(\frac{1}{n}\text{,}\) and the harmonic series \(\sum \frac{1}{n}\) is divergent, so this series is also divergent.
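To spell out the comparison: for \(n \geq 3\) (which is enough, since only the long-term behaviour matters),
\begin{equation*}
n - 2 \lt n \implies \frac{1}{n-2} \gt \frac{1}{n}\text{.}
\end{equation*}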
Example 1.3.3.
The terms \(\frac{1}{3^n + 4n + 1}\) are smaller than \(\frac{1}{3^n}\text{.}\) These terms \(\frac{1}{3^n}\) are the terms of a geometric series with common ratio \(\frac{1}{3}\text{,}\) which converges. Therefore, this series converges.
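Explicitly, since \(4n + 1 \gt 0\text{,}\)
\begin{equation*}
3^n + 4n + 1 \gt 3^n \implies \frac{1}{3^n + 4n + 1} \lt \frac{1}{3^n}\text{,}
\end{equation*}
and (assuming the sum starts at \(n = 1\)) the geometric series gives an explicit bound on the sum:
\begin{equation*}
\sum_{n=1}^{\infty} \frac{1}{3^n} = \frac{\frac{1}{3}}{1 - \frac{1}{3}} = \frac{1}{2}\text{.}
\end{equation*}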
Example 1.3.4.
The terms \(\frac{n+1}{n^2}\) are larger than \(\frac{1}{n}\text{.}\) The latter are the terms of the divergent harmonic series, so this series diverges.
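The inequality comes from splitting the fraction:
\begin{equation*}
\frac{n+1}{n^2} = \frac{1}{n} + \frac{1}{n^2} \gt \frac{1}{n}\text{.}
\end{equation*}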
Example 1.3.5.
For \(n \geq 4\text{,}\) I can make this comparison.
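The value \(e^2\) mentioned below suggests the terms are \(a_n = \frac{2^n}{n!}\text{;}\) under that assumption, one way to set up the comparison is through the ratio of successive terms. For \(n \geq 4\text{,}\)
\begin{equation*}
\frac{a_{n+1}}{a_n} = \frac{2}{n+1} \leq \frac{2}{5} \lt \frac{1}{2}
\qquad \text{so} \qquad
a_n \leq a_4 \left( \frac{1}{2} \right)^{n-4}\text{.}
\end{equation*}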
Therefore, the terms of this series are smaller than the terms of a geometric series with common ratio \(\frac{1}{2}\text{,}\) which converges. Therefore, this series also converges (and converges to \(e^2\text{,}\) as it happens).
In Example 1.3.5, the comparison \(a_n \leq b_n\) was only true for \(n \geq 4\) instead of all \(n \in \NN\text{.}\) This is typical and perfectly acceptable; for everything involving series other than calculating the exact value, I only need to consider the long-term behaviour. For comparison, it is enough that \(a_n \lt b_n\) for all \(n\) past some finite fixed value.
The second type of comparison is asymptotic comparison. I often prefer asymptotic comparison because I don't actually have to establish any inequalities to apply it.
Proposition 1.3.6.
(Asymptotic Comparison) Let \(\{a_n\}\) and \(\{b_n\}\) be the terms of two series and assume that \(a_n \geq 0\) and \(b_n \geq 0\text{.}\) If \(a_n\) and \(b_n\) have the same asymptotic order (in the variable \(n\)), then the two series
have the same convergence behaviour: either they both converge or they both diverge.
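A convenient way to check that two sequences of positive terms have the same asymptotic order (assuming the limit below exists and \(b_n\) is eventually nonzero) is to compute the limit of their ratio and see that it is finite and nonzero:
\begin{equation*}
\lim_{n \to \infty} \frac{a_n}{b_n} = c \qquad \text{with } 0 \lt c \lt \infty\text{.}
\end{equation*}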
In Example 1.3.2, I could have simply said that \(\frac{1}{n-2}\) is asymptotically the same order as \(\frac{1}{n}\text{.}\) The same is true for \(\frac{n+1}{n^2}\) in Example 1.3.4. I'll do one new example as well.
Example 1.3.7.
As an example for both asymptotic comparison and conditional convergence, here are three alternating series. They are all convergent by the alternating series test. Comparison to geometric series or a \(\zeta\) series is used to check their absolute convergence.
This series is absolutely convergent by asymptotic comparison to \(\frac{1}{n^6}\text{.}\)
This series is absolutely convergent by asymptotic comparison to \(\frac{1}{n^2}\text{.}\)
This series is analyzed by asymptotic comparison to \(\frac{1}{\ln n}\text{.}\) The series with terms \(\frac{1}{\ln n}\) satisfies \(\frac{1}{\ln n} \gt \frac{1}{n}\text{,}\) which shows that it is divergent by direct comparison with the harmonic series. The asymptotic order of \(\frac{(-1)^n}{\ln n}\) is that of a divergent series, so the series of absolute values diverges, meaning that the original series is only conditionally convergent.
This last series is a good example of why it is necessary to assume the terms are positive in asymptotic comparison. If the terms can also be negative, then two series can have the same asymptotic order but not the same convergence behaviour: the series with terms \(\frac{(-1)^n}{\ln n}\) converges but the series with terms \(\frac{1}{\ln n}\) diverges.
Subsection 1.3.2 The Integral Test
Now I'm going to move on to direct tests instead of comparison tests. The first test is the integral test.
Proposition 1.3.8.
(Integral Test) If a series has positive terms and \(a_n = f(n)\) for \(f\) a positive, eventually decreasing, integrable function, then the series is convergent if and only if the following improper integral is convergent.
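Writing the lower bound as \(1\) (any starting point matching the series works just as well), the integral in question is
\begin{equation*}
\int_1^{\infty} f(x) \, dx\text{.}
\end{equation*}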
Note that the integral and the resulting series will sum to different numbers: this test doesn't calculate the value of the sum. It just tells whether the sum is convergent by comparing it to an improper integral.
Example 1.3.9.
As promised in Definition 1.1.14, the integral test allows me to prove that the \(\zeta\) series converges if and only if \(p \gt 1\text{.}\) Let me work through the corresponding improper integral in three cases. First, consider \(p = 1\text{.}\)
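Here the terms are \(\frac{1}{n}\text{,}\) so (taking the integral from \(1\text{;}\) any starting point gives the same conclusion) the matching improper integral is
\begin{equation*}
\int_1^{\infty} \frac{1}{x} \, dx = \lim_{b \to \infty} \ln b = \infty\text{.}
\end{equation*}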
This integral diverges, so the \(\zeta\) series also diverges for \(p=1\text{.}\) Now consider any \(p \lt 1\text{.}\) When \(p \lt 1\text{,}\) \(n^p \lt n\text{,}\) so \(\frac{1}{n^p} \gt \frac{1}{n}\text{.}\) Since I already proved divergence for \(p=1\) and the terms here are larger, direct comparison proves that the \(\zeta\) series diverges for \(p \lt 1\text{.}\) Finally, I'll do the integral when \(p \gt 1\text{.}\)
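Again starting the integral at \(1\text{,}\) and using the fact that \(b^{1-p} \to 0\) as \(b \to \infty\) when \(p \gt 1\text{,}\)
\begin{equation*}
\int_1^{\infty} \frac{1}{x^p} \, dx = \lim_{b \to \infty} \frac{b^{1-p} - 1}{1 - p} = \frac{1}{p-1}\text{.}
\end{equation*}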
This integral converges, showing that the \(\zeta\) series converges when \(p \gt 1\text{.}\)
Subsection 1.3.3 The Ratio and Root Tests
There are two final tests I want to introduce. These two tests have very similar structure, so I'll group them together.
Proposition 1.3.10.
(Ratio Test) If \(a_n\) are the terms of a series, consider the limit of the ratio of the terms.
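In symbols (taking absolute values, so that the same statement also covers series with negative terms), the limit is
\begin{equation*}
L = \lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right|\text{.}
\end{equation*}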
If this limit is greater than \(1\) (including infinity), then the series diverges. If this limit is less than \(1\text{,}\) then the series converges. If the limit is \(1\text{,}\) the test is inconclusive.
Proposition 1.3.11.
(Root Test) If \(a_n\) are the terms of a series, consider the limit of the roots of the terms.
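In symbols (again taking absolute values), the limit is
\begin{equation*}
L = \lim_{n \to \infty} \sqrt[n]{|a_n|}\text{.}
\end{equation*}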
If this limit is greater than \(1\) (or infinity), then the series diverges. If this limit is less than \(1\text{,}\) then the series converges. If the limit is \(1\text{,}\) the test is inconclusive.
The ratio test is useful for powers and particularly for factorials. The root test is obviously useful for powers.
Subsection 1.3.4 Testing Strategies
There are many approaches to testing the convergence of a series: looking at partial sums, testing for divergence, comparison, asymptotic comparison, the alternating series test, the integral test, the ratio test, and the root test. It is difficult to know where to start and which tests or techniques to use. Here are some pointers and strategies.
Looking at a series for asymptotic order is often the easiest first step. The main comparisons are with geometric series and \(\zeta\)-series.
Using the test for divergence is also often an easy first step. If the terms do not tend to \(0\text{,}\) the series cannot converge. Remember that this can only prove divergence, not convergence.
If the series is an alternating series, the alternating series test is likely the easiest approach, since the limit of the terms is the only calculation needed.
The integral test is often the best approach if the series involves transcendental functions, such as exponentials, logarithms or trigonometric functions. These functions can be difficult to handle with the other tests for series, which are often focused on discrete methods.
The ratio test is often the best approach when the terms involve factorials. It is also very useful for terms which have the index in the exponent.
The root test is rarely used. It also helps when the index is in the exponent, but most of those cases can also be done with the ratio test. There are a few rare cases where the root test is by far the best approach.
A final important observation is that convergence only cares about the long-term behaviour of the series. Any finite pieces at the start are negligible. This is a nice observation for many of the tests: comparisons only need to work eventually, integrals can be taken on \([a,\infty)\) for some \(a>0\text{,}\) and a series which eventually becomes an alternating series can use the alternating series test.
Example 1.3.12.
For an extreme example, consider this series.
The first \(10^{300}\) terms of this series are enormous numbers and their sum is simply ridiculous. However, the series eventually becomes a \(\zeta\)-series with \(p=2\text{,}\) which converges. Therefore, this sum is finite. The ridiculous number I get from the first \(10^{300}\) terms is very, very large, but certainly finite. Any very, very large number is negligible when asking about infinity.
Subsection 1.3.5 Testing Examples
Now that I have all the tools at my disposal, here are a bunch of examples.
Example 1.3.13.
The terms are \(\frac{1}{n^{\frac{2}{3}}}\text{,}\) so this is a \(\zeta\) series. Since \(\frac{2}{3} \lt 1\text{,}\) this diverges.
Example 1.3.14.
The terms are \(\left( \frac{2}{e} \right)^k\text{,}\) so this is a geometric series. \(\frac{2}{e} \lt 1\) so the series converges.
Example 1.3.15.
This is an alternating series. The limit of the terms is zero, so the series converges using the alternating series test.
Example 1.3.16.
The terms do not tend to zero, so the series is divergent by the test for divergence.
Example 1.3.17.
The terms are asymptotically the same as \(\frac{1}{k^2}\text{.}\) Those terms are from a convergent \(\zeta\) series (with \(p=2\)). Therefore, the series converges by asymptotic comparison.
Example 1.3.18.
This is an alternating series. The limit of the terms is zero, so the series converges by the alternating series test.
Example 1.3.19.
The terms are asymptotically equivalent to \(\frac{1}{e^n}\text{.}\) These are the terms of a geometric series with common ratio \(\frac{1}{e}\text{,}\) which is a convergent geometric series. Therefore, the series converges by asymptotic comparison.
Example 1.3.20.
The terms are asymptotically equivalent to \(\frac{1}{k^{\frac{3}{2}}}\text{.}\) These are the terms of a \(\zeta\) series with \(p = \frac{3}{2}\text{,}\) which is a convergent \(\zeta\) series. Therefore, this series converges by asymptotic comparison.
Example 1.3.21.
This is an alternating series. The limit of the terms is zero, so the alternating series test gives convergence. In addition, the absolute value of the terms is \(\frac{n}{n^3+4}\text{,}\) which is asymptotically equivalent to \(\frac{1}{n^2}\text{.}\) These are the terms of a \(\zeta\) series with \(p=2\text{,}\) which is a convergent \(\zeta\) series. Therefore, the series is also absolutely convergent by asymptotic comparison.
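The asymptotic claim can be checked directly with the ratio of the terms:
\begin{equation*}
\lim_{n \to \infty} \frac{\frac{n}{n^3+4}}{\frac{1}{n^2}} = \lim_{n \to \infty} \frac{n^3}{n^3 + 4} = 1\text{.}
\end{equation*}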
Example 1.3.22.
The factorial suggests that the ratio test is the best approach. I'll calculate the limit of the ratio of the terms.
By the ratio test, since the limit is less than one, the series is convergent.
Example 1.3.23.
The index is in the exponent, so I'll try the ratio test. I calculate the limit of the ratio of the terms.
By the ratio test, since the limit is less than 1, this series is convergent.
Example 1.3.24.
This involves a logarithm, so I'll try the integral test. The resulting integral uses the substitution \(u = \ln x\text{.}\)
By the integral test, since the matching improper integral converges, the series is convergent. Remember that the value of the series is not \(e\text{;}\) the test only says that the convergence behaviours are the same.
Example 1.3.25.
The presence of a factorial means that the ratio test is probably the best approach. I'll calculate the limit of the ratio of the terms.
The limit is less than 1, so the series is convergent.
Example 1.3.26.
There are factorials again, so the ratio test is likely the best choice.
The limit is less than 1, so the series is convergent.
Example 1.3.27.
With the logarithm and the square root functions involved, the integral test is most appropriate here. However, the resulting integral is still difficult. I'll use integration by parts.
The integral diverges, so the series must as well.
Example 1.3.28.
I use the integral test again. In the integral, I use the substitution \(u = \ln x\text{.}\)
The integral is divergent, so the sum is divergent as well. Also, note the following inequality.
It seems that comparison should be helpful with a series of this type. However, the inequality shows that this series sits asymptotically between the harmonic series and any convergent \(\zeta\) series. For comparison purposes, it is slightly larger than a convergent series and slightly smaller than a divergent series, which is entirely unhelpful. This frustration is not uncommon with comparison; often, the trickiest examples are those that fall unhelpfully right between the standard comparison series.
Example 1.3.29.
I use the integral test again. In the integral, I use the substitution \(u =x^2\text{.}\)
The integral converges, so the sum does as well. Note, again, that the sum does not have the value \(\frac{e}{2}\text{.}\)
Example 1.3.30.
The root test is good for exponents, so let me try that test. I calculate the limit of the \(n\)th root of the term.
This limit is \(1\text{,}\) so the test is inconclusive. Instead of using the root test, I can look at the behaviour of the terms. The terms approach \(\pm 1\text{,}\) which is not zero, so the series must diverge by the test for divergence.
Example 1.3.31.
I use the root test again. I take the limit of the \(n\)th roots of the terms.
The limit is less than 1, so the series converges.
Example 1.3.32.
There is an interesting comparison argument which I can use to tackle this difficult example. The derivative of tangent is \(\sec^2 x\text{.}\) Since \(\sec x\) satisfies \(\sec x \geq 1\) or \(\sec x \leq -1\text{,}\) I can conclude that \(\sec^2 x \geq 1\text{.}\) That is, the slope of tangent is always at least \(1\text{.}\) Since \(\tan 0 = 0\text{,}\) that means that near the origin, \(\tan x \geq x\text{.}\) Equivalently, for large \(n\text{,}\) \(\tan \frac{1}{n} \geq \frac{1}{n}\text{.}\) This allows me to compare the series to the harmonic series: the terms are at least as large as those of the harmonic series, and the harmonic series diverges, so this series also diverges.
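To make the slope argument precise, the Mean Value Theorem gives, for any \(x\) with \(0 \lt x \lt \frac{\pi}{2}\text{,}\) some \(c \in (0,x)\) such that
\begin{equation*}
\tan x = \tan x - \tan 0 = x \sec^2 c \geq x\text{,}
\end{equation*}
and applying this with \(x = \frac{1}{n}\) gives \(\tan \frac{1}{n} \geq \frac{1}{n}\) for all \(n \geq 1\text{.}\)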