Section 11.1 Power Series
Subsection 11.1.1 Series as Functions
The two main examples for comparison in
Subsection 10.3.1 were the geometric series and the
\(\zeta\) series. Both converged for certain values of a parameter;
\(|r|\lt 1\) for the geometric series and
\(p > 1\) for the
\(\zeta\) series. To start this section, I’d like to re-interpret these two series. Instead of thinking of a whole family of different series which depend on a parameter (
\(r\) or
\(p\)), I can think of each family of series as a
function of the parameter. In this view, there is only one series, but the series converges to a function instead of just a number.
For the geometric series, this new perspective defines a function \(f(x)\text{.}\)
\begin{equation*}
f(x) = \sum_{n=0}^\infty x^n
\end{equation*}
The domain of this function is \(|x|\lt 1\text{,}\) since those are the values of the parameter (now the variable) where the geometric series converges. Since I know the value of the geometric series, I actually already have another expression for this function.
\begin{align*}
\amp f(x) = \sum_{n=0}^\infty x^n = \frac{1}{1-x} \amp \amp
|x| \lt 1
\end{align*}
In this way, the geometric series now defines the function \(f(x) = \frac{1}{1-x}\text{.}\) The domain restriction of the function is determined by the convergence of the series: a point \(x\) is in the domain of the function if the series converges for that choice of \(x\text{.}\)
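To make this new perspective concrete, here is a minimal computational sketch in Python (the helper name and the cutoff of 50 terms are my own arbitrary choices) comparing partial sums of the geometric series with the closed form \(\frac{1}{1-x}\) at a few points inside the interval of convergence.
```python
# Minimal sketch: partial sums of the geometric series versus 1/(1-x).
# The cutoff of 50 terms is an arbitrary choice for illustration.

def geometric_partial_sum(x, terms=50):
    """Sum of x^n for n = 0, 1, ..., terms-1."""
    return sum(x**n for n in range(terms))

for x in [0.1, 0.5, -0.5, 0.9]:
    print(f"x = {x:5.2f}: partial sum = {geometric_partial_sum(x):.6f}, "
          f"1/(1-x) = {1 / (1 - x):.6f}")
```
The agreement is very good for \(x\) near \(0\text{;}\) near the edges of the interval \((-1,1)\) the series still converges, but many more terms are needed before the partial sums settle near the closed form.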
I can do the same with the \(\zeta\) series. The reason I called these series ‘\(\zeta\) series’ is that the associated function is called the Riemann \(\zeta\)-function.
\begin{equation*}
\zeta(x) = \sum_{n=1}^\infty \frac{1}{n^x}
\end{equation*}
The domain of this function is \((1,\infty)\text{,}\) since that is where the series converges. (In other branches of mathematics, the domain of \(\zeta\) is extended in new and interesting ways. The vanishing of the \(\zeta\) function is the subject of the famous Riemann Hypothesis, an important unsolved problem in modern mathematics.)
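As a small numerical aside (a sketch of my own; the value \(\zeta(2) = \frac{\pi^2}{6}\) is a classical fact), partial sums of the series at \(x = 2\) settle near \(\pi^2/6\text{,}\) while at \(x = 1\) the partial sums keep growing, reflecting the divergence of the harmonic series.
```python
import math

# Sketch: partial sums of the zeta series at x = 2 (convergent) and x = 1 (divergent).

def zeta_partial_sum(x, terms):
    return sum(1 / n**x for n in range(1, terms + 1))

print(zeta_partial_sum(2, 10_000), math.pi**2 / 6)   # both close to 1.6449...

for terms in [10, 100, 1000, 10_000]:
    print(terms, zeta_partial_sum(1, terms))          # grows roughly like ln(terms)
```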
In general, an infinite series can represent a function \(f(x)\) when the terms \(a_n\) of the series also depend on \(x\text{.}\)
\begin{equation*}
f(x) = \sum_{n=1}^\infty a_n(x)
\end{equation*}
Notice that the variable of the function, \(x\text{,}\) is not the index of the sum \(n\text{.}\) These two numbers are different and must not be confused. The domain of this function is the set of values of \(x\) for which the series converges. Instead of the conventions for determining the domains of functions used before (avoiding division by zero, square roots of negative numbers, and other problems), domain restrictions for these new functions are all about convergence. For these series, convergence is no longer a yes/no question. Instead, it is a domain question: for which real numbers does the series converge?
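Since I will be treating series as functions throughout this section, here is a minimal sketch in Python of what evaluating such a function means in practice: pick an \(x\text{,}\) truncate the sum, and add up finitely many terms. (The function name and the cutoff are my own illustrative choices; truncation says nothing about convergence, which still has to be checked for each \(x\text{.}\))
```python
# Sketch: approximate f(x) = sum of a_n(x) by truncating the series at a finite cutoff.

def series_function(term, x, terms=100):
    """Partial sum of term(n, x) for n = 1, ..., terms."""
    return sum(term(n, x) for n in range(1, terms + 1))

# Example: the zeta series, with terms a_n(x) = 1/n^x, evaluated at x = 3.
print(series_function(lambda n, x: 1 / n**x, 3.0))   # close to zeta(3) = 1.2020...
```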
Subsection 11.1.2 Definition of Power Series
A polynomial \(p(x)\) of degree \(d\) can be written as a finite sum in sigma notation.
\begin{equation*}
p(x) = \sum_{n=0}^d c_n x^n
\end{equation*}
The terms involve powers of the variable (\(x^n\)) and coefficients of those powers (\(c_n\)). What if I let the degree become arbitrarily large, going to infinity? I get a particular kind of infinite series.
Definition 11.1.1.
A series of the form
\begin{equation*}
f(x) = \sum_{n=0}^\infty c_n x^n
\end{equation*}
is called a power series. The real numbers \(c_n\) are called the coefficients of the power series. The whole expression \(c_nx^n\) is still the term of the power series.
The full definition is slightly more general. The previous series was a power series centered at 0. I can centre a power series at any \(\alpha \in \RR\text{.}\) This could also be done for polynomials, but it is much more useful to do for power series.
Definition 11.1.2.
A series of the form
\begin{equation*}
f(x) = \sum_{n=0}^\infty c_n (x-\alpha)^n
\end{equation*}
is called a power series centered at \(\alpha\). The numbers \(c_n\) are still called the coefficients and the number \(\alpha\) is called the centre point. The whole expression \(c_n(x-\alpha)^n\) is still the term.
Subsection 11.1.3 Radii of Convergence
Polynomials are defined for all real numbers; they have no domain restrictions. However, series do have domain restrictions. A power series may or may not converge for all real values of \(x\text{.}\) The first and most important issue when I start using series as functions is determining the domain of convergence. For power series, I will almost always use the ratio test. Recall that the ratio test shows convergence when the limit of the ratio of successive terms is \(\lt 1\text{.}\) I will use some examples to show the various types of behaviours.
Example 11.1.3.
Here is a power series centred at \(\alpha = -2\text{.}\) I’ll determine its convergence using the ratio test.
\begin{align*}
\sum_{n=1}^\infty \frac{(x+2)^n}{n^2} \amp\\
\lim_{n \rightarrow \infty} \left| \frac{a_{n+1}}{a_n}
\right| \amp = \lim_{n \rightarrow \infty} \left|
\frac{\frac{(x+2)^{n+1}}{(n+1)^2}}{\frac{(x+2)^n}{n^2}}
\right| = \lim_{n \rightarrow \infty} \left|
\frac{(x+2)n^2}{(n+1)^2} \right|\\
\amp = |x+2| \lim_{n \rightarrow \infty}
\frac{n^2}{n^2+2n+1} = |x+2| \lt 1
\end{align*}
This series is centered at \(\alpha=-2\text{,}\) and the ratio test tells me that there is convergence on \(|x+2|\lt 1\text{,}\) which is the interval \((-3,-1)\text{.}\) Outside the interval, the series diverges and doesn’t represent a function. The convergence at the endpoints \(-3\) and \(-1\) is undetermined; I would need to check them individually using another type of test.
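As a rough numerical sanity check (my own sketch; the sample points and cutoffs are arbitrary choices), partial sums of this series settle down at a point inside \((-3,-1)\) and blow up at a point outside it.
```python
# Sketch: partial sums of sum_{n>=1} (x+2)^n / n^2 at points inside and
# outside the interval of convergence (-3, -1).

def partial_sum(x, terms):
    return sum((x + 2)**n / n**2 for n in range(1, terms + 1))

for x in [-1.5, 1.0]:   # x = -1.5 is inside the interval, x = 1.0 is outside
    print(x, [round(partial_sum(x, t), 3) for t in (10, 20, 40)])
```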
Example 11.1.4.
Here is a power series centred at \(\alpha = 0\text{.}\) I’ll determine its convergence using the ratio test.
\begin{align*}
\sum_{n=0}^\infty nx^n \amp\\
\lim_{n \rightarrow \infty} \left| \frac{a_{n+1}}{a_n}
\right| \amp = \lim_{n \rightarrow \infty} \left|
\frac{x^{n+1} (n+1)}{x^n n} \right| = |x| \lim_{n
\rightarrow \infty} \frac{n+1}{n} = |x| \lt 1
\end{align*}
The ratio test allows me to conclude that this converges on \((-1,1)\text{.}\) Again, if I wanted to know about convergence at the endpoints (where the ratio test is inconclusive), I’d have to investigate those series with another test.
Example 11.1.5.
Here is a power series centred at \(\alpha = 0\text{.}\) I’ll determine its convergence using the ratio test.
\begin{align*}
\sum_{n=0}^\infty n!x^n \amp\\
\lim_{n \rightarrow \infty} \left| \frac{a_{n+1}}{a_n}
\right| \amp = \lim_{n \rightarrow \infty} \left|
\frac{x^{n+1} (n+1)!}{x^n n!} \right| = |x| \lim_{n
\rightarrow \infty} \frac{(n+1)!}{n!} = |x| \lim_{n
\rightarrow \infty} (n+1)
\end{align*}
This limit is never finite unless \(x=0\text{,}\) where only the constant term survives, so this series converges only at its centre point. This is essentially useless as the definition of a function, since its only value is \(f(0) = 0! = 1\text{.}\)
Example 11.1.6.
Here is a power series centred at \(\alpha = 7\text{.}\) I’ll determine its convergence using the ratio test.
\begin{align*}
\sum_{n=0}^\infty \frac{(-1)^n (x-7)^n}{2^n n!} \amp\\
\lim_{n \rightarrow \infty} \left| \frac{a_{n+1}}{a_n}
\right| \amp = \lim_{n \rightarrow \infty} \left|
\frac{\frac{(x-7)^{n+1}}{2^{n+1}
(n+1)!}}{\frac{(x-7)^n}{2^n n!}} \right| = |x-7|\lim_{n
\rightarrow \infty} \frac{1}{2(n+1)} = 0 \lt 1
\end{align*}
The limit here is \(0\) regardless of the value of \(x\text{,}\) so I have established convergence for all real numbers.
The previous examples represent all of the possible types of convergence behaviour of power series. I summarize the situation in a proposition.
Proposition 11.1.7.
Consider a power series centered at \(x = \alpha\text{.}\)
\begin{equation*}
f(x) = \sum_{n=0}^\infty c_n (x-\alpha)^n
\end{equation*}
Such a series will have precisely one of three convergence behaviours.
It will only converge for \(x = \alpha\text{,}\) where it has the value \(c_0\text{.}\)
It will converge for all of \(\RR\text{.}\)
There will be a real number \(R>0\) such that the series converges on \((\alpha - R, \alpha + R)\text{.}\) It will diverge outside this interval, and the behaviour at the end points is undetermined and has to be checked individually.
Definition 11.1.8.
The positive real number \(R\) in the third case is called the radius of convergence of a power series. I can use this terminology to cover the other two cases as well: in the first case, I can say \(R=0\) and in the second case, I can say \(R = \infty\text{.}\)
For power series, the most important thing to do is to determine the radius of convergence. This determines the domain of the function defined by the series. I’ll do some examples. In each example, I’ll define a series and calculate its convergence using the ratio test.
Example 11.1.9.
\begin{align*}
f(x) \amp = \sum_{n=1}^\infty \sqrt{n} x^n \\
\lim_{n \rightarrow \infty} \left| \frac{a_{n+1}}{a_n}
\right| \amp =
\lim_{n \rightarrow \infty} \left|
\frac{\sqrt{n+1}x^{n+1}}{\sqrt{n} x^n} \right| = \lim_{n
\rightarrow \infty} |x| \sqrt{\frac{n+1}{n}}\\
\amp = |x| \lim_{n \rightarrow \infty} \sqrt{ 1 +
\frac{1}{n} } = |x| \lt 1
\end{align*}
This series converges on \((-1,1)\text{,}\) so the radius of convergence is \(R=1\text{.}\)
Example 11.1.10.
\begin{align*}
f(x) \amp = \sum_{n=1}^\infty \frac{n(x-6)^n}{4^{2n+2}}\\
\lim_{n \rightarrow \infty} \left| \frac{a_{n+1}}{a_n}
\right| \amp = \lim_{n \rightarrow \infty} \left|
\frac{\frac{(n+1)(x-6)^{n+1}}{4^{2n+4}}}
{\frac{n(x-6)^n}{4^{2n+2}}} \right|\\
\amp = |x-6| \lim_{n \rightarrow \infty} \frac{n+1}{n}
\frac{1}{4^2} = \frac{|x-6|}{16} \lt 1 \implies |x-6| \lt
16
\end{align*}
This series converges on \((-10, 22)\text{,}\) an interval centred at \(x=6\text{,}\) so the radius of convergence is \(R=16\text{.}\)
Example 11.1.11.
\begin{align*}
f(x) \amp = \sum_{n=1}^\infty \frac{x^n}{7^n} \\
\lim_{n \rightarrow \infty} \left| \frac{a_{n+1}}{a_n}
\right| \amp = \lim_{n \rightarrow \infty} \left|
\frac{\frac{x^{n+1}}{7^{n+1}}}{\frac{x^n}{7^n}} \right|\\
\amp = |x| \lim_{n \rightarrow \infty} \left| \frac{1}{7}
\right| = \frac{|x|}{7} \lt 1 \implies |x| \lt 7
\end{align*}
The series converges on \((-7,7)\text{,}\) so the radius of convergence is \(R=7\text{.}\)
Example 11.1.12.
\begin{align*}
f(x) \amp = \sum_{n=1}^\infty \frac{x^n}{(1)(3)(5) \ldots
(2n+1)} \\
\lim_{n \rightarrow \infty} \left| \frac{a_{n+1}}{a_n}
\right| \amp = \lim_{n \rightarrow \infty} \left|
\frac{\frac{x^{n+1}}{(1)(3)(5) \ldots
(2n+3)}}{\frac{x^n}{(1)(3)(5)\ldots (2n+1)}} \right|\\
\amp = |x| \lim_{n \rightarrow \infty} \frac{1}{2n+3} = 0
\end{align*}
This convergence doesn’t depend on \(x\text{,}\) since the limit is \(0\) in any case. Therefore, this has an infinite radius of convergence (\(R = \infty\)) and is a function defined on all of \(\RR\text{.}\)
In the previous examples, the use of the ratio test determined the radius of convergence. This is always a valid method, but there are some more direct methods to calculate \(R\) as well. In the following proposition, though I don’t give the proof, the first calculation is essentially still doing the ratio test and the second calculation is essentially doing the root test.
Proposition 11.1.13.
In a power series where all the coefficients \(c_n\) are non-zero, the radius of convergence can be directly calculated in either of two ways.
\begin{align*}
R \amp = \lim_{n \rightarrow \infty} \left|
\frac{c_n}{c_{n+1}} \right|\\
R \amp = \lim_{n \rightarrow \infty}
\frac{1}{\sqrt[n]{|c_n|}}
\end{align*}
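As a quick numerical check (a sketch of my own), the coefficients \(c_n = \frac{1}{7^n}\) from Example 11.1.11 should give \(R = 7\text{,}\) and both formulas do.
```python
# Sketch: estimate the radius of convergence from the coefficients c_n = 1/7^n
# (Example 11.1.11), using both formulas; the expected value is R = 7.

def c(n):
    return 1 / 7**n

for n in [5, 10, 50]:
    ratio_formula = abs(c(n) / c(n + 1))      # |c_n / c_{n+1}|
    root_formula = 1 / abs(c(n))**(1 / n)     # 1 / |c_n|^(1/n)
    print(n, ratio_formula, root_formula)     # both give 7 for every n here
```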
Subsection 11.1.4 Properties of Power Series
Inside the radius of convergence, a power series has all the properties of a normal function. I can add and subtract two power series as long as I remain inside the radii of both series. I can multiply as well, though the calculations become difficult. The same is true for division: if a series is non-zero inside its radius of convergence, I can divide by the series (though the results of the calculation are difficult to use).
Other properties of series can be calculated with varying ease or difficulty, depending on the series. I can investigate the growth of series, whether or not they are bounded, symmetric or periodic, and whether or not they are invertible. The key idea to remember is that power series, inside their radii of convergence, are functions; anything that applies to functions can be applied to power series.
Subsection 11.1.5 Calculus of Power Series
Since power series are functions, I can try to do calculus with them, investigating their limits, continuity, derivatives and integrals. It turns out that one of the major advantages of working with power series is that their calculus is remarkably approachable.
Proposition 11.1.14.
Consider power series centered at \(\alpha\text{.}\)
\begin{equation*}
f(x) = \sum_{n=0}^\infty c_n (x-\alpha)^n
\end{equation*}
This \(f\) is a continuous function inside its radius of convergence. In addition, \(f\) is infinitely differentiable inside its radius of convergence.
There is a convenient notation for differentiability which I will use frequently, particularly in future courses.
Definition 11.1.15.
If \(f\) is a function on a domain \(D\) and the \(n\)-th derivative of \(f\) is defined and continuous, then \(f\) is in the class \(C^n(D)\text{.}\) If the domain is understood implicitly, I just say \(f\) is in the class \(C^n\text{.}\) If \(f\) is infinitely differentiable, then \(f\) is in the class \(C^\infty\text{.}\)
The proposition says that power series are in class \(C^\infty\text{,}\) but how are these derivatives calculated?
Proposition 11.1.16.
If \(f\) is a power series, then the derivative of \(f\) is calculated term-wise, simply by differentiating every term in the series.
\begin{equation*}
f^\prime(x) = \sum_{n=1}^\infty c_n n(x-\alpha)^{n-1}
\end{equation*}
Therefore, the derivative is a power series as well; moreover, it will have the same radius of convergence as the original.
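In a coefficient-list representation (my own sketch; the list \([c_0, c_1, c_2, \ldots]\) and the helper names are illustrative choices), termwise differentiation is just a shift and a scaling of the coefficients. The geometric series, with all \(c_n = 1\text{,}\) differentiates to \(\sum_{n=1}^\infty n x^{n-1} = \frac{1}{(1-x)^2}\text{,}\) which the truncated computation reproduces.
```python
# Sketch: termwise differentiation of a power series centred at 0,
# represented by its coefficient list [c_0, c_1, c_2, ...].

def differentiate(coeffs):
    """Coefficients of the derivative: n * c_n becomes the (n-1)-th coefficient."""
    return [n * c for n, c in enumerate(coeffs)][1:]

def evaluate(coeffs, x):
    return sum(c * x**n for n, c in enumerate(coeffs))

geometric = [1.0] * 200   # truncation of the geometric series sum x^n
print(evaluate(differentiate(geometric), 0.5), 1 / (1 - 0.5)**2)   # both near 4
```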
Integration is just as pleasant for power series.
Proposition 11.1.17.
If \(f\) is a power series centered at \(\alpha\text{,}\) then \(f\) is integrable and its indefinite integral is calculated termwise.
\begin{equation*}
\int f(x) dx= \sum_{n=0}^\infty c_n
\frac{(x-\alpha)^{n+1}}{n+1} + C
\end{equation*}
The simplicity of integration is particularly helpful. As previous sections of this course have amply demonstrated, integration is a difficult business. For functions which can be expressed as series, integration is almost trivial. This makes power series a very useful and convenient class of functions.
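A matching sketch (again my own, with illustrative helper names) for termwise integration: integrating the geometric series termwise with \(C = 0\) gives \(\sum_{n=0}^\infty \frac{x^{n+1}}{n+1}\text{,}\) which agrees numerically with \(-\ln(1-x)\) inside the radius of convergence.
```python
import math

# Sketch: termwise integration of a power series centred at 0 (with constant C = 0),
# again using the coefficient-list representation [c_0, c_1, c_2, ...].

def integrate(coeffs):
    """Coefficients of the antiderivative: c_n / (n+1) becomes the (n+1)-th coefficient."""
    return [0.0] + [c / (n + 1) for n, c in enumerate(coeffs)]

def evaluate(coeffs, x):
    return sum(c * x**n for n, c in enumerate(coeffs))

geometric = [1.0] * 200   # truncation of the geometric series sum x^n
print(evaluate(integrate(geometric), 0.5), -math.log(1 - 0.5))   # both near 0.6931
```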
Subsection 11.1.6 Series with Patterns of Exponents
It is often the case that a power series contains non-zero terms only for certain exponents following some pattern. In this section, I’m going to briefly introduce some common notation for particular cases of this phenomenon. Consider a series where all the odd terms are zero (centered at \(0\) for convenience).
\begin{equation*}
f(x) = c_0 + 0x + c_2x^2 + 0x^3 + c_4x^4 + 0x^5 + \ldots
\end{equation*}
I could similarly consider a series where all the even terms are zero.
\begin{equation*}
f(x) = 0 + c_1x + 0x^2 + c_3x^3 + 0x^4 + c_5x^5 + 0x^6 + \ldots
\end{equation*}
If I want to index all the even numbers, I can write \(k =
2n\) for \(n \in \NN\text{.}\) Similarly, I can index all the odd numbers by writing \(k = (2n+1)\) for \(n \in \NN\text{.}\) Using these tools, I could write a series with only odd or even non-zero terms. The series
\begin{equation*}
f(x) = \sum_{n=0}^\infty c_{2n} (x-\alpha)^{2n}
\end{equation*}
is a series with only even terms. The series
\begin{equation*}
g(x) = \sum_{n=0}^\infty c_{2n+1} (x-\alpha)^{2n+1}
\end{equation*}
is a series with only odd terms.
Some extra care must be taken with calculating radii of convergence for these series. The formula
\begin{equation*}
R = \lim_{n \rightarrow \infty} \left| \frac{c_n}{c_{n+1}}
\right|
\end{equation*}
relies on the assumption that all \(c_n \neq 0\text{.}\) This is not true for series with only odd or only even terms. For series with such patterns, I need a slightly different approach. Take the example series
\begin{equation*}
f(x) = \sum_{n=0}^\infty c_{2n} (x-\alpha)^{2n}
\end{equation*}
that only has even terms. Implicitly, even though I can assume \(c_{2n} \neq 0\text{,}\) all of the missing coefficients \(c_{2n+1}\) are zero. To use the calculation for radius of convergence, a series needs all non-zero coefficients. I can force this series into such a form by some clever manipulation of exponents. Using the laws of exponents, I adjust the power of \((x-\alpha)\text{.}\)
\begin{equation*}
f(x) = \sum_{n=0}^\infty c_{2n} ((x-\alpha)^2)^{n}
\end{equation*}
Then I also re-index the coefficients: what I called \(c_{2n}\) before, I’ll just call \(c_n\text{.}\)
\begin{equation*}
f(x) = \sum_{n=0}^\infty c_{n} ((x-\alpha)^2)^{n}
\end{equation*}
Then this is a series with all non-zero coefficients (with the assumption that the original \(c_{2n}\) were non-zero). However, it is now a power series in \((x-\alpha)^2\text{,}\) not \((x-\alpha)\text{.}\) Therefore, if I calculate a radius of convergence \(R\text{,}\) the resulting inequality is
\begin{equation*}
(x-\alpha)^2 \in \left( - R, R \right)
\end{equation*}
To get the actual bound on \(x\) itself requires manipulating the inequality that bounds \((x-\alpha)^2\text{.}\) The result is this.
\begin{equation*}
(x-\alpha) \in \left( - \sqrt{R}, \sqrt{R} \right) \implies
x \in \left( \alpha - \sqrt{R}, \alpha + \sqrt{R} \right)
\end{equation*}
In summary, there is often a way to get to a domain of convergence for this type of series, but it can be difficult and unwieldy.
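As a quick illustration of this procedure (an example of my own, not one from the list above), consider the even-power series
\begin{equation*}
f(x) = \sum_{n=0}^\infty \frac{x^{2n}}{9^n}\text{.}
\end{equation*}
Treating it as a series in \(u = x^2\) with coefficients \(c_n = \frac{1}{9^n}\text{,}\) the radius of convergence in \(u\) is
\begin{equation*}
R = \lim_{n \rightarrow \infty} \left| \frac{c_n}{c_{n+1}} \right| = \lim_{n \rightarrow \infty} \frac{9^{n+1}}{9^n} = 9\text{.}
\end{equation*}
The condition \(x^2 \lt 9\) then gives \(x \in (-3,3)\text{,}\) so the original series converges with radius \(3\) in \(x\text{.}\)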
The approach above doesn’t just work for even and odd terms. Using similar ideas, I could encode all sorts of patterns in the exponents of our power series. If a power series had non-zero terms only when the exponent was a multiple of \(3\text{,}\) I could write it as
\begin{equation*}
f(x) = \sum_{n=0}^\infty c_{3n} (x-\alpha)^{3n}\text{.}
\end{equation*}
If the power series had non-zero terms only for every fifth exponent starting at \(7\text{,}\) I could write it as
\begin{equation*}
f(x) = \sum_{n=0}^\infty c_{5n+7} (x-\alpha)^{5n + 7}\text{.}
\end{equation*}
In either case, I’d have to use similar kinds of tricks to reduce it to a series in \(x^3\) or \(x^5\text{,}\) then calculate the radius, then manipulate the inequalities to figure out the actual bounds on \(x\text{.}\)
Subsection 11.1.7 Non-Elementary Functions
One of the uses of power series is to construct entirely new functions. These are often called non-elementary functions (the elementary functions are those we have already worked with: polynomials, roots, exponentials, logarithms, trig, and hyperbolics). Here are some examples.
Example 11.1.18.
The Bessel functions of order \(k \in \NN\) are given by this series.
\begin{equation*}
J_k(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{2^{2n+k} \, n! \, (n+k)!}
\end{equation*}
The Bessel functions are like the trigonometric functions, but the terms in the denominators are larger. They oscillate like trig functions, but with decaying amplitude. They are important for spherical and circular waves, such as sound waves or ripples on a pond.
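As a numerical sketch (my own; the order \(k=0\text{,}\) the sample points, and the cutoff of 40 terms are arbitrary choices), truncating the series and evaluating it at a few points already shows this decaying oscillation.
```python
import math

# Sketch: truncated partial sums of the Bessel function series J_k(x).

def bessel_partial(k, x, terms=40):
    return sum(
        (-1)**n * x**(2 * n + k) / (2**(2 * n + k) * math.factorial(n) * math.factorial(n + k))
        for n in range(terms)
    )

for x in range(0, 21, 4):
    print(x, round(bessel_partial(0, x), 4))   # oscillates with slowly decaying amplitude
```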
Example 11.1.19.
The Bessel-Clifford functions are given by this series. (Here \(\pi\) is not the circle constant: in the standard definition of these functions, \(\pi(x)\) denotes the reciprocal gamma function \(1/\Gamma(x+1)\text{.}\))
\begin{equation*}
C_k(x) = \sum_{n=0}^\infty \frac{\pi (k+n) x^n}{n!}
\end{equation*}
Example 11.1.20.
The Polylogarithm functions are given by this series. (Note that for \(s=1\) the polylogarithm is \(Li_1(x) = -\ln
(1-x)\text{,}\) the conventional logarithm).
\begin{equation*}
Li_s(x) = \sum_{n=1}^\infty \frac{x^{n}}{n^s}
\end{equation*}
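As a last numerical sketch (my own), the stated identity \(Li_1(x) = -\ln(1-x)\) can be checked with truncated sums.
```python
import math

# Sketch: partial sums of the polylogarithm Li_s(x) = sum_{n>=1} x^n / n^s.

def polylog_partial(s, x, terms=2000):
    return sum(x**n / n**s for n in range(1, terms + 1))

x = 0.5
print(polylog_partial(1, x), -math.log(1 - x))   # both near ln(2) = 0.6931...
```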
These three examples are just the very start of a huge world of non-elementary functions.