Section 11.2 Taylor Series
Subsection 11.2.1 Analytic Functions
Once again, consider the geometric series:
\begin{equation*}
f(x) = \sum_{n=0}^\infty x^n = \frac{1}{1-x}
\end{equation*}
Unlike most of the power series I could define, I actually know the values of the geometric series. This series, as a function, is the same as the function \(\frac{1}{1-x}\) on the domain \((-1,1)\text{.}\) (The function \(\frac{1}{1-x}\) is certainly defined on a larger domain, but the series is not.) I can say that the geometric series lets me write \(\frac{1}{1-x}\) as an infinite series; it is the infinite series representation of the function on the domain \((-1,1)\text{.}\)
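As a quick numerical illustration (not part of the development above), here is a small Python sketch comparing partial sums of the geometric series to \(\frac{1}{1-x}\) at a point inside \((-1,1)\text{;}\) the sample point and the number of terms are arbitrary choices.
```python
# Compare partial sums of the geometric series to 1/(1-x) inside (-1, 1).
# The sample point x = 0.4 and the 20 terms are arbitrary choices.
x = 0.4
exact = 1 / (1 - x)

partial = 0.0
for n in range(20):
    partial += x**n

print(partial, exact)  # the two values agree to several decimal places
```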
The theory of Taylor series generalizes this situation. For various functions \(f(x)\text{,}\) I want to build a representation of \(f(x)\) as a series. This will be a power series which is identical to \(f(x)\text{,}\) at least on part of its domain. To find the power series, I need to choose a centre point \(\alpha\) and find coefficients \(c_n\) to build the series.
\begin{equation*}
f(x) = \sum_{n=0}^\infty c_n (x-\alpha)^n
\end{equation*}
These two pieces of information, the centre point and the coefficients, determine everything. Often I can choose the centre point, so I am really trying to calculate the coefficients that represent a function as a series. When is such a project possible? I need a definition before stating the relevant theorem.
Definition 11.2.1.
A function is called analytic at \(\alpha \in
\RR\) if it can be expressed as a power series centered at \(\alpha\) with a non-zero radius of convergence. Such a power series is called a Taylor series representation of the function. In the case that \(\alpha
= 0\text{,}\) a Taylor series is often called a Maclaurin series.
I know that power series (and therefore all possible Taylor series) are \(C^\infty\text{.}\) There is a nice theorem that provides the reverse implication.
Theorem 11.2.2.
A function \(f\) is \(C^\infty\) at a point \(\alpha
\in \RR\) if and only if \(f\) is analytic at \(\alpha\text{.}\) (Recall that analytic includes the condition that the radius of convergence of the resulting series must be positive or infinite.)
This theorem answers the question of which functions have Taylor series representations: any function which is infinitely differentiable can be expressed as a series, but no other functions can be so expressed.
Subsection 11.2.2 Calculating Coefficients
The previous subsection defined the class of analytic functions, but it didn’t explain how to actually find the series for these functions. I get to choose the centre point \(\alpha\text{,}\) so I need to know how to calculate the coefficients \(c_n\text{.}\)
My approach is a typical one in mathematics: I’m going to assume there is a solution and see what the pieces need to be to make it work. Therefore, assume there is a series expression for \(f(x)\text{.}\)
\begin{equation*}
f(x) = \sum_{n=0}^\infty c_n (x - \alpha)^n
\end{equation*}
Now I will calculate the value of \(f\) and its derivatives evaluated at the centre point \(\alpha\text{.}\) Recall that to differentiate a series, I can differentiate term by term. Each time I differentiate, I move the starting point of the index up by one; I do this because the starting term of a power series is the constant piece, and this disappears when I differentiate. Here are the first few results.
\begin{align*}
f(\alpha) \amp = \sum_{n=0}^\infty c_n (\alpha-\alpha)^n =
c_0 + \sum_{n=1}^\infty c_n \cdot 0 = c_0 \implies c_0 =
f(\alpha)\\
f^{\prime} (\alpha) \amp = \sum_{n=1}^\infty c_n n
(\alpha-\alpha)^{n-1} = c_1 + \sum_{n=2}^\infty c_n \cdot 0
= c_1 \implies c_1 = f^\prime(\alpha)\\
f^{\prime \prime} (\alpha) \amp = \sum_{n=2}^\infty c_n n
(n-1) (\alpha-\alpha)^{n-2} = 2c_2 + \sum_{n=3}^\infty c_n
\cdot 0 = 2c_2 \implies c_2 =
\frac{f^{\prime\prime}(\alpha)}{2}\\
f^{(3)} (\alpha) \amp = \sum_{n=3}^\infty c_n n (n-1) (n-2)
(\alpha-\alpha)^{n-3} = 6c_3 + \sum_{n=4}^\infty c_n \cdot 0
= 6c_3 \implies c_3 = \frac{f^{(3)}(\alpha)}{6}\\
f^{(4)} (\alpha) \amp = \sum_{n=4}^\infty c_n n (n-1) (n-2)
(n-3) (\alpha-\alpha)^{n-4} = 24c_4 + \sum_{n=5}^\infty c_n
\cdot 0\\
f^{(4)} (\alpha) \amp = 24c_4 \implies c_4 =
\frac{f^{(4)}(\alpha)}{24}
\end{align*}
There is a pattern here relating to the coefficients of the series. I’ve solved for those coefficients in the calculations to try to show the pattern. Based on what I’ve done so far, I can argue for the following general pattern.
\begin{equation*}
c_n = \frac{f^{(n)}(\alpha)}{n!}
\end{equation*}
Now I have a way to calculate the coefficients in terms of the derivatives of \(f(x)\) at the chosen centre point. Therefore, to find a series representation of \(f(x)\) centered at \(\alpha\) (assuming \(f(x)\) is analytic at \(\alpha\)), I use this expression to calculate the coefficients. I summarize this in a proposition.
Proposition 11.2.3.
If \(f\) is analytic at \(\alpha\text{,}\) then the Taylor series for \(f\) has the following form.
\begin{equation*}
f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(\alpha)}{n!}
(x-\alpha)^n
\end{equation*}
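Here is a small Python sketch of Proposition 11.2.3, assuming the sympy library is available: it computes the coefficients \(\frac{f^{(n)}(\alpha)}{n!}\) directly and compares them with sympy's built-in expansion. The sample function \(e^x \cos x\) and the centre point are arbitrary choices.
```python
# A sketch of Proposition 11.2.3 (sympy assumed available): compute the
# coefficients c_n = f^(n)(alpha)/n! directly and compare with sympy's
# built-in Taylor expansion around the same centre point.
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x) * sp.cos(x)   # any smooth function will do
alpha = 0

coeffs = [sp.diff(f, x, n).subs(x, alpha) / sp.factorial(n) for n in range(6)]
print(coeffs)

# sympy's own expansion of f around alpha, for comparison
print(sp.series(f, x, alpha, 6))
```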
The expression for the coefficients \(c_n\) allows for another important result.
Proposition 11.2.4.
(Uniqueness of Coefficients) Two power series centered at the same point are equal if and only if every coefficient is equal.
Proof.
Assume there is an equation of power series.
\begin{equation*}
\sum_{n=0}^\infty c_n (x-\alpha)^n = \sum_{n=0}^\infty b_n
(x-\alpha)^n
\end{equation*}
The two series define the same function; call it \(f\text{.}\) The coefficients are determined by the derivatives of this common function at \(\alpha\text{:}\) both \(b_n\) and \(c_n\) must be calculated by \(\frac{f^{(n)}(\alpha)}{n!}\text{,}\) hence \(b_n = c_n\text{.}\)
Uniqueness of coefficients is very important for doing algebra with series. If two series are equal, I can go directly to the equality of each of the coefficients to get explicit equations. Curiously, since all the coefficients are determined by the derivatives at the centre point, this means that the derivatives at the centre point encode the entire behaviour of the function (inside the radius of convergence). This is a surprising result, since functions can have a wide range of behaviours far away from their centre points.
Subsection 11.2.3 Examples
I will calculate the Taylor series for a number of common functions.
Example 11.2.5.
I start with the most important function in calculus: \(e^x\text{.}\) The derivatives of \(e^x\) are just \(e^x\text{.}\) If I choose \(\alpha=0\) as the centre point, then all these derivatives evaluate to \(1\text{.}\) Then I can write the series.
\begin{equation*}
e^x = \sum_{n=0}^\infty \frac{1}{n!}x^n = 1 + x +
\frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} +
\frac{x^5}{120} + \ldots
\end{equation*}
I can check that the radius of convergence for this series is \(R = \infty\text{,}\) so this is an expression for \(e^x\) which works for all real numbers.
The previous example finally allows for the proper definition of the exponential function. For \(r = \frac{a}{b}\) a rational number, \(x^r = \sqrt[b]{x^a}\text{,}\) which was well understood. But if \(r\) is irrational, I previously had no idea what \(x^r\) actually was nor how to calculate it. I worked on an unjustified assumption that the exponential function \(e^x\) was well defined for irrational numbers. Now, however, I can use this series. The value of \(e^{\pi}\text{,}\) which was completely opaque and mysterious before, is now given by a series.
\begin{equation*}
e^{\pi} = \sum_{n=0}^\infty \frac{\pi^n}{n!} = 1 + \pi +
\frac{\pi^2}{2} + \frac{\pi^3}{6} + \frac{\pi^4}{24} +
\frac{\pi^5}{120} + \ldots
\end{equation*}
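As a sketch of this claim, the following Python snippet approximates \(e^{\pi}\) by a partial sum of the series; thirty terms is an arbitrary choice that is already more than enough here.
```python
import math

# Approximate e^pi by a partial sum of its Taylor series and compare with
# the library value. The cutoff of 30 terms is an arbitrary choice.
total = 0.0
for n in range(30):
    total += math.pi**n / math.factorial(n)

print(total, math.exp(math.pi))
```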
Other important properties of the exponential function can be calculated from the series. I’ll differentiate the series. (I use a shift in the series in the last step.)
\begin{equation*}
\frac{d}{dx} e^x = \frac{d}{dx} \sum_{n=0}^\infty \frac{1}{n!}
x^n = \sum_{n=1}^\infty \frac{1}{n!} n x^{n-1} =
\sum_{n=1}^\infty \frac{1}{(n-1)!} x^{n-1} = \sum_{n=0}^\infty
\frac{1}{n!} x^n = e^x
\end{equation*}
This recovers the fact that the exponential function is its own derivative.
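The index shift can also be checked symbolically. The following sketch, assuming sympy is available, differentiates a truncated exponential series term by term; the degree-6 cutoff is an arbitrary choice.
```python
import sympy as sp

# Differentiate a truncated exponential series term by term (sympy assumed
# available). The derivative of the degree-6 partial sum is the degree-5
# partial sum, which illustrates the index shift used above.
x = sp.symbols('x')
partial = sum(x**n / sp.factorial(n) for n in range(7))

print(sp.expand(sp.diff(partial, x)))                  # derivative of the partial sum
print(sum(x**n / sp.factorial(n) for n in range(6)))   # degree-5 partial sum
```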
Example 11.2.6.
I can calculate series directly, using the derivatives and centre point to calculate coefficients. However, I can also make adjustments to existing series to calculate series for new functions. I’m going to integrate the geometric series. Recall that integration of a series is just term-by-term integration.
\begin{align*}
\int \sum_{n=0}^\infty x^n dx \amp = \int \frac{1}{1-x}
dx\\
\sum_{n=0}^\infty \frac{x^{n+1}}{n+1} + c \amp = -\ln
|1-x| + c\\
\sum_{n=0}^\infty \frac{x^{n+1}}{n+1} \amp = -\ln
(1-x)
\end{align*}
This gives a Taylor series for \(- \ln (1-x)\) centered at \(\alpha = 0\text{,}\) at least for the domain \((-1,1)\text{.}\) (The domain was the reason I dropped the absolute value bars, since \((1-x)\) is always positive on this domain. Evaluating both sides at \(x = 0\) shows that the two constants of integration agree, which is why they can be dropped.)
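A quick numerical check of this series, at an arbitrary point of \((-1,1)\text{,}\) in Python:
```python
import math

# Compare partial sums of sum x^(n+1)/(n+1) with -ln(1-x) at a point in
# (-1, 1). The point x = 0.5 and the 200 terms are arbitrary choices.
x = 0.5
partial = sum(x**(n + 1) / (n + 1) for n in range(200))
print(partial, -math.log(1 - x))
```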
Example 11.2.7.
Another way to build series is with composition, at least for certain compositions.
Example 11.2.5 calculated the series for the exponential function, which applies on the domain
\(\RR\text{.}\)
\begin{equation*}
e^x = \sum_{n=0}^\infty \frac{x^{n}}{n!}
\end{equation*}
Now I want a series for \(e^{x^2}\text{.}\) This function has no elementary anti-derivative, so I am unable to integrate it with conventional methods. However, if I put \(x^2\) into the series for the exponential function, I get a series for \(e^{x^2}\text{.}\)
\begin{equation*}
e^{x^2} = \sum_{n=0}^\infty \frac{x^{2n}}{n!}
\end{equation*}
Since this is a series, I can integrate it.
\begin{equation*}
\int e^{x^2} dx = \sum_{n=0}^\infty
\frac{x^{2n+1}}{(2n+1)n!} + c
\end{equation*}
This new series is the anti-derivative of \(e^{x^2}\text{.}\) I knew such a function should exist, and now I have a representation of it as a Taylor series. (The series has infinite radius of convergence).
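Since the anti-derivative with \(c = 0\) vanishes at \(x = 0\text{,}\) it should equal the definite integral of \(e^{x^2}\) from \(0\) to \(b\text{.}\) The sketch below compares the series with a crude trapezoid-rule estimate in Python; the endpoint \(b\) and the step count are arbitrary choices.
```python
import math

# The anti-derivative series with c = 0 vanishes at x = 0, so it should
# match the definite integral of e^(x^2) from 0 to b. Compare it with a
# crude trapezoid-rule estimate; b and the step count are arbitrary.
b = 0.8
series_value = sum(b**(2 * n + 1) / ((2 * n + 1) * math.factorial(n))
                   for n in range(30))

steps = 100_000
h = b / steps
trap = sum(h * (math.exp((i * h)**2) + math.exp(((i + 1) * h)**2)) / 2
           for i in range(steps))

print(series_value, trap)
```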
Example 11.2.8.
The Taylor series for sine and cosine are important examples. Centered at \(x=0\text{,}\) the derivatives of \(\sin x\) form a cycle: \(\sin x\text{,}\) \(\cos x\text{,}\) \(-\sin x\text{,}\) and \(-\cos x\text{.}\) Evaluated at \(x=0\text{,}\) these give values of \(0\text{,}\) \(1\text{,}\) \(0\text{,}\) and \(-1\text{.}\) Therefore, I get the following expressions for the coefficients of the Taylor series. Note that I need to group the coefficients into evens and odds, writing \(2k\) for evens and \(2k+1\) for odds.
\begin{align*}
c_0 \amp = f(0) = 0 \amp
c_1 \amp = f^\prime(0) = 1 \\
c_2 \amp = \frac{f^{\prime\prime}(0)}{2!} = 0 \amp
c_3 \amp = \frac{f^{\prime\prime\prime}(0)}{3!}
= \frac{-1}{3!} \\
c_4 \amp = \frac{f^{(4)}(0)}{4!} = 0 \amp
c_5 \amp = \frac{f^{(5)}(0)}{5!} = \frac{1}{5!} \\
c_6 \amp = \frac{f^{(6)}(0)}{6!} = 0 \amp
c_7 \amp = \frac{f^{(7)}(0)}{7!} = \frac{-1}{7!} \\
c_8 \amp = \frac{f^{(8)}(0)}{8!} = 0 \amp
c_9 \amp = \frac{f^{(9)}(0)}{9!} = \frac{1}{9!} \\
c_{2k} \amp = 0 \amp
c_{2k+1} \amp = \frac{(-1)^k}{(2k+1)!}
\end{align*}
Using these coefficients, the Taylor series for sine centered at \(\alpha = 0\) is the following series. I only write the odd terms, using \((2n+1)\text{,}\) since the even coefficients are all zero.
\begin{equation*}
\sin x = \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!} x^{2n+1}
\end{equation*}
The radius of convergence of this series is \(R =
\infty\text{,}\) so it expresses \(\sin x\) for all real numbers. I can use similar steps to find the Taylor series for cosine, which produces the following series. For cosine, the odd terms are zero, so I only write the even terms, using \((2n)\text{.}\)
\begin{equation*}
\cos x = \sum_{n=0}^\infty \frac{(-1)^n}{(2n)!} x^{2n}
\end{equation*}
The radius of convergence of this series is also \(R =
\infty\text{.}\)
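Here is a small Python check of both series against the library functions; the evaluation point and the number of terms are arbitrary choices.
```python
import math

# Partial sums of the sine and cosine series compared with math.sin and
# math.cos. The point x = 1.2 and the 15 terms are arbitrary choices.
x = 1.2
sin_series = sum((-1)**n * x**(2 * n + 1) / math.factorial(2 * n + 1)
                 for n in range(15))
cos_series = sum((-1)**n * x**(2 * n) / math.factorial(2 * n)
                 for n in range(15))

print(sin_series, math.sin(x))
print(cos_series, math.cos(x))
```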
Example 11.2.9.
Consider \(f(x) = \ln x\) centered at \(\alpha = 1\text{.}\) I’ll calculate the first few coefficients to look for a pattern.
\begin{align*}
f^\prime(x) \amp = \frac{1}{x} \amp
f^{\prime \prime}(x) \amp = \frac{-1}{x^2} \\
f^{\prime \prime \prime}(x) \amp = \frac{2}{x^3} \amp
f^{(4)}(x) \amp = \frac{-6}{x^4}
\end{align*}
There are three pieces to the pattern here: an alternating sign, a factorial growing in the numerator, and a power growing in the denominator. I’ll carefully match the details of these three pieces to the index to produce a general pattern.
\begin{equation*}
f^{(n)}(x) = \frac{(-1)^{n-1}(n-1)!}{x^n}
\end{equation*}
Then I evaluate the pattern at the desired centre point.
\begin{equation*}
f^{(n)}(1) = (-1)^{n-1}(n-1)!
\end{equation*}
Once I have the general pattern evaluated at the centre point, I put it into the standard Taylor series form. (The \(n = 0\) term is \(f(1) = \ln 1 = 0\text{,}\) so the sum starts at \(n = 1\text{.}\))
\begin{equation*}
\ln x = \sum_{n=1}^\infty \frac{(-1)^{n-1}(n-1)!}{n!}
(x-1)^n = \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n} (x-1)^n
\end{equation*}
The radius of convergence is 1, found by the ratio test.
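A numerical sketch of this series in Python, at an arbitrary point of its interval of convergence \((0,2)\text{:}\)
```python
import math

# Partial sums of the series for ln x centred at 1, compared with math.log.
# The point must lie in (0, 2); x = 1.5 and 500 terms are arbitrary choices.
x = 1.5
partial = sum((-1)**(n - 1) / n * (x - 1)**n for n in range(1, 500))
print(partial, math.log(x))
```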
Example 11.2.10.
Consider \(f(x) = \frac{1}{x^2}\) centered at \(\alpha =
3\text{.}\)
\begin{align*}
f^\prime(x) \amp = \frac{-2}{x^3} \amp
f^{\prime \prime}(x) \amp = \frac{6}{x^4} \amp
f^{\prime \prime \prime}(x) \amp = \frac{-24}{x^5}
\end{align*}
I look for a general pattern. Here, there is a factorial building in the numerator, a power growing in the denominator, and an alternating sign. I carefully match with the index to get the pattern.
\begin{equation*}
f^{(n)}(x) = \frac{(-1)^{n}(n+1)!}{x^{n+2}}
\end{equation*}
I evaluate the pattern at the centre point.
\begin{equation*}
f^{(n)}(3) = \frac{(-1)^{n}(n+1)!}{3^{n+2}}
\end{equation*}
Then I put these derivatives evaluated at the centre point into the standard Taylor series form.
\begin{equation*}
\frac{1}{x^2} = \sum_{n=0}^\infty
\frac{(-1)^{n}(n+1)!}{3^{n+2} n!} (x-3)^n =
\sum_{n=0}^\infty \frac{(-1)^n (n+1)}{3^{n+2}} (x-3)^n
\end{equation*}
The radius of convergence is 3, found by the ratio test.
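A numerical sketch of this series in Python, at an arbitrary point of its interval of convergence \((0,6)\text{:}\)
```python
# Partial sums of the series for 1/x^2 centred at 3, compared with the exact
# value. The point must lie in (0, 6); x = 2.0 and 200 terms are arbitrary.
x = 2.0
partial = sum((-1)**n * (n + 1) / 3**(n + 2) * (x - 3)**n for n in range(200))
print(partial, 1 / x**2)
```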
I want to point out one particularly useful technique from these examples, since this technique may be important for the activities and the assignments. For the series for \(e^{x^2}\text{,}\) I used an existing series and composition, putting \(x^2\) inside the series for \(e^x\text{.}\) This doesn’t produce Taylor series for all compositions, but it can be particularly helpful in certain places. For example, I could calculate a series for \(\sin(x^3)\) by substituting \(x^3\) for the variable in the series for sine.
\begin{equation*}
\sin(x^3) = \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!}
(x^3)^{2n+1} = \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!} x^{6n+3}
\end{equation*}
When using this technique, be aware that the radius of convergence can change via the composition. It may be necessary to recalculate the radius of convergence, making use of the techniques I discussed in
Subsection 11.1.6.
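Here is a quick Python check of the composed series against the library value; the evaluation point and the number of terms are arbitrary choices.
```python
import math

# The composed series for sin(x^3) compared with math.sin(x**3).
# The point x = 0.9 and the 15 terms are arbitrary choices.
x = 0.9
partial = sum((-1)**n / math.factorial(2 * n + 1) * x**(6 * n + 3)
              for n in range(15))
print(partial, math.sin(x**3))
```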
Lastly, a very observant reader may have noticed that for the series for \(\ln x\) and \(\frac{1}{x^2}\text{,}\) the radius of convergence was exactly the distance to the edge of the conventional domain. There is a theorem, which I won’t get into here, which states that this situation is what should be expected (under some reasonable conditions). I will still ask you to calculate radii of convergence, but you should expect, for most series, that the radius of convergence will be the distance to the nearest undefined point (or infinite if there are no undefined points).