Section 2.2 Taylor Series
Subsection 2.2.1 Analytic Functions
Once again, consider the geometric series.
\begin{align*} \sum_{n=0}^\infty x^n \amp = \frac{1}{1-x} \end{align*}
Unlike most of the power series we've seen so far, we actually know the values of the geometric series. This series, as a function, is the same as the function \(\frac{1}{1-x}\) on the domain \((-1,1)\text{.}\) (The function \(\frac{1}{1-x}\) is certainly defined on a larger domain, but the series is not). We can say that the geometric series lets us write \(\frac{1}{1-x}\) as an infinite series; it is the infinite series representation of the function on the domain \((-1,1)\text{.}\)
The theory of Taylor series generalizes this situation. For various functions \(f(x)\text{,}\) we want to build a representation of \(f(x)\) as a series. This will be a power series which is identical to \(f(x)\text{,}\) at least for part of its domain. To find the power series, we need to choose a centre point \(\alpha\) and find coefficients \(c_n\) such that
\begin{align*} f(x) \amp = \sum_{n=0}^\infty c_n (x-\alpha)^n \end{align*}
Definition 2.2.1.
A function is called analytic at \(\alpha \in \RR\) if it can be expressed as a power series centered at \(\alpha\) with a non-zero radius of convergence. Such a power series is called a Taylor series representation of the function. In the case that \(\alpha = 0\text{,}\) a Taylor series is often called a Maclaurin series.
We know that power series (and therefore all possible Taylor series) are \(C^\infty\text{.}\) There is a nice theorem in this direction, though the converse implication fails in general.
Theorem 2.2.2.
If there exists \(R > 0\) such that \(f\) is analytic on \((\alpha-R,\alpha+R)\text{,}\) then \(f\) is \(C^\infty\) on that interval.
This theorem restricts which functions can have Taylor series representations: only infinitely differentiable functions qualify. Note, however, that infinite differentiability is not sufficient. The standard counterexample is the function defined by \(f(x) = e^{-1/x^2}\) for \(x \neq 0\) and \(f(0) = 0\text{:}\) it is \(C^\infty\) at \(0\text{,}\) but all of its derivatives at \(0\) vanish, so its Taylor series at \(0\) is identically zero and cannot represent the function on any interval.
Subsection 2.2.2 Calculating Coefficients
The previous section defined a class of analytic functions, but it didn't tell us how to actually find the series for these functions. We get to choose the centre point \(\alpha\text{,}\) so we need to know how to calculate the coefficients \(c_n\text{.}\) Assuming we have a series expression of \(f(x)\text{,}\) let's look at the values of \(f\) and its derivatives.
\begin{align*} f(x) \amp = \sum_{n=0}^\infty c_n (x-\alpha)^n\\ f'(x) \amp = \sum_{n=1}^\infty n c_n (x-\alpha)^{n-1}\\ f''(x) \amp = \sum_{n=2}^\infty n(n-1) c_n (x-\alpha)^{n-2}\\ f'''(x) \amp = \sum_{n=3}^\infty n(n-1)(n-2) c_n (x-\alpha)^{n-3} \end{align*}Then we calculate the values of the derivatives at the centre point \(\alpha\text{,}\) where only the constant term of each differentiated series survives.
\begin{align*} f(\alpha) \amp = c_0\\ f'(\alpha) \amp = c_1\\ f''(\alpha) \amp = 2 c_2\\ f'''(\alpha) \amp = 3! \, c_3 \end{align*}
We generalize the pattern to write a general expression for the \(n\)th coefficient.
\begin{align*} c_n \amp = \frac{f^{(n)}(\alpha)}{n!} \end{align*}Now we have a way to calculate the coefficients in terms of the derivatives of \(f(x)\) at the chosen centre point. Therefore, to find a series representation of \(f(x)\) centered at \(\alpha\) (assuming \(f(x)\) is analytic at \(\alpha\)), we use the expression above to calculate the coefficients. We summarize this in a proposition.
Proposition 2.2.3.
If \(f\) is analytic at \(\alpha\text{,}\) then the Taylor series for \(f\) has this form:
\begin{align*} f(x) \amp = \sum_{n=0}^\infty \frac{f^{(n)}(\alpha)}{n!} (x-\alpha)^n \end{align*}
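As an informal numerical illustration of this proposition (a sketch of our own, not part of the formal development; the helper name `taylor_partial_sum` is hypothetical), we can evaluate a Taylor partial sum directly from a list of derivative values at the centre point.

```python
import math

def taylor_partial_sum(derivs_at_centre, alpha, x):
    """Evaluate sum_{n} f^(n)(alpha)/n! * (x - alpha)^n for the
    supplied derivative values f(alpha), f'(alpha), f''(alpha), ..."""
    return sum(d / math.factorial(n) * (x - alpha) ** n
               for n, d in enumerate(derivs_at_centre))

# For f(x) = e^x centred at alpha = 0, every derivative at 0 equals 1,
# so twenty terms should land very close to e at x = 1.
derivs = [1.0] * 20
print(taylor_partial_sum(derivs, 0.0, 1.0))  # close to e = 2.71828...
```

With only the derivative values at the centre point, the partial sum already reproduces the function to machine precision inside the radius of convergence.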
The expression for the coefficients \(c_n\) allows for another important result.
Proposition 2.2.4.
(Uniqueness of Coefficients) Two power series centered at the same point are equal if and only if every coefficient is equal.
Proof.
Say we have an equation of power series.
\begin{align*} \sum_{n=0}^\infty b_n (x-\alpha)^n \amp = \sum_{n=0}^\infty c_n (x-\alpha)^n \end{align*}
The coefficients are determined by the derivatives. But the functions are the same, so they must have the same derivatives at \(\alpha\text{.}\) Therefore, both \(b_n\) and \(c_n\) must be calculated by \(\frac{f^{(n)}(\alpha)}{n!}\text{,}\) hence \(b_n = c_n\text{.}\)
Uniqueness of coefficients is very important for doing algebra with series. If two series are equal, we can pass to the equality of each pair of coefficients to get explicit equations. Curiously, since all the coefficients are determined by the derivatives at the centre point, the derivatives at the centre point encode the entire behaviour of the function (inside the radius of convergence). This is a surprising result, since a function can behave in a wide variety of ways far away from the centre point.
Subsection 2.2.3 Taylor Series Examples
Let's try to calculate some Taylor series for important functions.
Example 2.2.5.
We start with the most important function in calculus: \(e^x\text{.}\) The derivatives of \(e^x\) are just \(e^x\text{.}\) If we centre a series at \(x=0\text{,}\) then all these derivatives evaluate to \(1\text{.}\) Therefore
\begin{align*} e^x \amp = \sum_{n=0}^\infty \frac{x^n}{n!} \end{align*}
We can check that the radius of convergence for this series is \(R = \infty\text{,}\) so this is an expression for \(e^x\) which works for all real numbers.
As an aside, this finally allows for the proper definition of the exponential function. For \(r = \frac{a}{b}\) a rational number, \(x^r = \sqrt[b]{x^a}\text{,}\) which was well understood. But if \(r\) is irrational, we previously had no idea what \(x^r\) actually was nor how to calculate it. We worked on faith that the exponential function \(e^x\) was well defined for irrational numbers. Now, however, we can use this series. The value of \(e^{\pi}\text{,}\) which was completely opaque and mysterious before, is now given by a series.
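To make this concrete, here is a quick numerical sketch of our own (using only Python's standard `math` module): summing the first terms of the exponential series at \(x = \pi\) and comparing with the library value.

```python
import math

def exp_series(x, terms=60):
    """Partial sum of sum_{n} x^n / n!, accumulating each term
    from the previous one to avoid large factorials."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)  # turn x^n/n! into x^(n+1)/(n+1)!
    return total

print(exp_series(math.pi))  # ~23.1407
print(math.exp(math.pi))
```

Sixty terms are far more than needed here; the terms \(\pi^n/n!\) shrink extremely quickly once \(n\) passes \(\pi\).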
Other important properties of the exponential function can be calculated from the series. Let's differentiate the series. (We use a shift in the series in the last step.)
\begin{align*} \frac{d}{dx} e^x \amp = \sum_{n=0}^\infty \frac{d}{dx} \frac{x^n}{n!} = \sum_{n=1}^\infty \frac{n x^{n-1}}{n!} = \sum_{n=1}^\infty \frac{x^{n-1}}{(n-1)!} = \sum_{n=0}^\infty \frac{x^n}{n!} = e^x \end{align*}
This recovers the fact that the exponential function is its own derivative.
Example 2.2.6.
Let's integrate the geometric series (we set the integration constant to zero).
\begin{align*} \int \frac{1}{1-x} \, dx \amp = \int \sum_{n=0}^\infty x^n \, dx\\ -\ln(1-x) \amp = \sum_{n=0}^\infty \frac{x^{n+1}}{n+1} \end{align*}
This gives a Taylor series for \(- \ln (1-x)\) centered at \(\alpha = 0\text{.}\) Integration can be a convenient way to calculate a series, since we didn't have to calculate all the coefficients directly.
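A quick numerical check of our own: the partial sums of this series against \(-\ln(1-x)\) at a point inside the interval of convergence.

```python
import math

def neg_log_series(x, terms=60):
    """Partial sum of sum_{n>=0} x^(n+1)/(n+1), the series for -ln(1-x)."""
    return sum(x ** (n + 1) / (n + 1) for n in range(terms))

x = 0.5
print(neg_log_series(x))  # ~0.693147
print(-math.log(1 - x))   # ln 2
```

The agreement is excellent at \(x = 0.5\text{;}\) closer to the endpoints \(x = \pm 1\) the convergence is much slower.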
Example 2.2.7.
We remarked in the previous section that integration was easy for series. Let's look at the function \(e^{x^2}\text{.}\) It has no elementary anti-derivative, so we are unable to integrate it with conventional methods. However, if we put \(x^2\) into the series for the exponential function, we get a series for \(e^{x^2}\text{.}\)
\begin{align*} e^{x^2} \amp = \sum_{n=0}^\infty \frac{(x^2)^n}{n!} = \sum_{n=0}^\infty \frac{x^{2n}}{n!} \end{align*}
Since this is a series, we can integrate it.
\begin{align*} \int e^{x^2} \, dx \amp = C + \sum_{n=0}^\infty \frac{x^{2n+1}}{(2n+1)\, n!} \end{align*}
This new series is an anti-derivative of \(e^{x^2}\text{.}\) We knew such a function should exist, and now we have a representation of it as a Taylor series. (The series has infinite radius of convergence.)
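We can sanity-check this claim numerically (a sketch of our own): evaluate the definite integral \(\int_0^1 e^{x^2}\,dx\) once with the series anti-derivative and once with Simpson's rule applied directly to \(e^{x^2}\text{.}\)

```python
import math

def F(x, terms=30):
    """Series antiderivative of e^(x^2) with constant C = 0:
    sum_{n>=0} x^(2n+1) / ((2n+1) n!)."""
    return sum(x ** (2 * n + 1) / ((2 * n + 1) * math.factorial(n))
               for n in range(terms))

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

series_value = F(1.0) - F(0.0)
quad_value = simpson(lambda x: math.exp(x * x), 0.0, 1.0)
print(series_value, quad_value)  # both ~1.4627
```

The two computations agree to many digits, which is the expected behaviour of a valid anti-derivative.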
Example 2.2.8.
The Taylor series for sine and cosine are important examples. Centered at \(x=0\text{,}\) the derivatives of \(\sin x\) form a cycle: \(\sin x\text{,}\) \(\cos x\text{,}\) \(-\sin x\text{,}\) and \(-\cos x\text{.}\) Evaluated at \(x=0\text{,}\) these give values of \(0\text{,}\) \(1\text{,}\) \(0\text{,}\) and \(-1\text{.}\) Therefore, we get the following expressions for the coefficients of the Taylor series. (Note we need to group the coefficients into odds and evens, writing \(n = 2k\) for evens and \(n = 2k+1\) for odds.)
\begin{align*} c_{2k} \amp = 0\\ c_{2k+1} \amp = \frac{(-1)^k}{(2k+1)!} \end{align*}
Using these coefficients, the Taylor series for sine centered at \(\alpha = 0\) is this series:
\begin{align*} \sin x \amp = \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!} x^{2k+1} \end{align*}
The radius of convergence of this series is \(R = \infty\text{,}\) so it expresses \(\sin x\) for all real numbers. We can use similar steps to find the Taylor series for cosine.
\begin{align*} \cos x \amp = \sum_{k=0}^\infty \frac{(-1)^k}{(2k)!} x^{2k} \end{align*}
The radius of convergence of this series is also \(R = \infty\text{.}\)
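A numerical sketch of our own, comparing partial sums of both series against the library implementations of sine and cosine:

```python
import math

def sin_series(x, terms=20):
    """Partial sum of sum_{k>=0} (-1)^k x^(2k+1) / (2k+1)!."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

def cos_series(x, terms=20):
    """Partial sum of sum_{k>=0} (-1)^k x^(2k) / (2k)!."""
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

x = 1.2
print(sin_series(x), math.sin(x))
print(cos_series(x), math.cos(x))
```

Because the radius of convergence is infinite, this check works at any real \(x\text{,}\) though larger inputs require more terms before the factorials dominate.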
Example 2.2.9.
Consider \(f(x) = \ln x\) centered at \(\alpha = 1\text{.}\) We start by calculating the first few derivatives.
\begin{align*} f(x) \amp = \ln x\\ f'(x) \amp = \frac{1}{x}\\ f''(x) \amp = \frac{-1}{x^2}\\ f'''(x) \amp = \frac{2}{x^3}\\ f^{(4)}(x) \amp = \frac{-6}{x^4} \end{align*}
We look for a general pattern. There are three pieces: an alternating sign, a factorial growing in the numerator, and a power growing in the denominator. We must be careful to match the indices correctly to the first few elements of the pattern.
\begin{align*} f^{(n)}(x) \amp = \frac{(-1)^{n+1} (n-1)!}{x^n} \qquad n \geq 1 \end{align*}
Once we have the general pattern, we evaluate it at the centre point and then we put it into the Taylor series form.
\begin{align*} f^{(n)}(1) \amp = (-1)^{n+1} (n-1)!\\ \ln x \amp = \sum_{n=1}^\infty \frac{(-1)^{n+1} (n-1)!}{n!} (x-1)^n = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} (x-1)^n \end{align*}
Notice that the series starts at \(n=1\text{.}\) The constant term would be the value of the logarithm at the centre point \(\alpha = 1\text{.}\) Since \(\ln 1 = 0\text{,}\) the constant term is zero, so we start with the linear term.
The radius of convergence is \(R = 1\text{,}\) found by the ratio test.
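A numerical check of our own for this example, evaluating the series at a point inside the radius of convergence:

```python
import math

def log_series(x, terms=200):
    """Partial sum of sum_{n>=1} (-1)^(n+1) (x-1)^n / n, the Taylor
    series for ln x centred at 1, valid for |x - 1| < 1."""
    return sum((-1) ** (n + 1) * (x - 1) ** n / n
               for n in range(1, terms + 1))

x = 1.5
print(log_series(x), math.log(x))
```

At \(x = 1.5\) the terms shrink like \((1/2)^n/n\text{,}\) so two hundred terms are ample; a point such as \(x = 2.5\) lies outside the radius and the partial sums would diverge instead.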
Example 2.2.10.
Consider \(f(x) = \frac{1}{x^2}\) centered at \(\alpha = 3\text{.}\)
We look for a general pattern as before.
\begin{align*} f^{(n)}(x) \amp = \frac{(-1)^{n}(n+1)!}{x^{n+2}} \end{align*}We evaluate at the centre point.
\begin{align*} f^{(n)}(3) \amp = \frac{(-1)^{n}(n+1)!}{3^{n+2}} \end{align*}We put this into the general Taylor series form.
\begin{align*} \frac{1}{x^2} \amp = \sum_{n=0}^\infty \frac{(-1)^{n}(n+1)!}{3^{n+2}\, n!} (x-3)^n = \sum_{n=0}^\infty \frac{(-1)^n (n+1)}{3^{n+2}} (x-3)^n \end{align*}The radius of convergence is \(R = 3\text{,}\) found by the ratio test.
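A numerical check of our own, comparing the partial sums built from these coefficients against \(\frac{1}{x^2}\) at a point inside the radius of convergence:

```python
def inv_square_series(x, terms=100):
    """Partial sum of sum_{n>=0} (-1)^n (n+1) / 3^(n+2) * (x-3)^n,
    the Taylor series for 1/x^2 centred at 3, valid for |x - 3| < 3."""
    return sum((-1) ** n * (n + 1) / 3 ** (n + 2) * (x - 3) ** n
               for n in range(terms))

x = 3.5
print(inv_square_series(x), 1 / x ** 2)
```

At \(x = 3.5\) the terms shrink geometrically with ratio \(\frac{1}{6}\text{,}\) so the convergence is very fast.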
Example 2.2.11.
Consider a function which has the following sequence of derivatives at \(x=0\text{.}\)
\begin{align*} \left( f(0), f'(0), f''(0), f'''(0), f^{(4)}(0), \ldots \right) \amp = \left( 0, 1, 1, 0, -1, 2, 0, 1, 4, 0, -1, 8, \ldots \right) \end{align*}
The pattern has a cycle of three. The \((3n)\)th derivatives are all \(0\text{.}\) The \((3n+1)\)th derivatives are \((-1)^n\text{.}\) The \((3n+2)\)th derivatives are \(2^n\text{.}\) Therefore, the series is best expressed in two pieces.
\begin{align*} f(x) \amp = \sum_{n=0}^\infty \frac{(-1)^n}{(3n+1)!} x^{3n+1} + \sum_{n=0}^\infty \frac{2^n}{(3n+2)!} x^{3n+2} \end{align*}
The radius of convergence is \(\infty\text{,}\) found by a ratio test.
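Since this function has no familiar closed form, a check of our own can at least confirm that the partial sums converge and that the series behaves like its low-order terms \(x + \frac{x^2}{2}\) near the centre point.

```python
import math

def f_series(x, terms=40):
    """Partial sum of the two-piece series built from the derivative
    pattern above: sum (-1)^n x^(3n+1)/(3n+1)! + sum 2^n x^(3n+2)/(3n+2)!."""
    first = sum((-1) ** n * x ** (3 * n + 1) / math.factorial(3 * n + 1)
                for n in range(terms))
    second = sum(2 ** n * x ** (3 * n + 2) / math.factorial(3 * n + 2)
                 for n in range(terms))
    return first + second

# Near 0 the series is dominated by its lowest-order terms x + x^2/2.
x = 0.001
print(f_series(x))
```

Because the factorials in the denominators outgrow \(2^n\) in the numerators, the partial sums stabilize quickly at any fixed \(x\text{,}\) consistent with the infinite radius of convergence.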