In a Taylor series, instead of taking the terms all the way to infinity, I could truncate the process at some degree. The result is a polynomial. Moreover, since partial sums are the approximation process for infinite series, this polynomial is an approximation to the Taylor series.
Definition 11.3.1.
If \(f(x)\) is analytic, its \(d\)th Taylor polynomial centered at \(\alpha\) is the truncation of its Taylor series, stopping at the \((x-\alpha)^d\) term.
Taylor polynomials give the best possible polynomial approximations to analytic functions. I’ll give a couple of examples.
Example 11.3.2.
Figure 11.3.3. Polynomial Approximations to \(e^x\text{.}\)
Look at the exponential function \(e^x\) centered at \(\alpha = 0\text{.}\) Its Taylor series was calculated in Example 11.2.5. Now I will calculate its first few Taylor polynomials. The graphs of these polynomials are shown in Figure 11.3.3.
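Its Taylor series is \(e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}\text{,}\) so each Taylor polynomial is the truncation of this sum at the matching degree.
\begin{align*}
e^x \amp \cong \sum_{k=0}^0 \frac{x^k}{k!} = 1 = p_0\\
e^x \amp \cong \sum_{k=0}^1 \frac{x^k}{k!} = 1 + x = p_1\\
e^x \amp \cong \sum_{k=0}^2 \frac{x^k}{k!} = 1 + x + \frac{x^2}{2!} = p_2\\
e^x \amp \cong \sum_{k=0}^3 \frac{x^k}{k!} = 1 + x + \frac{x^2}{2!} +
\frac{x^3}{3!} = p_3\\
e^x \amp \cong \sum_{k=0}^4 \frac{x^k}{k!} = 1 + x + \frac{x^2}{2!} +
\frac{x^3}{3!} + \frac{x^4}{4!} = p_4
\end{align*}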
In Figure 11.3.3, as the degree of the polynomial increases, the polynomial starts to match up with the graph of the exponential function more and more closely. Higher degree polynomials will give better and better approximations for more and more of the domain of the exponential function.
Example 11.3.4.
Figure 11.3.5. Polynomial Approximations to \(\sin x\text{.}\)
The approximations for sine only have odd exponents, since there are only odd monomials in the Taylor series for sine. Therefore, I’ll only calculate the odd Taylor polynomials here (the even Taylor polynomials simply add zero to the previous odd Taylor polynomial). The graphs of the first few Taylor polynomials are shown in Figure 11.3.5.
\begin{align*}
\sin x \amp \cong \sum_{k=0}^0 \frac{(-1)^k}{(2k+1)!}
x^{2k+1} = x = p_1\\
\sin x \amp \cong \sum_{k=0}^1 \frac{(-1)^k}{(2k+1)!}
x^{2k+1} = x - \frac{x^3}{3!} = p_3\\
\sin x \amp \cong \sum_{k=0}^2 \frac{(-1)^k}{(2k+1)!}
x^{2k+1} = x - \frac{x^3}{3!} + \frac{x^5}{5!} = p_5\\
\sin x \amp \cong \sum_{k=0}^3 \frac{(-1)^k}{(2k+1)!}
x^{2k+1} = x - \frac{x^3}{3!} + \frac{x^5}{5!} -
\frac{x^7}{7!} = p_7\\
\sin x \amp \cong \sum_{k=0}^4 \frac{(-1)^k}{(2k+1)!}
x^{2k+1} = x - \frac{x^3}{3!} + \frac{x^5}{5!} -
\frac{x^7}{7!} + \frac{x^9}{9!} = p_9
\end{align*}
The main application of approximation is calculating values of transcendental functions. I can’t directly calculate their values using basic arithmetic; I need a method.
Polynomials are particularly useful as approximation tools since they involve only the basic operations of arithmetic. Computers can calculate with the basic operations of arithmetic, so computers can work with polynomials. If I want to program a computer or calculator to calculate values of \(e^x\) or \(\sin x\) or \(\ln x\) or some other transcendental function, Taylor series are one of the best techniques. Here are a couple of examples of these approximations.
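As a minimal sketch of this idea in Python (the function name sin_taylor is hypothetical, and real library routines add range reduction and careful error control), the Taylor polynomial \(p_9\) for \(\sin x\) can be evaluated using nothing but addition, multiplication, and division:

def sin_taylor(x, terms=5):
    """Approximate sin(x) by the Taylor polynomial p_{2*terms - 1},
    using only the basic operations of arithmetic."""
    total = 0.0
    term = x  # first term of the series: x^1 / 1!
    for k in range(terms):
        total += term
        # each term is the previous one times -x^2 / ((2k+2)(2k+3))
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

print(sin_taylor(1.0))  # about 0.8414710; sin(1) = 0.84147098...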
Example 11.3.6.
The logarithm is a transcendental function which can’t be directly calculated. Here is a Taylor series for a particular version of the logarithm, which has a radius of convergence of \(R = 1\text{.}\)
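This is the series for \(\ln (1-x)\text{,}\) centered at \(\alpha = 0\text{.}\)
\begin{align*}
\ln (1-x) \amp = - \sum_{k=1}^{\infty} \frac{x^k}{k}
\end{align*}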
Using some clever arithmetic, I can write \(\ln 2 = - \ln
\frac{1}{2} = - \ln \left( 1 - \frac{1}{2} \right)\text{.}\) This lets me use this series to calculate \(\ln 2\) by evaluating at \(x = \frac{1}{2}\text{.}\) Now I’ll take the \(6\)th Taylor polynomial as an approximation for \(\ln 2\text{.}\)
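\begin{align*}
\ln 2 \amp \cong \sum_{k=1}^{6} \frac{1}{k} \left( \frac{1}{2}
\right)^k = \frac{1}{2} + \frac{1}{8} + \frac{1}{24} +
\frac{1}{64} + \frac{1}{160} + \frac{1}{384} =
\frac{1327}{1920} \cong 0.69115
\end{align*}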
This is not too far off from the value of \(\ln 2 = 0.69314\ldots\text{,}\) with an error of about \(0.002\text{.}\) Already with a \(6\)th degree approximation, this is a usable approximation for many applications.
I knew the accuracy of the previous approximation by comparing to a ‘known value’. That was a bit artificial, since the known value is really just a more accurate approximation. If I were calculating something for the first time, how would I know how accurate my result is?
This is the subject of error analysis of approximation methods, which is a major piece of mathematics in itself. Though I am not going to cover them here, there are theorems that give a good understanding of the error of a Taylor polynomial approximation. Those theorems give confidence in approximations: if an approximation with some fixed precision is needed, it can always be calculated with full confidence. This kind of mathematics is behind all the computer and calculator algorithms that calculate values of logarithms, exponentials, square roots, and other such functions.
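As a quick empirical illustration (only a comparison against a reference value, not one of the error theorems), the partial sums of the series for \(\ln 2\) from Example 11.3.6 can be checked in Python, where the error visibly shrinks as the degree grows:

import math

# Partial sums of ln 2 = sum over k >= 1 of (1/2)^k / k, compared
# against math.log(2) as a reference value.
total = 0.0
for k in range(1, 11):
    total += 0.5 ** k / k
    print(f"degree {k:2d}: {total:.6f}, error {abs(math.log(2) - total):.1e}")

At degree \(6\) the error is about \(2 \times 10^{-3}\text{,}\) matching the approximation above.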