
Math 8

April 14, 2000



Taylor Polynomials As Approximations and the Error Formula:


We have seen that Taylor polynomials give a way of approximating a function $ f(x)$ for values of $ x$ near some specific value $ x = a$, and that in some cases, higher and higher degree Taylor polynomials give better and better approximations to the function $ f$. For example, here are some Taylor polynomials approximating $ \sin x$ near $ x=0$:


[Figure: Taylor polynomials of increasing degree approximating $ \sin x$ near $ x=0$.]


As the degree of the Taylor polynomial increases, the approximation to sine becomes better and better, and the polynomial is a reasonable approximation to the function over a larger and larger interval.
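
For example, the first few nonzero Taylor polynomials for $ \sin x$ around $ x=0$ are

$\displaystyle P_1(x) = x, \qquad P_3(x) = x - \frac{x^3}{3!}, \qquad P_5(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!}.$

(The even-degree polynomials add nothing new here: $ P_2 = P_1$, $ P_4 = P_3$, and so on, since the even-order derivatives of $ \sin x$ vanish at $ x=0$.)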

In contrast, here are some Taylor polynomials approximating $ \displaystyle\frac{1}{x+1}$ near $ x=0$:


[Figure: Taylor polynomials of increasing degree approximating $ \displaystyle\frac{1}{x+1}$ near $ x=0$.]


As the degree of the Taylor polynomial increases, the approximation to $ \displaystyle\frac{1}{x+1}$ becomes better and better, but only for values of $ x$ between $ -1$ and $ 1$; outside of this interval, we don't seem to be getting a good approximation to the function.

We have seen how to use the error formula for Taylor polynomial approximations to determine whether the Taylor polynomials $ P_n(x)$ really do give better and better approximations to $ f(x)$ as $ n$ gets large. For the function $ f(x) = \sin x$, we can use the error formula to show that for every value of $ x$,

$\displaystyle \lim_{n \to \infty} P_n(x) = \sin x.$
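
To recall briefly how that argument goes: every derivative of $ \sin x$ is $ \pm\sin x$ or $ \pm\cos x$, hence bounded by 1 in absolute value, so the error formula gives

$\displaystyle \vert \sin x - P_n(x)\vert \leq \frac{\vert x\vert^{n+1}}{(n+1)!},$

and for any fixed $ x$ this bound goes to zero as $ n \to \infty$.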

For the function $ f(x) = \displaystyle\frac{1}{x+1}$, the error formula shows that for $ -1 < x < 1$,

$\displaystyle \lim_{n \to \infty} P_n(x) = \frac{1}{x+1},$

but for other values of $ x$, the Taylor polynomials $ P_n(x)$ may not approach a limit as $ n \to \infty$. In fact, by looking at the actual form of the Taylor polynomials

$\displaystyle P_n(x) = 1 - x + x^2 - x^3 + x^4 - x^5 + \cdots + (-1)^n x^n,$

we can see that if $ \vert x\vert > 1$, then

$\displaystyle \lim_{n \to \infty} \vert P_n(x)\vert = + \infty.$

The error formula is rather unwieldy and difficult to apply. Today we will learn a new method of finding the values of $ x$ for which

$\displaystyle \lim_{n \to \infty} P_n(x) = f(x).$


Taylor Series: Infinite Taylor Polynomials:


First, some terminology. Recall that we can write

$\displaystyle P_n(x) = \sum_{i=0}^n a_i (x-a)^i,$

where the coefficients $ a_i$ are given by

$\displaystyle a_i = \frac {f^{(i)}(a)}{i!}.$

In particular, if $ n > m$, then $ P_n(x)$ is just $ P_m(x)$ with some additional terms in higher powers of $ (x-a)$ added on. For $ f(x) = \displaystyle\frac{1}{x+1}$ around $ x=0$,

$\displaystyle P_5(x) = 1 - x + x^2 - x^3 + x^4 - x^5,$

$\displaystyle P_9(x) = 1 - x + x^2 - x^3 + x^4 - x^5 + x^6 - x^7 + x^8 - x^9.$
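
These coefficients come straight from the formula above: writing $ f(x) = (x+1)^{-1}$, repeated differentiation gives

$\displaystyle f^{(i)}(x) = (-1)^i \, i! \, (x+1)^{-(i+1)},$

so $ a_i = \displaystyle\frac{f^{(i)}(0)}{i!} = (-1)^i$, exactly the alternating coefficients appearing in $ P_5$ and $ P_9$.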

If, for some particular value of $ x$, the values of $ P_n(x)$ approach a limit as $ n \to \infty$,

$\displaystyle \lim_{n \to \infty}P_n(x) = \lim_{n \to \infty}\left[\sum_{i=0}^n a_i (x-a)^i \right]= L,$

then we say that

$\displaystyle \sum_{i=0}^\infty a_i (x-a)^i = L.$

An infinite sum like this is called an ``infinite series'', and this particular one is the Taylor series for the function $ f$ around the point $ x = a$. You can think of the Taylor series as an infinite degree polynomial whose finite pieces are the Taylor polynomials $ P_n(x)$. The Taylor series for $ f(x) = \displaystyle\frac{1}{x+1}$ around $ x=0$ is

$\displaystyle \sum_{i=0}^\infty (-1)^i x^i = 1 - x + x^2 - x^3 + x^4 - x^5 + \cdots$

We saw from the error formula that for $ -1 < x < 1$, this Taylor series actually gives the values of the function,

$\displaystyle 1 - x + x^2 - x^3 + x^4 - x^5 + \cdots = \lim_{n \to \infty}\left[1 - x + x^2 - x^3 + \cdots \pm x^n\right] = \frac{1}{x+1}.$

We are always hoping that this is the case, that the Taylor polynomials are better and better approximations to $ f(x)$ so that the Taylor series actually equals $ f(x)$. Since the Taylor series is an infinite sum, and what it ``gives'' us is a limit, we say in this happy circumstance that the Taylor series converges to $ f(x)$.

For most of the functions that we know, the Taylor series either gives us the function $ f(x)$, as the Taylor series for $ f(x) = \displaystyle\frac{1}{x+1}$ around $ x=0$ does when $ \vert x\vert < 1$, or else diverges (doesn't give us a number at all), as the Taylor series for $ f(x) = \displaystyle\frac{1}{x+1}$ around $ x=0$ does when $ \vert x\vert \geq 1$. In the next section we will see a method that can give us an interval of $ x$'s for which the Taylor series is guaranteed to converge (to give us an answer).

Notice that this method will guarantee that the Taylor series converges, NOT that the thing it converges to is the function $ f$. Almost always, that is, for virtually all the functions we meet in this course, if the Taylor series converges to anything then it converges to $ f(x)$. To be absolutely sure of that for any one particular function, we'd have to go back to the error formula (or use some facts we'll learn later about how you can build new Taylor series from old ones).


Infinite Sums and the Ratio Test:


You may have been staring at the equation

$\displaystyle 1 - x + x^2 - x^3 + x^4 - x^5 + \cdots = \frac{1}{x+1}$

and wondering how an infinite sum of numbers can equal anything at all. This is actually a deep philosophical problem if you start worrying about it. Mathematicians finesse the issue by saying, ``Well, all we're really saying is that the finite pieces of the infinite sum approach a limit as you include more and more terms,'' and that's the answer we will give in this course. However, to work comfortably with infinite series, it helps if you can develop an intuitive picture that these infinitely many numbers do add up to something. Try this one:

$\displaystyle \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots = 1.$

If you can make sense out of that, then you can use that sense to think about other infinite series.
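
One way to see it: the sum of the first $ n$ terms falls short of 1 by exactly $ \displaystyle\frac{1}{2^n}$,

$\displaystyle \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots + \frac{1}{2^n} = 1 - \frac{1}{2^n},$

and that shortfall shrinks to zero as you include more and more terms.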

This last example is a particular case of a geometric series, a series in which each term is gotten from the preceding one by multiplying by a fixed number $ r$, in this case $ r = \displaystyle\frac{1}{2}$. Putting this another way, if we write the series as

$\displaystyle A_0 + A_1 + A_2 + \cdots + A_i + A_{i+1} + \cdots,$

then for every $ i$ we have

$\displaystyle \frac{A_{i+1}}{A_i} = r.$

It turns out that for the same reason this series converges (equals a finite number) when $ r = \displaystyle\frac{1}{2}$, it converges whenever $ \vert r\vert < 1$. And it also turns out, amazingly enough, that we don't need to know that

$\displaystyle \frac{A_{i+1}}{A_i} = r,$

we just need to know that as $ i$ gets big,

$\displaystyle \frac{A_{i+1}}{A_i} \approx r.$

This is the idea behind the ratio test.
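
In fact, for a genuine geometric series we can say exactly what the limit is: if $ \vert r\vert < 1$, then

$\displaystyle A_0 + A_0 r + A_0 r^2 + \cdots = \lim_{n \to \infty} A_0 \, \frac{1 - r^{n+1}}{1 - r} = \frac{A_0}{1 - r},$

since $ r^{n+1} \to 0$. (Taking $ A_0 = r = \displaystyle\frac{1}{2}$ recovers the sum 1 from the example above.)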

The Ratio Test: To find out whether an infinite series

$\displaystyle A_0 + A_1 + A_2 + \cdots + A_i + A_{i+1} + \cdots$

converges, find the limit of the absolute values of the ratios of successive terms,

$\displaystyle \lim_{i \to \infty}\left\vert\frac{A_{i+1}}{A_i}\right\vert = r.$

If $ \vert r\vert < 1$ then the series converges. If $ \vert r\vert>1$ then the series diverges. If $ \vert r\vert = 1$ then the ratio test does not tell us whether or not the series converges.

Let's see how we can apply the ratio test to Taylor series. First we'll look at the Taylor series for $ f(x) = \displaystyle\frac{1}{x+1}$ around $ x=0$,

$\displaystyle 1 - x + x^2 - x^3 + x^4 - x^5 + \cdots$

The limit we need to find is

$\displaystyle \lim_{i \to \infty} \left\vert\frac{(-1)^{i+1}x^{i+1}}{(-1)^{i}x^{i}}\right\vert = \vert x\vert.$

So in this case $ r = \vert x\vert$, and the ratio test tells us that if $ \vert x\vert < 1$ then the Taylor series converges, and if $ \vert x\vert > 1$ then the Taylor series diverges. For $ \vert x\vert = 1$, the ratio test does not answer the question, and we have to look at the actual series. For $ x = -1$ the series is

$\displaystyle 1 + 1 + 1 + 1 + 1 + \cdots,$

which clearly diverges to infinity, and for $ x = 1$, the series is

$\displaystyle 1 - 1 + 1 - 1 + 1 - 1 + \cdots;$

the finite pieces of this bounce back and forth between zero and one, so again this does not converge.

(In general, if we want to tell whether a Taylor series converges for the values of $ x$ for which $ r=1$, we need to know some other convergence tests for infinite series besides the ratio test. We will not cover that topic in this course. Chapter 9 of the textbook develops some of the general theory of infinite series.)

Now let's look at the Taylor series for $ f(x) = \sin x$ around $ x=0$,

$\displaystyle x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots + (-1)^{i}\frac{x^{2i+1}}{(2i+1)!} + \cdots.$

The limit we need to find is

$\displaystyle \lim_{i \to \infty} \left\vert\frac{(-1)^{i+1}\displaystyle\frac{x^{2i+3}}{(2i+3)!}}{(-1)^{i}\displaystyle\frac{x^{2i+1}}{(2i+1)!}}\right\vert = \lim_{i \to \infty} \left\vert\frac{x^2(2i+1)!}{(2i+3)!}\right\vert =$

$\displaystyle \lim_{i \to \infty} \left\vert\frac{x^2}{(2i+3)(2i+2)}\right\vert = x^2\lim_{i \to \infty} \frac{1}{(2i+3)(2i+2)} = 0.$

In this case, we have $ r=0$ for every $ x$, and so the ratio test tells us that the Taylor series converges for every $ x$.

Of course, in both these cases, the ratio test does not tell us what value the Taylor series converges to. We need to go back to the error formula for Taylor polynomials to see that it does converge to the function $ f(x)$ itself.


Radius of Convergence:


For $ f(x) = \sin x$, the Taylor series around $ x=0$ converged for all $ x$; for $ f(x) = \displaystyle\frac{1}{x+1}$, it converged for $ \vert x\vert < 1$. In both cases, the Taylor series converged for $ x$ in some interval centered at $ x=0$. In the case of $ \sin x$, it was an infinite interval, $ -\infty < x < \infty$.

From the ratio test, we can see that something like this always happens. Let's look at a Taylor series,

$\displaystyle \sum_{i=0}^\infty a_i (x-a)^i,$

and try to use the ratio test to see where it converges. We have to look at the limit

$\displaystyle \lim_{i \to \infty} \left\vert \frac{a_{i+1}(x-a)^{i+1}}{a_i(x-a)^{i}} \right\vert = \vert x-a\vert \lim_{i \to \infty} \left\vert \frac{a_{i+1}}{a_i} \right\vert = \vert x-a\vert L,$

where $ L$ is the limit of the absolute values of the ratios of the coefficients $ a_i$.

So we have $ r = \vert x-a\vert L$, and the ratio test tells us that if $ \vert x-a\vert L<1$, or $ \vert x-a\vert < \displaystyle\frac{1}{L}$, then the series converges, and if $ \vert x-a\vert L>1$, or $ \vert x-a\vert > \displaystyle\frac{1}{L}$, then the series diverges. (In the case where $ L=0$, we have convergence for all $ x$, and in the case where $ L = +\infty$, we have convergence only for $ x = a$.) In other words, there is a radius of convergence $ R$, which may be zero or infinity or anything in between (here $ R = \displaystyle\frac{1}{L}$, interpreting $ \frac{1}{0}$ as $ \infty$ and $ \frac{1}{\infty}$ as $ 0$), such that if $ \vert x-a\vert<R$ then the Taylor series converges and if $ \vert x-a\vert>R$ then the Taylor series diverges. The point $ x = a$, being the center of the interval of $ x$'s for which the series converges, is sometimes called the center of convergence.

For $ \sin x$ (about $ x=0$) the radius of convergence was infinity, and for $ \displaystyle\frac{1}{x+1}$ (about $ x=0$) the radius of convergence was 1.
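
We can double-check the second of these against the formula $ R = \displaystyle\frac{1}{L}$: the Taylor series for $ \displaystyle\frac{1}{x+1}$ around $ x=0$ has coefficients $ a_i = (-1)^i$, so

$\displaystyle L = \lim_{i \to \infty} \left\vert \frac{a_{i+1}}{a_i} \right\vert = 1,$

and $ R = \displaystyle\frac{1}{L} = 1$, just as we found directly from the ratio test.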

Let's do one more example: finding the radius of convergence of the Taylor series for $ f(x) = \displaystyle\frac{1}{x^2+1}$ around $ x=0$. It turns out that there is a slick way to find the Taylor series: just replace $ x$ by $ x^2$ in the Taylor series for $ \displaystyle\frac{1}{x+1}$. So we get

$\displaystyle \frac{1}{x^2+1} = 1 - x^2 + x^4 - x^6 + x^8 - x^{10} + \cdots.$

Applying the ratio test, we look at the limit

$\displaystyle \lim_{i \to \infty} \left\vert \frac {(-1)^{i+1}x^{2(i+1)}} {(-1)^{i}x^{2i}} \right\vert = x^2.$

We have $ r = x^2$, and so the series converges for $ x^2 < 1$ (i.e., for $ \vert x\vert < 1$), and diverges for $ x^2 > 1$ (i.e., for $ \vert x\vert > 1$). The center of convergence is $ x=0$, and the radius of convergence is 1.


A Secret Trick Using Complex Numbers:


It isn't surprising that the Taylor series for $ \displaystyle\frac{1}{x+1}$ about $ x=0$ has radius of convergence 1. The function goes off to infinity at $ x = -1$, so we might expect the Taylor series to be stymied at that point. But what about $ \displaystyle\frac{1}{x^2+1}$? That is a perfectly good function on the whole real number line. Why doesn't its Taylor series work for every $ x$?

The answer is that Taylor series are good not only for real numbers, but also for complex numbers. And the radius of convergence works the same way, except now we're talking modulus instead of absolute value. The Taylor series for $ \displaystyle\frac{1}{x^2+1}$ about $ x=0$ converges for all complex values of $ x$ for which $ \vert x\vert < 1$, and diverges when $ \vert x\vert > 1$. In the complex plane, we have this picture:


[Figure: the circle of all complex numbers with modulus 1, centered at $ 0 + 0i$ in the complex plane.]


The circle around the point $ 0 + 0 i$ consists of all complex numbers with modulus equal to 1. The Taylor series converges inside this circle, and diverges outside it.

And now we can see why this circle can't be pushed outward any farther: The function $ \displaystyle\frac{1}{x^2+1}$ goes off to infinity at $ x = i$, and that's what stops the Taylor series from converging inside any larger circle.

This always happens for functions like this. Say $ f(x)$ is a quotient of polynomials. To find the radius of convergence for the Taylor series for $ f(x)$ about $ x = a$, find all the points in the complex plane where the denominator is zero, so $ f(x)$ goes off to infinity. The radius of convergence will be the radius of the largest circle with center $ x = a$ that does not surround any of those points at which the function goes off to infinity.
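
For instance (an example not worked in these notes), take $ f(x) = \displaystyle\frac{1}{x^2+1}$ again, but center the Taylor series at $ x = 1$. The denominator is zero at $ x = i$ and $ x = -i$, and the distance from the center to either of those points is

$\displaystyle \vert 1 - i \vert = \vert 1 + i \vert = \sqrt{1^2 + 1^2} = \sqrt{2},$

so the largest circle centered at $ x = 1$ that does not surround $ \pm i$ has radius $ \sqrt{2}$, and that is the radius of convergence.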

This fact is beyond the purview of this course and you're not ``responsible'' for it. If you take a course in complex analysis, you'll learn about this, and other fun things as well.



