Elements of the Differential and Integral Calculus/Chapter XVIII
Chapter XVIII, §143-149
143. Introduction. The student is already familiar with some methods of expanding certain functions into series. Thus, by the Binomial Theorem,
giving a finite power series from which the exact value of for any value of may be calculated. Also by actual division,
we get an equivalent series, all of whose coefficients except that of are constants, being a positive integer.
Suppose we wish to calculate the value of this function when , not by substituting directly in
but by substituting in the equivalent series
Assuming , (C) gives for
If we then assume the value of the function to be the sum of the first eight terms of series (C), the error we make is .0078125. However, in case we need the value of the function correct to two decimal places only, the number 1.99 is as close an approximation to the true value as we care for, since the error is less than .01. It is evident that if a greater degree of accuracy is desired, all we need to do is to use more terms of the power series
Since, however, we see at once that
there is no necessity for the above discussion, except for purposes of illustration. As a matter of fact the process of computing the value of a function from an equivalent series into which it has been expanded is of the greatest practical importance, the values of the elementary transcendental functions such as the sine, cosine, logarithm, etc., being computed most simply in this way.
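Since the displayed formulas have not survived in this transcription, the computation may be sketched as follows; the quoted figures .0078125 and 1.99 indicate that the function is 1/(1 − x) evaluated at x = 1/2, and that assumption is made here.

```python
# A sketch, assuming the series is 1 + x + x^2 + ... for 1/(1 - x),
# evaluated at x = 1/2 as the quoted error .0078125 suggests.

def geometric_partial_sum(x, n):
    """Sum of the first n terms 1 + x + x^2 + ... + x^(n-1)."""
    return sum(x**k for k in range(n))

x = 0.5
exact = 1 / (1 - x)                   # the true value, 2
approx = geometric_partial_sum(x, 8)  # sum of the first eight terms
error = exact - approx

print(approx)  # 1.9921875
print(error)   # 0.0078125, less than .01, so correct to two decimals
```

Using more terms halves the error with each term added, as the text remarks.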
So far we have learned how to expand only a few special forms into series; we shall now consider a method of expansion applicable to an extensive and important class of functions, known as Taylor's Theorem.
where lies between and . (61), which is one of the most far-reaching theorems in the Calculus, is called Taylor's Theorem. We see that it expresses as the sum of a finite series in .
The last term in (61), namely , is sometimes called the remainder in Taylor's Theorem after n terms. If this remainder converges toward zero as the number of terms increases without limit, then the right-hand side of (61) becomes an infinite power series called Taylor's Series. In that case we may write (61) in the form
and we say that the function has been expanded into a Taylor's Series. For all values of for which the remainder approaches zero as increases without limit, this series converges and its sum gives the exact value of , because the difference (= the remainder) between the function and the sum of terms of the series approaches the limit zero (§15).
If the series converges for values of for which the remainder does not approach zero as increases without limit, then the limit of the sum of the series is not equal to the function .
The infinite series (62) represents the function for those values of , and those only, for which the remainder approaches zero as the number of terms increases without limit.
It is usually easier to determine the interval of convergence of the series than that for which the remainder approaches zero; but in simple cases the two intervals are identical.
When the values of a function and its successive derivatives are known for some value of the variable, as x = a, then (62) is used for finding the value of the function for values of x near a, and (62) is also called the expansion of f(x) in the vicinity of x = a.
Illustrative Example 1. Expand in powers of .
- Substituting in (62), Ans.
- This converges for values of between and 2 and is the expansion of in the vicinity of , the remainder converging to zero.
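Such an expansion can be checked numerically. The sketch below assumes the function of this example is log x expanded in powers of (x − 1), which agrees with the stated interval of convergence ending at 2; the series is then (x − 1) − (x − 1)²/2 + (x − 1)³/3 − ⋯.

```python
import math

def log_taylor(x, n_terms):
    """Partial sum of the assumed Taylor series of log x about a = 1:
    (x-1) - (x-1)^2/2 + (x-1)^3/3 - ..."""
    h = x - 1
    return sum((-1)**(k + 1) * h**k / k for k in range(1, n_terms + 1))

# near x = 1 the partial sums approach math.log rapidly
print(log_taylor(1.5, 20), math.log(1.5))
```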
When a function of the sum of two numbers and is given, say , it is frequently desirable to expand the function into a power series in one of them, say . For this purpose we use another form of Taylor's Series, got by replacing by in (62), namely,
Illustrative Example 2. Expand in powers of
- Solution. Here
- Hence, placing
- Substituting in (61),
|1.||Expand in powers of .||Ans.|
|2.||Expand in powers of .||Ans.|
|3.||Expand in powers of .||Ans.|
|4.||Expand in powers of .||Ans.|
|5.||Expand in powers of .|
|6.||Expand in powers of .|
|7.||Expand in powers of .|
|8.||Expand in powers of .|
|9.||Expand in powers of .|
|10.||Expand in powers of .||Ans.|
|11.||Expand in powers of .||Ans.|
|12.||Expand in powers of .||Ans.|
|13.||Expand the following in powers of .|
145. Maclaurin's Theorem and Maclaurin's Series. A particular case of Taylor's Theorem is found by placing in (61), §144, giving
a special case of Taylor's Series that is very useful. The statements made concerning the remainder and the convergence of Taylor's Series apply with equal force to Maclaurin's Series, the latter being merely a special case of the former.
The student should not fail to note the importance of such an expansion as (65). In all practical computations results correct to a certain number of decimal places are sought, and since the process in question replaces a function perhaps difficult to calculate by an ordinary polynomial with constant coefficients, it is very useful in simplifying such computations. Of course we must use terms enough to give the desired degree of accuracy.
In the case of an alternating series ( §139) the error made by stopping at any term is numerically less than that term, since the sum of the series after that term is numerically less than that term.
Illustrative Example 1. Expand into an infinite power series and determine for what values of it converges.
- Solution. Differentiating first and then placing , we get
Substituting in (65),
Comparing with Ex. 20, §20, we see that the series converges for all values of . In the same way for .
- Solution. Here radian; that is, the angle is expressed in circular measure. Therefore, substituting in (B) of the last example,
- Summing up the positive and negative terms separately,
- which is correct to four decimal places, since the error made must be less than; i.e. less than .000003. Obviously the value of may be calculated to any desired degree of accuracy by simply including a sufficient number of additional terms.
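The four-decimal computation just described can be sketched in code, assuming (B) is the sine series x − x³/3! + x⁵/5! − ⋯ and the angle is one radian; the stated error bound .000003 agrees with 1/9! on that assumption.

```python
import math

def sin_series(x, n_terms):
    """Partial sum of the sine series x - x^3/3! + x^5/5! - ..."""
    total = 0.0
    for k in range(n_terms):
        total += (-1)**k * x**(2 * k + 1) / math.factorial(2 * k + 1)
    return total

approx = sin_series(1.0, 4)         # terms through x^7/7!
bound = 1.0**9 / math.factorial(9)  # first neglected term bounds the error
print(round(approx, 4), bound)      # 0.8415, bound about .000003
```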
Verify the following expansions of functions into power series by Maclaurin's Series and determine for what values of the variable they are convergent:
|1.||Convergent for all values of .|
|2.||Convergent for all values of .|
|3.||Convergent for all values of .|
|4.||Convergent for all values of , being any constant.|
|5.||Convergent for all values of , being any constant.|
|6.||Convergent if .|
|7.||Convergent if .|
|8.||Convergent if .|
|9.||Convergent if .|
|10.||Convergent if .|
|11.||Convergent for all values of .|
|12.||Convergent for all values of .|
13. Find three terms of the expansion of each of the following functions:
14. Show that cannot be expanded by Maclaurin's Theorem.
Compute the values of the following functions by substituting directly in the equivalent power series, taking enough terms to make the results agree with those given below.
Solution. Let in series of Ex. 1; then
First term
Second term
Third term
Fourth term (dividing third term by 3)
Fifth term (dividing fourth term by 4)
Sixth term (dividing fifth term by 5)
Seventh term (dividing sixth term by 6)
Eighth term (dividing seventh term by 7), etc.
Adding, Ans.
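The term-by-term divisions above can be sketched in code; the successive divisions by 3, 4, 5, … indicate that the series is 1 + 1 + 1/2! + 1/3! + ⋯ for e, and that is assumed here.

```python
# Each term of 1 + 1 + 1/2! + 1/3! + ... is got from the one before it
# by a single division, exactly as in the tabulated computation above.

def e_by_division(n_terms):
    term = 1.0
    total = 1.0              # the first term
    for n in range(1, n_terms):
        term /= n            # divide the previous term by n
        total += term
    return total

print(e_by_division(8))      # 2.718253..., e correct to four decimals
```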
16. Use series in Ex. 9.
17. Use series in Ex. 2.
18. Use series in Ex. 2.
19. Use series
20. Use series in Ex. 8.
21. Use series (B), §145.
22. Use series (B), §145.
In more advanced treatises it is shown that, for values of within the interval of convergence, the sum of a power series is differentiable and that its derivative is obtained by differentiating the series term by term, as in an ordinary sum. Thus from (B), §145,
Differentiating both sides, we get
which is the series of Ex. 2, §145. This illustrates how we may obtain a new power series from a given power series by differentiation.
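The term-by-term differentiation can be verified numerically; the sketch below assumes (B), §145, is the sine series, whose termwise derivative should then agree with the cosine.

```python
import math

def sin_series(x, n):
    """Partial sum of x - x^3/3! + x^5/5! - ..."""
    return sum((-1)**k * x**(2*k+1) / math.factorial(2*k+1) for k in range(n))

def sin_series_derivative(x, n):
    """Differentiate each term: d/dx x^(2k+1)/(2k+1)! = x^(2k)/(2k)!."""
    return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(n))

x = 0.7
# the differentiated series agrees with cos x, the series of Ex. 2
print(sin_series_derivative(x, 10), math.cos(x))
```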
Differentiating the power series of Ex. 6, §145, we obtain
In the same way from Ex. 2, §145,
146. Computation by series. I. Alternating series. Exs. 15-24 of the last exercise illustrate to what use series may be put for purposes of computation. Obviously it is very important to know the percentage of error in a result, since the computation must necessarily stop at some term in the series, the sum of the subsequent terms being thereby neglected. The absolute error made is of course equal to the limit of the sum of all the neglected terms. In some series this error is difficult to find, but in the case of alternating series it has been shown in §140 that the sum of the neglected terms is numerically less than the first of these terms. Hence the absolute error made is less than the first term neglected. Fortunately a large proportion of the series used for computation purposes are alternating series, and therefore this easy method of finding an upper limit of the absolute error and the percentage of error is available. Let us illustrate by means of an example.
Illustrative Example 1. Determine the greatest possible error and percentage of error made in computing the numerical value of the sine of one radian from the sine series,
- (a) when all terms beyond the second are neglected;
- (b) when all terms beyond the third are neglected.
- Solution. Let in series; then
- (a) Using only the first two terms,
- the absolute error is less than ; i.e. , and the percentage of error is less than 1 per cent.
- (b) Using only the first three terms,
- the absolute error is less than ; i.e. , and the percentage of error is less than of 1 per cent.
- Moreover, the exact value of the sine of one radian lies between .8333 and .841666, since the sum of an alternating series is alternately greater and less than its successive partial sums.
Determine the greatest possible error and percentage of error made in computing the numerical value of each of the following functions from its corresponding series
- (a) when all terms beyond the second are neglected;
- (b) when all terms beyond the third are neglected.
II. The computation of by series.
From Ex. 8, §145, we have
Since this series converges for values of between -1 and +1, we may let , giving
Evidently we might have used the series of Ex. 9, §145, instead. Both of these series converge rather slowly, but there are other series, found by more elaborate methods, by means of which the correct value of to a large number of decimal places may be easily calculated.
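The slow convergence remarked on here can be seen directly; the sketch assumes the series used is arctan x = x − x³/3 + x⁵/5 − ⋯ with x = 1, so that four times its sum is π.

```python
def pi_by_arctan(n_terms):
    """4 * (1 - 1/3 + 1/5 - ...), the slowly convergent series for pi."""
    return 4 * sum((-1)**k / (2 * k + 1) for k in range(n_terms))

# even after 1000 terms only about three decimal places are settled
print(pi_by_arctan(1000))   # ≈ 3.1406
```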
III. The computation of logarithms by series.
Series play a very important role in making the necessary calculations for the construction of logarithmic tables.
From Ex. 6, §145, we have
This series converges for , and we can find by placing in (A), giving
But this series is not well adapted to numerical computation, because it converges so slowly that it would be necessary to take 1000 terms in order to get the value of correct to three decimal places. A rapidly converging series for computing logarithms will now be deduced.
By the theory of logarithms,
|(B)||By 8, §1|
which is convergent when is numerically less than unity. Let
a series which is convergent for all positive values of and ; and it is always possible to choose and so as to make it converge rapidly.
Placing and in (E), we get
Placing and in (E), we get
It is only necessary to compute the logarithms of prime numbers in this way, the logarithms of composite numbers being then found by using theorems 7-10, §1. Thus
All the above are Napierian or natural logarithms, i.e. the base is . If we wish to find Briggs's or common logarithms, where the base 10 is employed, all we need to do is to change the base by means of the formula
In the actual computation of a table of logarithms only a few of the tabulated values are calculated from series, all the rest being found by employing theorems in the theory of logarithms and various ingenious devices designed for the purpose of saving work.
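The rapidly converging series (E) can be sketched as follows. The form assumed here, suggested by the derivation from (C), is log(M + h) = log M + 2[z + z³/3 + z⁵/5 + ⋯] with z = h/(2M + h); composite numbers then follow from the primes as in the text.

```python
def log_next(log_M, M, h=1, n_terms=20):
    """Assumed form of (E): log(M + h) = log M + 2(z + z^3/3 + z^5/5 + ...),
    z = h / (2M + h); rapidly convergent for M >= 1."""
    z = h / (2 * M + h)
    series = sum(z**(2 * k + 1) / (2 * k + 1) for k in range(n_terms))
    return log_M + 2 * series

log2 = log_next(0.0, 1)   # placing M = 1, h = 1, since log 1 = 0
log3 = log_next(log2, 2)  # placing M = 2, h = 1
log4 = 2 * log2           # a composite number from its prime factors
print(log2, log3, log4)
```

These are Napierian logarithms; dividing by log 10 would give the common logarithms.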
Calculate by the methods of this article the following logarithms:
147. Approximate formulas derived from series. Interpolation. In the two preceding sections we evaluated a function from its equivalent power series by substituting the given value of in a certain number of the first terms of that series, the number of terms taken depending on the degree of accuracy required. It is of great practical importance to note that this really means that we are considering the function as approximately equal to an ordinary polynomial with constant coefficients. For example, consider the series
This is an alternating series for both positive and negative values of . Hence the error made if we assume to be approximately equal to the sum of the first terms is numerically less than the th term (§139). For example, assume
and let us find for what values of x this is correct to three places of decimals. To do this, set
This gives numerically less than ; i.e. (B) is correct to three decimal places when lies between and .
hence we can find for what values of x a polynomial represents the function to any desired degree of accuracy by writing the inequality
|(E)||limit of error,|
and solving for , provided we know the maximum value of . Thus if we wish to find for what values of the formula
Therefore gives the correct value of to two decimal places if ; i.e. if lies between and . This agrees with the discussion of (A) as an alternating series.
Again, if we expand by Taylor's Series, (62), §144, in powers of , we get
Hence for all values of in the neighborhood of some fixed value we have the approximate formula
Transposing sin a and dividing by , we get
Since is constant, this means that:
- The change in the value of the sine is proportional to the change in the angle for values of the angle near .
For example, let radians, and suppose it is required to calculate the sines of and by the approximate formula (G). Then
This discussion illustrates the principle known as interpolation by first differences. In general, then, by Taylor's Series, we have the approximate formula
If the constant , this formula asserts that the ratio of the increments of function and variable for all values of the latter differing little from the fixed value a is constant.
Care must, however, be observed in applying (H). For while the absolute error made in using it in a given case may be small, the percentage of error may be so large that the results are worthless.
Then interpolation by second differences is necessary. Here we use one more term in Taylor's Series, giving the approximate formula
These results are correct to four decimal places.
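Formulas (H) and (I) can be sketched directly; the angle 30° and the 1° step below are assumed values chosen for illustration.

```python
import math

def interp_first(f_a, fprime_a, x):
    """Formula (H), first differences: f(a + x) ≈ f(a) + x f'(a)."""
    return f_a + x * fprime_a

def interp_second(f_a, fprime_a, fsecond_a, x):
    """Formula (I), second differences: one more Taylor term."""
    return f_a + x * fprime_a + x**2 / 2 * fsecond_a

a = math.radians(30)   # the angle in circular measure (assumed example)
x = math.radians(1)    # a step of one degree
approx1 = interp_first(math.sin(a), math.cos(a), x)
approx2 = interp_second(math.sin(a), math.cos(a), -math.sin(a), x)
print(approx1, approx2, math.sin(a + x))
```

Second differences here improve the result from about four correct decimals to about six.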
1. Using formula (H) for interpolation by first differences, calculate the following functions:
(a) , taking . (b) , taking . (c) , taking . (d) , taking .
2. Using formula (I) for interpolation by second differences, calculate the following functions:
(a) , taking . (b) , taking . (c) , taking . (d) , taking .
3. Draw the graphs of the functions , , , respectively, and compare them with the graph of .
148. Taylor's Theorem for functions of two or more variables. The scope of this book will allow only an elementary treatment of the expansion of functions involving more than one variable by Taylor's Theorem. The expressions for the remainder are complicated and will not be written down.
Having given the function
it is required to expand the function
in powers of and .
Consider the function
which may then be expanded in powers of by Maclaurin's Theorem, (64), § 145, giving
Let us now express the successive derivatives of with respect to in terms of the partial derivatives of with respect to and . Let
then by (51), §125,
But from (F),
and since is a function of and through and ,
or, since from (F), and ,
Replacing by in (J), we get
In the same way the third derivative is
|, i.e. is replaced by ,|
and so on.
Substituting these results in (E), we get
To get , replace by 1 in (66), giving Taylor's Theorem for a function of two independent variables,
which is the required expansion in powers of and . Evidently (67) is also adapted to the expansion of in powers of and by simply interchanging with and with . Thus
Similarly, for three variables we shall find
and so on for any number of variables.
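A quick numerical check of (67): for a function whose third and higher partial derivatives vanish, the expansion through the second-order terms reproduces the function exactly. The quadratic below is a hypothetical example chosen only for the check.

```python
def f(x, y):
    # a hypothetical quadratic, so all third partials vanish
    return 2 + 3*x - y + x*y + x**2

a, b, h, k = 1.0, 2.0, 0.3, -0.4   # expand about (a, b), increments h, k
fx = 3 + b + 2*a                   # f_x = 3 + y + 2x at (a, b)
fy = -1 + a                        # f_y = -1 + x at (a, b)
fxx, fxy, fyy = 2.0, 1.0, 0.0      # constant second partials

# f(a+h, b+k) = f + h f_x + k f_y + (1/2)(h^2 f_xx + 2hk f_xy + k^2 f_yy)
expansion = (f(a, b) + fx*h + fy*k
             + 0.5 * (fxx*h**2 + 2*fxy*h*k + fyy*k**2))
print(expansion, f(a + h, b + k))   # equal: the expansion terminates
```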
1. Given , expand in powers of and .
The third and higher partial derivatives are all zero. Substituting in (67),
2. Given , expand in powers of , , .
The third and higher partial derivatives are all zero. Substituting in (68),
3. Given , expand in powers of and .
4. Given , expand in powers of , , .
149. Maxima and minima of functions of two independent variables. The function is said to be a maximum at when is greater than for all values of and in the neighborhood of and . Similarly, is said to be a minimum at when is less than for all values of and in the neighborhood of and .
These definitions may be stated in analytical form as follows:
If, for all values of and numerically less than some small positive quantity,
(A) a negative number, then is a maximum value of .
(B) a positive number, then is a minimum value of .
These statements may be interpreted geometrically as follows: a point on the surface
It was shown in §81 and §82, that a necessary condition that a function of one variable should have a maximum or a minimum for a given value of the variable was that its first derivative should be zero for the given value of the variable. Similarly, for a function of two independent variables, a necessary condition that should be a maximum or a minimum (i.e. a turning value) is that for ,
is always negative or always positive for all values of sufficiently small numerically. By §81, §82, a necessary condition for this is that shall vanish for , or, what amounts to the same thing, shall vanish for . Similarly, (A) and (B) must hold when , giving as a second necessary condition that shall vanish for . In order to determine sufficient conditions that shall be a maximum or a minimum, it is necessary to proceed to higher derivatives. To derive sufficient conditions for all cases is beyond the scope of this book. The following discussion, however, will suffice for all the problems given here.
Expanding by Taylor's Theorem, (67), §148, replacing by and by , we get
where the partial derivatives are evaluated for , and denotes the sum of all the terms not written down. All such terms are of a degree higher than the second in and .
Since and , from (C), we get, after transposing ,
If is a turning value, the expression on the left-hand side of (E) must retain the same sign for all values of and sufficiently small in numerical value, the negative sign for a maximum value (see (A)) and the positive sign for a minimum value (see (B)); i.e. will be a maximum or a minimum according as the right-hand side of (E) is negative or positive. Now is of a degree higher than the second in and . Hence as and diminish in numerical value, it seems plausible to conclude that the numerical value of will eventually become and remain less than the numerical value of the sum of the three terms of the second degree written down on the right-hand side of (E). Then the sign of the right-hand side (and therefore also of the left-hand side) will be the same as the sign of the expression
But from Algebra we know that the quadratic expression
Hence the following rule for finding maximum and minimum values of a function .
- First Step. Solve the simultaneous equations
- Second Step. Calculate for these values of x and y the value of
- Third Step. The function will have a
|neither a maximum nor a minimum if .|
|The question is undecided if .|
The student should notice that this rule does not necessarily give all maximum and minimum values. For a pair of values of and determined by the First Step may cause to vanish, and may lead to a maximum or a minimum or neither. Further investigation is therefore necessary for such values. The rule is, however, sufficient for solving many important examples.
The question of maxima and minima of functions of three or more independent variables must be left to more advanced treatises.
Illustrative Example 1. Examine the function for maximum and minimum values.
Solution. First step. Solving these two simultaneous equations, we get
Second Step. Third Step. When and , and there can be neither a maximum nor a minimum at .
When and ; and since , we have the conditions for a maximum value of the function fulfilled at . Substituting in the given function, we get its maximum value equal to .
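The three-step rule can be sketched in code. Since the example's formula did not survive transcription, the function 3xy − x³ − y³ below is an assumed stand-in exhibiting the behavior described: no turning value at (0, 0) and a maximum at a second critical point.

```python
def second_derivative_test(fxx, fyy, fxy, x, y):
    """Third Step: classify a critical point from the second partials,
    using the discriminant f_xx f_yy - (f_xy)^2."""
    A, B, C = fxx(x, y), fxy(x, y), fyy(x, y)
    D = A * C - B**2
    if D > 0:
        return "maximum" if A < 0 else "minimum"
    if D < 0:
        return "neither"
    return "undecided"

# Assumed example f = 3xy - x^3 - y^3: f_x = 3y - 3x^2, f_y = 3x - 3y^2.
# First Step gives the critical points (0, 0) and (1, 1).
fxx = lambda x, y: -6 * x
fyy = lambda x, y: -6 * y
fxy = lambda x, y: 3.0

print(second_derivative_test(fxx, fyy, fxy, 0, 0))  # neither
print(second_derivative_test(fxx, fyy, fxy, 1, 1))  # maximum
```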
Illustrative Example 2. Divide into three parts such that their product shall be a maximum.
Solution. Let first part, second part; then third part, and the function to be examined is
First Step. Solving simultaneously, we get as one pair of values .
Second Step.
Third Step. When and ; and since , it is seen that our product is a maximum when . Therefore the third part is also , and the maximum value of the product is .
|1. Find the minimum value of .||Ans. .|
2. Show that is a minimum when , and a maximum when .
3. Show that has neither a maximum nor a minimum.
4. Show that the maximum value of is .
5. Find the greatest rectangular parallelepiped that can be inscribed in an ellipsoid. That is, find the maximum value of (= volume) subject to the condition
Hint. Let , and substitute the value of from the equation of the ellipsoid. This gives
where is a function of only two variables.
6. Show that the surface of a rectangular parallelepiped of given volume is least when the solid is a cube.
7. Examine for maximum and minimum values.
|Ans.||Maximum when ;|
|minimum when , and when .|
8. Show that when the radius of the base equals the depth, a steel cylindrical standpipe of a given capacity requires the least amount of material in its construction.
9. Show that the most economical dimensions for a rectangular tank to hold a given volume are a square base and a depth equal to one half the side of the base.
10. The electric time constant of a cylindrical coil of wire is
where is the mean radius, is the difference between the internal and external radii, is the axial length, and are known constants. The volume of the coil is . Find the values of which make a minimum if the volume of the coil is fixed.
- Also known as Taylor's Formula.
- Published by [Brook Taylor] (1685-1731) in his [Methodus Incrementorum], London, 1715.
- In these examples assume that the functions can be developed into a power series.
- Named after [Maclaurin] (1698-1746), being first published in his [Treatise of Fluxions], Edinburgh, 1742. The series is really due to [Stirling] (1692-1770).
- Since here and we have, by substituting in the last term of (65),
But can never exceed unity, and from (Ex. 19, §142), for all values of ; that is, in this case the limit of the remainder is for all values of for which the series converges. This is also the case for all the functions considered in this book.
- Since .
- Since .
- The student should notice that we have treated the series as if they were ordinary sums, but they are not; they are limits of sums. To justify this step is beyond the scope of this book.
- See Cours d'Analyse, Vol. I, by C. Jordan.
- Peano has shown that this conclusion does not always hold. See the article on "Maxima and Minima of Functions of Several Variables," by Professor James Pierpont in the Bulletin of the American Mathematical Society, Vol. IV.
- The discussion of the text merely renders the given rule plausible. The student should observe that the case is omitted from the discussion.
- are not considered, since from the nature of the problem we would then have a minimum.