# Elements of the Differential and Integral Calculus/Chapter XVIII

 Elements of the Differential and Integral Calculus by William Anthony Granville Chapter XVIII, §134-149

EXPANSION OF FUNCTIONS

143. Introduction. The student is already familiar with some methods of expanding certain functions into series. Thus, by the Binomial Theorem,

 (A) $\left( a + x \right)^4 = a^4 + 4a^3x + 6a^2x^2 + 4ax^3 + x^4,$

giving a finite power series from which the exact value of $(a + x)^4$ for any value of $x$ may be calculated. Also by actual division,

 (B) $\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots + x^{n-1} + \left( \frac{1}{1-x}\right)x^n,$

we get an equivalent series, all of whose coefficients except that of $x^n$ are constants, $n$ being a positive integer.

Suppose we wish to calculate the value of this function when $x = .5$, not by substituting directly in

$\frac{1}{1-x},$

but by substituting $x = .5$ in the equivalent series

 (C) $\left( 1 + x + x^2 + x^3 + \cdots + x^{n-1} \right) + \left( \frac{1}{1-x}\right) x^n.$

Assuming $n = 8$, (C) gives for $x = .5$

 (D) $\frac{1}{1-x} = 1.9921875 + .0078125.$

If we then assume the value of the function to be the sum of the first eight terms of series (C), the error we make is .0078125. However, in case we need the value of the function correct to two decimal places only, the number 1.99 is as close an approximation to the true value as we care for, since the error is less than .01. It is evident that if a greater degree of accuracy is desired, all we need to do is to use more terms of the power series

 (E) $1 + x + x^2 + x^3 + \cdots.$

Since, however, we see at once that

$\left[ \frac{1}{1-x} \right ]_{x=.5} = 2,$

there is no necessity for the above discussion, except for purposes of illustration. As a matter of fact the process of computing the value of a function from an equivalent series into which it has been expanded is of the greatest practical importance, the values of the elementary transcendental functions such as the sine, cosine, logarithm, etc., being computed most simply in this way.
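The arithmetic of (C) and (D) is easy to reproduce with a short program. The sketch below (in Python, which is of course not part of the original text) computes the sum of the first $n$ terms of the series and the remainder term for $x = .5$, $n = 8$:

```python
# Partial sum of the first n terms of series (C), 1 + x + ... + x^(n-1),
# and the remainder term (1/(1-x)) * x^n, for x = .5 and n = 8.
def partial_sum(x, n):
    """Sum of the first n terms of the geometric series."""
    return sum(x**k for k in range(n))

def remainder(x, n):
    """Remainder term of (C): (1/(1-x)) * x**n."""
    return x**n / (1 - x)

x, n = 0.5, 8
print(partial_sum(x, n))                    # 1.9921875, as in (D)
print(remainder(x, n))                      # 0.0078125, as in (D)
print(partial_sum(x, n) + remainder(x, n))  # 2.0, the exact value of 1/(1-x)
```

Since $x = .5$ and its powers are exact in binary floating point, the printed values agree with (D) exactly.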

So far we have learned how to expand only a few special forms into series; we shall now consider a method of expansion applicable to an extensive and important class of functions and called Taylor's Theorem.

144. Taylor's Theorem[1] and Taylor's Series. Replacing $b$ by $x$ in (E), §107, the extended theorem of the mean takes on the form

 (61) $f(x) = f(a) + \frac{\left(x-a\right)}{1!}f'\left(a\right) + \frac{\left(x-a\right)^2}{2!}f''\left(a\right) + \frac{\left(x-a\right)^3}{3!}f'''\left(a\right) + \cdots$ $+ \frac{\left(x-a\right)^{n-1}}{\left(n-1\right)!}f^{\left(n-1\right)}\left(a\right) + \frac{\left(x-a\right)^{n}}{n!}f^{\left(n\right)}\left(x_1\right),$

where $x_1$ lies between $a$ and $x$. (61), which is one of the most far-reaching theorems in the Calculus, is called Taylor's Theorem. We see that it expresses $f(x)$ as the sum of a finite series in $(x - a)$.

The last term in (61), namely $\frac{\left(x-a\right)^n}{n!}f^{\left(n\right)}\left(x_1\right)$, is sometimes called the remainder in Taylor's Theorem after $n$ terms. If this remainder converges toward zero as the number of terms increases without limit, then the right-hand side of (61) becomes an infinite power series called Taylor's Series[2]. In that case we may write (61) in the form

 (62) $f(x) = f(a) + \frac{\left(x-a\right)}{1!}f'\left(a\right) + \frac{\left(x-a\right)^2}{2!}f''\left(a\right) + \frac{\left(x-a\right)^3}{3!}f'''\left(a\right) + \cdots,$

and we say that the function has been expanded into a Taylor's Series. For all values of $x$ for which the remainder approaches zero as $n$ increases without limit, this series converges and its sum gives the exact value of $f(x)$, because the difference (= the remainder) between the function and the sum of $n$ terms of the series approaches the limit zero (§15).

If the series converges for values of $x$ for which the remainder does not approach zero as $n$ increases without limit, then the limit of the sum of the series is not equal to the function $f(x)$.

The infinite series (62) represents the function for those values of $x$, and those only, for which the remainder approaches zero as the number of terms increases without limit.

It is usually easier to determine the interval of convergence of the series than that for which the remainder approaches zero; but in simple cases the two intervals are identical.

When the values of a function and its successive derivatives are known for some value of the variable, as $x = a$, then (62) is used for finding the value of the function for values of $x$ near $a$, and (62) is also called the expansion of $f(x)$ in the vicinity of $x = a$.

Illustrative Example 1. Expand $\log x$ in powers of $(x - 1)$.

Solution
$\begin{array}{rclrcl} f(x) & = & \log x, & f(1) & = & 0; \\ f'(x) & = & \tfrac{1}{x}, & f'(1) & = & 1; \\ f''(x) & = & -\tfrac{1}{x^2}, & f''(1) & = & -1; \\ f'''(x) & = & \tfrac{2}{x^3}, & f'''(1) & = & 2. \end{array}$
Substituting in (62), $\log x = (x-1) - \tfrac{1}{2}(x-1)^2 + \tfrac{1}{3}(x-1)^3 - \cdots.$ Ans.
This converges for values of $x$ between 0 and 2 and is the expansion of $\log x$ in the vicinity of $x = 1$, the remainder converging to zero.
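The convergence of this expansion near $x = 1$ is easy to check numerically. The following sketch (Python, added for illustration) sums the series of Illustrative Example 1 and compares the result with the library logarithm:

```python
import math

# Partial sums of log x = (x-1) - (x-1)^2/2 + (x-1)^3/3 - ..., the
# expansion of log x in the vicinity of x = 1.
def log_series(x, n):
    return sum((-1)**(k + 1) * (x - 1)**k / k for k in range(1, n + 1))

print(f"{log_series(1.5, 50):.6f}")  # 0.405465
print(f"{math.log(1.5):.6f}")        # 0.405465
```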

When a function of the sum of two numbers $a$ and $x$ is given, say $f(a + x)$, it is frequently desirable to expand the function into a power series in one of them, say $x$. For this purpose we use another form of Taylor's Series, got by replacing $x$ by $a + x$ in (62), namely,

 (63) $f(a + x) = f(a) + \frac{x}{1!}f'\left(a\right) + \frac{x^2}{2!}f''\left(a\right) + \frac{x^3}{3!}f'''\left(a\right) + \cdots.$

Illustrative Example 2. Expand $\sin(a+x)$ in powers of $x$.

Solution. Here $f(a+x) = \sin(a+x).$
Hence, placing
$\begin{array}{rcl} x &=& 0, \\ f(a) &=& \sin a, \\ f'(a) &=& \cos a, \\ f''(a) &=& -\sin a, \\ f'''(a) &=& -\cos a, \\ \end{array}$
Substituting in (63),
$\sin(a+x) = \sin a +\tfrac{x}{1}\cos a - \tfrac{x^2}{2!}\sin a - \tfrac{x^3}{3!}\cos a + \cdots.$ Ans.
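The expansion just found can be tested numerically; the sketch below (Python, not in the original) uses the cyclic pattern $\sin a$, $\cos a$, $-\sin a$, $-\cos a$ of the derivatives:

```python
import math

# Partial sums of (63) for f(a + x) = sin(a + x): the derivatives at a
# cycle through sin a, cos a, -sin a, -cos a, ...
def sin_sum_series(a, x, n):
    derivs = [math.sin(a), math.cos(a), -math.sin(a), -math.cos(a)]
    return sum(derivs[k % 4] * x**k / math.factorial(k) for k in range(n))

a, x = math.pi / 6, 0.2
# Ten terms already agree with the library sine to high accuracy:
print(abs(sin_sum_series(a, x, 10) - math.sin(a + x)) < 1e-10)   # True
```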
EXAMPLES[3]
 1. Expand $e^x$ in powers of $x-2$. Ans. $e^x = e^2 + e^2(x-2) + \tfrac{e^2}{2!}(x-2)^2 + \cdots.$
 2. Expand $x^3 - 2x^2 + 5x - 7$ in powers of $x-1$. Ans. $-3 + 4(x-1) + (x-1)^2 + (x-1)^3.$
 3. Expand $3y^2 - 14y + 7$ in powers of $y-3$. Ans. $-8 + 4(y-3) + 3(y-3)^2.$
 4. Expand $5z^2 + 7z + 3$ in powers of $z-2$. Ans. $37 + 27(z-2) + 5(z-2)^2.$
 5. Expand $4x^3 - 17x^2 + 11x + 2$ in powers of $x-4$.
 6. Expand $5y^4 + 6y^3 - 17y^2 + 18y - 20$ in powers of $y+4$.
 7. Expand $e^x$ in powers of $x+1$.
 8. Expand $\sin x$ in powers of $x-\alpha$.
 9. Expand $\cos x$ in powers of $x-\alpha$.
 10. Expand $\cos (a+x)$ in powers of $x$. Ans. $\cos(a+x) = \cos a - x \sin a - \tfrac{x^2}{2!}\cos a + \tfrac{x^3}{3!}\sin a + \cdots.$
 11. Expand $\log (x+h)$ in powers of $x$. Ans. $\log(x+h) = \log h + \tfrac{x}{h} - \tfrac{x^2}{2h^2} + \tfrac{x^3}{3h^3} - \cdots.$
 12. Expand $\tan (x+h)$ in powers of $h$. Ans. $\tan(x+h) = \tan x + h \sec^2x + h^2\sec^2x\tan x + \cdots.$
 13. Expand the following in powers of $h$:
 (a) $(x+h)^n = x^n + nx^{n-1}h + \tfrac{n(n-1)}{2!}x^{n-2}h^2 + \tfrac{n(n-1)(n-2)}{3!}x^{n-3}h^3 + \cdots.$ (b) $e^{x+h} = e^x \left( 1 + h + \tfrac{h^2}{2!} + \tfrac{h^3}{3!} + \cdots\right).$

145. Maclaurin's Theorem and Maclaurin's Series. A particular case of Taylor's Theorem is found by placing $a=0$ in (61), §144, giving

 (64) $f(x) = f(0) + \frac{x}{1!}f'\left(0\right) + \frac{x^2}{2!}f''\left(0\right) + \frac{x^3}{3!}f'''\left(0\right) + \cdots$ $+ \frac{x^{n-1}}{\left(n-1\right)!}f^{\left(n-1\right)}\left(0\right) + \frac{x^{n}}{n!}f^{\left(n\right)}\left(x_1\right),$

where $x_1$ lies between 0 and $x$. (64) is called Maclaurin's Theorem. The right-hand member is evidently a series in $x$ in the same sense that (62), §144, is a series in $x - a$.

Placing $a = 0$ in (62), §144, we get Maclaurin's Series[4],

 (65) $f(x) = f(0) + \frac{x}{1!}f'\left(0\right) + \frac{x^2}{2!}f''\left(0\right) + \frac{x^3}{3!}f'''\left(0\right) + \cdots,$

a special case of Taylor's Series that is very useful. The statements made concerning the remainder and the convergence of Taylor's Series apply with equal force to Maclaurin's Series, the latter being merely a special case of the former.

The student should not fail to note the importance of such an expansion as (65). In all practical computations results correct to a certain number of decimal places are sought, and since the process in question replaces a function perhaps difficult to calculate by an ordinary polynomial with constant coefficients, it is very useful in simplifying such computations. Of course we must use terms enough to give the desired degree of accuracy.

In the case of an alternating series (§139) the error made by stopping at any term is numerically less than that term, since the sum of the series after that term is numerically less than that term.

Illustrative Example 1. Expand $\cos x$ into an infinite power series and determine for what values of $x$ it converges.

Solution. Differentiating first and then placing $x = 0$, we get
$\begin{array}{rclrcl} f(x) &=& \cos x, &f(0) &=& 1,\\ f'(x) &=& -\sin x, &f'(0) &=& 0,\\ f''(x) &=& -\cos x, &f''(0) &=& -1,\\ f'''(x) &=& \sin x, &f'''(0) &=& 0,\\ f^{iv}(x) &=& \cos x, &f^{iv}(0) &=& 1,\\ f^{v}(x) &=& -\sin x, &f^{v}(0) &=& 0,\\ f^{vi}(x) &=& -\cos x, &f^{vi}(0) &=& -1,\\ &etc.&, & &etc.& \end{array}$

Substituting in (65),

 (A) $\cos x = 1 - \tfrac{x^2}{2!} + \tfrac{x^4}{4!} - \tfrac{x^6}{6!} + \cdots.$

Comparing with Ex. 20, §142, we see that the series converges for all values of $x$. In the same way, for $\sin x$,

 (B) $\sin x = x - \tfrac{x^3}{3!} + \tfrac{x^5}{5!} - \tfrac{x^7}{7!} + \cdots,$

which converges for all values of $x$ (Ex. 21, §142). [5]

Illustrative Example 2. Using the series (B) found in the last example, calculate $\sin 1$ correct to four decimal places.

Solution. Here $x = 1$ radian; that is, the angle is expressed in circular measure. Therefore, substituting $x = 1$ in (B) of the last example,
$1 - \tfrac{1}{3!} + \tfrac{1}{5!} - \tfrac{1}{7!} + \cdots.$
Summing up the positive and negative terms separately,
$\begin{array}{rclrcl} 1 &=& 1.00000\cdots & \qquad \tfrac{1}{3!} &=& 0.16667\cdots \\ \tfrac{1}{5!} &=& 0.00833\cdots & \qquad \tfrac{1}{7!} &=& 0.00019\cdots \\ \hline & & 1.00833\cdots & & & 0.16686\cdots \\ \end{array}$
Hence $\sin 1 = 1.00833 - 0.16686 = 0.84147\cdots,$
which is correct to four decimal places, since the error made must be less than $\tfrac{1}{9!}$; i.e. less than .000003. Obviously the value of $\sin 1$ may be calculated to any desired degree of accuracy by simply including a sufficient number of additional terms.
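The same computation can be automated, stopping as soon as the first neglected term (which bounds the error for this alternating series) falls below the desired tolerance. A Python sketch, added for illustration:

```python
import math

# Sum series (B) for sin x, stopping when the next (first neglected)
# term drops below the tolerance; for an alternating series that
# neglected term bounds the error.
def sin_series(x, tol=1e-6):
    total, k, term = 0.0, 0, x
    while abs(term) >= tol:
        total += term
        k += 1
        term = (-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
    return total

print(f"{sin_series(1.0):.4f}")   # 0.8415
print(f"{math.sin(1.0):.4f}")     # 0.8415
```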
EXAMPLES

Verify the following expansions of functions into power series by Maclaurin's Series and determine for what values of the variable they are convergent:

 1. $e^x = 1 + x + \tfrac{x^2}{2!} + \tfrac{x^3}{3!} + \tfrac{x^4}{4!} + \cdots.$ Convergent for all values of $x$.
 2. $\cos x = 1 - \tfrac{x^2}{2!} + \tfrac{x^4}{4!} - \tfrac{x^6}{6!} + \tfrac{x^8}{8!} - \cdots.$ Convergent for all values of $x$.
 3. $a^x = 1 + x \log a + \frac{x^2\log^2a}{2!} + \frac{x^3 \log^3 a}{3!} + \cdots.$ Convergent for all values of $x$.
 4. $\sin kx = kx - \frac{k^3x^3}{3!} + \frac{k^5x^5}{5!} - \frac{k^7x^7}{7!} + \cdots.$ Convergent for all values of $x$, $k$ being any constant.
 5. $e^{-kx} = 1 - kx + \frac{k^2x^2}{2!} - \frac{k^3x^3}{3!} + \frac{k^4x^4}{4!} - \cdots.$ Convergent for all values of $x$, $k$ being any constant.
 6. $\log\left( 1 + x \right) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \frac{x^5}{5} - \cdots.$ Convergent if $-1 < x \le 1$.
 7. $\log\left( 1 - x \right) = -x - \frac{x^2}{2} - \frac{x^3}{3} - \frac{x^4}{4} - \frac{x^5}{5} - \cdots.$ Convergent if $-1 \le x < 1$.
 8. $\arcsin x = x + \frac{1 \cdot x^3}{2 \cdot 3} + \frac{1 \cdot 3x^5}{2 \cdot 4 \cdot 5} + \cdots.$ Convergent if $-1 \le x \le 1$.
 9. $\arctan x = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \frac{x^9}{9} - \cdots.$ Convergent if $-1 \le x \le 1$.
 10. $\sin^2 x = x^2 - \frac{2x^4}{3!} + \frac{32x^6}{6!} - \cdots.$ Convergent for all values of $x$.
 11. $e^{\sin \phi} = 1 + \phi + \frac{\phi^2}{2} - \frac{\phi^4}{8} + \cdots.$ Convergent for all values of $\phi$.
 12. $e^{\theta} \sin \theta = \theta + \theta^2 + \frac{\theta^3}{3} - \frac{4\theta^5}{5!} - \frac{8\theta^6}{6!} - \cdots.$ Convergent for all values of $\theta$.

13. Find three terms of the expansion in each of the following functions:

 (a) $\tan x.$ (b) $\sec x.$ (c) $e^{\cos x}.$ (d) $\cos 2x.$ (e) $\arccos x.$ (f) $a^{-x}.$

14. Show that $\log x$ cannot be expanded by Maclaurin's Theorem.

Compute the values of the following functions by substituting directly in the equivalent power series, taking terms enough until the results agree with those given below.

15 $e = 2.7182\cdots.$

Solution. Let $x = 1$ in series of Ex. 1; then

$e = 1 + 1 + \tfrac{1}{2!} + \tfrac{1}{3!} + \tfrac{1}{4!} + \tfrac{1}{5!} + \cdots.$
 First term $= 1.00000$
 Second term $= 1.00000$
 Third term $= 0.50000$
 Fourth term $= 0.16667\cdots$ (Dividing third term by 3.)
 Fifth term $= 0.04167\cdots$ (Dividing fourth term by 4.)
 Sixth term $= 0.00833\cdots$ (Dividing fifth term by 5.)
 Seventh term $= 0.00139\cdots$ (Dividing sixth term by 6.)
 Eighth term $= 0.00019\cdots$, etc. (Dividing seventh term by 7.)
 Adding, $e = 2.71825\cdots$ Ans.
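The tabulated scheme, each term obtained by dividing the preceding one, translates directly into a loop. A Python sketch (not part of the original text):

```python
import math

# e = 1 + 1 + 1/2! + 1/3! + ..., each term got by dividing the previous
# term, exactly as in the table above.
def e_series(n_terms):
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term
        term /= (k + 1)   # next term = this term divided by (k + 1)
    return total

print(f"{e_series(8):.5f}")   # 2.71825, agreeing with the table
print(f"{math.e:.5f}")        # 2.71828
```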

16. $\arctan \left( \tfrac{1}{5} \right) = 0.1973\cdots;$ use series in Ex. 9.

17. $\cos 1 = 0.5403\cdots;$ use series in Ex. 2.

18. $\cos 10^\circ = 0.9848\cdots;$ use series in Ex. 2.

19. $\sin .1 = .0998\cdots;$ use series $x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots.$

20. $\arcsin 1 = 1.5708\cdots;$ use series in Ex. 8.

21. $\sin \tfrac{\pi}{4} = 0.7071\cdots;$ use series (B), §145.

22. $\sin .5 = 0.4794\cdots;$ use series (B), §145.

23. $e^2 = 1 + 2 + \tfrac{2^2}{2!} + \tfrac{2^3}{3!} + \cdots = 7.3891.$

24. $\sqrt{e} = 1 + \frac{1}{2} + \frac{1}{2^2\left(2!\right)} + \frac{1}{2^3\left(3!\right)} + \cdots = 1.6487.$

In more advanced treatises it is shown that, for values of $x$ within the interval of convergence, the sum of a power series is differentiable and that its derivative is obtained by differentiating the series term by term as in an ordinary sum. Thus from (B), §145,

$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots.$

Differentiating both sides, we get

$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots,$

which is the series of Ex. 2, §145. This illustrates how we may obtain a new power series from a given power series by differentiation.

Differentiating the power series of Ex. 6, §145, we obtain

$\frac{1}{1+x} = 1 -x + x^2 - x^3 +x^4 - \cdots.$

In the same way, from Ex. 8, §145,

$\frac{1}{\sqrt{1-x^2}} = 1 + \frac{1}{2}x^2 + \frac{1 \cdot 3}{2 \cdot 4}x^4 + \frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6} x^6 + \cdots.$
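Term-by-term differentiation is mechanical when a power series is stored as its list of coefficients: the derivative of $\sum c_k x^k$ has coefficients $(k+1)c_{k+1}$. A Python sketch using exact rational coefficients (added for illustration):

```python
from fractions import Fraction
from math import factorial

# Differentiate a power series c0 + c1*x + c2*x^2 + ... term by term:
# the derivative has coefficients (k + 1) * c_{k+1}.
def differentiate(coeffs):
    return [(k + 1) * c for k, c in enumerate(coeffs[1:])]

# Coefficients of sin x = x - x^3/3! + x^5/5! - x^7/7! + ...
sin_c = [Fraction(0), Fraction(1), Fraction(0), -Fraction(1, factorial(3)),
         Fraction(0), Fraction(1, factorial(5)), Fraction(0),
         -Fraction(1, factorial(7))]

cos_c = differentiate(sin_c)
print([str(c) for c in cos_c])
# ['1', '0', '-1/2', '0', '1/24', '0', '-1/720'], i.e. the cosine series
```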

146. Computation by series. I. Alternating series. Exs. 15-24 of the last exercise illustrate to what use series may be put for purposes of computation. Obviously it is very important to know the percentage of error in a result, since the computation must necessarily stop at some term in the series, the sum of the subsequent terms being thereby neglected. The absolute error made is of course equal to the limit of the sum of all the neglected terms. In some series this error is difficult to find, but in the case of alternating series it has been shown in §140 that the sum of the neglected terms is numerically less than the first of these terms. Hence the absolute error made is less than the first term neglected. Fortunately a large proportion of the series used for computation purposes are alternating series, and therefore this easy method for finding the upper limit of the absolute error and the percentage of error is available. Let us illustrate by means of an example.

Illustrative Example 1. Determine the greatest possible error and percentage of error made in computing the numerical value of the sine of one radian from the sine series,

$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots;$
(a) when all terms beyond the second are neglected;
(b) when all terms beyond the third are neglected.
Solution. Let $x = 1$ in series; then
$\sin 1 = 1 - \frac{1}{3!} + \frac{1}{5!} - \frac{1}{7!} + \cdots.$
(a) Using only the first two terms,
$\sin 1 = 1 - \tfrac{1}{6} = \tfrac{5}{6} = .8333,$
the absolute error is less than $\tfrac{1}{5!}$; i.e. $< \tfrac{1}{120}\;(= .0083)$, and the percentage of error is less than 1 per cent.[6]
(b) Using only the first three terms,
$\sin 1 = 1 - \tfrac{1}{6} + \tfrac{1}{120} = .841666,$
the absolute error is less than $\tfrac{1}{7!}$; i.e. $< \tfrac{1}{5040}\;(= .000198)$, and the percentage of error is less than $\tfrac{1}{40}$ of 1 per cent.[7]
Moreover, the exact value of $\sin 1$ lies between .8333 and .841666, since for an alternating series $S_n$ is alternately greater and less than $\lim_{n \to \infty} S_n$.
EXAMPLES

Determine the greatest possible error and percentage of error made in computing the numerical value of each of the following functions from its corresponding series

(a) when all terms beyond the second are neglected;
(b) when all terms beyond the third are neglected.
 1. $\cos 1.$
 2. $\sin 2.$
 3. $\cos \tfrac{1}{2}.$
 4. $\arctan 1.$
 5. $e^{-2}.$
 6. $\sin \tfrac{\pi}{3}.$
 7. $e^{-\tfrac{1}{2}}.$
 8. $\arctan 2.$
 9. $\sin 15^\circ.$

II. The computation of $\pi$ by series.

From Ex. 8, §145, we have

$\arcsin x = x + \frac{1 \cdot x^3}{2 \cdot 3} + \frac{1 \cdot 3 x^5}{2 \cdot 4 \cdot 5} + \frac{1 \cdot 3 \cdot 5 x^7}{2 \cdot 4 \cdot 6 \cdot 7} + \cdots.$

Since this series converges for values of $x$ between -1 and +1, we may let $x = \tfrac{1}{2}$, giving

$\frac{\pi}{6} = \frac{1}{2} + \frac{1}{2} \cdot \frac{1}{3} \left( \frac{1}{2} \right)^3 + \frac{1 \cdot 3}{2 \cdot 4} \cdot \frac{1}{5} \left( \frac{1}{2} \right)^5 + \cdots,$

or

$\pi = 3.1415\cdots.$

Evidently we might have used the series of Ex. 9, §145, instead. Both of these series converge rather slowly, but there are other series, found by more elaborate methods, by means of which the correct value of $\pi$ to a large number of decimal places may be easily calculated.
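The computation sketched above runs as follows in Python (added for illustration); the coefficient of each term of the arcsin series is built from the preceding one:

```python
import math

# pi = 6 * arcsin(1/2), using the arcsin series of Ex. 8, whose general
# term is (1*3*...*(2k-1))/(2*4*...*(2k)) * x^(2k+1)/(2k+1).
def arcsin_series(x, n_terms):
    total, coef = 0.0, 1.0
    for k in range(n_terms):
        total += coef * x**(2*k + 1) / (2*k + 1)
        coef *= (2*k + 1) / (2*k + 2)   # 1/2, then (1*3)/(2*4), ...
    return total

print(f"{6 * arcsin_series(0.5, 30):.6f}")  # 3.141593
print(f"{math.pi:.6f}")                     # 3.141593
```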

III. The computation of logarithms by series.

Series play a very important role in making the necessary calculations for the construction of logarithmic tables.

From Ex. 6, §145, we have

 (A) $\log \left( 1 + x \right) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \frac{x^5}{5} - \cdots.$

This series converges for $x = 1$, and we can find $\log 2$ by placing $x =1$ in (A), giving

$\log 2 = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \cdots.$

But this series is not well adapted to numerical computation, because it converges so slowly that it would be necessary to take 1000 terms in order to get the value of $\log 2$ correct to three decimal places. A rapidly converging series for computing logarithms will now be deduced.

By the theory of logarithms,

 (B) $\log \frac{1 + x}{1-x} = \log \left( 1 + x \right) - \log \left( 1 - x \right).$ By 8, §1

Substituting in (B) the equivalent series for $\log (1 + x)$ and $\log (1 - x)$ found in Exs. 6 and 7, §145, we get[8]

 (C) $\log \frac{1 + x}{1-x} = 2 \left[ x + \frac{x^3}{3} +\frac{x^5}{5} + \frac{x^7}{7} + \cdots \right],$

which is convergent when $x$ is numerically less than unity. Let

 (D) $\frac{1 + x}{1-x} = \frac{M}{N},$ whence $x = \frac{M-N}{M+N},$

and we see that $x$ will always be numerically less than unity for all positive values of $M$ and $N$. Substituting from (D) into (C), we get

 (E) \begin{align} \log \frac{M}{N} & = \log M - \log N \\ & = 2 \left[ \frac{M-N}{M+N} + \frac{1}{3} \left( \frac{M-N}{M+N} \right)^3 + \frac{1}{5} \left( \frac{M-N}{M+N} \right)^5 + \cdots \right], \\ \end{align}

a series which is convergent for all positive values of $M$ and $N$; and it is always possible to choose $M$ and $N$ so as to make it converge rapidly.

Placing $M = 2$ and $N = 1$ in (E), we get

$\log 2 = 2 \left[ \frac{1}{3} + \frac{1}{3}\cdot\frac{1}{3^3} + \frac{1}{5}\cdot\frac{1}{3^5} + \frac{1}{7}\cdot\frac{1}{3^7} + \cdots \right] = 0.69314718 \cdots.$
[ Since $\log N = \log 1 = 0$, and $\tfrac{M-N}{M+N} = \tfrac{1}{3}$.]

Placing $M = 3$ and $N= 2$ in (E), we get

$\log 3 = \log 2 + 2 \left[ \frac{1}{5} + \frac{1}{3}\cdot\frac{1}{5^3} + \frac{1}{5}\cdot\frac{1}{5^5} + \cdots \right] = 1.09861229 \cdots.$

It is only necessary to compute the logarithms of prime numbers in this way, the logarithms of composite numbers being then found by using theorems 7-10, §1. Thus

\begin{align} \log 8 &= \log 2{^3} &= 3 \log 2 &= 2.07944154 \cdots, \\ \log 6 &= \log 3 &+ \log 2 &= 1.79175947 \cdots. \\ \end{align}
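Series (E) is short enough to evaluate directly. The Python sketch below (not part of the original) reproduces the values of $\log 2$, $\log 3$, and $\log 8$ found above:

```python
import math

# log(M/N) from series (E): 2 * [ r + r^3/3 + r^5/5 + ... ],
# where r = (M - N)/(M + N).
def log_ratio(M, N, n_terms=20):
    r = (M - N) / (M + N)
    return 2 * sum(r**(2*k + 1) / (2*k + 1) for k in range(n_terms))

log2 = log_ratio(2, 1)             # log N = log 1 = 0, r = 1/3
log3 = log2 + log_ratio(3, 2)      # log 3 = log 2 + log(3/2)
print(f"{log2:.8f}")               # 0.69314718
print(f"{log3:.8f}")               # 1.09861229
print(f"{3 * log2:.8f}")           # 2.07944154, i.e. log 8
```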

All the above are Napierian or natural logarithms, i.e. the base is $e = 2.7182818$. If we wish to find Briggs's or common logarithms, where the base 10 is employed, all we need to do is to change the base by means of the formula

$\log_{10} n = \frac{\log_e n}{\log_e 10}.$

Thus

$\log_{10} 2 = \frac{\log_e 2}{\log_e 10} = \frac{0.693\cdots}{2.302\cdots} = 0.301 \cdots.$

In the actual computation of a table of logarithms only a few of the tabulated values are calculated from series, all the rest being found by employing theorems in the theory of logarithms and various ingenious devices designed for the purpose of saving work.

EXAMPLES

 1. $\log_e 5 = 1.6094\cdots.$
 2. $\log_e 24 = 3.1781 \cdots.$
 3. $\log_e 10 = 2.3025 \cdots.$
 4. $\log_{10} 5 = 0.6990 \cdots.$

147. Approximate formulas derived from series. Interpolation. In the two preceding sections we evaluated a function from its equivalent power series by substituting the given value of $x$ in a certain number of the first terms of that series, the number of terms taken depending on the degree of accuracy required. It is of great practical importance to note that this really means that we are considering the function as approximately equal to an ordinary polynomial with constant coefficients. For example, consider the series

 (A) $\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots.$

This is an alternating series for both positive and negative values of $x$. Hence the error made if we assume $\sin x$ to be approximately equal to the sum of the first $n$ terms is numerically less than the $(n+1)$th term (§139). For example, assume

 (B) $\sin x = x,$

and let us find for what values of $x$ this is correct to three places of decimals. To do this, set

 (C) $\left| \frac{x^3}{3!} \right| < .001.$

This gives $x$ numerically less than $\sqrt[3]{.006}(=.1817)$; i.e. (B) is correct to three decimal places when $x$ lies between $+10.4^\circ$ and $-10.4^\circ$.
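This bound is easy to confirm numerically; the sketch below (Python, added for illustration) evaluates the endpoint of the interval, where the error of $\sin x = x$ is greatest:

```python
import math

# The bound from (C): sin x = x is correct to three decimals when
# |x| < (0.006)^(1/3).
bound = 0.006 ** (1 / 3)
print(round(bound, 4))                 # 0.1817 (radians)
print(round(math.degrees(bound), 1))   # 10.4 (degrees)

# The actual error x - sin x is largest at the endpoint and stays < .001:
print(abs(bound - math.sin(bound)) < 0.001)   # True
```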

The error made in neglecting all terms in (B) after the one in $x^{n-1}$ is given by the remainder (see (64), §145)

 (D) $R = \frac{x^n}{n!}f^{(n)}\left(x_1\right);$

hence we can find for what values of $x$ a polynomial represents the function to any desired degree of accuracy by writing the inequality

 (E) $|R| <$ limit of error,

and solving for $x$, provided we know the maximum value of $f^{(n)}(x_1)$. Thus if we wish to find for what values of $x$ the formula

 (F) $\sin x = x - \frac{x^3}{6}$

is correct to two decimal places (i.e. error < .01), knowing that $|f^{(v)}(x_1)| \le 1$ we have, from (D) and (E),

$\frac{\left| x^5 \right|}{120} < .01;$ i.e. $\left| x \right| <\sqrt[5]{1.2};$ or $\left | x \right| \le 1.$

Therefore (F) gives the correct value of $\sin x$ to two decimal places if $|x| \le 1$; i.e. if $x$ lies between $+57^\circ$ and $-57^\circ$. This agrees with the discussion of (A) as an alternating series.

Since in a great many practical problems accuracy to two or three decimal places only is required, the usefulness of such approximate formulas as (B) and (F) is apparent.

Again, if we expand $\sin x$ by Taylor's Series, (62), §144, in powers of $x - a$, we get

$\sin x = \sin a + \cos a \left(x-a\right) - \frac{\sin a}{2!}\left(x-a\right)^2 + \cdots.$

Hence for all values of $x$ in the neighborhood of some fixed value $a$ we have the approximate formula

 (G) $\sin x = \sin a + \cos a \left( x - a \right).$

Transposing $\sin a$ and dividing by $x - a$, we get

$\frac{\sin x - \sin a}{x - a} = \cos a.$

Since $\cos a$ is constant, this means that:

The change in the value of the sine is proportional to the change in the angle for values of the angle near $a$.

For example, let $a = 30^\circ = .5236$ radians, and suppose it is required to calculate the sines of $31^\circ$ and $32^\circ$ by the approximate formula (G). Then

 $\sin 31^\circ = \sin 30^\circ + \cos 30^\circ (.01745)$[9] $= .5000 + .8660 \times .01745 = .5000 + .0151 = .5151.$

Similarly, $\sin 32^\circ = \sin 30^\circ + \cos 30^\circ (.03490) = .5302$.
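Formula (G) is a one-line function. A Python sketch (not in the original text) reproducing the two values just computed:

```python
import math

# Interpolation by first differences, formula (G):
# sin x ~ sin a + cos a * (x - a), with a = 30 degrees.
def sin_first_diff(x_deg, a_deg=30.0):
    a = math.radians(a_deg)
    return math.sin(a) + math.cos(a) * math.radians(x_deg - a_deg)

print(f"{sin_first_diff(31):.4f}")            # 0.5151
print(f"{sin_first_diff(32):.4f}")            # 0.5302
print(f"{math.sin(math.radians(31)):.4f}")    # 0.5150, for comparison
```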

This discussion illustrates the principle known as interpolation by first differences. In general, then, by Taylor's Series, we have the approximate formula

 (H) $f(x) = f(a) + f'(a)(x-a).$

If the constant $f'\left( a \right) \ne 0$, this formula asserts that the ratio of the increments of function and variable for all values of the latter differing little from the fixed value $a$ is constant.

Care must however be observed in applying (H). For while the absolute error made in using it in a given case may be small, the percentage of error may be so large that the results are worthless.

Then interpolation by second differences is necessary. Here we use one more term in Taylor's Series, giving the approximate formula

 (I) $f\left( x \right) = f\left( a \right) + f'\left( a \right)\left( x-a \right) + \frac{f''\left( a \right)}{2!}\left( x-a \right)^2.$

The values of $\sin 31^\circ$ and $\sin 32^\circ$ calculated in (G) are correct to only three decimal places. If greater accuracy than this is desired, we may use (I), which gives, for $f\left( x \right) = \sin x$,

 (J) $\sin x = \sin a + \cos a\left( x-a \right) - \frac{\sin a}{2!}\left( x-a \right)^2.$

Let $a = 30^\circ = .5236$ radian. Then

$\sin 31^\circ = \sin 30^\circ + \cos 30^\circ\left( .01745 \right) - \frac{\sin 30^\circ}{2}\left( .01745 \right)^2 = .50000 + .01511 - .00008 = .51503.$

$\sin 32^\circ = \sin 30^\circ + \cos 30^\circ \left( .03490 \right) - \frac{\sin 30^\circ}{2} \left( .03490 \right)^2 = .50000 + .03022 - .00030 = .52992.$

These results are correct to four decimal places.
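Formula (J) is equally direct to program. The Python sketch below (added for illustration) carries full precision instead of the rounded intermediate values used above:

```python
import math

# Interpolation by second differences, formula (J):
# sin x ~ sin a + cos a (x - a) - (sin a / 2)(x - a)^2.
def sin_second_diff(x_deg, a_deg=30.0):
    a = math.radians(a_deg)
    h = math.radians(x_deg - a_deg)
    return math.sin(a) + math.cos(a) * h - math.sin(a) / 2 * h**2

print(f"{sin_second_diff(31):.4f}")   # 0.5150
print(f"{sin_second_diff(32):.4f}")   # 0.5299
```

Both values agree with $\sin 31^\circ$ and $\sin 32^\circ$ to four decimal places.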

EXAMPLES

1. Using formula (H) for interpolation by first differences, calculate the following functions:

 (a) $\cos 61^\circ$, taking $a = 60^\circ$. (c) $\sin 85.1^\circ$, taking $a = 85^\circ$. (b) $\tan 46^\circ$, taking $a = 45^\circ$. (d) $\cot 70.3^\circ$, taking $a = 70^\circ$.

2. Using formula (I) for interpolation by second differences, calculate the following functions:

 (a) $\sin 11^\circ$, taking $a = 10^\circ$. (c) $\cot 15.2^\circ$, taking $a = 15^\circ$. (b) $\cos 86^\circ$, taking $a = 85^\circ$. (d) $\tan 69^\circ$, taking $a = 70^\circ$.

3. Draw the graphs of the functions $x$, $x - \tfrac{x^3}{3!}$, $x - \tfrac{x^3}{3!} + \tfrac{x^5}{5!}$, respectively, and compare them with the graph of $\sin x$.

148. Taylor's Theorem for functions of two or more variables. The scope of this book will allow only an elementary treatment of the expansion of functions involving more than one variable by Taylor's Theorem. The expressions for the remainder are complicated and will not be written down.

Having given the function

 (A) $f\left( x, y \right),$

it is required to expand the function

 (B) $f\left( x + h, y + k \right)$

in powers of $h$ and $k$.

Consider the function

 (C) $f\left( x + ht, y + kt \right).$

Evidently (B) is the value of (C) when $t = 1$. Considering (C) as a function of $t$, we may write

 (D) $f\left( x + ht, y + kt \right) = F\left( t \right),$

which may then be expanded in powers of $t$ by Maclaurin's Theorem, (64), § 145, giving

 (E) $F\left( t \right) = F\left( 0 \right) + tF'\left( 0 \right) + \frac{t^2}{2!}F''\left( 0 \right) + \frac{t^3}{3!}F'''\left( 0 \right) + \cdots.$

Let us now express the successive derivatives of $F(t)$ with respect to $t$ in terms of the partial derivatives of $F(t)$ with respect to $x$ and $y$. Let

 (F) $\alpha = x + ht,\qquad \beta = y + kt;$

then by (51), §125,

 (G) $F'\left( t \right) = \frac{\partial F}{\partial \alpha}\frac{d\alpha}{dt} + \frac{\partial F}{\partial \beta}\frac{d\beta}{dt}.$

But from (F),

 (H) $\frac{d\alpha}{dt} = h \qquad$ and $\qquad \frac{d\beta}{dt} = k;$

and since $F(t)$ is a function of $x$ and $y$ through $\alpha$ and $\beta$,

$\frac{\partial F}{\partial x} = \frac{\partial F}{\partial \alpha}\frac{\partial\alpha}{\partial x}\qquad$ and $\frac{\partial F}{\partial y} = \frac{\partial F}{\partial \beta}\frac{\partial\beta}{\partial y};$

or, since from (F), $\tfrac{\partial\alpha}{\partial x} = 1$ and $\tfrac{\partial\beta}{\partial y} =1$,

 (I) $\frac{\partial F}{\partial x} = \frac{\partial F}{\partial \alpha}$ and $\frac{\partial F}{\partial y} = \frac{\partial F}{\partial\beta}.$

Substituting in (G) from (I) and (H),

 (J) $F'\left( t \right) = h \frac{\partial F}{\partial x} + k \frac{\partial F}{\partial y}.$

Replacing $F(t)$ by $F'(t)$ in (J), we get

$F''\left( t \right) = h \frac{\partial F'}{\partial x} + k \frac{\partial F'}{\partial y} = h \left\{ h \frac{\partial^2 F}{\partial x^2} + k \frac{\partial^2 F}{\partial x \partial y} \right\} + k \left\{ h \frac{\partial^2 F}{\partial x \partial y} + k \frac{\partial^2 F}{\partial y^2} \right\}.$
 (K) $\therefore F''\left( t \right) = h^2 \frac{\partial^2 F}{\partial x^2} + 2hk \frac{\partial^2 F}{\partial x \partial y} + k^2 \frac{\partial^2 F}{\partial y^2}.$

In the same way the third derivative is

 (L) $F'''\left( t \right) = h^3 \frac{\partial^3 F}{\partial x^3} + 3h^2k \frac{\partial^3 F}{\partial x^2 \partial y} + 3hk^2 \frac{\partial^3 F}{\partial x \partial y^2} + k^3 \frac{\partial^3 F}{\partial y^3},$

and so on for higher derivatives. When $t = 0$, we have from (D), (G), (J), (K), (L),

 $F\left( 0 \right) = f\left( x, y \right)$, i.e. $F\left( t \right)$ is replaced by $f\left( x, y \right);$
 $F'\left( 0 \right) = h \frac{\partial f}{\partial x} + k \frac{\partial f}{\partial y};$
 $F''\left( 0 \right) = h^2 \frac{\partial^2 f}{\partial x^2} + 2hk \frac{\partial^2 f}{\partial x \partial y} + k^2 \frac{\partial^2 f}{\partial y^2};$
 $F'''\left( 0 \right) = h^3 \frac{\partial^3 f}{\partial x^3} + 3h^2k \frac{\partial^3 f}{\partial x^2 \partial y} + 3hk^2 \frac{\partial^3 f}{\partial x \partial y^2} + k^3 \frac{\partial^3 f}{\partial y^3};$

and so on.

Substituting these results in (E), we get

 (66) \begin{align} f\left( x +ht, y +kt \right) &= f\left( x, y \right) + t \left( h \frac{\partial f}{\partial x} + k \frac{\partial f}{\partial y} \right) \\ &+ \frac{t^2}{2!} \left( h^2 \frac{\partial^2 f}{\partial x^2} + 2hk \frac{\partial^2 f}{\partial x \partial y} + k^2 \frac{\partial^2 f}{\partial y^2} \right) + \cdots. \\ \end{align}

To get $f(x + h, y + k)$, replace $t$ by 1 in (66), giving Taylor's Theorem for a function of two independent variables,

 (67) \begin{align} f\left( x + h, y + k \right) &= f\left( x, y \right) + h \frac{\partial f}{\partial x} + k \frac{\partial f}{\partial y} \\ &+ \frac{1}{2!} \left( h^2 \frac{\partial^2 f}{\partial x^2} + 2hk \frac{\partial^2 f}{\partial x \partial y} + k^2 \frac{\partial^2 f}{\partial y^2} \right) + \cdots. \\ \end{align}

which is the required expansion in powers of $h$ and $k$. Evidently (67) is also adapted to the expansion of $f(x + h, y + k)$ in powers of $x$ and $y$ by simply interchanging $x$ with $h$ and $y$ with $k$. Thus

 (67a) \begin{align} f\left( x + h, y + k \right) &= f\left( h, k \right) + x \frac{\partial f}{\partial h} + y \frac{\partial f}{\partial k} \\ &+ \frac{1}{2!} \left( x^2 \frac{\partial^2 f}{\partial h^2} + 2xy \frac{\partial^2 f}{\partial h \partial k} + y^2 \frac{\partial^2 f}{\partial k^2} \right) + \cdots. \\ \end{align}

Similarly, for three variables we shall find

 (68) \begin{align} f\left( x + h, y + k, z + l \right) &= f\left( x, y, z \right) + h \frac{\partial f}{\partial x} + k \frac{\partial f}{\partial y} + l \frac{\partial f}{\partial z} \\ &+ \frac{1}{2!} \left( h^2 \frac{\partial^2 f}{\partial x^2} + k^2 \frac{\partial^2 f}{\partial y^2} + l^2 \frac{\partial^2 f}{\partial z^2} + 2hk \frac{\partial^2 f}{\partial x \partial y} \right .\\ &\left . + 2lh \frac{\partial^2 f}{\partial z \partial x} + 2kl \frac{\partial^2 f}{\partial y \partial z} \right) + \cdots. \\ \end{align}

and so on for any number of variables.
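As a concrete illustration of (67), the second-order expansion can be checked numerically. The sketch below (in Python, taking $f(x, y) = e^x \sin y$ as our own choice of function, not one from the text) forms the terms through the second degree and compares them with the true value of $f(x + h,\ y + k)$; the discrepancy is of the third order in $h$ and $k$.

```python
import math

# Hedged numerical check of the two-variable Taylor expansion (67),
# using f(x, y) = exp(x) * sin(y), whose partial derivatives are
# known in closed form (this f is our illustrative choice).

def f(x, y):
    return math.exp(x) * math.sin(y)

def taylor2(x, y, h, k):
    # Second-order expansion: f + h f_x + k f_y
    #   + (1/2!)(h^2 f_xx + 2hk f_xy + k^2 f_yy)
    fx  = math.exp(x) * math.sin(y)   # f_x  = e^x sin y
    fy  = math.exp(x) * math.cos(y)   # f_y  = e^x cos y
    fxx = fx                          # f_xx = e^x sin y
    fxy = fy                          # f_xy = e^x cos y
    fyy = -fx                         # f_yy = -e^x sin y
    return (f(x, y) + h * fx + k * fy
            + 0.5 * (h**2 * fxx + 2 * h * k * fxy + k**2 * fyy))

x, y, h, k = 0.3, 0.7, 0.01, 0.02
exact  = f(x + h, y + k)
approx = taylor2(x, y, h, k)
print(abs(exact - approx))  # small: of third order in h and k
```

Halving $h$ and $k$ should reduce this error by roughly a factor of eight, as the neglected terms are cubic.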

EXAMPLES

1. Given $f(x, y) \equiv Ax^2 + Bxy + Cy^2$ , expand $f(x + h, \ y + k)$ in powers of $h$ and $k$.

 Solution. $\frac{\partial f}{\partial x} = 2 Ax + By,\qquad \frac{\partial f}{\partial y} = Bx + 2Cy;$ $\frac{\partial^2 f}{\partial x^2} = 2A,\qquad \frac{\partial^2 f}{\partial x \partial y} = B, \qquad \frac{\partial^2 f}{\partial y^2} = 2C.$

The third and higher partial derivatives are all zero. Substituting in (67),

\begin{align} f\left( x + h,y + k \right) \equiv & Ax^2 + Bxy + Cy^2 + \left( 2 Ax + By \right) h + \left( Bx + 2 Cy \right) k \\ & + Ah^2 + Bhk + Ck^2. \end{align} Ans.
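The identity just obtained can be spot-checked numerically; since $f$ is a quadratic, the finite expansion reproduces $f(x + h,\ y + k)$ exactly. The constants below are arbitrary sample values of our own, not taken from the text.

```python
# Spot-check of Example 1: for f(x, y) = Ax^2 + Bxy + Cy^2 the finite
# Taylor expansion equals f(x+h, y+k) exactly (sample constants below).
A, B, C = 2.0, -3.0, 5.0
x, y, h, k = 1.5, -0.5, 0.25, 0.75

def f(x, y):
    return A * x**2 + B * x * y + C * y**2

expansion = (f(x, y)
             + (2 * A * x + B * y) * h + (B * x + 2 * C * y) * k
             + A * h**2 + B * h * k + C * k**2)
print(abs(f(x + h, y + k) - expansion))  # 0 up to rounding
```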

2. Given $f(x, y, z) \equiv Ax^2 + By^2 + Cz^2$, expand $f(x + l,\ y + m, z + n)$ in powers of $l$, $m$, $n$.

 Solution. $\frac{\partial f}{\partial x} = 2 Ax, \qquad \frac{\partial f}{\partial y} = 2 By, \qquad \frac{\partial f}{\partial z} = 2Cz;$ $\frac{\partial^2 f}{\partial x^2} = 2A, \qquad \frac{\partial^2 f}{\partial y^2} = 2B, \qquad \frac{\partial^2 f}{\partial z^2} = 2C, \qquad \frac{\partial^2 f}{\partial x \partial y } = \frac{\partial^2 f}{\partial y \partial z} = \frac{\partial^2 f}{\partial z \partial x} = 0.$

The third and higher partial derivatives are all zero. Substituting in (68),

\begin{align} f\left(x + l, y + m, z + n \right) \equiv & Ax^2 + By^2 + Cz^2 + 2 Axl + 2 Bym + 2 Czn \\ & + Al^2 + Bm^2 + Cn^2. \\ \end{align} Ans.
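As with Example 1, this three-variable expansion is exact for a quadratic and can be verified directly; the constants below are arbitrary sample values of our own.

```python
# Spot-check of Example 2: the finite expansion reproduces
# f(x+l, y+m, z+n) exactly for the quadratic f (sample constants).
A, B, C = 1.0, 2.0, 3.0
x, y, z = 0.5, -1.0, 2.0
l, m, n = 0.1, 0.2, -0.3

def f(x, y, z):
    return A * x**2 + B * y**2 + C * z**2

expansion = (f(x, y, z) + 2*A*x*l + 2*B*y*m + 2*C*z*n
             + A*l**2 + B*m**2 + C*n**2)
print(abs(f(x + l, y + m, z + n) - expansion))  # 0 up to rounding
```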

3. Given $f(x, y) \equiv \sqrt{x} \tan y$, expand $f(x + h, y + k)$ in powers of $h$ and $k$.

4. Given $f(x,\ y,\ z) \equiv Ax^2 + By^2 + Cz^2 + Dxy + Eyz + Fzx$, expand $f(x + h,\ y + k,\ z + l)$ in powers of $h$, $k$, $l$.

149. Maxima and minima of functions of two independent variables. The function $f(x, y)$ is said to be a maximum at $x = a,\, y = b$ when $f(a, b)$ is greater than $f(x, y)$ for all values of $x$ and $y$ in the neighborhood of $a$ and $b$. Similarly, $f(a, b)$ is said to be a minimum at $x = a,\, y = b$ when $f(a, b)$ is less than $f(x, y)$ for all values of $x$ and $y$ in the neighborhood of $a$ and $b$.

These definitions may be stated in analytical form as follows:

If, for all values of $h$ and $k$ numerically less than some small positive quantity,

 (A) $f(a + h, b + k) - f(a, b )=$ a negative number, then $f(a,b)$ is a maximum value of $f(x,y)$.

If

 (B) $f(a + h, b + k) - f(a, b )=$ a positive number, then $f(a,b)$ is a minimum value of $f(x,y)$.

These statements may be interpreted geometrically as follows: a point $P$ on the surface

$z=f(x,\ y)$
is a maximum point when it is "higher" than all other points on the surface in its neighborhood, the coordinate plane $XOY$ being assumed horizontal. Similarly, $P'$ is a minimum point on the surface when it is "lower" than all other points on the surface in its neighborhood. It is therefore evident that all vertical planes through $P$ cut the surface in curves (as $APE$ or $DPE$ in the figure),
each of which has a maximum ordinate $z (= MP)$ at $P$. In the same manner all vertical planes through $P'$ cut the surface in curves (as $BP'C$ or $FP'G$), each of which has a minimum ordinate $z(=NP')$ at $P'$. Also, any contour (as $HIJK$) cut out of the surface by a horizontal plane in the immediate neighborhood of $P$ must be a small closed curve. Similarly, we have the contour $LSRT$ near the minimum point $P'$.

It was shown in §81 and §82, that a necessary condition that a function of one variable should have a maximum or a minimum for a given value of the variable was that its first derivative should be zero for the given value of the variable. Similarly, for a function $f(x, y)$ of two independent variables, a necessary condition that $f(a, b)$ should be a maximum or a minimum (i.e. a turning value) is that for $x = a,\ y = b$,

 (C) $\frac{\partial f}{\partial x} = 0, \qquad \frac{\partial f}{\partial y} = 0.$

Proof. Evidently (A) and (B) must hold when $k = 0$; that is,

$f(a + h,\ b) - f(a,\ b)$

is always negative or always positive for all values of $h$ sufficiently small numerically. By §81, §82, a necessary condition for this is that $\tfrac{d}{dx}f(x,b)$ shall vanish for $x = a$, or, what amounts to the same thing, $\tfrac{\partial}{\partial x}f(x, y)$ shall vanish for $x = a, \quad y = b$. Similarly, (A) and (B) must hold when $h = 0$, giving as a second necessary condition that $\tfrac{\partial}{\partial y}f(x, y)$ shall vanish for $x = a, \quad y = b$. In order to determine sufficient conditions that $f(a, b)$ shall be a maximum or a minimum, it is necessary to proceed to higher derivatives. To derive sufficient conditions for all cases is beyond the scope of this book.[10] The following discussion, however, will suffice for all the problems given here.

Expanding $f(a + h, b + k)$ by Taylor's Theorem, (67), §148, replacing $x$ by $a$ and $y$ by $b$, we get

 (D) \begin{align} f\left( a + h, b+ k \right) =& f\left( a, b \right) + h \frac{\partial f}{\partial x} + k \frac{\partial f}{\partial y} \\ & + \frac{1}{2!} \left( h^2 \frac{\partial^2 f}{\partial x^2} + 2hk \frac{\partial^2 f}{\partial x \partial y} + k^2 \frac{\partial^2 f}{\partial y^2} \right) + R, \\ \end{align}

where the partial derivatives are evaluated for $x = a, \quad y = b$, and $R$ denotes the sum of all the terms not written down. All such terms are of a degree higher than the second in $h$ and $k$.

Since $\tfrac{\partial f}{\partial x} = 0$ and $\tfrac{\partial f}{\partial y} = 0$, from (C), we get, after transposing $f(a, b)$,

 (E) $f\left( a+h, b+k \right) - f\left( a,b \right) = \frac{1}{2} \left( h^2 \frac{\partial^2 f}{\partial x^2} + 2hk \frac{\partial^2 f}{\partial x \partial y} + k^2 \frac{\partial^2 f}{\partial y^2} \right) + R.$

If $f(a, b)$ is a turning value, the expression on the left-hand side of (E) must retain the same sign for all values of $h$ and $k$ sufficiently small in numerical value, $-$ the negative sign for a maximum value (see (A)) and the positive sign for a minimum value (see (B)); i.e. $f(a, b)$ will be a maximum or a minimum according as the right-hand side of (E) is negative or positive. Now $R$ is of a degree higher than the second in $h$ and $k$. Hence as $h$ and $k$ diminish in numerical value, it seems plausible to conclude that the numerical value of $R$ will eventually become and remain less than the numerical value of the sum of the three terms of the second degree written down on the right-hand side of (E).[11] Then the sign of the right-hand side (and therefore also of the left-hand side) will be the same as the sign of the expression

 (F) $h^2 \frac{\partial^2 f}{\partial x^2} + 2hk \frac{\partial^2 f}{\partial x \partial y} + k^2 \frac{\partial^2 f}{\partial y^2}.$

But from Algebra we know that the quadratic expression

$h^2A + 2hkC + k^2B$

always has the same sign as $A$ (or $B$) when $AB - C^2 > 0$. Applying this to (F), $A=\tfrac{\partial^2 f}{\partial x^2},\quad B = \tfrac{\partial^2 f}{\partial y^2}, \quad C = \tfrac{\partial^2 f}{\partial x \partial y}$ and we see that (F), and therefore also the left-hand member of (E), has the same sign as $\tfrac{\partial^2 f}{\partial x^2}$ ( or $\tfrac{\partial^2 f}{\partial y^2}$) when

$\frac{\partial^2 f}{\partial x^2}\frac{\partial^2 f}{\partial y^2} - \left( \frac{\partial^2 f}{\partial x \partial y} \right)^2 > 0.$

Hence the following rule for finding maximum and minimum values of a function $f(x,y)$.

First Step. Solve the simultaneous equations
$\frac{\partial f}{\partial x} = 0, \qquad \frac{\partial f}{\partial y} = 0.$
Second Step. Calculate for these values of x and y the value of
$\Delta = \frac{\partial^2 f}{\partial x^2}\frac{\partial^2 f}{\partial y^2} - \left( \frac{\partial^2 f}{\partial x \partial y} \right)^2.$
Third Step. The function will have a
 maximum if $\Delta > 0$ and $\frac{\partial^2 f}{\partial x^2}\left( \text{or } \frac{\partial^2 f}{\partial y^2} \right) < 0;$

 minimum if $\Delta > 0$ and $\frac{\partial^2 f}{\partial x^2}\left( \text{or } \frac{\partial^2 f}{\partial y^2}\right) > 0;$

neither a maximum nor a minimum if $\Delta < 0$. The question is undecided if $\Delta = 0$.[12]

The student should notice that this rule does not necessarily give all maximum and minimum values. For a pair of values of $x$ and $y$ determined by the First Step may cause $\Delta$ to vanish, and may lead to a maximum or a minimum or neither. Further investigation is therefore necessary for such values. The rule is, however, sufficient for solving many important examples.
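The three steps of the rule lend themselves to a short computational sketch. The helper below is our own illustration (the function name and arguments are not from the text): it classifies a critical point from numeric values of the second partial derivatives, exactly as the Third Step prescribes.

```python
# Sketch of the second-derivative test above: classify a critical
# point from numeric values of f_xx, f_yy, f_xy (names are our own).
def classify(fxx, fyy, fxy):
    delta = fxx * fyy - fxy**2
    if delta > 0:
        return "maximum" if fxx < 0 else "minimum"
    if delta < 0:
        return "neither"
    return "undecided"

# f = x^2 + y^2 at (0, 0): fxx = fyy = 2, fxy = 0
print(classify(2, 2, 0))   # minimum
# f = x^2 - y^2 at (0, 0): a saddle point
print(classify(2, -2, 0))  # neither
```

As the text warns, the `"undecided"` branch ($\Delta = 0$) calls for further investigation; the rule alone cannot settle it.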

The question of maxima and minima of functions of three or more independent variables must be left to more advanced treatises.

Illustrative Example 1. Examine the function $3axy - x^3 - y^3$ for maximum and minimum values.

 Solution. $f(x,\ y) = 3 axy - x^3 -y^3.$ First Step. $\frac{\partial f}{\partial x} = 3ay - 3x^2 = 0, \qquad \frac{\partial f}{\partial y} = 3ax - 3y^2 =0.$ Solving these two simultaneous equations, we get $x = 0, \qquad x = a,$ $y = 0; \qquad y = a.$
 Second Step. $\frac{\partial^2 f}{\partial x^2} = -6x, \qquad \frac{\partial^2 f}{\partial x \partial y} = 3a, \qquad \frac{\partial^2 f}{\partial y^2} = -6y;$ $\Delta = \frac{\partial^2 f}{\partial x^2} \frac{\partial^2 f}{\partial y^2} - \left( \frac{\partial^2 f}{\partial x \partial y} \right)^2 = 36xy - 9a^2.$ Third Step. When $x = 0$ and $y = 0, \quad \Delta = -9 a^2$, and there can be neither a maximum nor a minimum at $(0, 0)$.

When $x =a$ and $y = a, \quad \Delta = + 27 a^2$; and since $\tfrac{\partial^2 f}{\partial x^2}= -6a$, we have the conditions for a maximum value of the function fulfilled at $(a, a)$. Substituting $x = a, \quad y = a$ in the given function, we get its maximum value equal to $a^3$.
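A quick numerical check of this example (taking $a = 2$ as a sample value of our own) confirms that $f(a, a) = a^3$ and that nearby values of the function are smaller.

```python
# Spot-check of Illustrative Example 1 with a = 2: the value at (a, a)
# should be a^3 = 8 and should exceed nearby values of the function.
a = 2.0

def f(x, y):
    return 3 * a * x * y - x**3 - y**3

assert abs(f(a, a) - a**3) < 1e-12
# sample a few nearby points; none should exceed f(a, a)
for dx in (-0.1, 0.0, 0.1):
    for dy in (-0.1, 0.0, 0.1):
        assert f(a + dx, a + dy) <= f(a, a)
print("f(a, a) =", f(a, a))  # 8.0
```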

Illustrative Example 2. Divide $a$ into three parts such that their product shall be a maximum.

 Solution. Let $x =$ first part, $y =$ second part; then $a - (x+y) = a - x- y =$ third part, and the function to be examined is $f(x,\ y) = xy(a-x-y).$ First Step. $\frac{\partial f}{\partial x} = ay - 2xy - y^2 =0, \qquad \frac{\partial f}{\partial y} = ax - 2xy -x^2 =0.$ Solving simultaneously, we get as one pair of values $x = \tfrac{a}{3}, \quad y = \tfrac{a}{3}$. [13] Second Step. $\frac{\partial^2 f}{\partial x^2} = -2y, \qquad \frac{\partial^2 f}{\partial x \partial y} = a - 2x - 2y, \qquad \frac{\partial^2 f}{\partial y^2} = -2x;$ $\Delta = 4xy - \left( a -2x - 2y \right)^2.$ Third Step. When $x =\tfrac{a}{3}$ and $y = \tfrac{a}{3}, \Delta =\tfrac{a^2}{3}$; and since $\tfrac{\partial^2 f}{\partial x^2} = - \tfrac{2a}{3}$, it is seen that our product is a maximum when $x = \tfrac{a}{3}, y = \tfrac{a}{3}$. Therefore the third part is also $\tfrac{a}{3}$, and the maximum value of the product is $\tfrac{a^3}{27}$.
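The conclusion can also be verified by a crude grid search, a sketch of our own with $a = 3$ as a sample value: the product of the three parts peaks at $x = y = a/3 = 1$ with value $a^3/27 = 1$.

```python
# Grid search for Illustrative Example 2 with a = 3: the product
# xy(a - x - y) should peak at x = y = a/3 = 1, giving a^3/27 = 1.
a = 3.0

def product(x, y):
    return x * y * (a - x - y)

best = max(((product(i / 100, j / 100), i / 100, j / 100)
            for i in range(1, 300) for j in range(1, 300)),
           key=lambda t: t[0])
print(best)  # approximately (1.0, 1.0, 1.0)
```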

EXAMPLES
 1. Find the minimum value of $x^2 + xy + y^2 - ax - by$. Ans. $\tfrac{1}{3}\left( ab -a^2 - b^2 \right)$.

2. Show that $\sin x + \sin y + \cos\left(x + y\right)$ is a minimum when $x = y =\tfrac{3 \pi}{2}$, and a maximum when $x = y = \tfrac{\pi}{6}$.
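This assertion is easy to check numerically; the short sketch below (our own) evaluates the function at the two stated points, which give the values $\tfrac{3}{2}$ and $-3$ respectively.

```python
import math

# Numerical check of Exercise 2: g(x, y) = sin x + sin y + cos(x + y)
# equals 3/2 at x = y = pi/6 and -3 at x = y = 3*pi/2.
def g(x, y):
    return math.sin(x) + math.sin(y) + math.cos(x + y)

print(g(math.pi / 6, math.pi / 6))          # 1.5
print(g(3 * math.pi / 2, 3 * math.pi / 2))  # -3.0
```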

3. Show that $xe^{y + x\sin y}$ has neither a maximum nor a minimum.

4. Show that the maximum value of $\tfrac{\left( ax + by + c \right)^2}{x^2 + y^2 + 1}$ is $a^2 + b^2 + c^2$.
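For a numerical check of this exercise, note that by the Cauchy-Schwarz inequality the quotient attains its maximum where $(x, y, 1)$ is proportional to $(a, b, c)$, i.e. at $x = a/c,\ y = b/c$. The sample constants below are our own.

```python
# Check of Exercise 4 for a = 1, b = 2, c = 2: the maximum of the
# quotient is a^2 + b^2 + c^2 = 9, attained at x = a/c, y = b/c.
a, b, c = 1.0, 2.0, 2.0

def q(x, y):
    return (a * x + b * y + c) ** 2 / (x**2 + y**2 + 1)

print(q(a / c, b / c))  # 9.0
print(q(0.3, -1.2))     # some other point; strictly smaller
```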

5. Find the greatest rectangular parallelepiped that can be inscribed in an ellipsoid. That is, find the maximum value of $8 xyz$ (= volume) subject to the condition

 $\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} = 1.$ Ans. $\frac{8 abc}{3 \sqrt{3}}.$

Hint. Let $u = xyz$, and substitute the value of $z$ from the equation of the ellipsoid. This gives

$u^2 = x^2y^2c^2\left( 1 - \frac{x^2}{a^2} - \frac{y^2}{b^2} \right),$

where $u$ is a function of only two variables.

6. Show that the surface of a rectangular parallelepiped of given volume is least when the solid is a cube.

7. Examine $x^4 + y^4 - x^2 + xy - y^2$ for maximum and minimum values.

 Ans. Maximum when $x = 0, \ y = 0$; minimum when $x = y = \pm \tfrac{1}{2}$, and when $x = -y = \pm \tfrac{1}{2} \sqrt{3}$.

8. Show that when the radius of the base equals the depth, a steel cylindrical standpipe of a given capacity requires the least amount of material in its construction.

9. Show that the most economical dimensions for a rectangular tank to hold a given volume are a square base and a depth equal to one half the side of the base.

10. The electric time constant of a cylindrical coil of wire is

$u = \frac{mxyz}{ax +by +cz},$

where $x$ is the mean radius, $y$ is the difference between the internal and external radii, $z$ is the axial length, and $m,\ a,\ b,\ c$ are known constants. The volume of the coil is $nxyz = g$. Find the values of $x,\ y,\ z$ which make $u$ a minimum if the volume of the coil is fixed.

 Ans. $ax = by = cz = \sqrt[3]{\frac{abcg}{n}}.$

1. Also known as Taylor's Formula.
2. Published by Brook Taylor (1685-1731) in his Methodus Incrementorum, London, 1715.
3. In these examples assume that the functions can be developed into a power series.
4. Named after Maclaurin (1698-1746), being first published in his Treatise of Fluxions, Edinburgh, 1742. The series is really due to Stirling (1692-1770).
5. Since here $f^n(x) = \sin \left( x + \tfrac{n \pi}{2} \right)$ and $f^n(x_1) = \sin\left( x_1 + \tfrac{n \pi}{2} \right),$ we have, by substituting in the last term of (65),
 remainder $= \tfrac{x^n}{n!} \sin \left( x_1 + \tfrac{n \pi}{2} \right), \qquad 0 < x_1 < x.$

But $\sin \left( x_1 + \tfrac{n\pi}{2} \right)$ can never exceed unity, and from (Ex. 19, §142), $\lim_{n \to \infty} \tfrac{x^n}{n!}= 0$ for all values of $x$; that is, in this case the limit of the remainder is zero for all values of $x$ for which the series converges. This is also the case for all the functions considered in this book.

6. Since $.0083 \div .8333 = .01$.
7. Since $.000198 \div .841666 = .00023$.
8. The student should notice that we have treated the series as if they were ordinary sums, but they are not; they are limits of sums. To justify this step is beyond the scope of this book.
9. $x -a = 1^\circ = .01745$ radian.
10. See Cours d'Analyse, Vol. I, by C. Jordan.
11. Peano has shown that this conclusion does not always hold. See the article on "Maxima and Minima of Functions of Several Variables," by Professor James Pierpont in the Bulletin of the American Mathematical Society, Vol. IV.
12. The discussion of the text merely renders the given rule plausible. The student should observe that the case $\Delta = 0$ is omitted from the discussion.
13. $x=0,\quad y=0$ are not considered, since from the nature of the problem we would then have a minimum.