DIFFERENTIAL EQUATION

Proof of the existence of integrals.

wherein $p_0, p_1, \ldots$ are power series in $x$, $y$, should satisfy the equation, it is necessary, as we find by equating like terms, that

$$p_1 = \delta_0 p_0, \qquad p_2 = \delta_0 p_1 + \delta_1 p_0, \quad \text{\&c.}$$

and in general

$$p_{s+1} = \delta_0 p_s + s_1 \delta_1 p_{s-1} + s_2 \delta_2 p_{s-2} + \ldots + \delta_s p_0,$$

where

$$s_r = \frac{s!}{r!\,(s - r)!}.$$

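For instance, the first cases of this recurrence, written out explicitly, are

$$p_1 = \delta_0 p_0, \qquad p_2 = \delta_0 p_1 + \delta_1 p_0, \qquad p_3 = \delta_0 p_2 + 2\delta_1 p_1 + \delta_2 p_0,$$

since $s_1 = s$ and $s_2 = \tfrac{1}{2}s(s - 1)$.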
Now compare with the given equation another equation

$$A(x, y, t)\frac{dF}{dx} + B(x, y, t)\frac{dF}{dy} = \frac{dF}{dt},$$

wherein each coefficient in the expansion of either A or B is real and positive, and not less than the absolute value of the corresponding coefficient in the expansion of a or b. In the second equation let us substitute a series

$$F = P_0 + tP_1 + \frac{t^2 P_2}{2!} + \ldots,$$

wherein the coefficients in $P_0$ are real and positive, and each not less than the absolute value of the corresponding coefficient in $p_0$; then, putting $\Delta_r = A_r\,d/dx + B_r\,d/dy$, we obtain necessary equations of the same form as before, namely,

$$P_1 = \Delta_0 P_0, \qquad P_2 = \Delta_0 P_1 + \Delta_1 P_0, \ \ldots$$

and in general $P_{s+1} = \Delta_0 P_s + s_1 \Delta_1 P_{s-1} + \ldots + \Delta_s P_0$. These give for every coefficient in $P_{s+1}$ an integral aggregate with real positive coefficients of the coefficients in $P_s, P_{s-1}, \ldots, P_0$ and the coefficients in $A$ and $B$; and they are the same aggregates as would be given by the previously obtained equations for the corresponding coefficients in $p_{s+1}$ in terms of the coefficients in $p_s, p_{s-1}, \ldots, p_0$ and the coefficients in $a$ and $b$. Hence, as the coefficients in $P_0$ and also in $A$, $B$ are real and positive, it follows that the values obtained in succession for the coefficients in $P_1, P_2, \ldots$ are real and positive; and further, taking account of the fact that the absolute value of a sum of terms is not greater than the sum of the absolute values of the terms, it follows, for each value of $s$, that every coefficient in $p_{s+1}$ is, in absolute value, not greater than the corresponding coefficient in $P_{s+1}$. Thus if the series for $F$ be convergent, the series for $f$ will also be; and we are thus reduced to (1), specifying functions $A$, $B$ with real positive coefficients, each in absolute value not less than the corresponding coefficient in $a$, $b$; (2) proving that the equation

$$A\frac{dF}{dx} + B\frac{dF}{dy} = \frac{dF}{dt}$$

possesses an integral $P_0 + tP_1 + \frac{t^2 P_2}{2!} + \ldots$ in which the coefficients in $P_0$ are real and positive, and each not less than the absolute value of the corresponding coefficient in $p_0$. If $a$, $b$ be developable for $x$, $y$ both in absolute value less than $r$ and for $t$ less in absolute value than $R$, and for such values $a$, $b$ be both less in absolute value than the real positive constant $M$, it is not difficult to verify that we may take $A = B = M\left[1 - \frac{x + y}{r}\right]^{-1}\left(1 - \frac{t}{R}\right)^{-1}$, and obtain

and that this solves the problem, when $x$, $y$, $t$ are sufficiently small, for the two cases $p_0 = x$, $p_0 = y$.
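That this choice of $A$, $B$ has coefficients not less in absolute value than those of $a$, $b$ may be seen, for instance, from Cauchy's inequalities for the coefficients of a bounded power series; the following sketch of the verification is supplied here for clarity. Writing $a = \Sigma\, a_{mnq}\,x^m y^n t^q$, we have

$$|a_{mnq}| \le \frac{M}{r^{m+n}R^{q}},$$

while the coefficient of $x^m y^n t^q$ in $M\left[1 - \frac{x + y}{r}\right]^{-1}\left(1 - \frac{t}{R}\right)^{-1}$ is $\dfrac{(m + n)!}{m!\,n!}\cdot\dfrac{M}{r^{m+n}R^{q}}$, which is not less than $|a_{mnq}|$; and similarly for $b$.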

One obvious application of the general theorem is to the proof of the existence of an integral of an ordinary linear differential equation given by the $n$ equations

$$\frac{dy}{dx} = y_1, \qquad \frac{dy_1}{dx} = y_2, \quad \ldots, \qquad \frac{dy_{n-1}}{dx} = p - p_1 y_{n-1} - \ldots - p_n y;$$

but in fact any simultaneous system of ordinary equations is reducible to a system of the form

$$\frac{dx_i}{dt} = \phi_i(t, x_1, \ldots, x_n).$$

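For instance (an illustration supplied here), a single equation of the second order, $\frac{d^2y}{dx^2} + p_1\frac{dy}{dx} + p_2 y = p$, becomes, on writing $t = x$, $x_1 = y$, $x_2 = dy/dx$,

$$\frac{dx_1}{dt} = x_2, \qquad \frac{dx_2}{dt} = p - p_1 x_2 - p_2 x_1,$$

which is of the stated form with $\phi_1 = x_2$ and $\phi_2 = p - p_1 x_2 - p_2 x_1$.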
Simultaneous linear partial equations.

Suppose we have $k$ homogeneous linear partial equations of the first order in $n$ independent variables, the general equation being $a_{\sigma 1}\,df/dx_1 + \ldots + a_{\sigma n}\,df/dx_n = 0$, where $\sigma = 1, \ldots, k$, and that we desire to know whether the equations have common solutions, and if so, how many. It is to be understood that the equations are linearly independent, which implies that $k \le n$ and not every determinant of $k$ rows and columns is identically zero in the matrix in which the $i$-th element of the $\sigma$-th row is $a_{\sigma i}$ $(i = 1, \ldots, n;\ \sigma = 1, \ldots, k)$. Denoting the left side of the $\sigma$-th equation by $P_\sigma f$, it is clear that every common solution of the two equations $P_\sigma f = 0$, $P_\rho f = 0$ is also a solution of the equation $P_\rho(P_\sigma f) - P_\sigma(P_\rho f) = 0$. We immediately find, however, that this is also a linear equation, namely $\Sigma H_i\,df/dx_i = 0$, where $H_i = P_\rho a_{\sigma i} - P_\sigma a_{\rho i}$, and if it be not already contained among the given equations, or be linearly deducible from them, it may be added to them, as not introducing any additional limitation of the possibility of their having common solutions. Proceeding thus with every pair of the original equations, and then with every pair of the possibly augmented system so obtained, and so on continually, we shall arrive at a system of equations, linearly independent of each other and therefore not more than $n$ in number, such that the combination, in the way described, of every pair of them leads to an equation which is linearly deducible from them. If the number of this so-called complete system is $n$, the equations give $df/dx_1 = 0, \ldots, df/dx_n = 0$, leading to the nugatory result $f =$ a constant.
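To illustrate the process of completion (an example supplied here), take $n = 3$ and the two equations $P_1 f = df/dy = 0$, $P_2 f = df/dx + y\,df/dz = 0$. Then

$$P_1(P_2 f) - P_2(P_1 f) = \frac{df}{dz},$$

so the equation $df/dz = 0$ must be adjoined; the augmented system of three equations gives $df/dx = df/dy = df/dz = 0$, and we are in the nugatory case just mentioned.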

Complete systems of linear partial equations.

Suppose, then, the number of this system to be $r < n$; suppose, further, that from the matrix of the coefficients a determinant of $r$ rows and columns not vanishing identically is that formed by the coefficients of the differential coefficients of $f$ in regard to $x_1, \ldots, x_r$; also that the coefficients are all developable about the values $x_1 = x_1^0, \ldots, x_n = x_n^0$, and that for these values the determinant just spoken of is not zero. Then the main theorem is that the complete system of $r$ equations, and therefore the originally given set of $k$ equations, have in common $n - r$ solutions, say $\omega_{r+1}, \ldots, \omega_n$, which reduce respectively to $x_{r+1}, \ldots, x_n$ when in them for $x_1, \ldots, x_r$ are respectively put $x_1^0, \ldots, x_r^0$; so that also the equations have in common a solution reducing when $x_1 = x_1^0, \ldots, x_r = x_r^0$ to an arbitrary function $\psi(x_{r+1}, \ldots, x_n)$ which is developable about $x_{r+1}^0, \ldots, x_n^0$, namely, this common solution is $\psi(\omega_{r+1}, \ldots, \omega_n)$. It is seen at once that this result is a generalization of the theorem for $r = 1$, and its proof is conveniently given by induction from that case. It can be verified without difficulty (1) that if from the $r$ equations of the complete system we form $r$ independent linear aggregates, with coefficients not necessarily constants, the new system is also a complete system; (2) that if in place of the independent variables $x_1, \ldots, x_n$ we introduce any other variables which are independent functions of the former, the new equations also form a complete system. It is convenient, then, from the complete system of $r$ equations to form $r$ new equations by solving separately for $df/dx_1, \ldots, df/dx_r$; suppose the general equation of the new system to be

$$Q_\sigma f = \frac{df}{dx_\sigma} + c_{\sigma, r+1}\frac{df}{dx_{r+1}} + \ldots + c_{\sigma n}\frac{df}{dx_n} = 0 \qquad (\sigma = 1, \ldots, r).$$

Jacobian systems.

Then it is easily seen that the equation $Q_\rho Q_\sigma f - Q_\sigma Q_\rho f = 0$ contains only the differential coefficients of $f$ in regard to $x_{r+1}, \ldots, x_n$; as it is at most a linear function of $Q_1 f, \ldots, Q_r f$, it must be identically zero. So reduced, the system is called a Jacobian system. Of this system $Q_1 f = 0$ has $n - 1$ principal solutions reducing respectively to $x_2, \ldots, x_n$ when $x_1 = x_1^0$, and its form shows that of these the first $r - 1$ are exactly $x_2, \ldots, x_r$. Let these $n - 1$ functions together with $x_1$ be introduced as $n$ new independent variables in all the $r$ equations. Since the first equation is satisfied by $n - 1$ of the new independent variables, it will contain no differential coefficients in regard to them, and will reduce therefore simply to $df/dx_1 = 0$, expressing that any common solution of the $r$ equations is a function only of the $n - 1$ remaining variables. Thereby the investigation of the common solutions is reduced to the same problem for $r - 1$ equations in $n - 1$ variables. Proceeding thus, we reach at length one equation in $n - r + 1$ variables, from which, by retracing the analysis, the proposition stated is seen to follow.
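Written out explicitly (a step supplied here), the combination of two equations of the Jacobian system is

$$Q_\rho(Q_\sigma f) - Q_\sigma(Q_\rho f) = \sum_{j=r+1}^{n}\left(Q_\rho c_{\sigma j} - Q_\sigma c_{\rho j}\right)\frac{df}{dx_j},$$

the coefficients of $df/dx_1, \ldots, df/dx_r$ in each $Q_\sigma f$ being constants; the conditions $Q_\rho c_{\sigma j} - Q_\sigma c_{\rho j} = 0$, which occur again below for the total equations, are thus precisely the conditions that the system be complete.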

System of total differential equations.

The analogy with the case of one equation is, however, still closer. With the coefficients $c_{\sigma j}$ of the equations $Q_\sigma f = 0$ in transposed array $(\sigma = 1, \ldots, r;\ j = r + 1, \ldots, n)$ we can put down the $n - r$ equations $dx_j = c_{1j}\,dx_1 + \ldots + c_{rj}\,dx_r$, equivalent to the $r(n - r)$ equations $dx_j/dx_\sigma = c_{\sigma j}$. That, consistently with them, we may be able to regard $x_{r+1}, \ldots, x_n$ as functions of $x_1, \ldots, x_r$, these being regarded as independent variables, it is clearly necessary that when we differentiate $c_{\sigma j}$ in regard to $x_\rho$ on this hypothesis the result should be the same as when we differentiate $c_{\rho j}$ in regard to $x_\sigma$ on this hypothesis. The differential coefficient of a function $f$ of $x_1, \ldots, x_n$ on this hypothesis, in regard to $x_\rho$, is, however,

$$\frac{df}{dx_\rho} + c_{\rho, r+1}\frac{df}{dx_{r+1}} + \ldots + c_{\rho n}\frac{df}{dx_n},$$

namely, is $Q_\rho f$. Thus the consistence of the $n - r$ total equations requires the conditions $Q_\rho c_{\sigma j} - Q_\sigma c_{\rho j} = 0$, which are, however, verified in virtue of $Q_\rho(Q_\sigma f) - Q_\sigma(Q_\rho f) = 0$. And it can in fact be easily verified that if $\omega_{r+1}, \ldots, \omega_n$ be the principal solutions of the Jacobian system $Q_\sigma f = 0$, reducing respectively to $x_{r+1}, \ldots, x_n$ when $x_1 = x_1^0, \ldots, x_r = x_r^0$, and the equations $\omega_{r+1} = x_{r+1}^0, \ldots, \omega_n = x_n^0$ be solved for $x_{r+1}, \ldots, x_n$ to give $x_j = \psi_j(x_1, \ldots, x_r, x_{r+1}^0, \ldots, x_n^0)$, these values solve the total equations and reduce respectively to $x_{r+1}^0, \ldots, x_n^0$ when $x_1 = x_1^0, \ldots, x_r = x_r^0$. And the total equations have no other solutions with these initial values. Conversely, the existence of these solutions of the total equations can be deduced a priori and the theory of the Jacobian system based upon them. The theory of such total equations, in general, finds its natural place under the heading Pfaffian Expressions, below.

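By way of illustration (a small example supplied here), take $n = 3$, $r = 2$, with the Jacobian system

$$Q_1 f = \frac{df}{dx} + y\frac{df}{dz} = 0, \qquad Q_2 f = \frac{df}{dy} + x\frac{df}{dz} = 0,$$

for which $Q_1(x) - Q_2(y) = 0$, so that the system is complete. The single principal solution reducing to $z$ when $x = x_0$, $y = y_0$ is $\omega = z - xy + x_0y_0$; solving $\omega = z_0$ gives $z = xy - x_0y_0 + z_0$, which satisfies the corresponding total equation $dz = y\,dx + x\,dy$ and reduces to $z_0$ when $x = x_0$, $y = y_0$.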
Geometrical interpretation and solution.

A practical method of reducing the solution of the $r$ equations of a Jacobian system to that of a single equation in $n - r + 1$ variables may be explained in connexion with a geometrical interpretation which will perhaps be clearer in a particular case, say $n = 3$, $r = 2$. There is then only one total equation, say $dz = a\,dx + b\,dy$; if we do not take account of the condition of integrability, which is in this case $da/dy + b\,da/dz = db/dx + a\,db/dz$, this equation may be regarded as defining through an arbitrary point $(x_0, y_0, z_0)$ of three-dimensioned space (about which $a$, $b$ are developable) a plane, namely, $z - z_0 = a_0(x - x_0) + b_0(y - y_0)$, and therefore, through this arbitrary point, $\infty^2$ directions, namely, all those in the plane. If now there be a surface $z = \psi(x, y)$, satisfying $dz = a\,dx + b\,dy$ and passing through $(x_0, y_0, z_0)$, this plane will touch the surface, and the operations of passing along the surface from $(x_0, y_0, z_0)$ to

$(x_0 + dx_0,\ y_0,\ z_0 + dz_0)$

and then to $(x_0 + dx_0,\ y_0 + dy_0,\ z_0 + d_1z_0)$, ought to lead to the same value of $d_1z_0$ as do the operations of passing along the surface from $(x_0, y_0, z_0)$ to $(x_0,\ y_0 + dy_0,\ z_0 + \delta z_0)$, and then to

$(x_0 + dx_0,\ y_0 + dy_0,\ z_0 + \delta_1 z_0),$

namely, $\delta_1 z_0$ ought to be equal to $d_1 z_0$. But we find

and so at once reach the condition of integrability. If now we put