
1911 Encyclopædia Britannica/Algebraic Forms

1911 Encyclopædia Britannica, Volume 1 — Algebraic Forms, by Percy Alexander MacMahon

ALGEBRAIC FORMS. The subject-matter of algebraic forms is to a large extent connected with the linear transformation of algebraical polynomials which involve two or more variables. The theories of determinants and of symmetric functions and of the algebra of differential operations have an important bearing upon this comparatively new branch of mathematics. They are the chief instruments of research, and have themselves much benefited by being so employed. When a homogeneous polynomial is transformed by general linear substitutions as hereafter explained, and is then expressed in the original form with new coefficients affecting the new variables, certain functions of the new coefficients and variables are numerical multiples of the same functions of the original coefficients and variables. The investigation of the properties of these functions, as well for a single form as for a simultaneous set of forms, and as well for one as for many series of variables, is included in the theory of invariants. As far back as 1773 Joseph Louis Lagrange, and later Carl Friedrich Gauss, had met with simple cases of such functions. George Boole, in 1841 (Camb. Math. Journ. iii. pp. 1-20), made important steps, but it was not till 1845 that Arthur Cayley (Coll. Math. Papers, i. pp. 80-94, 95-112) showed by his calculus of hyper-determinants that an infinite series of such functions might be obtained systematically. The subject was carried on over a long series of years by himself, J. J. Sylvester, G. Salmon, L. O. Hesse, S. H. Aronhold, C. Hermite, Francesco Brioschi, R. F. A. Clebsch, P. Gordan, &c. The year 1868 saw a considerable enlargement of the field of operations. This arose from the study by Felix Klein and Sophus Lie of a new theory of groups of substitutions; it was shown that there exists an invariant theory connected with every group of linear substitutions. The invariant theory then existing was classified by them as appertaining to “finite continuous groups.” Other “Galois” groups were defined whose substitution coefficients have fixed numerical values, and are particularly associated with the theory of equations. Arithmetical groups, connected with the theory of quadratic forms and other branches of the theory of numbers, which are termed “discontinuous,” and infinite groups connected with differential forms and equations, came into existence, and also particular linear and higher transformations connected with analysis and geometry. The effect of this was to co-ordinate many branches of mathematics and greatly to increase the number of workers. The subject of transformation in general has been treated by Sophus Lie in the classical work Theorie der Transformationsgruppen. The present article is merely concerned with algebraical linear transformation. Two methods of treatment have been carried on in parallel lines, the unsymbolic and the symbolic; both of these originated with Cayley, but he with Sylvester and the English school have in the main confined themselves to the former, whilst Aronhold, Clebsch, Gordan, and the continental schools have principally restricted themselves to the latter. The two methods have been conducted so as to be in constant touch, though the nature of the results obtained by the one differs much from those which flow naturally from the other. Each has been singularly successful in discovering new lines of advance and in encouraging the other to renewed efforts.
P. Gordan first proved that for any system of forms there exists a finite number of covariants, in terms of which all others are expressible as rational and integral functions. This enabled David Hilbert to produce a very simple unsymbolic proof of the same theorem. So the theory of the forms appertaining to a binary form of unrestricted order was first worked out by Cayley and P. A. MacMahon by unsymbolic methods, and later G. E. Stroh, from a knowledge of the results, was able to verify and extend the results by the symbolic method. The partition method of treating symmetrical algebra is one which has been singularly successful in indicating new paths of advance in the theory of invariants; the important theorem of expressibility is, directly we exclude unity from the partitions, a theorem concerning the expressibility of covariants, and involves the theory of the reducible forms and of the syzygies. The theory brought forward has not yet found a place in any systematic treatise in any language, so that it has been judged proper to give a fairly complete account of it.

I. The Theory of Determinants.

Let there be given n² quantities

a11, a12, a13, ... a1n,
a21, a22, a23, ... a2n,
. . . . . . . . . . .
an1, an2, an3, ... ann,

and form from them a product of n quantities

a1α a2β a3γ ... anν,

where the first suffixes are the natural numbers 1, 2, 3, ... n taken in order, and α, β, γ, ... ν is some permutation of these n numbers. This permutation by a transposition of two numbers, say α, β, becomes β, α, γ, ... ν, and by successively transposing pairs of letters the permutation can be reduced to the form 1, 2, 3, ... n. Let k such transpositions be necessary; then the expression

Σ(−1)^k a1α a2β a3γ ... anν,

the summation being for all permutations of the n numbers, is called the determinant of the n² quantities. The quantities a1α, a2β, ... are called the elements of the determinant; the term (−1)^k a1α a2β a3γ ... anν is called a member of the determinant, and there are evidently n! members corresponding to the n! permutations of the n numbers 1, 2, 3, ... n. The determinant is usually written

| a11 a12 a13 ... a1n |
| a21 a22 a23 ... a2n |
| . . . . . . . . . . |
| an1 an2 an3 ... ann |

the square array being termed the matrix of the determinant. A matrix has in many parts of mathematics a signification apart from its evaluation as a determinant. A theory of matrices has been constructed by Cayley in connexion particularly with the theory of linear transformation. The matrix consists of n rows and n columns. Each row as well as each column supplies one and only one element to each member of the determinant. Consideration of the definition of the determinant shows that the value is unaltered when the suffixes in each element are transposed.
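In modern notation the definition may be realized directly in code; the following Python sketch (an illustration of ours, not drawn from the article) forms the determinant as the signed sum over all permutations of the second suffixes:

    from itertools import permutations

    def det_by_definition(a):
        """Determinant as the signed sum over all permutations."""
        n = len(a)
        total = 0
        for perm in permutations(range(n)):
            # k = number of transpositions; parity found by counting inversions
            inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                             if perm[i] > perm[j])
            sign = -1 if inversions % 2 else 1
            term = sign
            for row in range(n):
                term *= a[row][perm[row]]
            total += term
        return total

    print(det_by_definition([[1, 2], [3, 4]]))                   # -2
    print(det_by_definition([[2, 0, 1], [1, 3, 4], [0, 5, 6]]))  # 1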

Theorem.—If the determinant is transformed so as to read by columns as it formerly did by rows its value is unchanged. The leading member of the determinant is a11a22a33...ann, and corresponds to the principal diagonal of the matrix.

We write frequently

Δ = Σ ± a11a22a33...ann = (a11a22a33...ann).

If the first two columns of the determinant be transposed the expression for the determinant becomes Σ(−1)^k a1β a2α a3γ ... anν, viz. α and β are transposed, and it is clear that the number of transpositions necessary to convert the permutation β, α, γ, ... ν of the second suffixes to the natural order is changed by unity. Hence the transposition of columns merely changes the sign of the determinant. Similarly it is shown that the transposition of any two columns or of any two rows merely changes the sign of the determinant.

Theorem.—Interchange of any two rows or of any two columns merely changes the sign of the determinant.

Corollary.—If any two rows or any two columns of a determinant be identical the value of the determinant is zero.

Minors of a Determinant.—From the value Δ of the determinant we may separate those members which contain a particular element aik as a factor, and write the portion aik Aik; Aik, the cofactor of aik, is called a minor of order n − 1 of the determinant.

Now a11A11 = Σ ± a11a22a33...ann, wherein a11 is not to be changed, but the second suffixes in the product a22a33...ann assume all permutations, the number of transpositions necessary determining the sign to be affixed to the member.

Hence A11 = Σ ± a22a33...ann, where the cofactor of a11 is clearly the determinant obtained by erasing the first row and the first column.

Hence

A11 = (a22a33...ann).

Similarly Aik, the cofactor of aik, is shown to be the product of (−1)^(i+k) and the determinant obtained by erasing from Δ the ith row and kth column. No member of a determinant can involve more than one element from the first row. Hence we have the development

Δ = a11A11 + a12A12 + a13A13 + ... + a1nA1n,

proceeding according to the elements of the first row and the corresponding minors.

Similarly we have a development proceeding according to the elements contained in any row or in any column, viz.

Δ = ai1Ai1 + ai2Ai2 + ... + ainAin = a1kA1k + a2kA2k + ... + ankAnk.

This theory enables the evaluation of a determinant by successive reduction of the orders of the determinants involved.
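The successive reduction admits a simple recursive rendering; the sketch below (ours, not part of the article) develops always according to the elements of the first row:

    def det_by_minors(a):
        """Evaluate a determinant by development along the first row;
        each cofactor is (-1)^(1+k) times the determinant got by erasing
        the first row and the kth column."""
        n = len(a)
        if n == 1:
            return a[0][0]
        total = 0
        for k in range(n):
            minor = [row[:k] + row[k + 1:] for row in a[1:]]
            total += (-1) ** k * a[0][k] * det_by_minors(minor)
        return total

    print(det_by_minors([[2, 0, 1], [1, 3, 4], [0, 5, 6]]))  # 1, as before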

Ex. gr.

(a11a22a33) = a11(a22a33 − a23a32) − a12(a21a33 − a23a31) + a13(a21a32 − a22a31).

Since the determinant

| a21 a22 a23 ... a2n |
| a21 a22 a23 ... a2n |
| a31 a32 a33 ... a3n |
| . . . . . . . . . . |
| an1 an2 an3 ... ann |,

having two identical rows,

vanishes identically; we have by development according to the elements of the first row

a21A11 + a22A12 + a23A13 + ... + a2nA1n = 0;

and, in general, since

Δ = ai1Ai1 + ai2Ai2 + ai3Ai3 + ... + ainAin,

if we suppose the ith and kth rows identical

ak1Ai1 + ak2Ai2 + ak3Ai3 + ... + aknAin = 0 (k ≠ i);

and proceeding by columns instead of rows,

a1iA1k + a2iA2k + a3iA3k + ... + aniAnk = 0 (k ≠ i),

identical relations always satisfied by these minors.

If in the development of Δ we write ais = bis + cis we find that Δ = Δb + Δc, so that Δ breaks up into a sum of two determinants, and we also obtain a theorem for the addition of determinants which have n − 1 rows in common. If we multiply the elements of the second row by an arbitrary magnitude λ, and add to the corresponding elements of the first row, Δ becomes Δ + λ(a21A11 + a22A12 + ... + a2nA1n) = Δ, showing that the value of the determinant is unchanged. In general we can prove in the same way the—

Theorem.—The value of a determinant is unchanged if we add to the elements of any row or column the corresponding elements of the other rows or other columns respectively each multiplied by an arbitrary magnitude, such magnitude remaining constant in respect of the elements in a particular row or a particular column.

Observation.—Every factor common to all the elements of a row or of a column is obviously a factor of the determinant, and may be taken outside the determinant brackets.

Ex. gr.

| λa11 λa12 |     | a11 a12 |
| a21  a22  | = λ | a21 a22 |.
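Both statements are easily verified numerically; in the following sketch (ours) exact rational arithmetic is used and det3 is an explicit third-order determinant:

    from fractions import Fraction

    def det3(m):
        """Third-order determinant, developed along the first row."""
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    a = [[2, 0, 1], [1, 3, 4], [0, 5, 6]]
    lam = Fraction(7, 3)

    # adding lam times the second row to the first leaves the value unchanged
    b = [[a[0][k] + lam * a[1][k] for k in range(3)], a[1], a[2]]
    assert det3(b) == det3(a)

    # a factor common to a whole row may be taken outside
    c = [[lam * x for x in a[0]], a[1], a[2]]
    assert det3(c) == lam * det3(a)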

The minor Aik is ∂Δ/∂aik, and is itself a determinant of order n − 1. We may therefore differentiate again in regard to any element ars where r ≠ i, s ≠ k; we will thus obtain a minor of Aik, which is a minor also of Δ, of order n − 2. It will be ∂²Δ/∂aik∂ars, and will be obtained by erasing from the determinant Aik the row and column containing the element ars; this was originally the rth row and the sth column of Δ; the rth row of Aik is the rth or (r − 1)th row of Δ according as r < or > i, and the sth column of Aik is the sth or (s − 1)th column of Δ according as s < or > k. Hence, if θ denote the number of transpositions necessary to bring the successions r, i and s, k into ascending order of magnitude, the sign to be attached to the determinant arrived at by erasing the ith and rth rows and the kth and sth columns from Δ, in order to produce ∂²Δ/∂aik∂ars, will be −1 raised to the power of i + k + r + s + θ.

Similarly proceeding to the minors of order n − 3, we find that ∂³Δ/∂aik∂ars∂atu is obtained from Δ by erasing the ith, rth, tth rows, the kth, sth, uth columns, and multiplying the resulting determinant by −1 raised to the power i + k + r + s + t + u + θ, where θ now refers to the arrangement in ascending order of the successions i, r, t and k, s, u; and the general law is clear.

Corresponding Minors.—In obtaining the minor ∂²Δ/∂aik∂ars in the form of a determinant we erased certain rows and columns, and we would have erased in an exactly similar manner had we been forming the determinant associated with ∂²Δ/∂ais∂ark, since the deleting lines intersect in two pairs of points. In the latter case the sign is determined by −1 raised to the same power as before, with the exception that θ now refers to the successions r, i and k, s; but if one of these numbers be even the other must be uneven; hence

∂²Δ/∂aik∂ars = −∂²Δ/∂ais∂ark.

Moreover

aik ars · ∂²Δ/∂aik∂ars + ais ark · ∂²Δ/∂ais∂ark = (aik ars − ais ark) ∂²Δ/∂aik∂ars,

where the determinant factor is given by the four points in which the deleting lines intersect. This determinant and that associated with ∂²Δ/∂aik∂ars are termed corresponding determinants. Similarly p lines of deletion intersecting in p² points yield corresponding determinants of orders p and n − p respectively. Recalling the formula

Δ = a11A11 + a12A12 + a13A13 + ... + a1nA1n,

it will be seen that a1k and A1k involve corresponding determinants. Since A1i is a determinant we similarly obtain

A1i = a21 · ∂²Δ/∂a1i∂a21 + a22 · ∂²Δ/∂a1i∂a22 + ... + a2n · ∂²Δ/∂a1i∂a2n (the term in a2i being absent),

and thence

Δ = Σ a1i a2k · ∂²Δ/∂a1i∂a2k, the summation being for all pairs i, k in which i is not equal to k;

and as before

Δ = Σ (a1i a2k − a1k a2i) ∂²Δ/∂a1i∂a2k, (i < k),

an important expansion of Δ.

Similarly

Δ = Σ (a1i a2k a3l) ∂³Δ/∂a1i∂a2k∂a3l, (i < k < l),

and the general theorem is manifest, and yields a development in a sum of products of corresponding determinants. If the jth column be identical with the ith the determinant vanishes identically; hence, putting in the development the elements of the jth column for those of the ith (j not equal to i or k), we obtain identical relations analogous to those given above for simple minors.

Similarly, by putting one or more of the deleted rows or columns equal to rows or columns which are not deleted, we obtain, with Laplace, a number of identities between products of determinants of complementary orders.

Multiplication.—From the theorem given above for the expansion of a determinant as a sum of products of pairs of corresponding determinants it will be plain that the product of Δ = (a11a22...ann) and Δ′ = (b11b22...bnn) may be written as a determinant of order 2n, viz.

| a11 a21 ... an1  −1   0  ...  0  |
| a12 a22 ... an2   0  −1  ...  0  |
| . . . . . . . . . . . . . . . .  |
| a1n a2n ... ann   0   0  ... −1  |
|  0   0  ...  0   b11 b12 ... b1n |
|  0   0  ...  0   b21 b22 ... b2n |
| . . . . . . . . . . . . . . . .  |
|  0   0  ...  0   bn1 bn2 ... bnn |

Call the four square blocks of this array, in order, A, B; C, D.
Multiply the 1st, 2nd, ... nth rows by b11, b12, ... b1n respectively, and add to the (n+1)th row; by b21, b22, ... b2n, and add to the (n+2)th row; by b31, b32, ... b3n, and add to the (n+3)rd row, &c. C then becomes the array whose element in the ith row and kth column is

cik = ak1bi1 + ak2bi2 + ... + aknbin,

and all the elements of D become zero. Now by the expansion theorem the determinant becomes (c11c22...cnn). We thus obtain for the product a determinant of order n. We may say that, in the resulting determinant, the element in the ith row and kth column is obtained by multiplying the elements in the kth row of the first determinant severally by the elements in the ith row of the second, and has the expression

ak1bi1 + ak2bi2 + ... + aknbin,

and we obtain other expressions by transforming either or both determinants so as to read by columns as they formerly did by rows.

Remark.—In particular the square of a determinant is a determinant of the same order (c11c22...cnn) such that cik = cki; it is for this reason termed symmetrical.
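The multiplication theorem may be checked numerically; the sketch below (ours, assuming the numpy library) forms the array of elements cik = ak1bi1 + ... + aknbin and compares determinants:

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.integers(-3, 4, (3, 3)).astype(float)
    b = rng.integers(-3, 4, (3, 3)).astype(float)

    # element in the ith row and kth column of the product determinant
    c = np.array([[sum(a[k][j] * b[i][j] for j in range(3)) for k in range(3)]
                  for i in range(3)])
    assert np.isclose(np.linalg.det(c), np.linalg.det(a) * np.linalg.det(b))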

The Adjoint or Reciprocal Determinant arises from Δ = (a11a22...ann) by substituting for each element aik the corresponding minor Aik so as to form Δ′ = (A11A22...Ann). If we form the product Δ·Δ′ by the theorem for the multiplication of determinants we find that the element in the ith row and kth column of the product is

ak1Ai1 + ak2Ai2 + ... + aknAin,

the value of which is zero when k is different from i, whilst it has the value Δ when k = i. Hence the product determinant has the principal diagonal elements each equal to Δ and the remaining elements zero. Its value is therefore Δ^n, and we have the identity

Δ·Δ′ = Δ^n or Δ′ = Δ^(n−1).

It can now be proved that the first minor of the adjoint determinant, say B11, is equal to Δ^(n−2)a11.

From the equations

ξ1 = a11x1 + a12x2 + a13x3 + ...,
ξ2 = a21x1 + a22x2 + a23x3 + ...,
ξ3 = a31x1 + a32x2 + a33x3 + ...,
. . . . . . . . .

we derive

Δx1 = A11ξ1 + A21ξ2 + A31ξ3 + ...,
Δx2 = A12ξ1 + A22ξ2 + A32ξ3 + ...,
Δx3 = A13ξ1 + A23ξ2 + A33ξ3 + ...,
. . . . . . . . .

and thence

Δ^(n−2)ξ1 = B11x1 + B12x2 + B13x3 + ...,
Δ^(n−2)ξ2 = B21x1 + B22x2 + B23x3 + ...,
. . . . . . . . .

and comparison of the first and third systems yields

B11 = Δ^(n−2)a11, B12 = Δ^(n−2)a12, ..., Brs = Δ^(n−2)ars.

In general it can be proved that any minor of order of the adjoint is equal to the complementary of the corresponding minor of the original multiplied by the (p – 1)th power of the original determinant.

Theorem.—The adjoint determinant is the (n – 1)th power of the original determinant. The adjoint determinant will be seen subsequently to present itself in the theory of linear equations and in the theory of linear transformation.
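These identities may be illustrated as follows (our sketch, numpy assumed):

    import numpy as np

    def cofactor_matrix(a):
        """Matrix of cofactors A_ik = (-1)^(i+k) times the erased-row-and-column minor."""
        n = a.shape[0]
        adj = np.empty_like(a, dtype=float)
        for i in range(n):
            for k in range(n):
                minor = np.delete(np.delete(a, i, axis=0), k, axis=1)
                adj[i, k] = (-1) ** (i + k) * np.linalg.det(minor)
        return adj

    a = np.array([[2., 0., 1.], [1., 3., 4.], [0., 5., 6.]])
    delta = np.linalg.det(a)
    adj = cofactor_matrix(a)

    # the product determinant has delta down the diagonal, zeros elsewhere
    assert np.allclose(a @ adj.T, delta * np.eye(3))
    # the adjoint determinant is the (n-1)th power of the original
    assert np.isclose(np.linalg.det(adj), delta ** 2)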

Determinants of Special Forms.—It was observed above that the square of a determinant when expressed as a determinant of the same order is such that its elements have the property expressed by cik = cki. Such determinants are called symmetrical. It is easy to see that the adjoint determinant is also symmetrical, viz. such that Aik = Aki, for the determinant got by suppressing the ith row and kth column differs only by an interchange of rows and columns from that got by suppressing the kth row and ith column. If any symmetrical determinant vanish and be bordered as shown below

| a11 a12 a13 λ1 |
| a12 a22 a23 λ2 |
| a13 a23 a33 λ3 |
| λ1  λ2  λ3  0  |

it is a perfect square when considered as a function of λ1, λ2, λ3. For since A11A22 − A12² = Δa33 = 0, with similar relations, we have a number of relations similar to A11A22 = A12², and either Ars = +√(ArrAss) or Ars = −√(ArrAss) for all different values of r and s. Now the determinant has the value

−(λ1²A11 + λ2²A22 + λ3²A33 + 2λ2λ3A23 + 2λ3λ1A31 + 2λ1λ2A12)

in general, and hence by substitution

−(λ1√A11 ± λ2√A22 ± λ3√A33)².

A skew symmetric determinant has aii = 0 and aik = −aki for all values of i and k. Such a determinant when of uneven degree vanishes, for if we multiply each row by −1 we multiply the determinant by (−1)^n = −1, and the effect of this is otherwise merely to transpose the determinant so that it reads by rows as it formerly did by columns, an operation which we know leaves the determinant unaltered. Hence Δ = −Δ or Δ = 0. When a skew symmetric determinant is of even degree it is a perfect square. This theorem is due to Cayley, and reference may be made to Salmon’s Higher Algebra, 4th ed. Art. 39. In the case of the determinant of order 4 the square root is

a12a34 − a13a24 + a14a23.

A skew determinant is one which is skew symmetric in all respects, except that the elements of the leading diagonal are not all zero. Such a determinant is of importance in the theory of orthogonal substitution. In the theory of surfaces we transform from one set of three rectangular axes to another by the substitutions

X = ax + by + cz,
Y = a′x + b′y + c′z,
Z = a″x + b″y + c″z,

where X² + Y² + Z² = x² + y² + z². This relation implies six equations between the coefficients, so that only three of them are independent. Further we find

x = aX + a′Y + a″Z,
y = bX + b′Y + b″Z,
z = cX + c′Y + c″Z,

and the problem is to express the nine coefficients in terms of three independent quantities.

In general in space of n dimensions we have n substitutions similar to

x1′ = c11x1 + c12x2 + ... + c1nxn,

and we have to express the n² coefficients in terms of ½n(n − 1) independent quantities; which must be possible, because

x1′² + x2′² + ... + xn′² = x1² + x2² + ... + xn²,

where Σs csi² = 1 and Σs csicsk = 0 for all different values of i and k. Take now a skew determinant having its leading diagonal elements equal to unity, the remaining elements satisfying bik = −bki. There are then ½n(n − 1) quantities bik. Let the determinant of the b’s be Δb and Bik, the minor corresponding to bik. We can eliminate the quantities ξ1, ξ2, ... ξn from the relations

ξ1 = x1 + b12x2 + b13x3 + ...,
ξ2 = b21x1 + x2 + b23x3 + ...,
. . . . . . . . .

and from these another equivalent set

ξ1 = x1′ − b12x2′ − b13x3′ − ...,
ξ2 = −b21x1′ + x2′ − b23x3′ − ...,
. . . . . . . . .

and now writing

cii = (2Bii − Δb)/Δb, cik = 2Bik/Δb,

we have a transformation which is orthogonal, because Σx′² = Σx² and the elements cii, cik are functions of the ½n(n − 1) independent quantities b. We may therefore form an orthogonal transformation in association with every skew determinant which has its leading diagonal elements unity, for the ½n(n − 1) quantities b are clearly arbitrary.

For the second order we may take

Δb = | 1, λ; −λ, 1 | = 1 + λ²,

and the adjoint determinant is the same; hence

(1 + λ²)x1′ = (1 − λ²)x1 + 2λx2,
(1 + λ²)x2′ = −2λx1 + (1 − λ²)x2.
Similarly, for the order 3, we take

Δb = | 1, ν, −μ; −ν, 1, λ; μ, −λ, 1 | = 1 + λ² + μ² + ν²,

and the adjoint is

| 1 + λ², ν + λμ, λν − μ |
| λμ − ν, 1 + μ², λ + μν |
| μ + λν, μν − λ, 1 + ν² |,

leading to the orthogonal substitution

Δb x1′ = (1 + λ² − μ² − ν²)x1 + 2(ν + λμ)x2 + 2(λν − μ)x3,
Δb x2′ = 2(λμ − ν)x1 + (1 − λ² + μ² − ν²)x2 + 2(λ + μν)x3,
Δb x3′ = 2(μ + λν)x1 + 2(μν − λ)x2 + (1 − λ² − μ² + ν²)x3.
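In matrix language the construction reads: if M be the matrix of the skew determinant (unit diagonal, bik = −bki), the substitution with coefficients cik = 2Bik/Δb − (1 or 0 as i = k or not) is orthogonal. The sketch below (ours) checks this for the order 3, the three quantities being arbitrary:

    import numpy as np

    lam, mu, nu = 0.7, -1.2, 0.4
    M = np.array([[1.0,  nu,  -mu],
                  [-nu,  1.0, lam],
                  [mu,  -lam, 1.0]])       # the skew determinant's matrix
    delta = np.linalg.det(M)
    assert np.isclose(delta, 1 + lam**2 + mu**2 + nu**2)

    # cofactors of M divided by delta form the transposed inverse,
    # so c_ik = 2*B_ik/delta - delta_ik is the matrix 2*inv(M).T - I
    C = 2 * np.linalg.inv(M).T - np.eye(3)
    assert np.allclose(C @ C.T, np.eye(3))  # the substitution is orthogonal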
Functional determinants were first investigated by Jacobi in a work De Determinantibus Functionalibus. Suppose n dependent variables y1, y2, ... yn, each of which is a function of n independent variables x1, x2, ... xn, so that ys = ƒs(x1, x2, ... xn). From the differential coefficients of the y’s with regard to the x’s we form the functional determinant

| ∂y1/∂x1 ∂y1/∂x2 ... ∂y1/∂xn |
| ∂y2/∂x1 ∂y2/∂x2 ... ∂y2/∂xn |
| . . . . . . . . . . . . . . |
| ∂yn/∂x1 ∂yn/∂x2 ... ∂yn/∂xn |,

written for brevity (y1, y2, ... yn / x1, x2, ... xn).

If we have new variables z such that zs = φs(y1, y2, ... yn), we have also zs = ψs(x1, x2, ... xn), and we may consider the three determinants

(y1, y2, ... yn / x1, x2, ... xn), (z1, z2, ... zn / y1, y2, ... yn), (z1, z2, ... zn / x1, x2, ... xn).

Forming the product of the first two by the product theorem, we obtain for the element in the ith row and kth column

∂zi/∂y1 · ∂y1/∂xk + ∂zi/∂y2 · ∂y2/∂xk + ... + ∂zi/∂yn · ∂yn/∂xk,

which is ∂zi/∂xk, the partial differential coefficient of zi with regard to xk. Hence the product theorem

(z1, z2, ... zn / y1, y2, ... yn)(y1, y2, ... yn / x1, x2, ... xn) = (z1, z2, ... zn / x1, x2, ... xn);

and as a particular case

(y1, y2, ... yn / x1, x2, ... xn)(x1, x2, ... xn / y1, y2, ... yn) = 1.

Theorem.—If the functions y1, y2,...yn be not independent of one another the functional determinant vanishes, and conversely if the determinant vanishes, y1, y2,...yn are not independent functions of x1, x2,...xn.
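The product theorem is easily exhibited with a computer algebra system; the functions chosen in the sketch below (ours, using sympy) are arbitrary:

    import sympy as sp

    x1, x2 = sp.symbols('x1 x2')
    y = sp.Matrix([x1**2 + x2, x1 * x2])              # y as functions of x
    z_of_y = lambda u, v: sp.Matrix([u + v**2, u * v])  # z as functions of y

    y1, y2 = sp.symbols('y1 y2')
    Jyx = y.jacobian([x1, x2]).det()
    Jzy = z_of_y(y1, y2).jacobian([y1, y2]).det().subs({y1: y[0], y2: y[1]})
    Jzx = z_of_y(y[0], y[1]).jacobian([x1, x2]).det()

    # (z/y)(y/x) = (z/x)
    assert sp.simplify(Jzy * Jyx - Jzx) == 0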
Linear Equations.—It is of importance to study the application of the theory of determinants to the solution of a system of linear equations. Suppose given the n equations

ƒ1a11x1+ a12x2+ ... a1nxn=0,
ƒ2a21x1+ a22x2+ ... a2nxn=0,
.......
ƒnan1x1+ an2x2+ ... annxn=0.

Denote by Δ the determinant (a11a22...ann).

Multiplying the equations by the minors A1μ, A2μ,...Anμ respectively, and adding, we obtain

xμ(a1μA1μ+a2μA2μ+...+anμAnμ)=xμΔ=0,

since from results already given the remaining coefficients of x1, x2,...xμ–1, xμ+1,...xn vanish identically.

Hence if Δ does not vanish x1 = x2 = ... = xn = 0 is the only solution; but if Δ vanishes the equations can be satisfied by a system of values other than zeros. For in this case the n equations are not independent since identically

A1μƒ1 + A2μƒ2+...+Anμƒn=0,

and assuming that the minors do not all vanish the satisfaction of n–1 of the equations implies the satisfaction of the nth.

Consider then the system of n–1 equations

a21x1+ a22x2 +...+ a2nxn=0
a31x1+ a32x2 +...+ a3nxn=0
......
an1x1+ an2x2 +...+ annxn=0,
which becomes on writing xs/xn = ys,
a21y1+ a22y2 +...+ a2,n−1yn−1 +a2n=0
a31y1+ a32y2 +...+ a3,n−1yn−1 +a3n=0
.......
an1y1+ an2y2 +...+ an,n−1yn−1 +ann=0.
We can solve these, assuming them independent, for the n−1 ratios y1, y2,...yn−1.
Now
a21A11 + a22A12+...+a2nA1n=0
a31A11 + a32A12+...+a3nA1n=0
.......
an1A11 + an2A12+...+annA1n=0
and therefore, by comparison with the given equations, xi = ρA1i, where ρ is an arbitrary factor which remains constant as i varies.

Hence yi = A1i/A1n, where A1i and A1n are minors of the complete determinant (a11a22...ann).

           | a21 a22 ... a2,i−1 a2,i+1 ... a2n |
           | a31 a32 ... a3,i−1 a3,i+1 ... a3n |
           | . . . . . . . . . . . . . . . . . |
           | an1 an2 ... an,i−1 an,i+1 ... ann |
yi = (−1)^(i+n) ————————————————————————————————,
           | a21 a22 ... a2,n−1 |
           | a31 a32 ... a3,n−1 |
           | . . . . . . . . . |
           | an1 an2 ... an,n−1 |

or, in words, yi is the quotient of the determinant obtained by erasing the ith column by that obtained by erasing the nth column, multiplied by (−1)^(i+n). For further information concerning the compatibility and independence of a system of linear equations, see Gordan, Vorlesungen über Invariantentheorie, Bd. 1, § 8.
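The solution by means of minors may be illustrated thus (our sketch, numpy assumed; the singular matrix is an arbitrary example):

    import numpy as np

    a = np.array([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 9.]])   # rows linearly dependent, so Delta = 0

    def cofactor_row(a, i):
        """Cofactors A_i1, A_i2, ..., A_in of the ith row."""
        n = a.shape[0]
        return np.array([(-1) ** (i + k) *
                         np.linalg.det(np.delete(np.delete(a, i, 0), k, 1))
                         for k in range(n)])

    # when Delta vanishes, x_i = rho * A_1i solves the homogeneous system
    x = cofactor_row(a, 0)
    assert np.allclose(a @ x, 0) and np.any(x != 0)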

Resultants.—When we are given k homogeneous equations in k variables or k non-homogeneous equations in k − 1 variables, the equations being independent, it is always possible to derive from them a single equation R=0, where in R the variables do not appear. R is a function of the coefficients which is called the "resultant" or "eliminant" of the k equations, and the process by which it is obtained is termed "elimination." We cannot combine the equations so as to eliminate the variables unless on the supposition that the equations are simultaneous, i.e. each of them satisfied by a common system of values; hence the equation R=0 is derived on this supposition, and the vanishing of R expresses the condition that the equations can be satisfied by a common system of values assigned to the variables.

Consider two binary equations of orders m and n respectively expressed in non-homogeneous form, viz.

ƒ(x) = ƒ = a0x^m − a1x^(m−1) + a2x^(m−2) − ... = 0,
φ(x) = φ = b0x^n − b1x^(n−1) + b2x^(n−2) − ... = 0.
If α1, α2, ...αm be the roots of ƒ=0, β1, β2, ...βn the roots of φ=0, the condition that some root of φ=0 may cause ƒ to vanish is clearly
Rƒ,φ = ƒ(β1)ƒ(β2)...ƒ(βn) = 0;
so that Rƒ,φ is the resultant of ƒ and φ, and expressed as a function of the roots, it is of degree m in each root β, and of degree n in each root α, and also a symmetric function alike of the roots α and of the roots β; hence, expressed in terms of the coefficients, it is homogeneous and of degree n in the coefficients of ƒ, and homogeneous and of degree m in the coefficients of φ.
Ex. gr.

ƒ = a0x² − a1x + a2 = 0, φ = b0x² − b1x + b2 = 0.

We have to multiply a0β1² − a1β1 + a2 by a0β2² − a1β2 + a2, and we obtain

a0²β1²β2² − a0a1(β1²β2 + β1β2²) + a0a2(β1² + β2²) + a1²β1β2 − a1a2(β1 + β2) + a2²,

where

β1 + β2 = b1/b0, β1β2 = b2/b0, β1² + β2² = (b1² − 2b0b2)/b0²,

and clearing of fractions

Rƒ,φ = (a0b2 − a2b0)² + (a1b0 − a0b1)(a1b2 − a2b1).

We may equally express the result as

φ(α1)φ(α2)...φ(αm) = 0,

or as

Π(αs − βt) = 0, the product being taken for all values of s and t.

This expression of R shows that, as will afterwards appear, the resultant is a simultaneous invariant of the two forms.

The resultant, being a product of mn root differences, is of degree mn in the roots, and hence is of weight mn in the coefficients of the forms; i.e. the sum of the suffixes in each term of the resultant is equal to mn.
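For the quadratic example above the root-product definition and the closed form may be compared numerically (our sketch; the coefficient values are arbitrary):

    import numpy as np

    a0, a1, a2 = 2.0, 3.0, 5.0     # f   = a0x^2 - a1x + a2
    b0, b1, b2 = 1.0, -4.0, 7.0    # phi = b0x^2 - b1x + b2

    f = lambda x: a0 * x**2 - a1 * x + a2
    betas = np.roots([b0, -b1, b2])          # roots of phi
    from_roots = b0**2 * np.prod(f(betas))   # b0^m f(beta1) f(beta2)
    closed = (a0*b2 - a2*b0)**2 + (a1*b0 - a0*b1)*(a1*b2 - a2*b1)
    assert np.isclose(from_roots, closed)    # both give 532 here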

Resultant Expressible as a Determinant.—From the theory of linear equations it can be gathered that the condition that p linear equations in p variables (homogeneous and independent) may be simultaneously satisfied is expressible as a determinant, viz. if
a11x1 + a12x2 +...+ a1pxp=0,
a21x1 + a22x2 +...+ a2pxp=0,
......
ap1x1 + ap2x2 +...+ appxp=0,

be the system the condition is, in determinant form

(a11a22...app)=0;

in fact the determinant is the resultant of the equations.

Now, suppose ƒ and φ to have a common factor x − γ,

ƒ(x) = ƒ1(x)(x − γ); φ(x) = φ1(x)(x − γ),

ƒ1 and φ1 being of degrees m − 1 and n − 1 respectively; we have the identity φ1(x)ƒ(x) = ƒ1(x)φ(x) of degree m + n − 1.

Assuming then φ1 to have the coefficients B1, B2, ... Bn, and ƒ1 the coefficients A1, A2, ... Am,

we may equate coefficients of like powers of x in the identity, and obtain m + n homogeneous linear equations satisfied by the m + n quantities B1, B2,...Bn, A1, A2,...Am. Forming the resultant of these equations we evidently obtain the resultant of ƒ and φ.

Thus to obtain the resultant of

ƒ = a0x³ + a1x² + a2x + a3, φ = b0x² + b1x + b2,

we assume the identity

(B0x + B1)(a0x3 + a1x2 + a2x+ a3)=(A0x2 + A1x+ A2)(b0x2 + b1x+ b2),

and derive the linear equations

B0a0 −A0b0 =0,
B0a1 +B1a0 −A0b1 −A1b0 =0,
B0a2 +B1a1 −A0b2 −A1b1 −A2b0 =0,
B0a3 +B1a2 −A1b2 −A2b1 =0,
B1a3 −A2b2 =0,
and by elimination we obtain the resultant

| a0  0  −b0  0   0  |
| a1  a0 −b1 −b0  0  |
| a2  a1 −b2 −b1 −b0 |
| a3  a2  0  −b2 −b1 |
| 0   a3  0   0  −b2 |.
This is Euler’s method. Sylvester’s leads to the same expression, but in a simpler manner.

He forms n equations from ƒ by separate multiplication by x^(n−1), x^(n−2), ... x, 1, in succession, and similarly treats φ with m multipliers x^(m−1), x^(m−2), ... x, 1. From these m + n equations he eliminates the m + n powers x^(m+n−1), x^(m+n−2), ... x, 1, treating them as independent unknowns. Taking the same example as before the process leads to the system of equations

a0x4+ a1x3+ a2x2+ a3x =0,
a0x3+ a1x2+ a2x+ a3 =0,
b0x4+ b1x3+ b2x2 =0,
b0x3+ b1x2+ b2x =0,
b0x2+ b1x+ b2 =0,

whence by elimination the resultant

| a0 a1 a2 a3 0  |
| 0  a0 a1 a2 a3 |
| b0 b1 b2 0  0  |
| 0  b0 b1 b2 0  |
| 0  0  b0 b1 b2 |

which reads by columns as the former determinant reads by rows, and is therefore identical with the former. E. Bézout’s method gives the resultant in the form of a determinant of order m or n, according as m is ≷ n. As modified by Cayley it takes a very simple form. He forms the equation
ƒ(x)φ(x ′) − ƒ(x ′)φ(x)=0,
which can be satisfied when ƒ and φ possess a common factor. He first divides by the factor xx ′, reducing it to the degree m − 1 in both x and x ′ where m > n; he then forms m equations by equating to zero the coefficients of the various powers of x ′; these equations involve the m powers x0, x, x2,... xm−1 of x, and regarding these as the unknowns of a system of linear equations the resultant is reached in the form of a determinant of order m. Ex. gr. Put
(a0x³ + a1x² + a2x + a3)(b0x′² + b1x′ + b2) − (a0x′³ + a1x′² + a2x′ + a3)(b0x² + b1x + b2) = 0;
after division by x − x′ the three equations are formed

a0b0x² + a0b1x + a0b2 = 0,
a0b1x² + (a0b2 + a1b1 − a2b0)x + a1b2 − a3b0 = 0,
a0b2x² + (a1b2 − a3b0)x + a2b2 − a3b1 = 0,

and thence the resultant

| a0b0  a0b1                 a0b2        |
| a0b1  a0b2 + a1b1 − a2b0   a1b2 − a3b0 |
| a0b2  a1b2 − a3b0          a2b2 − a3b1 |

which is a symmetrical determinant.
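Sylvester’s determinant of order 5 and the symmetrical determinant of order 3 just obtained may be compared numerically; in the sketch below (ours) they are found to agree up to the factor −a0, a normalizing factor depending on the degrees (our observation for these degrees, not a statement of the text):

    import numpy as np

    a0, a1, a2, a3 = 1.0, 2.0, -1.0, 3.0   # f = a0x^3 + a1x^2 + a2x + a3
    b0, b1, b2 = 2.0, -3.0, 1.0            # phi = b0x^2 + b1x + b2

    sylvester = np.array([[a0, a1, a2, a3, 0],
                          [0, a0, a1, a2, a3],
                          [b0, b1, b2, 0, 0],
                          [0, b0, b1, b2, 0],
                          [0, 0, b0, b1, b2]])

    bezout = np.array([[a0*b0, a0*b1, a0*b2],
                       [a0*b1, a0*b2 + a1*b1 - a2*b0, a1*b2 - a3*b0],
                       [a0*b2, a1*b2 - a3*b0, a2*b2 - a3*b1]])

    assert np.isclose(np.linalg.det(bezout), -a0 * np.linalg.det(sylvester))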

Case of Three Variables.—In the next place we consider the resultants of three homogeneous polynomials in three variables. We can prove that if the three equations be satisfied by a system of values of the variable, the same system will also satisfy the Jacobian or functional determinant. For if u, v, w be the polynomials of orders m, n, p respectively, the Jacobian is (u1 v2 w3), and by Euler’s theorem of homogeneous functions
xu1 + yu2 + zu3 = mu,
xv1 + yv2 + zv3 = nv,
xw1 + yw2 + zw3 = pw;
denoting now the reciprocal determinant by (U1V2W3) we obtain Jx = muU1 + nvV1 + pwW1; Jy = ..., Jz = ..., and it appears that the vanishing of u, v, and w implies the vanishing of J. Further, if m = n = p, we obtain by differentiation

J + x ∂J/∂x = m(u ∂U1/∂x + v ∂V1/∂x + w ∂W1/∂x + u1U1 + v1V1 + w1W1),

or

x ∂J/∂x = (m − 1)J + m(u ∂U1/∂x + v ∂V1/∂x + w ∂W1/∂x).

Hence the system of values also causes ∂J/∂x to vanish in this case; and by symmetry ∂J/∂y and ∂J/∂z also vanish.

The proof being of general application we may state that a system of values which causes the vanishing of k polynomials in k variables causes also the vanishing of the Jacobian, and in particular, when the forms are of the same degree, the vanishing also of the differential coefficients of the Jacobian in regard to each of the variables.

There is no difficulty in expressing the resultant by the method of symmetric functions. Taking two of the equations
axm + (by + cz) xm–1 +... =0,
a′xn + (b′y + c′z) xn–1 +... =0,
we find that, eliminating x, the resultant is a homogeneous function of y and z of degree mn; equating this to zero and solving for the ratio of y to z we obtain mn solutions; if values of y and z, given by any solution, be substituted in each of the two equations, they will possess a common factor which gives a value of x which, combined with the chosen values of y and z, yields a system of values which satisfies both equations. Hence in all there are mn such systems. If, therefore, we have a third equation, and we substitute each system of values in it successively and form the product of the mn expressions thus formed, we obtain a function which vanishes if any one system of values, common to the first two equations, also satisfies the third. Hence this product is the required resultant of the three equations.

Now by the theory of symmetric functions, any symmetric functions of the mn values which satisfy the two equations, can be expressed in terms of the coefficients of those equations. Hence, finally, the resultant is expressed in terms of the coefficients of the three equations, and since it is at once seen to be of degree mn in the coefficients of the third equation, by symmetry it must be of degrees np and pm in the coefficients of the first and second equations respectively. Its weight will be mnp (see Salmon’s Higher Algebra, 4th ed. § 77). The general theory of the resultant of k homogeneous equations in k variables presents no further difficulties when viewed in this manner.

The expression in form of a determinant presents in general considerable difficulties. If three equations, each of the second degree, in three variables be given, we have merely to eliminate the six products x², y², z², yz, zx, xy from the six equations

u = v = w = ∂J/∂x = ∂J/∂y = ∂J/∂z = 0;

if we apply the same process to these equations each of degree three, we obtain similarly a determinant of order 21, but thereafter the process fails. Cayley, however, has shown that, whatever be the degrees of the three equations, it is possible to represent the resultant as the quotient of two determinants (Salmon, l.c. p. 89).

Discriminants.—The discriminant of a homogeneous polynomial in k variables is the resultant of the k polynomials formed by differentiations in regard to each of the variables.

It is the resultant of k polynomials each of degree m–1, and thus contains the coefficients of each form to the degree (m–1)k–1; hence the total degrees in the coefficients of the k forms is, by addition, k(m–1)k–1; it may further be shown that the weight of each term of the resultant is constant and equal to m(m–1)k–1 (Salmon, l.c. p. 100).

A binary form which has a square factor has its discriminant equal to zero. This can be seen at once because the factor in question being once repeated in both differentials, the resultant of the latter must vanish.

Similarly, if a form in k variables be expressible as a quadratic function of k – 1, linear functions X1, X2, ... Xk – 1, the coefficients being any polynomials, it is clear that the k differentials have, in common, the system of roots derived from X1=X2=...=Xk – 1=0, and have in consequence a vanishing resultant. This implies the vanishing of the discriminant of the original form.

Expression in Terms of Roots.—Since x ∂ƒ/∂x + y ∂ƒ/∂y = mƒ, if we take any root x1, y1, of ∂ƒ/∂x, and substitute in mƒ we must obtain y1(∂ƒ/∂y) taken for x = x1, y = y1; hence the resultant of ƒ and ∂ƒ/∂x is, disregarding numerical factors, y1y2...ym−1 × discriminant of ƒ = a0 × disct. of ƒ.

Now
ƒ = (xy1 − x1y)(xy2 − x2y) ... (xym − xmy),
∂ƒ/∂x = Σ y1(xy2 − x2y) ... (xym − xmy),
and substituting in the latter any root of ƒ and forming the product, we find the resultant of ƒ and ∂ƒ/∂x, viz.

y1y2...ym (x1y2 − x2y1)² (x1y3 − x3y1)² ... (xrys − xsyr)² ...

and, dividing by y1y2...ym, the discriminant of ƒ is seen to be equal to the product of the squares of all the differences of any two roots of the equation. The discriminant of the product of two forms is equal to the product of their discriminants multiplied by the square of their resultant. This follows at once from the fact that the discriminant is
Π(αr − αs)² · Π(βr − βs)² · {Π(αr − βs)}².
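The last statement may be verified with a computer algebra system (our sketch, using sympy’s resultant and discriminant on forms written non-homogeneously; the polynomials are arbitrary examples):

    import sympy as sp

    x = sp.symbols('x')
    f = x**3 - 2*x + 5
    g = x**2 + 3*x + 1

    lhs = sp.discriminant(f * g, x)
    rhs = (sp.discriminant(f, x) * sp.discriminant(g, x)
           * sp.resultant(f, g, x) ** 2)
    assert sp.expand(lhs - rhs) == 0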

References for the Theory of Determinants.—T. Muir’s “List of Writings on Determinants,” Quarterly Journal of Mathematics, vol. xviii. pp. 110-149, October 1881, is the most important bibliographical article on the subject in any language; it contains 589 entries, arranged in chronological order, the first date being 1693 and the last 1880. The bibliography has been continued, and published at various dates (vol. xxi. pp. 299-320; vol. xxxvi. pp. 171-267) in the same periodical. These lists contain 1740 entries. T. Muir, History of the Theory of Determinants (2nd ed., London, 1906). School treatises are those of Thomson, Mansion, Bartl, Mollame, in English, French, German and Italian respectively.—Advanced treatises are those of William Spottiswoode (1851), Francesco Brioschi (1854), Richard Baltzer (1857), George Salmon (1859), N. Trudi (1862), Giovanni Garbieri (1874), Siegmund Günther (1875), Georges J. Dostor (1877), Baraniecki (the most extensive of all) (1879), R. F. Scott (2nd ed., 1904), T. Muir (1881).

II. The Theory Of Symmetric Functions

Consider n quantities α1, α2, α3, ... αn.

Every rational integral function of these quantities, which does not alter its value however the n suffixes 1, 2, 3, ... n be permuted, is a rational integral symmetric function of the quantities. If we write

(1 + α1x)(1 + α2x)...(1 + αnx) = 1 + a1x + a2x² + ... + anx^n,

the coefficients

a1 = α1 + α2 + ... + αn = Σα1,
a2 = α1α2 + α1α3 + ... = Σα1α2,
. . . . . . . . .
an = α1α2α3...αn

are called the elementary symmetric functions.
The general monomial symmetric function is

Σ α1^p1 α2^p2 α3^p3 ...,

the summation being for all permutations of the indices which result in different terms. The function is written

(p1p2p3...)

for brevity, and repetitions of numbers in the bracket are indicated by exponents, so that (p1p1p2) is written (p1²p2). The weight of the function is the sum of the numbers in the bracket, and the degree the highest of those numbers.

Ex. gr. The elementary functions are denoted by

(1), (1²), (1³), ... (1^n),

are all of the first degree, and are of weights 1, 2, 3, ... n respectively.

Remark.—In this notation (0) = n, (0²) = ½n(n − 1), ... (0^r) = n(n − 1)...(n − r + 1)/r!, &c. The binomial coefficients appear, in fact, as symmetric functions, and this is frequently of importance.

The order of the numbers in the bracket is immaterial; we may therefore always place them, as is most convenient, in descending order of magnitude; the numbers then constitute an ordered partition of the weight, and the leading number denotes the degree.

The sum of the monomial functions of a given weight is called the homogeneous-product-sum or complete symmetric function of that weight; it is denoted by hw; it is connected with the elementary functions by the formula

(1 − a1x + a2x² − a3x³ + ...)(1 + h1x + h2x² + h3x³ + ...) = 1,

which remains true when the symbols a and h are interchanged, as is at once evident by writing −x for x. This proves, also, that in any formula connecting a1, a2, a3, ... with h1, h2, h3, ... the symbols a and h may be interchanged.

Ex. gr. from h2 = a1² − a2 we derive a2 = h1² − h2.
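The generating identity and the interchange example may be verified for three quantities (our sketch, sympy assumed):

    import sympy as sp
    from itertools import combinations, combinations_with_replacement

    q = sp.symbols('alpha1 alpha2 alpha3')

    def a(r):   # elementary symmetric function (1^r)
        return sp.expand(sum(sp.Mul(*c) for c in combinations(q, r)))

    def h(r):   # homogeneous product sum of weight r
        return sp.expand(sum(sp.Mul(*c) for c in combinations_with_replacement(q, r)))

    x = sp.symbols('x')
    prod = sp.expand((1 - a(1)*x + a(2)*x**2 - a(3)*x**3)
                     * (1 + sum(h(r)*x**r for r in range(1, 7))))
    # the coefficients of x, x^2, x^3 all cancel (higher ones need more h's)
    assert all(prod.coeff(x, k) == 0 for k in range(1, 4))
    # h2 = a1^2 - a2 and, interchanging a and h, a2 = h1^2 - h2
    assert sp.expand(a(1)**2 - a(2) - h(2)) == 0
    assert sp.expand(h(1)**2 - h(2) - a(2)) == 0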

The function being as above denoted by a partition of the weight, viz. (p1p2p3...), it is necessary to bring under view other functions associated with the same series of numbers: such, for example, as

(p1p2)(p3)(p4p5...).

The expression just written is in fact a partition of a partition, and to avoid confusion of language will be termed a separation of a partition. A partition is separated into separates so as to produce a separation of the partition by writing down a set of partitions, each separate partition in its own brackets, so that when all the parts of these partitions are reassembled in a single bracket the partition which is separated is reproduced. It is convenient to write the distinct partitions or separates in descending order as regards weight. If the successive weights of the separates w1, w2, w3, ... be enclosed in a bracket we obtain a partition (w1w2w3...) of the weight which appertains to the separated partition. This partition is termed the specification of the separation. The degree of the separation is the sum of the degrees of the component separates. A separation is the symbolic representation of a product of monomial symmetric functions. A partition, (p1p1p2p3), can be separated in the manner (p1p2)(p1p3), and we may take the general form of a partition to be (λ^l1 μ^l2 ν^l3 ...) and that of a separation (J1)^j1 (J2)^j2 (J3)^j3 ..., when J1, J2, J3 ... denote the distinct separates involved.

Theorem.—The function symbolized by (n), viz. the sum of the nth powers of the quantities, is expressible in terms of functions which are symbolized by separations of any partition (n1^ν1 n2^ν2 n3^ν3 ...) of the number n. The expression is—

(−1)^(ν1+ν2+ν3+...−1) · (ν1 + ν2 + ν3 + ... − 1)!/(ν1!ν2!ν3!...) · (n)
  = Σ (−1)^(j1+j2+j3+...−1) · (j1 + j2 + j3 + ... − 1)!/(j1!j2!j3!...) · (J1)^j1 (J2)^j2 (J3)^j3 ...,

(J1)^j1 (J2)^j2 (J3)^j3 ... being a separation of (n1^ν1 n2^ν2 n3^ν3 ...) and the summation being in regard to all such separations. For the particular case of the partition (1^n), whose separates are the elementary functions a1, a2, a3, ..., the expression becomes

(−1)^(n−1) (1/n) (n) = Σ (−1)^(j1+j2+j3+...−1) · (j1 + j2 + j3 + ... − 1)!/(j1!j2!j3!...) · a1^j1 a2^j2 a3^j3 ....
To establish this write—

,

the product on the right involving a factor for each of the quantities , and being arbitrary.

Multiplying out the right-hand side and comparing coefficients

,
,
,
,

,

the summation being for all partitions of .

Auxiliary Theorem.—The coefficient of in the product is where is a separation of of specification , and the sum is for all such separations.

To establish this observe the result.

and remark that is a separation of of specification . A similar remark may be made in respect of

,

and therefore of the product of those expressions. Hence the theorem.

Now


whence, expanding by the exponential and multinomial theorems, a comparison of the coefficients of gives


and, by the auxiliary theorem, any term on the right-hand side is such that the coefficient of in is

,

where since is the specification of , . Comparison of the coefficients of therefore yields the result


,

for the expression of (n) in terms of products of symmetric functions symbolized by separations of (n1^ν1 n2^ν2 n3^ν3 ...).
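The particular case quoted above — the partition (1^n), whose separates are the elementary functions themselves — may be verified symbolically; in the sketch below (ours, sympy assumed) it is checked for n = 4 and three quantities:

    import sympy as sp
    from itertools import combinations
    from math import factorial

    q = sp.symbols('alpha1:4')     # three quantities
    a = {r: sp.expand(sum(sp.Mul(*c) for c in combinations(q, r)))
         for r in (1, 2, 3)}
    a[4] = sp.Integer(0)           # only three quantities, so a4 = 0

    n = 4
    lhs = sp.Rational((-1) ** (n - 1), n) * sum(t ** n for t in q)

    rhs = sp.Integer(0)
    # enumerate (j1, j2, j3, j4) with j1 + 2*j2 + 3*j3 + 4*j4 = n
    for j1 in range(n + 1):
        for j2 in range(n // 2 + 1):
            for j3 in range(n // 3 + 1):
                for j4 in range(n // 4 + 1):
                    if j1 + 2*j2 + 3*j3 + 4*j4 != n:
                        continue
                    j = j1 + j2 + j3 + j4
                    num = (-1) ** (j - 1) * factorial(j - 1)
                    den = (factorial(j1) * factorial(j2)
                           * factorial(j3) * factorial(j4))
                    rhs += (sp.Rational(num, den)
                            * a[1]**j1 * a[2]**j2 * a[3]**j3 * a[4]**j4)
    assert sp.expand(lhs - rhs) == 0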

Let denote the sums of the nth powers of quantities whose elementary symmetric functions are ; ; respectively: then the result arrived at above from the logarithmic expansion may be written

,

exhibiting as an invariant of the transformation given by the expressions of in terms of .

The inverse question is the expression of any monomial symmetric function by means of the power functions (1), (2), (3), ....

Theorem of Reciprocity.—If

,

where is a numerical coefficient, then also

.

We have found above that the coefficient of in the product is

,

the sum being for all separations of which have the specification . We can multiply out this expression so as to obtain a series of monomials of the form . It can be shown that the number enumerates distributions of a certain nature defined by the partitions , , and it is seen intuitively that the number remains unaltered when the first two of these partitions are interchanged (see Combinatorial Analysis). Hence the theorem is established.

Putting and we find a particular law of reciprocity given by Cayley and Betti,

and another by putting , for then becomes , and we have

Theorem of Expressibility.—“If a symmetric function be symbolized by and be any partitions of respectively, the function is expressible by means of functions symbolized by separations of

For, writing as before,

is a linear function of separations of of specification and if is a linear function of separations of of specification Suppose the separations of to involve different specifications and form the identities

where is one of the specifications.

The law of reciprocity shows that

viz.: a linear function of symmetric functions symbolized by the specifications; and that A table may be formed expressing the expressions as linear functions of the expressions , , and the numbers occurring therein possess row and column symmetry. By solving linear equations we similarly express the latter functions as linear functions of the former, and this table will also be symmetrical.

Theorem.—“The symmetric function