# 1911 Encyclopædia Britannica/Energetics

**ENERGETICS.** The most fundamental result attained by the
progress of physical science in the 19th century was the definite
enunciation and development of the doctrine of energy, which is
now paramount both in mechanics and in thermodynamics.
For a discussion of the elementary ideas underlying this conception
see the separate heading Energy.

Ever since physical speculation began in the atomic theories of the Greeks, its main problem has been that of unravelling the nature of the underlying correlation which binds together the various natural agencies. But it is only in recent times that scientific investigation has definitely established that there is a quantitative relation of simple equivalence between them, whereby each is expressible in terms of heat or mechanical power; that there is a certain measurable quantity associated with each type of physical activity which is always numerically identical with a corresponding quantity belonging to the new type into which it is transformed, so that the energy, as it is called, is conserved in unaltered amount. The main obstacle in the way of an earlier recognition and development of this principle had been the doctrine of caloric, which was suggested by the principles and practice of calorimetry, and taught that heat is a substance that can be transferred from one body to another, but cannot be created or destroyed, though it may become latent. So long as this idea maintained itself, there was no possible compensation for the destruction of mechanical power by friction; it appeared that mechanical effect had there definitely been lost. The idea that heat is itself convertible into power, and is in fact energy of motion of the minute invisible parts of bodies, had been held by Newton and in a vaguer sense by Bacon, and indeed long before their time; but it dropped out of the ordinary creed of science in the following century. It held a place, like many other anticipations of subsequent discovery, in the system of Natural Philosophy of Thomas Young (1804); and the discrepancies attending current explanations on the caloric theory were insisted on, about the same time, by Count Rumford and Sir H. Davy. 
But it was not till the actual experiments of Joule verified the same exact equivalence between heat produced and mechanical energy destroyed, by whatever process that was accomplished, that the idea of caloric had to be definitely abandoned. Some time previously R. Mayer, physician, of Heilbronn, had founded a weighty theoretical argument on the production of mechanical power in the animal system from the food consumed; he had, moreover, even calculated the value of a unit of heat, in terms of its equivalent in power, from the data afforded by Regnault’s determinations of the specific heats of air at constant pressure and at constant volume, the former being the greater on Mayer’s hypothesis (of which his calculation in fact constituted the verification) solely on account of the power required for the work of expansion of the gas against the surrounding constant pressure. About the same time Helmholtz, in his early memoir on the Conservation of Energy, constructed a cumulative argument by tracing the ramifications of the principle of conservation of energy throughout the whole range of physical science.
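Mayer's calculation can be retraced in modern units. The following is a minimal numerical sketch; the figures for air (the specific gas constant and the calorimetric value of c_p − c_v) are rounded present-day values assumed for illustration, not Regnault's data:

```python
# Mayer's argument: when a gas is heated at constant pressure, the excess
# of c_p over c_v is spent wholly on the work of expansion against that
# pressure, which per unit mass and per degree equals the specific gas
# constant R.  Comparing the mechanical and calorimetric measures of the
# same quantity yields the mechanical equivalent of heat.
R_air = 287.0          # work of expansion per kg per kelvin, in joules (assumed value)
cp_minus_cv = 68.6     # the same quantity measured calorimetrically, cal/(kg K) (assumed value)

J = R_air / cp_minus_cv   # joules of work equivalent to one calorie of heat
print(round(J, 2))
```

The result, about 4.18 joules per calorie, is close to the equivalence that Joule's experiments afterwards established directly.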

*Mechanical and Thermal Energy.*—The amount of energy,
defined in this sense by convertibility with mechanical work,
which is contained in a material system, must be a function of its
physical state and chemical constitution and of its temperature.
The change in this amount, arising from a given transformation
in the system, is usually measured by degrading the energy that
leaves the system into heat; for it is always possible to do this,
while the conversion of heat back again into other forms of
energy is impossible without assistance, taking the form of
compensating degradation elsewhere. We may adopt the
provisional view which is the basis of abstract physics, that all
these other forms of energy are in their essence mechanical,
that is, arise from the motion or strain of material or ethereal
media; then their distinction from heat will lie in the fact that
these motions or strains are simply co-ordinated, so that they can
be traced and controlled or manipulated in detail, while the
thermal energy subsists in irregular motions of the molecules or
smallest portions of matter, which we cannot trace on account of
the bluntness of our sensual perceptions, but can only measure as
regards total amount.

*Historical: Abstract Dynamics.*—Even in the case of a purely
mechanical system, capable only of a finite number of definite
types of disturbance, the principle of the conservation of energy is
very far from giving a complete account of its motions; it forms
only one among the equations that are required to determine
their course. In its application to the kinetics of invariable
systems, after the time of Newton, the principle was emphasized
as fundamental by Leibnitz, was then improved and generalized
by the Bernoullis and by Euler, and was ultimately expressed in
its widest form by Lagrange. It is recorded by Helmholtz that
it was largely his acquaintance in early years with the works of
those mathematical physicists of the previous century, who had
formulated and generalized the principle as a help towards the
theoretical dynamics of complex systems of masses, that started
him on the track of extending the principle throughout the whole
range of natural phenomena. On the other hand, the ascertained
validity of this extension to new types of phenomena, such as
those of electrodynamics, now forms a main foundation of our
belief in a mechanical basis for these sciences.

In the hands of Lagrange the mathematical expression for the
manner in which the energy is connected with the geometrical
constitution of the material system became a sufficient basis for a
complete knowledge of its dynamical phenomena. So far as
statics was concerned, this doctrine took its rise as far back as
Galileo, who recognized in the simpler cases that the work
expended in the steady driving of a frictionless mechanical
system is equal to its output. The expression of this fact was
generalized in a brief statement by Newton in the *Principia*, and
more in detail by the Bernoullis, until, in the analytical guise of
the so-called principle of “virtual velocities” or virtual work, it
finally became the basis of Lagrange’s general formulation of
dynamics. In its application to kinetics a purely physical
principle, also indicated by Newton, but developed long after with
masterly applications by d’Alembert, that the reactions of the
infinitesimal parts of the system against the accelerations of
their motions statically equilibrate the forces applied to the system
as a whole, was required in order to form a sufficient basis, and
one which Lagrange soon afterwards condensed into the single
relation of Least Action. As a matter of history, however, the
complete formulation of the subject of abstract dynamics actually
arose (in 1758) from Lagrange’s precise demonstration of the
principle of Least Action for a particle, and its immediate extension,
on the basis of his new Calculus of Variations, to a system
of connected particles such as might be taken as a representation
of any material system; but here too the same physical as
distinct from mechanical considerations come into play as in
d’Alembert’s principle. (See Dynamics: *Analytical*.)

It is in the cases of systems whose state is changing so slowly
that reactions arising from changing motions can be neglected,
that the conditions are by far the simplest. In such systems,
whether stationary or in a state of steady motion, the energy
depends on the configuration alone, and its mathematical
expression can be determined from measurement of the work
required for a sufficient number of simple transformations;
once it is thus found, all the statical relations of the system are
implicitly determined along with it, and the results of all other
transformations can be predicted. The general development of
such relations is conveniently classed as a separate branch of
physics under the name *Energetics*, first invented by W. J. M.
Rankine; but the essential limitations of this method have not
always been observed. As regards statical change, the complete
specification of a mechanical system is involved in its geometrical
configuration and the function expressing its mechanical energy
in terms thereof. Systems which have statical energy-functions
of the same analytical form behave in corresponding ways, and
can serve as models or representations of one another.

*Extension to Thermal and Chemical Systems.*—This dominant
position of the principle of energy, in ordinary statical problems,
has in recent times been extended to transformations involving
change of physical state or chemical constitution as well as change
of geometrical configuration. In this wider field we cannot
assert that mechanical (or available) energy is never lost, for it
may be degraded into thermal energy; but we can use the
principle that on the other hand it can never spontaneously
increase. If this were not so, cyclic processes might theoretically
be arranged which would continue to supply mechanical power
so long as energy of any kind remained in the system; whereas
the irregular and uncontrollable character of the molecular
motions and strains which constitute thermal energy, in combination
with the vast number of the molecules, must place an effectual
bar on their unlimited co-ordination. To establish a doctrine
of *energetics* that shall form a sufficient foundation for a theory
of the trend of chemical and physical change, we have, therefore,
to impart precision to this notion of available energy.

*Carnot’s Principle: Entropy.*—The whole subject is involved
in the new principle contributed to theoretical physics by Sadi
Carnot in 1824, in which the far-reaching modern conception of
cyclic processes was first scientifically developed. It was shown
by Carnot, on the basis of certain axioms, whose theoretical
foundations were subsequently corrected and strengthened by
Clausius and Lord Kelvin, that a reversible mechanical process,
working in a cycle by means of thermal transfers, which takes
heat, say H_{1}, into the material system at a given temperature
T_{1}, and delivers the part of it not utilized, say H_{2}, at a lower
given temperature T_{2}, is more efficient, considered as a working
engine, than any other such process, operating between the same
two temperatures but not reversible, could be. This relation of
inequality involves a definite law of equality, that the mechanical
efficiencies of all reversible cyclic processes are the same, whatever
be the nature of their operation or the material substances
involved in them; that in fact the efficiency is a function solely
of the two temperatures at which the cyclically working system
takes in and gives out heat. These considerations constitute a
fundamental general principle to which all possible slow reversible
processes, so far as they concern matter in bulk, must conform in
all their stages; its application is almost coextensive with the
scope of general physics, the special kinetic theories in which
inertia is involved, being excepted. (See Thermodynamics.)
If the working system is an ideal gas-engine, in which a perfect
gas (known from experience to be a possible state of matter) is
passed through the cycle, and if temperature is measured from
the absolute zero by the expansion of this gas, then simple direct
calculation on the basis of the laws of ideal gases shows that
H_{1}/T_{1} = H_{2}/T_{2}; and as by the conservation of energy the work
done is H_{1} − H_{2}, it follows that the efficiency, measured as the
ratio of the work done to the supply of heat, is 1 − T_{2}/T_{1}. If we
change the sign of H_{1} and thus consider heat as positive when
it is restored to the system as is H_{2}, the fundamental equation
becomes H_{1}/T_{1} + H_{2}/T_{2} = 0; and as any complex reversible
working system may be considered as compounded in various
ways of chains of elementary systems of this type, *whose effects*
*are additive*, the general proposition follows, that in any reversible
complete cyclic change which involves the taking in of heat by
the system of which the amount is δH_{r}, when its temperature
ranges between T_{r} and T_{r} + δT, the equation ΣδH_{r}/T_{r} = 0 holds
good. Moreover, if the changes are not reversible, the proportion
of the heat supply that is utilized for mechanical work will be
smaller, so that more heat will be restored to the system, and
ΣδH_{r}/T_{r} or, as it may be expressed, ∫*d*H/T, must have a larger
value, and must thus be positive. The first statement involves
further, that for all reversible paths of change of the system from
one state C to another state D, the value of ∫*d*H/T must be the
same, because any one of these paths and any other one reversed
would form a cycle; whereas for any irreversible path of change
between the same states this integral must have a greater value
(and so exceed the difference of entropies at the ends of the path).
The definite quantity represented by this integral for a reversible
path was introduced by Clausius in 1854 (also adumbrated by
Kelvin’s investigations about the same time), and was named
afterwards by him the increase of the *entropy* of the system in
passing from the state C to the state D. This increase, being thus
the same for the unlimited number of possible reversible paths
involving independent variation of all its finite co-ordinates,
along which the system can pass, can depend only on the terminal
states. The entropy belonging to a given state is therefore a
function of that state alone, irrespective of the manner in which it
has been reached; and this is the justification of the assignment to
it of a special name, connoting a property of the system depending
on its actual condition and not on its previous history. Every
reversible change in an isolated system thus maintains the
entropy of that system unaltered; no possible spontaneous
change can involve decrease of the entropy; while any defect of
reversibility, arising from diffusion of matter or motion in the
system, necessarily leads to increase of entropy. For a physical or
chemical system only those changes are spontaneously possible
which would lead to increase of the entropy; if the entropy is
already a maximum for the given total energy, and so incapable
of further continuous increase under the conditions imposed
upon the system, there must be stable equilibrium.
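In modern notation the arithmetic of a reversible cycle runs as follows; a minimal sketch, with invented figures for the two temperatures and the heat supplied:

```python
T1, T2 = 500.0, 300.0   # absolute temperatures of intake and discharge (assumed)
H1 = 1000.0             # heat taken into the system at T1 (assumed)

# Carnot's relation for a reversible cycle: H1/T1 = H2/T2
H2 = H1 * T2 / T1       # heat delivered, unutilized, at T2

work = H1 - H2          # conservation of energy
efficiency = work / H1  # ratio of the work done to the supply of heat

# the efficiency is a function solely of the two temperatures
assert abs(efficiency - (1 - T2 / T1)) < 1e-12
```

With heat reckoned positive when taken in and negative when given out, the same figures satisfy ΣδH/T = 0 round the complete cycle.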

This definite quantity belonging to a material system, its entropy φ, is thus concomitant with its energy E, which is also a definite function of its actual state by the law of conservation of energy; these, along with its temperature T, and the various co-ordinates expressing its geometrical configuration and its physical and chemical constitution, are the quantities with which the thermodynamics of the system deals. That branch of science develops the consequences involved in just two principles: (i.) that the energy of every isolated system is constant, and (ii.) that its entropy can never diminish; any complication that may be involved arises from complexity in the systems to which these two laws have to be applied.

*The General Thermodynamic Equation.*—When any physical or
chemical system undergoes an infinitesimal change of state, we
have δE = δH + δU, where δH is the energy that has been acquired
*as heat* from sources extraneous to the system during the change,
and δU is the energy that has been imparted by reversible
agencies such as mechanical or electric work. It is, however,
not usually possible to discriminate permanently between heat
acquired and work imparted, for (unless for isothermal transformations)
neither δH nor δU is the exact differential of a function of
the constitution of the system and so independent of its previous
history, although their sum δE is such; but we can utilize the
fact that δH is equal to Tδφ where δφ is such, as has just been seen.
Thus E and φ represent properties of the system which, along with
temperature, pressure and other independent data specifying its
constitution, must form the variables of an analytical exposition.
We have, therefore, to substitute Tδφ for δH; also the *change* of
internal energy is determined by the change of constitution,
involving a differential relation of type

δE = Tδφ − *p*δv + δW + μ_{1}δ*m*_{1} + μ_{2}δ*m*_{2} + ... + μ_{n}δ*m*_{n},

when the system consists of an intimate mixture (solution) of
masses *m*_{1}, *m*_{2}, ... *m*_{n} of given constituents, which differ physically
or chemically but may be partially transformable into each other
by chemical or physical action during the changes under consideration,
the whole being of volume v and under extraneous
pressure *p*, while W is potential energy arising from physical
forces such as those of gravity, capillarity, &c. The variables
*m*_{1}, *m*_{2}, ... *m*_{n} may not be all independent; for example, if the
system were chloride of ammonium gas existing along with its
gaseous products of dissociation, hydrochloric acid and ammonia,
only one of the three masses would be independently variable. The
sufficient number of these variables (independent components)
together with two other variables, which may be v and T, or v and
φ, specifies and determines the state of the system, considered as
matter in bulk, at each instant. It is usual to include δW in
μ_{1}δ*m*_{1} + ...; in all cases where this is possible the single
equation

δE = Tδφ − *p*δv + μ_{1}δ*m*_{1} + μ_{2}δ*m*_{2} + ... + μ_{n}δ*m*_{n}   (1)

thus expresses the complete variation of the energy-function E
arising from change of state; and when the part involving the n
constitutive differentials has been expressed in terms of the
number of them that are really independent, this equation by
itself becomes the unique expression of *all* the thermodynamic
relations of the system. These are in fact the various relations
ensuring that the right-hand side is an exact differential, and are
of the type of reciprocal relations such as *d*μ_{r}/*d*φ = *d*T/*d*m_{r}.
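For a single substance with no change of constitution (every δm_r null and δW absent) the equation reduces to δE = Tδφ − pδv, which can be verified numerically; the sketch below assumes the textbook monatomic ideal-gas forms of E, φ and p, which are illustrative and not given in the article:

```python
import math

R = 8.314
n, cv = 1.0, 1.5 * 8.314        # one mole of monatomic ideal gas (assumed model)

def E(T, v):   return n * cv * T                                  # internal energy
def phi(T, v): return n * (cv * math.log(T) + R * math.log(v))    # entropy, additive constant dropped
def p(T, v):   return n * R * T / v                               # pressure

T0, v0 = 300.0, 0.02
dT, dv = 1e-6, 1e-9             # an infinitesimal change of state

dE   = E(T0 + dT, v0 + dv) - E(T0, v0)
dphi = phi(T0 + dT, v0 + dv) - phi(T0, v0)

# the general thermodynamic equation with no constitutive change:
# dE = T*dphi - p*dv
assert abs(dE - (T0 * dphi - p(T0, v0) * dv)) < 1e-10
```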

The condition that the state of the system be one of stable
equilibrium is that δφ, the variation of entropy, be negative for
all formally imaginable infinitesimal transformations which
make δE vanish; for as δφ cannot actually be negative for any
spontaneous variation, none of these transformations can then
occur. From the form of the equation, this condition is the same
as that δE − Tδφ must be positive for *all possible* variations of
state of the system as above defined in terms of co-ordinates
representing its constitution in bulk, without restriction.

We can change one of the independent variables expressing the state of the system from φ to T by subtracting δ(φT) from both sides of the equation of variation: then

δ(E − Tφ) = −φδT − *p*δv + μ_{1}δ*m*_{1} + ... + μ_{n}δ*m*_{n}.

It follows that for *isothermal* changes, *i.e.* those for which δT is
maintained null by an environment at constant temperature, the
condition of stable equilibrium is that the function E − Tφ shall be
a minimum. If the system is subject to an external pressure *p*,
which as well as the temperature is imposed constant from
without and thus incapable of variation through internal changes,
the condition of stable equilibrium is similarly that E − Tφ + *pv*
shall be a minimum.
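The minimum property can be exhibited numerically for the simplest system, a single ideal gas held at imposed temperature and pressure; the gas model and the figures are assumptions for illustration:

```python
import math

R, n = 8.314, 1.0
cv = 1.5 * R                 # monatomic ideal gas (assumed model)
T, p = 300.0, 101325.0       # temperature and pressure imposed from without

def criterion(v):
    E   = n * cv * T
    phi = n * (cv * math.log(T) + R * math.log(v))   # additive constant immaterial
    return E - T * phi + p * v                       # a minimum in stable equilibrium

# scan volumes and locate the minimum of E - T*phi + p*v
vols = [0.005 + i * 1e-5 for i in range(4000)]
v_eq = min(vols, key=criterion)

# the equilibrium volume satisfies the gas law p*v = n*R*T
assert abs(v_eq - n * R * T / p) < 2e-5
```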

A chemical system maintained at constant temperature by
communication of heat from its environment may thus have
several states of stable equilibrium corresponding to different
minima of the function here considered, just as there may be
several minima of elevation on a landscape, one at the bottom of
each depression; in fact, this analogy, when extended to space of
n dimensions, exactly fits the case. If the system is sufficiently
disturbed, for example, by electric shock, it may pass over
(explosively) from a higher to a lower minimum, but never
(without compensation from outside) in the opposite direction.
The former passage, moreover, is often effected by introducing a
new substance into the system; sometimes that substance is
recovered unaltered at the end of the process, and then its action
is said to be purely *catalytic*; its presence modifies the form of
the function E − Tφ so as to obliterate the ridge between the two
equilibrium states in the graphical representation.

There are systems in which the equilibrium states are but very
slightly dependent on temperature and pressure within wide
limits, outside which reaction takes place. Thus while there are
cases in which a state of mobile dissociation exists in the system
which changes continuously as a function of these variables,
there are others in which change does not sensibly occur at all
until a certain *temperature of reaction* is attained, after which it
proceeds very rapidly owing to the heat developed, and the
system soon becomes sensibly permanent in a transformed phase
by completion of the reaction. In some cases of this latter type
the cause of the delay in starting lies possibly in passive resistance
to change, of the nature of viscosity or friction, which is
competent to convert an unstable mechanical equilibrium into a
moderately stable one; but in most such reactions there seems to
be no exact equilibrium at any temperature, short of the ultimate
state of dissipated energy in which the reaction is completed,
although the velocity of reaction is found to diminish exponentially
with change of temperature, and thus becomes insignificant at a
small interval from the temperature of pronounced activity.
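This exponential dependence of the velocity of reaction on temperature is what is now written as the Arrhenius law; a minimal sketch, with hypothetical constants:

```python
import math

A  = 1.0e13    # frequency factor, per second (hypothetical)
Ea = 1.5e5     # activation energy, J/mol (hypothetical)
R  = 8.314

def velocity(T):
    """Arrhenius form for the velocity of reaction."""
    return A * math.exp(-Ea / (R * T))

# a small interval below the temperature of pronounced activity makes
# the velocity insignificant: here a drop of 50 K cuts it about fifteen-fold
v_hot, v_cool = velocity(600.0), velocity(550.0)
assert v_cool < v_hot / 10
```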

*Free Energy*.—The quantity E − Tφ thus plays the same
fundamental part in the thermal statics of general chemical
systems at uniform temperature that the potential energy plays
in the statics of mechanical systems of unchanging constitution.
It is a function of the geometrical co-ordinates, the physical and
chemical constitution, and the temperature of the system, which
determines the conditions of stable equilibrium *at each temperature*;
it is, in fact, the potential energy generalized so as to
include temperature, and thus be a single function relating to each
temperature but at the same time affording a basis of connexion
between the properties of the system at different temperatures.
It has been called the *free energy* of the system by Helmholtz, for
it is the part of the energy whose variation is connected with
changes in the bodily structure of the system represented by the
variables *m*_{1}, *m*_{2}, ... *m*_{n}, and not with the irregular molecular
motions represented by heat, so that it can take part freely in
physical transformations. Yet this holds good only subject to
the condition that the temperature is not varied; it has been
seen above that for the more general variation neither δH nor δU
is an exact differential, and no line of separation can be drawn
between thermal and mechanical energies.

The study of the evolution of ideas in this, the most abstract
branch of modern mathematical physics, is rendered difficult in
the manner of most purely philosophical subjects by the variety
of terminology, much of it only partially appropriate, that has
been employed to express the fundamental principles by different
investigators and at different stages of the development.
Attentive examination will show, what is indeed hardly surprising,
that the principles of the theory of free energy of Gibbs and Helmholtz
had been already grasped and exemplified by Lord Kelvin
in the very early days of the subject (see the paper “On the
Thermoelastic and Thermomagnetic Properties of Matter,
Part I.” *Quarterly Journal of Mathematics*, No. 1, April 1855;
reprinted in Phil. Mag., January 1878, and in *Math. and Phys.*
*Papers*, vol. i. pp. 291, seq.). Thus the striking new advance
contained in the more modern work of J. Willard Gibbs (1875–1877)
and of Helmholtz (1882) was rather the sustained general
application of these ideas to chemical systems, such as the
galvanic cell and dissociating gaseous systems, and in general
fashion to heterogeneous concomitant phases. The fundamental
paper of Kelvin connecting the electromotive force of the cell
with the energy of chemical transformation is of date 1851, some
years before the distinction between free energy and total energy
had definitely crystallized out; and, possibly satisfied with the
approximate exactness of his imperfect formula when applied to a
Daniell’s cell (*infra*), and deterred by absence of experimental
data, he did not return to the subject. In 1852 he briefly
announced (*Proc. Roy. Soc. Edin.*) the principle of the dissipation
of mechanical (or available) energy, including the necessity of
compensation elsewhere when restoration occurs, in the form that
“any restoration of mechanical energy, without more than an
equivalent of dissipation, is impossible”—probably even in
vital activity; but a sufficient specification of available energy
(cf. *infra*) was not then developed. In the paper above referred to,
where this was done, and illustrated by full application to solid
elastic systems, the total energy is represented by *c* and is named
“the intrinsic energy,” the energy taken in during an isothermal
transformation is represented by *e*, of which H is taken in as heat,
while the remainder, the change of free (or mechanical or
available) energy of the system is the unnamed quantity denoted
by the symbol *w*, which is “the work done by the applied forces”
at uniform temperature. It is pointed out that it is *w* and not *e*
that is the potential energy-function for isothermal change, of
which the form can be determined directly by dynamical and
physical experiment, and from which alone the criteria of
equilibrium and stress are to be derived—simply for the reason
that for all *reversible* paths at constant temperature between the
same terminal configurations, there must, by Carnot’s principle,
be the same gain or loss of heat. And a system of formulae is
given, (5) to (11)—*Ex. gr.* *e* = *w* − *t* d*w*/d*t* + J∫*s* d*t*—for finding the total
energy *e* for any temperature *t* when *w* and the thermal capacity *s*
of the system, in a standard state, have thus been ascertained,
and another for establishing connexion between the form of *w*
for one temperature and its form for adjacent temperatures—which
are identical with those developed by Helmholtz long
afterwards, in 1882, except that the entropy appears only as an
unnamed integral. The progress of physical science is formally
identified with the exploration of this function *w* for physical
systems, with continually increasing exactness and range—except
where pure kinetic considerations prevail, in which cases the
wider Hamiltonian dynamical formulation is fundamental.
Another aspect of the matter will be developed below.
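Kelvin's relation e = w − t dw/dt (the term J∫s dt merely adjusts the reference state and is omitted here) can be checked on a model system; the sketch below assumes one mole of monatomic ideal gas at fixed volume, an illustration and not Kelvin's own example:

```python
import math

R, v = 8.314, 0.02
cv = 1.5 * R                    # one mole of monatomic ideal gas (assumed model)

def w(t):
    """Free energy E - t*phi as a function of temperature, constants dropped."""
    return cv * t - t * (cv * math.log(t) + R * math.log(v))

t, dt = 350.0, 1e-4
dwdt = (w(t + dt) - w(t - dt)) / (2 * dt)   # central difference for dw/dt

e = w(t) - t * dwdt             # Kelvin's formula for the total energy
assert abs(e - cv * t) < 1e-3   # agrees with the model's internal energy cv*t
```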

A somewhat different procedure, in terms of entropy as
fundamental, has been adopted and developed by Planck. In an
isolated system the trend of change must be in the direction
which increases the entropy φ, by Clausius’ form of the principle.
But in experiment it is a system at constant temperature rather
than an adiabatic one that usually is involved; this can be
attained formally by including in the isolated system (cf. *infra*) a
source of heat at that temperature and of unlimited capacity.
When the energy of the original system increases by δE, this source
must give up heat of amount δE, and its entropy therefore
diminishes by δE/T. Thus for the original system maintained at
constant temperature T it is δφ − δE/T that must always
be positive in spontaneous change, which is the same criterion as
was reached above. Reference may also be made to H. A.
Lorentz’s *Collected Scientific Papers*, part i.
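In modern symbols the argument condenses to one line; at constant temperature the entropy criterion and the free-energy criterion are the same statement:

```latex
\delta\phi_{\mathrm{total}}
  \;=\; \delta\phi \;-\; \frac{\delta E}{T}
  \;=\; -\,\frac{1}{T}\,\delta\!\left(E - T\phi\right)
  \qquad (\delta T = 0),
```

so that the requirement that δφ − δE/T be positive in spontaneous change is precisely the requirement that E − Tφ diminish.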

A striking anticipation, almost contemporaneous, of Gibbs’s
thermodynamic potential theory (*infra*) was made by Clerk
Maxwell in connexion with the discussion of Andrews’s experiments
on the critical temperature of mixed gases, in a letter
published in Sir G. G. Stokes’s *Scientific Correspondence* (vol.
ii. p. 34).

*Available Energy.*—The same quantity φ, which Clausius
named the entropy, arose in various ways in the early development
of the subject, in the train of ideas of Rankine and Kelvin
relating to the expression of the *available energy* A of the material
system. Suppose there were accessible an auxiliary system
containing an *unlimited* quantity of heat at absolute temperature
T_{0}, forming a condenser into which heat can be discharged from
the working system, or from which it may be recovered at that
temperature: we proceed to find how much of the heat of our
system is available for transformation into mechanical work, in a
process which reduces the whole system to the temperature of
this condenser. Provided the process of reduction is performed
reversibly, it is immaterial, by Carnot’s principle, in what
manner it is effected: thus in following it out in detail we can
consider each elementary quantity of heat δH removed from the
system as set aside at its actual temperature between T and
T + δT for the production of mechanical work δW and the
residue of it δH_{0} as directly discharged into the condenser at T_{0}.
The principle of Carnot gives δH/T = δH_{0}/T_{0}, so that the portion
of the heat δH that is not available for work is δH_{0}, equal to
T_{0}δH/T. In the whole process the part not available in connexion
with the condenser at T_{0} is therefore T_{0} ∫*d*H/T. This quantity
must be the same whatever reversible process is employed:
thus, for example, we may first transform the system reversibly
from the state C to the state D, and then from the state D to the
final state of uniform temperature T_{0}. It follows that the value
of T_{0} ∫*d*H/T, representing the heat degraded, is the same along all
reversible paths of transformation from the state C to the state D;
so that the function ∫*d*H/T is the excess of a definite quantity
φ connected with the system in the former state as compared
with the latter.

It is usual to change the law of sign of δH so that gain of heat
by the system is reckoned positive; then, relative to a condenser
of unlimited capacity at T_{0}, the state C contains more mechanically
*available energy* than the state D by the amount
E_{C} − E_{D} + T_{0} ∫*d*H/T, that is, by E_{C} − E_{D} − T_{0}(φ_{C} − φ_{D}). In this way
the existence of an entropy function with a definite value for each
state of the system is again seen to be the direct analytical
equivalent of Carnot’s axiom that no process can be more efficient
than a reversible process between the same initial and final states.
The name *motivity* of a system was proposed by Lord Kelvin in
1879 for this conception of available energy. It is here specified
as relative to a condenser of unlimited capacity at an assigned
temperature T_{0}: some such specification is necessary to the
definition; in fact, if T_{0} were the absolute zero, all the energy
would be mechanically available.
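The motivity difference E_C − E_D − T₀(φ_C − φ_D) can be evaluated for a definite case; all the figures and the ideal-gas forms of E and φ below are assumptions for illustration:

```python
import math

R, n = 8.314, 1.0
cv = 1.5 * R                    # monatomic ideal gas (assumed model)

def E(T):      return n * cv * T
def phi(T, v): return n * (cv * math.log(T) + R * math.log(v))   # constant cancels in differences

T0 = 280.0                      # temperature of the unlimited condenser (assumed)

TC, vC = 400.0, 0.020           # state C (assumed figures)
TD, vD = 320.0, 0.030           # state D (assumed figures)

# mechanically available energy of state C in excess of state D
A = E(TC) - E(TD) - T0 * (phi(TC, vC) - phi(TD, vD))
assert A > 0
```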

But we can obtain an intrinsically different and self-contained
comparison of the available energies in a system in two different
states at different temperatures, by ascertaining how much
energy would be dissipated in each in a reduction to the *same*
standard state of the system itself, at a standard temperature T_{0}.
We have only to reverse the operation, and change back this
standard state to each of the others in turn. This will involve
abstractions of heat δH_{0} from the various portions of the system
in the standard state, and returns of δH to the state at T_{0}; if
this return were δH_{0}T/T_{0} instead of δH, there would be no loss of
availability in the direct process; hence there is actual dissipation
δH − δH_{0}T/T_{0}, that is T(δφ − δφ_{0}). On passing from state 1
to state 2 through this standard state 0 the difference of these
dissipations will represent the energy of the system that has
become unavailable. Thus in this sense E − Tφ + Tφ_{0} + const.
represents for each state the amount of energy that is available;
but instead of implying an unlimited source of heat at the standard
temperature T_{0}, it implies that there is no extraneous source.
The available energy thus defined differs from E − Tφ, the *free*
*energy* of Helmholtz, or the *work function of the applied forces* of
Kelvin, which involves no reference to any standard state, by a
simple linear function of the temperature alone which is immaterial
as regards its applications.

The determination of the available mechanical energy arising
from differences of temperature between the parts of the same
system is a more complex problem, because it involves a
determination of the common temperature to which reversible
processes will ultimately reduce them; for the simple case in
which no changes of state occur the solution was given by Lord
Kelvin in 1853, in connexion with the above train of ideas (cf.
Tait’s *Thermodynamics*, §179). In the present exposition the
system is sensibly in equilibrium at each stage, so that its
temperature T is always uniform throughout; isolated portions
at different temperatures would be treated as different systems.
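Kelvin's result for the simplest case can be illustrated by a short calculation in modern notation (a sketch only: the two equal bodies of constant heat capacity, and the numerical temperatures, are assumptions for illustration, not Kelvin's own example):

```python
import math

def equalize_reversibly(t1, t2, c=1.0):
    # Two equal bodies of constant heat capacity c at temperatures t1, t2.
    # A reversible engine worked between them leaves the total entropy
    # unchanged: c*log(tf/t1) + c*log(tf/t2) = 0, so tf = sqrt(t1*t2).
    tf = math.sqrt(t1 * t2)
    # The heat energy no longer present in the bodies has been delivered
    # as mechanical work; this is the available energy of the pair.
    work = c * (t1 + t2 - 2.0 * tf)
    return tf, work

tf, w = equalize_reversibly(400.0, 100.0)
# direct irreversible contact would instead give the arithmetic mean,
# 250, with no work obtained; the difference measures the dissipation.
```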

*Thermodynamic Potentials.*—We have now to develop the
relations involved in the general equation (1) of thermodynamics.
Suppose the material system includes two coexistent states or
phases, with opportunity for free interchange of constituents—for
example, a salt solution and the aqueous vapour in equilibrium
with it. Then in equilibrium a slight transfer δ*m* of the water-substance
of mass *m*_{r} constituting the vapour, into the water-substance
of mass *m*_{s}, existing in the solution, should not produce
any alteration of the first order in δE − Tδφ; therefore μ_{r} must be
equal to μ_{s}. The quantity μ_{r} is called by Willard Gibbs the
potential of the corresponding substance of mass *m*_{r}; it may be
defined as its marginal available energy per unit mass at the
given temperature. If then a system involves in this way
coexistent phases which remain permanently separate, the
potentials of any constituent must be the same in all of them in
which that constituent exists, for otherwise it would tend to pass
from the phases in which its potential is higher to those in which
it is lower. If the constituent is non-existent in any phase, its
potential when in that phase would have to be higher than in the
others in which it is actually present; but as the potential
increases logarithmically when the density of the constituent is
indefinitely diminished, this condition is automatically satisfied—or,
more strictly, the constituent cannot be entirely absent,
but the presence of the merest trace will suffice to satisfy the
condition of equality of potential. When the action of the force of
gravity is taken into account, the potential of each constituent
must include the gravitational potential gh; in the equilibrium
state the total potential of each constituent, including this part,
must be the same throughout all parts of the system into which
it is freely mobile. An example is Dalton’s law of the independent
distributions of the gases in the atmosphere, if it were in a
state of rest. A similar statement applies to other forms of
mechanical potential energy arising from actions at a distance.

When a slight constitutive change occurs in a galvanic element at given temperature, producing available energy of electric current, in a reversible manner and isothermally, at the expense of chemical energy, it is the free energy of the system E − Tφ, not its total intrinsic energy, whose value must be conserved during the process. Thus the electromotive force is equal to the change of this free energy per electrochemical equivalent of reaction in the cell. This proposition, developed by Gibbs and later by Helmholtz, modifies the earlier one of Kelvin—which tacitly assumed all the energy of reaction to be available—except in the cases such as that of a Daniell’s cell, in which the magnitude of the electromotive force does not depend sensibly on the temperature.
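The proposition may be put in modern notation as a brief sketch (F is the Faraday constant, n the number of electrochemical equivalents per molecule of reaction; the numerical free-energy value is an illustrative assumption, not a measured datum):

```python
F = 96485.0  # coulombs per electrochemical equivalent (Faraday constant)

def emf(delta_free_energy, n):
    # electromotive force = decrease of the free energy E - T*phi per
    # n electrochemical equivalents of reaction (Gibbs, Helmholtz)
    return -delta_free_energy / (n * F)

def heat_of_reaction(e, de_dT, T, n):
    # Gibbs-Helmholtz relation: total heat of reaction = n*F*(T*dE/dT - E);
    # Kelvin's earlier rule, which equated the E.M.F. to the whole heat of
    # reaction, is recovered whenever the temperature-coefficient dE/dT
    # vanishes, as it nearly does in a Daniell cell.
    return n * F * (T * de_dT - e)

e = emf(-212000.0, 2)                     # assumed free-energy change, joules
q = heat_of_reaction(e, 0.0, 298.0, 2)    # equals -212000 when dE/dT = 0
```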

The effects produced on electromotive forces by difference of concentrations in dilute solutions can thus be accounted for and traced out, from the knowledge of the form of the free energy for such cases; as also the effects of pressure in the case of gas batteries. The free energy does not sensibly depend on whether the substance is solid or fused—for the two states are in equilibrium at the temperature of fusion—though the total energy differs in these two cases by the heat of fusion; for this reason, as Gibbs pointed out, voltaic potential-differences are the same for the fused as for the solid state of the substances concerned.

*Relations involving Constitution only.*—The potential of a
component in a given solution can depend only on the temperature
and pressure of the solution, and the densities of the various
components, including itself; as no distance-actions are usually
involved in chemical physics, it will not depend on the aggregate
masses present. The example above mentioned, of two coexistent
phases liquid and vapour, indicates that there may thus be
relations between the constitutions of the phases present in a
chemical system which do not involve their total masses. These
are developed in a very direct manner in Willard Gibbs’s original
procedure. In so far as attractions at a distance (a uniform
force such as gravity being excepted) and capillary actions at the
interfaces between the phases are inoperative, the fundamental
equation (1) can be integrated. Increasing the volume *k* times,
and all the masses to the same extent—in fact, placing alongside
each other *k* identical systems at the same temperature and
pressure—will increase φ and E in the same ratio *k*; thus E must
be a homogeneous function of the first degree of the independent
variables φ, *v*, *m*_{1}, ..., *m*_{n}, and therefore by Euler’s theorem
relating to such functions

E = Tφ − *pv* + μ_{1}*m*_{1} + ... + μ_{n}*m*_{n}.
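The appeal to Euler's theorem can be checked numerically: any function homogeneous of the first degree satisfies E = φ·∂E/∂φ + *v*·∂E/∂*v* + Σ*m*_{r}·∂E/∂*m*_{r}, which with ∂E/∂φ = T, ∂E/∂*v* = −p, ∂E/∂*m*_{r} = μ_{r} yields the integral equation. A sketch, with an arbitrary degree-one function standing in for the physical E:

```python
def E(phi, v, m):
    # an arbitrary function homogeneous of degree 1 in (phi, v, m);
    # NOT a physical characteristic equation, only an illustration
    return 2.0 * phi + 3.0 * v + (m * m) / (phi + v)

def partials(f, args, h=1e-6):
    # central-difference estimates of the first partial derivatives
    grads = []
    for i in range(len(args)):
        up = list(args); up[i] += h
        dn = list(args); dn[i] -= h
        grads.append((f(*up) - f(*dn)) / (2.0 * h))
    return grads

phi, v, m = 1.5, 2.0, 0.7
dphi, dv, dm = partials(E, [phi, v, m])
euler_sum = phi * dphi + v * dv + m * dm   # equals E(phi, v, m) itself
```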

This integral equation merely expresses the additive character of the energies and entropies of adjacent portions of the system at uniform temperature, and thus depends only on the absence of sensible physical action directly across finite distances. If we form from it the expression for the complete differential δE, and subtract (1), there remains the relation

*v*δ*p* = φδT + *m*_{1}δμ_{1} + ... + *m*_{n}δμ_{n}.

This implies that in each phase the change of pressure depends on
and is determined by the changes in T, μ_{1}, ... μ_{n} alone; as we
know beforehand that a physical property like pressure is an
analytical function of the state of the system, it is therefore a
function of these *n* + 1 quantities. When they are all independently
variable, the densities of the various constituents and
of the entropy in the phase are expressed by the partial fluxions of
*p* with respect to them: thus

φ/*v* = *d*p/*d*T,  *m*_{r}/*v* = *d*p/*d*μ_{r}.

But when, as in the case above referred to of chloride of ammonium gas existing partially dissociated along with its constituents, the masses are not independent, necessary linear relations, furnished by the laws of definite combining proportions, subsist between the partial fluxions, and the form of the function which expresses p is thus restricted, in a manner which is easily expressible in each special case.

This proposition that the pressure in any phase is a function of
the temperature and of the potentials of the independent constituents,
thus appears as a consequence of Carnot’s axiom
combined with the energy principle and the absence of effective
actions at a distance. It shows that at a given temperature and
pressure the potentials are not all independent, that there is a
necessary relation connecting them. This is the *equation of state*
or constitution of the phase, whose existence forms one mode of
expression of Carnot’s principle, and in which all the properties
of the phase are involved and can thence be derived by simple
differentiation.
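For a single perfect gas the characteristic equation of this form can be written down explicitly, and the differentiations checked; a sketch in modern notation (R_s, the gas constant per unit mass, and the constant a in the logarithmic potential are illustrative assumptions):

```python
import math

R_s = 287.0   # gas constant per unit mass (air-like value, an assumption)
a = 0.0       # arbitrary constant in the potential

def pressure(T, mu):
    # from the logarithmic potential mu = a + R_s*T*log(rho) of a perfect
    # gas, together with p = rho*R_s*T, the pressure as a function of
    # temperature and potential is:
    return R_s * T * math.exp((mu - a) / (R_s * T))

T, rho = 300.0, 1.2
mu = a + R_s * T * math.log(rho)
h = 1.0
# the partial fluxion dp/dmu at constant T recovers the density m_r/v:
dp_dmu = (pressure(T, mu + h) - pressure(T, mu - h)) / (2.0 * h)
```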

*The Phase Rule*.—When the material system contains only a
single phase, the number of independent variations, in addition
to change of temperature and pressure, that can spontaneously
occur in its constitution is thus one less than the number of its
independent constituents. But where several phases coexist in
contact in the same system, the number of possible independent
variations may be much smaller. The present independent
variables μ_{1}, ..., μ_{n} are specially appropriate in this problem,
because each of them has the same value in all the phases. Now
each phase has its own characteristic equation, giving a relation
between δp, δT, and δμ_{1}, ... δμ_{n}, or such of the latter as are
independent; if *r* phases coexist, there are *r* such relations;
hence the number of possible independent variations, including
those of *v* and T, is reduced to *m* − *r* + 2, where *m* is the number
of independently variable chemical constituents which the system
contains. This number of degrees of constitutive freedom
cannot be negative; therefore the number of possible phases
that can coexist alongside each other cannot exceed *m* + 2.
If *m* + 2 phases actually coexist, there is no variable quantity in
the system, thus the temperature and pressure and constitutions
of the phases are all determined; such is the triple point at which
ice, water and vapour exist in presence of each other. If there are
*m* + 1 coexistent phases, the system can vary in one respect only;
for example, at any temperature of water-substance different
from the triple point two phases only, say liquid and vapour,
or liquid and solid, coexist, and the pressure is definite, as also are
the densities and potentials of the components. Finally, when
but one phase, say water, is present, both pressure and temperature
can vary independently. The first example illustrates the
case of systems, physical or chemical, in which there is only one
possible state of equilibrium, forming a point of transition between
different constitutions; in the second type each temperature has
its own completely determined state of equilibrium; in other
cases the constitution in the equilibrium state is indeterminate as
regards the corresponding number of degrees of freedom. By aid
of this phase rule of Gibbs the number of different chemical
substances actually interacting in a given complex system can
be determined from observation of the degree of spontaneous
variation which it exhibits; the rule thus lies at the foundation
of the modern subject of chemical equilibrium and continuous
chemical change in mixtures or alloys, and in this connexion it
has been widely applied and developed in the experimental
investigations of Roozeboom and van ’t Hoff and other physical
chemists, mainly of the Dutch school.
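The counting argument reduces to a one-line rule, here restated as a sketch with water-substance as the worked example from the text:

```python
def degrees_of_freedom(m, r):
    # m independently variable constituents, r coexistent phases; each
    # phase contributes one characteristic relation among dp, dT and the
    # dmu's, leaving m - r + 2 independent variations
    f = m - r + 2
    if f < 0:
        raise ValueError("at most m + 2 phases can coexist")
    return f

# water-substance, m = 1:
triple_point = degrees_of_freedom(1, 3)   # ice + water + vapour: no freedom
saturation = degrees_of_freedom(1, 2)     # liquid + vapour: one variation
single_phase = degrees_of_freedom(1, 1)   # water alone: p and T independent
```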

*Extent to which the Theory can be practically developed*.—It is
only in systems in which the number of independent variables is
small that the forms of the various potentials,—or the form of the
fundamental characteristic equation expressing the energy of the
system in terms of its entropy and constitution, or the pressure
in terms of the temperature and the potentials, which includes
them all,—can be readily approximated to by experimental
determinations. Even in the case of the simple system water-vapour,
which is fundamental for the theory of the steam-engine,
this has not yet been completely accomplished. The general
theory is thus largely confined, as above, to defining the restrictions
on the degree of variability of a complex chemical system
which the principle of Carnot imposes. The tracing out of these
general relations of continuity of state is much facilitated by
geometrical diagrams, such as James Thomson first introduced in
order to exhibit and explain Andrews’ results as to the range of
coexistent phases in carbonic acid. Gibbs’s earliest thermodynamic
surface had for its co-ordinates volume, entropy and
energy; it was constructed to scale by Maxwell for water-substance,
and is fully explained in later editions of the *Theory of*
*Heat* (1875); it forms a relief map which, by simple inspection,
reveals the course of the transformations of water, with the
corresponding mechanical and thermal changes, in its three
coexistent states of solid, liquid and gas. In the general case,
when the substance has more than one independently variable
constituent, there are more than three variables to be represented;
but Gibbs has shown the utility of surfaces representing,
for instance, the entropy in terms of the constitutive variables
when temperature and pressure are maintained constant. Such
graphical methods are now of fundamental importance in
connexion with the phase rule, for the experimental exploration
of the trend of the changes of constitution of complex mixtures
with interacting components, which arise as the physical conditions
are altered, as, for example in modern metallurgy, in the
theory of alloys. The study of the phenomena of condensation
in a mixture of two gases or vapours, initiated by Andrews and
developed in this manner by van der Waals and his pupils, forms
a case in point (see Condensation of Gases).

*Dilute Components: Perfect Gases and Dilute Solutions*.—There
are, however, two simple limiting cases, in which the theory
can be completed by a determination of the functions involved in
it, which throw much light on the phenomena of actual systems
not far removed from these ideal limits. They are the cases of
mixtures of perfect gases, and of very dilute solutions.

If, following Gibbs, we apply his equation (2) expressing the pressure
in terms of the temperature and the potentials, to a very dilute
solution of substances *m*_{2}, *m*_{3}, ... *m*_{n} in a solvent substance *m*_{1}, and
vary the co-ordinate *m*_{r} alone, p and T remaining unvaried, we have
in the equilibrium state

*m*_{r}*d*μ_{r}/*dm*_{r} + *m*_{1}*d*μ_{1}/*dm*_{r} + ... + *m*_{n}*d*μ_{n}/*dm*_{r} = 0,

in which every *m* except *m*_{1} is very small, while dμ_{1}/*dm*_{r} is presumably
finite. As the second term is thus finite, the equation requires that the
first term, *m*_{r}*d*μ_{r}/*dm*_{r}, shall also be finite,
say *k*_{r}, in the limit when *m*_{r} is null. Thus for very small concentrations
the potential μ_{r} of a dilute component must be of the form
*k*_{r}log *m*_{r}/*v*, being proportional to the logarithm of the density of
that component; it thus tends logarithmically to an infinite value
at evanescent concentrations, showing that removal of the last
traces of any impurity would demand infinite proportionate expenditure
of available energy, and is therefore practically impossible
with finite intensities of force. It should be noted, however, that
this argument applies only to fluid phases, for in the case of deposition
of a solid *m*_{r} is not uniformly distributed throughout the phase;
thus it remains possible for the growth of a crystal at its surface
in aqueous solution to extrude all the water except such as is in some
form of chemical combination.
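The practical impossibility asserted here can be made quantitative in a short sketch (k stands for the constant written *k*_{r} above; its unit value is an arbitrary assumption):

```python
import math

def extraction_work(rho1, rho2, k=1.0):
    # with mu = k*log(density), the available energy spent, per unit mass
    # of impurity removed, in lowering its density from rho1 to rho2 is
    # proportional to k*log(rho1/rho2)
    return k * math.log(rho1 / rho2)

# each successive halving of the residual impurity costs the same again,
# so reduction to strictly zero density would cost without limit:
first_halving = extraction_work(1.0, 0.5)
second_halving = extraction_work(0.5, 0.25)
```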

The precise value of this logarithmic expression for the potential
can be readily determined for the case of a perfect gas from its
characteristic properties, and can be thence extended to other dilute
forms of matter. We have *pv* = R/*m*·T for unit mass of the gas,
where *m* is the molecular weight, being 2 for hydrogen, and R is a
constant equal to 82 × 10^{6} in C.G.S. dynamical units, or 2 calories
approximately in thermal energy units, which is the same for all
gases because they have all the same number of molecules per unit
volume. The increment of heat received by the unit mass of the
gas is δH = pδv + κδT, κ being thus the specific heat at constant
volume, which can be a function only of the temperature. Thus

∫*d*H/T = R/*m*·log *v* + ∫κT^{−1}*d*T;

and the available energy A per unit mass is E − Tφ + Tφ_{0} where
E = ε + ∫κ*d*T, the integral being for a standard state, and ε being
intrinsic energy of chemical constitution; so that

A = ε + φ_{0}T + ∫κ*d*T − T∫κT^{−1}*d*T − R/*m*·T log *v*.
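With κ taken constant the integrals close up and the formulas for unit mass become explicit; a sketch with air-like illustrative constants (the standard state T0, v0 fixes the constants of integration, and φ_{0} is taken zero there):

```python
import math

R_over_m = 287.0     # R/m per unit mass (illustrative, air-like)
kappa = 718.0        # specific heat at constant volume, taken constant
T0, v0 = 273.0, 1.0  # assumed standard state

def entropy(T, v):
    # phi = integral of dH/T = (R/m)*log v + integral of kappa*dT/T
    return R_over_m * math.log(v / v0) + kappa * math.log(T / T0)

def available_energy(T, v, eps=0.0):
    # A = E - T*phi + T*phi0, with E = eps + kappa*(T - T0) and phi0 = 0
    # in the standard state chosen above
    return eps + kappa * (T - T0) - T * entropy(T, v)

base = available_energy(T0, v0)   # zero in the standard state
```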

If there are ν molecules in the unit mass, and N per unit volume, we
have mν = N*mv*, each being 2 ν′, where ν′ is the number of molecules
per unit mass in hydrogen; thus the free energy per molecule is
*a*′ + R′T log *b*N, where *b* = m/2ν′, R′ = R/2ν′, and *a*′ is a function of
T alone. It is customary to avoid introducing the unknown molecular
constant ν′ by working with the available energy per “gramme-molecule,”
that is, for a number of grammes expressed by the
molecular weight of the substance; this is a constant multiple of the
available energy per molecule, and is *a* + RT logρ, ρ being the density
equal to bN where *b* = m/2ν′. This formula may now be extended
by simple summation to a mixture of gases, on the ground of Dalton’s
experimental principle that each of the components behaves in
presence of the others as it would do in a vacuum. The components
are, in fact, actually separable wholly or partially in reversible ways
which may be combined into cycles, for example, either (i.) by
diffusion through a porous partition, taking account of the work of
the pressures, or (ii.) by utilizing the modified constitution towards
the top of a long column of the mixture arising from the action of
gravity, or (iii.) by reversible absorption of a single component.

If we employ in place of available energy the form of characteristic
equation which gives the pressure in terms of the temperature and
potentials, the pressure of the mixture is expressed as the sum of
those belonging to its components: this equation was made by Gibbs
the basis of his analytical theory of gas mixtures, which he tested by
its application to the only data then available, those of the equilibrium
of dissociation of nitrogen peroxide (2NO_{2} ⇆ N_{2}O_{4}) vapour.

*Van ’t Hoff’s Osmotic Principle: Theoretical Explanation*.—We
proceed to examine how far the same formulae as hold for
gases apply to the available energy of matter in solution which is
so dilute that each molecule of the dissolved substance, though
possibly the centre of a complex of molecules of the solvent, is for
nearly all the time beyond the sphere of direct influence of the
other molecules of the dissolved substance. The available
energy is a function only of the co-ordinates of the matter in bulk
and the temperature; its change on further dilution, with which
alone we are concerned in the transformations of dilute solutions,
can depend only on the further separation of these molecular
complexes in space that is thereby produced, as no one of them is
in itself altered. The change is therefore a function only of the
number N of the dissolved molecules per unit volume, and of the
temperature, and is, per molecule, expressible in a form entirely
independent of their constitution and of that of the medium in
which they are dissolved. This suggests that the expression for
the change on dilution is the same as the known one for a gas, in
which the same molecules would exist free and in the main
outside each other’s spheres of influence; which confirms and is
verified by the experimental principle of van ’t Hoff, that osmotic
pressure obeys the laws of gaseous pressure with identically the
same physical constants as those of gases. It can be held, in fact,
that this suggestion does not fall short of a demonstration, on the
basis of Carnot’s principle, and independent of special molecular
theory, that in all cases where the molecules of a component,
whether it be of a gas or of a solution, are outside each other’s
spheres of influence, the available energy, so far as regards
dilution, must have a common form, and the physical constants
must therefore be the known gas-constants. The customary
exposition derives this principle, by an argument involving
cycles, from Henry’s law of solution of gases; it is sensibly
restricted to such solutes as appear concomitantly in the free
gaseous state, but theoretically it becomes general when it is
remembered that no solute can be absolutely non-volatile.
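Van 't Hoff's principle in figures (modern units throughout; the strength of the solution is an arbitrary example):

```python
R = 8.314  # universal gas constant, J/(mol K)

def osmotic_pressure(moles, volume_m3, T):
    # identical in form, and in physical constants, with the gas law
    return (moles / volume_m3) * R * T

# 0.1 mol of a dissolved substance in a litre of solvent at 300 K:
p = osmotic_pressure(0.1, 1.0e-3, 300.0)   # pascals, about 2.5 atmospheres
```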

*Source of the Idea of Temperature*.—The single new element
that thermodynamics introduces into the ordinary dynamical
specification of a material system is temperature. This conception
is akin to that of potential, except that it is given to us
directly by our sense of heat. But if that were not so, we could
still demonstrate, on the basis of Carnot’s principle, that there is a
definite function of the state of a body which must be the same
for all of a series of connected bodies, when thermal equilibrium
has become established so that there is no tendency for heat to
flow from one to another. For we can by mere geometrical
displacement change the order of the bodies so as to bring
different ones into direct contact. If this disturbed the thermal
equilibrium, we could construct cyclic processes to take advantage
of the resulting flow of heat to do mechanical work, and such
processes might be carried on without limit. Thus it is proved
that if a body A is in temperature-equilibrium with B, and B
with C, then A must be in equilibrium with C directly. This
argument can be applied, by aid of adiabatic partitions, even
when the bodies are in a field of force so that mechanical work is
required to change their geometrical arrangement; it was in
fact employed by Maxwell to extend from the case of a gas to that
of any other system the proposition that the temperature is the
same all along a vertical column in equilibrium under gravity.

It had been shown from the kinetic theory by Maxwell that in a gas-column the mean kinetic energy of the molecules is the same at all heights. If the only test of equality of temperature consisted in bringing the bodies into contact, this would be rather a proof that thermal temperature is of the same physical nature in all parts of the field of force; but temperature can also be equalized across a distance by radiation, so that this law for gases is itself already necessitated by Carnot’s general principle, and merely confirmed or verified by the special gas-theory. But without introducing into the argument the existence of radiation, the uniformity of temperature throughout all phases in equilibrium is necessitated by the doctrine of energetics alone, as otherwise, for example, the raising of a quantity of gas to the top of the gravitational column in an adiabatic enclosure together with the lowering of an equal mass to the bottom would be a source of power, capable of unlimited repetition.

*Laws of Chemical Equilibrium based on Available Energy*.—The
complete theory of chemical and physical equilibrium in
gaseous mixtures and in very dilute solutions may readily be
developed in terms of available energy (cf. *Phil. Trans*., 1897,
A, pp. 266-280), which forms perhaps the most vivid and most
direct procedure. The available energy per molecule of any kind,
in a mixture of perfect gases in which there are N molecules of
that kind per unit volume, has been found to be *a*′ + R′T log *b*N
where R′ is the universal physical constant connected with R
above. This expression represents the marginal increase of
available energy due to the introduction of one more molecule
of that kind into the system as actually constituted. The same
formula also applies, by what has already been stated, to substances
in dilute solution in any given solvent. In any isolated
system in a mobile state of reaction or of internal dissociation,
the condition of chemical equilibrium is that the available energy
at constant temperature is a minimum, therefore that it is
stationary, and slight change arising from fresh reaction would
not sensibly alter it. Suppose that this reaction, per molecule
affected by it, is equivalent to introducing *n*_{1} molecules of type
N_{1}, *n*_{2} of type N_{2}, &c., into the system, *n*_{1}, *n*_{2}, ... being the
numbers of molecules of the different types that take part in the
reaction, as shown by its chemical equation, reckoned positive
when they appear, negative when they disappear. Then in the
state of equilibrium

*n*_{1}(*a*′_{1} + R′T log *b*_{1}N_{1}) + *n*_{2}(*a*′_{2} + R′T log *b*_{2}N_{2}) + ...

must vanish. Therefore N_{1}^{*n*_{1}}N_{2}^{*n*_{2}} ... must be equal to K, a
function of the temperature alone. This law, originally based
by Guldberg and Waage on direct statistics of molecular interaction,
expresses for each temperature the relation connecting the
densities of the interacting substances, in dilution comparable as
regards density with the perfect gaseous state, when the reaction
has come to the state of mobile equilibrium.
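The condition can be verified mechanically for the dissociation mentioned above; a sketch (the concentrations and the value of K are invented illustrative numbers):

```python
def mass_action_product(concentrations, exponents):
    # Prod(N_i ** n_i), n_i positive for molecules that appear in the
    # reaction and negative for those that disappear
    prod = 1.0
    for N, n in zip(concentrations, exponents):
        prod *= N ** n
    return prod

# dissociation N2O4 -> 2 NO2: n = +2 for NO2, n = -1 for N2O4;
# equilibrium requires N_NO2**2 / N_N2O4 = K, a function of T alone
K = 4.0
residual = mass_action_product([0.2, 0.01], [2, -1]) - K
```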

All properties of any system, including the heat of reaction,
are expressible in terms of its available energy A, equal to
E − Tφ + φ_{0}T. Thus as the constitution of the system changes
with the temperature, we have

*d*A/*d*T = *d*E/*d*T − T*d*φ/*d*T − (φ − φ_{0}),

where δE = δH + δW and Tδφ = δH, δH being heat and δW mechanical and chemical energy imparted to the system at constant temperature; hence

*d*(A − W)/*d*T = −(φ − φ_{0}), so that A = E + T *d*(A − W)/*d*T,

which is equivalent to

E − W = −T^{2} (*d*/*d*T) {(A − W)/T}.

This general formula, applied differentially, expresses the heat δE − δW absorbed by a reaction in terms of δA, the change produced by it in the available energy of the system, and of δW, the mechanical and electrical work done on the system during its progress.

In the problem of reaction in gaseous systems or in very dilute solution, the change of available energy per molecule of reaction has just been found to be

δA_{0} + R′T log K′, where K′ = *b*_{1}^{*n*_{1}}*b*_{2}^{*n*_{2}} ... K;

thus, when the reaction is spontaneous without requiring external work, the heat absorbed per molecule of reaction is

−T^{2} (*d*/*d*T) (δA_{0}/T), or R′T^{2} (*d*/*d*T) log K.

This formula has been utilized by van ’t Hoff to determine, in
terms of the heat of reaction, the displacement of equilibrium in
various systems arising from change of temperature; for K, equal
to N_{1}^{*n*_{1}}N_{2}^{*n*_{2}} ..., is the reaction-parameter through which alone the
temperature affects the law of chemical equilibrium in dilute
systems.
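Integrated over a temperature interval, on the assumption that the heat of reaction is sensibly constant between the two temperatures, van 't Hoff's displacement law takes a closed form; a sketch in modern notation with invented figures:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def shifted_K(K1, T1, T2, Q):
    # from d(log K)/dT = Q/(R*T**2), Q being the heat absorbed per
    # gramme-molecule of reaction, taken constant between T1 and T2:
    #     log(K2/K1) = (Q/R) * (1/T1 - 1/T2)
    return K1 * math.exp((Q / R) * (1.0 / T1 - 1.0 / T2))

# a reaction absorbing heat (Q > 0) has its equilibrium displaced
# towards the products on heating:
K2 = shifted_K(1.0, 300.0, 320.0, 50000.0)
```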

*Interfacial Phenomena: Liquid Films*.—The characteristic
equation hitherto developed refers to the state of an element of
mass in the interior of a homogeneous substance: it does not
apply to matter in the neighbourhood of the transition between
two adjacent phases. A remarkable analysis has been developed
by J. W. Gibbs in which the present methods concerning matter
in bulk are extended to the phenomena at such an interface,
without the introduction of any molecular theory; it forms
the thermodynamic completion of Gauss’s mechanical theory of
capillarity, based on the early form of the principle of total
energy. The validity of the fundamental doctrine of available
energy, so far as regards all mechanical actions in bulk such as
surface tensions, is postulated, even when applied to interfacial
layers so thin as to be beyond our means of measurement; the
argument from perpetual motions being available here also, as
soon as we have experimentally ascertained that the said tensions
are definite physical properties of the state of the interface and
not merely accidental phenomena. The procedure will then
consist in assuming a definite excess of energy, of entropy, and
of the masses of the various components, each per unit surface,
at the interface, the potential of each component being of
necessity, in equilibrium, the same as it is in the adjacent masses.
The interfacial transition layer thus provides in a sense a new
surface-phase coexistent with those on each side of it, and having
its own characteristic equation. It is only the extent of the
interface and not its curvatures that need enter into this relation,
because any slight influence of the latter can be eliminated from
the equation by slightly displacing the position of the surface
which is taken to represent the interface geometrically. By an
argument similar to one given above, it is shown that one of the
forms of the characteristic equation is a relation expressing the
surface tension as a function of the temperature and the potentials
of the various components present on the two sides of the
interface; and from the differentiation of this the surface
densities of the superficial distributions of these components
(as above defined) can be obtained. The conditions that a
specified new phase may become developed when two other
given ones are brought into contact, *i.e.* that a chemical reaction
may start at the interface, are thence formally expressed in
terms of the surface tensions of the three transition layers and the
pressures in the three phases. In the case of a thin soap-film,
sudden extension of any part reduces the interfacial density of
each component at each surface of the film, and so alters the
surface tension, which requires time to recover by the very slow
diffusion of dissolved material from other parts of the thin film;
the system being stable, this change must be an increase of
tension, and constitutes a species of elasticity in the film. Thus
in a vertical film the surface tension must be greater in the
higher parts, as they have to sustain the weight of the lower parts;
the upper parts, in fact, stretch until the superficial densities of
the components there situated are reduced to the amounts that correspond to the tension required for this purpose. Such a film
could not therefore consist of pure water. But there is a limit to
these processes: if the film becomes so thin that there is no water
in bulk between its surfaces, the tensions cannot adjust themselves
in this slow way by migration of components from one part
of the film to another; if the film can survive at all after it has
become of molecular thickness, it must be as a definite molecular
structure all across its thickness. Of such type are the black
spots that break out in soap-films (suggested by Gibbs and proved
by the measures of Reinold and Rücker): the spots increase in
size because their tension is less than that of the surrounding
film, but their indefinite increase is presumably stopped in
practice by some clogging or viscous agency at their boundary.

*Transition to Molecular Theory*.—The subject of energetics,
based on the doctrine of available energy, deals with matter in
bulk and is not concerned with its molecular constitution, which
it is expressly designed to eliminate from the problem. This
analysis of the phenomena of surface tension shows how far the
principle of negation of perpetual motions can carry us, into
regions which at first sight might be classed as molecular. But,
as in other cases, it is limited to pointing out the general scheme
of relations within which the phenomena can have their play.
There is now a considerable body of knowledge correlating
surface tension with chemical constitution, especially to a
certain extent with the numerical density of the distribution
of molecules; thus R. Eötvös has shown that a law of proportionality
exists for wide classes of substances between the temperature-gradient
of the surface tension and the density of the molecules
over the surface layer, which varies as the two-thirds
power of the number per unit volume (see Chemistry: *Physical*).
This takes us into the sphere of molecular science, where at
present we have only such indications largely derived from
experiment, if we except the mere notion of inter-atomic forces of
unknown character on which the older theories of capillarity,
those of Laplace and Poisson, were constructed.

In other topics the same restrictions on the scope of the simple
statical theory of energy appear. From the ascertained behaviour
in certain respects of gaseous media we are able to construct
their characteristic equation, and correlate their remaining
relations by means of its consequences. Part of the experimental
knowledge required for this purpose is the values of the gas-constants,
which prove to be the same for all nearly perfect gases.
The doctrine of energetics by itself can give no clue as to why this
should be so; it can only construct a scheme for each simple
or complex medium on the basis of its own experimentally
determined characteristic equation. The explanation of uniformities
in the intrinsic constitutions of various media belongs
to molecular theory, which is a distinct and in the main more
complex and more speculative department of knowledge. When
we proceed further and find, with van ’t Hoff, that these same
universal gas-constants reappear in the relations of very dilute
solutions, our demand for an explanation such as can only be
provided by molecular theory (as *supra*) is intensely stimulated.
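
The reappearance of the gas constant can be exhibited in a line (modern notation, supplied here as a gloss: $p$ is the pressure, $V$ the volume, $n$ the number of gram-molecules, and $\Pi$ van ’t Hoff’s osmotic pressure of the dissolved substance):

```latex
\[
  pV = nRT \quad\text{(nearly perfect gas)},
  \qquad
  \Pi V = nRT \quad\text{(very dilute solution)},
\]
```

with the same constant $R$ in both; energetics must accept this identity as an experimental datum, while molecular theory explains it by counting molecules.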
But except in respects such as these the doctrine of energetics
gives a complete synthesis of the course and relations of the
chemical reactions of matter in bulk, from which we can eliminate
atomism altogether by restating the merely numerical atomic
theory of Dalton as a principle of equivalent combining proportions.
Of recent years there has been a considerable school of
chemists who insist on this procedure as a purification of their
science from the hypothetical ideas as to atoms and molecules,
in terms of which its experimental facts have come to be expressed.
A complete system of doctrine can be developed in this manner,
but its scope will be limited. It makes use of one principle
of correlation, the doctrine of available energy, and discards
another such principle, the atomic theory. Nor can it be said
that the one principle is really more certain and definite than the
other. This may be illustrated by what has sometimes by
German writers been called Gibbs’s paradox: the energy that is
available for mechanical effect in the inter-diffusion of given
volumes of two gases depends only on these volumes and their
pressures, and is independent of what the gases are; if the gases
differed only infinitesimally in constitution it would still be the
same, and the question arises where we are to stop, for we cannot
suppose the inter-diffusion of two identical gases to be a source of
power. This then looks like a real failure, or rather limitation, of
the principle; and there are other such, that can only be satisfactorily
explained by aid of the complementary doctrine of
molecular theory. That theory, in fact, shows that the more
nearly identical the gases are, the slower will be the process of
inter-diffusion, so that the mechanical energy will indeed be
available, but only after a time that becomes indefinitely prolonged.
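
The quantitative content of the paradox can be put in a few lines (a modern sketch, not in the article; the standard ideal-gas formula for the maximum work of isothermal inter-diffusion is assumed):

```python
import math

def mixing_work(p, T, V1, V2):
    """Maximum work obtainable from the isothermal inter-diffusion of two
    *different* ideal gases, initially at a common pressure p (Pa) and
    temperature T (K) in volumes V1 and V2 (m^3).

    W = R*T*(n1*ln(V/V1) + n2*ln(V/V2)) with n_i = p*V_i/(R*T); both R
    and T cancel, leaving p*(V1*ln(V/V1) + V2*ln(V/V2)).  Nothing in the
    formula identifies which gases are diffusing: that is the paradox.
    """
    V = V1 + V2
    return p * (V1 * math.log(V / V1) + V2 * math.log(V / V2))

# Equal litres of any two distinct gases at atmospheric pressure:
W = mixing_work(101325.0, 298.0, 1e-3, 1e-3)   # about 140 J, whatever the gases
```

The formula offers no way to "turn off" the work as the two gases approach identity; only molecular theory, through the indefinitely slowing diffusion, resolves the discontinuity.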
It is a case in which the simple doctrine of energetics
becomes inadequate before the limit is reached. The phenomena
of highly rarefied gases provide other cases. And in fact the only
reason hitherto thought of for the invariable tendency of available
energy to diminish, is that it represents the general principle that
in the kinetic play of a vast assemblage of independent molecules
individually beyond our control, the normal tendency is for
the regularities to diminish and the motions to become less
correlated: short of some such reason, it is an unexplained
empirical principle. In the special departments of dynamical
physics on the other hand, the molecular theory, there dynamical
and therefore much more difficult and less definite, is an indispensable
part of the framework of science; and even experimental
chemistry now leans more and more on new physical methods
and instruments. Without molecular theory the clue which has
developed into spectrum analysis, bringing with it stellar
chemistry and a new physical astronomy, would not have been
available; nor would the laws of diffusion and conduction in
gases have attained more than an empirical form; nor would it
have been possible to weave the phenomena of electrodynamics
and radiation into an entirely rational theory.

The doctrine of available energy, as the expression of thermodynamic
theory, is directly implied in Carnot’s *Réflexions sur la puissance motrice du feu* of 1824, and
constitutes, in fact, its main theme; it took a fresh start, in the
light of fuller experimental knowledge regarding the nature of
heat, in the early memoirs of Rankine and Lord Kelvin, which
may be found in their Collected Scientific Papers; a subsequent
exposition occurs in Maxwell’s *Theory of Heat*; its most familiar
form of statement is Lord Kelvin’s principle of the dissipation of
available energy. Its principles were very early applied by James
Thomson to a physico-chemical problem, that of the influence of
stress on the growth of crystals in their mother liquor. The
“thermodynamic function” introduced by Rankine into its
development is the same as the “entropy” of the material
system, independently defined by Clausius about the same time.
Clausius’s form of the principle, that in an adiabatic system the
entropy tends continually to increase, has been placed by Professor
Willard Gibbs, of Yale University, at the foundation of his
magnificent but complex and difficult development of the theory.
His monumental memoir “On the Equilibrium of Heterogeneous
Substances,” first published in *Trans. Connecticut Academy*
(1876–1878), made a clean sweep of the subject; and workers
in the modern experimental science of physical chemistry
have returned to it again and again to find their empirical
principles forecasted in the light of pure theory, and to derive
fresh inspiration for new departures. As specially preparatory to
Gibbs’s general discussion may be mentioned Lord Rayleigh’s
memoir on the thermodynamics of gaseous diffusion (*Phil. Mag.*,
1876), which was expounded by Maxwell in the 9th edition of the
*Ency. Brit*. (art. Diffusion). The fundamental importance of
the doctrine of dissipation of energy for the theory of chemical
reaction had already been insisted on in general terms by
Rayleigh; subsequent to, but independently of, Gibbs’s work it
had been elaborated by von Helmholtz (*Gesamm. Abhandl*. ii. and
iii.) in connexion with the thermodynamics of voltaic cells, and
more particularly in the calculation of the free or available
energy of solutions from data of vapour-pressure, with a view to
the application to the theory of concentration cells, therein also
coming close to the doctrine of osmotic pressure. This form of
the general theory has here been traced back substantially to
Lord Kelvin under date 1855. Expositions and developments on
various lines will be found in papers by Riecke and by Planck in *Annalen der Physik* between 1890 and 1900, in the course of a
memoir by Larmor, *Phil. Trans.*, 1897, A, in Voigt’s *Compendium*
*der Physik* and his more recent *Thermodynamik*, in Planck’s
*Vorlesungen über Thermodynamik*, in Duhem’s elaborate *Traité*
*de mécanique chimique* and *Le Potentiel thermodynamique*, in
Whetham’s *Theory of Solution* and in Bryan’s *Thermodynamics*.
Numerous applications to special problems are expounded in
van ’t Hoff’s *Lectures on Theoretical and Physical Chemistry*.

The theory of energetics, which puts a diminishing limit on the
amount of energy available for mechanical purposes, is closely
implicated in the discovery of natural radioactive substances by
H. Becquerel, and their isolation in the very potent form of
radium salts by M. and Mme Curie. The slow degradation of
radium has been found by the latter to be concomitant with an
evolution of heat, in amount enormous compared with other
chemical changes. This heat has been shown by E. Rutherford
to be about what must be due to the stoppage of the α and β
particles, which are emitted from the substance with velocities
almost of the same scale as that of light. If they struck an ideal
rigid target, their lost kinetic energy must all be sent away as
radiation; but when they become entangled among the molecules
of actual matter, it will, to a large extent, be shared among them
as heat, with availability reduced accordingly. In any case the
particles that escape into the surrounding space are so few and
their velocity so uniform that we can, to some extent, treat their
energy as directly available mechanically, in contradistinction
to the energy of individual molecules of a gas (cf. Maxwell’s
“demons”), *e.g.* for driving a vane, as in Crookes’s experiment
with the cathode rays. Indeed, on account of the high velocity
of projection of the particles from a radium salt, the actions
concerned would find their equilibrium at such enormously high
temperatures that any influence of actually available differences
of temperature is not sensibly a feature of the phenomena.
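
The scale of energy carried by a single ejected particle can be estimated from the figures implied above (the numerical values used are modern assumptions, not figures given in the article):

```python
# Rough kinetic energy of one alpha particle at a typical ejection speed.
# The numerical values are modern assumptions, not figures from the article.
M_ALPHA = 6.64e-27   # mass of an alpha particle, kg
V_ALPHA = 1.5e7      # ejection speed, m/s (about one-twentieth that of light)
EV = 1.602e-19       # joules per electron-volt

ke_joules = 0.5 * M_ALPHA * V_ALPHA ** 2   # classical formula; v << c, so adequate
ke_mev = ke_joules / EV / 1e6              # the same energy in millions of electron-volts
```

At between four and five million electron-volts per particle, this is roughly a million times the energy turned over per molecule in ordinary chemical change, which accords with the "enormous" evolution of heat described above.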
Such actions, however, like explosive actions in general, are
beyond our powers of actual *direct* measurement as regards the
degradation of availability of the energy. It has been pointed
out by Rutherford, R. J. Strutt and others, that the energy of
degradation of even a very minute admixture of active radium
would entirely dominate and mask all other cosmical modes of
transformation of energy; for example, it far outweighs that
arising from the exhaustion of gravitational energy, which has
been shown by Helmholtz and Kelvin to be an ample source for
all the activities of our cosmical system, and to be itself far greater
than the energy of any ordinary chemical rearrangements consequent
on a fall of temperature: a circumstance that makes
the existence and properties of this substance under settled
cosmic conditions still more anomalous (see Radioactivity).
Theoretically it is possible to obtain unlimited concentration of
availability of energy at the expense of an equivalent amount of
degradation spread over a wider field; the potency of electric
furnaces, which have recently opened up a new department of
chemistry, and are limited only by the refractoriness of the
materials of which they are constituted, forms a case in point.
In radium we have the very remarkable phenomenon of far higher
concentration occurring naturally in very minute permanent
amounts, so that merely chemical sifting is needed to produce its
aggregation. Even in pitchblende only one molecule in 10⁹
seems to be of radium, renewable, however, when lost, by internal
transformation.

The energetics of Radiation is treated under that heading. See also Thermodynamics. (J. L.*)