# Eight Lectures on Theoretical Physics/IV

Fourth Lecture.

The Equation of State for a Monatomic Gas.

My problem today is to utilize the general fundamental laws concerning the concept of irreversibility, which we established in the lecture of yesterday, in the solution of a definite problem: the calculation of the entropy of an ideal monatomic gas in a given state, and the derivation of all its thermodynamic properties. The way in which we have to proceed is prescribed for us by the general definition of entropy:

${\displaystyle (13)\qquad S=k\log W.}$

The chief part of our problem is the calculation of ${\displaystyle W}$ for a given state of the gas, and in this connection there is first required a more precise investigation of what is to be understood as the state of the gas. Obviously, the state is to be taken here solely in the sense of the conception which we called macroscopic in the last lecture; otherwise, a state would possess neither probability nor entropy. Furthermore, we are not allowed to assume a condition of equilibrium for the gas, for equilibrium is characterized by the further special condition that the entropy for it is a maximum. Thus, an unequal distribution of density may exist in the gas; there may also be present an arbitrary number of different currents, and in general no kind of equality between the various velocities of the molecules is to be assumed. The velocities, like the coordinates of the molecules, are rather to be taken a priori as quite arbitrarily given; but in order that the state, considered in a macroscopic sense, may be assumed as known, certain mean values of the densities and the velocities must exist. Through these mean values the state is, from a macroscopic standpoint, completely characterized.

The conditions mentioned will all be fulfilled if we consider the state as given in the following manner: the number of molecules in each macroscopically small space which nevertheless contains a very large number of molecules is given, and furthermore, the (likewise large) number of these molecules whose velocities lie within a certain macroscopically small velocity domain, i. e., within certain small intervals, is given. If we call the coordinates ${\displaystyle x}$, ${\displaystyle y}$, ${\displaystyle z}$, and the velocity components ${\displaystyle {\dot {x}}}$, ${\displaystyle {\dot {y}}}$, ${\displaystyle {\dot {z}}}$, then this number will be proportional to[1]

${\displaystyle dx\cdot dy\cdot dz\cdot d{\dot {x}}\cdot d{\dot {y}}\cdot d{\dot {z}}=\sigma .}$

It will depend, besides, upon a finite factor of proportionality which may be an arbitrarily given function ${\displaystyle f(x,y,z,{\dot {x}},{\dot {y}},{\dot {z}})}$ of the coordinates and the velocities, and which has only the one condition to fulfill that

${\displaystyle (14)\qquad \sum f\cdot \sigma =N,}$

where ${\displaystyle N}$ denotes the total number of molecules in the gas. We are now concerned with the calculation of the probability ${\displaystyle W}$ of that state of the gas which corresponds to the arbitrarily given distribution function ${\displaystyle f}$.

The probability that a given molecule possesses such coordinates and such velocities that it lies within the domain ${\displaystyle \sigma }$ is expressed, in accordance with the final result of the previous lecture, by the magnitude of the corresponding elementary domain:

${\displaystyle d\varphi _{1}\cdot d\varphi _{2}\cdot d\varphi _{3}\cdot d\psi _{1}\cdot d\psi _{2}\cdot d\psi _{3},}$

therefore, since here

${\displaystyle \varphi _{1}=x,\quad \varphi _{2}=y,\quad \varphi _{3}=z,\quad \psi _{1}=m{\dot {x}},\quad \psi _{2}=m{\dot {y}},\quad \psi _{3}=m{\dot {z}},}$

(${\displaystyle m}$ the mass of a molecule) by

${\displaystyle m^{3}\sigma .}$

Now we divide the whole of the six dimensional “state domain” containing all the molecules into suitable equal elementary domains of the magnitude ${\displaystyle m^{3}\sigma }$. Then the probability that a given molecule fall in a given elementary domain is equally great for all such domains. Let ${\displaystyle P}$ denote the number of these equal elementary domains. Next, let us imagine as many dice as there are molecules present, i. e., ${\displaystyle N}$, and each die to be provided with ${\displaystyle P}$ equal sides. Upon these ${\displaystyle P}$ sides we imagine the numbers ${\displaystyle 1}$, ${\displaystyle 2}$, ${\displaystyle 3}$, ${\displaystyle \cdots }$ to ${\displaystyle P}$ written, so that each of the ${\displaystyle P}$ sides indicates a given elementary domain. Then each throw with the ${\displaystyle N}$ dice corresponds to a given state of the gas, while the number of dice which show a given number corresponds to the number of molecules which lie in the elementary domain considered. In accordance with this, each single die can indicate with the same probability each of the numbers from ${\displaystyle 1}$ to ${\displaystyle P}$, corresponding to the circumstance that each molecule may fall with equal probability in any one of the ${\displaystyle P}$ elementary domains. The probability ${\displaystyle W}$ sought, of the given state of the molecules, corresponds, therefore, to the number of different kinds of throws (complexions) through which is realized the given distribution ${\displaystyle f}$. Let us take, e. g., ${\displaystyle N}$ equal to ${\displaystyle 10}$ molecules (dice) and ${\displaystyle P=6}$ elementary domains (sides) and let us imagine the state so given that there are

 3 molecules in the 1st elementary domain,
 4 molecules in the 2d elementary domain,
 0 molecules in the 3d elementary domain,
 1 molecule in the 4th elementary domain,
 0 molecules in the 5th elementary domain,
 2 molecules in the 6th elementary domain,

then this state, e. g., may be realized through a throw for which the 10 dice indicate the following numbers:

${\displaystyle (15)}$

| Die | 1st | 2d | 3d | 4th | 5th | 6th | 7th | 8th | 9th | 10th |
|---|---|---|---|---|---|---|---|---|---|---|
| Number thrown | 2 | 6 | 2 | 1 | 1 | 2 | 6 | 2 | 1 | 4 |

Under each of the characters representing the ten dice stands the number which the die indicates in the throw. In fact,

 3 dice show the figure 1,
 4 dice show the figure 2,
 0 dice show the figure 3,
 1 die shows the figure 4,
 0 dice show the figure 5,
 2 dice show the figure 6.

The state in question may likewise be realized through many other complexions of this kind. The number sought of all possible complexions is now found through consideration of the number series indicated in ${\displaystyle (15)}$. For, since the number of molecules (dice) is given, the number series contains a fixed number of elements (${\displaystyle 10=N}$). Furthermore, since the number of molecules falling in an elementary domain is given, each number appears equally often in the series in all permissible complexions. Finally, each rearrangement of the numbers yields a new complexion. The number of possible complexions, i. e., the probability ${\displaystyle W}$ of the given state, is therefore equal to the number of possible permutations with repetition under the conditions mentioned. In the simple example chosen, in accordance with a well known formula, the probability is

${\displaystyle {\frac {10!}{3!\;4!\;0!\;1!\;0!\;2!}}=12{,}600.}$

Therefore, in the general case:

${\displaystyle W={\frac {N!}{\prod (f\cdot \sigma )!}}.}$

The sign ${\displaystyle \prod }$ denotes the product extended over all of the ${\displaystyle P}$ elementary domains.
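The counting rule just stated is easy to check numerically. The following sketch (plain Python; the variable names are of course illustrative, not from the text) computes the number of complexions for the ten-dice example by the general formula:

```python
from math import factorial

# Number of complexions W = N! / prod((f*sigma)!) for the ten-dice example.
occupations = [3, 4, 0, 1, 0, 2]   # molecules in each of the P = 6 domains
N = sum(occupations)               # total number of molecules (dice)

denominator = 1
for n in occupations:
    denominator *= factorial(n)

W = factorial(N) // denominator    # 12600, the figure computed in the text
```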

From this there results, in accordance with equation ${\displaystyle (13)}$, for the entropy of the gas in the given state:

${\displaystyle S=k\log N!-k\sum \log(f\cdot \sigma )!.}$

The summation is to be extended over all domains ${\displaystyle \sigma }$. Since ${\displaystyle f\cdot \sigma }$ is a large quantity, Stirling's formula may be employed for its factorial, which for a large number ${\displaystyle n}$ is expressed by:

${\displaystyle (16)\qquad n!=\left({\frac {n}{e}}\right)^{n}{\sqrt {2\pi n}},}$

therefore, neglecting unimportant terms:

${\displaystyle \log n!=n(\log n-1);}$

and hence:

${\displaystyle S=k\log N!-k\sum f\sigma (\log[f\cdot \sigma ]-1),}$

or, if we note that ${\displaystyle \sigma }$ and ${\displaystyle N={\textstyle \sum }f\sigma }$ remain constant in all changes of state:

${\displaystyle (17)\qquad S={\text{const}}-k\sum f\cdot \log f\cdot \sigma .}$

This quantity is, up to the universal factor ${\displaystyle (-k)}$, the same as that which L. Boltzmann denoted by ${\displaystyle H}$, and which he showed to vary in one direction only for all changes of state.
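The Stirling step used above is easy to test numerically. In the sketch below (Python; `math.lgamma(n + 1)` stands in for ${\displaystyle \log n!}$), the relative error of the truncated form ${\displaystyle n(\log n-1)}$ is seen to become negligible as ${\displaystyle n}$ grows, which is why the neglected terms are unimportant for molecular numbers:

```python
from math import lgamma, log

# Compare log(n!) with the truncated Stirling form n(log n - 1).
def stirling_error(n):
    exact = lgamma(n + 1)          # log(n!)
    approx = n * (log(n) - 1)      # the form used in the lecture
    return abs(exact - approx) / exact

errors = {n: stirling_error(n) for n in (10, 1000, 10**6)}
# The neglected terms matter less and less as n grows.
```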

In particular, we will now determine the entropy of a gas in a state of equilibrium, and inquire first as to that form of the law of distribution which corresponds to thermodynamic equilibrium. In accordance with the second law of thermodynamics, a state of equilibrium is characterized by the condition that with given values of the total volume ${\displaystyle V}$ and the total energy ${\displaystyle E}$, the entropy ${\displaystyle S}$ assumes its maximum value. If we assume the total volume of the gas

${\displaystyle V=\int dx\cdot dy\cdot dz,}$

and the total energy

${\displaystyle (18)\qquad E={\frac {m}{2}}\sum ({\dot {x}}^{2}+{\dot {y}}^{2}+{\dot {z}}^{2})f\sigma }$

as given, then the condition:

${\displaystyle \delta S=0}$

must hold for the state of equilibrium, or, in accordance with ${\displaystyle (17)}$:

${\displaystyle (19)\qquad \sum (\log f+1)\cdot \delta f\cdot \sigma =0,}$

wherein the variation ${\displaystyle \delta f}$ refers to an arbitrary change in the law of distribution, compatible with the given values of ${\displaystyle N}$, ${\displaystyle V}$ and ${\displaystyle E}$.

Now we have, on account of the constancy of the total number of molecules ${\displaystyle N}$, in accordance with ${\displaystyle (14)}$:

${\displaystyle \sum \delta f\cdot \sigma =0}$

and, on account of the constancy of the total energy, in accordance with ${\displaystyle (18)}$:

${\displaystyle \sum ({\dot {x}}^{2}+{\dot {y}}^{2}+{\dot {z}}^{2})\cdot \delta f\cdot \sigma =0.}$

Consequently, for the fulfillment of condition ${\displaystyle (19)}$ for all permissible values of ${\displaystyle \delta f}$, it is necessary and sufficient that

${\displaystyle \log f+\beta ({\dot {x}}^{2}+{\dot {y}}^{2}+{\dot {z}}^{2})={\text{const}},}$

or:

${\displaystyle f=\alpha e^{-\beta ({\dot {x}}^{2}+{\dot {y}}^{2}+{\dot {z}}^{2})},}$

wherein ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ are constants. In the state of equilibrium, therefore, the space distribution of molecules is uniform, i. e., independent of ${\displaystyle x}$, ${\displaystyle y}$, ${\displaystyle z}$, and the distribution of velocities is the well known Maxwellian distribution.
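That the exponential form in fact maximizes the entropy under the constraints can be illustrated with a small discrete analogue. The sketch below is a toy check, not from the lecture: a distribution of the assumed form over four equally spaced energy levels is perturbed without changing the totals corresponding to ${\displaystyle N}$ and ${\displaystyle E}$, and its entropy exceeds that of the perturbed distribution:

```python
from math import exp, log

# Toy discrete check that f = alpha * exp(-beta * e) maximizes -sum f log f
# among distributions with the same normalization and the same mean energy.
energies = [0.0, 1.0, 2.0, 3.0]     # four equally spaced energy levels
beta = 1.0
q = [exp(-beta * e) for e in energies]
Z = sum(q)
q = [x / Z for x in q]              # normalized exponential distribution
E_mean = sum(e * x for e, x in zip(energies, q))

def entropy(p):
    return -sum(x * log(x) for x in p if x > 0)

# Perturb q without changing sum(p) or sum(e*p): because the levels are
# equally spaced, moving d from level 1 to levels 0 and 2 (d/2 each)
# preserves both constraints.
d = 0.05
p = [q[0] + d / 2, q[1] - d, q[2] + d / 2, q[3]]

S_q, S_p = entropy(q), entropy(p)   # S_q exceeds S_p: the exponential wins
```

The strict concavity of ${\displaystyle -\sum f\log f}$ guarantees that any such feasible perturbation lowers the entropy.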

The values of the constants ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ are to be found from those of ${\displaystyle N}$, ${\displaystyle V}$ and ${\displaystyle E}$. For the substitution of the value found for ${\displaystyle f}$ in ${\displaystyle (14)}$ leads to:

${\displaystyle N=V\alpha \left({\frac {\pi }{\beta }}\right)^{\tfrac {3}{2}},}$

and the substitution of ${\displaystyle f}$ in ${\displaystyle (18)}$ leads to:

${\displaystyle E={\tfrac {3}{4}}Vm{\frac {\alpha }{\beta }}\left({\frac {\pi }{\beta }}\right)^{\tfrac {3}{2}}.}$

From these equations it follows that:

${\displaystyle \alpha ={\frac {N}{V}}\cdot \left({\frac {3mN}{4\pi E}}\right)^{\tfrac {3}{2}},\quad \beta ={\frac {3mN}{4E}},}$

and hence finally, in accordance with ${\displaystyle (17)}$, the expression for the entropy ${\displaystyle S}$ of the gas in a state of equilibrium with given values for ${\displaystyle N}$, ${\displaystyle V}$ and ${\displaystyle E}$ is:

${\displaystyle (20)\qquad S={\text{const}}+kN({\tfrac {3}{2}}\log E+\log V).}$

The additive constant contains terms in ${\displaystyle N}$ and ${\displaystyle m}$, but not in ${\displaystyle E}$ and ${\displaystyle V}$.
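The formulas for ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ can be verified by substituting them back into the expressions for ${\displaystyle N}$ and ${\displaystyle E}$. The numerical values below are arbitrary illustrations in CGS units, not data from the text:

```python
from math import pi

# Consistency check: alpha and beta computed from N, V, E, m must reproduce
# N and E when substituted back into the two defining equations.
N = 1.0e20        # number of molecules (illustrative)
V = 1.0e3         # volume, cm^3
m = 6.6e-23       # molecular mass, g (roughly an argon atom)
E = 5.65e6        # total energy, erg (about 0 deg C for these values)

beta = 3 * m * N / (4 * E)
alpha = (N / V) * (3 * m * N / (4 * pi * E)) ** 1.5

# Back-substitution into N = V*alpha*(pi/beta)^(3/2) and
# E = (3/4)*V*m*(alpha/beta)*(pi/beta)^(3/2):
N_check = V * alpha * (pi / beta) ** 1.5
E_check = 0.75 * V * m * (alpha / beta) * (pi / beta) ** 1.5
```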

The determination of the entropy here carried out permits now the specification directly of the complete thermodynamic behavior of the gas, viz., of the equation of state, and of the values of the specific heats. From the general thermodynamic definition of entropy:

${\displaystyle dS={\frac {dE+p\,dV}{T}}}$

are obtained the partial differential quotients of ${\displaystyle S}$ with regard to ${\displaystyle E}$ and ${\displaystyle V}$ respectively:

${\displaystyle \left({\frac {\partial S}{\partial E}}\right)_{V}={\frac {1}{T}},\quad \left({\frac {\partial S}{\partial V}}\right)_{E}={\frac {p}{T}}.}$

Consequently, with the aid of ${\displaystyle (20)}$:

${\displaystyle (21)\qquad \left({\frac {\partial S}{\partial E}}\right)_{V}={\frac {3}{2}}{\frac {kN}{E}}={\frac {1}{T}},}$

and

${\displaystyle (22)\qquad \left({\frac {\partial S}{\partial V}}\right)_{E}={\frac {kN}{V}}={\frac {p}{T}}.}$

The second of these equations:

${\displaystyle p={\frac {kNT}{V}}}$

contains the laws of Boyle, Gay-Lussac and Avogadro, the latter because the pressure depends only upon the number ${\displaystyle N}$, and not upon the constitution of the molecules. Writing it in the ordinary form:

${\displaystyle p={\frac {RnT}{V}},}$

where ${\displaystyle n}$ denotes the number of gram molecules or mols of the gas, referred to ${\displaystyle O_{2}=32g}$, and ${\displaystyle R}$ the absolute gas constant:

${\displaystyle R=8.315\cdot 10^{7}\,{\frac {\text{erg}}{\text{deg}}},}$

we obtain by comparison:

${\displaystyle (23)\qquad k={\frac {Rn}{N}}.}$
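As a numerical illustration of ${\displaystyle p=RnT/V}$: one mol at ${\displaystyle 0^{\circ }}$ C in the normal mol volume gives very nearly one atmosphere. The mol volume used below is a standard figure assumed here, not quoted in the lecture:

```python
# One mol of an ideal gas at 0 deg C in the normal mol volume, via p = RnT/V.
# CGS units throughout; V = 22414 cm^3 is an assumed standard figure.
R = 8.315e7       # absolute gas constant, erg/deg
n = 1             # number of mols
T = 273.0         # absolute temperature, deg
V = 22414.0       # mol volume, cm^3

p = R * n * T / V # dyn/cm^2; about 1.013e6, i.e. one atmosphere
```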

If we denote the ratio of the mol number to the molecular number by ${\displaystyle \omega }$, or, what is the same thing, the ratio of the molecular mass to the mol mass:

${\displaystyle \omega ={\frac {n}{N}},}$

and hence:

${\displaystyle (24)\qquad k=\omega R.}$

From this, if ${\displaystyle \omega }$ is given, we can calculate the universal constant ${\displaystyle k}$, and conversely.
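With a modern value of the number of molecules in a mol (an assumption not given in the lecture), equation ${\displaystyle (24)}$ yields the familiar value of ${\displaystyle k}$:

```python
# k = omega * R, with omega the reciprocal of the number of molecules per mol.
# The Avogadro number below is an assumed modern value, not from the text.
R = 8.315e7       # erg/deg
N_A = 6.022e23    # molecules per mol
omega = 1.0 / N_A
k = omega * R     # about 1.38e-16 erg/deg, the constant of equation (13)
```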

The equation ${\displaystyle (21)}$ gives:

${\displaystyle (25)\qquad E={\tfrac {3}{2}}kNT.}$

Now since the energy of an ideal gas is given by:

${\displaystyle E=Anc_{v}T,}$

wherein ${\displaystyle c_{v}}$ denotes in calories the heat capacity at constant volume of a mol, ${\displaystyle A}$ the mechanical equivalent of heat:

${\displaystyle A=4.19\cdot 10^{7}\,{\frac {\text{erg}}{\text{cal}}},}$

it follows that:

${\displaystyle c_{v}={\frac {3kN}{2An}},}$

and, having regard to ${\displaystyle (23)}$, we obtain:

${\displaystyle (26)\qquad c_{v}={\frac {3}{2}}{\frac {R}{A}}=3.0,}$

the mol heat in calories of any monatomic gas at constant volume.

For the mol heat ${\displaystyle c_{p}}$ at constant pressure we have from the first law of thermodynamics

${\displaystyle c_{p}-c_{v}={\frac {R}{A}},}$

and, therefore, having regard to ${\displaystyle (26)}$:

${\displaystyle c_{p}=5,\quad {\frac {c_{p}}{c_{v}}}={\tfrac {5}{3}},}$

a known result for monatomic gases.
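These figures follow directly from ${\displaystyle (26)}$ and the values of ${\displaystyle R}$ and ${\displaystyle A}$ quoted above, as the short sketch below confirms:

```python
# Mol heats of a monatomic gas: c_v = (3/2)R/A and c_p = c_v + R/A,
# using the values of R and A quoted in the text.
R = 8.315e7        # absolute gas constant, erg/deg
A = 4.19e7         # mechanical equivalent of heat, erg/cal
c_v = 1.5 * R / A  # about 2.98 cal, the "3.0" of equation (26)
c_p = c_v + R / A  # about 4.96 cal, rounded to 5 in the text
ratio = c_p / c_v  # exactly 5/3, independent of R and A
```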

The mean kinetic energy ${\displaystyle L}$ of a molecule is obtained from ${\displaystyle (25)}$:

${\displaystyle (27)\qquad L={\frac {E}{N}}={\tfrac {3}{2}}kT.}$
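Equation ${\displaystyle (27)}$ lends itself to a quick numerical illustration for mercury vapor, a monatomic gas mentioned later in this lecture. The atomic data below are assumed modern values, not taken from the text:

```python
from math import sqrt

# Mean kinetic energy per molecule, L = (3/2)kT, and the implied
# root-mean-square speed, for mercury vapor at 0 deg C (assumed data).
k = 1.38e-16            # erg/deg
T = 273.0               # deg
L = 1.5 * k * T         # mean kinetic energy of one molecule, erg

m = 200.6 / 6.022e23    # mass of one mercury atom, g
v_rms = sqrt(2 * L / m) # cm/s; about 1.84e4 cm/s, i.e. roughly 184 m/s
```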

You notice that we have derived all these relations through the identification of the mechanical with the thermodynamic expression for the entropy, and from this you recognize the fruitfulness of the method here proposed.

But a method demonstrates its full usefulness only when we utilize it, not merely to derive laws which are already known, but to apply it in domains for whose investigation there at present exist no other methods. In this connection its application affords various possibilities. Take the case of a monatomic gas which is not sufficiently attenuated to have the properties of the ideal state; here there are, as pointed out by J. D. van der Waals, two things to consider: (1) the finite size of the atoms, (2) the forces which act among the atoms. Taking account of these involves a change in the value of the probability and in the energy of the gas as well, and, so far as can now be shown, the corresponding change in the conditions for thermodynamic equilibrium leads to an equation of state which agrees with that of van der Waals. Certainly there is here a rich field for further investigation, which will become the more promising as experimental tests of the equation of state become more numerous.

Another important application of the theory has to do with heat radiation, with which we shall be occupied the coming week. We shall proceed then in a similar way as here, and shall be able from the expression for the entropy of radiation to derive the thermodynamic properties of radiant heat.

Today we will refer briefly to the treatment of polyatomic gases. I have previously, upon good grounds, limited the treatment to monatomic molecules; for up to the present real difficulties appear to stand in the way of a generalization, from the principles employed by us, to include polyatomic molecules; in fact, if we wish to be quite frank, we must say that a satisfactory mechanical theory of polyatomic gases has not yet been found. Consequently, at present we do not know to what place in the system of theoretical physics to assign the processes within a molecule, the intra-molecular processes. We are obviously confronted by puzzling problems. A noteworthy and much discussed beginning was, it is true, made by Boltzmann, who introduced the most plausible assumption that for intra-molecular processes simple laws of the same kind hold as for the motion of the molecules themselves, i. e., the general equations of dynamics. It is easy then, in fact, to prove that for a polyatomic gas the molecular heat ${\displaystyle c_{v}}$ must be greater than ${\displaystyle 3}$ and that consequently, since the difference ${\displaystyle c_{p}-c_{v}}$ is always equal to ${\displaystyle 2}$, the ratio is

${\displaystyle {\frac {c_{p}}{c_{v}}}={\frac {c_{v}+2}{c_{v}}}<{\tfrac {5}{3}}.}$

This conclusion is completely confirmed by experience. But this in itself does not confirm the assumption of Boltzmann; for, indeed, the same conclusion is reached very simply from the assumption that there exists intra-molecular energy which increases with the temperature. For then the molecular heat of a polyatomic gas must be greater by a corresponding amount than that of a monatomic gas.
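The inequality is easy to make concrete: for any mol heat ${\displaystyle c_{v}}$ greater than ${\displaystyle 3}$, the ratio ${\displaystyle (c_{v}+2)/c_{v}}$ falls below ${\displaystyle 5/3}$. In the sketch below only the value ${\displaystyle c_{v}=3}$ comes from the text; the larger values are merely illustrative:

```python
# The ratio c_p/c_v = (c_v + 2)/c_v for several mol heats c_v (in calories).
def gamma(c_v):
    return (c_v + 2) / c_v

ratios = {c_v: gamma(c_v) for c_v in (3, 5, 6)}
# c_v = 3 gives 5/3; every c_v > 3 gives a smaller ratio, as stated.
```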

Nevertheless, up to this point the Boltzmann theory never leads to contradiction with experience. But as soon as one seeks to draw special conclusions concerning the magnitude of the specific heats, serious difficulties arise; I will refer to only one of them. If one assumes the Hamiltonian equations of mechanics as applicable to intra-molecular motions, he arrives of necessity at the law of “uniform distribution of energy,” which asserts that under certain conditions, not essential to consider here, in a thermodynamic state of equilibrium the total energy of the gas is distributed uniformly among all the individual energy phases corresponding to the independent variables of state, or, as one may briefly say: the same amount of energy is associated with every independent variable of state. Accordingly, the mean energy of motion of the molecules, ${\displaystyle {\tfrac {1}{2}}kT}$, corresponding to a given direction in space, is the same as for any other direction, and, moreover, the same for all the different kinds of molecules and ions, also for all suspended particles (dust) in the gas, of whatever size, and, furthermore, the same for all kinds of motions of the constituents of a molecule relative to its centroid.

If one now reflects that a molecule commonly contains, so far as we know, quite a large number of freely movable constituents, and certainly that a normal molecule of a monatomic gas, e. g., mercury, possesses numerous freely moving electrons, then, in accordance with the law of uniform energy distribution, the intra-molecular energy must constitute a much larger fraction of the whole specific heat of the gas, and therefore ${\displaystyle c_{p}/c_{v}}$ must turn out much smaller, than is consistent with the measured values. Thus, e. g., for an atom of mercury, in accordance with the measured value ${\displaystyle c_{p}/c_{v}=5/3}$, no part whatever of the heat added may be assigned to the intra-molecular energy.
Boltzmann and others, in order to eliminate this contradiction, have fixed upon the possibility that, within the time of observation of the specific heats, the vibrations of the constituents of a molecule do not change appreciably with respect to one another, and come into heat equilibrium with the progressive motion so slowly that this process is no longer capable of detection through observation. Up to now, however, no such delay in the establishment of a state of equilibrium has been observed. Perhaps it would be productive of results if, in delicate measurements, special attention were paid to the question whether observations which take a longer time lead to a greater value of the mol-heat, or, what comes to the same thing, a smaller value of ${\displaystyle c_{p}/c_{v}}$, than observations lasting a shorter time.

If one has been made mistrustful through these considerations concerning the applicability of the law of uniform energy distribution to intra-molecular processes, the mistrust is accentuated upon the inclusion of the laws of heat radiation. I shall make mention of this in a later lecture.

When we pass from stable atoms to the unstable atoms of radioactive substances, the principles following from the kinetic gas theory lose their validity completely. For the striking failure of all attempts to find any influence of temperature upon radioactive phenomena shows us that an application here of the law of uniform energy distribution is certainly not warranted. It will, therefore, be safest meanwhile to offer no definite conjectures with regard to the nature and the laws of these noteworthy phenomena, and to leave this field for further development to experimental research alone, which, I may say, with every day throws new light upon the subject.

1. We can call ${\displaystyle \sigma }$ a “macro-differential” in contradistinction to the micro-differentials which are infinitely small with reference to the dimensions of a molecule. I prefer this terminology for the discrimination between “physical” and “mathematical” differentials in spite of the inelegance of phrasing, because the macro-differential is also just as much mathematical as physical and the micro-differential just as much physical as mathematical.