The Encyclopedia Americana (1920): Induction (logic)

INDUCTION, in logic, that method of reasoning which establishes general laws or specific predictions of future, present or past facts on the basis of individual experiences. It is the type of argument by which, let us say, the law of universal gravitation is demonstrated on the basis of observations as to the mutual attraction of certain given bodies, or by which an insurance company is able to determine a safe price for future policies on the basis of past statistical tables, or by which the geologist may describe the history of a certain drainage system through his knowledge of the present status of the system and of the modifications taking place in the drainage systems of the present time. Induction differs from deduction not only in that it starts from particular facts rather than general laws, but also in that the propositions derived by an induction (not covering every single case of the law it sets out to establish) never even appear to have that apodictic certainty which we naturally attribute to the results of correct deduction from indisputable premises. An induced conclusion is only probably true; furthermore, if it is at all precise in its terms, it is in general only approximately true. The probable correctness of the successive digits of a decimal fraction obtained by inductive reasoning falls off with amazing rapidity. A number, the first nine digits of which are all but absolutely certain, may well have a highly probable 10th digit, a likely 11th and an absolutely worthless 12th.
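
The falling off of digitwise certainty can be given a rough numerical face. The sketch below is this edition's illustration, not the article's: it assumes the error of an inductively measured number follows a Gaussian law with a hypothetical standard deviation, and computes the probability that the error is smaller than half a unit in each successive decimal place.

    import math

    # Toy illustration (this edition's, under an assumed Gaussian error model):
    # how the chance that a decimal place is resolved falls off place by place.
    def prob_place_resolved(sigma, k):
        """P(|error| < half a unit in the k-th decimal place), error ~ N(0, sigma^2)."""
        half_unit = 0.5 * 10 ** (-k)
        return math.erf(half_unit / (sigma * math.sqrt(2.0)))

    sigma = 3e-11  # hypothetical standard deviation of the measured number
    for k in range(9, 13):
        print(f"decimal place {k}: probability resolved = {prob_place_resolved(sigma, k):.3f}")

With this assumed uncertainty the ninth place is resolved almost certainly, the tenth with probability near nine-tenths, the eleventh with little better than one chance in eight, and the twelfth hardly at all.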

Induction, then, is the probable and approximate demonstration of laws or predictions on the basis of concrete experiences. It is not what Hume and the other 18th-century British empiricists considered it to be: the formation of general ideas — i.e., universals — from mere particulars. In the first place, a universal is not a universal law, nor is a fact a particular. But furthermore, we do not form the notion of red by looking at this red thing and that, and abstracting from them their common quality, nor by associating with the image of one red thing all our past experiences of red objects. There is an endless number of attributes possessed by a group of things, and even possessed exclusively by this group; redness can be only one of these. When we see the group we recognize redness either as the color quality its members possess in common, or as their simplest common attribute, or simply because redness is the property that most attracts our attention. In all these cases we must have a concomitant or antecedent consciousness of a universal — of color, of simplicity or of redness itself. The process described by the British empiricists simply does not exist, and every formation of a general notion from experience involves a prior awareness of general notions. That is, notwithstanding the contrary opinion of the nominalists, the general notions of qualities and relations which enter into the statement of inductive laws are not mere mosaics of particular sense data. In like manner, the inductive laws themselves are not mere mosaics of particular facts. A very common expression among inductive logicians, due to J. S. Mill, is “the uniformity of nature”: we are justified in proceeding from this fact and that fact and the other fact to the general law subsuming them all because nature is uniform. This principle of the uniformity of nature has two very different meanings. It may be little more than a tautology, or it may be the cardinal law of natural science — of all science, for that matter. If the uniformity of nature means simply what it says, it means merely that two occurrences can never agree in all but one aspect and disagree in that one. Now, as even an approximately complete inventory of the aspects of an occurrence is never at our disposal, and since moreover the temporal and spatial position of an occurrence must be counted among its aspects, this law tells us, for all practical purposes, absolutely nothing. Nature might be perfectly uniform even though the jumping of a flea should determine the motions of the planets; the establishment of astronomical laws, however, would be a somewhat difficult pursuit.

The uniformity of nature, as the scientist understands it, is much more than this. It could more appropriately be called the continuity of nature. Perhaps it is best to consider it in the form which it assumes in the Newtonian physics. In the Newtonian physics, the world is completely described when the density of the matter occupying each point of space at some instant is known, together with the magnitude and direction of its velocity. The investigations of physics consist, then, in determining the actual form of the relation between quantities representing time, space, local density, and direction and magnitude of velocity. In the attempt to discover the function expressing the time in terms of the remaining seven variables, one assumption is always made — that this function is in general (i.e., in the language of the mathematician, except on a set of points of measure zero) what the mathematicians term analytic. One consequence of this is that the function is continuous — i.e., by making sufficiently small changes in the seven variables, we may make the resulting difference in time smaller than any assigned quantity, and keep it so. Furthermore, if we take a large number of experimental determinations of the time and the other seven variables, it is possible to construct a single function of the seven variables which will represent the time at each of these points and which, by simply increasing the number of experiments, may be made to differ from the function representing the actual course of phenomena by less than any assigned quantity over any desired period of time. Two things follow: first, sufficiently slight errors in the observations mean only slight errors in the law covering the observations; secondly, by increasing the number of observations, it is possible to render the maximum error in the law formulated to cover them less than any given value. These facts assure us that by taking a sufficient number of observations, and by exercising a sufficient amount of care in each observation, we may approximate as nearly to the truth as we desire. That the amount of labor in obtaining a reasonable approximation is not beyond all human powers is an article of faith which may be said to constitute a part of what we mean by the law of the uniformity of nature. Other important elements have to do with the spatial distribution of phenomena. Not only the very small, but also the very remote, is inaccessible to observation and measurement. The existence of a scientific physics depends on our ability to neglect the phenomena at sufficiently great astronomical distances. Similar propositions assuring us of the negligibility of that which is sufficiently difficult to observe are basal for other aspects of physical theory.
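
The convergence claimed here can be imitated numerically. The following sketch is this edition's illustration, under assumptions the article does not make (a sine curve standing in for the actual course of phenomena, Gaussian observational errors, and a polynomial for the fitted law): as the number of observations grows, the worst-case error of the fitted law shrinks.

    import numpy as np

    # Toy model: a smooth "true law", noisy observations of it, and a smooth
    # fitted law whose maximum error falls as observations accumulate.
    rng = np.random.default_rng(0)
    true_law = np.sin                      # stand-in for the actual course of phenomena
    grid = np.linspace(0.0, np.pi, 200)    # points at which fitted and true laws are compared

    for n_obs in (20, 200, 2000):
        x = rng.uniform(0.0, np.pi, n_obs)
        y = true_law(x) + rng.normal(0.0, 0.1, n_obs)   # observations with slight errors
        coeffs = np.polyfit(x, y, deg=7)                # a smooth (polynomial) candidate law
        max_err = np.max(np.abs(np.polyval(coeffs, grid) - true_law(grid)))
        print(f"{n_obs:5d} observations: maximum error of fitted law ~ {max_err:.4f}")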

The law of the uniformity of nature in physics then involves something like the following aggregate of statements: (1) There is a single equation subsisting between the time, the density of matter at every point of space at that time, and the direction and magnitude of velocity at each point. (2) This equation is such that each of the unknowns involved is in general an analytic function of all the rest — which implies, among other things, that sufficiently small changes in the other unknowns produce only slight changes in it. (3) The error, as regards the occurrences near us, committed by considering only those physical phenomena within a sphere of finite radius with ourselves at the centre ultimately decreases very rapidly as the radius of the sphere increases.
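
Statements (1) and (2) may be put schematically in modern notation; the symbols below are this edition's illustrative choices (t for the time, x, y, z for position, \rho for density, v for velocity), not the article's:

    \[ F(t,\, x, y, z,\, \rho,\, v_x, v_y, v_z) = 0, \]
    \[ \forall \varepsilon > 0 \;\exists \delta > 0 :\quad
       \lVert (\Delta x, \Delta y, \Delta z, \Delta \rho, \Delta v_x, \Delta v_y, \Delta v_z) \rVert < \delta
       \;\Longrightarrow\; \lvert \Delta t \rvert < \varepsilon , \]

the first being the single equation of statement (1), and the second the continuity of the time in the remaining unknowns which the analyticity of statement (2) entails.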

If these statements are taken together with rough estimates of the orders of magnitude of the changes that variations in some of the physical quantities mentioned in (2) and (3) entail in others, we have the barest outline of the law of the uniformity of nature as it is found in physics. This law is very far from being a tautology — it is not even obvious. Furthermore, it is specifically a law of physics, and has been established only since the time of Newton, by inductive physical researches. Other disciplines, such as psychology, have related but different laws of uniformity. They all involve statements of the continuity of certain concrete phenomena. It appears, then, that induction demands antecedent universal propositions that are not identically true — ex puris particularibus nil concluditur (from pure particulars nothing can be concluded) is not confined to deduction. These synthetic propositions, a priori at least in part, go back and back until in the last analysis they are due simply to a general consonance between the human mind and the facts of nature. This consonance, which consists largely in a preference for continuity on the part both of the mind and of nature, is continually rendered more perfect by the attrition of our imaginings in the places where they disagree with our observations. The history of science consists in a gradual remodeling of each theory at the points where it is wrong, in a mathematical treatment of the errors of the last mathematical treatment. It will be seen that in the theory here developed the distinction between induction and deduction is not absolute, but is rather one of degree and attitude. The stages of an inductive research are: (1) the imagination of a theory to fit the facts; (2) the deduction of the consequences of the theory; (3) the verification of these consequences and the observation of their errors; (4) the imagination of a theory to account for the errors of the original theory, or the formulation of a new theory avoiding these errors. The process runs through a regular, never-ending cycle, as the sketch below imitates. Stages (1) and (2) are precisely those of the mathematician in his purely deductive reasoning. Stages (3) and (4) may be paralleled in a mathematical research whose object is the formation of an algorithm to subserve a special end. The only difference is that the verification which the mathematician makes is complete, that of the physicist incomplete.
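
The following is a schematic rendering of the four-stage cycle, this edition's illustration rather than anything in the article: the "theories" are polynomials of increasing degree, the hypothetical data follow a quadratic law, and each pass is a treatment of the errors of the last treatment.

    import numpy as np

    # Hypothetical observations obeying a quadratic law, with slight errors.
    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 50)
    observed = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0.0, 0.01, x.size)

    degree = 0                                         # the first, crudest theory
    for cycle in range(5):
        coeffs = np.polyfit(x, observed, deg=degree)   # (1) imagine a theory
        predicted = np.polyval(coeffs, x)              # (2) deduce its consequences
        errors = observed - predicted                  # (3) verify; note the errors
        worst = np.max(np.abs(errors))
        print(f"cycle {cycle}: theory of degree {degree}, worst error {worst:.4f}")
        if worst < 0.05:                               # agreement within observational error
            break
        degree += 1                                    # (4) a theory to account for the errors

The loop halts when it reaches the quadratic, the degree of the law that generated the data; with real phenomena the verification is never complete and the cycle never closes.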

The importance given to continuity in this article may be expressed by saying that the chief inductive method of the scientist is what Mill calls the method of concomitant variations. Mill's canon for this method is: “Whatever phenomenon varies in any manner whenever another phenomenon varies in some particular manner, is either a cause or an effect of that phenomenon, or is connected with it through some fact of causation.”
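
In modern statistical dress, the method of concomitant variations amounts to checking whether two measured phenomena vary together. The sketch below is this edition's illustration with invented data (a hypothetical pressure and boiling point), not an example from the article:

    import numpy as np

    # Hypothetical paired observations of two phenomena.
    rng = np.random.default_rng(2)
    pressure = rng.uniform(1.0, 2.0, 100)                                   # phenomenon A
    boiling = 100.0 + 25.0 * (pressure - 1.0) + rng.normal(0.0, 0.5, 100)   # phenomenon B
    r = np.corrcoef(pressure, boiling)[0, 1]
    print(f"correlation of the variations: r = {r:.3f}")   # near 1: concomitant variation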

It will be noted that Mill gives a causal interpretation to the method. It has always been the custom of the philosopher, and almost never the custom of the scientist, to interpret the laws of nature under the aspect of cause. A law of nature is simply a more or less precise formula to which occurrences conform. Sometimes, and only sometimes, the correlated phenomena will have a temporal order, and we may talk of antecedents and consequents. In such a case the antecedents may be called causes and the consequents effects. This implies no obscure effective force emanating from the cause and proceeding to the effect — Hume demolished that notion long ago. A causal interpretation of the universe, then, consists merely in selecting one especial type of inductive correlation and elevating it to the type of all induction whatever.

Causal language is particularly adapted to vague, ill-defined phenomena, about which we can assert little but their presence or absence. Accordingly, Mill's remaining canons of induction deal with such phenomena. The method of agreement argues a causal relation between A and B when two trains of circumstances are known which begin in A and have nothing else in common except their termination in B. The method of difference concludes that A causes B if a train of events is known which contains A and ends in B, while a train of events precisely similar except that it does not contain A likewise fails to contain B. The joint method of agreement and difference is what its name would imply. The method of residues is that in which the unexplained parts of a nexus of events are linked up with one another. Not one of these methods is without grave dangers except in the hands of the scientist with a concrete knowledge of the field where he applies it. The artificial division of antecedent and consequent alike into a jig-saw puzzle of yes-and-no occurrences is vicious in the extreme.
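
To make the first two canons concrete, here is a toy rendering, this edition's and not Mill's own formalism, over presence-or-absence records of circumstances; it is precisely the yes-and-no treatment that, as the article warns, must be guided by concrete knowledge of the field.

    # Each case is a set of antecedent circumstances paired with the observed effect.

    def method_of_agreement(cases, effect):
        """Circumstances present in every case that shows the effect."""
        with_effect = [set(circ) for circ, eff in cases if eff == effect]
        return set.intersection(*with_effect)

    def method_of_difference(case_with, case_without):
        """Circumstances present when the effect occurs but absent when it does not."""
        return set(case_with) - set(case_without)

    # Hypothetical records: two trains of circumstances agreeing only in A and B.
    cases = [({"A", "C", "D"}, "B"), ({"A", "E", "F"}, "B")]
    print(method_of_agreement(cases, "B"))                    # {'A'}: agreement points to A
    print(method_of_difference({"A", "C", "D"}, {"C", "D"}))  # {'A'}: difference points to A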

Induction has been a method of human reasoning from time immemorial, and has especially characterized the centuries since Francis Bacon. Aristotle, who was the first to recognize induction as a scientific method, gave a very scant account of any induction other than that by a complete enumeration of instances. Bacon followed him in this excessive restriction of inductive reasoning. The beginning of the 17th century marks a period when the progress of science had forced a consciousness of the inadequacy of the Aristotelian logic upon the world of learning. The accepted theory of deductive reasoning began to be supplemented in practice by a methodology, but no approximately adequate treatment of this methodology was developed until the middle of the 19th century. In 1840 Whewell (q.v.) published a work in which for the first time due credit was given to the function of imagination and speculation in inductive reasoning. Soon afterward Mill published his ‘Logic,’ in which he formulated the five methods of inductive research that have already been mentioned, and expressed the theory, also mentioned above, that every induction is a syllogism with the uniformity of nature as its major premise. Since the time of Mill the growth of inductive logic and methodology has been extremely rapid. (See Law in Science and Philosophy; Logic; Mill, John Stuart). Consult Aristotle, ‘Organon’; Bacon, F., ‘Novum Organum’; Joseph, H. W. B., ‘Introduction to Logic’ (Oxford 1906); Mill, J. S., ‘System of Logic’ (London 1843); Russell, B. A. W., ‘Our Knowledge of the External World’ (Chicago 1914); Welton, ‘Manual of Logic’ (Pt. II, London 1896); Whewell, W., ‘Philosophy of the Inductive Sciences’ (London 1840).

Norbert Wiener,
Editorial Staff of The Americana.