Lightning in a Bottle/Chapter 1

From Wikisource

Chapter One

Who Are You, and What Are You Doing Here?

1.0 Cooperate or Die

The story of science is a story of progress through collaboration. The story of philosophy, on the face of it, is a story of neither: it is an academic cocktail party cliché that when an area of philosophy starts making progress, it’s time to request funds for a new department. If this observation is supposed to be a mark against philosophy, I’m not sure I understand the jibe—surely it’s a compliment to say that so much has sprung from philosophy’s fertile soil, isn’t it? Whether or not the joke contains a kernel of truth (and whether or not it does indeed count as a black mark against the usefulness of the discipline) is not immediately important. This project is neither a work in philosophy as traditionally conceived, nor a work in science as traditionally conceived: it is, rather, a work on a particular problem. I’ll say a bit more about what that means below, but first let’s start with an anecdote as a way into the problem we’ll be tackling.

In 2009, Columbia University's Mark Taylor, a professor of Religion, wrote an Op-Ed for the New York Times calling for a radical restructuring of academia. Among the controversial changes proposed by Taylor was the following: "Abolish permanent departments, even for undergraduate education, and create problem-focused programs. These constantly evolving programs would have sunset clauses, and every seven years each one should be evaluated and either abolished, continued or significantly changed.[1]" This suggestion drew a lot of fire from other academics. Brian Leiter, on his widely-circulated blog chronicling the philosophy profession, was particularly scathing in his rebuke: "Part of what underlies this is the fact that Taylor has no specialty or discipline of his own, and so would like every other unit to follow suit, and 'specialize' in intellectual superficiality across many topics.[2]" Ouch. Professor John Kingston, of the linguistics department at the University of Massachusetts Amherst, was a bit more charitable in his response, which appeared in the published reader comments on the New York Times' website:

Rather than looking inward as [Taylor] claims we all do, my colleagues and I are constantly looking outward and building intellectual bridges and collaborations with colleagues in other departments. In my department's case, these other departments include Psychology, Computer Science, and Communications – these collaborations not only cross department boundaries at my institution but college boundaries, too. Moreover, grants are increasingly collaborative and interdisciplinary.[3]

This seems to me to be a more sober description of the state of play today. While some of us might cautiously agree with Taylor's call for the radical restructuring of university departments (and, perhaps, the elimination of free-standing disciplines), virtually all of us seem to recognize the importance and power of collaboration across existing disciplines, and to recognize that (contra what Leiter has said here) generality is not necessarily the same thing as superficiality. The National Academies' Committee on Science, Engineering, and Public Policy recognized the emerging need to support this kind of collaborative structure at least as far back as 2004, publishing (through the National Academies Press) an exhaustive report titled Facilitating Interdisciplinary Research. The report describes the then-current state of interdisciplinary research in science and engineering:

Interdisciplinary thinking is rapidly becoming an integral feature of research as a result of four powerful “drivers”: the inherent complexity of nature and society, the desire to explore problems and questions that are not confined to a single discipline, the need to solve societal problems, and the power of new technologies.[4]

The times, in short, are a-changing; the kinds of problems facing science today increasingly call for a diverse and varied skill-set—both in theory and in practical application—and we ignore this call at our peril. This is true both inside traditional disciplines and outside them; in that sense, Taylor’s call was perhaps not as radical as it first appears—the kind of collaborative, problem-focused research that he advocates is (to a degree) alive and well in the traditional academic habitat. Research in quantum mechanics, to take one example on which my background allows me to speak at least semi-intelligently, might incorporate work from particle physicists doing empirical work with cloud chambers, high-energy particle physicists doing other empirical work with particle accelerators, and still other particle physicists investigating the mathematics behind spontaneous symmetry breaking. Progress will come as a result of a synthesis of these approaches to the problem.

This is hardly earth-shattering news: science has long labored under an epistemic and methodological division of labor. Problems in physics (for instance) have long since become so complex that no single physicist can hope to understand all the intricacies (or have the equipment to perform all the necessary experiments), so physicists (and laboratories) specialize. The results that emerge are due to the action and work of the collective—to the institutional practices and structures that allow for this cooperative work—as much as to the work of individual scientists in the laboratories. Each branch supports all the others by working on more-or-less separable problems in pursuit of a common goal—a goal which no one branch is suited to tackle in isolation. In the case of elementary particle physics, that goal is (roughly) to understand patterns in the behavior of very, very small regions of the physical world; every relevant tool (from mathematical manifolds to particle accelerators) is recruited in pursuit of that goal.

More recently, however, a more sweeping collaborative trend has begun to emerge; increasingly, there have been meaningful contributions to quantum mechanics that have come not just from particle physicists, nor even just from physicists: the tool box has been enlarged. The work of W.H. Zurek on the relationship between quantum mechanics and classical mechanics, for instance, has been informed by such diverse fields of science as Shannon-Weaver information theory, mathematical game theory, and even Darwinian evolutionary biology[5]. "Pure" mathematics has contributions to make too, of course; much of the heavy lifting in General Relativity (for example) is done by differential geometry, which was originally conceived in the purely theoretical setting of a mathematics department.

Philosophy too has been included in this interdisciplinary surge. The particular tools of the philosopher—the precise nature of which we shall examine in some detail in the coming sections—are well-suited to assist in the exploration of problems at the frontiers of human knowledge, and this has not gone unappreciated in the rest of the sciences. Gone are the days when most physicists shared the perspective apocryphally attributed to Richard Feynman, viz., "Philosophy of science is about as useful to scientists as ornithology is to birds." There are real conceptual problems at the heart of (say) quantum mechanics, and while the sort of scientifically-uninformed speculation that seems to have dominated Feynman's conception of philosophy is perhaps of little use to working scientists, the interdisciplinary turn in academia has begun to make it safe for the careful philosopher of science to swim along the lively reef of physical inquiry with the physicist, biologist, and chemist. Science is about collaboration, and there is room for many different contributions. No useful tool should be turned away.

So this call for radical collaboration is hardly new or revolutionary, despite the minor uproar that Taylor and his critics caused. The problem with which this project is concerned—the use to which I’ll be putting my own tools here—is not a new one either. It is one about which alarm bells have been ringing for at least 60 years now, growing steadily louder with each passing decade: the problem of rapid anthropogenic global climate change. I shall argue that what resources philosophy has to offer should not be ignored here, for every last bit of information that can be marshaled to solve this problem absolutely must be brought to bear. This is a problem that is more urgent than any before it, and certainly more than any since the end of the nuclear tensions of the Cold War. While it likely does not, as some have claimed, threaten the survival of the human species itself—short of a catastrophic celestial collision, few things beyond humanity's own weapons of mass destruction can claim that level of danger—it threatens the lives of millions, perhaps even billions, of individual human beings (as well as the quality of life for millions more), but only if we fail to understand the situation and act appropriately. I shall argue that this is quite enough of a threat to warrant an all-out effort to solve this problem. I shall argue that philosophy, properly pursued, has as real a contribution to make as any other branch of science. I shall argue that we must, in a very real sense, cooperate or die.

1.1 What's a Philosopher to Do?

Of course, we need to make all this a good deal more precise. It's all well and good for philosophers to claim to have something to add to science in general (and climate science in particular), but what exactly are we supposed to be adding? What are the problems of science that philosophical training prepares its students to tackle? Why are those students uniquely prepared to tackle those questions? What is it about climate science specifically that calls out for philosophical work, and how does philosophy fit into the overall project of climate science? Why (in short) should you care what I have to say about this problem? These are by no means trivial questions, and the answers to them are far from obvious. Let's start slowly, by examining what is (for us) perhaps the most urgent question in the first of the three categories introduced in Chapter Zero[6]: the question of how philosophy relates to the scientific project, and how philosophers can contribute to the advancement of scientific understanding[7].

The substance of the intuition lurking behind Feynman's quip about ornithology is this: scientists can get along just fine (thank you very much) without philosophers to tell them how to do their jobs. To a point, this intuition is surely sound—the physicist at work in the laboratory is concerned with the day-to-day operation of his experimental apparatus, with experiment design, and (at least sometimes) with theoretical breakthroughs that are relevant to his work. Practicing scientists—with a few very visible exceptions like Alan Sokal—paid little heed to the brisk "science wars" of the 1980s and 1990s. On the other hand, though, the intuition behind Feynman’s position is also surely mistaken; as I noted in Section 1.0, many of those same practicing physicists often acknowledge (for example) that people working in philosophy departments have made real contributions to the project of understanding quantum mechanics. It seems reasonable to suppose that those (living) scientists ought to be allowed to countermand Feynman, who, great a physicist as he was, is not in a terribly good position to comment on the state of the discipline today; as James Ladyman has observed, “the metaphysical attitudes of historical scientists are of no more interest than the metaphysical opinions of historical philosophers[8].” I tend to agree with this assessment: primacy should be given to the living, and (at least some) contemporary scientists are happy to admit a place for the philosopher in the scientific project.

Still, it might be useful to pursue this line of thinking a bit further. We can imagine how Feynman might respond to the charge leveled above; though he's dead we might (so to speak) respond in his spirit. Feynman might well suggest that while it is true that genuine contributions to quantum mechanics (and science generally) have occasionally come from men and women employed by philosophy departments, those contributions have come about as a result of those men and women temporarily leaving the realm of philosophy and (at least for a time) doing science. He might well suggest (as John Dewey did) that “…if [philosophy] does not always become ridiculous when it sets up as a rival of science, it is only because a particular philosopher happens to be also, as a human being, a prophetic man of science.[9]” That is, he might well side with the spirit behind the cocktail party joke mentioned in Section 1.0—anything good that comes out of a philosophy department isn’t philosophy: it’s science.

How are we to respond to this charge? Superficially, we might accuse the spirit of Feynman of simply begging the question; after all, he's merely defined science in such a way that it includes (by definition!) any productive work done by philosophers of science. Given that definition, it is hardly surprising that he would consider philosophy of science qua philosophy of science useless—he's defined it as the set of all the work philosophers of science do that isn't useful! 'Philosophy of science is useless to scientists,' on that view, isn't a very interesting claim. By the same token, though, we might think that this isn't a very interesting refutation; let's give the spirit of Feynman a more charitable reading. If there's a more legitimate worry lurking behind the spirit of Feynman's critique, it's this: philosophers, on the whole, are not qualified to make pronouncements about the quality of scientific theories—they lack the training and knowledge to contribute non-trivially to any branch of the physical sciences, and while they might be well-equipped to answer evaluative questions, they ought to leave questions about the nature of the physical world to the experts. If philosophers occasionally make genuine progress in some scientific disciplines, cases like that are surely exceptional; they are (as Dewey suggests) the result of unusually gifted thinkers who are able to work both in philosophy and science (though probably not at the same time).

What's a philosopher of science to say here? How might we justify our paychecks in the face of the spirit of Feynman's accusations? Should we resign ourselves to life in the rich (if perhaps less varied) world of value theory and pure logic, and content ourselves with the fact that condensed-matter physicists rarely attempt to expound on the nature of good and evil? Perhaps, but let's not give up too quickly. We might wonder (for one thing) what exactly counts as "science," if only to make sure that we're not accidentally trespassing where we don’t belong. For that matter, what counts as philosophy and (in particular) what is it that philosophers of science are doing (useful or not) when they're not doing science? Surely this is the most basic of all foundational questions, and our answers here will color everything that follows. With that in mind, it's important to think carefully about how best to explain ourselves to the spirit of Feynman.

1.2 What's a Scientist to Do?

Let's start with a rather banal observation: science is about the world[10]. Scientists are in the business of understanding the world around us—the actual world, not the set of all possible worlds, or Platonic heaven, or J.R.R. Tolkien’s Middle-earth[11]. Of course, this isn’t just limited to the observable, or visible, world: science is interested in the nature of parts of the world that have never been directly observed and (in at least some cases) never will be. Physicists, for instance, are equally concerned that their generalizations apply to the region of the world inside the sun[12] as they are that those generalizations apply to their laboratory apparatuses. There’s a more important sense in which science is concerned with more than just the observed world, though: science is not just descriptive, but predictive too—good science ought to be able to make predictions, not just tell us the way the world is right now (or was in the past). A science that consisted of enumerating all the facts about the world now, as useful as it might be, wouldn’t seem to count as a full-fledged science by today’s standard, nor would it seem to follow the tradition of historical science; successful or not, scientists since Aristotle (at least!) have, it seems, tried to describe the world not just as it is, but as it will be.

This leads us to another (perhaps) banal observation: science is about predicting how the world changes over time. Indeed, a large part of how we judge the success (or failure) of scientific theories is through their predictive success; Fresnel’s success with the wave theory of light, as demonstrated by the prediction (and subsequent observation) of a bright spot at the center of the shadow cast by a round disk, is a stock example for good reason—it was a triumph of novel predictive utility. General relativity’s successful prediction of the actual orbit of the planet Mercury is another excellent paradigm case here; Mercury’s erratic orbit, which was anomalous in Newton’s theory of gravity, is predicted by Einstein’s geometric theory. This success, it is important to note, is not in any sense a result of “building the orbit in by hand;” as James Ladyman and John Collier observe, though Einstein did (in some sense) set out to explain Mercury’s orbit through a general theory of gravitation, he did this entirely by reference to general facts about the world—the empirically accurate prediction of Mercury’s orbit followed from his theory, but nothing in the theory itself was set with that particular goal in mind. The history of science is, if not exactly littered with, certainly not lacking in other examples of success like this; indeed, having surprising, novel, accurate predictions “pop out” of a particular theory is one of the best markers of that theory’s success[13].

It is not enough, then, to say that science is about prediction of how the world will change over time. Science doesn’t just seek to make any predictions, it seeks to make predictions of a particular sort—predictions with verifiable consequences—and it does this by attempting to pick out patterns that are in evidence in the world now, and projecting them toward the future. That is to say: science is the business of identifying genuine patterns[14] in how the world changes over time. It is precisely this projectability that makes a putative pattern genuine rather than ersatz; this is why science is of necessity concerned with more than just enumerating the facts about the way the world is now—just given the current state of the world, we could hypothesize a virtually infinite number of “patterns” in that state, but only some of those putative patterns will let us make accurate predictions about what the state of the world will be in (say) another hour.

1.3 Toy Science and Basic Patterns

Let’s think more carefully about what it means to say that science is in the business of identifying genuine patterns in the world. Consider a simple example—we’ll sharpen things up as we go along. Suppose we’re given a piece of a binary sequence, and asked to make predictions about what numbers might lie outside the scope of the piece we’ve been given:

S1: 110001010110001

Is there a genuine pattern in evidence here? Perhaps. We might reasonably suppose that the pattern is “two ‘ones,’ followed by three ‘zeros’ followed by ‘one, zero, one, zero,’ and then repeat from the beginning.” This putative pattern R is empirically adequate as a theory of how this sequence of numbers behaves; it fits all the data we have been given. How do we know if this is indeed a genuine pattern, though? Here’s an answer that should occur to us immediately: we can continue to watch how the sequence of numbers behaves, and see if our predictions bear out. If we’ve succeeded in identifying the pattern underlying the generation of these numbers, then we’ll be able to predict what we should see next: we should see a ‘zero’ followed by a 'one,’ and then another ‘zero,’ and so on. Suppose the pattern continues:

S2: 0101100010101

Ah ha! Our prediction does indeed seem to have been borne out! That is: in S2, the string of numbers continues to evolve in a way that is consistent with our hypothesis that the sequence at large is (1) not random and (2) generated by the pattern R. Of course, this is not enough for us to say with certainty that R (and only R) is the pattern behind the generation of our sequence; it is entirely possible that the next few bits of the string will be inconsistent with R; that is one way that we might come to think that our theory of how the string is being generated is in need of revision. Is this the only way, though? Certainly not: we might also try to obtain information about what numbers came before our initial data-set and see if R holds there, too; if we really have identified the pattern underlying the generation of S, it seems reasonable to suppose that we ought to be able to “retrodict” the structure of sub-sets of S that come before our initial data-set just as well as we can predict the structure of sub-sets of S that come after our initial data-set. Suppose, for example, that we find that just before our initial set comes the string:

S0: 00001000011111

The numbers in this string are not consistent with our hypothesis that all the numbers in the sequence at large are generated by R. Does this mean that we’ve failed in our goal of identifying a pattern, though? Not necessarily. Why not?
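The checking procedure we have been describing can be made concrete in a short sketch (Python here; the function name is mine, and R is rendered as the nine-digit block "110001010" following the description of the pattern given above):

```python
# A sketch of the pattern-checking procedure described in the text.
# R is the repeating nine-digit block "110001010" (two ones, three
# zeros, then one-zero-one-zero); a segment counts as consistent with
# R if it matches the repeating R-stream starting at some phase.
R = "110001010"

def consistent_with_R(segment):
    """Return the first phase at which `segment` matches the repeating
    R-stream, or None if no phase works."""
    for phase in range(len(R)):
        if all(bit == R[(phase + i) % len(R)] for i, bit in enumerate(segment)):
            return phase
    return None

S1 = "110001010110001"              # the initial data-set
S2 = "0101100010101"                # the continuation
S0 = "00001000011111"               # the earlier string

print(consistent_with_R(S1))        # 0: S1 fits R from the start
print(consistent_with_R(S1 + S2))   # 0: the continuation fits as well
print(consistent_with_R(S0))        # None: no phase of R fits S0
```

The `None` result for S0 is just the formal counterpart of the observation above: no way of lining R up against S0 reproduces its digits.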

There’s another important question that we’ve been glossing over in our discussion here: for a pattern in some data to be genuine must it also be global[15]? That is, for us to say reasonably that R describes the sequence S, must R describe the sequence S everywhere? Here’s all the data we have now:

S0-2: 000010000111111100010101100010101100010101

It is clear that we can no longer say that R (or indeed any single pattern at all) is the pattern generating all of S. This is not at all the same thing as saying that we have failed to identify a pattern in S simpliciter, though. Suppose that we have some reason to be particularly interested in what’s going on in a restricted region of S: the region S1-2. If that’s the case, then the fact that R turns out not to hold for the totality of S might not trouble us at all; identifying a universal pattern would be sufficient for predicting what sequence of numbers will show up in S1-2, but it is by no means necessary. If all we’re interested in is predicting the sequence in a particular region of S, identifying a pattern that holds only[16] in that region is no failure at all, but rather precisely what we set out to do to begin with! It need not trouble us that the pattern we’ve identified doesn’t hold everywhere in S—identifying that pattern (if indeed there is one to be identified) is another project entirely.

When we’re investigating a sequence like S, then, our project is two-fold: we first pick a region of S about which we want to make predictions, and then attempt to identify a pattern that will let us make those predictions. When we have a candidate pattern, we can apply it to heretofore unobserved segments of our target region and see if the predictions we’ve made by using the pattern are borne out. That is: we first identify a particular way of carving up our target data-set and then (given that carving) see what patterns can be picked out. That any patterns identified by this method will hold (or, better, that we have good reason to think they'll hold) in a particular region only is (to borrow the language of computer programmers) a feature rather than a bug. It's no criticism, in other words, to say that a putative pattern that we've identified relative to a particular carving of our subject-matter holds only for that carving; if our goal is just to make predictions about a restricted region of S, then identifying a pattern that holds only in that region gives us license to (sensibly) ignore data from outside that region, which might well make our task significantly easier[17].

Let's think about another potentially problematic case. Suppose now that we're given yet another piece of S:

S3: 0010100100010

S3 is almost consistent with having been generated by R—only a single digit is off (the zero in the seventh position ought to be a one if R is to hold)—but still, it seems clear that it is not an instance of the pattern. Does this mean, though, that we have failed to identify any useful regularities in S3? I will argue that it most certainly does not mean that, but the point is by no means an obvious one. What's the difference between S3 and S0 such that we can say meaningfully that, in picking out R, we've identified something important about the former but not the latter? To say why, we'll have to be a bit more specific about what counts as a pattern, and what counts as successful identification of a pattern.

Following Dennett[18] and Ladyman et al.[19], we might begin by thinking of patterns as being (at the very least) the kinds of things that are "candidates for pattern recognition.[20]" But what does that mean? Surely we don't want to tie the notion of a pattern to particular observers—whether or not a pattern is in evidence in some dataset (say S3) shouldn't depend on how dull or clever the person looking at the dataset is. We want to say that there at least can be cases where there is in fact a pattern present in some set of data even if no one has yet picked it out (or perhaps ever will). As Dennett notes, though, there is a standard way of making these considerations more precise: we can appeal to information-theoretic notions of compressibility. A pattern exists in some data if and only if there is some algorithm by which the data can be significantly compressed.
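As a rough illustration of the compressibility idea (using an off-the-shelf compressor, Python's zlib, as a stand-in for "some algorithm"; the specific lengths below are incidental to the point), data generated by R admits a far shorter description than random data of the same length:

```python
# A rough illustration of Dennett's compressibility criterion:
# patterned data admits a much shorter description than its verbatim
# bit map, while (algorithmically) random data does not.
import os
import zlib

patterned = ("110001010" * 100).encode()   # 900 bytes generated by R
random_bytes = os.urandom(900)             # 900 bytes of noise

print(len(zlib.compress(patterned)))       # a few dozen bytes
print(len(zlib.compress(random_bytes)))    # roughly 900 bytes (or more)
```

The patterned string compresses dramatically because the compressor can exploit the relationship between each segment and the segments around it; the random bytes admit no such exploitation.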

This is a bit better, but still somewhat imprecise. What counts as compression? More urgently, what counts as significant compression? Why should we tie our definition of a pattern to those notions? Let's think through these questions using the examples we've been looking at for the last few pages. Think, to begin with, of the sequence:

S1-2: 1100010101100010101100010101

This, recall, was our perfect case for R: the pattern we identified holds perfectly in this data-set. What does it mean to say that R holds perfectly in light of the Dennettian compressibility constraint introduced above, though? Suppose that we wanted to communicate this string of digits to someone else—how might we go about doing that? Well, one way—the easiest way, in a sense—would just be to transmit the string verbatim: to communicate a perfect bit map of the data. That is, for each digit in the string, we can specify whether it is a 'one' or a 'zero,' and then transmit that information (since there are 28 digits in the dataset S1-2, the bit-map of S1-2 is 28 bits long). If the string we're dealing with is truly random then this is (in fact) the only way to transmit its contents[21]: we have to record the state of each bit individually, because (if the string is random) there is no relationship at all between a given bit and the bits around it. Now we're getting somewhere. Part of what it means to have identified a pattern in some data-set, then, is to have (correctly) noticed that there is a relationship between different parts of the data-set under consideration—a relationship that can be exploited to create a more efficient encoding than the simple verbatim bit-map.

The sense of 'efficiency' here is a rather intuitive one: an encoding is more efficient just in case it is shorter than the verbatim bit map—just in case it requires fewer bits to transmit the same information. In the case of S1-2, it's pretty easy to see what this sort of encoding would look like—we specify R, then specify that the string we're passing consists in iterations of R. Given a suitable way of encoding things, this will be much shorter than the verbatim bit map. For example, we might encode by first specifying a character to stand for the pattern, then specifying the pattern, then specifying the number of times that the pattern iterates.

The resulting string is just 15 bits long; with this simple encoding scheme, we've reduced the number of characters required to transmit S1-2 by almost 50%. That's a very significant efficiency improvement (and, given the right language, we could almost certainly improve on it even further)[22].

This compressibility criterion is offered by Dennett as a necessary condition on patternhood: to be an instance of a (real) pattern, a data-set must admit of a more compact description than the bitmap. However, as a number of other authors have pointed out[23], this cannot be the whole story; while compressibility is surely a necessary condition on patternhood, it cannot be both necessary and sufficient, at least not if it is to help us do useful work in talking about the world (recall that the ultimate point of this discussion is to articulate what exactly it is that science is doing so that we can see if philosophy has something useful to contribute to the project). Science cannot simply be in the business of finding ways to compress data sets; if that were so, then every new algorithm—every new way of describing something—would count as a new scientific discovery. This is manifestly not the case; whatever it is that scientists are doing, it is not just a matter of inventing algorithm after algorithm. There's something distinctive about the kinds of patterns that science is after, and about the algorithms that science comes up with. In fact, we've already identified what it is: we've just almost lost sight of it as we've descended into a more technical discussion—science tries to identify patterns that hold not just in existing data, but in unobserved cases (including future and past cases) as well. Science tries to identify patterns that are projectable.

How can we articulate this requirement in such a way that it meshes with the discussion we’ve been having thus far? Think, to begin, of our hypothetical recipient of information once again. We want to transmit the contents of S1-2 to a third party. However, suppose that (as is almost always the case) our transmission technology is imperfect—that we have reason to expect a certain degree of signal degradation or information loss in the course of the transmission. This is the case with all transmission protocols available to us; in the course of our transmission, it is virtually inevitable that a certain amount of noise (in the information-theoretic sense of the dual of signal) will be introduced in the course of our message traveling between us and our interlocutor. How can we deal with this? Suppose we transmit the bitmap of S1-2 and our recipient receives the following sequence:

S1-2: 1100010101100010101100??0?01

Some of the bits have been lost in transmission, and now appear as question marks—our interlocutor just isn’t sure if he’s received a one or a zero in those places. How can he correct for this? Well, suppose that he also knows that S1-2 was generated by R. That is, suppose that we’ve also transmitted our compressed version of S1-2. If that’s the case, then our interlocutor can, by following along with R, reconstruct the missing data and fill in the gaps in his signal. This, of course, requires more transmission overall—we have to transmit the bitmap and the pattern-encoding—but in some cases, this might well be worth the cost (for instance, in cases where there is a tremendous amount of latency between signal transmission and signal reception, so asking to have specific digits repeated is prohibitively difficult). This is in fact very close to how error-correcting codes work (alongside protocols like the Transmission Control Protocol, or TCP) to ensure that the vast amount of data being pushed from computer to computer over the Internet reaches its destination intact.
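The repair procedure just described is easy to make concrete. In this sketch, we assume (as in the text) that the recipient knows both R and the phase at which the transmission began; each missing bit is then read off the corresponding position of the R-stream (the function name is mine):

```python
# A sketch of pattern-based error correction: if the recipient knows R
# (and the phase at which the string begins), each lost bit ('?') can
# be filled in by reading the R-stream at that position.
R = "110001010"

def repair(received, phase=0):
    """Replace each '?' with the bit R predicts for that position."""
    return "".join(
        R[(phase + i) % len(R)] if bit == "?" else bit
        for i, bit in enumerate(received)
    )

received = "1100010101100010101100??0?01"
print(repair(received))   # "1100010101100010101100010101"
```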

Ok, but how does this bear on our problem? Next, consider the blanks in the information our interlocutor receives not as errors or miscommunication, but simply as unobserved cases. What our interlocutor has, in this case, is a partial record of S1-2; just as before, he’s missing some of the bits, but rather than resulting from an error in communication, this time we can attribute the information deficit to the fact that he simply hasn’t yet looked at the missing cases. Again, we can construct a similar solution—if he knows R, then just by looking at the bits he does have, our interlocutor can make a reasonable guess as to what the values of his unobserved bits might be. It’s worth pointing out here that, given enough observed cases, our interlocutor need not have learned of R independently: he might well be able to infer that it is the pattern underlying the data points he has, and then use that inference to generate an educated guess about the value of missing bits. If an observer is clever, then, he can use a series of measurements on part of his data-set to ground a guess about a pattern that holds in that data set, and then use that pattern to ground a guess about the values of unmeasured parts of the data set.
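The inference step can be sketched too. In this toy version (again, purely illustrative: real pattern-inference is rarely this simple), the observer tries short candidate periods and keeps the shortest one consistent with every bit he has actually observed, then uses it to guess the unobserved bits:

```python
def infer_period(observed, max_period=12):
    """Return the shortest repeating unit consistent with every observed
    bit ('?' marks unobserved positions), or None if none is found."""
    for p in range(1, max_period + 1):
        slots = [None] * p
        consistent = True
        for i, ch in enumerate(observed):
            if ch == "?":
                continue
            if slots[i % p] is None:
                slots[i % p] = ch
            elif slots[i % p] != ch:
                consistent = False
                break
        if consistent and all(s is not None for s in slots):
            return "".join(slots)
    return None

partial = "11000?010110?010101?00010101"   # three unobserved bits
unit = infer_period(partial)
print(unit)                                 # the inferred pattern
# Use the inferred pattern to guess each unobserved bit:
guesses = {i: unit[i % len(unit)] for i, c in enumerate(partial) if c == "?"}
print(guesses)
```

The observer never sees the missing bits, yet the pattern recovered from the bits he does see grounds a definite guess about each of them.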

At last, then, we’re in a position to say what it is that separates S3 from S0 such that it is reasonable for us to say that R is informative in the former case but not in the latter, despite the fact that neither string is consistent with the hypothesis that R is the pattern underlying its generation. The intuitive way to put the point is to say that R holds approximately in the case of S3 but not in the case of S0, but we can do better than that now: given R, and a restricted set of S3, an observer who is asked to guess the value of some other part of the set will do far better than we’d expect him to if R were totally uninformative—that is, he will be able to make predictions about S3 which, more often than not, turn out to be good ones. In virtue of knowing R, and by measuring the values in one sub-set of S3, he can make highly successful predictions about how other value measurements in the set will turn out. The fact that he will also get things wrong occasionally should not be too troubling; while he’d certainly want to work to identify the exceptions to R—the places in the sequence where R doesn’t hold—just picking out R goes a very long way toward sustained predictive success. Contrast that case with the case of S0: here, knowledge of R won’t help an observer make any inferences about values of unobserved bits. He can learn as much as he wants to about the values of bits before and after a missing bit and he won’t be any closer at all to being able to make an educated guess about the missing data.
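A quick numerical sketch makes the point vivid (every number here is made up for illustration): if a string is generated by R with occasional exceptions, an observer who predicts using R still does dramatically better than one guessing blindly, even though he is sometimes wrong:

```python
import random

random.seed(0)
UNIT = "110001010"      # a stand-in for the pattern R
N = 10_000

# An S3-like string: generated by R, but with occasional exceptions
# where the bit is set at random instead.
s3 = [
    UNIT[i % len(UNIT)] if random.random() > 0.05 else random.choice("01")
    for i in range(N)
]

# Predicting every bit from R alone vs. guessing blindly:
hits_with_R = sum(s3[i] == UNIT[i % len(UNIT)] for i in range(N))
hits_blind = sum(s3[i] == random.choice("01") for i in range(N))

print(f"with R: {hits_with_R / N:.1%}, blind guessing: {hits_blind / N:.1%}")
```

Knowledge of R yields an accuracy far above chance here despite the exceptions, which is exactly the sense in which an approximately-holding pattern remains informative.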

1.4 Fundamental Physics and the Special Sciences

It might be worth taking a moment to summarize the rather lengthy discussion from the last section before we move on to considering how that discussion bears on the larger issue at hand. We started by observing that science is “about the world” in a very particular sense. In exploring what that might mean, I argued that science is principally concerned with identifying patterns in how the world around us changes over time[24]. We then spent some time examining some basic concepts in information theory, and noted that many of the insights in the philosophy of information theory first articulated by Dennett (1991) and later elaborated by other authors fit rather neatly with a picture of science as the study of patterns in the world. We looked at a few problem cases in pattern identification—including patterns that hold only approximately, and data-sets with partial information loss—and argued that even in cases like these, useful information can be gleaned from a close search for patterns; patterns need be neither universal nor perfect in order to be informative. We tried to give an intuitive picture of what we might mean when we say that science looks for patterns that can be projected to unobserved cases. I’d like now to drop the abstraction from the discussion and make explicit the parallel with science that’s been lurking in the background. We should be able to draw on the machinery from Section 1.3 to make our earlier discussion of science more concrete, and to examine specific cases of how this model applies to live science.

Here’s the picture that I have in mind. Scientists are in the business of studying patterns in how the world changes over time. The method for identifying patterns varies from branch to branch of science; the special sciences differ in domain both from each other and from fundamental physics. In all cases, though, scientists proceed by making measurements of certain parts of the world, trying to identify patterns underlying those measurements, and then using those patterns to try to predict how unobserved cases—either future measurements or measurements in a novel spatial location—might turn out. Occasionally, they get a chance to compare those predictions to observed data directly. This is more common in some branches of science than in others: it is far more difficult to verify some of the predictions of evolutionary biology (say, speciation events) by observation than it is to verify some of the predictions of quantum mechanics (say, what state our measurement devices will end up in after a Stern-Gerlach experiment). More frequently, they are able to identify a number of different patterns whose predictions either agree or disagree with one another. Evolutionary biology is a well-confirmed science in large part not because large numbers of speciation events have been directly observed, but because the predictions from other sciences with related domains (e.g. molecular biology)—many of which have been confirmed through observation—are consistent with the predictions generated by evolutionary biologists.

Just as in the case of our toy science in Section 1.3, it seems to me that science generally consists in two separate (but related) tasks: scientists identify a domain of inquiry by picking out a way of carving up the world, and then identify the patterns that obtain given that way of carving things up. This is where the careful discussion from Section 1.3 should be illuminating: not all scientists are interested in identifying patterns that obtain everywhere in the universe—that is, not all scientists are interested in identifying patterns that obtain for all of S. Indeed, this is precisely the sense in which fundamental physics is fundamental: it alone among the sciences is concerned with identifying the patterns that will obtain no matter where in the world we choose to take our measurements. The patterns that fundamental physics seeks to identify are patterns that will let us predict the behavior of absolutely any sub-set of the world—no matter how large, small, or oddly disjunctive—at which we choose to look; it strives to identify patterns that describe the behavior of tiny regions of space-time in distant galaxies, the behavior of the interior of the sun, and the behavior of the Queen of England’s left foot. This is a fantastically important project, but it is by no means the only scientific project worth pursuing[25]. The special sciences are all, to one degree or another, concerned with identifying patterns that hold only in sub-sets of the domain studied by physics. This is not to say that the special sciences all reduce to physics or that they’re all somehow parasitic on the patterns identified by fundamental physics. While I want to avoid engaging with these metaphysical questions as much as possible, it’s important to forestall that interpretation of what I’m saying here. The special sciences are, on this view, emphatically not second-class citizens—they are just as legitimate as fields of inquiry as is fundamental physics. 
Again (and contra Maudlin), the sense of “fundamental” in “fundamental physics” should not be taken to connote anything like ontological primacy or a metaphysically privileged position (whatever that might mean) within the general scientific project. Rather (to reiterate) it is just an indicator of the fact that fundamental physics is the most general part of the scientific project; it is the branch of science that is concerned with patterns that show up everywhere in the world. When we say that other sciences are concerned with restricted sub-sets of the physical world, we just mean that they’re concerned with picking out patterns in some of the systems to which the generalizations of fundamental physics apply[26].

In contrast to fundamental physics, consider the project being pursued by one of the special sciences—say, molecular biology. Molecular biologists are certainly not interested in identifying patterns that hold everywhere in the universe; biologists have relatively little to say about what happens inside the sun (except perhaps to note that the conditions would make it difficult for life to prosper there). They are, instead, concerned with the behavior of a relatively small sub-set of regions of the universe. So far, the patterns they’ve identified have been observed to hold only on some parts of Earth, and that only in the last few billion years.[27] It’s clearly no criticism of molecular biology to point out that it has nothing to say on the subject of what happens inside a black hole—that kind of system is (by design) outside molecular biology’s domain of interest. Just as in the case of S1-2 above, this restriction of domain lets molecular biologists focus their efforts on identifying patterns that, while they aren’t universal, facilitate predictions about how a very large class of physical systems behave.

What exactly is the domain of inquiry with which molecular biology is concerned? That is, how do molecular biologists carve up the world so that the patterns they identify hold of systems included in that carving? It is rather unusual (to put it mildly) for the creation of a domain in this sense to be a rapid, deliberate act on the part of working scientists. It is unusual, that is, for a group of people to sit down around a table (metaphorical or otherwise), pick out a heretofore unexplored part of the world for empirical inquiry, and baptize a new special science to undertake that inquiry. Rather, new sciences seem most often to grow out of gaps in the understanding of old sciences. Molecular biology is an excellent illustration here; the isolation of DNA in 1869—and the subsequent identification of it as the molecule responsible for the heritability of many phenotypic traits—led to an explosion of new scientific problems: what is the structure of this molecule? How does it replicate itself? How exactly does it facilitate protein synthesis? How can it be damaged? Can that damage be repaired? Molecular biology is, broadly speaking, the science that deals with these questions and the questions that grew out of them—the science that seeks to articulate the patterns in how the chemical bases[28] for living systems behave. This might seem unsatisfactory, but it seems that it is the best answer we're likely to get: molecular biology, like the rest of science, is a work-in-progress, and is constantly refining its methodology and set of questions, both in light of its own successes (and failures) and in light of the progress in other branches of the scientific project. Science is (so to speak) alive.

This is an important point, and I think it is worth emphasizing. Science grows up organically as it attempts to solve certain problems—to fill in certain gaps in our knowledge about how the world changes with time—and is almost never centrally planned or directed. Scientists do the best they can with the tools they have, though they constantly seek to improve those tools. The fact that we cannot give a principled answer to the question "what parts of the world does molecular biology study?" should be no bar to our taking the patterns identified by molecular biology seriously. Just as we could not be sure that R, once identified, would hold in any particular segment of S that we might examine, we cannot be sure of precisely what regions of the world will behave in ways that are consistent with the patterns identified by molecular biologists. This is not to say, though, that the molecular biologists have failed to give us any interesting information—as we saw, universality (or even a rigidly defined domain of applicability) is no condition on predictive utility. To put the point one more way: though the special sciences are differentiated from one another in part by their domains of inquiry, giving an exhaustive account of exactly what parts of the world do and don't fall into the domain of a particular science is likely an impossible task. 
Even if it were not, it isn't clear what it would add to our understanding of either a particular science or of science as a whole: the patterns identified by molecular biology are no less important for our not knowing whether they do or don't apply to things other than some of the systems on Earth in the last few billion years; if molecular biology is forced to confront the problem of how to characterize extraterrestrial living systems, it is certainly plausible to suppose that its list of patterns will be revised, or even that an entirely new science will emerge from the realization that molecular biology as thus far conceived is parochial in the extreme. Speculating about what those changes would look like—or what this new special science would take as its domain—though, is of little real importance (except insofar as such speculation illuminates the current state of molecular biology). Like the rest of the sciences, molecular biology takes its problems as they come, and does what it can with the resources it has.

If we can't say for any given special science what exactly its domain is, then, perhaps we can say a bit more about what the choice of a domain consists in—that is, what practical activities of working scientists constitute a choice of domain? How do we know when a formerly singular science has diverged into two? Perhaps the most important choice characterizing a particular science's domain is the choice of what measurements to make, and on what parts of the world. That is: the choice of a domain is largely constituted by the choice to treat certain parts of the world as individuals, and the choice of what measurements to make on those individuals. Something that is treated as an individual by one special science might well be treated as a composite system by another[29]; the distinction between how human brains are treated by cognitive psychology (i.e. as the primary objects of prediction) and how they're treated by neurobiology (i.e. as aggregates of individual neural cells) provides an excellent illustration of this point. From the perspective of cognitive psychology, the brain is an unanalyzed individual object—cognitive psychologists are primarily concerned with making measurements that let them discern patterns that become salient when particular chunks of the physical world (that is: brain-containing chunks) are taken to be individual objects. From the perspective of neurobiology, on the other hand, brains are emphatically not unanalyzed objects, but are rather composites of neural cells—neurobiologists make measurements that are designed to discern patterns in how chunks of the physical world consisting of neural cells (or clusters of neural cells) evolve over time. 
From yet another perspective—that of, say, population genetics—neither of these systems might be taken to be an individual; while a population geneticist might well be interested in brain-containing systems, she will take something like alleles to be her primary objects, and will discern patterns in the evolution of systems from that perspective.

We should resist the temptation to become embroiled in an argument about which (if any) of these individuals are real individuals in a deep metaphysical sense. While it is certainly right to point out that one and the same physical system can be considered either as a brain (qua individual) or a collection of neurons (qua aggregate), this observation need not lead us to wonder which of these ways of looking at things (if either) is the right one. Some patterns are easier to discern from the former perspective, while others are easier to discern from the latter. For the purposes of what we're concerned with here, it seems to me, we can stop with that fact—there is no need to delve more deeply into metaphysical questions. Insofar as I am taking any position at all on questions of ontology, it is one that is loosely akin to Don Ross' "rainforest realism"[30]: a systematized version of Dennett's "stance" stance toward ontology. Ross' picture, like the one I have presented here, depicts a scientific project that is unified by goal and subject matter, though not necessarily by methodology or apparatus. It is one on which we are allowed to be frankly instrumentalist in our choice of objects—our choice of individuals—but still able to be thoroughly realist about the relations that hold between those objects—the patterns in how the objects change over time. This metaphysical position is a natural extension of the account of science that I have given here, and one about which much remains to be said. To engage deeply with it would take us too far afield into the metaphysics of science, though; let us, then, keep our eye on the ball, and content ourselves with observing that there is at least the potential for a broad metaphysical position based on this pragmatically-motivated account of science. Articulating that position, though, must remain a project for another time.

1.5 Summary and Conclusion: Exorcising Feynman's Ghost

The story of science is a story of progress through collaboration: progress toward a more complete account of the patterns in how the world evolves over time via collaboration between different branches of science, which consider different ways of carving up the same world. Individual sciences are concerned with identifying patterns that obtain in certain subsets of the world, while the scientific project in general is concerned with the overarching goal of pattern-based prediction of the world's behavior. Success or failure in this project is not absolute; rather, the identification of parochial or "weak" patterns can often be just as useful as (if not more useful than) the identification of universal patterns. Scientists identify patterns both by making novel measurements on accessible regions of the world and by creating models that attempt to accurately retrodict past measurements. The scientific project is unified in the sense that all branches of science are concerned with the goal of identifying patterns in how the physical world changes over time, and fundamental physics is fundamental in the sense that it is the most general of the sciences—it is the one concerned with identifying patterns that will generate accurate predictions for any and all regions of the world that we choose to consider. Patterns discovered in one branch of the scientific project might inform work in another branch, and (at least occasionally) entirely novel problems will precipitate a novel way of carving up the world, potentially facilitating the discovery of novel patterns; a new special science is born.

We might synthesize the discussions in Section 1.3 and Section 1.4 as follows. Consider the configuration space[31] D of some system T—say, the phase space corresponding to the kitchen in my apartment. Suppose (counterfactually) that we take Newtonian dynamics to be the complete fundamental physics for systems like this one. If that is the case, then fundamental physics provides a set of directions for moving from any point in the phase space to any other point—it provides a map identifying where in the space a system whose state is represented by some point at t0 will end up at a later time t1. This map is interesting largely in virtue of being valid for any point in the space: no matter where the system starts at t0, fundamental physics will describe the pattern in how it evolves. That is, given a list of points [a0, b0, c0, d0, …, z0], fundamental physics gives us a corresponding list of points [a1, b1, c1, d1, …, z1] that the system will occupy after a given time interval has passed. In the language of Section 1.3, we can say that fundamental physics provides a description of the patterns in the time-evolution of the room’s bit map: given a complete specification of the room’s state (in terms of its precise location in phase space) at one time, applying the algorithm of Newtonian mechanics will yield a complete specification of the room’s state at a later time (in terms of another point in phase space).
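As a toy illustration of such a map (a one-dimensional harmonic oscillator standing in, very loosely, for the kitchen; the specific system is my assumption, not the text's), fundamental physics here is literally a function from any phase-space point at one time to the point the system occupies later:

```python
import math

def evolve(x, p, t, m=1.0, k=1.0):
    """Exact Newtonian time-evolution map for a 1-D harmonic oscillator:
    takes ANY phase-space point (x, p) at t0 to the point the system
    occupies a time t later."""
    w = math.sqrt(k / m)
    x1 = x * math.cos(w * t) + (p / (m * w)) * math.sin(w * t)
    p1 = p * math.cos(w * t) - m * w * x * math.sin(w * t)
    return (x1, p1)

# The map is defined for every starting point, not just special ones:
for x0, p0 in [(1.0, 0.0), (-2.3, 0.7), (0.0, 5.0)]:
    print((x0, p0), "->", evolve(x0, p0, t=0.5))
```

The generality on display in the loop is the point: unlike a special-science generalization, the map makes no demand that the starting point fall in any privileged region.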

This is surely a valuable tool, but it is equally surely not the only valuable tool. It might be (and, in fact, is) the case that there are also patterns to be discerned in how certain regions of the phase space evolve over time. That is, we might be able to describe patterns of the following sort: if the room starts off in any point in region P0, it will, after a given interval of time, end up in another region P1. This is, in fact, the form of the statistical-mechanical explanation for the Second Law of Thermodynamics. This is clearly not a description of a pattern that applies to the “bit map” in general: there might be a very large number (perhaps even a continuous infinity) of points that do not lie inside P0, and for which the pattern just described thus just has nothing to say. This is not necessarily to say that the project of identifying patterns like P0 → P1 isn’t one that should be pursued, though. Suppose the generalization identified looks like this: if the room is in a region corresponding to “the kitchen contains a pot of boiling water and a normal human being who sincerely intends to put his hand in the pot[32]” at t0, then evolving the system (say) 10 seconds forward will result in the room’s being in a region corresponding to “the kitchen contains a pot of boiling water and a human being in great pain and with blistering skin.” Identifying these sorts of patterns is the business of the special sciences.
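Here is a toy version of such a region-to-region pattern (a crude two-box "gas" with entirely made-up dynamics, offered only as an illustration): any microstate in region P0 ("all particles in the left box") reliably ends up in region P1 ("particles roughly evenly spread"), even though the P0 → P1 generalization is silent about most other regions of the space:

```python
import random

random.seed(1)

def step(state):
    """One time-step: each particle independently hops to the other box
    with small probability (a crude stand-in for the real dynamics)."""
    return [b if random.random() > 0.1 else 1 - b for b in state]

def in_P0(state):
    """Region P0: all particles in the left box."""
    return all(b == 0 for b in state)

def in_P1(state):
    """Region P1: particles roughly evenly spread between the boxes."""
    frac = sum(state) / len(state)
    return 0.4 < frac < 0.6

state = [0] * 1000          # a microstate in region P0
assert in_P0(state)
for _ in range(100):
    state = step(state)
print(in_P1(state))         # the system has relaxed into region P1
```

The generalization is coarse-grained: it tracks regions rather than individual microstates, and it holds with overwhelming (not logical) reliability, which is just the character of the statistical-mechanical story gestured at above.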

Not all regions will admit of interesting patterns in this way. This is the sense in which some ways of “carving up” a system’s space seem arbitrary in an important way. In a system with a relatively high degree of complexity—very roughly, a system with a relatively high-dimensional configuration space[33]—there will be a very large number of ways of specifying regions such that we won’t be able to identify any interesting patterns in how those regions behave over time. This is the sense in which some objects and properties seem arbitrary in problematic ways: carvings corresponding to (for example) grue-like properties (or bizarre compound objects like “the conjunction of the Queen of England’s left foot and all pennies minted after 1982”) just don’t support very many interesting patterns. Regions picked out by locutions like that don’t behave in ways that are regular enough to make them interesting targets of study. Even in cases like this, though, the patterns identified by fundamental physics will remain reliable: this (again) is the sense in which fundamental physics is fundamental. The behavior of even arbitrarily-specified regions—regions that don’t admit of any parochial patterns—will be predictable by appeal to the bit-map level patterns of fundamental physics.

More precisely, then, the business of a particular special science consists in identifying certain regions of a system’s configuration space as instantiating enough interesting patterns to be worth considering, and then trying to enumerate those patterns as carefully as possible. A new special science emerges when someone notices that there exist patterns in the time-evolution of regions[34] which have heretofore gone unnoticed. The borders of the regions picked out by the special sciences will be vaguely-defined; if the special scientists were required to give a complete enumeration of all the points contained in a particular region (say, all the possible configurations corresponding to “normal human observer with the intention to stick his hand in the pot of boiling water”), then the usefulness of picking out patterns in those regions would be greatly reduced. To put the point another way, there’s a very real sense in which the vagueness of the carvings used by particular sciences is (to borrow from computer science yet again) a feature rather than a bug: it lets us make reliable predictions about the time-evolution of a wide class of systems while also ignoring a lot of detail about the precise state of those systems. The vagueness might lead us to occasionally make erroneous predictions about the behavior of a system, but (as I argued in Section 1.3) this is not at all a fatal criticism of a putative pattern. The progress of a particular special science consists largely in attempts to make the boundaries of its class of carvings as precise as possible, but this notion of progress need not entail that the ultimate goal of any special science is a set of perfectly defined regions. To be a pattern is not necessarily to be a perfect pattern, and (just as with compression algorithms in information theory) we might be happy to trade a small amount of error for a large gain in utility.
The scientific project consists in the identification of as many of these useful region/pattern pairings as possible, and individual sciences aim at careful identification of patterns in the evolution of particular regions[35].

With this understanding of science (and the scientific project more generally) in hand, then, we can return to the question we posed near the beginning of this chapter: how are we to respond to the spirit of Richard Feynman? What's a philosopher to say in his own defense? What do we bring to the scientific table? It should be clear from what we've said thus far that philosophy is not, strictly speaking, a science; philosophy (with a very few exceptions) does not seek to make measurements of the world around us[36], use those measurements to identify patterns in that world, and construct models under which those patterns are projected to future unobserved cases. That is, philosophy is not a science in the way that chemistry, biology, economics, climate science, or (a fortiori) fundamental physics are sciences; there is no set of configuration-space carvings with which philosophy is concerned. However, this does not mean that philosophy is not a part of Science in the sense of contributing to the overall scientific project. How does that relationship work? An analogy might help here. Consider the relationship between commercial airline pilots and the air-traffic controllers working at major metropolitan airports around the world. The kind of specialized knowledge required to operate (say) a Boeing 747 safely—as well as the rather restricted vantage point from which an individual pilot can view the airspace surrounding a port-of-call—leaves little room for coordination between planes themselves.
While some communication is present between pilots, most of the direction comes from the ground—from people who, though they lack the incredibly technical know-how required to fly any one of the planes they support, fulfill a vital role, both in virtue of their position as outsiders with (so to speak) a bird's eye view on the complicated and fast-paced project of moving people in and out of cities via air travel and in virtue of their specialized training as managers and optimizers. Philosophers, I suggest, play a role similar to that of air traffic controllers while scientists play the role of pilots: while it is the pilots who are directly responsible for the success or failure of the project, their job can be (and is) made significantly easier with competent support and direction from the ground. The air traffic controllers cooperate with the pilots to further a shared goal: the goal of moving people about safely. Likewise, philosophers cooperate with scientists to further a shared goal: the goal of identifying genuine projectable patterns in the world around us. If this example strikes you as over-inflating the philosophers' importance—who are we to think of ourselves as controlling anything?—then consider a related case. Consider the relationship between highway transportation qua vehicles and highway transportation qua broad system of technology—a technology in the fourth and last of the senses distinguished by Kline[37].

Think of the highway system in the United States[38]: while the vehicles—cars, trucks, motorcycles, bicycles, and so on—are in some sense the central components of the highway system (without vehicles of some sort, there would be no system to speak of at all), they by no means exhaust the vital components of the system. The highway system as a whole consists of a highly designed, standardized, well-maintained, incredibly diverse set of objects and practices that are just as essential for the smooth transportation of the people using the system as are the vehicles that traverse it: the traffic lights, the signs, the rest stops, the paint on the road, the safety-rails, the traffic cones, and so on are as vital as the cars themselves. Even more saliently for the purposes of our discussion, consider all the knowledge that went into conceptualizing, constructing, and maintaining that system, and the skills and knowledge that must be imparted to each driver before he or she is competent to control a ton of metal and plastic moving at 75 miles per hour: these skills (and the tens of thousands of man-hours behind their conceptualization and implementation) are likewise essential. Think of the actual production and maintenance of those roads, the hundreds of thousands of miles of concrete, construction, and cleanup—as well as the hours of political negotiations and legal regulations and labor disputes that sit behind every mile of that road. Only through the smooth operation of this system as a whole is actual use of the road—the sitting behind the wheel, listening to terrible music, with only some destination in mind—made possible.

If the previous comparison of philosophers to air-traffic controllers seems to elevate philosophy beyond its rightful station, then we might take comfort in the fact that, though we might play the role of the lowly dotted yellow line, this role is still deeply essential to the functioning of the whole. Philosophers are not scientists in just the same way that dotted yellow lines are not cars, or that air-traffic controllers are not pilots, or that traffic engineers are not commuters trying to get to work on time. Like our transportation analogues, though, philosophers have a vital role to play in the scientific project as a whole: a role of coordination, general analysis, optimization, and clarification. We are suited to play this role precisely in virtue of not being scientists: we are uniquely suited (to both carry the transportation theme and echo a famous metaphor of Wilfrid Sellars') to "build bridges" between the activities of individual scientists, and between different branches of the scientific project as a whole. Philosophers are trained to clarify foundational assumptions, note structural similarities between arguments (and problems) that at first glance could not seem more disparate, and to construct arguments with a keen eye for rigor. These skills, while not necessarily part of the scientist's tool-kit, are vital to the success of the scientific project as a whole: if we're to succeed in our goal of cataloging the interesting patterns in the world around us, we need more than just people directly looking for those patterns. We might take this as a special case of Bruno Latour's observation that "the more non-humans share existence with humans, the more humane a collective is"[39], and note that the more non-scientists share in the scientific project, the more scientific the project becomes. Now, let us turn to that project in earnest.

  1. Taylor (2009)
  2. Leiter (2009)
  3. Kingston (2009)
  4. Committee on Science, Engineering, and Public Policy (2004), p. 3
  5. See Zurek (2002), Zurek (2003), and Zurek (2004), respectively.
  6. I suggested that questions we might ask about climate science could be roughly divided into three categories: foundational questions, methodological questions, and evaluative questions. This chapter and the following one will deal with foundational questions. See Section 0.1 for more detail.
  7. The sense in which this is the most urgent question for us should be clear: the chapters that follow this one will constitute what is intended to be a sustained philosophical contribution to the climate change debate. On what basis should this contribution be taken seriously? Why should anyone care what I have to say? If we can't get a clear answer to this question, then all of what follows will be of suspect value.
  8. Ladyman, Ross, Spurrett, and Collier (2007)
  9. Dewey (1929), p.408
  10. The philosophically sophisticated reader might well be somewhat uncomfortable with much of what follows in the next few pages, and might be tempted to object that the observations I’ll be making are either fatally vague, fatally naïve, or both. I can only ask this impatient reader for some patience, and give my assurance that there is a deliberate method behind this naïve approach to philosophy of science. I will argue that if we start from basic facts about what science is—not as a social or professional institution, but as a particular attitude toward the world—how it is practiced both contemporarily and historically, and what it is supposed to do for us, we can short-circuit (or at least sneak by) many of the more technical debates that have swamped the last 100 years of the philosophy of science, and work slowly up to the tools we need to accomplish our larger task here. I ask, then, that the philosophically sophisticated reader suspend his sense of professional horror, and see if the result of our discussion here vindicates my dialectical (and somewhat informal) methodology. I believe it will. See Section 0.2 for a more comprehensive defense of this naïve methodology.
  11. Though it is worth mentioning that considerations of possible worlds, or even considerations of the happenings in Tolkien's Middle Earth, might have a role to play in understanding the actual world. Fiction authors play a central role in the study of human culture: by running detailed "simulations" exploring elaborate hypothetical scenarios, they can help us better understand our own world, and better predict what might happen if certain facets of that world were different from how they in fact are. This, as we will see, is a vital part of what the scientific enterprise in general is concerned with doing.
  12. Some philosophers of science (e.g. van Fraassen) have argued that there is a sense in which we observe what goes on inside the sun. This is an example of the sort of debate that I do not want to enter into here. The question of what counts as observation is, for our purposes, an idle one. I will set it to the side.
  13. The Aharonov-Bohm effect, a surprising quantum mechanical phenomenon in which the trajectory of a charged particle is affected by the electromagnetic potential even when it traverses a region of space where the magnitudes of both the magnetic and electric fields are zero, is another excellent example here. This particular flavor of non-locality implies that the classical Maxwellian formulation of the electromagnetic force as a function of a purely local electrical field and a purely local magnetic field is incomplete. The effect was predicted by the Schrödinger equation years before it was observed, and led to the redefinition of electromagnetism as a gauge theory featuring electromagnetic potentials, in addition to fields. See Aharonov and Bohm (1959). Thanks to Porter Williams for suggesting this case.
  14. The sense of “genuine” here is something like the sense of “real” in Dennett’s “real patterns” (Dennett 1991). I wish to delay questions about the metaphysics of patterns for as long as possible, and so opt for “genuine” rather than the more ontologically-loaded “real.” What it means for a pattern to be “genuine” will become clearer shortly. Again, see Section 0.2 for more on the underlying metaphysical assumptions here.
  15. The sense of 'global' here is the computer scientist's sense—a global pattern is one that holds over the entirety of the data set in question.
  16. Of course, it might not be true that R holds only in S1-2. It is consistent with everything we’ve observed about S so far to suppose that the sub-set S0 and the sub-set S1-2 might be manifestations of an over-arching pattern, of which R is only a kind of component, or sub-pattern.
  17. For more discussion of approximate patterns and their role in science, see Lawhead (2012).
  18. Dennett (1991)
  19. Ladyman, Ross, Spurrett, and Collier (2007)
  20. Dennett (op. cit.), p. 32, emphasis in the original
  21. Citing Chaitin (1975), Dennett (op. cit.) points out that we might actually take this to be the formal definition of a random sequence: there is no way to encode the sequence's information that results in an encoding shorter than the "verbatim" bit map.
  22. All of this can be made significantly more precise given a more formal discussion of what counts as a "good" compression algorithm. Such a discussion is unnecessary for our current purposes, but we will revisit information theory in significantly more detail in Chapter Two. For now, then, let me issue a promissory note to the effect that there is a good deal more to say on the topic of information-content, compression, and patternhood. See, in particular, Section 2.1.3.
  23. Collier (1999) and Ladyman, Ross, Spurrett, and Collier (2007)
  24. A similar view of scientific laws is given in Maudlin (2007). Maudlin argues that scientific laws are best understood as what he calls LOTEs—“laws of temporal evolution.” This is largely consistent with the picture I have been arguing for here, and (not coincidentally) Maudlin agrees that an analysis of scientific laws should "take actual scientific practice as its starting point" (p. 10), rather than beginning with an a priori conception of the form that a law must take. Our point of departure from Maudlin's view, as we shall see, lies in our treatment of fundamental physics. While Maudlin wants to distinguish "FLOTEs" (fundamental laws of temporal evolution) from normal LOTEs on the basis of some claim of "ontological primacy" (p. 13) for fundamental physics, the view I am sketching here requires no such militantly reductionist metaphysics. My view is intended to be a description of what working scientific laws actually consist in, not a pronouncement on any underlying metaphysics.
  25. It is worth pointing out that it is indeed possible that there just are no such patterns in the world: it is possible that all laws are, to a greater or lesser extent, parochial. If that were true, then it would turn out that the goal underlying the practice of fundamental physics was a bad one—there just are no universal patterns to be had. Because of this possibility, the unity of science is an hypothesis to be empirically confirmed or disconfirmed. Still, even its disconfirmation might not be as much of a disaster as it seems: the patterns identified in the course of this search would remain legitimate patterns, and the discovery that all patterns are to some extent parochial would itself be incredibly informative. Many advances are made accidentally in the course of pursuing a goal that, in the end, turns out to not be achievable.
  26. Ladyman, Ross, Spurrett, and Collier (2007) put the point slightly differently, arguing that fundamental physics is fundamental in the sense that it stands in an asymmetric relationship to the rest of science: generalizations of the special sciences are not allowed to contradict the generalizations of fundamental physics, but the reverse is not true; if the fundamental physicists and the biologists disagree, it is the biologist who likely has done something wrong. They call this the “Primacy of Physics Constraint” (PPC). It seems to me that while this is certainly true—that is, that it’s certainly right that the PPC is a background assumption in the scientific project—the way I’ve put the point here makes it clear why the PPC holds.
  27. It’s worth noting, though, that the search for habitable planets outside our own solar system is guided by the patterns identified by biologists studying certain systems here on Earth. This is an excellent case of an application of the kind of projectability we discussed above: biologists try to predict what planets are likely to support systems that are relevantly similar to the systems they study on Earth based on patterns they’ve identified in those terrestrial systems. It remains to be seen whether or not this project will prove fruitful.
  28. This includes not just the bases in the technical sense—nucleic acids—but also other chemical foundations that are necessary for life (e.g. proteins).
  29. We'll explore this point in much more depth in Chapter Two.
  30. See Ross (2000) and Chapter Four of Ladyman et al. (2007), as well as Dennett (1991).
  31. That is, consider the abstract space in which every degree of freedom in T is represented as a dimension in a particular space D (allowing us to represent the complete state of T at any given time by specifying a single point in D), and where the evolution of T can be represented as a set of transformations in D. The phase space of classical statistical mechanics (which has a dimensionality equal to six times the number of classical particles in the system), the Hilbert space of standard non-relativistic quantum mechanics, and the Fock space of quantum field theory (which is the direct sum of the tensor products of standard quantum mechanical Hilbert spaces) are all prime examples of spaces of this sort, but are by no means the only ones. Though I will couch the discussion in terms of phase space for the sake of concreteness, this is not strictly necessary: the point I am trying to make is abstract enough that it should stand for any of these cases.
  32. We can think of the “sincerely intends to put his hand in the pot” as being an assertion about the location of the system when its state is projected onto a lower-dimensional subspace consisting of the configuration space of the person’s brain. Again, this location will (obviously) be a regional rather than a precise one: there are a large number of points in this lower-dimensional space corresponding to the kind of intention we have in mind here.
  33. This is only a very rough gesture at a definition of complexity, but we’re not yet in a position to do better than this. For a more precise discussion of the nature (and significance) of complexity, see Section 2.2.
  34. It might be appropriate to remind ourselves that the regions under discussion here are regions of configuration space, not space-time.
  35. There will often be overlap between the regions studied by one science and the regions studied by another. The “human with his hand in a pot of boiling water” sort of system will admit of patterns from (for example) the perspectives of biology, psychology, and chemistry. That is to say that this sort of system is one that is in a region whose behavior can be predicted by the regularities identified by all of these special sciences, despite the fact that the unique carvings of biology, psychology, and chemistry will be regions with very different shapes. Systems like this one sit in regions whose time-evolution is particularly rich in interesting patterns.
  36. Of course, this is not to dismiss experimental philosophy as a legitimate discipline. Rather, on the view that I am advocating here, traditional experimental philosophy would count as a special science (in the sense described above) in its own right—a special science with deep methodological, historical, and conceptual ties to philosophy proper, but one which is well and truly its own project.
  37. Kline (1985)
  38. I owe this example to conversation with my friend and colleague Daniel Estrada.
  39. Latour (1999)