Lightning in a Bottle/Chapter 3
3.0 Recap and Survey
Let’s take a moment to summarize the relative strengths and weaknesses of the various approaches to defining complexity we considered in the last section; it will help us build a satisfactory definition if we have a clear target at which to aim, and clear criteria for what our definition should do. Here’s a brief recap, then.
The mereological size and hierarchical position measures suffered from parallel problems. In particular, it’s difficult to say precisely which parts we ought to be attending to when we’re defining complexity in terms of mereological size or (similarly) which way of structuring the hierarchy of systems is the right way (and why). Both of these approaches, though, did seem to be tracking something interesting: there does seem to be a sense in which a system’s place in a sort of “nested hierarchy” is a reliable guide to its complexity. All other things being equal, a basic physical system (e.g. a free photon traveling through deep space) does indeed seem less complex than a chemical system (e.g. hydrogen and oxygen atoms combining to form H2O molecules), which in turn seems less complex than a biological system (e.g. an amoeba undergoing asexual reproduction), which seems less complex than a social system (e.g. the global stock market). The problem (again) is that it’s difficult to say why this is the case: the hierarchical and mereological size measures take it as a brute fact that chemical systems are less complex than biological systems, but have trouble explaining that relationship. A satisfactory theory of complexity must both account for the intuitive pull of these measures and deal with the troubling relativism lurking beneath their surfaces.
The Shannon entropy measure suffered from two primary problems. First, since Shannon entropy is an information theoretic quantity, it can only be appropriately applied to things that have the logical structure of messages. To make this work as a general measure of complexity for physical systems, we would have to come up with an uncontroversial way of representing parts of the world as messages generally—a tall order indeed. Additionally, we saw that there doesn’t seem to be a strict correlation between changes in Shannon entropy of messages and the complexity of systems with which those messages are associated. I argued that in order for Shannon entropy to function as a measure of complexity, a requirement called the correlation condition must be satisfied: it must be the case that a monotonic increase in complexity in physical systems is correlated with either a monotonic increase or a monotonic decrease in the Shannon entropy of the message associated with that system. The paradigm case here (largely in virtue of being quite friendly to representation as a string of bits) is the case of three strings of DNA: one that codes for a normal human, one that consists of randomly paired nucleotides, and one that consists entirely of cytosine-guanine pairs. In order for the correlation condition to obtain, it must be the case that the system consisting of either the randomly paired nucleotides (which has an associated message with maximal Shannon entropy) or the C-G pair molecule (which has an associated message with minimal Shannon entropy) is more complex than the system consisting of the human-coding DNA molecule (which has an associated message with Shannon entropy that falls between these two extremes). This is not the case, though: any reasonable measure of complexity should rate a DNA strand that codes for a normal organism as more complex than one that’s either random or homogeneous. The correlation condition thus fails to hold.
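The failure of the correlation condition can be made concrete with a toy sketch (mine, not part of the original argument). The code below estimates block Shannon entropy for three hypothetical strands: a homogeneous all-C strand standing in for the C-G pair molecule, a crudely repetitive “coding-like” strand standing in for structured DNA, and a random strand. The structured strand lands between the two extremes, which is exactly where intuition locates maximal complexity:

```python
import math
import random
from collections import Counter

def block_entropy(s, k):
    """Shannon entropy in bits per symbol, estimated from the frequencies
    of non-overlapping length-k blocks of the string s."""
    blocks = [s[i:i + k] for i in range(0, len(s) - k + 1, k)]
    counts = Counter(blocks)
    n = len(blocks)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / k

random.seed(0)

# Hypothetical stand-ins for the three strands discussed above:
homogeneous = "C" * 9990                                   # one strand of an all-C-G-pair molecule
coding_like = ("ATG" + "GCT" * 3 + "TAA") * 666            # repetitive codon-like structure
random_strand = "".join(random.choice("ACGT") for _ in range(9990))

print(block_entropy(homogeneous, 3))    # 0.0: minimal entropy
print(block_entropy(coding_like, 3))    # intermediate entropy
print(block_entropy(random_strand, 3))  # near the 2-bit-per-base maximum
```

The ordering (homogeneous < coding-like < random) holds for any sensible block size; the point is only that the biologically interesting strand occupies the middle of the entropy range, not either extreme.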
A successful measure of complexity, then, should account for why there seems to be a “sweet spot” in between maximal and minimal Shannon entropy where the complexity of associated systems seems to peak, as well as give an account of how in general we should go about representing systems in a way that lets us appropriately judge their Shannon entropy.
Finally, fractal dimension suffered from one very large problem: it seems difficult to say how we can apply it to judgments of complexity that track characteristics other than spatial shape. Fractal dimension does a good job of explaining what we mean when we judge that a piece of broccoli is more complex than a marble (the broccoli’s fractal dimension is higher), but it’s hard to see how it can account for our judgment that a supercomputer is more complex than a hammer, or that a human is more complex than a chair, or that the global climate system on Earth is more complex than the global climate system on Mars. A good measure of complexity will either expand the fractal dimension measure to make sense of non-geometric complexity, or will show why geometric complexity is just a special case of a more general notion.
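The box-counting idea behind fractal dimension can be made concrete with a small sketch (my illustration, not the author's). It estimates the dimension of a middle-thirds Cantor set and of an ordinary line segment by counting occupied boxes at a fixed scale, using integer arithmetic so the box assignment is exact; the Cantor set comes out near log 2 / log 3 ≈ 0.631, the segment at exactly 1:

```python
import math

def cantor_intervals(depth):
    """Integer left endpoints (in units of 3**-depth) of the intervals
    at the given depth of the middle-thirds Cantor construction."""
    pts = [0]
    for d in range(depth):
        unit = 3 ** (depth - d - 1)
        # Each interval splits into a left and a right third.
        pts = [p for x in pts for p in (x, x + 2 * unit)]
    return pts

def box_dimension(int_points, depth, m):
    """Box-counting estimate log N / log(1/eps) at scale eps = 3**-m,
    where N is the number of scale-m boxes containing at least one point."""
    occupied = {p // 3 ** (depth - m) for p in int_points}
    return math.log(len(occupied)) / math.log(3 ** m)

depth, m = 10, 6
cantor = cantor_intervals(depth)
line = list(range(3 ** depth))   # every point: an ordinary 1-D segment

print(round(box_dimension(cantor, depth, m), 3))  # 0.631, i.e. log 2 / log 3
print(round(box_dimension(line, depth, m), 3))    # 1.0
```

The broccoli/marble comparison in the text is the physical analogue: the more fractional detail survives rescaling, the higher the measured dimension.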
3.1 Dynamical Complexity
With a more concrete goal at which to aim, then, let’s see what we can do. In this section, I will attempt to synthesize the insights in the different measures of complexity discussed above under a single banner—the banner of dynamical complexity. This is a novel account of complexity which will (I hope) allow us both to make sense of our intuitive judgments about complexity and to open the door to making those judgments somewhat more precise. Ultimately, remember, our goal is to give a concept which will allow us to reliably differentiate between complex systems and simple systems such that we can (roughly) differentiate complex systems sciences from simple systems sciences, opening the door to more fruitful cross-talk between branches of science that, prior to the ascription of complexity, seemed to have very little in common with one another. I shall argue that such an understanding of complexity emerges very naturally from the account of science given in Chapter One. I’m going to begin by just laying out the concept I have in mind without offering much in the way of argument for why we ought to adopt it. Once we have a clear account of dynamical complexity on the table, then I’ll argue that it satisfies all the criteria given above—I’ll argue, in other words, that it captures what seems right about the mereological, hierarchical, information-theoretic, and fractal accounts of complexity while also avoiding the problems endemic to those views.
Back in Section 1.5, I said, “In a system with a relatively high degree of complexity—very roughly, a system with a relatively high-dimensional configuration space—there will be a very large number of ways of specifying regions such that we won’t be able to identify any interesting patterns in how those regions behave over time,” and issued a promissory note for an explanation to come later. We’re now in a position to examine this claim, and to (finally) cash that promissory note. First, note that the way the definition was phrased in the last chapter isn’t going to quite work: having a very high-dimensional configuration space is surely not a sufficient condition for complexity. After all, a system consisting of a large number of non-interacting particles may have a very high-dimensional phase space indeed: even given featureless particles in a Newtonian system, the dimensionality of the phase space of a system with n particles will be (recall) 6n. Given an arbitrarily large number of particles, the phase space of a system like this will also be of an arbitrarily large dimensionality. Still, it seems clear that simply increasing the number of particles in a system like that doesn’t really increase the system’s complexity: while it surely makes the system more complicated, complexity seems to require something more. This is a fact that the mereological size measure (especially in Kiesling’s phrasing) quite rightly seizes on: complexity is (at least partially) a fact not just about parts of a system, but about how those parts interact.
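The point about dimensionality can be put in a trivial sketch (mine, not the author's): the phase space of n featureless Newtonian particles has dimension 6n, but when the particles do not interact, the dynamics factor into n independent 6-dimensional problems, so adding particles adds dimensions without adding anything that looks like complexity:

```python
def phase_space_dim(n):
    """Dimension of the classical phase space of n featureless particles:
    3 position + 3 momentum coordinates per particle."""
    return 6 * n

print(phase_space_dim(2), phase_space_dim(10 ** 23))  # grows without bound

def free_step(state, dt=1.0):
    """One Euler step for non-interacting unit-mass particles.
    state: list of (position, momentum) pairs of 3-vectors."""
    return [([q + dt * p for q, p in zip(pos, mom)], mom) for pos, mom in state]

# The dynamics factorize: evolving two particles together is exactly the
# same as evolving each alone, no matter how many particles we add.
a = [([0.0, 0.0, 0.0], [1.0, 2.0, 3.0])]
b = [([5.0, 5.0, 5.0], [0.0, -1.0, 0.0])]
assert free_step(a + b) == free_step(a) + free_step(b)
```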
Let’s start to refine Chapter One’s definition, then, by thinking through some examples, beginning with the one we worked through there: consider a thermodynamically-isolated system consisting of a person standing in a kitchen, deliberating about whether or not to stick his hand in the pot of boiling water. As we saw, a system like this one admits of a large number of ways of carving up the associated configuration space: describing the situation in the vocabulary of statistical mechanics will yield one set of time-evolution patterns for the system, while describing it in the vocabulary of biology will yield another set, and so on. Fundamental physics provides the “bit mapping” from points in the configuration space representing the system at one instant to points in the same space at another instant; the different special sciences, then, offer different compression algorithms by which the state of a particular system can be encoded. Different compressions of the same system will evince different time-evolution patterns, since the encoding process shifts the focus from points in the configuration space to regions in the same space. All of this is laid out in significantly more detail in Chapter One.
Now, consider the difference between the person-stove-water system and the same system, only with the person removed. What’s changed? For one thing, the dimensionality of the associated configuration space is lower; in removing the person from the system, we’ve also removed a very large number of particles. That’s far from the most interesting change, though—in removing the human, we’ve also significantly reduced the number of interesting ways of carving up the configuration space. The patterns identified by (for instance) psychology, biology, and organic chemistry are no longer useful in predicting what’s going to happen as the system evolves forward in time. In order to make useful predictions about the behavior of the system, we’re now forced to deal with it in the vocabulary of statistical mechanics, inorganic chemistry, thermodynamics, or (of course) fundamental physics. This is a very significant change for a number of reasons. Perhaps paramount among them, it changes the kind of information we need to have about the state of the system in order to make interesting predictions about its behavior.
Consider, for instance, the difference between the following characterizations of the system’s state: (1) “The water is hot enough to cause severe burns to human tissue” and (2) “The water is 100 degrees C.” In both cases, we’ve been given some information about the system: in the case of (1), the information has been presented in biological terms, while in the case of (2), the information has been presented in thermodynamic terms. Both of these characterizations will let us make predictions about the time-evolution of the system, but the gulf between them is clear: (2) is a far more precise description of the state of the system, and requires far more detailed information to individuate than does (1). That is, there are far more points in the system’s configuration space that are compatible with (1) than with (2), so individuating cases of (2) from cases of not-(2) requires more data about the state of the system than does individuating cases of (1) from cases of not-(1). This is a consequence of the fact that (as we saw in Chapter One) some special science compressions are more lossy (in the sense of discarding more information, or coarse-graining more heavily) than others: biology is, in general, a more lossy encoding scheme than is organic chemistry. This is (again) a feature rather than a bug: biology is lossy, but the information discarded by biologists is (ideally) information that’s irrelevant to the patterns with which biologists concern themselves. The regions of configuration space that evolve in ways that interest biologists are less precisely defined than the regions of configuration space that evolve in ways that interest chemists, but biologists can take advantage of that fact to (in a sense) do more work with less information. That work, though, will only be useful in a relatively small number of systems—those whose paths remain in a particular region of configuration space during the time period of interest.
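The asymmetry between (1) and (2) can be given toy quantitative form. In the sketch below (a deliberate simplification of mine, with an assumed scald threshold of roughly 50 C that is illustrative rather than medical), a “microstate” is just a temperature reading at 0.01 C resolution. The biological macrostate is then compatible with thousands of microstates, while the precise thermodynamic reading picks out exactly one:

```python
# Toy model: the "microstate" of the water is its temperature to 0.01 C.
microstates = [t / 100 for t in range(0, 12001)]  # 0.00 C .. 120.00 C

# Macrostate (1), in biological vocabulary: hot enough to burn human
# tissue. (The ~50 C threshold is an illustrative assumption.)
burns = [t for t in microstates if t >= 50.0]

# Macrostate (2), in thermodynamic vocabulary: the water is 100 C,
# read here as exactly 100.00 C at the model's resolution.
exactly_100 = [t for t in microstates if t == 100.0]

print(len(burns))        # 7001 microstates are compatible with (1)
print(len(exactly_100))  # 1 microstate is compatible with (2)
```

Real microstates are molecular configurations rather than thermometer readings, of course; the point survives the simplification, since (2) would still carve out a far smaller region of configuration space than (1).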
The significance of this last point is not obvious, so it is worth discussing in more detail. Note, first, that just by removing the human being from this system, we haven’t necessarily made it the case that the biology compression algorithm fails to produce a compressed encoding of the original state: even without a person standing next to the pot of water, generalizations like “that water is hot enough to burn a person severely” can still be made quite sensibly. In other words, the set of points in configuration space that a special science can compress is not necessarily identical to the set of points in configuration space that the same special science can usefully compress; the information that (for instance) the inside of my oven is too hot for infants to live comfortably is really only interesting if there is an infant (or something sufficiently like an infant) in the vicinity of my oven. If there isn’t, that way of describing my oven’s state remains accurate, but ceases to be very relevant in predicting how the system containing the oven will change over time; in order for it to become predictively relevant, I’d need to change the state of the system by adding a baby (or something suitably similar). This is a consequence of the fact that (as we saw in 1.5), the business of the special sciences is two-fold: they’re interested both in identifying novel ways of carving up the world and in applying those carvings to some systems in order to predict their behavior over time. Both of these tasks are interesting and important, but I want to focus on the latter one here—it is analysis of the latter task that, I think, can serve as the foundation for a plausible definition of ‘complexity.’
By removing the person from our example system, we reduce the complexity of that system. This is relatively uncontroversial, I take it—humans are paradigmatic cases of complex systems. My suggestion is that the right way to understand this reduction is as a reduction in the number of predictively useful ways the system can be carved up. This is why the distinction just made between special-scientific compression and useful special-scientific compression is essential—if we were to attend only to shifts that changed a system enough for a particular special science’s compression to fail entirely, then we wouldn’t be able to account for the uncontroversial reduction of complexity that coincides with the removal of the human from our kitchen-system. After all, as we just saw, the fact that the compression scheme of biology is useless for predicting the behavior of a system doesn’t imply that the compression scheme of biology can’t be applied to that system at all. However, removing the person from the system does render a large number of compression schemes predictively useless, whether or not they still could be applied: removing the person pushes the system into a state for which the patterns identified by (e.g.) biology and psychology don’t apply, whether or not the static carvings of those disciplines can still be made.
This fact can be generalized. The sense in which a system containing me is more complex (all other things being equal) than is a system containing my cat instead of me is just that the system containing me can be usefully carved up in more ways than the system containing my cat. My brain is more complex than my cat’s brain in virtue of there being more ways to compress systems containing my brain such that the time-evolution of those states can be reliably predicted than there are ways to compress systems containing my cat’s brain such that the same is true. The global climate today is more complex than was the global climate 1 billion years ago in virtue of there being more ways to usefully carve up the climate system today than there were 1 billion years ago. Complexity in this sense, then, is a fact not about what a system is made out of, or how many parts it has, or what its shape is: it is a fact about how it behaves. It is a dynamical fact—a fact about how many different perspectives we can usefully adopt in our quest to predict how the system will change over time. One system is more dynamically complex than another if (and only if) it occupies a point in configuration space that is at the intersection of regions of interest to more special sciences: a system for which the patterns of economics, psychology, biology, chemistry, and physics are predictively useful is more complex than one for which only the patterns of chemistry and physics are predictively useful.
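One way to make “number of predictively useful ways the system can be carved up” concrete in miniature: call a coarse-graining predictively useful when the induced coarse dynamics are still deterministic. That is a deliberately strong simplification of the “reliable patterns” idea in the text, and the systems below are hypothetical, but the sketch shows how two systems over the same twelve states can differ in how many carvings pay their way:

```python
def is_useful(carve, step, states):
    """A carving (a map from states to cell labels) counts as 'predictively
    useful' here iff the next cell is a deterministic function of the
    current cell under the dynamics `step`."""
    seen = {}
    for s in states:
        cell, nxt = carve(s), carve(step(s))
        if seen.setdefault(cell, nxt) != nxt:
            return False
    return True

states = range(12)
carvings = [lambda s, k=k: s % k for k in (2, 3, 4, 6, 12)]

# System A: a regular "clock" dynamics on twelve states.
step_a = lambda s: (s + 1) % 12

# System B: an arbitrary but fixed shuffle of the same twelve states.
perm = [4, 9, 1, 6, 11, 3, 8, 0, 5, 10, 2, 7]
step_b = lambda s: perm[s]

useful_a = sum(is_useful(c, step_a, states) for c in carvings)
useful_b = sum(is_useful(c, step_b, states) for c in carvings)
print(useful_a, useful_b)  # 5 1: the clock admits more useful carvings
```

Every mod-k carving compresses the clock without losing predictive grip; only the trivial finest-grained carving survives the shuffle. On the toy operationalization, system A is the more dynamically complex of the two.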
3.2 Dynamical Complexity as a Unifying Definition
I have now given a definition of dynamical complexity. Before we close this theoretical discussion and move on to consider the special problems faced by climate science as a complex science, it’s worth briefly reviewing the attempted definitions of complexity we surveyed in Section 2.1 to see how dynamical complexity fares as a unifying definition of complexity. In this section, I will argue that dynamical complexity succeeds in cherry-picking the best features of the mereological size measure, the hierarchical position measure, the information-theoretic measure, and the fractal dimension measure, while avoiding the worst difficulties of each of them. Let’s begin with the mereological size measure.
As I mentioned above, one of the strongest virtues of the mereological size measure is that (at least in its better formulations) it attends to the fact that complexity is a concept that deals not with static systems, but with dynamic systems—with systems that are moving, changing, and exchanging information with their environments. Strevens, for instance, emphasizes not only the presence of many parts in a complex system, but also the fact that those parts interact with one another in a particular way. This is an insight that is clearly incorporated into dynamical complexity: since dynamical complexity deals with the number of different ways of carving configuration space that yield informative time-evolution patterns for a given system, the presence of interacting constituent parts is indeed, on this view, a great contributor to complexity. Why? Well, what does it mean to say that a system is “composed” of a large number of interacting parts? It means (among other things) that the system can be fruitfully redescribed in the language of another science—the one that carves configuration space in terms of whatever the parts for this particular system are. To say that the human body is composed of many interacting cells, for instance, is just to say that we can either treat the body as an individual (as, say, evolutionary biology might) and make use of the patterns that can be identified in the behavior of systems like that, or treat it as a collection of individual cells (as a cellular biologist might) and predict its behavior in terms of those patterns. Systems which can appropriately be said to be made out of many parts are often systems which can be treated by the vocabulary of multiple branches of the scientific project. 
Moreover, since we’re tying dynamical complexity not to composition but behavior, we don’t need to answer the uncomfortable questions that dog the avid proponent of the mereological size measure—we don’t need to say, for instance, which method of counting parts is the right one. Indeed, the existence of many different ways to count the parts of a system is something that dynamical complexity can embrace whole-heartedly—the fact that the human body can be seen as a collection of organs, or cells, or molecules straightforwardly reflects its status as a complex system: there are many different useful ways to carve it up, and many interesting patterns to be found in its time-evolution.
This leads directly into the hierarchical position measure. Here too the relationship to dynamical complexity is fairly clear. What does it mean to say that one system is “nested more deeply in the hierarchy?” It means that the system can be described (and its behavior predicted) in the language of more branches of science. The central mistake of previous attempts to make this notion precise, I think, lies in thinking of this “nestedness” as hierarchical in the traditional linear sense: of there being strict pyramidal structure to the relationship between the various branches of science. In Oppenheim and Putnam’s formulation, for instance, physics was at the bottom of the pyramid, then chemistry, then biology, then psychology, then sociology. The assumption lurking behind this model is that all systems described by chemistry can also be described by physics (true enough, but only in virtue of the fact that the goal of physics is to describe all systems), all systems described by biology can also be described by chemistry (probably also true), that all systems that can be described by psychology can also be described by biology (possibly not true), and that all systems described by sociology can also be described by psychology (almost certainly not true). The last two moves look particularly suspect, as they rule out a priori the possibility of non-biological systems that might be usefully described as psychological agents, or the possibility of systems that cannot be treated by psychology, and yet whose behavior can be fruitfully treated by the social sciences.
Dynamical complexity escapes from this problem by relaxing the pyramidal constraint on the relationship between the various branches of science. As I argued in Chapter One, the intersections between the domains of the various sciences are likely to be messy and complicated: while many psychological systems are in fact also biological systems, there may well be psychological systems which are not—the advent of sophisticated artificial intelligence, for instance, would give rise to systems that might be fruitfully studied by psychologists but not by biologists. This is a problem for someone who wants to embrace position in a strict hierarchy as a measure of complexity: there may be no strict hierarchy to which we can appeal. Dynamical complexity cheerfully acknowledges this fact, and judges complexity on a case-by-case basis, rather than trying to pronounce on the relative complexity of all biological systems, or all psychological systems.
What aspects of fractal dimensionality does dynamical complexity incorporate? To begin, it might help to recall why fractal dimensionality by itself doesn’t work as a definition of complexity. Most importantly, recall that fractal dimensionality is a static notion—a fact about the shape of an object—not a dynamical one. We’re interested in systems, though, not static objects—science deals with how systems change over time. On the face of it, fractal dimensionality doesn’t have the resources to deal with this: it’s a geometrical concept properly applied to shapes. Suppose, however, that we think not about the geometry of a system, but about the geometry of the space representing the system. Perhaps we can at least recover self-similarity and see how complexity is a fractal-like concept.
Start with the normal configuration space we’ve been dealing with all along. From the perspective of fundamental physics, each point in the space represents an important or interesting distinction: fundamental physics is a bit-map from point-to-point. When we compress the configuration space for treatment by a special science, though, not all point differences remain relevant—part of what it means to apply a particular special science is to treat some distinctions made by physics as irrelevant given a certain set of goals. This is what is meant by thinking of the special sciences as coarse-grainings of fundamental physics.
Suppose that instead of thinking of the special sciences as providing compressed versions of the space provided by fundamental physics, though, we take the view offered in Chapter One: we can think of a special science as defining a new configuration space for the system. What were formerly regions in the very high-dimensional configuration space defined by fundamental physics can now be treated as points in a lower dimensional space defined by the special science in question. It is tempting to think that both these representations—the special sciences as coarse-graining and the special sciences as providing entirely novel configuration spaces—are predictively equivalent, but this is not so.
The difference is that the second way of doing things actually makes the compression—the information loss—salient; it isn’t reversible. It also (and perhaps even more importantly) emphasizes the fact that the choice of a state-space involves more than choosing which instantaneous states are functionally equivalent—it involves more than choosing which collections of points (microstates) in the original space to treat as macrostates. The choice of a state-space also constitutes a choice of dynamics: for a system with a high degree of dynamical complexity, there are a large number of state spaces which evince not only interesting static detail, but interesting dynamical detail as well. Thinking of (say) a conscious human as being at bottom a system that’s only really completely describable in the state space of atomic physics eclipses not just the presence of interesting configurations of atomic physics’ particles (interesting macrostates), but also the presence of interesting patterns in how those configurations change over time: patterns that might become obvious, given the right choice of state space. Choosing a new state space in which to describe the same system can reveal dynamical constraints which might otherwise have been invisible.
We can think of the compression from physics to (say) chemistry, then, as resulting in a new configuration space for the same old system—one where points represent regions of the old space, and where every point represents a significant difference from this new (goal-relative) perspective, with the significance stemming from both the discovery of interesting new macrostates and interesting new dynamics. This operation can be iterated for some systems: biology can define a new configuration space that will consist of points representing regions of the original configuration space. Since biology is more “lossy” than chemistry (in the sense of discarding more state-specific information in favor of dynamical shortcuts), the space representing a system considered from a biological perspective will be of a still lower dimensionality than the space representing the same system from a chemical perspective. The most dynamically complex systems will be those that admit of the most recompressions—the ones for which this creation of a predictively-useful new configuration space can be iterated the most. After each coarse-graining, we’ll be left with a new, lower-dimensional space wherein each point represents an importantly different state, and wherein different dynamical patterns describe the transition from state to state. That is, repeated applications of this procedure will produce increasingly compressed bitmaps, with each compression also including a novel set of rules for evolving the bitmap forward in time.
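A minimal illustration of iterated recompression (my toy example, not the author's): start from every configuration of six two-valued parts, compress all the configurations sharing a total “magnetization” into single points, then compress again down to the sign of that total. Each pass yields a strictly smaller space in which every point stands for a region of the previous space:

```python
from itertools import product

# Level 0: "fundamental physics" -- every configuration of 6 two-valued parts.
micro = list(product((-1, 1), repeat=6))

# Level 1: a "chemistry-like" recompression -- each region of level-0 states
# sharing a total magnetization becomes a single point.
level1 = {sum(m) for m in micro}

# Level 2: a "biology-like" recompression of level 1 -- keep only the sign.
level2 = {(m > 0) - (m < 0) for m in level1}

print(len(micro), len(level1), len(level2))  # 64 7 3
```

Each level would also carry its own transition rules in a full model; the sketch only displays the shrinking state-spaces, which is the structural point at issue.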
We can think of this operation as akin to changing magnification scale with physical objects that display fractal-like statistical self-similarity: the self-similarity here, though, is not in shape but in the structure and behavior of different abstract configuration spaces: there’s interesting detail, but rather than being geometrically similar, it is dynamically similar. Call this dynamical self-similarity. Still, there’s a clear parallel to standard statistical self-similarity: fractal dimension for normal physical objects roughly quantifies how much interesting spatial detail persists between magnification operations, and how much magnification one must do to move from one level of detail to another. Similarly, dynamical complexity roughly quantifies how much interesting detail there is in the patterns present in the behavior of the system (rather than in the shape of the system itself), and how much coarse-graining (and what sort) can be done while still preserving this self-similarity of detail. This allows us to recover and greatly expand some of the conceptual underpinnings of fractal dimensionality as a measure of complexity—indeed, it ends up being one of the more accurate measures we discussed.
3.3 Effective Complexity: The Mathematical Foundation of Dynamical Complexity
Finally, what of Shannon entropy? First, notice that this account of dynamical complexity also gives us a neat way of formalizing the state of a system as a sort of message so that its Shannon entropy can be judged: the state of a system is represented by its position in configuration space, and facts about how the system changes over time are represented as patterns in how that system moves through configuration space. All these facts can easily be expressed numerically. The deeper conceptual problem with Shannon entropy remains, though: if the correlation condition fails (which it surely still does), how can we account for the fact that there does seem to be some relationship between Shannon entropy and dynamical complexity? That is, given that there is no strict, linear correlation between changes in dynamical complexity and changes in Shannon entropy, how do we explain the apparent “sweet spot”—the fact that middling Shannon entropy seems to correspond to maximal complexity in the associated system?
In other words, identifying complexity with incompressibility leads to an immediate conflict with our intuitions. A completely random string—a string with no internal structure or correlation between individual bits—will, on this account, be said to be highly complex. This doesn’t at all accord with our intuitions about what complex systems look like; whatever complexity is, a box of gas at perfect thermodynamic equilibrium sure doesn’t have it. This observation has led a number of information theorists and computer scientists to look for a refinement on the naïve information-theoretic account. A number of authors have been independently successful in this attempt, and have produced a successor theory called “effective complexity.” Let’s get a brief sense of the formalism behind this view (and how it resolves the problem of treating random strings as highly complex), and then examine how it relates to the account of dynamical complexity given above.
The central move from the information-content account of complexity that’s built on the back of Shannon entropy to the notion of effective complexity is analogous to the move from thinking about particular strings to thinking about ensembles of strings. One way of presenting the standard Shannon account of complexity associates the complexity of a string with the length of the shortest computer program that will print the string, and then halt. The incompressibility problem is clear here as well: the shortest computer program that will print a random string just is the random string. When we say that a string S is incompressible, we’re saying (among other things) that “Print S” is the shortest possible program that will reproduce S. Thus, a maximally random (incompressible) string of infinite length is infinitely complex, as the shortest program that produces it is just the string itself.
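Incompressibility in the “Print S” sense can be observed directly with an off-the-shelf compressor. The sketch below uses zlib as a practical stand-in for the shortest-program idea (a real compressor only gives an upper bound on algorithmic information content, but the contrast is stark enough to make the point):

```python
import random
import zlib

random.seed(0)
n = 100_000

# A highly patterned string compresses to a tiny fraction of its length...
structured = (b"to be or not to be " * (n // 19 + 1))[:n]

# ...while a random string admits essentially no shorter description.
noise = bytes(random.getrandbits(8) for _ in range(n))

print(len(zlib.compress(structured, 9)))  # a few hundred bytes
print(len(zlib.compress(noise, 9)))       # roughly n bytes: incompressible
```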
Suppose that rather than think of individual strings, though, we shift our attention to ensembles of strings that share certain common features. In the language of Gell-Mann and Lloyd, suppose that rather than think about the shortest program that would reproduce our target string exactly, we think about the shortest program that would reproduce the ensemble of strings which “best represents” the target string. Gell-Mann argues that the best representative of a random string is the uniform ensemble—that is, the ensemble of strings that assigns all possible strings equal probability. This is supposed to resolve the compressibility issues in the traditional information-theoretic account of complexity. It’s easy to see why: suppose we want to print a random string of length n. Rather than printing n characters directly, Gell-Mann proposes that we instead write a program that prints a random character n times. The program to do this is relatively short, and so the effective complexity of a random string will rate as being quite low, despite the fact that individual random strings are incompressible. Gell-Mann is capitalizing on a higher-order regularity: the fact that all random strings are, in a certain respect, similar to one another. While there’s no pattern to be found within each string, this higher-order similarity lets us produce a string that is in some sense “typical” of its type with relative ease.
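Gell-Mann's contrast between the literal program and the ensemble-generating program can be seen in description lengths. In this loose sketch (mine; the length of the program text is standing in for algorithmic information content), the literal program grows with the string while the ensemble program does not:

```python
import random

n = 10_000
random.seed(7)
target = "".join(random.choice("01") for _ in range(n))

# For an incompressible string, the shortest faithful program is
# essentially "print the string itself," so its length grows with n...
literal_program = 'print("' + target + '")'

# ...while a program that prints a *typical member of the ensemble*—n
# random bits—stays short no matter how large n gets.
ensemble_program = (
    "import random\n"
    f'print("".join(random.choice("01") for _ in range({n})))'
)

print(len(literal_program))   # grows linearly with n
print(len(ensemble_program))  # a few dozen characters, whatever n is
```

The ensemble program does not reproduce the target string, of course; that is precisely Gell-Mann's point. It reproduces the ensemble that best represents the target, and for a random target that ensemble is cheap to specify.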
Conversely, a string with a certain sort of internal structure—one with a large number of patterns—is a member of a far more restricted ensemble. The collected work of Shakespeare (to use one of Gell-Mann’s own examples) rates as highly complex because it (considered as a single string) is a member of a very small ensemble of relevantly similar strings. There is very little (if anything) in Shakespeare that is well-captured by the uniform ensemble; the information, to a very large degree, is specialized, regular, and non-incidental.
In other words, the effective complexity of a string is the algorithmic information content of the ensemble that “best represents” the string. If the ensemble is easy to produce (as in the case of both a random string and an entirely uniform string), then any string belonging to that ensemble is itself low in effective complexity. If the ensemble is difficult to produce (that is, requires a lengthy program), then any string that is a member of that ensemble is high in effective complexity. This resolves the central criticism of the algorithmic information content (i.e. Shannon) approach to defining complexity, and seems to accord better with our intuitions about what should and should not count as complex.
What, then, is the relationship between effective complexity and dynamical complexity? Moreover, if effective complexity is the right way to formalize the intuitions behind complexity, why is this the case? What’s the physical root of this formalism? To answer these questions, let’s look at one of the very few papers yet written that offers a concrete criticism of effective complexity itself. McAllister (2003) criticizes Gell-Mann’s formulation on the grounds that, when given a physical interpretation, effective complexity is troublingly observer-relative. This is a massively important point (and McAllister is entirely correct), so it is worth quoting him at length here:
The concept of effective complexity has a flaw, however: the effective complexity of a given string is not uniquely defined. This flaw manifests itself in two ways. For strings that admit a physical interpretation, such as empirical data sets in science, the effective complexity of a string takes different values depending on the cognitive and practical interests of investigators. For strings regarded as purely formal constructs, lacking a physical interpretation, the effective complexity of a given string is arbitrary. The flaw derives from the fact that any given string displays multiple patterns, each of which has a different algorithmic complexity and each of which can, in a suitable context, count as the regularity of the string.
For an example, consider a data set on atmospheric temperature. Such a data set exhibits many different patterns (Bryant 1997). These include a pattern with a period of a day, associated with the earth’s rotation about its axis; patterns with periods of a few days, associated with the life span of individual weather systems; a pattern with a period of a year, associated with the earth’s orbit around the sun; a pattern with a period of 11 years, attributed to the sunspot cycle; a pattern with a period of approximately 21,000 years, attributed to the precession of the earth’s orbit; various patterns with periods of between 40,000 and 100,000 years, attributed to fluctuations in the inclination of the earth’s axis of rotation and the eccentricity of the earth’s orbit; and various patterns with periods of between 10^7 and 10^9 years, associated with variations in the earth’s rate of rotation, the major geography of the earth, the composition of the atmosphere, and the characteristics of the sun. Each of these patterns has a different algorithmic complexity and is exhibited in the data with a different noise level. Any of these patterns is eligible to be considered as the regularity of the data set. Depending on their cognitive and practical interests, weather forecasters, meteorologists, climatologists, palaeontologists, astronomers, and researchers in other scientific disciplines will regard different patterns in this series as constituting the regularity in the data. They will thus ascribe different values to the effective complexity of the data set.
McAllister’s observations are acute: this is indeed a consequence of effective complexity. I think McAllister is wrong in calling this a fatal flaw (or even a criticism) of the concept, though, for reasons that should be relatively obvious. The central thrust of McAllister’s criticism is that it is difficult to assign a determinate value to the effective complexity of any physical system, as that system might contain a myriad of patterns, and thus fail to be best represented by any single ensemble. The question of what effective complexity we assign a system will depend on what string we choose to represent the system. That choice, in turn, will depend on how we carve the system up—it will depend on our choice of which patterns to pay attention to. Choices like that are purpose-relative; as McAllister rightly says, they depend on our practical and cognitive interests.
Given the account of science I developed in Chapter One, though, this is precisely what we should expect out of a concept designed to describe the relationship between how different branches of science view a single physical system. There’s no single correct value for a system’s effective complexity, because there’s no single correct way to carve up a system—no single way to parse it into a string of patterns. Far from making us think that effective complexity gets it wrong, then, this should lead us to think that effective complexity gets things deeply right: the presence of a plurality of values for the effective complexity of a system reflects the methodological plurality of the natural sciences.
McAllister suggests that we might instead sum the different values to obtain a final figure, but his proposal is limited to summing over complexity as defined by algorithmic information content. Because he takes the observer-relative element in effective complexity to be a fatal flaw in the concept, he doesn’t consider the possibility that we might obtain a more useful value by summing over the effective complexity values for the system.
My proposal is that dynamical complexity, properly formalized, is precisely this: a sum of the effective complexity values for the different strings representing the different useful carvings of the system. While there is no single value for effective complexity, we can perfectly coherently talk about summing the values associated with all the carvings that are useful given our goals and values. The value of this sum will change as we make new scientific discoveries (as we discover new patterns in the world that are worth paying attention to), but this again just serves to emphasize the point from Chapter One: the world is messy, and science is hard. Complexity theory is part of the scientific project, and so inherits all the difficulties and messiness of the rest of the project.
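The proposal can be stated in a single line of code. To be clear, the carving names and numerical values below are entirely hypothetical placeholders; nothing in the text fixes actual effective-complexity values for any real system.

```python
def dynamical_complexity(effective_complexities):
    """Sum the effective complexities of the useful carvings of a system.

    `effective_complexities` maps each useful carving (each way of
    representing the system as a string) to the effective complexity of
    that representation. The keys and values supplied below are invented
    purely for illustration.
    """
    return sum(effective_complexities.values())

carvings = {
    "thermodynamic": 12.0,  # hypothetical value
    "chemical": 40.0,       # hypothetical value
    "biological": 95.0,     # hypothetical value
}
print(dynamical_complexity(carvings))  # 147.0
```

Note that the dictionary of carvings is itself a moving target: as new useful carvings are discovered, new entries appear and the sum changes, which is exactly the behavior the text describes.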
Dynamical complexity, in other words, offers a natural physical interpretation for the formalism of effective complexity, one that takes into account the multiplicity of ways that physical systems can be described. It offers a natural way to understand how the abstraction described by Gell-Mann and others relates to the actual practice of scientists. The conceptual machinery underwriting the account of science developed in this chapter and the last gives us an intuitive picture of complexity and its place in science; the formalism of effective complexity makes that intuitive picture precise.
2.3 Conclusion, Summary, and the Shape of Things to Come
In the previous chapter, we examined several different ways that “complexity” might be defined. We saw that each attempt seemed to capture something interesting about complexity, but each also faced serious problems. After arguing that none of these definitions by itself was sufficient to yield a rigorous understanding of complexity, I introduced a new concept: dynamical complexity. This chapter has consisted in a sustained description of that concept, and an argument for its role as a marker of the kind of complexity we’re after when we’re doing science. The insight at the heart of dynamical complexity is that complexity, at least as it concerns science, is a feature of active, changing, evolving systems. Previous attempts to define complexity have overlooked this fact to one degree or another, and have tried to account for complexity primarily in terms of facts about the static state of a system. Dynamical complexity, on the other hand, tracks facts about how systems change over time, and (moreover) embraces the notion that change over time can be tracked in numerous different ways, even for a single system. If our account of science from Chapter One is right (if science is the business of identifying new ways to carve up the world such that different patterns in how the world changes over time become salient), then dynamical complexity is a concept that should be of great interest to working scientists, since it captures (in a sense) how fruitful (and how difficult) scientific inquiry into the behavior of a given system is likely to be. Finally, we saw how the formalism of effective complexity dovetails very naturally with the intuitive conceptual machinery developed here and in Chapter One. I argued that summing over the effective complexities of different representations of the same system offers a way to quantify the dynamical complexity of the system. This value will be a moving target, and will be observer- (and goal-) relative to some degree.
This should trouble us no more than the observation that the choice of which patterns we pay attention to in science is goal-relative, since both stem from precisely the same features of the scientific project.
In Chapter Four, we will leave foundational questions behind and move on to considering some methodological questions relevant to climate science. We’ll introduce the basics of climatology and atmospheric science, and examine the difficulties involved in creating a working model of the Earth’s climate. From there, we will consider the particular challenges that climate science faces, given that it explicitly deals with a system of high dynamical complexity, and think about how those challenges have been met in different fields. We’ll examine why it is that scientists care about dynamical complexity, and what can be learned by assessing the dynamical complexity of a given system. In Chapter Five, I’ll synthesize the two threads that have, up to that point, been pursued more-or-less in parallel and argue that the global climate is a paradigmatic dynamically complex system. We’ll examine how that fact has shaped the methodology of climate science, as well as how it has given rise to a number of unique problems for climatologists to tackle. I shall argue that the markedly high degree of dynamical complexity in the global climate system is best dealt with by strongly interdisciplinary scientific inquiry, and that a failure to recognize the role that dynamical complexity plays in shaping the practices of some branches of science is what has led to most of the general criticism faced by climate science. In Chapter Six, we’ll look at one case in particular, Michael Mann’s “hockey stick” prediction, and see how the criticisms leveled at Mann often result from a failure to understand the special problems faced by those studying dynamically complex systems. Finally, in Chapter Seven, we’ll examine the political controversy surrounding climate science, assess various recommended responses to anthropogenic climate change, and examine the role that complexity-theoretic reasoning should play in the policy-making process. Onward, then.
- Equivalently, we might say that a system like this admits of a very large number of interesting configuration spaces; there are very many ways that we might describe the system such that we can detect a variety of interesting time-evolution patterns.
- That is, the information has been presented in a way that assumes that we’re using a particular state-space to represent the system.
- That is, there are far fewer possible states of the system compatible with (2) than there are states compatible with (1).
- This does not necessarily mean that the associated measurements are operationally more difficult to perform in the case of (2), though—how difficult it is to acquire certain kinds of information depends in part on what measurement tools are available. The role of a thermometer, after all, is just to change the state of the system to one where a certain kind of information (information about temperature) is easier to discern against the “noisy” information-background of the rest of what’s going on in the system. Measurement tools work as signal-boosters for certain classes of information.
- Of course, these two interests are often mutually-reinforcing. For a particularly salient example, think of the search for extraterrestrial life: we need to both identify conditions that must obtain on extrasolar planets for life to plausibly have taken hold and, given that identification, try to predict what sort of life might thrive on one candidate planet or another.
- If this assertion seems suspect, consider the fact that patterns identified by economists (e.g. the projected price of fossil fuels vs. the projected price of cleaner alternative energies) are now helpful in predicting the evolution of the global climate. This was clearly not the case one billion years ago, and (partially) captures the sense in which humanity’s emergence as a potentially climate-altering force has increased the complexity of the global climate system. This issue will be taken up in great detail in Chapter Three.
- Strevens (Ibid)
- Indeed, it was my reading of the canonical articulation of the hierarchical scheme—Oppenheim and Putnam (1958)—that planted the seed which eventually grew into the position I have been defending over the last 60 pages.
- Op. cit.
- This is the worry that leads Dennett to formulate his “intentional stance” view of psychology. For more discussion of this point, see Dennett (1991).
- Social insects—bees and ants, for instance—might even be an existing counterexample here. The fascinating discussion in Gordon (2010) of ant colonies as individual “superorganisms” lends credence to this view. Even if Earthly ants are not genuine counterexamples, though, such creatures are surely not outside the realm of possibility, and ought not be ruled out on purely a priori grounds.
- Note that it isn’t right to say “regions of chemistry’s configuration space.” That would be to implicitly buy into the rigid hierarchical model I attributed to Oppenheim and Putnam a few pages back, wherein all biology is a sub-discipline of chemistry, psychology is a sub-discipline of biology, and so on. That won’t do. Many of the points might well correspond to regions of the “one step lower” space, but not all will.
- We will consider how this quantification works in just a moment. There is a mathematical formalism behind all of this with the potential to make things far more precise.
- A system like that could be appropriately represented as a random string, as part of what it means for a system to be at thermodynamic equilibrium is for it to have the maximum possible entropy for a system constituted like that. Translated into a bit-string, this yields a random sequence.
- Gell-Mann and Lloyd (2003). See also Foley and Oliver (2011).
- Ibid. pp. 303-304
- In addition, his choice to use climate science as his leading example here is very interesting, given the overall shape of the project we’re pursuing here. Chapter Five will consider the ramifications of this discussion for the project of modeling climate systems, and Chapter Seven will deal with (among other things) the policy-making implications. For now, it is more important to get a general grasp on the notion of effective complexity (and dynamical complexity).