Lightning in a Bottle/Chapter 4



Chapter Four

A Philosopher’s Introduction to Climate Models


4.0 What Have We Gotten Ourselves Into?

As usual, let’s begin by briefly reviewing where we are in our overall discussion, with an eye toward how to proceed from here. The last two chapters have focused very heavily on the details of certain aspects of complexity theory, and it might be easy to lose sight of our overall goal. In Chapter Two, I presented a primer on complex systems theory and surveyed various attempts to reduce the notoriously slippery notion of complexity itself to various proxy concepts, including mereological size, chaotic behavior, algorithmic incompressibility, fractal dimension, Shannon entropy, and hierarchical position. I argued (convincingly, I hope) that none of these definitions precisely captures the intuition behind complexity and, moreover, that the nature of complexity is such that no single unifying definition is likely to be forthcoming. Rather, we should aim at a constellation of related notions of complexity, each of which is tailored to the different purposes toward which complexity theory might be used. I proposed the concept of dynamical complexity as best capturing the aspects of the varied proxy concepts we considered that are most relevant to scientists seeking to understand active, dynamical complex systems in the natural world (as opposed to, say, those interested in studying aspects of abstract signals), and argued that dynamical complexity can plausibly be taken as a physical interpretation of the existing mathematical framework of effective complexity. A system’s dynamical complexity, recall, is a fact about the pattern-richness of the system’s location in the configuration space defined by fundamental physics. Equivalently, we can think of it as being a fact about how many predictively useful ways the system can be carved up. Formally, a system’s dynamical complexity is the sum of the effective complexity values for all relevant ways of representing the system. See Section 2.2.2 for more on this.

In this chapter, I would like to narrow our focus and apply some of the concepts we’ve developed over the last hundred (or so) pages to more practical concerns. In Chapter Zero, I argued that the issue of global climate change is perhaps the most pressing scientific problem of our time, and suggested that the paucity of philosophical engagement with this problem is a travesty in need of serious attention. Chapter One consisted of a systematic description of the kind of contribution that philosophers can be expected to make to problems like this one, and Chapters Two and Three laid the groundwork for making some contributions of that kind. In this chapter, we will start to examine climate science itself. As I have repeatedly emphasized, philosophy is at its best when it makes contact with the social and scientific issues of the day, and it is difficult to imagine a more pressing social and scientific problem than that of global climate change.

Here’s how this chapter will go. In Section 4.1, I will offer a brief overview of some of the central concepts and terminology of climate science. The focus of this section will be not on the controversial aspects of climatology, but just on introducing some of the basic jargon and ideas behind the science; at this point, we will have very little to say about what makes climate science particularly difficult, or about the nature of the political dispute raging in the wake of the science. Rather, our goal shall be just to get enough of the basics on the table to allow for an intelligible discussion of some of the specifics that are of particular philosophical interest. We’ll introduce these concepts by way of a concrete examination of the practice of model building in climate science. Sticking with the generally dialectical style we’ve been using so far, we’ll begin with a simple, intuitive observation about the relationship between the climate and incoming solar radiation and build up from there. As we run up against the shortcomings of each candidate model we consider, we’ll introduce some more terminology and concepts, incorporating them into increasingly sophisticated models. By the end of Section 4.1, we will have constructed a working (if still quite basic) climate model piece by piece.

Section 4.2 will build from there (and will lay the groundwork for the next chapter). With a firm grasp on the basic model we’ve constructed in Section 4.1, we’ll survey some of the considerations that guide climatologists in their construction of more elaborate models. We’ll examine the notion of a “hierarchy of models” in climate science, and explore the connection between this hierarchy and the discussions of science and complexity theory we’ve had so far. We’ll take a look at the diverse family of models (so-called “Earth models of intermediate complexity”) that occupy the territory between the relatively simple model we’ve constructed here and the elaborate supercomputer-dependent models that we’ll consider in Chapter Five. We’ll think about what climate scientists mean when they say “intermediate complexity,” and how that concept might relate to dynamical complexity. Finally, we’ll consider some of the limitations to the scientific methodology of decomposing systems into their constituent parts for easier analysis. We’ll explore the parallels between the development of complexity-theoretic reasoning in climate science and biology, two more striking examples of sciences which have begun to turn away from the old decompositionist-centered scientific method. This critique will lay the groundwork for Chapter Five, in which we’ll examine the elaborate, holistic, complicated family of cutting-edge climate models, which seek to represent the climate as a unified complex system within a single comprehensive model.

4.1 Fundamentals of Climate Science

Climate science is a mature science, with a large body of technically-sophisticated and specialized literature. The goal of giving a complete and substantive introduction to its fundamentals in anything as short as a single section of this dissertation is surely impossible to achieve. I’ll refer the curious reader to a number of secondary sources[1] for further clarification of the terms I’ll present here, as well as for elaboration on concepts I don’t discuss. My objective here is just to present the bare minimum of terminology necessary to make the rest of our discussion comprehensible. I’ll highlight some of the subtleties later on in this chapter (and the next), but many important details will necessarily be left out in the cold (so to speak), and some of the concepts I do discuss will be simplified for presentation here. Whenever possible I’ll flag these simplifications in a footnote.

Let’s start with distinguishing between the study of the climate and the study of the weather. We can think of weather as a set of short-term, more-or-less localized facts about the prevailing atmospheric conditions in particular places. Questions about whether or not it will rain tomorrow, what tonight’s low temperature will be, and so on are (generally speaking) questions about the weather. The study of climate, on the other hand, consists in studying both the long-term trends in the prevalence of certain weather events in particular places (is it, on average, raining more or less this century than it was last century?), and also in studying the factors that produce particular weather events (e.g. the interplay between ocean and atmosphere temperatures that produces hurricanes generally). Standard definitions used by climatologists run something like “the mean [weather] state together with measures of variability or fluctuations, such as the standard deviation or autocorrelation statistics for the period[2].” Additionally (and perhaps more saliently), climate study includes the identification of factors that drive the evolution of these long-term trends, and this is the aspect of climatology that has drawn the most attention recently. The claim that the activity of human beings is causing the average temperature to increase is a claim of this third kind. It’s also worth emphasizing that since the study of climate is concerned with the factors that produce weather conditions, it is not necessarily limited to the study of atmospheric conditions. In particular, the relationship between the ocean and the atmosphere is a very significant sub-field of climate science[3], while those who study the weather directly are significantly less concerned with exploring the dynamics of the ocean.
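
To make that statistical characterization concrete, the following short sketch computes the quantities the quoted definition mentions (a mean state, a standard deviation, and a lag-one autocorrelation) from a synthetic daily temperature record; the data are fabricated purely for illustration and stand in for no real observations.

```python
import numpy as np

# A fabricated "daily temperature" record: a seasonal cycle plus weather-like noise.
# Purely illustrative; no real observations are used here.
rng = np.random.default_rng(0)
days = np.arange(365 * 30)  # thirty years of daily values
temps = 15.0 + 10.0 * np.sin(2 * np.pi * days / 365.25) + rng.normal(0.0, 3.0, days.size)

mean_state = temps.mean()    # the "mean state" of the definition
variability = temps.std()    # one simple measure of variability
anomalies = temps - mean_state
lag1_autocorrelation = np.corrcoef(anomalies[:-1], anomalies[1:])[0, 1]

print(f"mean: {mean_state:.1f} C,  std: {variability:.1f} C,  "
      f"lag-1 autocorrelation: {lag1_autocorrelation:.2f}")
```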

Here’s a question that might immediately occur to us: what exactly counts as “long-term” in the relevant sense? That is, at what time-scale does our attempt to predict facts about temperature, precipitation, &c. cease to be a matter of weather prediction (that is, the kind of forecasting you might see on the nightly news), and become a matter of climate prediction? By now, our answer to this question should be fairly easy to predict: there is no concrete line other than that of actual scientific practice. As with all other special sciences, the difference between weather forecasting and climatology is defined only by the research questions that drive scientists working in their respective disciplines. There are clear cases that fall into one or another discipline—the question of how likely it is that it will rain tomorrow is clearly a question for weather forecasting, while the question of how the Earth’s changing axis of rotation contributes to ice ages is clearly a question for climatology—but many questions will be of interest to both disciplines, and there is bound to be significant overlap in both topic and method.

It is worth pointing out, as a brief historical aside, that this reunification is a relatively recent event. Until recently (as late as the middle of the 20th century), the study of climate fell into three largely independent camps: short-term weather forecasting, climatology, and theoretical meteorology. Practical forecasting and climatology were almost purely descriptive sciences, concerned solely with making accurate predictions without concern for the mechanisms behind those predictions. Weather forecasts in particular were devoid of any theoretical underpinnings until well into the 20th century. The most popular method for forecasting the weather during the first part of the 20th century involved the use of purely qualitative maps of past weather activity. Forecasters would chart the current state to the best of their ability, noting the location of clouds, the magnitude and direction of prevailing winds, the presence of precipitation, &c. Once the current state was recorded on a map of the region of interest, the forecasters would refer back to past charts of the same region until they found one that closely resembled the chart they had just generated. They would then check to see how that past state had evolved over time, and would base their forecast of the current situation on that past record. This turned forecasting into the kind of activity that took years (or even decades) to become proficient in; in order to make practical use of this kind of approach, would-be forecasters had to have an encyclopedic knowledge of past charts, as well as the ability to make educated guesses at how the current system might diverge from the most similar past cases[4]. Likewise, climatology at the time was more-or-less purely descriptive, consisting of the collection and analysis of statistical information about weather trends over long time-scales, and relying almost exclusively on graphical presentation. Although some inroads were being made in theoretical meteorology at the same time—mostly by applying cutting-edge work in fluid dynamics to the flow of air in the upper atmosphere—it wasn’t until the advent of the electronic computer in the 1950s and 1960s, which made numerical approximation of the solutions to difficult-to-solve equations finally feasible on a large scale, that forecasting and climatology moved away from this purely qualitative approach. Today, the three fields are more tightly integrated, though differences in the practical goals of weather and climate forecasting—most significantly, the need for weather forecasts to be generated quickly enough to be of use in (say) deciding whether or not to take an umbrella to work tomorrow—still give rise to somewhat different methods. We will return to these issues in Chapter Five when we discuss the role of computer models in climate science.

We can think of the relationship between weather and climate as being roughly analogous to the relationship between (say) the Newtonian patterns used to predict the behavior of individual atoms, and thermodynamics, which deals with the statistical behavior of collections of atoms. The question of exactly how many atoms we need before we can begin to sensibly apply patterns that make reference to average behavior—patterns like temperature, pressure, and so on—just isn’t one that needs a clear answer (if this dismissive shrug of an answer bothers you, review the discussion of the structure of the scientific project in Chapter One). Whether we apply the patterns of thermodynamics or the dynamics of Newtonian mechanics to a given collection of atoms is a matter of our goals, not a matter of deep metaphysics. Precisely the same is true of the line between weather forecasting and climatology: which set of patterns we choose to pay attention to depends on our goals. For more on the question of how to individuate particular special sciences, see Section 1.4. For now, we will set this question aside and focus on climate science as it is practiced. As a general rule of thumb, weather forecasting is concerned with predicting particular events, and climatology is concerned with predicting trends. This definition is good enough for our purposes, at least for now.

4.1.1 Basic Energy Balance Models

What, then, are the patterns of interest to climate scientists? In general, climate scientists are interested in predicting the long-term behavior of the Earth’s atmosphere (as well as the systems that are tightly coupled to the atmosphere). A tremendous number of patterns turn out to play a role in this general predictive enterprise (indeed, this is part of what makes climate science a complex-systems science; more on this below), but not all of them are necessarily of immediate interest to us here[5]. Since our ultimate goal is to focus our discussion in on anthropogenic climate change, we can limit our attention to those factors that might play a significant role in understanding that problem. To begin, it might be helpful to get a very basic picture of how the Earth’s climate works, with particular attention to temperature, since this is a feature of the climate that will be of great interest to us as we proceed.

Like most contemporary science, climate science relies very heavily on the construction of models—artifacts which are supposed to represent interesting aspects of a physical system[6]. The simplest climate model is the energy balance model, which is concerned with the amount of energy received and emitted by the Earth. All matter[7] emits electromagnetic radiation, and the wavelength (λ) of that emitted radiation straightforwardly varies with the temperature of the object. The Sun, a relatively hot object, emits E/M radiation across a very wide spectrum, from very short-wave gamma radiation (λ ≈ 10⁻¹² m) to very long-wave microwave and radio radiation (λ > 10² m). Some of the radiation emitted by the Sun, of course, is in the very narrow range of the E/M spectrum that is visible to the naked human eye (λ ≈ 0.4-0.8 × 10⁻⁶ m). The surface temperature of the sun is approximately 5,778K; this means that the sun’s peak E/M emission—that is, the area of the E/M spectrum with the most intense emission—falls into this visible spectrum, at somewhere around λ = 0.5-0.6 × 10⁻⁶ m. This corresponds to light that normal humans perceive as yellowish-green (the sun appears primarily yellow from Earth because of atmospheric scattering of light at the blue end of the visible spectrum). Similarly, the Earth emits electromagnetic radiation. However, the Earth is (thankfully) much cooler than the sun, so it radiates energy at a significantly different wavelength. Peak E/M emission wavelength is inversely proportional to the temperature of the radiator (this is why, for instance, the color of a heating element in a toaster progresses from red, to orange, to yellow as it heats up), and the Earth is sufficiently cold so that its peak E/M emission is somewhere around λ = 20 × 10⁻⁶ m. This means that the Earth’s emission is mostly in the infrared portion of the spectrum, a fact which plays a very significant role in the dynamics of the greenhouse effect (see Section 4.1.3).
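
The inverse relationship between temperature and peak emission wavelength mentioned here is Wien's displacement law. As a rough illustration (a sketch only; the displacement constant b ≈ 2.898 × 10⁻³ m·K is a standard value assumed here rather than something given in the text), the following computes approximate peak emission wavelengths for the Sun and the Earth:

```python
# Wien's displacement law: lambda_peak = b / T
# The constant b is assumed here (standard value), not taken from the text above.
B_WIEN = 2.898e-3  # Wien's displacement constant, in m*K

def peak_wavelength_m(temperature_k: float) -> float:
    """Approximate peak emission wavelength (meters) of a blackbody at temperature_k."""
    return B_WIEN / temperature_k

for label, temp_k in [("Sun, ~5778 K", 5778.0), ("Earth, ~255 K effective", 255.0)]:
    print(f"{label}: peak emission near {peak_wavelength_m(temp_k) * 1e6:.1f} micrometers")
# Prints roughly 0.5 micrometers for the Sun (visible) and about 11 micrometers for the
# Earth (infrared), the same general region of the spectrum as the ~20 micrometer figure above.
```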

The input of energy from the sun and the release of energy (in the form of infrared radiation) by the Earth dominate the temperature dynamics of the planet. At the simplest level, then, understanding how the temperature of the Earth changes over time is just a matter of balancing an energy budget: if the Earth absorbs more energy than it emits, it will warm until it reaches thermal equilibrium[8]. The simplest energy balance models, so-called “zero-dimensional energy balance models,” (ZDEBM) model the Earth and the Sun as point-like objects with particular temperatures, absorption characteristics, and emission characteristics. We can quantify the amount of energy actually reaching any particular region of the Earth (e.g. a piece of land, a layer of the atmosphere, or just the Earth simpliciter for the most basic ZDEBM) in terms of Watts per square meter (Wm⁻²). The amount of energy reaching a particular point at a given time is called the radiative forcing active on that point[9]. Assuming that the Earth is in equilibrium—that is, assuming that the radiated energy and the absorbed energy are in balance—the simplest possible ZDEBM would look like this:

(4a)  S = F

Here, S represents the amount of solar energy input to the system (i.e. absorbed by the Earth), and F represents the amount of energy radiated by the Earth. How much solar energy does the Earth receive? Well, just however much of the sun’s energy actually reaches as far as the Earth multiplied by the size of the area of the Earth that the sun is actually shining on. Filling in some values, we can expand that to:

(4b)  So/4 = F

In this expanded equation, So is the solar constant (the amount of energy radiated by the sun which reaches Earth), which is something like 1,367 Wm⁻². Why is this value divided by four? Well, consider the fact that only some of the Earth is actually receiving solar radiation at any particular time—the part of the Earth in which it is day time. Without too much loss of accuracy, we can think of the Earth as a whole as being a sphere, with only a single disk facing the sun at any given time. Since all the surface areas we'll be dealing with in what follows are areas of circles and disks, they're all also multiplied by π; for the sake of keeping things as clean-looking as possible, I’ve just factored this out except when necessary, since it is a common multiple of all area terms. That’s the source of the mysterious division by 4 in (4b), though: the area of the Earth as a whole (approximated as a sphere) is 4πr², while the area of a disk is just πr².

On the other side of the balance, we have F. The value is obtained by applying the Stefan-Boltzmann law, which gives the total energy radiated by a blackbody (F) as a function of its absolute temperature (T), modified by the Stefan-Boltzmann constant (σ), which itself is derived from other constants of nature (the speed of light in a vacuum, Planck's constant, and the Boltzmann constant). Filling in actual observed values, we get:

(4c)  So/4 = σTp⁴, with So ≈ 1,367 Wm⁻² and the planet’s observed temperature Tp ≈ 255K

Unfortunately, evaluating this leaves us with 341.75 Wm⁻² = 240 Wm⁻², which is (manifestly) not valid—though at least both sides come out on the same order of magnitude, which should suggest that we’re on to something. What’s the problem? In order to diagnose where things are going wrong here, we’ll have to dig more deeply into the energy balance class of models, and start to construct a more realistic model—one which begins to at least approximately get things right.
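
For concreteness, here is the arithmetic behind that mismatch (a sketch only; the numerical value of the Stefan-Boltzmann constant is a standard value assumed here):

```python
# Checking the naive balance of (4c): incoming So/4 against outgoing sigma * Tp^4.
SIGMA = 5.67e-8      # W m^-2 K^-4, Stefan-Boltzmann constant (standard value, assumed)
S0 = 1367.0          # W m^-2, the solar constant quoted above
T_OBSERVED = 255.0   # K, the Earth's observed temperature as seen from space

incoming = S0 / 4.0                 # ~341.75 W m^-2
outgoing = SIGMA * T_OBSERVED ** 4  # ~240 W m^-2

print(f"incoming: {incoming:.2f} W m^-2")
print(f"outgoing: {outgoing:.2f} W m^-2")
print(f"mismatch: {incoming - outgoing:.2f} W m^-2")  # the model absorbs ~100 W m^-2 too much
```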

4.1.2 Albedo

The basic ZDEBM of the climate is roughly analogous to the simple “calorie balance” model of nutrition—if you consume more calories than you burn each day you will gain weight, and if you burn more calories than you consume you will lose weight. In both cases, while the model in question does indeed capture something accurate about the system in question, the real story is more complicated. In the case of nutrition, we know that not all calories are created equal, and that the source of the calories can make a difference: for instance, consuming only refined carbohydrates can negatively impact insulin sensitivity, which can affect the body’s metabolic pathways in general, leading to systemic changes that would not have occurred as a result of consuming an equal amount of calories from protein[10]. Analogously, the simplest ZDEBM—in which the Earth and the sun are both featureless points that only absorb and radiate energy—doesn’t capture all the factors that are relevant to temperature variation on Earth.

Adding some more detail, consider a slightly more sophisticated ZDEBM, one which represents the planet in enough detail to be of real (though limited) predictive use. To begin, we might note that only some of the wide spectrum of E/M radiation reaching the Earth actually makes it to the planet’s surface. This reflects the fact that our first approximation of the Earth as a totally featureless ideal black-body is, as we’ve seen, very inaccurate: in addition to radiating and absorbing, the Earth also reflects some energy. The value representing the reflectance profile of a particular segment of the planet (or the entire planet, in this simple model) is called the albedo. At the very least, then, our ZDEBM is going to have to take albedo into account: if we allow our model to correct for the fact that not all of the energy that reaches the Earth is actually absorbed by the Earth, then we can approach values that accurately represent the way things are.

Earth’s albedo is highly non-uniform, varying significantly over both altitude and surface position. In the atmosphere, composition differences are the most relevant factors, while on the ground color is the most relevant characteristic. Cloud cover is certainly the most significant factor for calculating atmospheric albedo (clouds reflect some energy back to space). On the ground, the type of terrain makes the most significant difference: the ocean reflects very little energy back to space, and snow reflects a great deal (dry land falls somewhere between these two extremes, depending on what’s on it). However, we’re getting ahead of ourselves: ZDEBMs don’t take any of this variation into account, and operate on the simplifying assumption that albedo can be averaged for the planet (in much the same way that emission and absorption can be). In all cases, though, albedo is expressed as a dimensionless fraction, with a value between 0 and 1 (inclusive). An albedo of 0 represents total absorption (a perfectly black surface), and an albedo of 1 represents total reflection (a perfectly white surface). To get an idea of the relative values at play here, consider the following table.[11]


Surface Albedo
Equatorial oceans at noon 0.05
Dense forest 0.05-0.10
Forest 0.14-0.20
Modern city 0.14-0.18
Green crops 0.15-0.25
Grassland 0.16-0.20
Sand 0.18-0.28
Polar oceans with sea ice 0.6
Old snow 0.4-0.6
Fresh snow 0.75-0.95
Clouds 0.40-0.9
Spherical water droplet with low angle of incidence[12] 0.99
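
Because the ZDEBM needs a single planetary albedo, values like those in the table above have to be averaged together, weighted by how much of the sunlit Earth each surface type covers. The following sketch shows the arithmetic; the coverage fractions are invented purely for illustration (they are not observed values), though they are chosen to land near the ~0.3 planetary albedo used below.

```python
# Area-weighted planetary albedo from per-surface-type values.
# The coverage fractions below are illustrative assumptions only, not observations;
# the albedo values are representative figures drawn from the table above.
surface_types = {
    # name: (fraction of the sunlit area, representative albedo)
    "ocean":       (0.40, 0.06),
    "forest":      (0.10, 0.15),
    "grass/crops": (0.05, 0.20),
    "sand":        (0.05, 0.25),
    "snow/ice":    (0.10, 0.60),
    "cloud":       (0.30, 0.60),
}

assert abs(sum(frac for frac, _ in surface_types.values()) - 1.0) < 1e-9

planetary_albedo = sum(frac * alb for frac, alb in surface_types.values())
print(f"area-weighted planetary albedo: {planetary_albedo:.2f}")  # ~0.30
```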


Taking albedo into account will clearly affect the outcome of the model we’ve been working with. We were implicitly treating the Earth as if it were a perfect absorber—an object with albedo 0—which would explain why our final result was so far off base. Let’s see how our result changes when we jettison this assumption. We will stick with the simplification we’ve been working with all along so far and give a single average albedo value for the Earth as a whole, a value which is generally referred to as the “planetary albedo.” More nuanced energy balance models, which we will discuss shortly, might refine this assumption somewhat. Our modified model should decrease the value of S (the amount of energy absorbed by the Earth) by a factor that is proportional to the albedo: as the albedo of the planet increases it absorbs less energy, and as the albedo decreases it absorbs more. Let's try this, then:

(4d)  So(1 − α)/4 = σTp⁴

In the special case where the Earth's albedo is 0, (4d) reduces to (4c), since (1 − α) is just 1. OK, so once again let's fill in our observed values and see what happens. We'll approximate α as being equal to 0.3, so now we have:

(4e)  (1,367 Wm⁻²)(1 − 0.3)/4 = σ(255K)⁴

Which gives us a result of:

(4f)  239.2 Wm⁻² ≈ 240 Wm⁻²

This is far more accurate, and the remaining difference is well within the margin of error for our observed values.

So now we're getting somewhere. We have a simple model which, given a set of observed values, manages to spit out a valid equality. However, as we noted above, the purpose of a model is to help us make predictions about the system the model represents, so we shouldn't be satisfied just to plug in observed values: we want our model to tell us what would happen if the values were different than they in fact are. In this case, we're likely to be particularly interested in Tp: we want to know how the temperature would change as a result of changes in albedo, emitted energy, or received energy. Fortunately, it’s only a trivial matter of algebraic manipulation to rearrange our last equation to solve for Tp:

(4g)  Tp = (So(1 − α)/(4σ))^(1/4)

We’re now free to plug in different values for incoming solar radiation and planetary albedo to see how the absolute temperature of the planet changes (try it!). But wait: something is still amiss here. By expressing the model this way, we’ve revealed another flaw in what we have so far: there’s no way to vary the amount of energy the planet emits. Recall that we originally expressed F—the total energy radiated by Earth as a blackbody—in terms of the Stefan-Boltzmann law. That is, the way we have things set up right now, the radiated energy only depends on the Stefan-Boltzmann constant σ (which, predictably, is constant) and the absolute temperature of the planet Tp. When we set things up as we did just now, it becomes apparent that (since the Stefan-Boltzmann constant doesn’t vary), the amount of energy that the planet radiates depends directly (and only) on the temperature. Why is this a problem? Well, we might want to see how the temperature varies as a result of changes in how much energy the planet radiates[13]. That is, we might want to figure out how the temperature would change if we were to add an atmosphere to our planet—an atmosphere which can hold in some heat and alter the radiation profile of the planet. In order to see how this would work, we need to understand how atmospheres affect the radiation balance of planets: we need to introduce the greenhouse effect and add a parameter to our model that takes it into account.
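
Before adding that parameter, it is worth actually plugging a few different values into (4g), as suggested above, to see how the predicted temperature responds. The sketch below does just that for a few albedo values (the standard value of the Stefan-Boltzmann constant is assumed):

```python
# Evaluating (4g): Tp = (So * (1 - alpha) / (4 * sigma)) ** 0.25
SIGMA = 5.67e-8  # W m^-2 K^-4 (standard value, assumed)
S0 = 1367.0      # W m^-2

def effective_temperature_k(s0: float, albedo: float) -> float:
    """Planetary temperature predicted by the albedo-corrected ZDEBM, in kelvins."""
    return (s0 * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

for alpha in (0.0, 0.3, 0.4):
    print(f"albedo {alpha:.1f} -> Tp = {effective_temperature_k(S0, alpha):.1f} K")
# albedo 0.0 -> ~278.6 K, albedo 0.3 -> ~254.9 K, albedo 0.4 -> ~245.2 K:
# a brighter planet is a colder planet, with everything else held fixed.
```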

4.1.3 The Greenhouse Effect and Basic Atmospheric Physics

So how does the greenhouse effect work? To begin, we should note that as some skeptics[14] of anthropogenic climate change have pointed out, the term “greenhouse effect” is somewhat misleading: the mechanics of the effect bear only a passing resemblance to the mechanics of man-made greenhouses. Artificial greenhouses are kept warmer than the ambient environment primarily through a suppression of convection: that is, the glass in the greenhouse prevents warm air—which is less dense than cold air, and so will tend to rise above it—from rising away from ground level, and thus keeps conditions warmer than they would be otherwise. A similar mechanism is at work when you leave your car parked in the sun on a warm day: the interior heats up, but because the cabin is air-tight (at least on the timescales of interest to you during your trip to the shopping mall or grocery store), the warmer air inside the car and the cooler air outside the car cannot circulate, so the temperature increase can build up over time. The planetary greenhouse effect operates very differently. The layers of the Earth’s atmosphere are not closed systems in this sense, and while convection impediment can play a role in increasing radiative forcing felt on the ground—the fact that cloudy nights are generally warmer than clear nights is partially explained by this effect—it is not the driving factor in keeping the surface of the Earth warm.

Rather than blocking the motion of air itself—convection—the greenhouse effect operates primarily by altering the balance of radiation that is emitted by the planet (conveniently, this is just what is missing from the model we’ve constructed so far). Up to this point, recall, we’ve been treating the Earth as if it is a naked point: the only feature we’ve added thus far is planetary albedo, which can be thought of as just preventing some energy from reaching the planet in the first place. This is reflected (no pun intended) in the fact that our albedo factor α modifies the value of the solar radiance term So directly: albedo comes in on the left side of the equation in our model. What we’re looking for now, remember, is something that modifies the value on the right side of the equation. In order to do that, we have to tinker with the energy not before it is received, but as it is released back into space. This is what the greenhouse effect does.

But how? Departing from our ZDEBM for a moment, consider the way the atmosphere of the Earth is actually structured. The Earth’s atmosphere is highly non-uniform in several different ways. Most importantly for us right now, the atmosphere is an extremely heterogeneous mixture, containing significant amounts of several gasses, trace amounts of many more, and small airborne solids (e.g. specks of dust and soot) collectively called “aerosols.” Ignoring aerosols for the moment (which are far more relevant to albedo calculation than to the greenhouse effect[15]), the composition of the atmosphere looks like this[16]:

Fig. 4.1
Gas Volume
Nitrogen (N2) 780,840 ppmv[17] (78.084%)
Oxygen (O2) 209,460 ppmv (20.946%)
Argon (Ar) 9,340 ppmv (0.9340%)
Carbon dioxide (CO2) 393.65 ppmv (0.039365%)
Neon (Ne) 18.18 ppmv (0.001818%)
Methane (CH4) 1.77 ppmv (0.000177%)
Helium (He) 5.24 ppmv (0.000524%)
Krypton (Kr) 1.14 ppmv (0.000114%)
Hydrogen (H2) 0.55 ppmv (0.000055%)
Nitrous oxide (N2O) 0.3 ppmv (0.00003%)
Carbon monoxide (CO) 0.1 ppmv (0.00001%)
Xenon (Xe) 0.09 ppmv (0.000009%)
Ozone (O3) 0.0 to 0.07 ppmv (0 to 0.000007%)[18]
Nitrogen dioxide (NO2) 0.02 ppmv (0.000002%)
Iodine (I2) 0.01 ppmv (0.000001%)
Ammonia (NH3) trace
Water vapor (H2O) ~0.40% over full atmosphere, typically 1%-4% at surface


Different gases have different absorption properties, and so interact differently with various wavelengths of radiation. Radiation of a given wavelength may pass almost unimpeded through relatively thick layers of one gas, but be almost totally absorbed by even small amounts of another gas. This is the source of the greenhouse effect: the composition of the atmosphere directly affects how much radiation (and of which wavelengths) is able to escape to space. Recall that the wavelength of the energy radiated by an object depends on its absolute temperature, and that this means that (contrary to the model we’ve been working with so far), the temperature of the Earth depends on the composition of the atmosphere.

Here’s a simple account of the physics behind all this. Molecules of different gases have different molecular structures, which (among other things) affects their size and chemical properties. As incoming radiation passes through the atmosphere, it strikes a (quite large) number of different molecules. In some cases, the molecule will absorb a few of the photons (quanta of energy for electromagnetic radiation) as the radiation passes through, which can push some of the electrons in the molecule into an “excited” state. This can be thought of as the electron moving into an orbit at a greater distance from the nucleus, though it is more accurate to simply say that the electron is more energetic. This new excited state is unstable, though, which means that the electron will (eventually) “calm down,” returning to its previous ground state. Because energy is conserved throughout this process, the molecule must re-emit the energy it absorbed during the excitation, which it does in the form of more E/M radiation, which might be of different wavelengths than the energy originally absorbed[19]. Effectively, the gas molecule has “stored” some of the radiation’s incoming energy for a time, only to re-radiate it later.

More technically, the relationship between E/M radiation wavelength and molecular absorption depends on quantum mechanical facts about the structure of the gas molecules populating the atmosphere. The “excited” and “ground” states correspond to electrons transitioning between discrete energy levels, so the wavelengths that molecules are able to absorb and emit depend on facts about which energy levels are available for electrons to transition between in particular molecules. The relationship between the energy change of a given molecule[20] and an electromagnetic wave with wavelength λ is:

(4h)  |ΔE| = 2πħc/λ

where ħ is the reduced Planck constant (h/2π) and c is the speed of light in a vacuum, so larger energy transitions correspond to shorter wavelengths. When ΔE is positive, a photon is absorbed by the molecule; when ΔE is negative, a photon is emitted by the molecule. Possible transitions are limited by the open energy levels of the atoms composing a given molecule, so in general triatomic molecules (e.g. water, with its two hydrogen and single oxygen atoms) are capable of interesting interactions with a larger spectrum of wavelengths than are diatomic molecules (e.g. carbon monoxide, with its single carbon and single oxygen atoms), since the presence of three atomic nuclei generally means more open energy orbital states.[21]
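
As a rough numerical illustration of (4h) (a sketch only; the values of Planck's constant and the speed of light are standard values assumed here rather than drawn from the text), compare the energy carried by a photon near the solar emission peak with one near the terrestrial emission peak:

```python
import math

# Photon energy from (4h): |delta_E| = 2 * pi * hbar * c / lambda (equivalently h*c/lambda).
H = 6.626e-34             # J s, Planck's constant (standard value, assumed)
HBAR = H / (2 * math.pi)  # reduced Planck constant
C = 2.998e8               # m/s, speed of light in a vacuum (standard value, assumed)

def photon_energy_j(wavelength_m: float) -> float:
    """Energy in joules of a photon with the given wavelength."""
    return 2 * math.pi * HBAR * C / wavelength_m

solar_peak = 0.5e-6       # m, near the sun's peak emission (visible light)
terrestrial_peak = 20e-6  # m, near the Earth's peak emission (infrared)

print(f"visible photon:  {photon_energy_j(solar_peak):.2e} J")
print(f"infrared photon: {photon_energy_j(terrestrial_peak):.2e} J")
print(f"ratio: {photon_energy_j(solar_peak) / photon_energy_j(terrestrial_peak):.0f}")
# The visible photon carries ~40 times more energy, so absorbing it requires a
# correspondingly larger molecular energy transition.
```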

Because the incoming solar radiation and the outgoing radiation leaving the Earth are of very different wavelengths, they interact with the gasses in the atmosphere very differently. Most saliently, the atmosphere is nearly transparent with respect to the peak wavelengths of incoming radiation, and nearly opaque (with some exceptions) with respect to the peak wavelengths of outgoing radiation. In the figure below, the E/M spectrum is represented on the x-axis, and the absorption efficiency (i.e. the probability that a molecule of the gas will absorb a photon when it encounters an E/M wave of the given wavelength) of various molecules in Earth’s atmosphere is represented on the y-axis. The peak emission range of incoming solar radiation is colored yellow, and the peak emission range of outgoing radiation is colored blue (though of course some emission occurs from both sources outside those ranges)[22].



Fig. 4.2 [Absorption efficiency of various atmospheric gases across the E/M spectrum, with the peak emission ranges of incoming solar and outgoing terrestrial radiation indicated; image not reproduced.]


Note the fact that incoming solar radiation is not absorbed efficiently by any molecule, whereas outgoing radiation is efficiently absorbed by a number of molecules, particularly carbon dioxide, nitrous oxide, water vapor, and ozone. This is the source of the greenhouse effect.

A more apt metaphor for the effect, then, might be the “one-way mirror” effect. Rather than acting like a greenhouse (which suppresses convection), the presence of a heterogeneous atmosphere on Earth acts something like an array of very small one-way mirrors, permitting virtually all incoming radiation to pass relatively unimpeded, but absorbing (and later re-radiating) much of the energy emitted by the planet itself. Of course this too is just a metaphor, since true mirrors are reflective (rather than radiative), and changing the reflection profile of the system (as we’ve seen) changes the albedo, not the radiative values. Moreover, while mirrors are directional, the reradiation of energy from greenhouse gasses is not: the emitted photons might travel in any direction in the atmosphere, possibly resulting in their reabsorption by another molecule. Still, it can be useful to keep this picture in mind: adding more greenhouse gasses to the atmosphere is rather like adding more of these tiny mirrors, trapping energy for a longer time (and thus allowing the same amount of energy to have a greater net radiative forcing effect) than it otherwise would be.

The greenhouse effect explains, among other things, why the temperature of Earth is relatively stable during both the days and nights. On bodies without an atmosphere (or without an atmosphere composed of molecules that strongly interact with outgoing radiation), an absence of active radiative forcing (during the night, say) generally results in an extreme drop in temperature. The difference between daytime and nighttime temperatures on Mercury (which has virtually no atmosphere) is over 600 degrees C, a shift which is (to put it mildly) hostile to life. With an atmosphere to act as a heat reservoir, though, temporary removal of the active energy source doesn’t result in such an immediate and drastic temperature drop. During the Earth’s night, energy absorbed by the atmosphere during the day is slowly re-released, keeping surface temperatures more stable. A similar effect explains why land near large bodies of water (oceans or very large lakes) tends to have a more temperate climate than land that is isolated from water; large bodies of water absorb a significant amount of solar radiation and re-release it very slowly, which tends to result in less extreme temperature variation[23].

How do we square this with the ZDEBM we’ve been working with so far? As we noted above, the model as we’ve expressed it suggests that the Earth’s temperature ought to be somewhere around 255K, which is below the freezing point of water. The solution to this puzzle lies in recognizing two facts: first that the effective temperature of the planet—the temperature that the planet appears to be from space—need not be the same as the temperature at the surface, and second that we’ve been neglecting a heat source that’s active on the ground. The second recognition helps explain the first: the greenhouse gasses which re-radiate some of the outgoing energy keep the interior of the atmosphere warmer than the effective surface. If this seems strange, think about the difference between your skin temperature and your core body temperature. While a healthy human body’s internal temperature has to remain very close to 98.6 degrees F, the temperature of the body along its radiative surface—the skin—can vary quite dramatically (indeed, that’s part of what lets the internal temperature remain so constant). At first glance, an external observer might think that a human body is much cooler than it actually is: the surface temperature is much cooler than the core temperature. Precisely the same thing is true in the case of the planet; the model we’ve constructed so far is accurate, but it has succeeded in predicting the effective temperature of the planet—the temperature that the planet appears to be if we look at it from the outside. What we need now is a way to figure out the difference between the planet’s effective temperature Tp and the temperature at the surface, which we can call Ts.

Let’s think about how we might integrate all that into the model we’ve been building. It might be helpful to start with an explicit statement of the physical picture as it stands. We’re still working with an energy balance model, so the most important thing to keep in mind is just the location of radiative sources and sinks; we know that all the radiation that comes in has to go out eventually (we're still assuming things are in equilibrium, or rather close to it). So here's what we have.

Incoming solar radiation reaches the Earth, passing mostly unimpeded through the atmosphere.[24] It reaches the surface of the Earth, where some of it is immediately reflected, which we've accounted for already by building in a term for albedo. The remainder is absorbed by the Earth. Later, it is reradiated, but at a very different wavelength than it was when it came in. On its way out, some of this radiation is absorbed by greenhouse gas molecules in the atmosphere, and the rest of it passes back out into space. The radiation that is absorbed by the atmosphere creates (in effect) a new source of radiation, which radiates energy both back toward the surface and out to space. Our picture, then, consists of three sources: the sun (which radiates energy to the surface), the surface (which radiates energy to the atmosphere and space), and the atmosphere (which radiates energy to the surface and space). The true temperature of the surface Ts, then, is a function of both the radiation that reaches it from the sun and the radiation that reaches it from the atmosphere after being absorbed and re-emitted. Let's see how to go about formalizing that. Recall that before we had the radiation balance of the planet, which predicts the effective temperature of the planet as seen from the outside:

(4d)  So(1 − α)/4 = σTp⁴

OK, so how shall we find the actual surface temperature of the planet? To start, let's note that we can model the atmosphere and the surface of the Earth as two "slabs" that sit on top of one another, each with approximately the same area. The surface of the Earth radiates energy upward only (i.e. to the atmosphere and space), while the atmosphere radiates energy in both directions (i.e. back to the surface and to space). So far, recall, we’ve been treating the part of the Earth absorbing energy from the sun as a uniform disk with an area equal to the Earth’s “shadow” (that is, ¼ the area of the entire Earth’s surface); this is a fairly good approximation, since we’re already disregarding variations in albedo and emissivity across different latitudes and longitudes (that’s part of what it means to be a zero-dimensional model). We can think of the atmosphere, then, as consisting of another slab with approximately the same surface area as the surface itself. This is not quite right, but it is also a fairly good approximation. Since the atmosphere, as we’ve seen, absorbs energy only from the surface of the Earth, but emits energy both back toward the Earth and to space, we have to adjust its surface area accordingly in our model. For the purposes of emission, we can treat the atmosphere as having twice the area of the surface, since it radiates along both the inside and outside. Just as with the surface of the Earth, the atmosphere radiates energy in accord with the Stefan-Boltzmann law. That is, it radiates energy as a function of its surface area and temperature.

We also stipulate that (since this is an energy balance model), the atmosphere emits exactly as much as it absorbs. We’ve already noted that the atmosphere isn’t entirely transparent from the perspective of the Earth: it absorbs some (but not all) of the outgoing radiation. Let us add a term to our model to reflect the opacity of absorbing surfaces. Call this term γ. A surface that is totally opaque has γ = 1 (it absorbs all the energy that actually reaches it), and a surface that is totally transparent to incoming radiation has γ = 0. Note that this term is independent of α: a surface’s opacity only comes into play with regard to the energy that isn’t just reflected outright. That is, γ represents how likely a surface is to absorb some radiation that tries to pass through it; reflected energy never makes this attempt, and so does not matter here. This behavior is intuitive if we think, to begin, about the surface of the planet: while it has a non-negligible albedo (it reflects some radiation), it is effectively opaque. The planet's surface does reflect some energy outright, but virtually all of the energy it doesn't reflect is absorbed. Very little E/M radiation simply passes through the surface of the planet. We can thus set γs = 1 for the surface. We are interested in solving for γa—we're interested in figuring out just how opaque the atmosphere is. From all of this, we can deduce another equation: one for the energy emitted by the atmosphere (Fa).

(4e)  Fa = γaσTa⁴, where Ta is the temperature of the atmospheric slab (this is the amount the atmosphere radiates from each of its two faces)

We have to include γa in this equation, as (recall) the atmosphere is transparent (or nearly so) only with respect to incoming solar radiation. Radiation emitted both by the surface and by the atmosphere itself has a chance of being reabsorbed.

At last, then, we're in a position to put all of this together. We have an equation for the energy emitted by the atmosphere and an equation for the energy reaching the ground from the sun. For the purposes of this model, this exhausts all the sources of radiative forcing on the surface of the Earth. If we hold on to the supposition that things are at (or near) equilibrium, we know that the energy radiated by the surface (which we can calculate independently from the Stefan-Boltzmann law) must be in balance with these two sources. The full balance for the surface at equilibrium, then, is:

(4f)  σTs⁴ = So(1 − α)/4 + γaσTa⁴

Moreover, we can deduce a second balance equation for the atmosphere alone. Recall that the atmosphere receives energy only from the surface, and that it radiates with twice the area that it receives—it is "heated" from below only, but radiates heat in two directions. With another application of the Stefan-Boltzmann law, then, we know that:

(4j)  γaσTs⁴ = 2γaσTa⁴

A bit of algebraic manipulation to solve this system of equations—by inserting (4j) into (4f) and solving the resulting equation for Ts—gives us a final solution to the whole shebang (as noted above, we shall assume that the Earth is opaque and that γs = 1):

(4k)  Ts = (So(1 − α)/(4σ(1 − γa/2)))^(1/4)

With no atmosphere at all, γa = 0, and the equation above just reduces to our original equation, giving us an answer of 255K. By plugging in the observed temperature at the Earth's surface (288K) and solving for γa, we obtain a value of roughly 0.77. With that value in hand, then, we can actually use this model to explore the response of the planet to changes in albedo or greenhouse gas composition—we can make genuine predictions about what will happen to the planet if our atmosphere becomes more opaque to infrared radiation, more energy comes in from the sun, or the reflective profile of the surface changes. This is a fully-developed ZDEBM, and while it is only modestly powerful, it is a working model that could be employed to make accurate, interesting predictions. It is a real pattern.
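
For concreteness, here is the finished model of (4k) as a short numerical sketch (the standard value of the Stefan-Boltzmann constant is assumed); it reproduces the numbers just quoted and lets us vary γa freely:

```python
# The one-layer ("slab atmosphere") ZDEBM of (4k):
#   Ts = [ So * (1 - alpha) / (4 * sigma * (1 - gamma_a / 2)) ] ** 0.25
SIGMA = 5.67e-8  # W m^-2 K^-4 (standard value, assumed)
S0 = 1367.0      # W m^-2, solar constant
ALPHA = 0.3      # planetary albedo, as above

def surface_temperature_k(gamma_a: float) -> float:
    """Surface temperature for a given atmospheric opacity gamma_a (between 0 and 1)."""
    return (S0 * (1.0 - ALPHA) / (4.0 * SIGMA * (1.0 - gamma_a / 2.0))) ** 0.25

def opacity_for_surface_temperature(t_s: float) -> float:
    """Invert (4k): the gamma_a required to produce a given surface temperature."""
    return 2.0 * (1.0 - S0 * (1.0 - ALPHA) / (4.0 * SIGMA * t_s ** 4))

print(f"gamma_a = 0 (no atmosphere): Ts = {surface_temperature_k(0.0):.0f} K")               # ~255 K
print(f"observed Ts = 288 K implies gamma_a = {opacity_for_surface_temperature(288.0):.2f}")  # ~0.77
print(f"gamma_a = 1 (fully opaque):  Ts = {surface_temperature_k(1.0):.0f} K")               # ~303 K
```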

4.2 The Philosophical Significance of the Hierarchy of Climate Models

While the model we have just constructed is a working model, the like of which one might encounter in an introductory course on climate science, it still represents only a tiny slice of the myriad of processes which underlie the Earth’s climate. We went through the extended derivation of the last section for two reasons: first, to provide some structure to the introduction of central concepts in climate science (e.g. albedo, the greenhouse effect, opacity) and second, to demonstrate that even the simplest models of the Earth’s climate are incredibly complicated. The dialectical presentation (hopefully) provided an intuitive reconstruction of the thinking that motivated the ZDEBM, but things still got very messy very quickly. Let us now turn from this relatively comprehensible model to other more complicated climate models. As we’ve seen, the ZDEBM treats the entire planet as being completely uniform with respect to albedo, temperature, opacity, and so on. However, the real Earth is manifestly not like this: there is a significant difference between land, water, and atmosphere, as well as a significant difference between the composition of different layers of the atmosphere itself. Moreover, the shape and orientation of the Earth matters: the poles receive far less solar energy than the equator, and some of the energy that reaches the Earth is reflected in one location but not another, either by features of the atmosphere (clouds, for instance), or by the surface (white snow and ice is particularly reflective). Representing the Earth as a totally uniform body abstracts away from these differences, and while zero-dimensional energy balance models are useful as first approximations, getting a more accurate picture requires that we insert more detail into our model,[25] but what kind of detail should we add? How do we decide which parts of the world are important enough to deserve inclusion in our models, and which can be ignored? These are incredibly deep questions—they represent some of the most difficult practical challenges that working scientists in any discipline face in designing their models—and giving a general answer to them is beyond the scope of our project here. Still, it is worth our time to briefly examine the plethora of climate models that have sprung up in the last few decades, and to think about the conceptual underpinnings of this highly diverse collection of scientific tools. Perhaps we can at least suggest the shape of an answer to these questions with respect to climate science in particular.

In practice, climate scientists employ a large family of models for different purposes. Zero-dimensional energy balance models like the one we just constructed are the most basic models actually used in the real world, and form what can be thought of as the “lowest level” of a kind of “model pyramid.” The logic of energy balance models is sound, and more sophisticated energy balance models add more detail to account for some of the factors we just enumerated; with every addition of detail, the model becomes capable of generating more accurate predictions but also becomes more difficult to work with. For instance, we might move from the ZDEBM to a one-dimensional energy balance model, modeling the Earth not as a point but as a line, and expressing the parameters of the model (like albedo) not as single terms, but as functions of where we are on the line (so that the model becomes a differential equation rather than a simple algebraic balance). This allows us to take the latitudinal variation of incoming solar energy into account, for example: in general, areas near the equator receive more energy, and the incoming energy drops off as we move north or south toward the poles. Alternatively, if we are interested in differences in radiation received by different levels of the atmosphere, we might implement a one-dimensional model that’s organized vertically, rather than horizontally. Even more detailed models combine these approaches: two-dimensional models account for variation in incoming solar energy as a function of both height and latitude.
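
The move to one dimension can be sketched in a few lines of code. In the sketch below, the insolation weighting and the step-function albedo are common textbook-style parameterizations assumed purely for illustration (they are not drawn from the text or from any particular model), and the heat-transport term that couples neighboring latitudes in real one-dimensional models is omitted entirely to keep things short:

```python
import numpy as np

# A latitude-resolved energy balance sketch: albedo and insolation become functions
# of position on the line rather than single global numbers. The insolation weighting
# (a second-Legendre-polynomial fit) and the ice/no-ice albedo step are illustrative
# assumptions; real 1-D EBMs also include a transport term coupling the bands.
SIGMA = 5.67e-8
S0 = 1367.0
S2 = -0.48  # assumed coefficient for the annual-mean distribution of sunlight

lats = np.linspace(-85.0, 85.0, 18)  # latitude band centers, in degrees
x = np.sin(np.radians(lats))
insolation = (S0 / 4.0) * (1.0 + S2 * 0.5 * (3.0 * x**2 - 1.0))  # W m^-2 per band
albedo = np.where(np.abs(lats) > 65.0, 0.6, 0.3)                 # icy poles, darker elsewhere

band_temps = ((1.0 - albedo) * insolation / SIGMA) ** 0.25       # radiative equilibrium per band
for lat, temp in zip(lats, band_temps):
    print(f"latitude {lat:+6.1f}: {temp:6.1f} K")
```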

Energy balance models, though, are fundamentally limited by their focus on radiation as the only interesting factor driving the state of the climate. While the radiative forcing of the sun (and the action of greenhouse gasses in the presence of that radiative forcing) is certainly one of the dominant factors influencing the dynamics of the Earth’s climate, it is equally certainly not the only such factor. If we want to attend to other factors, we need to supplement energy balance models with models of a fundamentally different character, not just create increasingly sophisticated energy balance models. McGuffie & Henderson-Sellers (2005) list five different components that need to be considered if we’re to get a full picture of the climate: radiation, dynamics, surface processes, chemistry, and spatio-temporal resolution.[26] While I will eventually argue that this list is incomplete, it serves as a very good starting point for consideration of the myriad of climate models living in the wild today.

Radiation concerns the sort of processes that are captured by energy balance models: the transfer of energy from the sun to the Earth, and the release of energy back into space (in the form of infrared radiation) from the Earth. As we’ve seen, careful attention to this factor can produce a model that is serviceable for some purposes, but which is limited in scope. In particular, pure radiative models (energy balance models, for instance) neglect the transfer of energy by non-radiative processes and are unable to model any of the other, more nuanced dynamical processes that govern both the climate and weather on Earth. A radiative model, for example, will be entirely silent on the question of whether or not increased greenhouse gas concentration is likely to change the behavior of ocean currents. Even if we were to devise an energy balance model that is sophisticated enough to model radiative transfer between the ocean, land, and atmosphere as separate energy reservoirs, the inclusion of facts about currents is simply beyond the scope of these models.

To include facts like those, we need to appeal to a new class of models—so-called “radiative-convective” (RC) models are designed to address these issues. These models incorporate many of the same insights about radiation balance that we saw in the ZDEBM, but with the addition of dynamical considerations. Basic RC models will treat the planet not just as a set of “lamps” which absorb and emit radiation, but rather will include enough detail to model the transfer of energy via convection—the movement of air—as well. We can think of RC models as presenting the Earth as a set of connected boxes of various sizes containing gas of various temperatures. While some energy is transferred between the boxes as a result of radiative forcing, the boundaries where one box meets another are equally important—there, the contents of the two boxes mix, and energy transfer as a result of convection becomes possible as well. A simple one-dimensional RC model might treat the surface of the Earth as consisting of regions of different temperature arrayed along a line, calculating the interaction of different regions at their boundary by employing a fixed lapse-rate to model convective energy transfer. This information might then be incorporated into a relatively sophisticated energy balance model, yielding an increase in the accuracy of radiative process models as a result of more precise information about temperature gradients and exchanges of air[27].
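
The “fixed lapse-rate” idea can be made concrete in a few lines (a sketch only; the 6.5 K per kilometer lapse rate and the roughly 11 km tropopause height are standard rough figures assumed here, not values drawn from any particular RC model):

```python
# A fixed-lapse-rate temperature profile: in radiative-convective models, convection
# pins the lower atmosphere to a prescribed lapse rate instead of leaving each layer
# in pure radiative equilibrium. Both numerical values below are assumed rough figures.
LAPSE_RATE_K_PER_KM = 6.5
TROPOPAUSE_KM = 11.0
T_SURFACE_K = 288.0  # observed global-mean surface temperature

def temperature_at_km(height_km: float) -> float:
    """Temperature on the fixed-lapse-rate profile (held constant above the tropopause)."""
    return T_SURFACE_K - LAPSE_RATE_K_PER_KM * min(height_km, TROPOPAUSE_KM)

for z in (0.0, 2.0, 5.0, 11.0, 15.0):
    print(f"{z:4.1f} km: {temperature_at_km(z):.1f} K")
```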

While RC models offer an improvement in accuracy over simple radiative models (as a result of taking some dynamical processes into account), they are still far away from being robust enough to capture all the details of our complex climate. Beyond RC models, the field becomes increasingly differentiated and heterogeneous—in the last 30 years in particular, a large number of so-called “Earth models of intermediate complexity” (EMICs) have sprung up in the literature. It is impossible to characterize these models in any general way, as each is constructed for a very particular purpose—to model some very specific aspect of the global climate based on a parameterization that fixes other potentially relevant factors as (more or less) constant. As an example of the tremendous variability present in this class of models, EMICs include RC models that also model cloud formation (which is an important factor in determining albedo), sea-ice models that focus primarily on the surface processes that drive the formation (and break-up) of Arctic and Antarctic ice, spatio-temporally constrained models of the short-term effect of volcanic aerosols on planetary albedo, and even ocean models that focus primarily on the progression of regular cycles of ocean temperatures and currents (e.g. the models used to predict the effects of the El Niño/Southern Oscillation on annual rainfall along the United States’ west coast). The EMICs represent a veritable zoo of wildly different models developed for wildly different purposes. The fact that all these can (apparently) peacefully coexist is worthy of philosophical interest, and warrants some consideration here[28].

4.2.1 Climate Models and Complexity

Earlier in the history of climate science, even textbooks within the field were willing to attempt to rank various climate models in terms of ascending “complexity[29].” While the sense of the term ‘complexity’ doesn’t exactly mirror the concept of dynamical complexity developed in Chapter Three, there are enough parallels to be worth remarking on, and I shall argue that the important aspects of the climate modeler’s sense are, like the various approaches to complexity surveyed in Chapter Two, well-captured by the notion of dynamical complexity. Interestingly, there’s at least some evidence that more recent work in climatology has backed off from the attempt to rank models by complexity. While the hierarchical “climate pyramid” reproduced below appears in all editions of McGuffie & Henderson-Sellers’ work on climate modeling, by 2005 (and the publication of the third edition of the work), they had introduced a qualification to its presentation:

This constructed hierarchy is useful for didactic purposes, but does not reflect all the uses to which models are put, nor the values that can be derived from them. The goal of developers of comprehensive models is to improve performance by including every relevant process, as compared to the aim of [EMIC] modelers who try to capture and understand processes in a restricted parameter space. Between these two extremes there is a large territory populated, in part, by leakage from both ends. This intermediate area is a lively and fertile ground for modeling innovation. The spectrum of models [included in EMICs] should not be viewed as poor cousins to the coupled models[30].

It is worth emphasizing that this egalitarian perspective on climate science—in which a multitude of perspectives (encoded in a multitude of models) are included without prejudice—fits nicely with the account of science in general we explored in Chapter One, and only serves to reinforce the view that contemporary scientific practice requires this multifarious foundation. Their observation that EMICs should not be viewed as “poor cousins” of more elaborate models[31] similarly seems to support the view that we should resist the impulse to try to decide which models are “more real” than others. Any model which succeeds in capturing a real pattern in the time-evolution of the world (and which is of consequent predictive use) should be given equal standing.

The sense of "complexity" here also has more than a little in common with the notion we've been working with so far. McGuffie & Henderson-Sellers chose to illustrate the climate model hierarchy as a pyramid for good reason; while they say that the "vertical axis [is] not intended to be qualitative,[32]" the pyramidal shape is intended to illustrate the eventual convergence of the four different modeling considerations they identify into a single comprehensive model. A complex model in this sense, then, is one which incorporates patterns describing dynamics, radiative processes, surface processes, and chemical processes. The parallels to dynamical complexity should be relatively clear here: a system that is highly dynamically complex will admit of a variety of different modeling perspectives (in virtue of exhibiting a plethora of different patterns). For some predictive purposes, the system can be treated as a simpler system, facilitating the identification of (real) patterns that might be obfuscated when the system is considered as a whole. I have repeatedly argued that this practice of simplification is a methodological approach that should not be undervalued (and which is not overridden by the addition of complexity theory to mainstream science). EMICs are a fantastic case study of this fact: a diverse mixture of idealizations and simplifications of various stripes that have been developed to explore particular climate subsystems, but whose outputs frequently are of use in more global analyses. We'll explore the role that EMICs play in more comprehensive models in the next chapter (when we examine cutting-edge general circulation models and the tools climate scientists employ to create and work with them). For now, though, I would like to end this chapter with a few words about the limitations of the analytic method that undergirds both the creation of EMICs and much of science in general. We've seen a number of reasons why this analytic approach is worth preserving, but there are also good reasons to think that it cannot take us as far as we want to go.

4.2.2 Limits of the Analytic Method

It might help to begin by thinking about the traditional scientific paradigm as it has existed from the time of Newton and Galileo. The account that follows is simplified to the point of being apocryphal, but I think it captures the spirit of things, and for our purposes here that is enough: I'm interested not in giving a detailed historical account of the progress of science (many who are better suited to that task have already done a far better job than I ever could), but in pointing to some general themes and assumptions that first began to take root in the scientific revolution. It will be helpful to have these themes clearly in mind, as I think complexity theory is best understood as an approach to science that fills in the gaps left by the approach I'm about to describe. If you are a historian of science, I apologize for the simplifying liberties that I take with this complicated story (see Chapter Zero for more on why you're probably not alone in being dissatisfied with what I have to say).

The greatest triumph of the scientific revolution was, arguably, the advent of the kind of experimental method that still underlies most science today: the fundamental insight that we could get a better handle on the natural world by manipulating it through experiment was, to a large degree, the most important conceptual leap of the era. The idea that science could proceed not just through abstract theorizing about ideal cases (as many ancients had) nor just through passive observation of the world around us, but by systematically intervening in that world, observing the results of those interventions, and then generalizing those results into theories about how systems outside the laboratory behaved, was unbelievably fruitful. The control aspect of this is important to emphasize: the revolution was not primarily a revolution toward empiricism strictly speaking—people had been doing science by looking at the world for a long time—but a revolution toward empiricism driven by controlled isolation[33].

This kind of interventionist approach to science was vital to the later theoretical breakthroughs: while Newton’s genius lay in realizing that the same patterns of motion lay behind the movement of bodies on Earth and in space, that insight wouldn’t have been possible if Galileo hadn’t first identified those patterns in terrestrial falling bodies. It was Galileo’s genius to realize that by reducing a system of interest to its simplest form—by controlling the system to hold fixed as many variables as possible—patterns that might be obscured by the chaos and confusion of the unmodified natural world would become more apparent. All of this is very well-known and (I take it) uncontroversial—at least if you take my simplifications in stride. My purpose here is not to comment on the history of science per se but (in good classical scientific fashion) to isolate and emphasize a single thread in this narrative: that of isolated decomposition of systems.

After the revolution that this approach precipitated in physics, the basic experimental method of intervening in the natural world to isolate variables for testing came to dominate virtually all of the natural sciences for hundreds of years. Scientists in chemistry, biology, and even the social sciences attempted to copy (with varying degrees of success) the physics-inspired model of identifying single constituents of interesting systems, seeing how those constituents behaved when isolated from each other (and, a fortiori, from a complicated external environment), and using that information to deduce how collections of those constituents would behave in more realistic circumstances. This approach was enormously, earth-shatteringly, adverb-confoundingly successful, and gave us virtually all the scientific advances of the 18th, 19th, and 20th centuries, culminating in the triumph of physics that is quantum mechanics, as well as the more domain-specific (if no less impressive) advances of molecular biology (studying the gene to understand the organism), statistical mechanics (studying the particle to understand the thermodynamic system), and cognitive neuroscience (studying the neuron to understand the brain), just to name a few.

Moreover, this way of thinking about things came to dominate the philosophy of science (and scientifically-informed metaphysics) too. Many of the influential accounts of science developed in the 19th and 20th centuries rely (more or less implicitly) on this kind of model of scientific work. The logical positivists, for whom science was a matter of deduction from particular observations and a system of formal axioms, perhaps exemplify this approach most clearly, though (as Hooker [2011a] argues) the Popperian model of theory generation, experimental data collection, and theory falsification also relies on this decomposition approach to scientific work, as it assumes that theorists will proceed by isolating variables to such a degree that cases of direct falsification will (at least sometimes) be clearly discernible. The account of science developed in Chapter One is intended to contribute to the beginning of a philosophy of science that moves beyond dogmatic clinging to decomposition, but it will likely still be some time before this thinking becomes part of the philosophical mainstream.

Part of the problem is that the primary opponents of the decomposition approach to science (at least before the 1970s) were the vitalists and the strong emergentists.[34] The common criticism marshaled by these two camps was that the analytic approach championed by mainstream science was inevitably doomed to fail, as some aspects of the natural world (living things, for example) were sui generis in that their behavior was not governed by or deducible from the behavior of their parts, but rather anomalously emerged in certain circumstances. The last major stronghold for this view—life—was dealt a critical blow by the advent of molecular biology, though: the discovery of genetic molecules showed that living things were not anomalous, sui generis systems, but rather were just as dependent on the coordinated action of simpler constituents as any physical system. By the middle of the 20th century, vitalism had fallen far out of favor, and most mainstream scientists and philosophers held at least a vaguely reductionistic view of the world. While quantum mechanics was busy overthrowing other pillars of classical physics, it seemed only to reinforce this one: the whole is nothing more than the sum of its parts. While the behavior of that sum may be difficult (or even impossible) to predict sometimes just by looking at the parts, there's nothing fundamentally new to be learned by looking at whole systems; any higher-level scientific laws are just special cases, coarse-grainings, or simplifications of the story that fundamental physics has to tell.

The moral of science's success in the 20th century is that the mainstream scientists were right and the vitalists were wrong: living things (and, a fortiori, brains, democracies, and economies) are really nothing over and above the sum of their parts—there is no vital spark, and no ghost in the machine, and no invisible hand. The progress of science seems to have borne this out, and in a sense it has: in looking for (say) living things to behave in ways that were not determined by the behavior of their cells and genes, vitalists were chasing ghosts. Still, in the last few decades cracks have begun to appear in the hegemonic analytic approach: cracks that suggest not that the insights garnered by that approach were wrong, but that they were incomplete. This is where complexity theory enters our story.

As an example, consider the highly computational theory of mind that’s been developed by some cognitive psychologists and philosophers of mind[35]. On this account, psychology as a scientific practice is, in a very real sense, predicated on a very large misunderstanding: according to the most radical computationalists, what we take to be “psychological states” are really nothing more than formal computational operations being carried out by the firing of one or another set of neurons in our brain. It’s worth emphasizing that this is a stronger thesis than the standard “metaphysical reduction” that’s rather more common in the philosophy of mind literature, and it is certainly a stronger thesis than a generally physicalist view of psychology (where psychological states in some sense are realized by or depend on the action of neurons). The strongest adherents of computational neuroscience argue that not only do mental states depend on brain states, but that (as a methodological dictum) we ought to focus our scientific efforts on mapping neuronal firings only. That is, it’s not just necessary to understand the brain in order to get a grip on psychology—understanding how neurons work just is understanding psychology. There are no higher level patterns or processes to speak of. This is a very substantive methodological thesis—one which (if it were true) would have significant implications for how research time and money ought to be allocated.

Increasingly, it is also a thesis that is being rejected by mainstream cognitive science. In the decades since Pinker’s book was published, cognitive scientists have gradually come to recognize that neuronal firings, while surely central in determining the behavior of creatures like us, are far from the only things that matter. Rather, the neurons (and their accompanying chemical neurotransmitters, action potentials, &c.) function as one sub-system in a far more complicated web of interrelated interactions between the brain, the rest of the body, and various aspects of the external environment. While some cognitive mechanisms can be completely understood through the decompositionist approach,[36] the higher-level cognition of complicated organisms embedded in dynamic environments (humans engaged in complex, conscious reasoning, for example) certainly cannot. The gradual relaxation of the demand that all cognitive science be amenable to something like this radically eliminative computational hypothesis has produced an explosion of theoretical insights. The appreciation of the importance of embodied cognition—that is, the importance of non-neurological parts of the body in shaping cognitive states—exemplifies this trend, as does the work of Andy Clark in exploring the “extended mind” hypothesis, in which environmental props can be thought of as genuine components of higher level cognitive processes[37].

Similarly, contemporary biology has rejected the notion that the evolution of organism populations just is the evolution of individual genes in the organisms of the population. This move away from “selfish gene” type approaches to evolutionary theory might be thought of as mirroring the move away from strict eliminative computationalism in cognitive neuroscience; the appreciation of epigenetic influences on evolution[38] exemplifies this trend in biology, as does the proliferation of the “-omics” biological sciences (e.g. genomics, proteomics, biomics).

In rejecting the decompositionist approach to cognition (or evolution), though, neuroscientists (or biologists) have not returned to the vitalist or emergentist positions of the 19th and early 20th centuries—it is certainly not the case that the only alternative to the Pinker/Churchland position about the mind is a return to Cartesian dualism, or the sort of spooky emergentism of Morgan (1921). Rejecting the notion that interesting facts about cognition are exhausted by interesting facts about neuronal firings need not entail embracing the notion that cognitive facts float free of physics and chemistry; rather, it just entails a recognition that neural networks (and the organisms that have them) are embedded in active environments that contribute to their states just as much as the behavior of the network's (proper) parts do, and that the decompositionist assumption that an understanding of the parts entails an understanding of the whole need not hold in all cases. In studying organisms as complex systems, we need not reject the vast and important insights of traditional decompositionist science (including biology, neuroscience, and others)—rather, we need only recognize that system-theoretic approaches supplement (but don't supplant) existing paradigms within those disciplines. The recognition, to put the point another way, is not that Pinker was entirely wrong to think that neuronal computation plays a central role in cognition, but only that his view was too limited; likewise, the recognition in biology is not that genes are irrelevant to evolution, but that rather than simply operating on an unconstrained string of genetic code, evolution operates in a "highly constrained (occasionally discontinuous) space of possible morphologies, whose formation requires acknowledging the environmental, material, self-organized and often random processes that appear at different scales."[39]

The move from an exclusively decompositionist approach to one incorporating both decompositionist and holistic work is underway in disciplines other than biology and neuroscience. It's particularly important for our purposes to note that the peaceful coexistence of EMICs with more comprehensive, high-level models (to be discussed in the next chapter) requires an appreciation both of the power of decomposition and of its limits. Surveying all the areas in which this type of thinking has made an impact would require far more space than I have here, so I will let these cases—the cognitive-scientific, the biological, and the climatological—stand on their own, and refer the interested reader to the list of references provided here for further exploration of complexity-theoretic approaches to cognitive science, economics, medicine, engineering, computer science, and others.

4.2.3 Next Steps

This quiet conceptual revolution has proceeded more-or-less independently in these disciplines until fairly recently. Increasingly, though, the question of whether there might be general principles underlying these cases—principles that deal with how systems of many highly connected interactive parts behave, regardless of the nature of those parts—has started to surface in these discussions. This is precisely the question that complexity theory aims to explore: what are the general features of systems for which the decompositionist approach fails to capture the whole story? What rigorous methods might we adopt to augment traditional approaches to science? How can we integrate holistic and analytic understanding into a unified scientific whole? These are, I suspect, the questions that will come to define scientific progress in the 21st century, and they are questions that climate science—perhaps more than anything else—urgently needs to consider.

The contribution of EMICs shouldn’t be underestimated: they are very important tools in their own right, and they have much to contribute to our understanding of the climate. Still, though, they’re highly specific tools, deliberately designed to apply to a very narrow range of circumstances. EMICs are intentionally limited in scope, and while this limitation can take different forms (e.g. spatio-temporal restriction vs. restriction to a single climate sub-system considered more-or-less in isolation), it is a defining characteristic of the class of models—perhaps the only defining characteristic. Such a narrow focus is a double-edged sword; it makes EMICs far easier to work with than their monstrously complicated big brothers, but it also limits the class of predictions that we can reasonably expect to get out of applying them. If we’re going to get as complete a picture of the patterns underlying the time-evolution of the Earth’s climate as possible, then we’ll need as many tools as possible at our disposal: low-level energy balance models, EMICs, and high-level holistic models.

In the next chapter, we’ll consider problems associated with these holistic models in detail, introducing a few of the more pressing puzzles that neither energy balance models nor EMICs are capable of resolving, and then surveying how more elaborate models are supposed to meet these challenges. However, high-level climate models (and the methods scientists employ to work with them) are not without problems of their own; while they are capable of meeting some of the challenges that EMICs cannot meet, they face other challenges that EMICs do not face. Let us now turn to the problems that force us to supplement EMICs and examine how high-level models are designed and employed.

  1. Dawson & Spannagle (2009) is perhaps the most comprehensive and accessible general reference; I’d recommend that as a first stop on a more detailed tour of the climate science literature.
  2. Schneider (2009), p. 6
  3. For an obvious example, consider the importance of the El Niño-Southern Oscillation—a coupled atmosphere/ocean phenomenon that occurs cyclically in the Pacific Ocean region (and has received significant media attention).
  4. For a detailed discussion of the evolution of the science of forecasting, see Edwards (2010)
  5. In particular, it’s worth flagging that (at least recently) economic patterns have become very salient in the prediction of the time-evolution of the climate: as the activity of human civilization has become a more important factor in forcing the climate state, patterns that are relevant in predicting that activity have become relevant in predicting climate states as well. We will explore the connection with economic patterns more in the next two chapters.
  6. I’m using “artifact” in a very broad sense here. Some models are themselves physical systems (consider a model airplane), while others are mathematical constructions that are supposed to capture some interesting behavior of the system in question. The main point of model-building is to create something that can be more easily manipulated and studied than the object itself, with the hope that in seeing how the model behaves, we can learn something interesting about the world. There is a thicket of philosophical issues here, but a full exploration of them is beyond the scope of this project. The philosophical significance of one class of models in particular—computer simulations—will be the primary subject of Chapter Five, but for a more general contemporary overview of representation and model-building, see van Fraassen (2010).
  7. Or, at least, all matter with temperature greater than absolute zero.
  8. A very simple model of this sort treats the Earth as an “ideal black body,” and assumes that it reflects no energy. Thus, the model only needs to account for the energy that’s radiated by the Earth, so we can work only in terms of temperature changes. This is an obvious simplification, and the addition of reflection to our model changes things (perhaps even more significantly than we might expect). We’ll discuss this point more in a moment.
  9. The Intergovernmental Panel on Climate Change (IPCC) uses the term “radiative forcing” somewhat idiosyncratically. Since they are concerned only with possible anthropogenic influences on the climate system, they express radiative forcing values in terms of their deviation from pre-Industrial levels. In other words, their values for the amount of energy reaching certain points on the Earth “subtract out” the influence of factors that they have good reason to think are unrelated to human intervention on the climate. These radiative forcing values might be more properly called net anthropogenic radiative forcing; an IPCC value of (say) 0.2 W/m² represents a net increase of 0.2 watts per square meter, over and above the radiative forcing that was already present prior to significant human impacts. Unless otherwise specified, I will use ‘radiative forcing’ in the standard (non-IPCC) sense.
  10. Even more strongly, it might be the case that calories in and calories out are not entirely independent of one another. That is, there might be interesting feedback loops at play in constructing an accurate calorie balance: a fact which is obfuscated in this simple presentation. For example, it might be the case that consuming a lot of calories leads to some weight gain, which leads to low self-esteem (as a result of poor body-image), which leads to even more calorie consumption, and so on. This sort of non-linear multi-level feedback mechanism will be treated in detail in Chapter Five, but will be ignored for the time being.
  11. Adapted from Ricklefs (1993)
  12. This explains why, in practice, the albedo of large bodies of water (e.g. oceans or very large lakes) is somewhat higher than the listed value. Choppy water has a layer of foam (whitecap) on top of it, which has an albedo value that’s much closer to the value for a water droplet than to the value for calm water. The value of the oceans as a whole, then, is somewhere between the values of a water droplet and calm water. This is an example of the sort of small space-scale difficulty that causes problems for the more sophisticated general circulation model, discussed in more detail in Chapter Six.
  13. In fact, there’s another clue that something’s not right here. Solving the equation using the values we’ve got so far gives us a temperature of 255 K, which is significantly below the freezing point of water (it’s around 0 degrees F, or -18 degrees C; a short numerical check of this figure appears after these notes). As you can easily verify, this is not the temperature of the planet’s surface, at least most of the time. Something is wrong here. Hang in there: we’ll see the explanation for this anomaly soon, in Section 4.1.3.
  14. Gerlich and Tscheuschner (2009). This paper should be taken with a very large grain of salt (a full shaker would perhaps be even better), as the arguments Gerlich and Tscheuschner make about the “falsification” of the greenhouse effect are highly suspect. Halpern et al. (2010) argue convincingly that Gerlich and Tscheuschner fundamentally misunderstand much of the involved physics. Still, they are (at least) correct on this point: the atmospheric greenhouse effect is very different from the effect involved in glass greenhouses.
  15. Aerosols like dust, soot, and sulfate aerosols (which are a byproduct of fossil fuel combustion) modify the albedo directly and indirectly. Direct modification comes as a result of radiation scattering (increasing the albedo of the atmosphere in which they are suspended, providing a kind of “miniature shade”). Indirect modification comes as a result of their action as nuclei of cloud condensation: they make it easier for clouds to form in the atmosphere by acting as “seeds” around which water vapor can condense into clouds. This leads to increased cloud formation and average cloud lifespan (increasing albedo), but also reduced precipitation efficiency (since less water vapor is needed to form clouds, so clouds that do form are less moisture-dense). Aerosols thus play an important (and complicated) role in climate forcing: a role which is beyond the scope of our current discussion. They will be discussed in more detail when we consider feedback mechanisms in Section 4.2.
  16. Source for figures: Carbon dioxide: NOAA (2012), Methane: IPCC AR4 (2007).
  17. “ppmv” stands for “parts per million by volume.”
  18. Ozone composition varies significantly by vertical distance from the surface of the Earth, latitude, and time of year. Most ozone is concentrated in the lower-to-mid stratosphere (20-35 km above the surface of the Earth), and there is generally less ozone near the equator and more toward the poles. Ozone concentration is at its highest during the spring months (March-May and September-November for the Northern and Southern hemispheres, respectively).
  19. Though, of course, this means that the number of photons will also have to be different, unless the energy difference is accounted for in some other way.
  20. All of what follows here holds for simple atoms as well, though free atoms are relatively rare in the Earth’s atmosphere, so the discussion will be phrased in terms of molecules.
  21. For details, see Mitchell (1989)
  22. Figure adapted from Mitchell (op. cit.)
  23. The clever reader will note that this implies that the water on Earth’s surface plays a significant role in regulating the overall climate. This is absolutely true (aren’t you clever?), and the most advanced climate models are, in effect, models of atmospheric and aquatic dynamics that have been “coupled” together. So far, though, this too is a detail that is beyond the scope of our discussion (and the simple model we’ve been considering). We’ll return to this point in the next chapter.
  24. For simplification, we'll just assume that all of it passes unimpeded; this is very close to being the case.
  25. It’s important to note that increasing the sophistication of a model is a necessary but not sufficient condition for generating more accurate predictions. While it seems intuitively apparent that more sophisticated models should be better models, it is also the case that more sophisticated models generally leave more room for failure, either as a result of measurement error, because the model accounts for only half of an important feedback loop, or for some other reason. Recall the characterization of models as artifacts—in some ways, they are very like mechanical artifacts, and the old engineering adage that “anything that moves can break” applies here as well. We will revisit this point in Chapter Five when we discuss the special difficulties of modeling complex systems.
  26. McGuffie & Henderson-Sellers (2005), p. 49
  27. As we shall see, this practice of using the output of one kind of model as input for another model is characteristic of much of contemporary climate science.
  28. In addition, the policy implications of this diverse zoo of important models will be the primary topic of Chapter Seven.
  29. See, e.g., McGuffie and Henderson-Sellers (op. cit.), though this treatment is far from unique
  30. Ibid. p. 117
  31. We shall discuss these more elaborate models in detail in the next chapter.
  32. Ibid., p. 51
  33. For more on the role of intervention in science, see Woodward (2011)
  34. See, for instance, Morgan (1921)
  35. See, for instance, Pinker (2000). This position also appears at times in the work of Paul and Patricia Churchland, though in a more moderated form than Pinker’s fairly hard-line computationalism.
  36. Simple reflex behavior like the snapping of carnivorous plants (as well as basic reflexes of human beings), for instance, can be understood as a very simple mechanism of this sort, where the overall behavior is just the result of individual constituent parts operating relatively independently of one another. See Moreno, Ruiz-Mirazo, & Barandiaran (2011) for more on this.
  37. See Clark (2001) and (2003)
  38. Epigenetics is the study of how factors other than changes in the underlying molecular structure of DNA can influence the expression and heritability of phenotypic traits, and encompasses everything from the study of how environmental changes can affect the expression of different genes to the exploration of how sets of genes can function as regulatory networks within an organism, affecting each other’s behavior and expression in heritable ways without actually modifying genotypic code. As a simple example, consider the way in which restricted calorie diets have been shown to modulate the activity of the SIR2/SIRT1 genes in laboratory rats, resulting in longer life-spans without change to the actual structure of the genes in question. See Oberdoerffer et al. (2008). The most important point here is that these changes can be heritable, meaning that any account of evolution that treats evolution as a process that works strictly on genes can’t be the whole story.
  39. Moreno, Ruiz-Mirazo, & Barandiaran (2011)
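A short numerical check of the 255 K figure mentioned in note 13, offered as my own back-of-the-envelope illustration (in Python): it uses the standard zero-dimensional energy-balance relation, and the particular input values for the solar constant and planetary albedo are conventional round numbers of my own choosing rather than figures quoted from the text.

# Back-of-the-envelope check of the 255 K effective temperature,
# using T = [S(1 - albedo) / (4 * sigma)]^(1/4). Input values are
# conventional approximations, not taken from the chapter.

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W per square meter per K^4
S = 1361.0        # approximate solar constant, W per square meter
ALBEDO = 0.3      # approximate planetary albedo

temp_k = (S * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
temp_c = temp_k - 273.15
temp_f = temp_c * 9 / 5 + 32

print(f"Effective temperature: {temp_k:.0f} K "
      f"({temp_c:.0f} deg C, {temp_f:.0f} deg F)")

Running this prints an effective temperature of roughly 255 K, which is about -19 degrees C (around -1 degree F), in line with the figures quoted in note 13.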