kind of parameterization and spin-up procedure discussed above can be seen, in this more critical light, as a pernicious form of curve-fitting: the CGCMs are designed to generate the predictions that they do, with model builders simply adjusting them until they yield the desired outputs.
However, as Oreskes argues, even the basic situation is more complicated than the naive Popperian view implies: even in uncontroversial cases, the relationship between observation and theory is a nuanced (and often idiosyncratic) one. It is often non-trivial to decide whether, in light of some new evidence, we ought to discard or merely refine a given model. Oreskes’ discussion cites the absence of observable stellar parallax as a problem for Copernican cosmology, along with Lord Kelvin’s proposed refutation of old-earth gradualism in geology and biology--developed in ignorance of radioactivity as a source of heat energy--as leading cases, but we need not reach so far back in history to see the point. The faster-than-light neutrino anomaly of 2011-2012 is a perfect illustration of the difficulty. In 2011, the OPERA collaboration announced that it had apparently observed a class of subatomic particles called “neutrinos” moving faster than light: neutrinos generated at CERN near Geneva seemed to arrive at the OPERA detector at Gran Sasso in Italy sooner than light could have made the trip. If accurate, this observation would have had an enormous impact on what we thought we knew about physics: light’s role in defining the upper limit on the speed of information transmission is a direct consequence of special relativity, and follows from geometric features of spacetime described by general relativity. However, this experimental result was not taken as evidence falsifying either of those theories: it was greeted with (appropriate) skepticism and subjected to careful analysis. In the end, the experimenters found that the anomaly was due to a loose fiber optic cable connection, which altered the recorded timings by just enough to yield a significantly erroneous result.
We might worry even in standard cases, that is, that committed scientists might appropriately