scientific discovery. This is manifestly not the case; whatever it is that scientists are doing, it is not just a matter of inventing algorithm after algorithm. There's something distinctive about the kinds of patterns that science is after, and about the algorithms that science comes up with. In fact, we've already identified what it is: we've just almost lost sight of it as we've descended into a more technical discussion—science tries to identify patterns that hold not just in existing data, but in unobserved cases (including future and past cases) as well. Science tries to identify patterns that are projectable.
How can we articulate this requirement in such a way that it meshes with the discussion we've been having thus far? Think, to begin, of our hypothetical recipient of information once again. We want to transmit the contents of S1-2 to a third party. However, suppose that (as is almost always the case) our transmission technology is imperfect—that we have reason to expect a certain degree of signal degradation or information loss in the course of the transmission. This is true of every transmission protocol available to us: it is virtually inevitable that some amount of noise (in the information-theoretic sense, the dual of signal) will be introduced as our message travels between us and our interlocutor. How can we deal with this? Suppose we transmit the bitmap of S1-2 and our recipient receives the following sequence:
Some of the bits have been lost in transmission, and now appear as question marks—our interlocutor just isn’t sure if he’s received a one or a zero in those places. How can he correct for this? Well, suppose that he also knows that S1-2 was generated by R. That is, suppose that we’ve
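The repair strategy being described here can be sketched in a few lines of code. Since this excerpt doesn't specify the actual contents of S1-2 or the rule R, the example below assumes, purely for illustration, a hypothetical rule R on which the bits simply alternate, starting with 0; any rule that predicts the bit at each position would serve the same role.

```python
# A minimal sketch of rule-based repair, under the assumption (hypothetical,
# not from the text) that R says "the bits alternate, starting with 0".
# Lost bits arrive as '?'; a recipient who knows R restores each one to the
# bit R predicts at that position.

def repair(received, rule):
    """Replace each '?' in the received string with the bit the rule
    predicts for that position; leave intact bits alone."""
    return "".join(rule(i) if ch == "?" else ch
                   for i, ch in enumerate(received))

# Hypothetical rule R: position i holds '0' if i is even, '1' if i is odd.
R = lambda i: "01"[i % 2]

noisy = "0?010?0101"          # two bits lost in transmission
print(repair(noisy, R))       # restores the full alternating sequence
```

The point of the sketch is that the repair only works because R is projectable: knowing the pattern in the bits that did arrive licenses a claim about the bits that didn't.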