
“message” that contains nothing but randomly generated characters? If we think of the message in terms of how “surprising” it is, the answer is obvious: a randomly generated string has maximally high Shannon entropy. That’s a problem if we’re to appeal to Shannon entropy to characterize complexity: we don’t want it to turn out that purely random messages are rated as even more complex than messages with dense, novel information content, but that’s precisely what a straight appeal to Shannon entropy would entail.
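
To see why, recall the standard definition of Shannon entropy (glossed here in textbook notation, which is not drawn from the page itself): for a source emitting symbols from an alphabet of n characters with probabilities p_1, ..., p_n,

    H = -\sum_{i=1}^{n} p_i \log_2 p_i

and H is maximized exactly when the distribution is uniform (p_i = 1/n), in which case H = \log_2 n. A string of randomly generated characters approximates the uniform case, so no message over the same alphabet can score higher.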

Why not? What’s the problem with calling a purely random message more complex? To see this point, let’s consider a more real-world example. If we want Shannon entropy to work as a straightforward measure of complexity, there must be a tight correlation between an increase (or decrease) in Shannon entropy and an increase (or decrease) in complexity. That is: we need it to be the case that complexity is proportional to Shannon entropy; call this the correlation condition. I don’t think this condition is actually satisfied, though: think (to begin) of the difference between my brain at some time t and my brain at some later time t1. Even supposing that we can easily (and uncontroversially) find a way to represent the physical state of my brain as something like a message,[1] it seems clear that we can construct a case where measuring Shannon entropy isn’t going to give us a reliable guide to complexity. Here is such a case.
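
As a rough numerical illustration of how the correlation condition can fail, here is a minimal sketch in Python (the function name, the sample strings, and the decision to treat each character as an independent symbol are illustrative choices, not anything from the text itself):

    import math
    import random
    import string
    from collections import Counter

    def shannon_entropy(message):
        # Empirical per-character Shannon entropy (bits per symbol),
        # ignoring any sequential structure in the message.
        counts = Counter(message)
        n = len(message)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # A structured English message vs. random noise of the same length.
    english = "a randomly generated string has maximally high shannon entropy " * 8
    noise = "".join(random.choice(string.ascii_lowercase + " ")
                    for _ in range(len(english)))

    print(f"English text: {shannon_entropy(english):.2f} bits/char")  # roughly 4.0
    print(f"Random noise: {shannon_entropy(noise):.2f} bits/char")    # near log2(27) ~ 4.75

The random string reliably scores higher, though it is intuitively no more complex than meaningful prose; this is just the point pressed above.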

Suppose that at t, my brain is more or less as it is now—(mostly) functional, alive, and doing its job of regulating the rest of the systems in my body. Now, suppose that in the time


  1. Mitchell (op. cit.) points out that if we’re to use any measure of this sort to define complexity, anything we wish to appropriately call “complex” must be put into a form for which Shannon entropy can be calculated—that is, it has to be put into the form of a message. This works just fine for speech, but it isn’t immediately obvious how we might go about re-describing (say) the brain of a human and the brain of an ant as messages such that we can calculate their Shannon entropy. This problem may not be insurmountable (I’ll argue in 2.2 that it can indeed be surmounted), but it is still worth noting.
