a “sweet spot”—middling Shannon entropy seems to correspond to maximal complexity in the associated system.
In other words, identifying complexity with incompressibility leads to an immediate conflict with our intuitions. A completely random string—a string with no internal structure or correlation between individual bits—will, on this account, be said to be highly complex. This doesn’t at all accord with our intuitions about what complex systems look like; whatever complexity is, a box of gas at perfect thermodynamic equilibrium sure doesn’t have it. This observation has led a number of information theorists and computer scientists to look for a refinement of the naïve information-theoretic account. Several authors have independently succeeded in this attempt, producing a successor theory called “effective complexity.” Let’s get a brief sense of the formalism behind this view (and how it resolves the problem of treating random strings as highly complex), and then examine how it relates to the account of dynamical complexity given above.
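The point about random strings can be made vivid with a small compression experiment. The sketch below (illustrative only; it uses off-the-shelf `zlib` compression as a rough stand-in for the shortest-description measure, and the variable names are my own) shows that a highly ordered string compresses to almost nothing, while a random string of the same length barely compresses at all—so a naive compressibility measure rates the random string as far more “complex”:

```python
import random
import zlib

# Rough proxy for information content: the length of the compressed string.
random.seed(0)

structured = b"01" * 5000                                        # 10,000 bytes, fully patterned
random_str = bytes(random.getrandbits(8) for _ in range(10000))  # 10,000 bytes, no structure

len_structured = len(zlib.compress(structured))
len_random = len(zlib.compress(random_str))

print(len_structured)  # tiny: the repeating pattern is captured in a few bytes
print(len_random)      # near 10,000: the random string is essentially incompressible

assert len_structured < len_random
```

On the naive account, the second string—the one indistinguishable from noise—comes out as the more complex of the two, which is exactly the intuition-violating result the text describes.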
The central move from the information-content account of complexity that’s built on the back of Shannon entropy to the notion of effective complexity is analogous to the move from thinking about particular strings to thinking about ensembles of strings. One way of presenting the standard Shannon account of complexity associates the complexity of a string with the length of the shortest computer program that will print the string and then halt. The incompressibility problem is clear here as well: the shortest computer program that will print a random string just is the random string itself. When we say that a string S is incompressible, we’re saying (among other
- A system like that could be appropriately represented as a random string, since part of what it means for a system to be at thermodynamic equilibrium is for it to have the maximum possible entropy for a system so constituted. Translated into a bit-string, this yields a random sequence.