things) that “Print *S*” is the shortest possible program that will reproduce *S*. Thus, a maximally random (incompressible) string of infinite length is infinitely complex, as the shortest program that produces it is just the string itself.

Suppose that rather than think of individual strings, though, we shift our attention to *ensembles* of strings that share certain common features. In the language of Gell-Mann and Lloyd, suppose that rather than think about the shortest program that would reproduce our target string exactly, we think about the shortest program that would reproduce the *ensemble* of strings which “best represents” the target string^{[1]}. Gell-Mann argues that the best representative of a random string is the uniform ensemble—that is, the ensemble of strings that assigns all possible strings equal probability. This is supposed to resolve the compressibility issues in the traditional information-theoretic account of complexity. It’s easy to see why: suppose we want to print a random string of length *n*. Rather than printing *n* characters directly, Gell-Mann proposes that we instead write a program that prints a random character *n* times. The program to do this is relatively short, and so the effective complexity of a random string will rate as being quite low, despite the fact that individual random strings are incompressible. Gell-Mann is capitalizing on a higher-order regularity: the fact that all random strings are, in a certain respect, similar to one another. While there’s no pattern to be found *within* each string, this higher-order similarity lets us produce a string that is in some sense “typical” of its type with relative ease.
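Gell-Mann's contrast can be made concrete with a minimal Python sketch (mine, not from the text): a constant-length program suffices to emit a *typical* member of the uniform ensemble of length-*n* strings, whereas reproducing one particular random string exactly requires embedding all *n* characters in the program itself.

```python
import random
import string

def typical_random_string(n, seed=None):
    """A 'print a random character n times' program: its length is
    fixed, however large n grows, yet its output is typical of the
    uniform ensemble over length-n strings."""
    rng = random.Random(seed)
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(n))

def print_exact(s):
    """Reproducing a particular incompressible string: the 'program'
    must contain the string verbatim, so its length grows with len(s)."""
    return s
```

The point of the sketch is that `typical_random_string` has constant description length while `print_exact` does not; this is why a random string's effective complexity is low even though its Kolmogorov complexity is maximal.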

Conversely, a string with a certain sort of internal structure—one with a large number of patterns—is a member of a far more restricted ensemble. The collected work of Shakespeare (to use one of Gell-Mann’s own examples) rates as highly complex because it (considered as a

1. Gell-Mann and Lloyd (2003). See also Foley and Oliver (2011).
