Shannon entropy, then, can’t be quite what we’re looking for, but neither does it miss the mark entirely. On the face of it, there’s some relationship between Shannon entropy and complexity, but that relationship must be more nuanced than simple identity, or even proportionality. Complex systems may well be those with a particular entropic profile, but if so, the profile is something subtler than just “high entropy” or “low entropy.” Indeed, if anything, there seems to be a kind of “sweet spot” between maximal and minimal Shannon entropy: systems represented by messages with too much Shannon entropy tend not to be complex (since they’re randomly organized), and systems represented by messages with too little Shannon entropy tend not to be complex (since they’re totally homogeneous). This is a tantalizing observation: there’s a kind of Goldilocks zone here. Why? What’s the significance of that sweet spot? We will return to this question in Section 2.1.5. For now, consider one last candidate account of complexity from the existing literature.
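The Goldilocks observation can be made concrete with a small calculation. The sketch below (the function name and example strings are illustrative choices, not drawn from the literature) computes the per-symbol Shannon entropy of a message’s empirical symbol distribution. Note one caveat: this first-order measure looks only at symbol frequencies, so a perfectly periodic message can still score an intermediate entropy even though it is entirely predictable.

```python
from collections import Counter
from math import log2

def shannon_entropy(msg: str) -> float:
    """Per-symbol Shannon entropy (in bits) of a message's
    empirical symbol-frequency distribution: H = -sum(p * log2(p))."""
    counts = Counter(msg)
    n = len(msg)
    return sum(-(c / n) * log2(c / n) for c in counts.values())

# A totally homogeneous message: minimal entropy (0 bits/symbol).
print(shannon_entropy("AAAAAAAAAAAAAAAA"))  # → 0.0

# A structured but varied message: intermediate entropy.
print(shannon_entropy("ABABABACABABABAC"))

# A message using four symbols uniformly: maximal entropy
# for a four-letter alphabet (2 bits/symbol).
print(shannon_entropy("ACGTACGTACGTACGT"))  # → 2.0
```

The extremes correspond to the two failure modes in the text: the homogeneous message scores 0 and the uniformly random-looking one scores the maximum, while candidates for complexity fall somewhere in between.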
2.1.4 Complexity as Fractal Dimension
The last candidate definition for complexity that we’ll examine here is also probably the least intuitive. The notion of a fractal was originally introduced as a purely geometric concept by the French-American mathematician Benoît Mandelbrot, but there have been a number of attempts to connect the abstract mathematical character of the fractal to the ostensibly “fractal-like” structure of certain natural systems. Many parts of nature are fractal-like in the sense of displaying a certain degree of what’s sometimes called “statistical self-similarity.” Since we’re primarily interested in real physical systems here (rather than mathematical models), it makes sense to start with that