In Search of Science Behind `Complexity'

Deconstructing infamous and popular notions of Complexity Theory

Part 2


[Part 1][Part 2][Part 3]

In the first part of this essay, I described what complexity means to the measurable world of science. Although it does not have any one simple meaning, there are some common themes that repeat.

In this part, I want to describe how complexity affects our ability to understand and predict system behaviours.

Implications of complexity for understanding behaviour

One of the hallmarks of complexity is strong coupling between the parts that make up a whole. In weakly coupled systems, there is a well-known principle in physics of separation of scales (which revisits the concept of dimensionless ratios). This means that what happens at the microscopic scale is largely independent of what happens at the macroscopic scale. Or, if you like, one can separate signal from noise, or trend from fluctuation.


Noise and signal, fluctuation and trend, separate visibly in a system with weak coupling.

If a component within a system only affects its neighbours weakly, it can't mess up its neighbours' states and cause `mixing' between their behaviours. However, in strongly coupled systems, what happens at a small scale can affect what happens at a large scale and vice versa. Not only can the dog wag its tail, but the tail can wag the dog too. This is principally what is new about strong coupling.

It is easy to show that, if you take two archetypal linear systems, like simple pendulums, and allow them to communicate, they interfere with one another and can become chaotic. See this video for a demo!
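
To make this concrete, here is a minimal numerical sketch (a toy of my own, not the system in the video): two pendulums, each regular on its own, coupled by a weak spring. The constants, starting angles, and coupling strength are arbitrary assumptions; the point is only that two runs differing by a millionth of a radian soon disagree completely.

    # Two pendulums coupled by a weak spring (toy model; constants are arbitrary).
    # Each pendulum alone is regular; the coupling lets them 'mix' and, at large
    # amplitudes, nearby starting states separate rapidly.
    import math

    G, K = 9.81, 0.5          # gravity and coupling strength (assumed values)

    def deriv(s):
        th1, w1, th2, w2 = s
        return (w1, -G * math.sin(th1) + K * (th2 - th1),
                w2, -G * math.sin(th2) + K * (th1 - th2))

    def rk4(s, dt):
        # one fourth-order Runge-Kutta step for the coupled equations of motion
        def shift(state, k, h):
            return tuple(x + h * v for x, v in zip(state, k))
        k1 = deriv(s)
        k2 = deriv(shift(s, k1, dt / 2))
        k3 = deriv(shift(s, k2, dt / 2))
        k4 = deriv(shift(s, k3, dt))
        return tuple(x + dt / 6 * (a + 2 * b + 2 * c + d)
                     for x, a, b, c, d in zip(s, k1, k2, k3, k4))

    a = (1.2, 0.0, 2.4, 0.0)            # large swing angles (radians)
    b = (1.2 + 1e-6, 0.0, 2.4, 0.0)     # almost identical start
    for step in range(20001):
        if step % 5000 == 0:
            print(f"t={step * 0.01:6.1f}  separation={abs(a[0] - b[0]):.2e}")
        a, b = rk4(a, 0.01), rk4(b, 0.01)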

It was once (naively) believed that reductionism would account for all the questions science had to answer. If one continues to break down systems into smaller and smaller parts, the thinking went, one must eventually uncover the secrets of the universe. This is certainly part of a story about the world, but it's like saying that a play is equivalent to its list of main characters.

Now, it is clear that, when we reduce a system to its components, or a play to its dramatis personae, we cannot throw away the information about how to put the world back together again. What do the characters say to one another? In a feedback system (like the turbulence example), large-scale features (edges, bumps, flows, etc.) can talk back to low-level dynamics, closing a loop in a complex graph of causation. Time-dependent boundary conditions lead to non-equilibrium physics[8-14] (Nobel prize: Ilya Prigogine).

The interactions between component parts are therefore not purely local; they leak out of their containment and influence larger regions of a system. Indeed, non-local effects include some of the most interesting and important behaviours we see in the world.

Physicists are therefore somewhat conflicted about non-local effects. On the one hand, locality makes sense for understanding the transmission of microscopic influence; on the other, information and correlations can also exist over large scales, through the daisy-chaining of local correlations. Long-range order is an example, and relates to the phase transitions of material science. This is also called coherence, as in the light emitted by fluorescent tubes and lasers, which emerges from the cooperation of millions of independent atoms helping one another to get excited (the physics equivalent of mass hysteria).

Much of the empirical science comes from fluid dynamics: studies of the weather and of the flow of liquids in pipes. Even today, the flow of mixed liquids (like oil and water) through pipes is not a solved problem. This structurally simple problem exhibits unstable behaviours that can be characterized as complex. Complexity arises in so many different ways that it embodies many themes. There is no way to do them justice here, but check out the references at the end.

Major themes from studying complex systems

One particular theme is about variety. Variety, like diversity, is often said to be important in ecosystems. Variety is measured by entropy, which represents information density in a message. High information density implies a richness in the pattern by which a system explores its degrees of freedom, i.e. not just a capacity to change, but a lack of repetition (no retracing of steps). Hence, high informational entropy leads to complex messages (in the Kolmogorov sense) or trajectories, which lead to patterns that explore a wider region of the possible space of outcomes.
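
As a small illustration (my own, not from any of the references), the Shannon entropy of a message can be computed directly from its symbol frequencies: repetitive messages score low, varied ones score high. This is only a proxy for the Kolmogorov notion, which is not computable in general.

    # Shannon entropy in bits per symbol, computed from symbol frequencies.
    from collections import Counter
    from math import log2

    def entropy_bits(message):
        counts = Counter(message)
        n = len(message)
        return -sum((c / n) * log2(c / n) for c in counts.values())

    print(entropy_bits("aaaaaaaaaaaaaaaa"))   # 0.0 : pure repetition, no variety
    print(entropy_bits("abababababababab"))   # 1.0 : a simple repeating pattern
    print(entropy_bits("abcdefghijklmnop"))   # 4.0 : maximal variety over 16 symbols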

This is essentially the principle at work in Darwinism. In rich ecosystems, there is a lot of information floating around, constrained by projection onto the genetic protein code of DNA. Species mutate at random (information input) and their ecological interactions, along with other environmental boundary conditions, put selective pressures (constraints) on the trajectories of species. Thus, Darwinian evolution is simply a parallelized random walk search algorithm. What is it searching for? Stable attractors, long-lived species.
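
A minimal sketch of that claim (using a toy fitness landscape of my own invention, nothing biological): many walkers mutate at random, and a selective constraint keeps the fitter ones, so the population drifts in parallel towards a stable attractor.

    # Parallelized random-walk search: random mutation (information input) plus
    # selection (constraint). The fitness function is an arbitrary stand-in.
    import random

    def fitness(x):
        return -(x - 3.0) ** 2            # one stable 'attractor' at x = 3

    population = [random.uniform(-10, 10) for _ in range(50)]
    for generation in range(200):
        mutants = [x + random.gauss(0, 0.2) for x in population]   # random walk
        ranked = sorted(population + mutants, key=fitness, reverse=True)
        population = ranked[:50]                                   # selection pressure
    print(sum(population) / len(population))   # the walkers cluster near x = 3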

This is the principle by which the computer-science version of complexity, algorithmic complexity, is approached too (you might have heard of the complexity classes P, NP, NP-complete, PSPACE, LOGSPACE, etc.). The `N'-for-non-deterministic classes allude to applied `randomness', and so-called Monte Carlo methods exploit random guessing to short-cut lengthy linear searches. Instead of trying to program a long, deterministic, systematic search through a huge warehouse of information, one takes random guesses about where to start the search (without retracing steps), looks in that local region, and then moves on, exploring a wider area in parallel. It is a kind of formalized brainstorming, or Darwinian evolution.
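
The textbook illustration of Monte Carlo guessing (again just a sketch, not a demonstration of the complexity classes above) is estimating an area by random sampling instead of a systematic sweep:

    # Estimate pi by throwing random darts at a unit square and counting how many
    # land inside the quarter circle; random sampling replaces exhaustive sweeping.
    import random

    def estimate_pi(samples=100_000):
        hits = sum(1 for _ in range(samples)
                   if random.random() ** 2 + random.random() ** 2 <= 1.0)
        return 4.0 * hits / samples

    print(estimate_pi())   # roughly 3.14; accuracy improves slowly with more samples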

Of course, diversity, variety, or variability alone are not useful, unless these freedoms interact with constraints to bring focus. It is this interplay between freedom and constraint that leads to patterns we can recognize, and hence to controlled information. As seen in the pendulum video above, when two systems interact, they place implicit constraints on one another. A warehouse of goods (like a supermarket) has a high density of information, hence a high structural Kolmogorov complexity, but it does not lead to any new or interesting behaviour (unless perhaps you mix all the ingredients together).

So complexity has many faces, but we can learn a few heuristics, or rules of thumb:

  • In certain regimes or contexts, when conditions are right, a system might exhibit unusual emergent behaviours.
  • Emergent or collective-cooperative behaviours might (or might not) be self-organizing.
  • Chaos leads to complex trajectories (a positive Lyapunov exponent), and this is associated with strong coupling.
  • Strong coupling is associated with brittleness, or `catastrophic' behaviours, because there is strong propagation of information throughout a system.
  • Detail in structure and behaviour can come from the cooperation of many inter-agent interactions. These are often equilibrium behaviours with `detailed balance'.
  • In a strong-coupling system, there is no clean separation of scales, and hence no simple hierarchical view.
  • A complex trajectory (again, the Lyapunov exponent) is likely to explore more of the phase space than a simple stable orbit. This is sometimes useful as a source of randomness, and as a search strategy for hard (`NP'-complete) problems.
  • Coarse graining leads to loss of information. This happens when agencies at different scales interact with one another, projecting away information.
  • Self-organization: patterns emerge in the large-scale structure of a system through non-linear feedback and long-range order. The same effect can be seen in rule-based software systems, where interactions and algorithmic boundary conditions are imposed by software rules, e.g. cellular automata like Conway's Game of Life (see the sketch after this list).
  • Auto-feedback (generalization of autopoiesis in cybernetics) is expressed through non-linearity in mathematics. Information gets recycled, and errors can be compounded or corrected, depending on whether there is convergence or divergence.
  • Catastrophes and phase transitions are changes to collective (emergent) states that lead to qualitatively different behaviour. These can be self-organized states, created through bursts of complex interaction, which then stabilize.
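
For the self-organization point above, here is a minimal sketch of Conway's Game of Life: purely local rules, yet a small pattern (a glider) organizes itself into steady large-scale motion. The grid size and starting pattern are arbitrary choices.

    # Conway's Game of Life on a wrapping grid: `live` is a set of (x, y) cells.
    from collections import Counter

    def step(live, width, height):
        # count live neighbours of every cell adjacent to a live cell
        counts = Counter(((x + dx) % width, (y + dy) % height)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # birth on exactly 3 neighbours; survival on 2 or 3
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # a self-propelling pattern
    cells = glider
    for _ in range(8):                                  # 8 steps shift it by (2, 2)
        cells = step(cells, 20, 20)
    print(sorted(cells))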

Complexity as an umbrella for everything

The complexity narrative is already extensive. Observe the incomplete figure from the Wikipedia page on complex systems:
(image from Wikipedia)

What the diagram shows, as much as anything else, is a subject rife with interconnections, and perhaps one distorted a little by hero-worship: just as relativity has Einstein, so complexity has its historical figures, and different authors have their favourites (Wiener, Shannon, Kolmogorov, Mandelbrot, etc., even David Byrne, though not the one from Talking Heads).

The danger in coming to complexity from a single area (e.g. hoping that it's all just biology, or cybernetics) is that one expects, as in the world of linear systems, to be able to apply learning from one area to another. This is harder to justify in a strong-coupling system. How do we know what to include and exclude? Specific conditions matter more in a strongly coupled system, so generalizations don't obviously work.

Is complexity a theory? The purpose of a theory is to pose certain questions within a consistent set of ideas. Theory implies something reasoned, with explanatory value. Complexity theory is largely about the limits of explanation, i.e. about uncertainty. Science itself is about uncertainty management through the processes of observation and reasoning. Perhaps complexity is necessarily a messy theory, and perhaps it has yet to find its clarifying principles.

Does causation disappear in a complex system?

One myth about complex systems is that causation (i.e. the law of `cause and effect') disappears from them, as the Newtonian clockwork universe evaporates in a puff of mysticism.

It has become popular to say that Newton was wrong about determinism. This is not a fair or accurate thing to say. The problems studied by Newton are characterized by a strong degree of determinism. It is more accurate to say that, post-Newton, science probed systems where essential indeterminism was increasingly evident. Newton's work has had a pervasive influence on our culture, however, and this has made it difficult to escape from Newtonian thinking as the bounds of experience have shifted. Non-linear behaviours are also causal in the same sense, but the underlying causation rapidly erases its tracks.

Similarly, modern lore on statistics is responsible for provoking a whole new level of confusion about causation. If there are any doubts about the existence of causation, a simple punch on the nose should clear them up.

Complexity theory does not claim that there is no causation, only that it forms a network that even has circular paths, due to feedback. Moreover, non-linearity can amplify or dilute information from interacting sources, increasing or decreasing the efficacy of causal information channels.

There is a difference between a system of many variables and a regime of complex behaviour (far from equilibrium). Close to equilibrium, almost any system is predictable; at equilibrium itself there is no causation at all, because there is no change. System evolution, at the scale of an equilibrium, ceases altogether.

Confusion arises because causation is sometimes over-simplified to mean: initial state leads to final state, with long-range interpolation. The effect of strongly-coupled systems is that there is not only a lack of separation between scales, but also a lack of separation between boundary conditions and propagation of solutions. Feedback systems quickly forget their initial state, due to this mixing, and tend to adopt self-consistent configurations that are local eigenstates of the network of interactions. The trajectory of an agent in the system at time t+Δt is not simply based on an equation of motion and the trajectory at time t, but also on the time-dependent boundary conditions generated by the trajectory acting on itself (turbulence). Thus the basis for repeated prediction is lost, and going backwards to predict how you got into a state requires increasing amounts of memory.
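
A one-line feedback rule already shows this loss of memory. In the logistic map (a standard toy example, not anything specific to the systems discussed above), the output is fed straight back in as the next input, and two all-but-identical starting points separate exponentially:

    # Logistic map x -> r*x*(1-x): output recycled as input (feedback).
    r = 4.0                        # a parameter value in the chaotic regime
    x, y = 0.3, 0.3 + 1e-10        # two almost identical initial states
    for step in range(1, 61):
        x, y = r * x * (1 - x), r * y * (1 - y)
        if step % 10 == 0:
            print(step, abs(x - y))
    # the separation grows from 1e-10 to order 1 within a few dozen steps,
    # so the common initial state is effectively forgotten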

Video visualization: Don't believe that isolated, linear couplings lead to Markov processes in which one can trace causation? Take a look at this video of reversible laminar flow. Although the system does not remember its past state, that state can be reconstructed, because the layers of fluid are only weakly coupled and hence the evolution is steady. In a turbulent, chaotic flow, the layers mix continuously and the evolution changes at every time step, making reconstruction completely impractical. The analogue of the latter in digital systems is used for irreversible cryptographic hashing.
The methodology of physics, since Newton and Leibniz, has been to break up trajectories into isolated differential increments, over which dynamical quantities are conserved, i.e. bodies are essentially free along trajectories. This implies that the equations of motion, which propagate these incremental trajectories, take the form of reversible rules. This does not mean that physics is fundamentally reversible, but it leads some to claim that `physics can't predict causation because its laws are reversible'. This is incorrect. Only the generators of constrained symmetries are reversible, but they are only half the story! They do not make up the full physics. Equations of motion express only the capacity to change (degrees of freedom matched with constraints), but they are vacuous without the input of boundary conditions. This is where all the information about causation and scenarios enters physics. There is no paradox. See the excellent talk by George Ellis[26].

Next ...

In the final part of this essay, I want to explore how (if at all) a knowledge of complexity's complexities can inform our understanding of applied fields.


[Part 1][Part 2][Part 3]

Wed Feb 18 13:27:45 CET 2015

Popular Book References

[1] Norbert Wiener, Cybernetics: Or Control and Communication in the Animal and the Machine (1948)
[2] Claude Shannon and Warren Weaver, The Mathematical Theory of Communication (1948), reprinted 1949
[3] William Ross Ashby, An Introduction to Cybernetics (1956)
[4] Arun V. Holden, Chaos (1986)
[5] P.C.W. Davies, The New Physics (1989)
[6] Thomas Cover and Joy A Thomas, Elements of Information Theory (1991)
[7] Stuart Kauffman, The Origins of Order (1993)
[8] Ilya Prigogine, The End of Certainty: Time, Chaos and the new laws of nature (trans. La Fin des Certitudes) (1996)
[9] Robert Axelrod, The Complexity of Cooperation, Agent-based models of competition and collaboration (1997)
[10] D. Dasgupta, Artificial Immune Systems and Their Applications (1999)
[11] Eric Bonabeau, Marco Dorigo, and Guy Teraulaz, Swarm Intelligence (1999)
[12] Stuart Kauffman, Investigations (2000)
[13] Steven Johnson, Emergence (2001)
[14] Albert Laszlo Barabasi, Linked (2002)
[15] P.C.W. Davies and Henrik Gregersen, Information and the Nature of Reality (2010)
[16] Mark Burgess, In Search of Certainty (2013)
[++] Melanie Mitchell, Complexity - A Guided Tour (2009) - a nice overview somewhat different to mine

Academic references

[17] J. Von Neumann, The General and Logical Theory of Automata. (1951)
[18] Seth Lloyd, Measures of Complexity - a non-exhaustive list
[19] Murray Gell-Mann and Seth Lloyd, Effective Complexity
[20] Stephen Wolfram, Universality and Complexity in Cellular Automata

Web references

[21] Cynefin framework for sense-making of complex knowledge
[22] Cellular automaton papers by S. Wolfram
[23] Notes on Melanie Mitchell: defining and measuring complexity
[24] How Can the Study of Complexity Transform Our Understanding of the World?
[25] Algorithmic complexity
[26] The nature of causation in complex systems (George Ellis)
[27] John Allspaw, Translations Between Domains: David Woods
[28] Richard Cook, How Complex Systems Fail (Medicine)
[29] Combining complexity with narrative research.

Risk, and human factors

[30] J. Tainter, The Collapse of Complex Societies (1988)
[31] D. Dorner, The Logic of Failure (1989)
[32] J. Reason, Human Error (1990)
[33] S. Dekker, Drift into Failure (2011)