In Search of Science Behind `Complexity'

Deconstructing infamous and popular notions of Complexity Theory

Part 3


[Part 1][Part 2][Part 3]

Complexity and self-organization are common words that anyone with some authority can adopt---but they do not always represent the body of theory that has become `complexity theory'. We need to remember the technical meaning, grounded in measurement, independent of who uses the words.

In the final part of this essay, I want to address why it might be reasonable in some cases to draw broader conclusions about complexity in human-machine systems, but also why we should not stray too far away from the origins of complexity and the firm scientific footing it provides.

What complexity teaches us - is it re-usable?

One of the things you've probably picked up from this essay so far is that terminology and conceptual nomenclature are both frail and seductive.

Are systems complex, or merely their behaviours? I have made my preference clear in voting for complex behaviours and properties. I think it is a reasonably unambiguous usage. Sometimes, authors try to distinguish between complicated and complex.

  • Complex structure
  • Complex behaviour
  • Disordered structure (complicated but linear)

One can ask: is there a difference between disorder and complexity of structure? I think there is. Something can be messy without there being any relationship between the parts. If we demolish a building, it will stay broken, because the parts are simply passive blocks. However, if we break up a slime mould, the pieces will reassemble, because they are active in their environments, i.e. interacting.

Complexity of structure implies inter-relationships, and this can lead to complexity of behaviour (e.g. think of a rain forest), by introducing a mesh of communication channels, and overlapping degrees of freedom. It can also lead to simple stability, by allowing self-stabilizing feedback loops. Conversely, complex behaviour leads to non-uniform structure[8,11,13], as spacetime topology is made non-simply connected by feedback.

Disordered systems (those without large scale uniformity) can be messy, but not complex in the sense of complexity theory.

These are straightforward ideas, but there is a tendency to grasp onto certain non-deterministic `truths' (like chaos, or the butterfly effect) and then use them to argue deterministic conclusions about a human problem-solving context. This is potentially dangerous, especially when used to argue a case by force of authority.

Philosopher John Searle wrote in his book Intentionality: `...anyone who attempts to write too clearly runs the risk of being understood too quickly', something of which he could not be accused. This remark, however, hints at why the borrowing of popular concepts is potentially dangerous: a little knowledge can be a dangerous thing.

A common pattern is this:

  • Take some heuristic results associated with one of the areas mentioned
  • Generalize to a similar scenario in business or information systems
  • Assert a recipe for success

Is this valid? One should not break the law of scales, unless there is real non-linearity. Strictly speaking, inference from one system to another requires dynamical similarity.

Can we reuse lessons learnt in one system to inform another? In physics, dimensional analysis tells us that we can do so if there is dynamical similarity between them. Reynolds showed that this applies even in non-linear systems.
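
To make this concrete, here is a minimal sketch in Python (with illustrative, water-like values that I have assumed, not measured data) of the standard check: two systems at different scales can share lessons when their dimensionless ratios coincide.

```python
# A minimal sketch of dynamical similarity (illustrative values, not data).

def reynolds(density, velocity, length, viscosity):
    """Dimensionless Reynolds number: Re = rho * v * L / mu."""
    return density * velocity * length / viscosity

# A small-scale model and a full-scale system: different sizes and speeds...
model = reynolds(density=1000.0, velocity=2.0, length=0.1, viscosity=1.0e-3)
full  = reynolds(density=1000.0, velocity=0.2, length=1.0, viscosity=1.0e-3)

# ...but the same dimensionless ratio (200000 in both cases), so the flows
# are dynamically similar: results measured on one carry over to the other.
print(model, full)
```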

Applying the idea of complexity in other fields

Heuristic complexity theory

So to everyday life. Let me introduce something that I observe on a mundane level: the cherry-picking of concepts from complexity theory by lay-persons, in a heuristic way, to explain situations, failures and unpredictability. Heuristic references to complexity tend to reduce to a binary classification: either `not complex' or `complex'. Often the classification is chosen to justify the points the author wants to make. In the complex case, one can argue about emergence and loss of causation, etc. In some cases it is used to argue that there is `no cause' for a phenomenon, because it is hard to trace the trajectory of a system.

If we can't measure complexity, then who gets to decide whether something is complex or not? And to what degree? Is it binary, or nuanced? If there are different kinds of complexity, with different consequences, then we need to know how to identify them - this matters - it cannot be the case that a system suddenly behaves differently because one engineer decides to call it complex. Either there is an objective phenomenon there or there isn't. The list of definitions above shows that there are several clear technical meanings, which we should not muddle up.

One approach is to try a best-guess approach: one defines a system as complex if the agencies within it modify what we perceive as the system as they evolve (sometimes called co-evolution). This is inspired by the idea of the complex adaptive systems mentioned above. It is hard to either agree or disagree with such a definition: on the one hand, it follows the form of a rigorous definition of complexity; on the other, it assumes that one can identify `agents' at will. The scaling of agency, which would make such an argument rigorous, is not well understood, precisely because the boundaries between agencies are often unclear when it matters the most. If you start with atomic agents (say, in a cellular automaton), then you have a simple base-line, but if you are talking about populations there is room for doubt.

We see that there are, in fact, many kinds or dimensions to complexity, but they all relate to the composition of the whole from the parts, in some way. They are about the information that describes a system's state. This could only matter if the nature of the composition alters the properties of the whole under certain circumstances. Thus we also need to understand what those circumstances might be.

So a scientific question would be: under what circumstances can we expect systems that are measurably complex to exhibit special behaviours? And can we characterize those behaviours and interpret their semantics as helpful or unhelpful in a given technological context? This is something that can be attacked by a scientific method.

Is there any predictive value to understanding complex systems? Chaos theory tells us that we cannot predict non-linear system behaviour in the long run. Moreover, if we keep bundling more and more aspects of strongly coupled systems into `complexity theory', it becomes a jumble that says nothing. As the scope of a theory grows, it becomes harder to maintain coherence.

In the physical sciences, we know quite well when results from one scenario apply to another. This is the condition of dynamical similarity (championed by Newton and Reynolds), which is characterized by dimensionless ratios. A conclusion at one scale normally does not apply to another scale, unless it is scale invariant (fractal). Almost no physical systems are scale invariant in reality.

Reasoning often goes like this: I want X to be true, and I need to back it up with some arguments. I like what I hear in this complexity stuff. The words resonate with my particular understanding. So I am going to use it as proof of my correctness. This is called confirmation bias, or simply pre-judgement.

There are practitioners who try to apply the ideas of complex causation self-consistently without going off the rails. There is a field of cognitive complexity around this, which is loosely based on Complex Adaptive Systems.

The hallmark of complexity is unpredictability, but ironically this characterization is itself too simple, since it abstracts away the details. It is difficult to tie unpredictability to causal factors.

System failure analysis attributed to complexity

One area where this is of practical importance is in understanding risk in the presence of Byzantine failure modes. This has been of particular interest in the realm of human factors, and in the discussion around the role of `human error' in processes where disasters strike[27,30-33]. When causation is multi-variate and even circular, how do we begin to address the safety of such processes?

Do failures have anything to do with complexity? There is certainly an argument that they might, again by allusion to complex adaptive systems. When human cognition is involved, other factors come to bear too. Our understanding of a situation feeds back into the situation itself, leading to a circularity that may or may not be formally complex. From a scientific perspective, we need to exercise caution before pronouncing on the role of complexity.

In principle these are separable issues: complexity can lead to instability and hence unpredictability (see [27,28] for a popular discussion). The characterization of failure is a semantic judgement, however: it is not a property of a system's dynamics. Complexity is about dynamics, not semantics; however, the ability to define consistent semantics depends on stable dynamics, so it must enter the picture. (`Dynamics always trump semantics' in a system [16].) If a plane falls out of the sky, it is not because the laws of physics failed, but because we found a behaviour that was deemed undesirable by users, i.e. one with incorrect semantics. This is an important distinction to remember. It is the lot of technologists and designers to form value judgements on outcomes, not of science, which is designed to have no opinion at all.

Stimulus and reactivity in systems is not enough to claim complexity: plenty of responses are linear, even when placed into `complex' networks (there is a whole branch of physics known as linear response theory). The eigenvalue problem (e.g. principal component analysis) demonstrates this. Even linear systems can fail catastrophically, e.g. by component removal. Complexity may or may not introduce new failure modes. What we can say is that it makes them potentially hard to predict, because it makes causation hard to trace. Network causation can either stabilize (as eigenstates) or destabilize (like branching or bifurcation with single points of failure).
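
As a toy illustration of that point (random couplings assumed for the sake of the sketch, not a model of any real system), the stability of a densely networked but linear system can be read off directly from the eigenvalues of its coupling matrix:

```python
# A toy sketch (assumed random couplings, not a real system): a densely
# networked but *linear* system dx/dt = A x is stable precisely when every
# eigenvalue of the coupling matrix A has a negative real part.
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = rng.normal(scale=0.1, size=(n, n))  # dense network of linear couplings
np.fill_diagonal(A, -1.0)               # local damping at every node

eigenvalues = np.linalg.eigvals(A)
print("max Re(lambda) =", eigenvalues.real.max())
# If the maximum real part is negative, every perturbation decays back to
# equilibrium along the eigenmodes: heavily networked, yet entirely
# predictable -- no complexity required.
```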

What is important to me about these writings is not whether the systems measure up as being significantly complex in the sense of complexity theory. If one takes away the word complex, the ideas are still just as valid. Without knowing more about the details, one would have to conclude that the writings were inspired by topics often discussed in connection with complex systems, without necessarily a strong dependence on them.

Paradoxically, complexity might also make faults easier to find (whether you find them in testing or in production). Complex behaviours contain more information than linear behaviours, so they tend to explore a parameter space more efficiently. If there is an error to be found, it is possible you might hit it more easily. Sometimes this is good, e.g. in testing. For instance, complexity (in the form of randomness) is used to intentionally solve NP search problems about harmless information, but when systems (like aircraft) unintentionally solve the search for a weakness in their own composition, the consequences can be measured in human lives.
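
A minimal sketch of that intentional use of randomness, assuming a toy 3-SAT instance with a planted solution and a WalkSAT-style flipping heuristic (both are my choices for illustration):

```python
# A minimal sketch of randomness attacking a search problem: a random-walk
# (WalkSAT-style) search on a toy 3-SAT instance with a planted solution.
import random

random.seed(0)
n_vars, n_clauses = 20, 80
planted = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}

def random_clause():
    """Three random literals, guaranteed consistent with the planted answer."""
    while True:
        trio = random.sample(range(1, n_vars + 1), 3)
        clause = [v if random.random() < 0.5 else -v for v in trio]
        if any((lit > 0) == planted[abs(lit)] for lit in clause):
            return clause

clauses = [random_clause() for _ in range(n_clauses)]

def satisfied(clause, assign):
    return any((lit > 0) == assign[abs(lit)] for lit in clause)

assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
for step in range(100000):
    unsat = [c for c in clauses if not satisfied(c, assign)]
    if not unsat:
        print("solved after", step, "random flips")
        break
    # flip a random variable from a random unsatisfied clause
    v = abs(random.choice(random.choice(unsat)))
    assign[v] = not assign[v]
```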

This has implications for system testing. The effect of strong coupling on causation and feedback suggests that unit testing is not where the action takes place in a dynamical system. Unit testing tests semantics (intended function), not whether the function was well founded. Isolated tests cannot tell us much about the dynamical behaviour of a component in a `complex' system, i.e. one in which the effects of feedback have positive Lyapunov exponents.
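
To illustrate what a positive Lyapunov exponent means in practice, here is a minimal sketch estimating the exponent of the logistic map (a standard textbook system; the parameter values are chosen for illustration):

```python
# A minimal sketch: estimating the largest Lyapunov exponent of the logistic
# map x -> r x (1 - x). A positive exponent means feedback amplifies tiny
# differences exponentially -- the regime in which an isolated unit test
# says little about whole-system behaviour.
import math

def lyapunov(r, x=0.3, n=100000, burn=1000):
    """Average log-derivative log|r(1 - 2x)| along the orbit."""
    total = 0.0
    for i in range(n):
        x = r * x * (1.0 - x)
        if i >= burn:
            total += math.log(abs(r * (1.0 - 2.0 * x)))
    return total / (n - burn)

print(lyapunov(2.9))  # negative: stable fixed point, predictable
print(lyapunov(4.0))  # positive (about ln 2 = 0.69): chaotic feedback
```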

As a final point, complexity has occasionally been abused as a way of arguing for the avoidance of blame, by loss of causation. I feel this is a misunderstanding. Causation and blame are different things; and the reasons not to assign blame are rooted in semantics, not in dynamics.

Attributing causal factors is not impossible (at least in principle), but might be much too expensive to be worth the effort (recall that it requires exponentially growing memory in a non-linear system). Unless a thing is isolated and `atomic' in nature, there will not be one single channel of causation but many. There can still be a dominant channel however, which tips the balance from stability to instability.

Blame, on the other hand, is a retributive feedback to someone, which may stabilize or destabilize a human interaction, depending on the person. Experience suggests that blaming a person is quite likely to destabilize their behaviour in a western culture, perhaps not in a totalitarian regime (that hypothesis requires testing). These questions belong to group psychology and humanism, not to fundamental issues of causation.

Saying that a root cause does not exist is reasonable in any system that coarse-grains information, whether strongly or weakly coupled, linear or non-linear. This is not the same as saying there is no causation. The situation is worse for complex systems, as they forget their past states quickly, drowned out by noise and the counterforce of environmental mixing. So perhaps a more relevant question to ask is: what interactions are to blame for this loss of traceability, i.e. the erasure of information?

Systemically (coldly), it does not make sense to avoid identifying `blame' in the sense of `reason', if you are trying to improve. To say scientifically that there is no simple cause, you would have to map out your system and prove that it is complex, or simply measure correlations to show it. The cost of that is generally too high to bother with.

We might want to avoid the uncomfortable responsibility of pointing at a person, rather than an inanimate part that can't feel humiliation, but that is more emotional than rational. (The Norwegian government has recently tried to replace competence with `raushet' (roughly, generosity of spirit)---in case of error, praise other qualities and give a hug for forgiveness---as part of their system of values. These trends tend to come and go in politics, and mistakes are usually made in both directions of the pendulum.)

Separation of concerns (modularity) is IT's rational answer to this. However, it is often naively applied. We interpret separation of concerns as human delegation (divide and conquer) or as tidiness. As Tainter points out in The Collapse of Complex Societies[30], this leads to a high economic cost which can overwhelm a system, leading to economic catastrophe! What we should really be doing is separating along dynamical lines: by scale, and by thinking carefully about cross-module influences, such as dependencies. This includes stigmergic dependence, e.g. mutual dependence on data (such as a common database).

Ironically, teamwork is the opposite human-process answer to this. Instead of separating causation, one tries to average away any conflicting semantics or dynamics by mixing. This raises the question of whether humans working together are dominant causal factors in perceived faults.

Complexity in human systems

More generally, can we use a knowledge of complexity to say something about behaviour in human systems? This doesn't seem to be an easy question to answer. Certainly the popular consciousness has grasped onto complexity theory, but more in a heuristic popular way than by fundamental intellectual curiosity, as already mentioned above.

I am on the record as saying that the greatest challenge of the next decade is to understand knowledge management. One of the most important ways we humans learn and operate is through story-telling, or narratives. Complexity naturally makes narratives a challenge. (I have a lot of respect for the work of David Snowden[29] here, though his terminology and dry humour sometimes get in the way of seeing compatibility with the precise language of dynamical systems.) One can use a basic understanding of the ways in which causation gets muddled by implicit non-linearity, and try to use this awareness to make credible choices during decision-making. As I mentioned above, this places the burden of sense on an expert who can apply semantic expertise in a dynamically savvy way. This often appears to involve a magic wand, even when well-reasoned, because it presupposes a special blend of knowledge about a context.

As far as science is concerned, there does not seem to be a simple and consistent way to measure information, complexity, or predictive exponents (Lyapunov, Hurst) in human behaviours and systems. I can say this from experience, because I have tried from the bottom up, through the study of time-series and network influences in human-computer systems (see [16] for a summary of the studies). This contributes to making it something of a black art.
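
The estimation machinery itself is simple enough; the hard part is the data. Here is a minimal sketch of a rescaled-range (R/S) Hurst estimator, run on synthetic noise rather than on real human-system measurements:

```python
# A minimal sketch (synthetic data, not human measurements): estimating a
# Hurst exponent by rescaled-range (R/S) analysis. Uncorrelated noise gives
# H near 0.5; persistent (trend-reinforcing) series give H > 0.5.
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128, 256)):
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())   # cumulative deviation from mean
            if w.std() > 0:
                rs.append((dev.max() - dev.min()) / w.std())
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs)))
    # H is the slope of log(R/S) against log(window size)
    return np.polyfit(log_n, log_rs, 1)[0]

noise = np.random.default_rng(3).normal(size=4096)
print(hurst_rs(noise))  # approximately 0.5 for white noise
```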

Some questions I've seen posed where complexity is implicated: Is the brain complex? Does more communication make teams complex? Can we identify dominant causation for semantically undesirable outcomes? There are implications for management here[29].

There are no quick answers, but we can try to reason about this rationally. Here are some brief thoughts (I will not claim answers here).

The measurable complexity of a brain does not predict anything about a process in which it is engaged, if it is not strongly coupled to the process (I am not being facetious). Humans have the ability to forgo the use of intelligence and act as dumb agents.

  • If humans are acting as dumb (constrained) agents within a system (not using their intelligence or problem-solving capabilities), then combining several humans in this dumb process might increase its Kolmogorov complexity, allowing them to cover more of the search space. This is not a firm conclusion, however (see the sketch after this list).
  • Human behaviour manifests through the brain, which already has a significant level of (say, Kolmogorov) complexity. Indeed, the brain is one of the most complex structures we know of. Doubling the size of a brain by introducing a talking channel from one to another does not increase complexity measurably. It is unclear how much measurable complexity could be added by networking a couple of humans. One smart thinker is probably smarter than four networked dumb automata.
  • Adding one more connection to a brain hardly increases its complexity in any measurable sense, but we must also not forget about boundary conditions and non-linearity. Someone else's opinion could make a large semantic impression on someone's reasoning, and this can change behaviour significantly. In this case, cooperation can have a large impact.
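
Here is the promised sketch. It is an assumed toy model, not a measurement of real teams: compressed size serves as a computable stand-in (an upper bound) for Kolmogorov complexity, and it suggests why multiplying near-identical dumb agents adds little.

```python
# A toy sketch (an assumed model, not a measurement of real teams):
# compressed length is a computable upper-bound proxy for Kolmogorov
# complexity. Concatenating several near-identical "dumb agents"
# adds almost nothing to it.
import zlib

def k_proxy(data: bytes) -> int:
    """Crude Kolmogorov-complexity proxy: compressed length in bytes."""
    return len(zlib.compress(data, 9))

agent = b"if stimulus A then response B; " * 50  # one rule-following agent
team = agent * 4                                 # four copies, "networked"

print(k_proxy(agent))  # small: the agent is highly regular
print(k_proxy(team))   # barely larger: the redundancy compresses away
```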

As for those annoying extroverts who claim we all need to hold hands and talk more in teams (falling backwards into swaddling arms): nope, that doesn't follow either. Talking is not the only form of communication (thank god for the silence of lambs like me ;-P). We have:

  • Direct communication: e.g. speech, body-language, etc
  • Indirect (stigmergic) communication: via proxy, e.g. documentation

In the `swarm intelligence'[11] of ants (the most successful species on the planet aside from bacteria), stigmergic communication is dominant. While they are built as autonomous agents, they are not successful without interaction. Information is communicated indirectly, by making marks in the environment that are later perceived: e.g. the laying of pheromone trails. Ants also come together around tasks, like moving the same branch. Humans do the same with documentation and training. I find it interesting that training (as opposed to education) encourages humans to be somewhat simple agents that follow patterns, rather than intelligent reasoning engines. This is surely worth pondering.
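
As a toy illustration of the mechanism (the parameters and the single food site are my assumptions), the essence of pheromone-based coordination fits in a few lines:

```python
# A minimal sketch of stigmergy (all parameters are illustrative): agents
# never address each other directly, yet they converge on a shared choice
# by depositing and following "pheromone" in a common environment.
import random

random.seed(2)
sites = 10
pheromone = [1.0] * sites  # the environment is the communication channel
food_at = 7                # only visits to this site get reinforced

def choose(ph):
    """Pick a site with probability proportional to its pheromone level."""
    r = random.uniform(0, sum(ph))
    for i, level in enumerate(ph):
        r -= level
        if r <= 0:
            return i
    return len(ph) - 1

for step in range(2000):
    site = choose(pheromone)
    if site == food_at:
        pheromone[site] += 1.0                   # mark a successful find
    pheromone = [0.999 * p for p in pheromone]   # evaporation slowly forgets

# The marks, not any direct message, carry the information.
print(max(range(sites), key=lambda i: pheromone[i]))  # typically prints 7
```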

One of the pseudo-sciency ways that complexity is used today is in talking about management. Agile manifestos and team-building are justified using the language of `self-organization'. Intentional self-organization (linear) is not the same as emergent self-organization (non-linear). Autonomy refers to the absence of external dependencies, meaning essentially that feedback is voluntary. This is a semantic decision, not a dynamical consequence. In human systems, both of these aspects have to be taken into account.

So, there is a need to properly understand the relationship between dynamics and semantics. This is the subject of my book In Search of Certainty, and it relates to promise theory. The latter attempts to build a semantic theory, compatible with dynamics. (Like complexity theory, promise theory has its fair share of misinterpretations and misrepresentations.)

The message that comes out of the foregoing essay is that complexity rarely leads to universal conclusions in a precise way. So the only certain way to answer the question of whether manufacturing or observing complex systems will have certain results is to measure them, in every case. What applies to one will not necessarily apply to another, because there is great sensitivity to ever-changing boundary conditions.

Conclusions? Fainting in coils.

Alice did not feel encouraged to ask any more questions about it,
so she turned to the Mock Turtle and said: "What else had you to learn?"

"Well, there was Mystery," the Mock Turtle replied, counting off the subjects on his flappers, "--Mystery, ancient and modern, with Seaography: then Drawling--the Drawling-master was an old conger-eel, that used to come once a week: He taught us Drawling, Stretching, and Fainting in Coils."

--Lewis Carroll

So, is there a science behind complexity? Certainly. How much do we know? We know a lot about principles, but it is also the nature of the beast to be unknowable. Clearly everyone wants there to be a rigorous science to their ideas about complexity, but at the moment it's still a bit of a curiosity shop of odds and ends.

Science strives for repeatability, but complexity implies that observables do not exhibit repeatable behaviour. This is a paradox at the heart of making complexity a science.

There is a human tendency to grasp onto the mystical non-deterministic aspects of complexity and treat them as a kind of magic sauce (confirmation bias), or medicinal aid in business and human affairs. This can only be described as pseudo-science, no matter how much we want it to be true. Some seem more careful than others to get the science right [21].

I do believe that there is an emerging science of human-machine systems. Indeed, I have spent the past 20 years trying to scrape away at it from the bottom (the easy end). I am inspired by researchers who are willing to stick their necks out and treat social systems with the same kind of rigour as in the `physical sciences'. Physicists have been too keen to dismiss these as nonsense.

Physics, on the other hand, is the most advanced of the theoretical sciences, so one should not dismiss its findings either. Without it, complexity might not be understood at all today. It tells us that there are limits to knowledge. For instance, some of the conclusions of complexity only apply in the limit of large numbers of agents. How can one then apply the results to individual humans and small teams?

There is the underlying principle of strong coupling (between the agents of a system) which leads to implicit feedback loops (not necessarily of the cybernetic variety, but of the self-organizing kind) and multi-scale causation.

My own interest in promises (Promise Theory) is motivated by the fact that physics tends to ignore the semantics of information. There is something to be learnt from that oversight. Sometimes (with physics envy) we try to deride the fact that physics can't explain complexity any better than heuristics (then you don't have to get an expensive degree to back up what you are saying). This is wrong. It is simply a misapplication of physics, in which reductionism is taken to imply an isolationism that explains nothing. If you do the physics properly, it is all perfectly consistent, and we have come a long way. Moreover, the increasing realization, throughout the 20th century, that physics is about the transmission of information brings the stories together nicely.

At every scale there is a possibility for new information (ironically, due to loss of detail), applied as selection pressures and constraints, by proxy, even if not by new forces of nature. Just think of the analogy of Skype conversations: peer-to-peer autonomous agents cooperating over long distances.

As for the heuristic complexity arguments, these are really about how information flows, and how it impacts behaviour at a relevant scale. The kind of complexity that could affect such things cannot be hand-waved, but it can be studied and modelled.

And that's where this story must end.

Next ...

Now to embrace your feedback destiny, start again from the beginning:


[Part 1][Part 2][Part 3]

Popular Book References

[1] Norbert Wiener, Cybernetics: Or Control and Communication in the Animal and the Machine (1948)
[2] Shannon and Weaver, The Mathematical Theory of Communication (1948), reprinted 1949
[3] William Ross Ashby, An Introduction to Cybernetics (1956)
[4] Arun V. Holden, Chaos (1986)
[5] P.C.W. Davies, The New Physics (1989)
[6] Thomas Cover and Joy A Thomas, Elements of Information Theory (1991)
[7] Stuart Kauffman, The Origins of Order (1993)
[8] Ilya Prigogine, The End of Certainty: Time, Chaos and the new laws of nature (trans. La Fin des Certitudes) (1996)
[9] Robert Axelrod, The Complexity of Cooperation, Agent-based models of competition and collaboration (1997)
[10] D. Dasgupta, Artificial Immune Systems and Their Applications (1999)
[11] Eric Bonabeau, Marco Dorigo, and Guy Theraulaz, Swarm Intelligence (1999)
[12] Stuart Kauffman, Investigations (2000)
[13] Steven Johnson, Emergence (2001)
[14] Albert Laszlo Barabasi, Linked (2002)
[15] P.C.W. Davies and Henrik Gregersen, Information and the Nature of Reality (2010)
[16] Mark Burgess, In Search of Certainty (2013)
[++] Melanie Mitchell, Complexity - A Guided Tour (2009) - a nice overview somewhat different to mine

Academic references

[17] J. Von Neumann, The General and Logical Theory of Automata. (1951)
[18] Seth Lloyd, Measures of Complexity - a non-exhaustive list
[19] Murray Gell-Mann and Seth Lloyd, Effective Complexity
[20] Stephen Wolfram, Universality and Complexity in Cellular Automata

Web references

[21] Cynefin framework for sense-making of complex knowledge
[22] Cellular automaton papers by S. Wolfram
[23] Notes on Melanie Mitchell: defining and measuring complexity
[24] How Can the Study of Complexity Transform Our Understanding of the World?
[25] Algorithmic complexity
[26] The nature of causation in complex systems (George Ellis)
[27] John Allspaw, Translations Between Domains: David Woods
[28] Richard Cook, How Complex Systems Fail (Medicine)
[29] Combining complexity with narrative research

Risk, and human factors

[30] J. Tainter, The Collapse of Complex Societies (1988)
[31] D. Dorner, The Logic of Failure (1989)
[32] J. Reason, Human Error (1990)
[33] S. Dekker, Drift into Failure (2011)