The Brain Horizon

At what point should an organism become a society?

After years of investing in distributed approaches, IT server management has begun to return to that age-old idea of centralized control; now Software Defined Networking proposes the same -- surely unthinkable? Even Google has come out in support of it. Surely centralization doesn't scale? What does this mean? Personally, I like to think about this in terms of brains and societies.

Brains and Societies

A brain model is one in which there is a centralized controller that reaches out to a number of sensors and actuators through a network with the intent to control them. Obviously, this is how brains work in animals: the brain is localized as a logically centralized controller, and the nervous system connects it to the rest of the body for sending and receiving pulses of communication. Not all of the body's systems are managed in this way, of course, but the intentional aspects of our behaviour are.

A brain model is a signalling model. Signals are transmitted from a common command-and-control centre out to the provinces of an organism, and messages are returned to allow feedback. We are so familiar with this approach to controlling systems that we nearly always use it as the model of control. We see it in vertebrates; we use it when we try to force-govern societies by police or military power; and we build companies around it. Most management thinking revolves around a hierarchy of controllers. The advantage of this model is that we can all understand it. After all, we live it every day. But there is one problem: it doesn't scale favourably to large size.

The alternative to a brain model is what I'll call a society model. Now I am assuming a society that is not a dictatorial police state or a military junta. I imagine one that has a number of autonomous parts that are loosely coupled, without a single central controller. There can be cooperation between the parts (often called institutions or departments, communities or towns, depending on whether the clustering is logical or geographical). There are even places where certain information gets centralized (libraries, governing watchdogs, etc), but the parts are capable of surviving independently and even reconfiguring without a central authority.

Societies scale better than brain models, because they can form local cells that interact weakly at the edges, trading or exchanging information. If one connection fails, a cell does not necessarily become cut off from the rest, and each has sufficient autonomy to reconfigure and adapt. There is no need for a path from every part of the system to a single god-like location that then has to cope with the load of processing and decision-making by sheer brute force.

Brute force can be a successful strategy for operating an organism, but when it fails, it fails catastrophically (think of a collapsing high-rise compared to a house). When a society fails, it is a slow process of decay.

The argument for centralization - intelligence as a service

What you gain from bringing information together is the ability to make comparisons. Once you can compare, you can also make logical (arithmetic) decisions. This is what we think of as intelligence, and it is compelling. In many ways, it is the homunculus model or Mechanical Turk -- in which we think of there being a little man inside the machine making the decisions and pulling the levers. Many of the technologies we build are exactly this: a remote-control infrastructure and either a dashboard for a human homunculus to interact with, or a semi-automatic controller that does some remote regulation from a central place.

The problem with transporting data is that it has a high cost: you don't want to transport very much data, or it takes too long to react and make decisions. In other words, "big data" is not a brain-model strategy for intelligent reactivity; for that we want simple signals. But a society can embody "big data" in a distributed sense, and process it as distributed state without any bottleneck of communication or processing.

The other problem with transporting all the data is that you bring it to a finite resource: the brain has a finite speed and a finite capacity, so it can only read at a certain rate. This makes it a bottleneck that imposes a maximum rate of response. Add to that the time it takes for signals to be sent, and you can calculate the maximum size and reaction time of the organism such a brain can handle.

What you take on by centralizing too much is the need for a nervous system that speaks the language of all your sensors and actuators. If your brain needs a `device driver' for every replacement sensor cell, you either prevent the independent evolution of the distributed sensors and actuators, or you have to keep updating your brain software to match their evolution; either way, you've amplified the congestion of the bottleneck. Thus the more different semantics you have to support in your brain, the more of a burden it becomes to support environmental diversity.

Is a brain the only way to do it, or is there reactive intelligence in a society? Reptiles and invertebrates manage with a greater degree of autonomy in their parts. If you cut a worm in half, it continues as two worms. Perhaps the balance between centralization and decentralization has more dimensions than we think. It always boils down to two key forces: dynamics and semantics.

Finite speed of communication - and CAP conjecture

Then there is signalling rate. All signals are transmitted at a finite speed. That has two implications.

  • We can only respond as fast as information travels.
  • There is an event horizon beyond which we cannot see at all, because signals do not have time to reach us.

This basically summarizes the CAP conjecture, and the limits of information availability and consistency. Our ability to form and maintain relationships (knowledge) with remote parts depends on how local they are. Long-distance relationships don't work as well as short-distance ones! Ultimately, this comes down to the speed of responses. If messages take longer to send and receive, then an organism reacts more slowly, so scaling up size means scaling down speed, and vice versa. This certainly fits with our knowledge of the animal kingdom (another centralized-management expression!). Large animals like whales and elephants are slower than small creatures like insects. The speed of impulses in our bodies is some six orders of magnitude slower than the speed of light, so we could build a very big whale using photonic signalling.

Still, we build models in this way. Most recently there is OpenFlow and the return to centralized management of networking -- though its proponents seem to assume that the world lives mostly inside datacentres, in very large LANs. This centralized approach is a modern luxury afforded by high-bandwidth, reliable networking, where the overhead of signalling is favourable to the strategy. Yet simple physics tells us that there has to be a limit to the size of such an animal, and the speed of light cannot be improved much, so physical reach will be limited. Fortunately, our planet is only about 0.04 vacuum light seconds in diameter, which means a signal can cross it in a fraction of a second on a good day. Centralized solutions can be fast enough on a planetary scale, but they might not be safe.
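To put rough numbers on this, here is a back-of-the-envelope sketch in Python. The figures (a fast nerve impulse at roughly 100 m/s, a 30 m whale, signals in optical fibre at about two thirds of c) are rounded, illustrative assumptions, not measurements.

    # Back-of-the-envelope latencies; all figures are rounded and illustrative.
    c = 299_792_458.0           # speed of light in vacuum, m/s
    nerve = 100.0               # fast nerve impulse, ~100 m/s (about 6 orders slower than c)
    fibre = 2.0e8               # signal speed in optical fibre, roughly two thirds of c

    whale = 30.0                # length of a large whale, metres
    earth_diameter = 12_742e3   # Earth's diameter, metres

    print(f"nerve impulse along the whale: {whale / nerve * 1e3:7.1f} ms")
    print(f"light along the same whale:    {whale / c * 1e9:7.1f} ns")
    print(f"one way across the Earth:      {earth_diameter / fibre * 1e3:7.1f} ms in fibre")

Even allowing for routing and protocol overheads, that keeps a planetary round trip well under a second, which is what makes centralized control look feasible at this scale.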

Paralysis by hardware and software

Another fragility of a brain model is that signals can be cut off. If you sever an animal's spinal cord, it is pretty much paralyzed as a sensor network; its separate respiratory system keeps it alive in a process sense. Even without a physical break, if the `device driver' software for talking to the external world becomes outdated because the brain can't keep up, there will be a virtual disconnection.

One can of course rely on redundancy to make systems more resilient, but then you are basically starting down the path to a society model. The push to commit to the full transition will come from economic imperatives.

There is a cultural hindrance here though, as I see it. Engineers are taught to think like controllers. They are always asking: how can I get involved in this process? Give me sensors and state that I can calculate with and manipulate. Let me program or build a cool mechanism to do smart things. Engineers want to insert their own logic into processes and be part of the system. Architects and town planners, on the other hand, have to think differently. They want to design systems that stand on their own merits, with all of the cooperative parts necessary, and all the continuous relationships internal to the structure. Brute force allows us to avoid this kind of discipline, by trading effort for sound design -- but how long will we get away with this approach, I wonder, before a Three Mile Island, or even a Chernobyl, takes place in the world of information technology?

Trade-offs in centralization - some promise theory

At what point do we need reactive intelligence (as a service)? Let's examine some of the promises that agents can keep, and dissect these models according to promise theory principles. Promise theory does not say anything about centralization per se, but we can find some simple conclusions by following the principles.

Two basic tenets of promise theory are:

  • Every agent promises only its own behaviour.
  • Independently changing parts of the system exhibit separate agency and should be modelled as separate agents.

A brain `service', whether centralized or embedded in a society, promises to handle impositions thrust upon it by sensors, and to respond to actuators with impositions of its own. It also promises to process and make decisions, so any brain will benefit from speed and processing power. Any agent can handle at most some capacity C of impositions per second before becoming over-subscribed, so every agent is a bottleneck to handling impositions. Horizontal expansion (like a society) handles this by parallelization, in a scale-free manner. Vertical expansion (like a central brain) has to keep throwing brute force, capacity, and speed at the problem. Moore's law notwithstanding, this probably has limits.

Borrowing a simple truth from queueing theory, it is now easy to see how to define a horizon for a centralized brain model, in a scale-free way: it is the onset of queueing instability, when the rate of incoming impositions approaches the average rate of processing them and waiting times diverge (see figure). A society will scale more or less linearly until the cost and limits of communication flatten its growth.
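As a minimal sketch of that horizon, here is a small Python example using the textbook M/M/1 result that the mean time in the system is 1/(mu - lambda): response time diverges as the arrival rate approaches the service rate, whereas spreading the same load across several autonomous agents keeps each one far from its instability point. The rates are invented for illustration.

    # Queueing horizon sketch: mean time in system for an M/M/1 queue is 1/(mu - lam).
    def response_time(lam, mu):
        """Mean time in system; the queue becomes unstable when lam >= mu."""
        if lam >= mu:
            return float("inf")        # past the horizon: the backlog grows without bound
        return 1.0 / (mu - lam)

    mu = 100.0                         # impositions/second one 'brain' can process
    agents = 10                        # a 'society' of ten autonomous agents sharing the load

    for lam in (50.0, 90.0, 99.0, 100.0):
        central = response_time(lam, mu)          # everything funnelled to one brain
        local = response_time(lam / agents, mu)   # each agent sees a tenth of the load
        print(f"load {lam:5.1f}/s  central: {central:8.3f} s   per agent: {local:.3f} s")

The horizontal option scales by adding agents; the vertical option has to keep increasing mu by brute force.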

Specialization implies regions of different agency, with all kinds of implications, such as learning, etc. Here promise theory simply tells us that these should be understood as separate agents, each of which promises to know how to do its job. Those regions could be decentralized, as in societal institutions, or they could be specialized regions of a central brain (as in Washington DC or the European Parliament). A centralized brain makes it easier (and faster) for these institutional regions to share data, assuming they promise to do so, but decentralization doesn't place any limits on what is possible.

Only observation and correlation (calibration) require aggregation of information in a single `central' location. Control and dissemination can be handled in layers of detail:

  • Micro-management and central authority - fragile, scales by brute force
  • Policy guidance and local autonomy - robust, scales through caching
  • Complete autonomy with no sharing - lacks intelligence of shared state

I have always argued for the middle position, inspired by biological systems amongst other things, but the first of these is always the default position we come back to, as I pointed out in In Search of Certainty. It is just comforting, if somewhat for the wrong reasons.
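To make the middle option concrete, here is a hypothetical sketch in Python of an agent that periodically pulls a policy document from a hub, caches it locally, and keeps converging towards it even when the hub is unreachable. The URL, file paths, and policy format are invented for the example; this is the general pattern, not any particular tool's API.

    import json, os, time, urllib.request

    POLICY_URL = "http://policy.example.org/policy.json"    # hypothetical policy hub
    CACHE_FILE = "/var/tmp/policy-cache.json"                # local cache survives disconnection

    def fetch_policy():
        """Refresh policy from the hub if possible; otherwise fall back to the cache."""
        try:
            with urllib.request.urlopen(POLICY_URL, timeout=5) as resp:
                policy = json.load(resp)
            with open(CACHE_FILE, "w") as f:
                json.dump(policy, f)                          # remember it for autonomous operation
        except OSError:
            if os.path.exists(CACHE_FILE):
                with open(CACHE_FILE) as f:                   # hub unreachable: use last known policy
                    policy = json.load(f)
            else:
                policy = {}                                   # nothing known yet: promise nothing
        return policy

    def converge(policy):
        """Verify and repair the promised local state; no central permission needed."""
        for path, mode in policy.get("file_modes", {}).items():
            if os.path.exists(path) and (os.stat(path).st_mode & 0o777) != mode:
                os.chmod(path, mode)                          # repair drift locally

    while True:
        converge(fetch_policy())
        time.sleep(300)                                       # periodic convergence, not remote control

The hub only calibrates and disseminates intent; each agent keeps its own promises, which is what lets this middle layer scale through caching.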

We can summarize:

Brain (centralized)                                            | Society (decentralized)
---------------------------------------------------------------|------------------------------------------
Easy to understand                                             | Harder to understand
Easy to trust                                                  | Harder to trust
Direct causation by serialized signalling                      | Emergent, spontaneous parallel causation
Global information ("God's eye view")                          | Local information ("local observer view")
Sense of hands-on control                                      | Shepherded control
Chance to determine a global optimum                           | Local optimum
Push thinking (imposition)                                     | Cooperative thinking (promise)
Long signal time overhead                                      | Short signal time overhead
Slow reactivity over wide area                                 | Fast reactivity over small area
Quick adaptation to global state                               | Quick adaptation to local state
Catastrophic failure modes                                     | Attrition failure modes
Fragile architecture (strong coupling)                         | Robust architecture (weak coupling)
Congestion bottleneck                                          | No congestion bottleneck
Calculable uncertainty                                         | Calculable uncertainty
Scale dependent (fixed by brain capacity and signalling speed) | Scale independent
Depends                                                        | Depends
Possibility to exploit comparisons (assumes availability and low latency) | Exploit parallelism in situ
Hidden treasure, X marks the spot                              | Here be dragons

The arguments for centralization

The arguments for centralization are very often based on our prejudice in favour of determinism, though we cannot discount the argument for intelligence (even if intelligence is overrated as a biological strategy). Centralized systems are slightly more deterministic than decentralized ones, but not completely so. And we are so very, very afraid of indeterminism. Therein lies the problem -- indeterminism is coming to a datacentre near you, whether you like it or not. The same arguments could be applied to any organization, and even to software development (cf. free software). Eventually, you will have to confront it and go "society", yet we prefer to hide from the inevitable.

Companies like Google, wielding massive computational resources, are re-centralizing and going the brute-force route with their SDN strategies. They have the speed, the expertise, and the capacity to make this work for longer than most others, but they are still vulnerable to catastrophic failure modes. What do we gain from this kind of intelligent global optimization, versus what we stand to lose from the risks of catastrophic failure? Is it worth it? It is hard to generalize about such things. That is probably a business decision, until it affects society as a whole -- then it becomes an ethical imperative, ripe for regulation.

Brains evolved for a reason. They make organisms smarter, more adaptable. If organisms are quick on their feet, they can avoid danger by modelling and predicting. A society, on the other hand, adapts slowly as a whole, but has resilience and can quickly fight off a localized problem, without needing the permission of a brain. Societies scale by embedding policy (like DNA in cells, or norms and rules) so that all of the parts know how to react independently.

We are irrationally afraid of emergence, but the very systems that support us every day all rely on emergent behaviour: agriculture, the weather, the atmosphere, the Internet, society itself, and indeed our cellular bodies. We organisms will die one day, but society can live on. What does that say about the way we should design information infrastructure?

Tue Jul 22 08:59:54 CEST 2014


How CFEngine exploits a decentralized "society" model -- see my Velocity presentation.