The End of Sympathy?

Giving up our world to the robots, and why it bothers us

Norbert Wiener, the father of cybernetics, once wrote (paraphrasing) that every constraint we loosen on a machine's behaviour is a way in which it could deviate from its intended function. His context was the idea that machines might get opportunistically smart and fly out of our control. Today, we fear this prediction most in software, where the debate about Man and Machine and the future of work rages (once again) around the coming artificial intelligence (A.I.). But the essence of the debate we are having today around A.I. is not really about fearing machines, or even information technology; it is about fear of delegation.

The Master and the proxy

What gets overlooked, in the polemics about how humans will be replaced by `robots and A.I.', is that the arguments we use to express fear of automation apply not only to information technology, but to every kind of delegation of work, whether to a machine, an animal, or another person. The transfer of a role away from us always feels like a loss, simply as a matter of human nature.

Delegating tasks and formalizing them to the point where they could be trained and carried out by rote (what we call `process management') is essentially the art of making humans give up their human freedoms, subordinate themselves to a process, and act like machines. Before the industrial revolution, we used domesticated horses and mules in the same way. We domesticated them to remove the uncertainties and ad hoc deviations in behaviour, and to foster predictability. Once a process has been reduced to its simplest, repeatable essence, that essence can be more easily mass-produced, or amplified using tools that outshine our own abilities. This is how we traditionally scale outcomes in business.

Wiener's belief was that even a very rigid machine or process has a far greater likelihood of acting as our compliant proxy if it does not remain an untamed wildling, allowed to deviate on a path of its own making. In other words, bringing freedoms like intelligence into the discussion of automation is at odds with the goals of automation, because intelligent thought and variation are precisely what we are trying to suppress in commodity replication.

Our unwillingness to let go of the reins of our domesticated workforce remains with us, and is to be seen, even today, in the rush to insert APIs into software for any purpose, to micromanage every detail of our world. Curiously, software engineers publicly rail against top-down, factory-style micromanagement in human relations, but then go on to argue that they must have it in their software. Basically, we all just want to be the boss.

Serial killing

Since humans progressed beyond bands and hunting tribes, our family names have been our trades: Smith, Carpenter, Cooper, Burgess, ... signalling lifelong commitments to specialized roles in a cooperative society. Our sense of dignity and identity is closely coupled to our role in productive `work'. Before such specializations, people competed at the same roles of hunting and gathering, often leading to alpha conflicts and competition for dominance. We still see this at the level of businesses and political bodies.

Giving ourselves over to abstract roles has brought peace and prosperity in human affairs, separating our behaviours from those of other animals. On top of this fragmentation into cooperative services sits the need for some master plan to guide them and fuel the engine of cooperation. Often, our notion of leadership is seen as steering rather than pointing to a desired destination, and this traps us in a non-scalable regime of governance that we have largely escaped in many areas of civil society. Business, however, focussed as it is on survival, does not have a notion of democratic or civil society. It is still run in a feudal or even tribal manner. The first instinct of management, whether of humans or machines, has been to want a `brute force' approach to control. We want visceral pulleys and levers to rein in our desires. This was feasible in the more concentrated industrial era, but such brute force methods run out of steam in a distributed age, because the separation of scales required for stability is in conflict with the need for control.

At the scale of a worker or agent, we design tasks like stories (called processes), because we humans think about the world in terms of tales we can tell around the fire. We try to make systems by looking for simple serialized narratives, in which we can automate repeatable quasi-deterministic processes. This was the origin of imperative logic. We mostly ignore the fact that this is not how systems, as a whole, work when they cannot be totally isolated---and hence we rely on systems never deviating from a regime in which this illusion can be maintained. The effects of cooperation, and the resulting emergent behaviours in systems, are typically dismissed as `bugs' when they don't fit expected patterns, yet these are precisely what we need to master at scale.
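To make the contrast concrete, here is a minimal sketch in Python (all names invented, not any real tool's API) of the same task expressed first as an imperative, serialized story, and then as a convergent promise of a desired end state that tolerates deviation:

    # A toy model of a machine whose state can drift over time.
    DESIRED_CONFIG = "port = 80\n"

    class Host:
        def __init__(self):
            self.packages = set()
            self.configs = {}
            self.running = set()

    def install_imperatively(host):
        # The serialized narrative: a fixed sequence of steps that assumes
        # a known starting state, and that nothing interferes in between.
        host.packages.add("webserver")
        host.configs["webserver.conf"] = DESIRED_CONFIG
        host.running.add("webserver")

    def keep_promise(host):
        # The convergent alternative: inspect the actual state and repair
        # only the deviations. Repeating it is harmless, so drift is
        # corrected instead of breaking the story.
        if "webserver" not in host.packages:
            host.packages.add("webserver")
        if host.configs.get("webserver.conf") != DESIRED_CONFIG:
            host.configs["webserver.conf"] = DESIRED_CONFIG
        if "webserver" not in host.running:
            host.running.add("webserver")

    h = Host()
    keep_promise(h)                  # converges from an empty state
    h.running.discard("webserver")   # the world deviates...
    keep_promise(h)                  # ...and is quietly repaired

The first version only works while the illusion of isolation holds; the second treats deviation as the normal case.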

The stages we pass through, from manual work to automation, follow something of a pattern.

There is a point at which humans have to step aside and allow automation to operate independently, without human hands on the reins, because keeping a system so tightly coupled as to make it amenable to singular control also makes it fragile and dangerous.

As we separate, capture, and bottle our experiences into faithful animal or machine proxies, some human roles are replaced, but new jobs open up to service and maintain them. Roles get rewritten, and the cycle begins again. Job displacement and creation are not 1:1, and certain sectors lose out at certain times; but, on average (and taking population growth into consideration), global employment levels have been fairly stable thus far.

So, we are good at inventing new solutions, and automating certain simple serial tasks, as long as they remain at manageable scales. This suggests to us that the skill of a task lies in the skill of the agent that performs it. Could machines take over this whole cycle? If separable and apprenticable skills are all that is needed in a society, this may be true; but, once society is completely filled with such skilled workers, who is going to buy their services?

We are not very good at canning and commoditizing the broader, large-scale aspects of shepherding systems, where some outcomes are emergent, because we find emergence hard to understand. Gently guiding the equilibria of weakly-coupled agents, working within a constrained environment, is a language that a few scientists or engineers might comprehend, but not someone trained in industrial-age management, who can't seem to understand that management is about environments, and not about hopeful storylines. In Promise Theory, the idea is to abstract outcomes so that it doesn't necessarily matter who or what agent does the work, as long as the intentions are met -- or, if you like, the promises are kept. Systems that are able to separate promises from agent characteristics are robust and predictable. This is why delegation, and thence automation, can work at all. Some agents may be fast, some may be slow. Some may be accurate, some sloppy. What is important is: did they keep their promise, and was it what you wanted?
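As a minimal illustration of that separation (a Python sketch with invented agents, not a formal Promise Theory calculus), note that the assessment below depends only on the outcome, never on the nature of the agent:

    import random

    class HumanClerk:
        def deliver(self, order):
            # Slow and occasionally sloppy.
            return order.upper() if random.random() > 0.1 else order

    class RobotClerk:
        def deliver(self, order):
            # Fast and exact.
            return order.upper()

    def promise_kept(agent, order):
        # We judge only whether the promised outcome was delivered; the
        # agent's identity, speed, and inner workings are irrelevant.
        return agent.deliver(order) == order.upper()

    for agent in (HumanClerk(), RobotClerk()):
        print(type(agent).__name__, promise_kept(agent, "two coffees"))

Either agent may keep the promise; the sloppy one just keeps it less often, and that is the only thing the consumer needs to assess.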

Shedding old skins to survive

The nature of work has changed constantly as society, civilization, and their economics evolved, and will continue to do so. Only the speed at which that change is happening is new today. The cycle of delegation is accelerating, helped along by the generic tools of information technology. The pace is still quite modest, but alarm bells are starting to ring in the minds of a few. This has led some to ask: is human work coming to an end? But the question is a weird one to ask. If human work ended, civilization as we know it would surely collapse. And why would we let that happen? Let's consider this.

Organisms need to shed their pasts to survive. We shed old skins, either one cell at a time, or as a one-piece suit. We forget old memories and decouple from past relationships to cope with novelty. If we want a future, we can't be afraid of change. So what are we saying? Let's put it all together:

  • Certain template tasks can be done faster, and with greater fidelity, by agents other than ourselves. Information technology allows `smart' agents to handle intricate and context-aware tasks, within a limited scope.
  • These new improved skins are instrumental in adapting and growing the economy, because our economics is based on competitive warfare for niches within the market ecosystem.
  • Competitive pressure drives the design, training and delegation of better and better proxies. This investment is expensive. It might take some time before it makes economic sense to replace a `pretty good' human with an excellent specialist.
  • Does the success of humans mean that we should build humanoid machines, with general intelligence? No, it's not our intelligence and adaptability that makes us well suited to general tasks. After all, we are usually asked to suppress those qualities at work. We think of human form because the tasks we serve are aimed at humans in the first place, so we already have a rough natural advantage in our shape and disposition.
  • Each job can be optimized without prejudice of form: compare the idea of a mechanical traffic cop with that of traffic lights. The rush to make human-like machines is not rationally motivated.
  • As jobs evolve, so must the proxies, else industry would stagnate. Designing each new generation requires continuous research, investment, and above all new input of expertise --- by human experts. Would that go away too if we had generic intelligent humanoid machines? No, because it would be simpler and cheaper to augment ourselves as cyborgs than to try to steer `intelligent' and unpredictable technology to do our bidding.
  • The job of the creative designer, the entrepreneur, cannot be taken away, even if automatic self-improvement sustains technologies significantly, because we are not even close to being able to design systems that can see the big picture, or that have human welfare as a priority.

What about us then? Suppose we could extend ourselves as cyborgs to work faster and better, with fancy extensions. This is happening already, up to a point. The problem, however, is that we don't know how to make humans think faster (only how to give up thinking). Even with search engines and calculators, there is a fundamental mismatch between the way humans reason (in conjoined stories) and the way that massive, parallelized, iterating network systems compute and operate. Funnelling the latter through the former can only lead to madness as we turn up the speed. We shouldn't be doing anything much faster than we can make sense of it (even flinching from danger). We have to choose between stepping aside, or clinging on.

At some point, extending our own natural limitations must either change us into a different species that thinks in a different way, or decouple tasks from our world entirely. At that point, this acceleration has to end. Is the answer for us, then, simply to step aside from those tasks? If we do, we risk creating a world in which we are not safely in control.

Our limited skills at capturing behaviours often lead us to delegate roles to substitutes that deliver less than human skill can produce, in the name of commoditization (instant coffee, MP3-compressed music, etc.). Much marketing is expended on pushing inferior products. Eventually, a consumer will tire of this and start their own business. Using proxies and automation for purely economic reasons can backfire too.

Is there big-picture automation on the horizon?

Our current approach to automation (including A.I. and robotics) views delegation to proxies as small-scale optimizations inside larger problems, as spot replacements or component upgrades. The proxies perform at the level of trained farm animals. There is no big-picture rethinking of systems. Even the idea of artificial intelligence for adapting and changing systems does not address this. Indeed, if a system ever became smart enough to adapt and self-improve, would we (ethically) have the right to tell it what to do?

Could automation replace big-picture matters of state? Company strategy? Logistics? Why not? It seems technically possible, to some level of competence. Such a system might even be considered to have responsive properties that could be called `intelligent'. But, would anyone profit from that? Is the aim of automation to create artificial life (out of curiosity) or to help our own species to prosper and raise standards of living?

Lately, we have learnt how to mimic the work of multiple humans and automate whole teams, by using combinatoric services. This is particularly uncomfortable: when soulless proxies eat into our social fabric it feels very personal. Should we be taking humans out of the loop just because we can?

The problem is that our very sense of self is tied to our habits. Learning works by training our memories, by repeated visitation and rehearsal, as in a relationship. Work roles become like trusted friends, in our psychology. We visit them every day for most of our lives. How many hours a day do you spend with your work, and how many hours do you spend with your closest friend? Or lover? Loss of work is an event in someone's life akin to a death in the family or a divorce. We maintain a sense of self by replacing one relationship with another.

Our roles in society are based on what we do, our knowledge, our expertise, because work itself is already a proxy for a human relationship. We receive a sympathetic back-rub just by reconnecting with our work each day, in a massaging rhythm of life. Our civilization works by melding our social predilections with our survival functions, and the result is society.

When we delegate to proxies, suddenly all the team-building exercises (that introverts hate) become team debuilding. Rhythm is replaced by urgency. There is often little human engagement. Some system interfaces are smarter than others: they might gamify the experience of using their software so that we become more engaged in a new surrogate relationship. I believe that human involvement should be judged on timescales rather than on functions or responsibility. If we cannot maintain awareness of a process, to know it like a friend, then it has to decouple from humans completely (e.g. high-speed trading). If humans are playing a role, then they need to be kept in the loop, and kept engaged through `gears' or transducers that allow them to stay in the game, not merely struggling to catch up, in a daze, after it's already too late, in a post-mortem. Consider how a regulator, or officer of the law, might struggle to judge whether activities were ethical or lawful, and to police them.
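One way to picture such a `gear' (a minimal Python sketch, with invented names) is a transducer that lets the process run at machine speed while promising the human a digest at a rhythm a person can actually follow:

    import time

    class Gear:
        """Couples a fast process to a slow human, at the human's timescale."""
        def __init__(self, human_period=5.0):
            self.human_period = human_period   # seconds between digests
            self.events = []
            self.last_digest = time.monotonic()

        def record(self, event):
            # Machine-speed side: cheap, never blocks the fast loop.
            self.events.append(event)

        def digest(self):
            # Human-speed side: one summary per period, not a firehose.
            now = time.monotonic()
            if now - self.last_digest < self.human_period:
                return None
            summary = "%d events in %.1fs" % (len(self.events), now - self.last_digest)
            self.events.clear()
            self.last_digest = now
            return summary

    gear = Gear(human_period=1.0)
    for i in range(100000):
        gear.record(i)               # the system runs at its own pace
    time.sleep(1.1)
    print(gear.digest())             # the human sees one comprehensible line

The point of the design is that the human's coupling is defined by a timescale they can sustain, not by a function they must race to perform.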

That applies to a single task, but what about the turnover of roles and careers? What happens when the cycle of replacement comes around faster than we can comfortably adapt? That, in itself, is not a mystery: people separate, divorce, and change jobs. These are disruptive experiences that challenge our sense of who we are. Must we give up who we are today to play in the game of work? Must we become less human? Is society to become like George Lucas's THX 1138? Or Logan's Run, where future society lives on superficial, casual relationships that come and go without commitment? Then we might actually have to terminate our lives earlier, before our brains fill up with the complexities and anxieties that make us seek stability as we get older. Could we learn to treat our major relationship, `work', as a casual lover? Contrary to the idea that we will all embrace creative pursuits in the future, a great many of the aging population will simply want the meditative repetition of familiarity.

If humans become pure, uninvolved consumers (like the Eloi in H.G. Wells' The Time Machine), how could we sustain this cycle? Opportunism and entrepreneurship are key to evolving an economy. These are not skills that proxies can learn. Opportunism and innovation come from the experiences and insights of being connected across disciplines. Rewiring of relationships is not a habitual task.

Sympathy for the devil

The almost chilling assumption of economists is that our work is not for us, but that we are somehow subordinate to The Economy, as if the very turning of the Earth depends on this mysterious divine pyramid scheme of global economic growth. Societies and economies that were unsustainable have collapsed many times in the past, and there is no reason to think that ours is immune.

If the economy fills up with fashioned proxies, freezing the evolution of our species in a single train of thought, then what will become of the originals? Our essence will simply have died, like a brain that has filled up with memories, leaving room for nothing more. Every system has to forget in order to survive. Don't be distracted by big-data fiends. We cannot keep everything.

Are people overreacting about this crisis of work identity? Probably not. What is interesting is that we are only waking up to this challenge now. We love a good narcissistic hard-luck tale, especially when there is someone to blame. We like fishing for the sympathy that is missing from our mechanized lives. Few are actually thinking seriously about the future. In Western society, we live as cogs in various machines, and are largely comfortable with that. We see ourselves as subordinate. But, like Pink Floyd's or John Brunner's sheep, we march into our destinies obliviously assuming that someone is keeping us safe. And someone should. After all, without us, who or what is the economy even for?

What we need to ask is this: who do we think we are? What is our sense of self about now, and what will it be in the future? Could we overcome our dependence on the surrogate relationship of work? Would we become child-like game players, dependent on technology we no longer understand, or parental guidance counsellors helping along the Third World? Alternatively, could we partially decouple from our industriousness and forge a new sense of purpose and belonging, perhaps around entertainment, re-engineering humanity with a new basis for stability? This would have to preserve the non-competitive benefits of social roles. Without niches, we will simply start fighting again for dominance. What wound are we willing to inflict on civilization?

These are the questions we should be asking; not merely: are we afraid of our own A.I. shadows? Rather than blaming robots and artificial intelligence for job losses, perhaps we should consider that the real dehumanizing influence is the way we manage our economies: by KPIs rather than human values. What would everyone do in the future if we actually went ahead with the ridiculous notion of putting ourselves out of a job to optimize competition? Play games? Explore? Do needlework? Those who thrive on low-risk repetition would have the hardest time finding a role. It would create new class divisions in society. There would always be a role for entrepreneurs and innovators, assuming they had means. But how would that economy even work? If everyone were replaced by a robot, who would be earning any money to buy what the robots made? This doesn't make sustainable sense. On what basis would we pay people, or allocate living rations? Without incentive, would we collapse into indolence?

If the engine of our world were to change so rapidly, and on such a scale, that we humans became insensible of it, then progress and humans would finally inhabit entirely different worlds, having nothing to do with one another. The economic world we created would no longer be for us. We would have to let go. Wouldn't enabling that simply be an act of suicide?

We can redesign our lives, our schedules, our economy, our relationships, and who we want to be, to enter the future. All the possibilities are ahead of us. The only real question we have to answer is: where shall we go next?

Mon Sep 14 07:45:25 CEST 2015 - Thu Sep 17 21:59:03 CEST 2015

Acknowledgement

I'm grateful to Lisa Caywood, Chris Little, Ruth Malan, Jeff Sussna, and John Willis for their peer reviews and generous comments.

Appendix: allusions and references

Smokescreen (or how to ignore people in a bar), painting by the author.
The Master and Margarita, novel by Mikhail Bulgakov
Sheep, Pink Floyd, from the album Animals
The Sheep Look Up, novel by John Brunner
Who Do We Think We Are, album by Deep Purple
Sympathy for the Devil, song by the Rolling Stones (re Bulgakov again)