The Cyborg Compulsion

Why the robots aren't coming in the way you expect

``The most powerful dehumanizing machine is not technology but the social machine, i.e. the formation of command structures to make humans emulate technology in order to build pyramids and skyscrapers...''

--Lewis Mumford (1967)

Robots and artificial intelligence remain widely misunderstood, and a prime target for fear-mongering about the future of human jobs. I am not particularly afraid of losing control to a machine intelligence, or even of losing my job to a machine. What we might lose control of, on the other hand, is our notion of civil society, as factions among us increasingly want not to be replaced by the machines, but to actually become them.

The restaurant and the microwave dinner

They tell us that, when robots become widely available, all human work will be replaced by automation. We will all live in the Asimovian dystopia of The Naked Sun, where robots do the work, and we communicate electronically, too afraid to meet one another for all the dirty biological consequences.

But predictions of workers being replaced have been made many times in the past, and, well, most of us are still here. After all, robots are just tools, and tools are the very magic that allowed us (in developed societies) to climb out of poverty, scaling up subsistence living to make a surplus. Surplus allows trade, and trade allows competition. From there, the story is straightforward. Competition drives force-fed industrialization (one size for all), binding people once again as slaves to proletarian processes. Finally, their standard of living rises, and consumers begin to demand variation. Opportunistic small businesses compete against the monolithic factories, and microserviced niche markets emerge. Industrial mass production is not necessarily replaced, but it becomes more marginalized.

This progression was argued by Alvin Toffler and his wife Heidi in the 1970s, and we can see it happening before our eyes. (See Future Shock and The Third Wave.) Robot automation can be a leg up in the expansion phase (the mass production phase), but it can equally be a liability for the end-game, unless its integration into a human-computer system is designed rather carefully, with human values in mind.

Tools can be an equalizing force, enabling some to climb out of the poverty trap, while for others, the inequalities they bring can be the trap door itself.

The narrative is all about how the scaling of labour, with tools, will liberate us from drudgery. But the question remains whether we will use the tools, and how we might use them. Will we wear the tools as decorative symbols of power, or elevate them as co-workers alongside us? In other words, will the tools make us look better, or make us look like a weak link to be replaced? This is the nature of disruption. It is a social process, not a technical one.

  • The music synthesizer was going to replace orchestras. Unions were up in arms.
  • Microwave dinners were going to replace real food, with robotic food: these "TV dinners" are an innovation that packages food as a commodity, to be prepared at the push of a button. It is probably prepared by industrial machinery (aka robot), it is packaged by industrial machinery (aka robot), it is transported by a horseless carriage (aka robot) and heated by a push-button machine. So far, all we have to do is eat it, though that could also be automated if we really want to.
  • Instant coffee did not replace hand-made `artisan' coffees. The espresso machine stimulated an entire coffee economy.
  • Hand-made confectionary is still popular.
  • Tupperware has not replaced basket weaving, ...
  • Although cars are easy to drive, there are more chauffeurs than ever.
  • Although music can be programmed by computer, opera and classical music are booming.
  • Although movies can be downloaded, cinemas are doing relatively well.

Technical novelty inevitably looks appealing to innovators and gamblers, but when the shine of possible profit fades, we look with more critical eyes, and what was seen as "progress" can even slide back into manual labour.

After initial pushes, often backed by financial muscle as much as curiosity, these trends get pushed back into a more stable equilibrium, living alongside diversity. Market shares can be dominated by automated commodities for a while, but still leave significant leftovers to satisfy the diversity of markets, and to allow the rehumanization of the ecosystem over time. Automation, robots, and computers are not taking over with any long-lived economic inevitability. They tend to persist in the out-of-sight places, like factories and sewers, not in full public view. It's more about what we want to happen. So what do we want?

The robots called and cancelled

The reason I don't fear robotics is that those fears seem to be based mostly on a lack of imagination. First of all, the robots are already here, and have been for some time. Look around. Robots have already infiltrated society. Automata hide in plain sight:

  • Vending machines - eliminate the manual process of trade.
  • Banking transactions, credit cards - eliminate the manual movement of gold.
  • Rice cookers, coffee makers, vacuum cleaners.
  • Automatic gearing/transmission - eliminates the manual movement of spindles.
  • Autopilots - eliminate the manual movement of wing edges.
  • Even motorbikes and cars are robotic horses.

No, I am not cheating -- true, there is no humaniform automotive homunculus operating the controls (and surely no inflatable autopilot, but don't call me Shirley), but that idea was just silly. The engineering automation solution is the self-driving car, not the manually operated car with a simulated driver working the pedals. The latter is "API" thinking, i.e. getting caught up in how to connect bits together instead of actually solving the problem.
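
To caricature the difference, here is a toy Python sketch (hypothetical classes, not any real vehicle's interface): "API" thinking bolts a robot homunculus onto the old human controls, while the engineered solution designs the automation into the vehicle itself.

    # A caricature of the two philosophies (hypothetical classes, not a
    # real vehicle API): a simulated driver working the old pedals,
    # versus automation designed into the vehicle itself.

    class PedalCar:
        """A car that keeps its human-shaped controls."""
        def __init__(self):
            self.position = 0

        def press_accelerator(self):
            self.position += 1

    class SimulatedDriver:
        """'API' thinking: a homunculus that works the pedals for us."""
        def __init__(self, car):
            self.car = car

        def drive_to(self, destination):
            while self.car.position < destination:
                self.car.press_accelerator()   # mimic the human interface

    class SelfDrivingCar:
        """Problem-solving: the vehicle promises the outcome directly."""
        def __init__(self):
            self.position = 0

        def drive_to(self, destination):
            self.position = destination        # no pedals, no homunculus

    SimulatedDriver(PedalCar()).drive_to(10)
    SelfDrivingCar().drive_to(10)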

Asimov knew that, in the telling of a good story, readers need to identify with the face of automation, so he made them anthropomorphic, bipedal `humaniform' machines. The fictional form of robotics was an allegory for slavery and for the dehumanization of the workforce. It was designed to make us see the invisible trend, to think about it and to see fear in it. That is what fiction is for.

What is a robot then? It is an autonomous process that wields a strength that a human cannot muster. Such tools are everywhere, even today. But, if you look closely, they are never very far away from humans. This is because they are just parts within a larger system: a human-machine ecosystem. They work symbiotically, not competitively.

In spite of automated control systems, the train still has a driver. The plane still has a pilot.

Doomsayers are assuming that automation will simply treat a human as a component, and replace it with something cheaper. But systems are designed as interactive wholes, rarely in parts where humans are merely replaceable cogs. If they were, Lewis Mumford would be right, and technology would be liberating, if not necessarily welcomed.

The "rational economic arguments" for automation, are over-sold, in an oversimplification that focuses only on the dynamics of money, ahead of purpose and intent. Life is not merely a meteorological ebb and flow of money, we also insist on a purpose. Many of the Nobel prizes in economics over the past two decades have recognized that this detached view of economics was actually misguided, and the human element must be restored to interpret value.

Elinor Ostrom, for instance, showed how local (human) organizational and community values interact strongly with simple supply and demand economics, and with the governance of institutions (businesses). Simple rational economic game theory is insufficient to predict behaviours, and rule-based systems, like organizational structures, are insufficient governance to stabilize outcomes. From my own Promise Theory perspective, any explanation based purely on dynamics (economics) misses half of the important constraints: the intentional semantics. If you take only a piece of a puzzle in isolation, you can make false economic arguments. W.E. Deming and E. Goldratt also pointed this out about factory production processes.
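
The dynamics-versus-semantics point can be made concrete with a toy sketch (my own illustrative Python encoding, not Promise Theory's formal notation): the same measured dynamics can keep one promise and break another, depending on the intent attached to them.

    # A toy illustration (not Promise Theory's formal notation): the
    # dynamics (a measurement) only acquire value when judged against
    # the semantics (an intent).

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Promise:
        promiser: str
        promisee: str
        intent: str                        # semantics: what "kept" means
        assess: Callable[[float], bool]    # dynamics: how we measure it

    # Two promises about the same observable quantity (delivery days).
    fast = Promise("shop", "customer", "deliver within 2 days", lambda d: d <= 2)
    slow = Promise("shop", "customer", "deliver within 7 days", lambda d: d <= 7)

    observed_days = 3                      # identical dynamics for both
    print(fast.assess(observed_days))      # False: this intent was broken
    print(slow.assess(observed_days))      # True: the same data kept this one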

Alvin Toffler argued convincingly that the dominance of singular mass production is only a transitory stage, on the way to recapturing a more humane state, based on diversity. Only the most basic utilities will end up as permanent commodities, without disruption. Cynics might say that you have to eliminate humans because of expense and human error, etc., but then we are losing sight of purpose: what are humans here for? Humanity is the source of intent, and there is no independent evolutionary mechanism competing with that. Without humans, there is not even a process to automate.

All this suggests that, as humans are limited, so are machines, and there are limits to when human cogs can be replaced with machinery (and vice versa). It's remarkable that this seems so hard to understand, but I experience this blindness every day.

Why do we work?

I recall, as a ten-year-old, being asked at school: why do we eat? It was a trick question, of course. Most people said: to survive, or because we are hungry. But the teacher had a different answer: because we want to.

This is clearly a first-world answer, of course. In an economy of plenty, when our basic needs are taken care of, we don't do things because we have to; we do things because we want to. You could apply the same thinking to why we work. The economy of the West is undergoing a transition from one of trade and subsistence to one of leisure and vanity.

Why do you work? What makes you want to get out of bed in the morning? If you have ever burnt out, lost your job, or undergone a sudden change in life situation, you might have had to ask yourself that question. The ennui that comes from the loss of that anchorage, of routine and repeated relationships, is what throws us into confusion. Even artists and "creative" talents rely on routine.

Automation is not just about replacing jobs that are not creative. It is also about whether humans will step aside from their treasured relationships.

From the perspective of the practitioner, no one ever wants their particular job to go away, or predicts that it might. It is hard for them to see why it would --- after all, we are usually perfectly happy doing it in a certain way, even before we realize it might be an oppressive burden. A job is a sense of identity, of purpose, which we crave, thanks to our evolutionarily-inflated craniums. That sense of contribution, even duty, is often worth more than the hardships of the job. Outsiders see things differently though. If you only consume a service, you have no particular interest in how it is performed. You care only about cost.

Why, then, do we stand in line, and pay exorbitant prices to get fancy artisan coffee, when we could get it cheaper from a vending machine (and when it doesn't actually taste better than we could make ourselves)? We do it because we are still human. Price is not everything, especially in an economy of plenty. The entertainment value of the experience, and the image it conjures, is worth more to us than the price. Once diversity is overwritten with a single colour, the economics change to reflect the human quality of life.

For `robots' to take over, we have to lose interest and get out of the way of the automated solution. This requires trust (and sometimes fatigue, or merely distraction). All the history and tradition of Byzantine intricacy that we put into maintaining work has to be ejected in favour of accepting a new simplicity. To acclimatize ourselves, we do it in stages.

But this can also be reversed, as humans don't just lie down and accept fate. We can also turn ourselves into the machine. For many, this is infinitely preferable. It makes them the hero in their own narrative. What many want is to simply become the machine.

I have witnessed the reversal of automation technology in my own work. When I introduced CFEngine as a hands-free, autonomous, self-sustaining robot for IT server maintenance, it didn't occur to me that this would not be seen as progress. Once adopted, surely there would be no turning back. But this was not what happened. The culture changed, with the arrival of web commerce. The next `products' based on CFEngine's lead rolled back the hands-free approach, making human administrators and software engineers more involved, re-enabling an interactive role. Later, these were supplanted by tools which basically put a smart remote control in the hands of a human operator. Instead of lessening human involvement, the work culture restored it, alongside a reorganization of work, to recapture the sense of control.
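
The hands-free idea can be sketched in a few lines (a toy Python illustration of convergent, desired-state maintenance in the spirit of CFEngine, not its actual policy language): declare the desired state once, and let an agent repeatedly repair any drift, with no operator issuing commands. The remote-control style that replaced it puts a human back inside this loop, pushing each change by hand.

    # A toy sketch of hands-free, convergent maintenance (in the spirit
    # of CFEngine, but not its policy language): the agent compares the
    # actual state to the desired state and repairs any drift, forever,
    # with no human at the controls.

    import time

    desired = {"/etc/motd": "Welcome\n"}   # hypothetical desired state
    actual = {}                            # stand-in for the real system

    def converge(desired, actual):
        """Repair only what has drifted; do nothing if promises are kept."""
        for path, content in desired.items():
            if actual.get(path) != content:
                actual[path] = content     # the repair
                print("repaired", path)

    while True:
        converge(desired, actual)
        time.sleep(300)                    # wake, check, sleep; no operator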

What becomes popular, in a culture, is a different question to what might be optimal, or even safe. Human choice often seems capricious, because we have so many conflicting priorities.

The real danger: the software designed society

If robots and AI are not the progress to be feared from information technology, then what is? The real danger, I believe, lies in the fact that the society we take for granted always hovers on the edge of instability. By relying increasingly on software, with more and more of the wires exposed (microservices, APIs, RPC, etc.), we are placing ourselves next to the modern-day equivalent of a nuclear reactor, but with none of the safety features.

Real nuclear reactors, on the other hand, have what I call "true" automation (or storm drain automation), which stabilizes the system without human intervention; but this is no fun. What we want from automation is to be a part of the machine, part of the game. Culture plays a major role here. The fundamental non-linearity of something as large as a society means that its behaviour has to be regulated all the time.
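
"True" automation of this kind is just a closed feedback loop (a schematic Python toy with made-up numbers, not a reactor model): the system senses its own deviation and corrects it, with nobody typing commands.

    # A schematic toy of "storm drain" automation (made-up numbers, not
    # a reactor model): a negative feedback loop holds the temperature
    # at its set point with no human in the loop.

    set_point = 300.0      # desired temperature
    temperature = 350.0    # current temperature, starting too hot
    gain = 0.1             # how strongly the controller reacts to error

    for step in range(100):
        error = temperature - set_point
        temperature -= gain * error    # automatic corrective action
        # no operator, no typed command: the loop stabilizes itself

    print(round(temperature, 2))       # has converged close to 300.0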

There are different systems in play for this, which are preferred by different cultures. Politeness, the class system, gender roles, rituals, slavery, the law, etc., can all be seen as ways of maintaining roles in a stable system. As education has improved, we've understood that many of these systemic mechanisms were unfair on a human level, even if effective on a system level, and even contrary to our sense of morality or ethics (society doesn't have to be civil in order to work). However, many are also being disrupted by technology, driven by economics. For extra speed, we forego politeness. For profit, we manipulate the class systems, etc.

As we adapt our moral compass or find new solutions, often too slowly, there is room for the system to go unstable. This is what is scary.

One such change today is the very fascination with information technology, in which programmers can wield immense power over others' lives, just by tapping with their fingers. Our fascination with this actually makes IT workers emulate the machines they think they are controlling. Many IT engineers and programmers are gamers who want to be part of the challenge to control the system. They see it as the greatest game of our generation. They reject the stability of hands-free automation, in favour of power tools (like Ripley's power suit). Imagine if a nuclear power station did not have self-regulating cooling, but relied on typed commands to maintain the cooling rates.

They don't want to replace the programming of hundreds of computers with a commodity process, because they want to show that they can do it themselves. Many would rather have a robot arm, or a brain implant, as an extension of self, than watch a safely self-contained automated system. We want to become cyborgs, not be served by robots. It is entertainment, and it is pride.

We are so afraid of losing a cherished relationship with work that we arrange to work the pedals (APIs) ourselves. After all, why not? Who is all this for anyway? Society? Us? These are ethical questions, not technological ones. But the tech industry is ethically naive, just as the physicists of the Manhattan project were naive.

This, I believe, is more dangerous to society than any kind of artificial intelligence or economic imperative. We could lose our sense of what is right, safe, and generally advisable for civil harmony, to the capricious wants and needs of individuals with too much power. (Let's face it, there is nothing more human than that.)

Artificial ... what?

I confess to having rolled my eyes when I heard that Stephen Hawking had pronounced the end of humanity, at the actuators of artificial intelligence. I am also amazed at the predictions of Ray Kurzweil about the `singularity', applying Moore's law to intelligence. I mean, these are very smart people. But smart people don't always say smart things. I don't see how advances in computation will advance the understanding of intelligence. The tacit assumption, that computation and data have something to do with intelligence, seems just wrong to me.

Our intelligence grows from childhood over many years of training, through our physical and mental interactions with the world. We learn methods alongside experiences. It is not about the speed of linear computation, or the amount of memory. There is an interplay between instinct (muscle memory, if you like) and cognitive thought. Concepts are built up through ostensive communication, which is impossible without extensive sensory apparatus. For an intelligence to emerge in an artificial system, we would have to very purposely build it and train it interactively. We are not merely databases.

Even if we could do this:

  • Do we know what intelligence is?
  • Why would we make something to imitate our own?
  • Would an artificial system have curiosity? (perish all the Internet cats!)
  • Why do we think that intelligence would escape and kill us?
  • Why would we equip the intelligence with access to the tools for our demise?
  • Are we so sure that we would even be noticed by, or be interesting to, an artificial intelligence?
  • Would we even recognize artificial intelligence if it were different from our own, and vice versa?
  • If it is just like us, then isn't this just a Disneyworld theme park? Westworld?

Personally I believe that all of those human qualities that we pretend are weaknesses (and superfluous in robots) like emotion, dreaming, and imagination, are precisely the keys to understanding what we mean by intelligence. The brain is far too non-linear to be a Turing machine, and these contextual states are what makes flat information into actionable intelligence. We seem to be trying to compete with a waterfall by binding together hosepipes.

Wisely, we no longer call it Artificial Intelligence. Rather, we speak more appropriately and humbly of "machine learning" and "analytics". These are tools that do some heavy lifting to assist our own intelligence. They could be called `smart' in a branding sense, but they are not independently intelligent. They are cyborg attachments.

As an interested amateur, who has worked a bit on machine learning and system stability, it seems to me that enhanced intelligence is more likely to come from brain science than from current ideas of artificial logical reasoning. To imagine that silicon technology is the way to advance intelligence seems like the hubris of computer scientists rather than a balanced judgement based on advances in science. Consider what is still missing:

  • Artificial curiosity.
  • Artificial intent.
  • Artificial emotion.
  • Artificial hypotheses (artificial humour).
  • Dreaming.

It strikes me that, in our current approaches, we are obsessed with our feelings, not with the nature of intelligence. Computer scientists are trying to emulate only our perceived experience of intelligence, using computers, not what actually makes it tick. This means it is largely a cyborg vanity project, window dressing. We might be able to use it to extend our own capabilities, and provide cognitive glasses to see massive data with sharper focus, but we have not made anything like a feared brain to compete with our own intentions and judgements.

How we receive the benefits of intelligence does not give us a recipe for building it. Trying to recreate a video stream does not recreate the author of its script. Current AI is merely building Delos -- a Westworld amusement simulation. What we see is the emergent result that is tied to our linear understanding of time. It is devoid of artificial curiosity, or whatever seat of intent drives that stream of consciousness forward. It might be fine for a standalone Turing test, but it will not be compatible with us.

We are perhaps too concerned about artificial intelligence `emerging' because (i) we don't understand intelligence very well, and (ii) we don't understand emergence either; hence we fear something we don't understand.

Mark's law: nothing "emerges" without the biases and imaginations of observers (emergence is the subjectively apparent recognition of a function).

Emergence is about seeing faces in clouds, or intelligence in swarms. You have to know what you are looking for to be able to recognize it.

I suspect we will learn more about intelligence from stem cell research, neuroscience and Alzheimer's research, which are teaching us about the tissues surrounding neurons, and the roles they play in contextual regulation. Even the largest computer systems we have do not approach the kind of level of intricacy that exists in the human brain. It is not about the size, but the number of causal pathways.

I have no API, and I must scream ...

(with apologies to Harlan Ellison)

So, as far as I can tell, hands-free automation and independently thinking machines are not really going in the directions that doomsayers imply. Even stark claims of machine creativity are oversold results of algorithmic searches on flat data. The evolution of modern information technology looks a lot more like a cyborg vanity project, in which we equip ourselves with power tools to emulate super-powers.

We are using "AI" methods and robots not to simplify or change the way we work, but to give ourselves superpowers. We have a cyborg complex, and we are now looking for ways to reinsert humanity into a search-enhanced cyborg suit for the Software Designed Society.

  • AI is pursuing computational simulacra, which seems to be incompatible with understanding the origins of intelligence.
  • Worker robots/automata will flourish in the shadows, below our radar. Their place is to work out of sight, under the streets, in the walls or the cars, on construction sites, etc.
  • The only reason to make them seem human is for entertainment, at home or in amusement parks.
  • Putting robots into kindergartens and old people's homes, as helpers, seems like a way of scaling some challenges we face today, but we really need to ask whether we are letting our moral compass become bent out of shape by lazy thinking. Technology cannot fix society's moral problems.

I have said before that the APIs (programmer interfaces) will be our undoing, and I still believe this is a more likely doomsday scenario than an artificial intelligence going mad. We keep helping ourselves to power and influence with APIs, exposing more of the wires of society. When, finally, it is all rigged like a Christmas tree, it might take only a child tugging the wrong wire to unleash a catastrophe.

In Star Trek: The Motion Picture, the angry cloud computer V'GER (which spanned a solar system) returns in search of its creator, with only the intelligence of a child: "Kirk unit! V'GER must join with the creator!" What happened instead was that the human `Dekker unit' got machine envy and joined with it instead (apparently this Dekker unit was not an expert on human error, but he was a willing cyborg). This is what we do, because (at least for now) it is humans who are impulsive.

Some try to argue that robots (hands-free automation) are simply too dumb to trust, and lead to more problems than solutions, making good old-fashioned human decision-making the only answer. This is clearly as false as crying human error when machine systems fail. The story of King Canute illustrates perfectly why humans cannot manage every possible challenge. The real problem is that we are not designing safe human-machine systems.

We are naturally predisposed towards cyborg technologies, rather than independent machines, because we are control freaks. Our intentionality drives us to seek control, not give it up.

Computer science is built on "ballistic reasoning": essentially formalized storytelling. We call it logic. This is not intelligence. So why, then, are we so obsessed with this kind of linear reasoning? The answer is surely obvious: because that is the Cartesian Theatre we perceive to be going on in our heads, and that is what we have invested 50 years in trying to build. But therein lies the confusion: that reasoning is only our perception of intelligence, not the process that goes on underneath.

We need to sharpen our understanding of both intelligence and mechanical behaviour: what is smart behaviour, and who says so? Only then can we have a proper discussion about when and how to work with machines. This is not a simple matter, which is why misconceptions arise.

It is not what AI will do to us that should bother us; it's about what we might do to ourselves, through the neglect of human values. If we are not making the future for ourselves, then we will lose ourselves to it. Someone will exploit the weakness to their own advantage.

Read more on this topic in In Search of Certainty, and The Gamesters of Transmogrification.

MB Oslo, Sun Jun 7 13:04:59 CEST 2015