The Gamesters of Transmogrification
Some mischief recalling when "awesome" meant something to be feared

Are APIs opening up IT services to a free and open dialogue with users, or are we on a collision course with complexity? The rise in programmer-centric thinking, pushing responsibility to program with APIs onto end-users, has had me worried for some time. Are we offering flexibility or, in fact, turning IT into an expensive game? And who is being served by this in the end? There seems to be a hubris that programmers can and should be able to do anything from anywhere today, but is a low-level imperative interface a defensible approach?
At a talk I gave in Silicon Valley recently, where I was waxing critical about the explosion of API thinking in IT, one person in the audience muttered `does he even write code?' In that moment, I realized how much Kool-Aid the IT industry has gulped about how we are supposed to work with IT, almost as though we have somehow regressed into a world of programmable calculators, when you had to write a step-by-step program for everything. I had a sudden vision of writing an essay like this one, wondering whether daring to speak out would provide a counterpoint for discussion, or merely cause for extraordinary rendition by industry men in black. Well, here goes anyway.
A Game Of Throws
Software keeps changing the way we interact with the world. Its developers help us to shape lives and magically transform our environments, as well as capture them and interpret them for business and pleasure. Superficially, this seems to be a force purely for good. Software has become the universal polymer of modern design, as pervasive and alluring as the nylon creations of the petrochemical era.
But it also makes a great game for software developers -- and never more so than when you have remote access to practically anything.
The APIzation of things and the Software Defined Everything (SDx) movements worry me. These philosophies seem to say: here's a bunch of complex tools that you might need, which are going to take you a while to learn, and which could get you into trouble if you use them wrong. The responsibility is yours to learn to use them well; after all, with great power comes great responsibility. But this is the future, so get with the program! Best of luck.
What's wrong with APIs, you say? Isn't it a good thing that we are opening up closed software to allow customization? That depends on whom they are for and what they allow. I would like to suggest that it is not the panacea that we currently believe -- and that there is a good measure of hubris we need to swallow, especially in the area of Software Defined "Everything".
Interfaces are omnipresent. At some level, anything that relates to anything requires an interface, but such an interface does not generally expose the vital functions of its inner workings. What we do internally behind the user-facing interface boundary is our own business, but when the user interface becomes a programming interface for internals, it is potentially much more dangerous. We communicate with friends and colleagues, but we don't control their heart rate, cortisol and testosterone levels, nor can we suddenly demand the use of their arms and legs. If we could, there would be interesting contention in the world. There is good reason for autonomy in nature.
But this is what SDN, SDDC, Cloud, and any number of other services seem to be proposing. To cynical eyes, the programmability of interfaces shows more of the hallmarks of programmer self-interest, without the kind of social responsibility needed to make a usable service that is a force for good. I see a hubris: all programmers are assumed to be good enough to solve any issue that might arise, so if we just expose everything and push the responsibility onto them (as end users), then we've made progress. If they aren't good enough -- well, then they aren't good enough! Alas, not all programmers patrol the streets as KickAss at night. Many are simply ordinary, and need help to get started even with familiar services. Rather than taming automation for human benefit, we seem to be gamifying operations for the fun and pleasure of a technocratic elite. If it ends up being too complex, let's slap a quick GUI on it and make it consumable, without worrying too much about what's going on underneath (and leaving everything exposed).
Remarkably, we tell ourselves this is all perfectly awesome. We should all agree that this is a Good Thing. After all, what could possibly go wrong? Are we not invincible programmers? Are we not Klingons!? (Klingons do not fear death or restoration from backup.) It's a new playground. We love our jobs when we can control the entire world from GitHub.
This new Cult Of Awesome parades along the Industry's red carpet fashionably wrapped in the Emperor's best Victoria's Secret, with a `ping' at its fingers and alarm bells on its toes, riding atop its white horse fitted with unicorn marketing stickers. It all seems awesome because it is alluring and engaging, and it plays to what everyone wants to believe: that IT owns the world now. But I worry that we view this not as an effort to improve the world, but more as a test -- a rite of passage for programmers to prove their mettle and show that they too deserve to live in the New Age of Software Defined Everything, replete with Klingon pain sticks. Is it merely laziness, i.e. not bothering to create a usable product interface, and pushing responsibility onto the end user? And is there an alternative?
Magnification or Gamification? The programming imperative!
Why do we think exposing detailed internals is a good idea? After all, this is the direct transcription of the very micro-management which we loathe and detest in the human realm. Perhaps the answer is because it is a great game that challenges us to test our skills in combat.
Exposing every detail to a remote control framework might seem awesome (the risk certainly fills me with awe), but the laws of probability and computational complexity should hint that the more decision logic one exposes to a complex situation, the more likely that situation is to locate a failure mode. This is why biology invented skin and membranes (even though only 1 in 10^12 pathogens ever makes it past these defenses, humans still get sick and die).
In my book In Search of Certainty, I argued that, as society becomes more reliant on IT services, our responsibility to make them safe and responsible becomes more urgent. This is not just a technical issue, but an imperative for society itself. That which becomes public is no longer the province of the technocratic elite -- just as with electricity, nuclear energy, pharmaceuticals, or any other life-changing technology. Society's users need to be able to trust technologists, to subject them to public regulation, and to feel safe. But we don't see this happening.
So why are we now propelling ourselves from the previous era of heroic command-line dragon-slaying into an era of do-it-yourself robotic surgery?
No one can deny that there is a strong gaming culture amongst IT workers. It stands to reason that we might gravitate towards methods that foster this sense of identity. I don't believe the industry is even fully aware of the issue, but I do believe that it is real. The very last thing gamers want is a policy-based platform (what they might call northbound APIs, or policy abstractions such as PaaS) that would take away the challenge of that manual combat, and the sense of skill, and replace it with a sensible town-planning agenda.
Back in the late 1990s, I described the principles of autonomous, self-maintaining operation in Computer Immunology, or When Machines Get Sick, and over the years have researched and implemented many of them (especially through the vehicle of an evolving CFEngine). By building autonomous, policy-based, self-repairing adaptive systems to maintain a desired (healthy) state in a hands-free manner, the entire need for an API went away -- replaced by some "cached" policy decisions.
This idea has been used to quickly grow massive systems with impressive stability. It is a way of taking meaningless relationships away from humans and giving them to machines. Indeed, many of the large web players (including Amazon, Facebook and LinkedIn) have told me that they could not have grown to their current size without this. These principles are now being rediscovered by a small few (as everything old is new again, e.g. Google's Cloud Platform). But surely that was because we didn't have enough programmers back then. Today, in the magical world of the web, anyone who is anyone is a programmer, n'est-ce pas? And the companies pushing this are strong in programming talent. It's just that not everyone has those talent pools to draw on. How will they fare on the red carpet?
The declarative policy approach is thus being pushed aside by a return to the imperative Application Programmer Interface (API), based solely on the skilled micro-management, or combat, of trained warriors. It is a dose of pure interaction complexity served up on a fast-food bun (or occasionally on a rack of lambda calculus).
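To make the contrast concrete, here is a minimal sketch of the same goal expressed both ways. Everything here is hypothetical: `ToyCloud` is a toy in-memory stand-in, not any real provider's API.

```python
# Toy in-memory "cloud"; every name here is hypothetical and stands in
# for no real provider API.
class ToyCloud:
    def __init__(self):
        self.instances = []

    def create_instance(self, image):
        self.instances.append(image)


# Imperative style: the caller scripts each step and owns each failure mode.
def imperative_scale(cloud, image, count):
    while len(cloud.instances) < count:
        cloud.create_instance(image)


# Declarative style: state only the desired end state; a converging agent
# inspects the gap and acts on the difference, so re-running it is safe.
desired = {"image": "web-v1", "count": 3}

def converge(cloud, desired):
    missing = desired["count"] - len(cloud.instances)
    for _ in range(max(0, missing)):
        cloud.create_instance(desired["image"])
```

Running `converge` a second time changes nothing; that idempotence, not the syntax, is the real difference between the two styles.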
I say tomato, and you say protocol-instantiated RESTful API as a service
API-oriented tools today can lure users into situations of artificial complexity they did not expect. Imagine the RESTaurant that makes you do your own cooking and washing up (after you've paid the bill up front). Instead of presenting you with a menu of choices (which corresponds to a declarative way of approaching a restaurant), you pay for a set of strings to remote-control the kitchen in every detail, moving arms and legs to chop vegetables and stir sauces. This might all be fine if you are KickAss (Gordon Ramsay), but what if you are merely an ordinary cook, used to buying your food in body bags for the microwave?
The RESTaurant tells you: once you've learnt how to do it, it will be easier next time and you can advance to the next level of the game (jingles not included). Perhaps we'll add a punishment function for when the service doesn't do what you wanted (as in the illustration from Star Trek, The Gamesters of Triskelion).
"You can do anything you want to me with your API call, just don't hurt me, okay mister?"
(In the 1990s, security software was the snake-oil of its day, promising unlimited protection as a service. In the new-millennium teenage years, we are bottling it (often from pure python) in 57 ruby-red varieties, as liquid razor blades for DIY surgery, while security takes a RESTful vacation on the island of Kerberos.)
I've argued often that maintaining a close relationship with systems is indeed what we need to do in order to understand them. But there are productive relationships, and there are destructive ones. We have to cater to all levels of ability and responsibility. There are internals, and there are things we want to expose to the world. What kind of relationship serves user need, and is not just a patch for bad design? I am convinced that opening up every part of software's entrails as a service is not the answer. Cue re-runs of The Sarah Connor Chronicles.
A single user API does not abstract at the right level for distributed systems. Imagine this:
`Our task is to steer this herd of horses across the country.
Ladies and cowboys, I offer you the bridle and stirrup!'
`But how can we ride all the horses at the same time?'
`Obviously, we stack them in a 3 tier architecture.'
The expected solution, with a load balancer! And it was foreseen by the Final Five, so say we all! How hard could it be?
Did you say the lieu?
Reversing this notion that we have to program our way out of every paper bag is not going to be an easy pill to swallow, now that we've elevated software-defined to godlike status. The paradigm of programmability is winning ground, and well -- are we not Klingons?! Although DevOps has famously championed the need for a greater cultural understanding (not to say maturity) in a user-oriented service space over its five-year campaign, I know many DevOps evangelists who still lament that the majority is still singing "You say culture, and I say awesome new monitoring interface".
On social responsibility: I first talked about Asimov's laws in my essay on computer immunology, and how Asimov famously foresaw the need for a simple set of meta-promises to keep people and their assets safe. Asimov, together with his editor John W. Campbell, Jr., invented the Three Laws of Robotics. The laws went like this:
- A robot may not injure a human being, or through inaction allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov's laws were supposed to protect humans from complex machinery in unpredictable circumstances. It was clear to Asimov that people could quickly become dependent on machines, and thus there was a need to be able to trust them implicitly. Law 1 is about human safety, law 2 about preserving the meaning of our lives, and law 3 is about protecting our investment in expensive kit.
Now, simply replace "robot" with "software", and where are the laws or even the promises that play the analogous role? In fact, I have a law of my own which might speak to the economist in us.
Mark's Law: any technology that forces programmatic reasoning onto its end user is very unlikely to reduce their Total Cost of Ownership.
Soon, when we are all buying SaaS on the shopping channel, software APIs will lie alongside the complete set of knives for surgical intervention, followed by: Has any part of your app been involved in a collision with complexity? We'll get you money for your dead child process! Side effects include high blood pressure or vomiting, with irritating rashes of decision making. Shyster & Shyster can get you what you deserve!
SDx and Wanting to be the Machine
Asimov imagined humaniform robots that drove cars, mowed the lawn, or used a vacuum cleaner. We've been smarter than that, with automated vacuum cleaners and self-driving cars. One service interface driving another through a human facsimile is nonsense. There is no reason to imitate a human way of doing something. I described the progression of thinking in In Search of Certainty:

With remote-control APIs we are opting for something even more primitive than that: actual Pinocchio strings. What use is a robot butler if you have to operate its arms and legs yourself?
After years lagging behind server automation, I was hugely optimistic when networking companies finally considered bringing modern autonomous automation methods to network equipment. When Software Defined Networking was proposed, its proponents were touting all the right words: separation of control from delivery, intelligence at the edge, abstraction, etc. But then they built OpenFlow, a Rube Goldberg machine with SDN to operate it. Don't worry though, it's open source so you can see all the moving parts!
Instead of offering a transparent and user-oriented solution, they came up with a low level programming playground, complete with swings and roundabouts, to imitate a 1930s phone operator patchboard for flows. Then there is OpenStack, a cloud deployment playground that changes shape faster than you can find the needle inside, let alone move it.
What about configuring the boxes? Everyone knows that, when in doubt, all problems can be solved with SSH or SSL throwing commands over the wall. After all, if it starts with an S it's all secure, right? Or did the S mean scalable?
We have spent a decade giving people remote controls to play with. Instead of taking axes and hammers away from manual workers, we've now equipped them with catapults for flinging their axes over the wire. Is this really what we need for precision reliability? Is it even ethical to try this kind of hit-and-miss form of management? Of course, we know what we're doing. We are Klingons!
Programmers are taught from college to get involved in processes, to insert themselves into the logic, like craftsmen. They don't just want to use the machine, they want to be the machine, to fight the machine, and then stand next to it on the scoreboard. "There is only one way to do this! If I don't have complete micro-control of every stage in the process, I can't guarantee the outcome. And no, I will not wear my pager at the weekend when the house of cards all comes tumbling down."
The future of medicine lies in pills that are easy to swallow
There is an alternative that is less risky, however.
In the early days of medicine, bloodletting and cutting into people was a popular approach, but once we began to research what went on inside the body, we discovered that it was already well engineered to adapt and repair itself, with just a little push here and there. So we invented pills and potions to bring us to some policy-prescribed end-state. And with a mixture of behavioural norms and the occasional medicinal intervention, we find an equilibrium, without Klingon pain sticks.
A pill is a piece of non-invasive, autonomous, policy-prescribed automation. It can be mass produced, distributed and taken in parallel by thousands, without queueing up for the operating theatre. A surgical procedure is a risky last resort that requires downtime. Even if the result is not perfect, a pill offers so much less risk, and is so much cheaper, that there is little reason to contemplate surgery if it can be avoided. CFEngine was designed as a policy-controlled pill that works from the inside out. We are talking policy agents (a la promise theory, if you like) providing an expert platform.
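The "pill" idea can be sketched in a few lines: an agent that repeatedly compares actual state to a desired (healthy) state and corrects only the difference. This is a toy illustration of the convergence principle, not CFEngine's actual implementation; the state here is a plain dict standing in for real files, processes and permissions.

```python
def repair(actual, desired):
    """Converge `actual` towards `desired`, touching only what has drifted."""
    changes = []
    for key, want in desired.items():
        if actual.get(key) != want:
            actual[key] = want          # the corrective dose
            changes.append(key)
    return changes                      # empty once the system is healthy


# A drifted system state versus the policy-prescribed healthy state.
state = {"web_service": "stopped", "config_mode": "0777"}
policy = {"web_service": "running", "config_mode": "0600"}
```

The first run repairs the drift; every run after that is a no-op. No surgeon, no API call sequence, no queue for the operating theatre.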
Today's IT interfaces are made for surgeons, not for general practitioners. They ask for expensive procedures, not two tablets and don't call me in the morning.
But how do you innovate?
An argument for APIs is that, without them, you cannot innovate. If people can't experiment, they can't move that needle. There is some truth to this, but not much. After all, this was the argument for Open Source too.
Open source software has brought very little innovation, but plenty of imitation. That's because it is a sport rather than a calling for most. People like Open Source for the inclusiveness, for the outreach, the tribal sense of community, and for the easy (free beer) availability, not for their ability to contribute to it. Innovation happens in the resulting communities because people have a voice in a public forum. This has nothing to do with declarative versus imperative: one has at least as much of a voice about CFEngine (a declarative system) as one has about Amazon's cloud API.
In practice, innovation happens in the same way for open APIs as it does for closed: you complain to the service provider or author, and they innovate from within on your behalf. Expertise and caring are not as widespread as we imagine. The openness is more a seed for community building, and the formation of a culture.
It's culture ain'tcha
DevOps has put its finger on a number of cultural dysfunctions over the past five years. At some level, gamification has to be seen as arising from a lack of empathy for the actual users, who are far away from the system engineer's situational awareness. Are we now writing software for ourselves and our own comfort levels? And this is a cultural issue too: it is about where we, as an industry, place our investment in ethics. Jeff Sussna has argued this point eloquently.
For a while, open source has played a role in helping people to feel good about the choices they made. As long as the software is open, we are doing our civic duty, right?
What few see is that Free and Open Source (FOSS) itself is intensely gamified -- and this drives gamification of the APIs too. FOSS acts now more like major league sport than a research community. It has become tribal. Do you support Debian or Fedora? Puppet or Chef? Openstack or Cloudstack? We see technology actually being held back by allegiance to this cult of awesome. Open Source fandom is increasingly about filling a stadium (LISA, Velocity, OSCON, PuppetCon, NagiosCon, etc) not about who can hit the ball. For that we can partly blame the glowing brains of Casino Silicon Valley's Venture Capitalists, who place bets to prop up (or tip the balance of) the natural selection process in the tribal space. They nurture the Cult Of Awesome.
There is not an infinite pool of programmers for every company to draw on. If we are only making DIY tools, is the programmer of tomorrow just a hired handyman, encouraged to tinker forth solutions without architectural planning or approval? Remember those Industrial Age building contractors who delivered questionable work late (in between frequent coffee breaks) and then ran off with your money to build the same kind of home for themselves in Ibiza? Placing these side by side with API handymen, memories of leaking roofs start coming back to me. Perhaps IT workers are slightly better dressed.
Platform or Policy as a service?
Why am I being this hard? Was I not complicit in inventing this automation? Am I perfect? Of course not, and there is no one-size-fits-all answer here. But we must ask the questions. Did we open up the technology or simply create a bigger playground? Why are we doing it? Is it to offer flexibility, or is it laziness? And why do we support SDx? To bring fun or meaning to our jobs? Or to make the world a better place? If it is the former, then perhaps we need to go back to the lessons of DevOps and think about the culture of empathy -- thinking less of ourselves and more of the innocent bystanders.
Those who are privileged enough to become doctors, engineers and the builders of society have to show a responsibility to those who are merely users, who cannot protect themselves. Users trust us to make their world safe, not to win points in the game. The trouble is, "IT" does not fundamentally want to help society. The cult of awesome wants to be the machine, not build on top of it for someone else.
The DIY culture is upon us. That's fine, in the privacy of one's own home. We don't want a cheat sheet or platform to tell us how to do it -- we want to forage around in the ecosystem and build our own. Evolution will eventually find the truth, but the search might be NP hard and take a very long time. We should not use the culture of introversion as a way of shirking social responsibility.
What transformed western health was not surgery, but pharmaceutical technology. Pharma is the equivalent of policy-based repair from within.
The cloud has taken on the mantle of the primordial battleground for business today, and with few exceptions, the cloud hordes are wielding primitive axes and catapults to get their data into the afterlife, not the kind of futuristic `platform' automation that is wearing the Emperor's new clothes. So innovation has not taken great strides. Personally, I don't need a remote-controlled carsey with an API, even if the Japanese do. I just want a simple, maintenance-free service. I am happy to swallow a pill without all that invasive surgery, or just rely on internal mechanisms. There are other challenges to test our mettle than micromanaging everything.
The industry presents this current software-defined mind-warp on a silk cushion of optimism, as though we are progressing into a radical New Age of Enlightenment, but isn't that exactly what those shepherds of dogma told us before plunging humanity into the last Dark Age? Enlightenment is a time of asking hard questions, of appealing to science. A dark age is an age of faith and of clinging to belief (you march into a holy war unprepared for the scale of what you've taken on). Ask yourself: which of these do you really see? Do we really have what it takes to micro-manage every battle with a remote online service? What happens when we need to ask for help? The Klingons don't always win, in spite of their bravado.
We gotta ask ourselves a question: Is SDx going to be just a vanity extension to World of Warcraft (with ruby charms and magical snakes) or can it be a stepping stone to something better?
MB Oslo Thu Jul 31 21:49:52 CEST 2014
Acknowledgement: I'm grateful to Mike Loukides, John Willis, Simon Wardley for reading. Also thanks to Dave McCrory for permission to use the twitter excerpt.