Promises of virtualization

Seeing shapes in the Cloud...

Cfengine today has developed a broad range of virtualization techniques that simplify the packaging and configuration of computer systems. Still, there is a lot of confusion about what virtualization is and is for. Promise Theory (on which Cfengine is based) could be seen as a form of virtualization itself, so how does Promise Theory help us to understand and manage virtualization and its Cloud Computing dreams?

What is virtualization?

Virtualization is that vague and diffuse word that was on everyone's lips before the equally nebulous `Cloud' stole the attention of the weberazzi. The term `virtualization' can and does mean many different things to different people. Today there are virtualizations of platforms (operating systems), storage, networking and entire multi-user systems -- but for the most part the virtualization du jour means providing a conveniently packaged simulation of one computer system using another computer system.

Why on Earth would anyone want to do this? There are, of course, several answers. For some, a simulation creates an illusion of added security or an additional layer of control; for others it is about power saving or lower capital investment; in the cloud it is about ease of packaging (commerce) -- making computers into separate saleable units that can be dialled up on demand. Basically all the reasons are about simplifying the management of something -- but exactly what that `something' might be is usually not discussed.

A quick and dirty answer that has elevated virtualization in the general consciousness is that we can no longer afford all the physical machines that are piling up in datacentres. The power requirements and the heat generated are so excessive that we have to find a way of sharing the existing resources better. But the use of virtualization for power saving is ironic, as a virtual machine is always slower and more cumbersome than a real computer, just as any simulation is slower than the real thing -- because there is overhead. So how can we save resources by using more? We can't. The point is that real computers are so poorly utilized in most cases that almost 80% of their physical resources are simply thrown away. So, even with the overhead, repackaging machine units using virtualization, and squeezing these into fewer physical machines, easily increases the total efficiency.

This cannot be the main reason for virtualization though. If we wanted to save energy, there are better ways, simply by running parallel processes under a single operating system. Multi-tasking operating systems were, after all, designed to virtualize computers, giving users the illusion of having their own single-user machine to play with. Each program on a multi-user, multi-tasking system runs as an independent process, under a kernel monitor which manages the walls between them. This was good enough for thirty years, but today it is considered risky, as we have lost our confidence in being able to manage the complexity of coexisting users. One user might, in principle, be able to look over the shoulder of another if the system is not properly configured. Thus the fear of not being in control of configuration has a cost. This is a management problem: we build a wall (at one cost) rather than pay the cost of policing the state of the physical machine.

Today we have simply moved up the hierarchy to virtualizing whole collections of users' processes, as independent virtual computers, to share computer resources between possibly different organizations with a more substantial `Chinese Wall' between them. It's a matter of trust.

Still going up, there is the Grid, joining isolated machines together into a world-spanning global computer, and the Cloud which simulates whole networks of multi-user virtual machines. Will there be an end to how many levels of virtualization we will support? Who knows.

Testing the armour

Why do we need all these levels? Have we somehow squandered the opportunity to manage resources better? I believe the answer is, to some extent, yes: we have squandered it by letting a cycle of virtualization spiral out of control -- but the main reason for the layering today has to do with packaging.

Virtual machines are here to stay, at least for the foreseeable future. We are piling complexity on complexity, and inefficiency on inefficiency, but we can just about cope with it, provided we have some kind of automation. But then we have two options:

  1. We strong-arm the implementation by simplistic brute force.
  2. We look for ways to understand our systems better and reduce their complexity before automating.

This is where Cfengine tackles the management issue with a convergent, Promise Theory approach.

You should now ask: why should Promise Theory make any difference? That is the essence of what I want to talk about in this essay, because there is a forgotten side to virtualization that is of enormous importance, yet no one seems to be writing about it today: virtualization is really a strategy for Knowledge Management. Promise Theory makes the choices clear.

What is virtualization really?

Virtualization is, of course, a buzzword, pregnant with marketing potential. As a technology, a virtual machine is simply a simulation that creates the appearance of a service by some underlying magic. We use virtualization often to make awkward technical details go away, or to create portable services (like mobile homes) that can be relocated, not tied to a particular foundation.

We also use virtualization to sell packaged services (like mobile homes). Today we talk of `platform as a service', where we sell both entire computers simulated in Amazon's Elastic Compute Cloud and simpler `managed services', where web services are sold as a commodity without seeing the details behind them. Virtualization is a form of interface design at the technology level, little different from the use of windows and menus on your PC, just intended for under the bonnet/hood of the computer. It is therefore, like all interface design, a strategy for Knowledge Management, embodying the key aspects: separation of issues, the de-coupling of design from execution, and the hiding of oppressive detail.

Virtualization is Knowledge Management

So virtualization is packaging:

  • An interface, or simplified gateway to a technical implementation.
  • A layer that makes appearance independent of the underlying implementation.
  • The `containment' of implementation that isolates the details from direct contact with users or other virtual machines.

These are all the requirements to turn computing into a service product, so the rise of Cloud Computing was an inevitable consequence of the rise of virtual machines.

Virtualization is thus part of a long tradition of information hiding in technology design. It is motivated by the desire to manage complexity. Being creative and commercial animals, humans naturally see ways to exploit the properties of the resulting packages to even greater advantage: interfaces can be moved around, underlying resources can be replaced or upgraded independently of what is living on top of them, and solutions can be packaged for zero-thought installation.

Flexible packaging has a cost though -- which is why we usually have to pay for it as a service. The trouble with packaged mobile homes on a camping site, for instance, or a room in a hotel, as compared to a fixed abode, is that there is a lot of extra work to do providing the necessary utilities to the small units as they come and go and move around. The lack of permanence has an overhead. The same thing happened in network technologies, when packet-based virtualization of data enabled fault-tolerant routing and recovery, with coexisting independent streams. The motor car packaged the transport of people, providing greater flexibility than point-to-point (circuit-switched) trains. Greater flexibility, but greater overhead. (For all the flexibility of the motor car, the train keeps a much more reliable schedule.)

In IT services, we (that is, modern system administrators anyway) manage complexity by making things pluggable, automated and self-repairing, using technologies like Cfengine. But the implementation overhead is only half the story. A harder problem is our inability to comprehend.

Promises virtualize management knowledge

Parallel to the process of virtualizing systems is the way we choose to virtualize our knowledge about them -- i.e. the abstractions and patterns we use to talk about them. This is where Promise Theory enters the picture, and reveals what virtualization is really about.

Promise Theory is a model for describing interacting resources. It tells of system components (usually called agents) that work together to maintain their promised state and behaviour. Although it was originally designed to represent Cfengine's configuration management behaviour, it clearly also results in a model of system knowledge.

Promises bring both clarity and certainty to our expectations of a system's outcomes, so they are useful for planning, and they signal intentions. (The promise "it will be a cake" is more useful for planning a party than a linear recipe "add flour, sugar, eggs, bake for 20 minutes, etc.", though of course it is no guarantee that the cake will be a success.)

In Promise Theory, promises are idealised statements that are made `autonomously', i.e. each agent makes promises about the things it can actually control (parts of itself). The opposite of a promise would be an obligation or even a command -- an attempt to imply control or power over something external. Interestingly, this is also the definition of an attack. Management by attack has been the norm for thirty years.

It is not hard to see that agents making promises are already like virtual machines, in the sense that they conceal their inner workings and communicate by offering promises as a management interface to the outside world. Promises are therefore a natural partner for talking about virtualization. Promise Theory is essentially a model, or `virtualization', of virtualization itself, but it is more than just an arbitrary name for an old story. It is based on principles that optimize for knowledge and repairability.

Realistic best effort is encoded

The promise concept encapsulates key principles that are useful for management:

  • An agent can only make promises about itself (causal boundaries)
  • A promise signals the expected outcome

Making a promise about something we have no chance of influencing is foolish, and this tells us how to partition the management of packaged entities, like virtual machines:

Any system may be partitioned into an internal environment and an external environment. We control the internal environment (like planting a garden), but forces beyond our control govern the external environment (like the weather). We turn this into an acid test:

  • Can we change it?
    Yes (we own it) => we can promise it.
  • Does it merely happen to us?
    Yes (like the weather) => we have to use it or work around it, and repair its effects.

At the boundary between internal and external there is an interface where promises can be made. If we imagine that the internal environment is a box within another box (the external environment), then the smaller box can promise its contents and what goes on inside it, and the external box can do the same. Since the external box `owns' the internal box, it can change anything at the boundary, like what goes into the smaller box.

The files and processes inside the box (the virtual machine) can make promises that look indistinguishable from those on a physical machine. The box itself can promise to use the resources available to it: a real or virtual CPU, a maximum amount of available memory, etc. The physical machine or hypervisor can promise to deliver a fraction of its own resources to the box inside it.

For a virtual machine inside a physical machine, we now see how promises can be used across this interface. The external environment can promise to provide power, memory and CPU resources to the smaller box, and the smaller box can promise to use these and do work, whose details are hidden inside it.

  • Virtual host: promises to manage the environment of its guests, which they cannot change themselves: setting the dimensional limits on CPU time, memory, storage, etc.

      # Promise made by the physical machine about the state of its internal virtual machine

      environments:

         physical_host::

           "virtual_host_id"
              memory => "1GB";
    
  • Virtual guest: promises to manage its `virtual resources': files, processes, data, running programs, etc.

      # Promise made by the virtual machine about the state of an internal file

      files:

         virtual_host::

           "/path/file"
              perms => owner("myuser");

These are very simplistic examples, but they illustrate the clear boundaries. Each environment aims to keep promises at the appropriate level (in this case with the help of Cfengine).

Promise Theory predicts (trivially) that virtual machines need to have unique identifiers in order to be able to make promises. Thus in order to have distinct and repairable properties, they need a unique name. It is not sufficient to just make a number of them all the same.

One doesn't have to use promise notation, or even Cfengine notation to have promises. An XML notation, or any domain specific language would do. What is important is that the management model satisfies the requirements of a promise model (see the Appendix below). Indeed, the libvirt XML format seems to satisfy most of the requirements, and Cfengine is able to integrate directly with this to add its convergent repair promises.
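
To make this concrete, here is a hypothetical sketch (not taken from any manual) of how the simplistic host-side example above might be fleshed out using Cfengine's libvirt-backed `environments' promises. The bundle name, body name and resource values are invented, and the exact attribute set and units depend on the Cfengine version, so treat it as an illustration of the boundary rather than a recipe:

    # Hypothetical sketch: the physical host promises the guest's envelope via libvirt
    bundle agent guest_machines
    {
    environments:

      physical_host::

        "virtual_host_id"
          environment_type      => "kvm",          # hypervisor backend
          environment_state     => "running",      # desired, repairable state
          environment_resources => small_guest;    # limits the guest cannot change itself
    }

    body environment_resources small_guest
    {
      env_cpus   => "1";
      env_memory => "1048576";   # intended as 1GB; check the documented units for env_memory
    }

The point is not the syntax but the partition: everything in this bundle is something the physical host owns and can therefore truthfully promise.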

The outcome of each promise should then be a verifiable property of an agent (like a check-box in a Service Level Agreement), and we expect it to last for a specified duration. Promises then describe epochs of expected stability. They tell a story of predictable knowledge -- and this is the hidden secret of promises, and hence virtualization: they are really strategies for Knowledge Management, because knowing relies on certainty over extended periods of time -- not just at the moment of deployment or `provisioning', as one sees with many current methods.

Patterns and the scalability problem

Talking about promises rather than recipes is not enough to claim significant headway in Knowledge Management, although the promise concept itself is a step towards virtualization of knowledge (hiding cumbersome and changeable implementation details in favour of advertised outcomes). Why? Because it doesn't necessarily do anything to prevent complexity from growing.

Even if one squirrels away promises inside virtual machine packages, what happens when the number of promises grows large? Then we are back in the mud. There has to be some way to stave off the complexity, and this is where patterns come in. A pattern, after all, is a compressed representation of information. (This is what led me to talk about molecular computing at ICAC a few years ago, which is about re-using patterns, by the way. The importance of patterns was also pointed out rather early by Kent Skaar, at the LISA Configuration Management Workshop in the mid 2000s.)

The simplest pattern is to make everything the same -- this is still the approach many IT managers take to `keep it simple' and keep the cost of management low, but it is a steamroller approach, and it is too simple: seldom the answer to any real problem.

The key to Knowledge Management is in designing patterns that recognize both compressible repetition and meaningful diversity: we package the repetitive pattern as a new box (a new virtual entity), give it a name, and are thus able to make a single meta-promise of a new type; in other words, a new interface to a new level of hidden detail (like a subroutine). Indeed, I often tell people that Cfengine is simply the fusion of two concepts: promises and patterns.
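
As a hypothetical illustration of such a named box in Cfengine (the bundle name `webserver' and the site names are invented for this sketch), a repeated pattern can be packaged as a parameterized bundle and invoked through a single methods promise that iterates over a list:

    bundle agent main
    {
    vars:

      # The meaningful diversity: each instance gets a name
      "sites" slist => { "www1", "www2", "www3" };

    methods:

      # One meta-promise stands for the whole repeated pattern
      "$(sites)" usebundle => webserver("$(sites)");
    }

    bundle agent webserver(instance)
    {
    files:

      # The compressed repetition: the same detailed promises for every instance
      "/var/www/$(instance)/."
        create => "true";
    }

Adding a new instance then means adding one name to a list, not copying out another block of detail.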

Isn't this just what programming hierarchies do already? Yes, but with a difference. The focus of promises is on outcomes, or desired states that are independent of where you started from, not on an assumed set of changes needed to get there. This makes the description simpler, and even optimal in an information-theoretic sense. A single promise `symbol' represents a persistent, error-correctable outcome that does not need to be repeated or woven into a computer program to be implemented or later repaired.
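
A hypothetical example of the difference (the file path and text are invented): the promise below declares the end state of a file, and applying it again once that state holds changes nothing, whereas an imperative step such as appending a line in a script encodes a change relative to wherever you happen to start, and diverges if repeated:

    bundle agent motd
    {
    files:

      # Outcome, not recipe: converges to the same state whatever the starting point
      "/etc/motd"
        create    => "true",
        edit_line => motd_text;
    }

    bundle edit_line motd_text
    {
    insert_lines:

      # Inserted only if absent, so repeated runs repair rather than accumulate
      "Authorized users only.";
    }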

It is easy to show that the representation of the problem cannot be made simpler than a convergent promise model -- not without throwing something away. Moreover, the promise relationship encodes the interactions between entities in the system, with clearly defined boundaries, so it is clean and based on reason rather than ad hoc scripting. It is a step towards real Knowledge optimization.

So whatever we might think about virtualization's rise to fame, what promises tell us is that it is not only the simulation technology that brings major if paradoxical savings, but also the Knowledge Management win: the packaging, the interface abstraction. In the long term, I think this latter detail will survive beyond the virtualization technologies themselves.

When I look at the way IT virtualization is handled by other software systems currently boasting their mastery of the Cloud, I have to shake my head in wonder. They might automate a few things, but they do nothing to simplify the knowledge issue -- they attack the complexity by brute force, brushing the responsibility for Knowledge Management under the rug. But if something goes wrong, or something needs customizing, all of the ugly details burst into view once again, and the user will feel the full impact of the chaos.

I am grateful to Nakarin Phooripoom, Eystein Måløy Stenberg and Dong Wu for discussions.

Appendix

What qualifies as a promise model?

Promise theory, as I have described it in the literature, has a certain notation and aesthetic that I prefer, but the notation is not unique. Many things can be viewed as promises, as long as they fulfil a few criteria:

  • Autonomous specification: promises are made by the affected agent, never by a third party on its behalf, and are a publication of its intention, i.e. they are about something the agent has control over. So the model has to realistically describe what agents control and what is beyond their control.
  • A promise documents the intended outcome of the promiser's actions. Promises are Knowledge.
  • Promises describe a successful outcome, not a recipe of steps as a computer program typically does. They describe what, not how.
  • It is possible to verify that a promise has been kept by some kind of assessment.
  • Any stakeholder (promiser, promisee, or other observer) can attribute a value to the outcome of a promise being kept, or not kept.

Method for constructing a promise representation

  • Make a list of the possible capabilities and states of each agent or virtual entity. This tells us the promise outcomes that are possible, or what can truthfully be promised. In other words, ask: can I change it? If so, I can make a promise about it. If not, then the promise belongs to something else.
  • Turn the possible outcomes into types and specifications of promise, e.g. files, processes, storage, virtual_environment, ... This leads to an ontology or data model for the promise's intention.
  • Finally document who makes which promise to whom, so that the promise model is a data model plus a network of `access rules'.

The complete information is a knowledge map of resources, their intended properties and the relationships between them.
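
As a purely hypothetical illustration of these three steps (the classes physical_host and virtual_host and the guest name come from the examples above; the paths and services are invented), the result might look like a single bundle in which the promise types express what can be promised and the class guards record who promises it:

    bundle agent promise_model
    {
    environments:

      # Step 3: the physical host owns the guest's envelope, so it makes this promise
      physical_host::

        "virtual_host_id"
          environment_state => "running";

    files:

      # The guest owns its own filesystem contents
      virtual_host::

        "/var/www/."
          create => "true";

    processes:

      # ...and its own services (step 2: outcomes map onto promise types)
      virtual_host::

        "httpd"
          restart_class => "start_httpd";

    commands:

      start_httpd::

        "/etc/init.d/httpd start";
    }

Read as data rather than as a program, such a bundle is precisely the knowledge map described above: resources, their intended properties, and the relationships between them.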