Smarter Spaces
On a scale from Smart Cities to the Intercloud of Things..., how much will your space think for itself?
Thu Feb 11 14:05:13 CET 2016
In information technology (IT), we think we know a thing or two about scaling systems. When we say scaling, we mean something different than what the physicists mean when they say it: we mean that we are trying to increase the performance or throughput of a machine by adding more components---or, by adding more horses, to paraphrase Henry Ford. In physics, scaling is about how system behaviours depend (in the most general way) on dimensionless variables, like the number of components, or ratios of scales, that measure the macroscopic state of the system. It is related to notions like self-similarity, which can sometimes be observed to span certain ranges of the parameters, and `universality' which tells us how the broad behaviours are somehow inevitable, and don't depend on the low level details. How these two viewpoints are related has long been of interest to me.
This article introduces a paper on scaling, On the scaling of functional spaces, from smart cities to cloud computing, inspired by work on the scaling of cities.
The meaning of scale, for CS and Phys
When I started studying computers in the 1990s, 100s of machines were a lot; then it was 1000s, and 10,000s, and so on. Every few years, we seem to add a power of ten. As demand increases, we have to think about how to scale not only size but functionality to cope with growth. This is a special concern for IT: functionality, or intent of outcome, is a dimension separate from performance, and one that is not obviously related to size. What happens, in practice, is that we tend to start out by designing a functional system to service a small scale, and later try to photographically enlarge the whole thing, assuming that the outcome will be inevitable, without necessarily rethinking the approach. But what if that is wrong?
For possibly the first time, there are data from an unexpected source to shed light on whether this makes sense.
Datacentres are still relatively `small'. If we were to compare the size of an average datacentre (measured in number of servers) with the size of a city (measured in number of inhabitants), we would find that only the largest datacentres are beginning to rival the scale of human populations. A city is basically an operating system for providing and sharing resources---surely there is something we stand to learn by studying what is known of cities, both as mechanical infrastructure and as social communities.
Recently I stumbled across some fascinating work started at the Santa Fe Institute by Luis Bettencourt and collaborators, on how the properties of cities scale with population. This grabbed my attention immediately, because it deals with many issues for which we have no data in IT. It was a chance to apply promise theory to exactly the kind of problem it was intended for.
IT, thinking in straight lines - queues
Knowledge of scaling (aka `scalability'), in IT, is based roughly on a hammer and nails version of queueing theory. We think of batch computations which can be parallelized, and interactive services that can be fielded with load balancers. A load balancer, which is essentially a router, serializes traffic from disparate sources and distributes it to a backend of dutiful servers, that may grow and shrink elastically on demand.
As long as traffic is sparse (the arrivals are low in density compared to the service rate), queueing theory tells us that there is a marginal efficiency to be gained from this load balancing arrangement (like an M/M/n queue). However, this marginal efficiency, at low load, is bought at the expense of a gross inefficiency at high load, namely congestion, or the need for a large waiting buffer in which the queue waits for service (not to mention the processing capacity needed to marshal arrivals into line). Our obsession with this very one dimensional arrangement surely persists from the original underlying electrical circuitry, which made computers from components connected by wires. Very little re-evaluation of the design has taken place; indeed, there are even plans to emulate the physical model in virtual form, with exactly the same limitations (see Network Function Virtualization, etc). But, if we think more carefully, there is reason to suppose that this needs to change. Take the lines at the supermarket, for instance: they self-organize somewhat differently.
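To make the low-load/high-load tradeoff concrete, here is a minimal sketch using the standard Erlang C result for an M/M/n queue. The numbers (a service rate of 1 job per second and n = 10 servers) are purely illustrative, and the function names are mine, not from any particular library.

```python
from math import factorial

def erlang_c(n, lam, mu):
    """Probability that an arrival has to wait in an M/M/n queue
    (arrival rate lam, per-server service rate mu, n servers)."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / n                       # per-server utilization; must be < 1
    top = (a ** n / factorial(n)) / (1 - rho)
    bottom = sum(a ** k / factorial(k) for k in range(n)) + top
    return top / bottom

def mean_wait(n, lam, mu):
    """Mean time an arrival spends queueing before service begins."""
    return erlang_c(n, lam, mu) / (n * mu - lam)

mu, n = 1.0, 10
for lam in (5.0, 8.0, 9.0, 9.5, 9.9):        # arrival rates approaching saturation
    print(f"utilization {lam / (n * mu):.2f}: mean wait {mean_wait(n, lam, mu):.3f} s")
```

At half load the expected wait is negligible; as utilization approaches 1, the wait (and the buffer needed to hold the queue) grows without bound, which is the gross inefficiency referred to above.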
What if we look at the problem again? What happens in supermarkets, when people go to the checkouts; or, in a city, when people flock to find a hotel, a cinema, or enter a supermarket? They don't all stand in a very long line, at the edge of the region, waiting to be dispatched by some controller to a private handler; rather, they flood to the locations, from all directions, funnelling and interleaving at random, attracted by the services, and repelled by one another. The functional services act like central `gravitational' forces, attracting a fluid mass of users, whose density builds up around the promise of service. Now it starts to sound a lot more like physics than computing, even though the transactions are all pure information.
Is spacetime all it's networked up to be?
Spacetime is a subject we normally associate with astrophysics, and Einstein. Sometimes we can disregard it, in the virtual world, by inventing artificial spaces, that draw attention away from what is happening in reality. Sometimes we can't.
You might think you know what spacetime is. The average physicist takes spacetime for granted, as a background in which all activity is embedded; it comes with established scales of measurement, and it is `always on'. Physicists seldom think about its relationship to networks. The networks of interactions between point sources are too complicated to deal with at a large scale, so physicists renormalize them away. The average computer scientist doesn't really know that spacetime exists at all, and would prefer to ignore it anyway. He or she is only concerned with the point-to-point networks, between things, not the region in which they are embedded. Everything that happens in a computer can surely be represented by partial orderings of symbols, and branchings of logic, in one dimension, like a tape or a queue.
This parodic juxtaposition highlights why computer science and physics need to reconnect. It turns out that, to understand processes, especially those constrained by intent and functional outcome, we need to go back to basics about what space and time are, and accept that the role of spacetime is more subtle than most of us realize. And if you are sceptical that space means anything to functional systems, you only need to look at the most complex functional machinery around: biology.
What are space and time?
- Space is a set of states that may or may not be occupied by `things'. We've all learned about dimensions in school, and we are used to living in 3 dimensions---though sometimes we use effectively 2 dimensions (e.g. when drawing) or even 1 dimension (e.g. when driving on the freeway or writing on paper).
- Technically, the dimension of space is related to the number of `degrees of freedom', or possible ways the `things' can move, at each step. In other words, a dimension is a freedom to change location in a distinguishable direction. This, in turn, might be limited by constraints, such as walls and barriers.
- We are used to thinking about spaces that are `homogeneous and isotropic', where the dimension of space is the same everywhere; but, in a network, the degrees of freedom may change from point to point. Weird!
- Moreover, the dimension of a network (call it inner space) is different from the dimension of the volume in which it is embedded (call it outer space); see the sketch after this list.
- Time is something else that we take for granted. Time is what clocks measure. Global time is the sequence of changes to the configurations of these states (it forms a giant clock), but it is hard to measure, because no agent truly has a view of the whole system. So local time, which is based only on the changes we can see, appears to move at a different rate than global time. Time is compressible!
- Space and time belong together. In a one dimensional world, internally, space and time are the same.
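As a concrete illustration of `inner' dimension, here is a minimal sketch (my own, using only the standard library) that estimates the dimension of a network from how the number of nodes within graph distance r grows with r. It uses a square lattice, where the answer should come out near 2.

```python
from collections import deque
from math import log

def grid_graph(n):
    """Adjacency list of an n x n square lattice (inner dimension ~ 2)."""
    adj = {}
    for x in range(n):
        for y in range(n):
            adj[(x, y)] = [(x + dx, y + dy)
                           for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                           if 0 <= x + dx < n and 0 <= y + dy < n]
    return adj

def ball_sizes(adj, origin, r_max):
    """N(r): number of nodes within graph distance r of origin, for r = 1..r_max."""
    dist = {origin: 0}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return [sum(1 for d in dist.values() if d <= r) for r in range(1, r_max + 1)]

adj = grid_graph(41)
sizes = ball_sizes(adj, (20, 20), 10)
# The growth exponent log(N(2r)/N(r)) / log(2) approximates the inner dimension.
print(log(sizes[9] / sizes[4]) / log(2))   # approaches 2 as r grows
```

The same estimator applied to, say, a datacentre fabric or a peer-to-peer overlay gives an effective inner dimension that need not match the three dimensional room the hardware sits in.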
Spacetime involvement in scaling is subtle. We are simply not used to thinking about systems in more than one dimension, and our views of space and time are based on everyday life, not on the interactions of specifically labelled parts.
From an information perspective, the familiar uniform spacetime we are used to taking for granted is only a large scale continuum approximation that belies the true nature of the network (this is probably true in quantum theory too). The effective dimensionality of a network is only related to the dimension of the space in which it is embedded if it interacts with it. A particle in 22 dimensions may as well be zero dimensional unless it can interact with the volume around it. A pipe or a sentence is one dimensional, as long as the small cross-sectional incursion into neighbouring dimensions is unimportant to its dynamics.
Universal scaling...it's not what you think
I have made it my mission, over the past two decades, to work on the microscopic properties of information systems (from analytical studies to Promise Theory). But it is only recently that I began to see a need to develop a specific functional notion of space and time, or what I call `smart semantic spacetime' (you can read about this work in these papers).
When I heard about the work studying cities, which started not from microscopic but macroscopic bulk quantities, I was immediately excited for two reasons: we don't have large scale data on any other kind of functional system where humans play a role (though we do have biology, which is functional), and there is an opportunity to understand functional scaling, using promise theory, as I have tried to do in semantic spacetimes. How scale impacts on functional outcomes, and how those outcomes might fail (broken promises), is one of the key questions in systems analysis.
In IT, the best understanding of scale comes from parallel computation, which is a one dimensional processing model that acknowledges that there can be speedup due to multiplexing and compression of a task through parallel processing. Like handwriting, or a pipe, the incursion into more than one dimension through parallelization is basically trivial. Scaling laws in IT (like Gunther's `universal scaling law' and Amdahl's law) are not as universal as they claim to be, because they relate only to one dimensional processes. The dimension of a functional system is not the dimension of the space in which it is composed. But, wait... What if it filled that space so pervasively that this were no longer true?
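For reference, the two laws mentioned are usually written roughly as follows, where p is the parallelizable fraction of the work, n or N is the number of processors, and alpha and beta are contention and coherency coefficients (notation varies between presentations):

```latex
S_{\mathrm{Amdahl}}(n) = \frac{1}{(1 - p) + p/n},
\qquad
C_{\mathrm{USL}}(N) = \frac{N}{1 + \alpha (N - 1) + \beta N (N - 1)}
```

Both describe how a single serialized workload speeds up when sliced across more workers; neither says anything about the dimensionality of the space the workload lives in, which is the point being made here.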
Computing in another dimension
This is where information technology is going: whether in centralized datacentres (currently called `cloud computing') or in the fledgling embedded spaces of environmental computing (currently called the Internet of Things). The density of point sources is becoming pervasive throughout volumes, so that counting workflows and resources in terms of their averages throughout an embedding space is the only practical way to account for them statistically.
Modelling of cities is interesting precisely because it demonstrates that we cannot understand the observed data without understanding the dimensionality of the embedding space. Computer scientists and physicists both need to pay attention to this. It bridges two disparate worlds that barely understand one another. Spatial constraints balance the degrees of freedom due to close-packing within an embedding dimension, much as in materials physics.
Now here is the pinch. Does this mean that we could expect to see the same behaviours observed for cities in IT systems at scale? In IT, scale today means cloud computing and massive datacentres, and it does not currently pay attention to dimensionality. However, changes are afoot, with network fabrics that are increasingly two or more dimensional. The problem is that there are little or no data in the public domain to go on. Amazon, Facebook, and Google no doubt have access to data at this scale, but it is not in the public domain, so for all intents and purposes it does not exist.
The next information age is one in which the old dream of pervasive computing becomes a reality. Make no mistake, the picture we have of cloud computing today, i.e. renting temporary virtual machines halfway across the planet, is not the future of information technology. Amazon and Google and their offspring might manage and even own the cloud of the future, but it will be in substations in your home, bottled in your pocket, and in your town centre, like other utilities are today. The use of wireless technologies already allows computers to fill space, in a very real sense. Mobile ad hoc networks discovered this a decade ago.
The bottom line is that information service interactions will fill the space in which they are embedded like never before. Like a city. It's not just parallel, it's bulk, with degrees of freedom in all directions! New structures are emerging to scale information flows, like Clos network architectures, which I wrote about in In Search of Certainty. By understanding space and dimensionality, we can unravel the folded structures of networking, which are drawn awkwardly in the image of classical one dimensional hierarchies.
The same diagram can become a much simpler, three dimensional, unravelled structure without any crossed wires!
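To give a rough sense of how such fabrics scale, here is a minimal sketch of the usual counts for a k-ary fat-tree, one common realization of a folded Clos fabric; the function name and the sample values of k are mine, chosen purely for illustration.

```python
def fat_tree_counts(k):
    """Switch and host counts for a k-ary fat-tree (k even):
    k pods, each with k/2 edge and k/2 aggregation switches,
    (k/2)^2 core switches, and k/2 hosts per edge switch."""
    edge = aggregation = k * (k // 2)
    core = (k // 2) ** 2
    hosts = k ** 3 // 4
    return {"hosts": hosts, "edge": edge, "aggregation": aggregation, "core": core}

for k in (4, 16, 48):
    print(k, fat_tree_counts(k))
```

The point is not the particular numbers, but that capacity grows by adding breadth in every direction, rather than by lengthening a single hierarchy.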
So much for space. What about time?
Time moves faster in a busy system, because the more you fill your time, the more ticks of your brain clock happen per unit of clock time, and the more changes you can notice and fit in (there is a toy sketch of this after the list below). In the world of embedded devices, utilization is very low. Our smartphones and home devices spend most of their time asleep. This limits their usefulness too. But the phones do something more valuable: they move in space!
- In a world dominated by time, flows are serial, with a little parallelism.
- In a world dominated by space there is multi-dimensionality.
- In the interaction between them, all possibilities are on the table.
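Here is the toy sketch promised above; it is my own illustration, with made-up numbers, of how local time, counted in the events an agent actually observes, runs at a different rate from wall-clock time depending on utilization.

```python
import random

def local_ticks(utilization, wall_seconds, events_per_busy_second=100):
    """Count the events an agent observes ('local time') while it is busy
    for only a fraction of the wall-clock seconds (illustrative numbers)."""
    ticks = 0
    for _ in range(wall_seconds):
        if random.random() < utilization:      # the agent is busy this second
            ticks += events_per_busy_second    # so it notices many changes
    return ticks

random.seed(1)
for u in (0.05, 0.5, 0.95):
    print(f"utilization {u:.2f}: {local_ticks(u, 1000)} local ticks per 1000 wall seconds")
```

A mostly-idle sensor experiences almost no local time at all, which is the sense in which time is compressible.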
Thanks to the ingenious work studying cities by Bettencourt et al, we can see these processes at work in a very universal way, with measurable effects on the performance of city behaviours. The behaviour in other kinds of information systems might not be identical, but the principles are likely to be universal, as I point out in my paper.
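The headline result of that work is, roughly, a power law relating a city's bulk outputs Y to its population N:

```latex
Y(N) \approx Y_0 \, N^{\beta}
```

with beta reported below 1 (sublinear, roughly 0.85) for material infrastructure such as road surface and cable length, and above 1 (superlinear, roughly 1.15) for socioeconomic outputs such as wages and patents; beta = 1 is simple proportionality. These particular exponents are Bettencourt et al's figures for cities, not measurements of IT systems.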
Scaling semantics: The modular specialist gets lonely without users
So much for scaling of dynamical behaviour. A question that physicists rarely ask is: will a system continue to keep its promised function when there is a change of scale? This includes the modelling of functional relationships, which cannot be derived from a dynamical theory; multi-tenancy, multiplexing, and multi-dimensionality also play a role, as a promise theoretical description helps to reveal. Intentional structural organization is present in biology, of course. Organisms exhibit a rich variety of structural adaptations, perhaps the most striking of which is specialization. In the realm of human society and technology, modular specialization is the force that has driven urban living and the economy of the western world.
Modularity is universally presented as a `good thing' in IT too. It is thought that, if we break up a problem into tidy parts, and reconnect them through tidy interfaces, the problem will be more manageable. This is not a scientific evaluation, but more folklore. Modularity is about the independence and dependence of functional agencies, coupling, and the economic sustainability of knowledge and its application. Unchecked, modularity has been blamed for the collapse of entire civilizations by Joseph Tainter in his book The Collapse of Complex Societies. But, in many spheres, modularity is simply considered to be a form of order, which is viewed as the opposite of the popular understanding of `entropy' and decay. So much so that garden cities, like Brasilia, which were designed for tidy organization, failed as designs because they placed aesthetic values above actual science.
Today, another version of the modularity argument is resurfacing, as we reach a capacity limit in human-computer systems. The concept of microservices has been gestating for some time, but recently it has been pushed as a new panacea in software development. Because humans have a limited valency for attention (cf the Dunbar hierarchy), the microservice philosophy says that we should break up complex problems into manageable pieces and reconnect them through networks. Human attention has a hard limit, but technology can cope with the extra cost of scaling interactions between modules. But, putting all of this speculation aside, can we actually understand how modularity can affect the scaling of a large system and its produce?
Microservices cannot be a panacea. What we see clearly from cities is that modular, specialized services can be semantically valuable, but they can be economically expensive, scaling with superlinear cost. They are expensive in one sense, and cheap in another. Like all tradeoffs, the economic balance is tipped by changing supply and demand. With a proper theoretical understanding of functional space and time, we can properly understand the economics of centralization and decentralization.
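A back-of-envelope illustration (mine, not a model from the paper) of why the cost side scales superlinearly: the number of potential channels between modules grows roughly quadratically, while each team's attention budget stays fixed.

```python
def interconnection_channels(n_modules):
    """Potential pairwise communication channels between n modules:
    n(n-1)/2, which grows superlinearly in n."""
    return n_modules * (n_modules - 1) // 2

for n in (5, 10, 50, 200):
    print(f"{n} modules -> up to {interconnection_channels(n)} channels to maintain")
```

Whether the semantic gains are worth that quadratic overhead is precisely the supply-and-demand tradeoff described above.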
The case for semantic spaces
Is there a bigger picture here for understanding what goes on in large scale networks? Information technology is pushing us in the direction of ever more pervasive networks. What if space itself could be smart? What would that mean? It is certainly a dream of the Internet of Things. That is the question behind the semantic spacetime project.
A functional spacetime, like a city, a supermarket, or even a warehouse, is a partially ordered collection of labelled things. Today, by applying information technology at the scale of society, we have already enabled better use of space and resources. Smart taxi apps like Uber, self-optimizing robot-driven warehouses, smart stores, smart roads, the personalization of living spaces, and many other applications allow what we consider to be inanimate infrastructure to become quasi-intelligent and adaptive.
What about the brain itself? Surely the most famous `smart' network we know of. Our understanding of the brain is limited, but what we do know is that it is a network of structures, with functional behaviour. The columnar structures in the neo-cortex suggest a partial serialization and hierarchy in the structure of brain activity. The cortex fits the description of a regular array of cellular locations that interact with a complex network. If we can understand self-organizing arrays of things, with agents (cells, micro-organisms, robots, software agents), then can such networks also think? What does that even mean? After the brain, the human immune system is one of the most intelligent networks at work in biology. Cephalization in biology is a case of smart adaptation of a functional mass of cells into a differentiated, modular, semantic space. The formation of organs, like the eye, the heart, and of course the brain, could all potentially be explained based on a common set of principles.
When transport is a negligible cost, the centralization of specialist agents brings advantages, including economies of scale: the collation of experience and learning leads to the possibility of agile innovation and improvement, as well as queue management at light load. But economies of scale are an illusion, generated by the inefficiency of centralization to begin with. When there are dependencies involved, total efficiency may start out even poorer, but the compounded economies (recoveries) of scale can give the appearance of superlinear scaling improvements, as observed by Bettencourt et al. But we should not forget that, if we could fully distribute a capability to all agents equally (although we would lose collated learning and innovation), scaling would always be linear.
Back from superscale city
Apart from the dramatic potential of a functional and even `thinking' spacetime, i.e. to unify smart human networks with technology, the simple lesson that we can draw from a quite independent study of cities is that the geometry of a space affects its functionality and performance in ways that have not previously been considered in IT. It is at the heart of the accounting of how value is created, and how costs are measured.
Promise theory provides a framework for asking some of these questions, bridging computer science and physics, and showing an approach that could potentially turn coarse grained observation into a design methodology.
We need to start thinking about all kinds of interactions (physical, technological, and social) with the same tools that we use in the natural sciences. Promise theory shows how this can be done, in terms of superagency and dependence. In short, we need to talk about scaling.
This article introduces a paper on scaling, On the scaling of functional spaces, from smart cities to cloud computing, inspired by work on the scaling of cities.
MB Oslo Thu Feb 11 15:42:14 CET 2016