Artificial reasoning in non-intelligent systems

When spacetime itself becomes smart (part 2)

Smart materials, smart houses, smart cities: are these things related to artificial intelligence? I argue that they are, but that our notion of AI, shaped by the desire to ape human capabilities, is currently too narrow in scope and scale to understand the connection. Studying systems that we don't normally consider to be smart is the route to understanding what smart means, and understanding how intelligence scales is the key to understanding its scope and limitations.

This essay discusses aspects of Semantic Spacetimes (III): The Structure of Functional Knowledge Representation and Artificial Reasoning.

Cells and circus tricks

My favourite example of an intelligent dumb system is the immune system. In spite of consisting of a plethora of seemingly uncoordinated cells, it is in fact a complex reasoning engine based on an ecosystem of almost completely distributed, fluid-phase cells. It has no wires, no CPU, and yet it can decide what is friend and what is foe, albeit with a few bugs from time to time (no pun intended); it can remember past foes, it can work with the brain to manage the body's larger ambient respiratory complex; it can mass-produce silver bullets that single out specific molecular signatures (even ones that have never existed before in the history of the world). And, most importantly, like all superior intelligences, it knows when to shut up.

Even more startling are the brain and nervous system, which we all associate with intelligence. At one scale it appears as a central cauliflower of raw processing, consuming 10-20% of our energy. At another scale it is a pretty homogeneous grey goo that powers our sensory and introspective thinking processes, without substantial differentiation of functional parts. Magic! It regulates a process of learning and adaptation to an external environment. We have only snippets of understanding about how it works; yet, such is our respect for this mysterious central structure that we cannot imagine intelligence without something similar.

The narrative that the brain is a computer (and often vice versa) has led to much mythology and many misunderstandings. It is hard to deny that it computes, but it is nothing at all like what we think of as a computer. It can simulate algorithms, and perform reasoning in trains of logic, but there are no `logic cells'. The capacity for logic is not hardwired into it in the same way as in a digital computer. It works in a fundamentally different way, because it combines hardwiring with `virtual' emergence. When we talk of increasing the smartness of dumb things, such as in building `smart houses' and `smart cities', our thoughts turn to this particular modern prejudice: that it is our dalliance with computers, software, and central brains that makes things smart.

But let's not be too sure. The modern narrative of Turing machines and von Neumann architectures has dazzled us into thinking that there is intelligence in software (not just behind it). AI researchers have expended much effort to make computers play games, and perform circus tricks. But do we really consider algorithmic searching and arithmetic calculation intelligent? Clearly some intelligences do, but I am not sure I agree. Rather, I suspect that they reflect more of our own human capacity, as children of the industrial age, to anthropomorphize machinery, and conversely to set aside our own intelligence on demand in order to mimic machinery. Since the industrial revolution, our entire notion of who we are and what we are has taken on a narrative of being part of the machinery of economic production.

Smart or well trained?

The learning of procedures and other spacetime patterns (playing games, by cribbing from lookups, simulating distributions, or emulating processes) does not feel to me worthy of the name intelligence. These tricks are surely manifestations of the tools of intelligence. They add up to a battery of capabilities that feed into the behaviours that intelligence works with. But we don't respect a student who simply memorizes a book and is able to repeat it on demand; playback (even with random access) is not our idea of smart, and the indirect counter-visual way in which neural networks store their memories should not distract us from seeing their basically mechanical nature.

Generalized adaptability (problem solving) would be a step further than this, in which `intelligence' reveals its capacity for innovation and imagination. That aspect of intelligence surprises us not only in brains, but also in immune systems, and even in the most primitive of slime mould behaviours. It may also be interpreted algorithmically, up to a point. However, it turns out that even dumb spatial structures, like `smart materials', exhibit these qualities. The idea of swarm intelligence, in colonies of individually dumb insects or animal herds, is equally well popularized. I call all such structures semantic spacetimes (no matter what material phase they embody). By this criterion, almost anything might as well be called intelligent (not just `smart').

The point of departure, for a more satisfying and recognizable intelligence, more likely lies in the integration of what we think of as emotions with introspective intent, i.e. motivation, feeding back into all of the techniques, in real time. The kind of intelligence we believe we possess reaches its potential through contextual prioritization, mixing sensory perception and introspection. The kind of emotions which humans feel need not be the only kind (ours require a very sophisticated nervous system indeed to express the ghostly sensations we feel in an emotional response, with many more sensors than any modern infrastructure possesses), but an analogous function is easily imagined. The gap has to do with scale and sensory diversity.

Pattern recognition currently takes centre stage in AI research, based on the success of Artificial Neural Networks and so-called Deep Learning. Pattern recognition is nonetheless but one stage in an AI input system, not AI itself (like the development of a vision system to rival the human eye and its pre-cortical processing). A bigger neural network would not suddenly become fully intelligent, without new systemic processes, because its spacetime structure may not have the boundary conditions or processes to support the feedbacks that enable unsupervised adaptation.

Identifying the structures and processes by which self-adaptation occurs, in a scalable way, is where I believe a proper investigation of scaling is still needed. As a physicist by training, I come back to the subject of scaling frequently, because it is so poorly understood in computer science. The limitations of space and time are at the heart of scaling, and the juxtaposition of semantics and dynamics (see my book In Search of Certainty) defines the behaviours of a system. With this in mind, since 2014 I have spent time developing a basic understanding of how spacetime and scaling issues impact on knowledge representation and the key functions of artificial reasoning, in its most general form. I hope this may be of some use in unravelling the issues.

Patterns pin spacetime scales

Specially evolved organs, with singular focus (like eyes, ears, or trained pattern recognition networks), dominate the discussion of AI today. They solve a crucial part of the puzzle. Focusing on these technologies is functionally efficient (as a silo strategy), but also rigid, and the results necessarily interface to the world at a fixed spacetime scale (the scale of the sensor).

Would it suffice to increase intelligence by boosting the power of these partial cognitive faculties, with faster processing, more memory, and so on? This is an argument offered by AI commentators, appealing to Moore's law as the limitation on AI. However, it seems far from clear. If you double the size of an ear, it will not respond to the same frequency range. If you double its processing/sampling rate, you can increase its resolution, but you may only double the expense of processing with no new information (the Nyquist sampling theorem). If you double the size of an eye, it will not necessarily see twice as far. The same is true of actuators or limbs, used to apply the results of learning. Such specialized appendages have evolved, in nature, to a fixed scale, with a narrow focus, in order to adapt to their expected environment. A single eye cannot tell the difference between a small cat and a very large cat that is farther away. The same applies to capabilities. If facilitated by a sudden imbalance of power, a scale mismatch could also be potentially dangerous, because throwing unfair resources at the single-minded and self-interested behaviours of a mere component is not necessarily good for the whole (imagine using a chain-saw to cut your steak, or a bacterial strain with an imbalanced advantage).
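As a small numerical aside (a toy Python sketch of my own, not an example from the original argument), the Nyquist point can be made concrete: a band-limited signal sampled above its Nyquist rate is already fully determined, so doubling the sampling rate doubles the number of samples to process while the recovered content stays the same.

    import numpy as np

    # A 5 Hz tone is fully determined by any sampling rate above its
    # Nyquist rate of 10 Hz. Doubling the rate doubles the sample count
    # (the processing cost), but the dominant frequency recovered from
    # the data is identical -- no new information is gained.
    f_signal = 5.0   # Hz
    duration = 2.0   # seconds

    for fs in (20.0, 40.0):   # both rates already exceed the Nyquist rate
        t = np.arange(0.0, duration, 1.0 / fs)
        x = np.sin(2 * np.pi * f_signal * t)
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        peak = freqs[np.argmax(spectrum)]
        print(f"fs={fs:5.1f} Hz: {len(x):3d} samples, dominant frequency ~ {peak:.1f} Hz")

Both runs report the same 5 Hz peak; only the cost has doubled.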

Of intelligence, we expect not only increased capacity, but also a wider appreciation of holistic considerations, including safety, and a prioritization of competing interests, replete with ideas like ethics, empathy, and even altruism (networked self-interest, in an evolutionary selfish-gene picture).

Suppose we try to transform our understanding of intelligence to the scale of a house or a city. The system now spans processes that we do not easily see, because they happen at a scale that is too small or too large for our human senses. Each of those can also be optimized, but such optimization may in fact sometimes be harmful to a person or an ant, and to the environment. So what is smart for a single person, or a partial process, is not necessarily smart for the larger system.

Currently we speak of smart cities as information sharing exercises, complementary to our human intelligence, but we could also imagine a smart city to mean a city that autonomously behaves in a self-interested adaptive manner, reasoning in response to its environment, and indeed to other cities. Would this be intelligent? From the earlier examples, it seems plausible that we might extend the notion of intelligence to such a case.

In my work on semantic spaces, I have tried to ask the question: how could we understand any space, any material, or any city as a scaled version of some entity that we believe exhibits intelligent behaviour? Would that make sense? In a series of semi-formal studies, based on a model of semantically active space and time, it is possible to see that there are definite parallels (see Semantic Spacetimes III) between, say, cities and brains. However, as we scale up the size, the functionality is affected, scaled complexity is reduced, and what we perceive as intelligent behaviour peters out. Nevertheless, it remains common in the world of Information Technology (especially since the branding of `Elastic Scaling') to assume that scaling will have no detrimental impact on functionality.

How intelligence, function, and computation `scale'

In the real world, our intuition of scaling is well developed. We don't expect the inflation of size to occur with constant semantics. There is always some penalty. We know this best from looking at the animal kingdom, where, say, a blue whale does not behave like a very big dolphin. It is much slower and its capabilities are very different, in spite of its very similar anatomy. Driving a cart is not like driving a car is not like driving a bus, because the parts do not scale in relative proportion. We say that the semantics are not scale invariant (see my blog The Making of a Software Wind Tunnel).

These considerations are obvious to us in the physical world, but in the world of computer science we are dazzled by unreality, hypnotized perhaps by the narrative of creating our own worlds. We don't always try to understand functional scaling. IT thinks in terms of queues, yet twice the number of computers does not necessarily give you twice the result. IT still holds a mystery for us: we can't see the moving parts, so we believe that anything is possible. But this is not the case. IT happens in spacetime, and spacetime semantics are what we use to construct systems. Instead of throwing `more power to the shields', we have to develop our understanding of the behaviours of the architectural substrate or semantic space, in order to predict scaling. This is what I have set out to explore.
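One standard way to make this concrete is Amdahl's law (my illustration, not a model taken from this essay): if any fraction of a task cannot be parallelized, adding machines yields diminishing returns, and the speedup saturates.

    # Amdahl's law: if a fraction s of the work is inherently serial, the
    # speedup on n processors is 1 / (s + (1 - s) / n), which saturates at 1/s.
    def speedup(n_processors: int, serial_fraction: float) -> float:
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

    for n in (1, 2, 4, 8, 16):
        print(f"{n:2d} processors -> {speedup(n, serial_fraction=0.1):.2f}x")
    # With 10% serial work: 2 processors give ~1.82x, 16 give only ~6.40x,
    # and no number of processors can exceed 10x.

Queueing for the shared, serial parts of the system only makes the picture worse than this idealized bound.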

In my work on semantic spaces, I ask how the semantics of spacetime are exploited to build learning, knowledge-representing, reasoning systems, to come a step closer to understanding how systems, and thence intelligence, may scale. This, in turn, allows us to ask (in full seriousness) the question: to what extent can a smart material, or a smart city, actually perform the functions of an intelligent entity? This high-level question has been the goal of my detailed exploratory notes on semantic spacetimes.

Is Moore's law the relevant scaling law?

When the big names talk about the future of AI, they often argue in terms of computational capacity, as if intelligence were mainly a problem of brute force. Intelligence is discussed as if it were a batch process to be number-crunched, and as if it could be scaled simply by using more processors. But this is not straightforward. It does not make scaling sense.

In fact, our notion of intelligence is pinned to a very particular scale, by the need for it to interact with the world at a scale we (the human judges) understand. Our expectation that AI will deal with the same issues that we do is based entirely on the idea that it will interact at the same scale as humans do. Nevertheless, we want it to work orders of magnitude faster, using orders of magnitude more data. But, as pointed out above, scaling does not work like that. It may just end up costing twice as much. The problem with simply connecting agents together into a network is that an intelligence built for scale S will not be an accurate facsimile of any other scale, from the viewpoint of its members. If you don't scale everything by the same amount, it is not the same system.

More than Moore is Metcalfe's law
In network value we trust...

The scaling of networks is potentially quadratic in costs and benefits, and with sparse infrastructure, we know from studying cities that all kinds of scaling laws can emerge to reflect different functional relationships (see my work on the Santa Fe city studies). The bottom line is that none of these laws is exponential in growth potential, because they are pinned by spacetime interaction scales, not by computation. I believe that this is an important consideration that computer science needs to confront.
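To see the shapes side by side (a sketch of my own; the exponents 0.85 and 1.15 are commonly quoted approximations for city scaling, not figures from this essay), compare Metcalfe's quadratic count of potential links with the sub- and super-linear power laws reported for cities. All are polynomial in the number of interacting agents; none is exponential.

    # Compare how different scaling laws grow with the number of agents n.
    def metcalfe(n):
        return n * (n - 1) / 2     # potential pairwise links (quadratic)

    def city_infrastructure(n):
        return n ** 0.85           # sublinear: roads, cables (approximate exponent)

    def city_social_output(n):
        return n ** 1.15           # superlinear: wages, patents (approximate exponent)

    for n in (10, 100, 1000, 10000):
        print(f"n={n:>6}: linear={n:>6}  links={metcalfe(n):>12.0f}  "
              f"infra={city_infrastructure(n):>8.0f}  social={city_social_output(n):>10.0f}")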

Should smart behaviour be centralized (like a brain) or decentralized (like an immune system or society)? I considered what this means in The Brain Horizon. Society's role has emerged, since our hunter-gatherer past, to act as a stabilizing reservoir for norms and for knowledge, not least for the continuity of natural language. Natural language can be thought of as a very particular sensory input channel, highly tokenized, and quite unlike vision, whose supervised training is buffered by the societal memory for each new generation. Languages are the most powerful bridge we have between the mind and the world around it, because they can represent the results of cognition and reasoning in a simplified and compressed form, then transmit this simplified experience to others. Language is not equivalent to thought, but it is a stunning invention for expanding the reach of intelligence, which is essential to the way we reason and rationalize. The usefulness of language to reasoning makes it sensible to ask whether any great intelligence could emerge and evolve without communicating with a sufficient number of peers to bring robustness and diversity to its thinking.

What if humans were the sensors?

If intelligence is an individual, brain-centric viewpoint, with a few benefits from linking in networks, what might be the complement of that intelligence? Stimulation? Enablement? For humans, it is the interaction between our nervous system and our environment. If a city is smart, at the scale of the city, what of its inhabitants? Is it the propensity for pattern recognition, rather than independent agency, that we really want from `smart systems'? Then we, too, could still be a part of the system, enhancing rather than supplanting our human and societal intelligence. For intelligence is useless without its muse.

Many thoughts and ideas about this are being developed in the area of Smart Cities. They include issues like the capacity for mixing of ideas, i.e. for innovation, but conversely the need for diversity and isolation to allow ideas to gestate and diverge. Even though very smart, capable, but narrow agents are economically attractive for industrial production, they are not good for aiding innovation and evolution. The separation of concerns is an economic imperative, not an intelligent one. Smart should not be incapable of adaptation. It must embrace sufficient diversity too, which arises from dissolving isolation and partitioning. There is a link to the CAP hypothesis here.

My own work on `dumb' systems goes back to CFEngine, which has a rudimentary kind of cognition and feedback, qualifying it as an adaptive system with emergent reasoning. This made it robust in pursuit of its simplistic goals, but it also meant that users did not always agree with its decisions! Smart behaviour is in the eye of the beholder!

Will we recognize it, will it recognize us?

The idea that the emergence of a super-intelligence through AI could spiral out of control and ride roughshod over humanity continues to fascinate. Aside from the obvious research challenge, and exploratory curiosity, we need to find a good ethical answer for pursuing AI goals. Heaven forbid that weaponization of AI (for high-speed economic warfare, or for cyber warfare) could be the principal motivation. Could, for instance, a smart city reach a point at which it started to adapt to goals that did not favour its occupants?

On the subject of intelligence, and superiority, we stumble into all kinds of ethical questions, from slavery to eugenics, that can't be avoided. Obviously, the building of a massive, superior, singular brain is neither good for the future of humanity, whether in a house, a company, a city, or a country, nor a particularly efficient way of constructing functional systems. The poor AI may not appreciate its superior solitude (finding Tinder unsatisfactory), and may fail to develop altogether, raised alone in a jungle of us chimps. Its interactions with the world would not resemble ours, so why would it behave like us?

Unless the scales were similar or coupled, there is no reason to suppose that we would even interact strongly with an artificial intelligence, whether it be a city or a nano-scale adaptive material. If the scales did interact, then the pinning of that scale by sensory interaction would inevitably limit the growth of that intelligent entity. Once again, scale could be the limiting factor between dumb and smart.

So there is a final question: we can definitely improve cognitive faculties in a multitude of ways, but how do we know that we have not already reached the limit of what intelligence can do? Well, that seems very unlikely, but it seems clear that some limit there must be. I suspect that a semantic spacetimes approach will eventually be able to tell us.

Sat Jul 23 16:01:20 CEST 2016

Thanks to Mike Loukides for reading and commenting on a draft of this.

See also

Video of my talks about semantic spaces:

  1. Functional spaces - 25 yr anniversary keynote (see list selection on page)
  2. Brains, Societies, and Semantic Spaces (related to blog post).
  3. Thinking in Promises for the Cyborg Age - Percolate Transition conference NYC 2015
  4. InfoQ on Computer Immunology and Configuration Management (from CraftConf 2016)

See also the work on semantic spaces, and workspaces.