When does a physical system compute?

Many discussions of computation begin with a tempting slogan: the brain computes, cells compute, proteins compute, perhaps even the universe computes. The slogan is attractive because every physical system evolves from one state to another. But if every physical evolution is already computation, then the word “computation” loses its function. It no longer tells us what is special about a computer, a nervous system, or an information-processing device.

Horsman et al.’s paper, When Does a Physical System Compute?, resists this temptation. It asks not whether physical systems evolve, but under what conditions physical evolution can be treated as computation. Their answer is that computation requires a representation relation: a stable way to connect abstract states with physical states. A physical system computes only when an abstract input can be encoded into the system, transformed by its physical dynamics, and decoded as an abstract output.

This distinction is necessary because lawful physical evolution is not enough. A stone, a soap bubble, a protein, a photon, and a laptop all undergo physical change, but they are not computers in the same sense. To call all of them computers merely because they evolve would make computation indistinguishable from physical process itself. Computation requires more: the physical dynamics must be used to stand in for an abstract evolution.

In physics, we often represent a physical system by an abstract model. The model is not the physical system itself; it is a structured description that allows prediction, comparison, and reasoning. Computation reverses and extends this relation. Instead of using an abstract model to predict a physical process, we use a physical process to obtain the result of an abstract transformation. In this sense, computation is neither purely physical nor purely abstract. It is a structured coupling between the two.

A voltage level, for example, has no computational meaning by itself. A high voltage may stand for 1, and a low voltage may stand for 0, but only within a stable representational scheme. Outside that scheme, the voltage is just a physical quantity. The same applies to a dial position, a punch card, a transistor state, or any other physical configuration. Physical states become computational states only when they are made to correspond reliably to abstract distinctions.
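The point can be made concrete with a small sketch. Nothing below comes from the paper; the threshold value and the function name are illustrative assumptions. The decoding function is the representational scheme: without it, 4.8 is just a number of volts.

```python
# Hypothetical decoding scheme: a voltage is a bit only relative to a convention.
def decode_voltage(v: float, threshold: float = 2.5) -> int:
    """Map a physical voltage to an abstract bit under a fixed scheme.

    The 2.5 V threshold is an assumed convention, not a property of the
    voltage itself; a different scheme could just as well invert the mapping.
    """
    return 1 if v >= threshold else 0

print(decode_voltage(4.8))  # high voltage reads as 1 under this scheme
print(decode_voltage(0.3))  # low voltage reads as 0
```

The physical quantity is unchanged by the scheme; what changes is whether the quantity counts as a computational state at all.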

The minimal structure of physical computation can therefore be represented as:

abstract input
    ↓ encoding
physical initial state
    ↓ physical evolution
physical final state
    ↓ decoding
abstract output

This encode–evolve–decode structure is the core of the paper. Without encoding and decoding, there is only physical evolution. With them, physical evolution can be used to produce an abstract result.
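The encode–evolve–decode structure can be sketched as code. This is a toy model of my own, not the paper's formalism: the "substrate" (collections of unit charges that merge) and the function names are invented for illustration. The abstract evolution here is addition, obtained by running the physical dynamics rather than by adding directly.

```python
# Toy encode–evolve–decode pipeline: abstract addition realized by a
# made-up physical substrate (collections of unit charges that merge).

def encode(n: int) -> list[float]:
    # Encoding: represent the abstract number n as n unit charges.
    return [1.0] * n

def evolve(state_a: list[float], state_b: list[float]) -> list[float]:
    # "Physical dynamics": the two charge collections merge into one.
    return state_a + state_b

def decode(state: list[float]) -> int:
    # Decoding: read the abstract output off the final physical state.
    return round(sum(state))

# Abstract path: 3 + 4 = 7.  Physical path: encode, evolve, decode.
result = decode(evolve(encode(3), encode(4)))
print(result)  # 7, matching the abstract evolution
```

Strip away `encode` and `decode` and what remains is only a physical process; the pipeline is what makes it usable as a computation.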

The authors summarize this point in one central definition:

Physical computing is the use of a physical system to predict the outcome of an abstract evolution.

This definition prevents the collapse of computation into physical change. A computer is not simply a physical object that evolves. It is a physical system whose dynamics can be trusted as a substitute for carrying out an abstract process directly. This trust depends on a sufficiently reliable theory of the device: we must know how its physical states represent abstract states, how its dynamics transform those states, and how the output should be read.

This is why ordinary digital computers are reliable. Their physical substrates are highly engineered. We have strong theories of transistors, circuits, voltage levels, logic gates, memory, and composition. The relation between physical state and abstract symbol is not invented after the result appears. It is stabilized by design.

The same criterion also clarifies why many claims about unconventional computation are delicate. A soap film may settle into a low-energy shape. A slime mould may form efficient paths. A biological or chemical system may produce interesting input-output behavior. But resemblance to a solution is not enough. To call such a system a computer, we need a stable way to encode the problem, a reliable account of how the physical dynamics transform that encoding, and a stable way to decode the result.

This is where the paper’s criticism of post hoc computation becomes important. Given enough freedom, an observer can often choose a representation after observing a physical process so that the process appears to have computed something. But if the final state is needed in order to decide what computation was performed, then the system has not predicted the result of an abstract evolution. The interpretation has merely been fitted afterward. The system has not computed; it has only been described.

The relevant distinction must therefore be available before the process runs. Computation requires a stable operational relation between abstract distinctions and physical distinctions. It cannot depend on retrospective redescription.

The authors formalize this idea using commuting diagrams. There are two paths from an initial abstract state to a final abstract state. One path remains abstract: apply the abstract rule and obtain the abstract result. The other path passes through the physical system: encode the abstract input, let the physical system evolve, and decode the final physical state. A physical system computes when these two paths arrive at the same result, within an acceptable tolerance.

This tolerance matters. Real computation, like real experiment, does not require perfect identity between abstract prediction and physical outcome. It requires sufficient reliability for the purpose at hand. A classroom calculator, a flight-control system, a quantum computer, and a biological computing substrate may each require different standards. But in every case, there must be a stable enough correspondence between physical evolution and abstract evolution.
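The commuting-diagram test with tolerance can be sketched directly. The "device" below is an imagined noisy analog squarer; the noise bound and tolerance are illustrative assumptions, not values from the paper. The check compares the purely abstract path against the path through the (simulated) physical system.

```python
import random

def abstract_square(x: float) -> float:
    # Abstract path: apply the abstract rule directly.
    return x * x

def physical_square(x: float) -> float:
    # Physical path through an imagined noisy analog device.
    encoded = x                            # encoding: value as a signal level
    evolved = encoded * encoded            # device dynamics
    noise = random.uniform(-0.01, 0.01)    # assumed physical imperfection
    return evolved + noise                 # decoding reads a noisy level

def commutes(x: float, tolerance: float = 0.05) -> bool:
    # The diagram commutes when both paths agree within the tolerance.
    return abs(abstract_square(x) - physical_square(x)) <= tolerance

print(commutes(3.0))  # True: the noise bound is well inside the tolerance
```

Tightening the tolerance below the device's noise level would make the diagram fail to commute: the same physical object would no longer count as computing the square, which is exactly the purpose-relative standard described above.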

The broader message is that computation is not a free label for any physical process. Complexity is not enough. Lawful dynamics are not enough. Similarity to a solution is not enough. A system computes only when its physical distinctions are made to correspond to abstract distinctions in a stable, usable, and predictive way.

The value of Horsman et al.’s paper is that it makes this boundary explicit. It explains why a laptop computes, why a calculator computes, why a physical simulator may compute, and why a randomly evolving physical object does not compute merely because it can be described afterward. Computation begins when physical evolution is placed inside a stable representational scheme.

Reference

Horsman, D., Stepney, S., Wagner, R. C. & Kendon, V. When does a physical system compute? Proc. R. Soc. A 470, 20140182 (2014).



