Communication Problems and Intelligent Systems
Relative to the broad subject of communication, there seem to be problems at three levels. Thus it seems reasonable to ask, serially:
- LEVEL A. How accurately can the symbols of communication be transmitted? (The technical problem.)
- LEVEL B. How precisely do the transmitted symbols convey the desired meaning? (The semantic problem.)
- LEVEL C. How effectively does the received meaning affect conduct in the desired way? (The effectiveness problem.)
- Shannon & Weaver
Shannon’s information theory can be understood as a complete and elegant solution to the first of these questions. By reducing communication to the reliable transmission of symbols through a noisy channel, it deliberately abstracts away the problems of meaning and use, treating information as statistical structure alone. This abstraction is not a limitation but a design choice—it renders Level A closed, formal, and solvable. However, Levels B and C cannot be defined within the same framework. Signals do not come with meaning attached, and representations do not specify their own use. Both depend on how a system interacts with its environment—what distinctions it can make and what actions it can take. It is therefore not surprising that the central challenges in artificial intelligence and neuroscience arise precisely at these remaining levels. Systems do not merely transmit signals; they must determine what those signals are about and how they should guide action. Understanding intelligence thus requires moving beyond transmission, toward a theory of how structure becomes meaning, and how meaning becomes use. Recent approaches in AI and neuroscience can be seen as attempts to introduce such principles, ranging from predictive world models and embodied interaction to free energy minimization.
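To see why Level A is closed and solvable in purely statistical terms, consider a minimal sketch (an illustration of my own, not drawn from Shannon's text) that computes the capacity of a binary symmetric channel, C = 1 - H(p). Every quantity in it is a statistic of the symbols; what the bits stand for never enters the calculation.

```python
import math

def binary_entropy(p: float) -> float:
    """Entropy (in bits) of a Bernoulli(p) source: H(p) = -p*log2(p) - (1-p)*log2(1-p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(flip_prob: float) -> float:
    """Capacity (bits per channel use) of a binary symmetric channel with
    crossover probability flip_prob: C = 1 - H(flip_prob).
    A purely statistical quantity, indifferent to what the bits mean."""
    return 1.0 - binary_entropy(flip_prob)

# A channel that flips about 11% of its bits still carries roughly half a bit per use,
# whether those bits encode pixels, phonemes, or stock prices.
print(bsc_capacity(0.11))  # ~0.50
```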
Recent work on world models, most prominently advocated by Yann LeCun, proposes that intelligent systems should learn internal representations that capture the structure of the external world through prediction. By training models to anticipate future observations or latent states, this approach treats predictable regularities as the foundation of meaningful representation. In the context of communication, this can be understood as an attempt to address Level B: meaning is approximated by those aspects of the signal that can be consistently inferred from data. However, while prediction provides a powerful internal principle for organizing structure, it does not determine which of these structures should matter for behavior. A system may model large portions of its environment with high fidelity, yet lack any principled way to select what is relevant for action, leaving Level C unresolved.
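A toy version makes the point concrete. The sketch below (a deliberately simple linear model, not LeCun's actual architecture; the dynamics, dimensions, and learning rate are illustrative choices) learns a world model from prediction error alone and approximately recovers the hidden dynamics, while remaining silent on which predictions should drive behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "world": a hidden 2-D rotation generating a stream of observations.
theta = 0.1
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
x = np.array([1.0, 0.0])
obs = []
for _ in range(500):
    x = A_true @ x + 0.01 * rng.normal(size=2)
    obs.append(x.copy())
obs = np.array(obs)

# "World model": a linear predictor of the next observation,
# trained on nothing but its own prediction error.
A_hat = np.zeros((2, 2))
lr = 0.05
for _ in range(20):                               # a few passes over the sequence
    for t in range(len(obs) - 1):
        err = A_hat @ obs[t] - obs[t + 1]
        A_hat -= lr * np.outer(err, obs[t])       # gradient step on 0.5 * ||err||^2

print(np.round(A_hat, 2))   # approximately recovers the generating rotation
# The regularity is captured (a stab at Level B), but nothing here selects
# which predictions should matter for action, so Level C is untouched.
```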
The free energy principle, developed by Karl Friston, offers a more unified framework in which perception and action are treated as parts of a single inferential process. By minimizing variational free energy, a system maintains consistency between its internal model and sensory inputs while actively selecting actions that reduce uncertainty. This effectively couples representation and behavior, providing a bridge from Level B to Level C: structures become meaningful insofar as they contribute to the system’s persistence. Yet this formulation depends on prior assumptions about which states are expected or desirable. In this sense, the problem of meaning is not eliminated but encoded into these priors, leaving open the question of how such preferences are determined.
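The coupling rests on a standard identity: variational free energy decomposes as F[q] = D_KL[q(s) || p(s|o)] - log p(o), so minimizing F over q simultaneously pulls q toward the true posterior and bounds surprise. The toy discrete computation below (an illustrative check of that decomposition, not Friston's full active-inference machinery; the prior and likelihood are arbitrary numbers) verifies it numerically.

```python
import numpy as np

# Toy generative model over a discrete hidden state s and one observation o.
p_s = np.array([0.7, 0.3])                       # prior p(s)
p_o_given_s = np.array([[0.9, 0.1],              # p(o | s): rows are states,
                        [0.2, 0.8]])             # columns are observations

o = 1                                            # an observed outcome
p_joint = p_s * p_o_given_s[:, o]                # p(o, s) for this o
p_o = p_joint.sum()                              # model evidence p(o)
posterior = p_joint / p_o                        # exact posterior p(s | o)

def free_energy(q):
    """Variational free energy F[q] = E_q[log q(s) - log p(o, s)]."""
    return np.sum(q * (np.log(q) - np.log(p_joint)))

q = np.array([0.5, 0.5])                         # an arbitrary approximate posterior
kl = np.sum(q * np.log(q / posterior))           # D_KL[q || p(s|o)]

# F equals the KL divergence to the true posterior plus the surprise -log p(o),
# so minimizing F over q both fits the posterior and bounds surprise.
print(np.isclose(free_energy(q), kl - np.log(p_o)))        # True
print(np.isclose(free_energy(posterior), -np.log(p_o)))    # True: F is minimized at the posterior
```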
A different line of thought, often grouped under the umbrella of NeuroAI and associated with researchers such as Terrence Sejnowski, emphasizes that intelligence must be understood in the context of a system’s interaction with its environment. Rather than treating representation as a self-contained object, this perspective highlights embodiment, action, and multi-scale dynamics as essential components of cognition. From the standpoint of communication, this shifts the focus from signals themselves to the processes through which signals are used in closed-loop interaction, bringing Level C to the forefront. Meaning is no longer defined internally, but emerges from the coupling between system and world. However, while this perspective clarifies where meaning arises, it does not yet provide a precise criterion for determining which structures, among many possible interactions, become functionally significant.
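The shift of emphasis can be illustrated with a minimal closed-loop sketch (purely illustrative; the target, sensor noise, and policy below are arbitrary choices): the sensory signal by itself fixes nothing, but coupled to a policy that acts back on the world, it acquires a function.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal perception-action loop: the "meaning" of the sensed value is not in
# the signal alone, but in how it is used to choose actions that change what
# gets sensed next.
target = 0.0
state = 5.0                                      # environmental variable the agent can influence

def sense(state):
    return state + 0.1 * rng.normal()            # noisy sensor

def act(observation):
    return -0.5 * (observation - target)         # policy: push the state toward the target

for _ in range(30):
    obs = sense(state)
    action = act(obs)
    state = state + action + 0.05 * rng.normal() # the world responds to the action

print(round(state, 2))   # near the target: the loop, not the signal, does the work
```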
Finally, the notion of mortal computation, proposed by Geoffrey Hinton, challenges the assumption that representations are stable entities that persist independently of the physical processes that generate them. Instead, computation is viewed as inherently transient, with representations existing only as momentary patterns within ongoing activity. This reframing shifts attention away from static symbols toward the dynamics of computation itself, suggesting that meaning cannot be separated from the processes that realize it. In terms of communication, this weakens the idea that Level B can be grounded in fixed representational structures, emphasizing instead the role of temporal dynamics. However, while this view dissolves the notion of immutable symbols, it does not by itself specify how transient patterns become meaningful or how they guide behavior, leaving both Level B and Level C only partially addressed.
Taken together, these approaches do not so much compete as illuminate different aspects of the same underlying gap. World models show how structure can be learned, but not how it is selected; the free energy principle couples representation and action, but relies on prior assumptions about what should be preserved; NeuroAI situates meaning in interaction, but leaves its criteria implicit; and mortal computation reframes representation as process, without specifying how such processes acquire significance. What remains unresolved across all of them is not how structure is represented, predicted, or even acted upon, but how a system determines which structures are worth using in the first place. This suggests that the central problem of Levels B and C is not one of representation or inference alone, but of constraint: meaning and use arise only for those structures that can be distinguished and stabilized under the limitations of a system’s interface, noise, and resources. In this light, the difficulty of understanding intelligence may stem not simply from the complexity of the brain, but from treating it in isolation. Neural activity, taken on its own, does not uniquely determine what it is about or why it matters; these emerge through interaction with the world. Intelligence, in this sense, may not reside in the brain alone, but in the relation it maintains with its environment.