Zhichao Zhu
Institute of Science and Technology for Brain-Inspired Intelligence
Fudan University
Shanghai, China
I study the foundations of brain-inspired intelligence from a theory-driven perspective. My primary research question is how physical neural systems realize computable representations, decision-making, and learning under constraints of noise and energy. Rather than treating perception, inference, and action as separate modules, I model intelligent systems as constrained observers—physical systems whose internal representations, decisions, and learning dynamics are shaped by noise, energetic limits, and the selective accessibility of statistical structure in the world.
A central theme of my research is that meaning and functionality do not arise from raw information alone, but from selection processes that determine which statistical structures can persist and be acted upon. From this viewpoint, learning is inseparable from interaction: perception is already shaped by action, and efficient computation often relies on outsourcing complexity to environmental dynamics rather than resolving it internally. My work aims to clarify why correlation, nonlinearity, and embodiment are not implementation details, but necessary conditions for scalable and energy-efficient intelligence.
By synthesizing ideas from neuroscience, machine learning, and theoretical physics, I seek to provide a unifying conceptual framework for understanding biological and artificial learning systems—one that explains not only how systems learn, but why certain forms of learning are possible at all.
selected publications

- Toward a Free-Response Paradigm of Decision-Making in Spiking Neural Networks. Neural Computation, Jan 2025
- Stochastic Forward-Forward Learning through Representational Dimensionality Compression. arXiv preprint arXiv:2505.16649, Sep 2025
- Learning to integrate parts for whole through correlated neural variability. PLOS Computational Biology, Sep 2024