Neural manifolds describe the low-dimensional geometric structures that emerge when large populations of neurons are recorded simultaneously. Even with hundreds or thousands of neurons recorded, their collective firing patterns tend to lie on smooth, lower-dimensional surfaces within the high-dimensional space of possible activity patterns. These manifolds reflect the underlying computational constraints of neural circuits: the coordinated dynamics that govern how populations of neurons represent and transform information during behaviour.
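This low-dimensional structure can be illustrated with a minimal simulation (a sketch, not a model of any real dataset): a hypothetical population of 100 neurons whose activity is driven by just 3 shared latent factors plus independent noise. PCA on the resulting activity matrix concentrates nearly all the variance in the first 3 components, the signature of a low-dimensional manifold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: 100 neurons driven by 3 shared latent
# factors (the "manifold") plus independent per-neuron noise.
n_neurons, n_latents, n_samples = 100, 3, 2000
latents = rng.standard_normal((n_samples, n_latents))   # latent time series
loading = rng.standard_normal((n_neurons, n_latents))   # neuron loadings
activity = latents @ loading.T + 0.1 * rng.standard_normal((n_samples, n_neurons))

# PCA via SVD: variance should concentrate in the first 3 components.
centered = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var_explained = s**2 / np.sum(s**2)
print(f"variance in top 3 PCs: {var_explained[:3].sum():.3f}")
```

Real recordings are noisier and the latent dimensionality is unknown in advance, but the same logic (compare variance explained against the number of components) underlies most empirical estimates of manifold dimensionality.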

In the context of brain-computer interfaces, neural manifolds have become a foundational concept for decoder design and optimisation. Research has shown that constraining BCI decoders to operate within the intrinsic manifold of motor cortex activity — rather than treating each neuron independently — can substantially improve decoding accuracy and information transfer rates. Manifold-aligned decoders exploit the natural covariance structure of neural populations, reducing noise and enabling more robust, stable control signals. This approach has proved particularly valuable for intracortical BCIs, where day-to-day neural drift can shift individual neuron tuning while preserving the overall manifold geometry.
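A minimal sketch of a manifold-aligned decoder, under assumed synthetic data (the latent dimensionality, noise levels, and the tie between behaviour and two latent factors are all illustrative choices, not properties of any published BCI): first estimate the manifold with PCA, then fit a linear readout in manifold coordinates rather than on raw per-neuron activity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: decode 2-D cursor velocity from 100 neurons whose
# shared variability lives in a 5-dimensional manifold.
n_neurons, n_latents, n_samples = 100, 5, 1000
latents = rng.standard_normal((n_samples, n_latents))
loading = rng.standard_normal((n_neurons, n_latents))
activity = latents @ loading.T + rng.standard_normal((n_samples, n_neurons))
velocity = latents[:, :2]   # behaviour assumed to track 2 of the latents

# Step 1: estimate the intrinsic manifold with PCA.
centered = activity - activity.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projected = centered @ vt[:n_latents].T   # activity in manifold coordinates

# Step 2: fit a linear readout from manifold coordinates to velocity.
weights, *_ = np.linalg.lstsq(projected, velocity, rcond=None)
pred = projected @ weights
r2 = 1 - np.sum((velocity - pred) ** 2) / np.sum((velocity - velocity.mean(0)) ** 2)
print(f"decoding R^2 in manifold coordinates: {r2:.3f}")
```

Projecting onto the top components before fitting the readout is what gives the decoder its robustness: per-neuron noise that lies outside the manifold is discarded, and a neuron whose tuning drifts can be re-expressed in the same latent coordinates without refitting from scratch.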

Beyond BCI applications, the study of neural manifolds has reshaped understanding of motor planning, decision-making, and learning. Population-level analyses using dimensionality reduction techniques such as PCA, GPFA, and latent factor models have revealed that neural trajectories during movement preparation follow stereotyped paths through manifold space, and that learning new skills involves reorganising activity within or across existing manifold dimensions rather than creating entirely new patterns of coordination. These insights bridge computational neuroscience and neural engineering, providing both a theoretical framework and practical tools for interpreting large-scale neural recordings.
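The trajectory analyses described above can be sketched as follows, again on synthetic data (the two "conditions", their latent paths, and all sizes are assumed for illustration): average activity across trials within each condition, then project the condition averages onto the top principal components to obtain one low-dimensional trajectory per condition.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical reaching dataset: 2 conditions x 20 trials x 50 time bins,
# 60 neurons driven by smooth, condition-specific 2-D latent paths.
n_cond, n_trials, n_bins, n_neurons = 2, 20, 50, 60
t = np.linspace(0, 1, n_bins)
latent_paths = np.stack([
    np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1),
    np.stack([np.sin(4 * np.pi * t), t], axis=1),
])                                                     # (2, 50, 2)
loading = rng.standard_normal((n_neurons, 2))
trials = latent_paths @ loading.T                      # (2, 50, 60)
trials = trials[:, None] + 0.5 * rng.standard_normal((n_cond, n_trials, n_bins, n_neurons))

# Trial-average within condition, then project onto the top 2 PCs.
mean_act = trials.mean(axis=1)                         # (2, 50, 60)
flat = mean_act.reshape(-1, n_neurons)
flat = flat - flat.mean(axis=0)
_, _, vt = np.linalg.svd(flat, full_matrices=False)
trajectories = (flat @ vt[:2].T).reshape(n_cond, n_bins, 2)
print(trajectories.shape)   # one 2-D trajectory per condition
```

Methods such as GPFA and latent factor models refine this picture by smoothing and inferring latents on single trials rather than trial averages, but the trial-averaged PCA projection above remains the most common first look at population trajectories.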