Speech brain-computer interfaces (BCIs) aim to restore communication by decoding speech-related neural signals directly from the brain. These systems target individuals who have lost the ability to speak due to conditions such as amyotrophic lateral sclerosis, brainstem stroke, or severe spinal cord injury, offering the potential for naturalistic, high-bandwidth communication that surpasses letter-by-letter BCI spelling approaches.

Early speech BCI research established that neural signals associated with speech production, including articulatory movements and phonemic representations, could be recorded from speech motor cortex and decoded into text or synthesized speech. Both intracortical microelectrode arrays and high-density electrocorticography (ECoG) grids have been used to capture the distributed neural activity underlying speech, with decoding vocabularies growing from individual phonemes to sentences of increasing complexity.
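To illustrate one common way frame-by-frame neural classifier outputs are turned into a phoneme sequence, the sketch below applies greedy CTC-style decoding: pick the most likely label per time frame, then collapse repeats and remove blanks. This is a minimal, hypothetical example; the phoneme inventory and posterior probabilities are made up for illustration, not drawn from any real decoder.

```python
# Hypothetical sketch: greedy CTC-style collapse of per-frame phoneme
# posteriors into a phoneme sequence. All numbers here are illustrative.
BLANK = "_"

def ctc_greedy_decode(frame_probs, labels):
    """Pick the most likely label per frame, then collapse repeats and blanks."""
    best = [labels[max(range(len(p)), key=p.__getitem__)] for p in frame_probs]
    decoded, prev = [], None
    for sym in best:
        if sym != prev and sym != BLANK:
            decoded.append(sym)
        prev = sym
    return decoded

labels = [BLANK, "HH", "EH", "L", "OW"]  # toy phoneme inventory
frames = [  # 8 frames of toy posterior probabilities over the 5 labels
    [0.10, 0.70, 0.10, 0.05, 0.05],  # HH
    [0.10, 0.60, 0.20, 0.05, 0.05],  # HH (repeat, collapsed)
    [0.80, 0.05, 0.05, 0.05, 0.05],  # blank
    [0.10, 0.05, 0.70, 0.10, 0.05],  # EH
    [0.10, 0.05, 0.05, 0.70, 0.10],  # L
    [0.80, 0.05, 0.05, 0.05, 0.05],  # blank (separates the two L's)
    [0.10, 0.05, 0.05, 0.70, 0.10],  # L
    [0.10, 0.05, 0.05, 0.10, 0.70],  # OW
]
print(ctc_greedy_decode(frames, labels))  # → ['HH', 'EH', 'L', 'L', 'OW']
```

The blank symbol lets the decoder emit the same phoneme twice in a row (as in "hello"), which a naive collapse of repeated labels would otherwise merge.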

Key technical challenges include achieving real-time decoding speeds that match natural speech rates, handling the variability of attempted speech in paralyzed users who cannot produce overt movements, and building robust language models that integrate neural and linguistic information. The convergence of high-channel-count neural recording, deep learning sequence models, and large language models is accelerating progress toward clinical speech BCI systems.
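One standard way to integrate neural and linguistic information is shallow fusion: candidate words proposed by the neural decoder are rescored by adding a weighted language-model log-probability. The sketch below is a simplified, hypothetical illustration of that idea; the candidate scores, the LM probabilities, and the `lm_weight` parameter are all invented for the example.

```python
import math

# Hypothetical sketch of shallow fusion: combine a neural decoder's
# word log-probs with weighted language-model log-probs. All scores
# and the lm_weight value are illustrative.
def rescore(candidates, lm_logprob, lm_weight=0.5):
    """candidates: {word: neural log-prob}; returns the best word under fusion."""
    fused = {w: lp + lm_weight * lm_logprob.get(w, math.log(1e-6))
             for w, lp in candidates.items()}
    return max(fused, key=fused.get)

# The neural decoder slightly prefers "their", but in the context
# "over ___" a language model strongly prefers "there".
neural = {"their": math.log(0.40), "there": math.log(0.35), "they're": math.log(0.25)}
lm = {"there": math.log(0.80), "their": math.log(0.10), "they're": math.log(0.10)}
print(rescore(neural, lm))  # → there
```

The LM weight trades off trust in the neural evidence against the linguistic prior; in practice this fusion runs inside a beam search over partial sentences rather than over single words.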