The brain processes speech in parallel with other sounds

Jordana Cepelewicz writes:

Hearing is so effortless for most of us that it’s often difficult to comprehend how much information the brain’s auditory system needs to process and disentangle. It has to take incoming sounds and transform them into the acoustic objects that we perceive: a friend’s voice, a dog barking, the pitter-patter of rain. It has to extricate relevant sounds from background noise. It has to determine that a word spoken by two different people has the same linguistic meaning, while also distinguishing between those voices and assessing them for pitch, tone and other qualities.

According to traditional models of neural processing, when we hear sounds, our auditory system extracts simple features from them that then get combined into increasingly complex and abstract representations. This process allows the brain to turn the sound of someone speaking, for instance, into phonemes, then syllables, and eventually words.

But in a paper published in Cell in August, a team of researchers challenged that model, reporting instead that the auditory system often processes sound and speech simultaneously and in parallel. The findings suggest that how the brain makes sense of speech diverges dramatically from scientists’ expectations, with the signals from the ear branching into distinct brain pathways at a surprisingly early stage in processing — sometimes even bypassing a brain region thought to be a crucial stepping-stone in building representations of complex sounds.

The work offers hints of a new explanation for how the brain can unbraid overlapping streams of auditory stimuli so quickly and effectively. Yet in doing so, the discovery doesn’t just call into question more established theories about speech processing; it also challenges ideas about how the entire auditory system works. Much of the prevailing wisdom about our perception of sounds is based on analogies to what we know about computations performed in the visual system. But growing evidence, including the recent study on speech, hints that auditory processing works very differently — so much so that scientists are starting to rethink what the various parts of the auditory system are doing and what that means for how we decipher rich soundscapes. [Continue reading…]
