Friday, January 31, 2014

New Study Reveals How the Brain Recognizes Speech Sounds

January 30, 2014. UC San Francisco (UCSF) researchers are reporting a detailed account of how speech sounds are identified by the human brain, offering unprecedented insight into the basis of human language.
The finding, they said, may add to our understanding of language disorders, including dyslexia.
Scientists have long known where in the brain speech sounds are interpreted, but little has been discovered about how this process works.
The UCSF team reports that the brain does not respond to the individual sound segments known as phonemes – such as the b sound in “boy” – but is instead exquisitely tuned to detect simpler elements, which are known to linguists as “features.”
This organization may give listeners an important advantage in interpreting speech, the researchers said, since the articulation of phonemes varies considerably across speakers, and even in individual speakers over time.
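To make the distinction concrete: a phoneme can be thought of as a bundle of articulatory features such as place, manner and voicing. The short Python sketch below illustrates this linguistic idea only (the simplified feature inventory is an assumption for illustration, not the study's analysis code); it shows how two phonemes can differ by a single feature:

    # Toy illustration: phonemes as bundles of articulatory features.
    # The feature inventory here is deliberately simplified.
    PHONEME_FEATURES = {
        "b": {"plosive", "bilabial", "voiced"},   # as in "boy"
        "p": {"plosive", "bilabial"},             # "b" minus voicing
        "d": {"plosive", "alveolar", "voiced"},
        "s": {"fricative", "alveolar"},
        "z": {"fricative", "alveolar", "voiced"},
    }

    def shared_features(p1: str, p2: str) -> set[str]:
        """Return the articulatory features two phonemes have in common."""
        return PHONEME_FEATURES[p1] & PHONEME_FEATURES[p2]

    # "b" and "p" differ only in voicing, so a feature-tuned listener
    # needs to detect just one elemental cue to tell them apart.
    print(shared_features("b", "p"))  # {'plosive', 'bilabial'}
    print(shared_features("b", "z"))  # {'voiced'}

On this view, a listener tuned to features rather than whole phonemes can tolerate speaker-to-speaker variation, because each phoneme is recognized from a combination of simpler cues rather than from one fixed acoustic template.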
The work may add to our understanding of reading disorders, in which printed words are imperfectly mapped onto speech sounds. But because speech and language are a defining human behavior, the findings are significant in their own right, said UCSF neurosurgeon and neuroscientist Edward F. Chang, MD, senior author of the new study.
“This is a very intriguing glimpse into speech processing,” said Chang, associate professor of neurological surgery and physiology. “The brain regions where speech is processed had been identified, but no one has really known how that processing happens.”

Breaking Down Speech into Acoustic Features

Although we usually find it effortless to understand other people when they speak, parsing the speech stream is an impressive perceptual feat.
Speech is a highly complex and variable acoustic signal, and our ability to instantaneously break that signal down into individual phonemes and then build those segments back up into words, sentences and meaning is remarkable.
Previous studies have analyzed brain responses to just a few natural or synthesized speech sounds, but the new research employed naturally spoken sentences containing the complete inventory of phonemes in the English language.
To capture the very rapid brain changes involved in processing speech, the UCSF scientists gathered their data from neural recording devices that were placed directly on the surface of the brains of six patients as part of their epilepsy surgery.

The patients listened to a collection of 500 unique English sentences spoken by 400 different people while the researchers recorded from a brain area called the superior temporal gyrus (STG; also known as Wernicke’s area), which previous research has shown to be involved in speech perception. The utterances contained multiple instances of every English speech sound.
Many researchers have presumed that brain cells in the STG would respond to phonemes. But the researchers found instead that regions of the STG are tuned to respond to even more elemental acoustic features that reference the particular way that speech sounds are generated from the vocal tract. “These regions are spread out over the STG,” said Nima Mesgarani, PhD, who was a postdoctoral fellow in Chang’s laboratory. “As a result, when we hear someone talk, different areas in the brain ‘light up’ as we hear the stream of different speech elements.”
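As a rough sketch of the tuning described here (the site names and feature assignments below are hypothetical; the real cortical organization is far richer), one can picture each STG recording site as a detector for one feature, so that a stream of phonemes activates a changing set of sites:

    # Hypothetical sketch: each recording site responds to one acoustic
    # feature, so a phoneme stream "lights up" different sites over time.
    # Site names and the feature assignments are illustrative assumptions.
    SITE_TUNING = {
        "site_A": "plosive",
        "site_B": "fricative",
        "site_C": "voiced",
    }

    PHONEME_FEATURES = {
        "b": {"plosive", "voiced"},
        "s": {"fricative"},
        "z": {"fricative", "voiced"},
    }

    def active_sites(phoneme: str) -> list[str]:
        """List the feature-tuned sites that respond to a phoneme."""
        feats = PHONEME_FEATURES[phoneme]
        return [site for site, feat in SITE_TUNING.items() if feat in feats]

    for ph in ["b", "s", "z"]:  # a toy phoneme stream
        print(ph, "->", active_sites(ph))
    # b -> ['site_A', 'site_C']
    # s -> ['site_B']
    # z -> ['site_B', 'site_C']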

Reference:

The study was reported in the journal Science Express.
