Researchers are exploring a range of topics at the physical/mental interface of speech, including:
- How are speech signals processed in the brain?
- How does our native language shape the way we perceive speech and limit the range of sounds we can hear and reproduce?
- Why do native speakers pronounce words differently from non-native speakers?
Speech perception is one of our most valuable social skills. It is also a remarkably flexible process, as demonstrated by our ability to understand speech in noisy environments and across a wide variety of accents, both of which alter the speech signals our brains must decode. Exactly how does the human brain process language, and does that process differ from how it handles other sounds? NSF-supported researcher Josef Rauschecker of Georgetown University strives to answer those and related questions.
In previous research, Rauschecker discovered that separate areas in primate brains handle the processing of different sounds; a particular region, for example, processes sounds used for communication. To determine whether human brains are organized similarly, Rauschecker is using non-invasive functional magnetic resonance imaging (fMRI) to observe which parts of volunteers' brains are stimulated by speech. fMRI measures increases and decreases in brain blood flow, which indicate changes in brain activity. Through this work, he has located areas of the brain that respond to language but not to experimental control sounds of similar complexity. His work reveals new details about the organization of the brain's hearing and language-processing regions.
In this sound file, a native English speaker repeats words produced by a Slovak speaker. She cannot accurately reproduce words containing non-native consonant sequences such as /vd/ and /zn/ ("vdalay" and "znasho"). However, she has no trouble pronouncing the middle word, "zegano," because a vowel follows the initial /z/, a pattern English permits.
Credit: Lisa Davidson, New York University.
Cues From Foreign Words
Few people master the accent and pronunciation of foreign words—often despite years of input and training. Why do foreign language learners and those who “borrow” phrases from foreign languages pronounce words differently from native speakers? NSF-sponsored researcher Lisa Davidson of New York University thinks language-specific differences in the timing patterns of speech production are part of the answer.
To pronounce words properly, speakers must learn the timing of speech in a particular language, including the duration of consonant and vowel sounds and the coordination between adjacent sounds. Some mispronunciations likely involve perception: a person's brain is "tuned" to recognize the familiar, in this case the subtleties and patterns of their native language. Non-native listeners often cannot hear, or misinterpret, the differences between sounds in a foreign language.
For example, non-native combinations such as the /vl/ of "Vlasic" present a substantial challenge for native English speakers. Because the /vl/ sequence does not occur at the beginning of English words, speakers may compensate by dropping a consonant or by inserting a vowel between the first two consonants, pronouncing the word as "Velasic." In other cases, according to Davidson, non-native speakers know how a foreign word should be pronounced but cannot achieve the proper timing and coordination because they have not mastered the tongue motions used by native speakers; she has verified this by comparing the tongue motions of native and non-native speakers using ultrasound. Davidson is also interested in how speakers incorporate "borrowed" foreign words into their own language, and she is investigating how mispronounced words are passed down through generations and how this influences language change.
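The repair strategy described above, inserting a vowel to break up a cluster that English disallows at the start of a word, can be sketched as a toy rule. The following is an illustrative simplification, not the researchers' actual model: the onset inventory is a small hand-picked subset, and real phonology operates on sounds rather than letters.

```python
VOWELS = set("aeiou")

# A small, incomplete subset of two-consonant onsets English permits
# word-initially (assumption for illustration only).
LEGAL_ONSETS = {
    "bl", "br", "cl", "cr", "dr", "fl", "fr", "gl", "gr",
    "pl", "pr", "sk", "sl", "sm", "sn", "sp", "st", "sw", "tr",
}

def initial_cluster(word):
    """Return the run of consonant letters at the start of the word."""
    cluster = ""
    for ch in word.lower():
        if ch in VOWELS:
            break
        cluster += ch
    return cluster

def repair(word):
    """If the word begins with a cluster this toy grammar disallows,
    insert a vowel after the first consonant (epenthesis),
    e.g. 'vlasic' -> 'velasic'."""
    cluster = initial_cluster(word)
    if len(cluster) >= 2 and cluster[:2] not in LEGAL_ONSETS:
        return word[0] + "e" + word[1:]
    return word

print(repair("vlasic"))   # illegal /vl/ onset: vowel inserted, "velasic"
print(repair("zegano"))   # a vowel already follows the z: unchanged
print(repair("plastic"))  # legal English onset /pl/: unchanged
```

The point of the sketch is only that the borrower's output is systematic: the same licensing rule that leaves "zegano" and "plastic" untouched forces a repair on "vlasic."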