National Science Foundation
Speech Is Physical and Mental >> Exploring The Interface
[Video: ultrasound analysis of a speaker's tongue motions]
In this movie clip, an ultrasound captures a native English speaker's tongue motions as he pronounces the nonsense word "zgomu." Because word-initial "zg" sequences are not permitted in English, he has not mastered the coordination needed to pronounce the unfamiliar sequence. Instead, he inserts a vowel-like sound between the z and the g, producing "zegomu." Ultrasound shows that improper tongue motions frequently cause this type of mispronunciation.

Credit: Lisa Davidson. This movie was recorded in the lab of Dr. Maureen Stone at the University of Maryland, Baltimore.

Researchers are exploring a range of topics at the physical/mental interface of speech, including:

  • How are speech signals processed in the brain?
  • How does our native language shape the way we perceive speech and limit the range of sounds we can hear and reproduce?
  • Why do native speakers pronounce words differently from non-native speakers?

Speech Perception

Speech perception is one of our most valuable social skills. It is also a remarkably flexible process, as demonstrated by our ability to understand speech in noisy environments and speech produced with a wide variety of accents. Both factors affect the signals the brain must decode. Exactly how does the human brain process language, and does that process differ from how it handles other sounds? NSF-supported researcher Josef Rauschecker of Georgetown University is working to answer these and related questions.

In previous research, Rauschecker discovered that separate areas in primate brains process different kinds of sounds; a particular region, for example, handles sounds used for communication. To determine whether human brains are organized similarly, Rauschecker is using non-invasive magnetic resonance imaging (MRI) to observe which parts of volunteers' brains are stimulated by speech. MRI measures increases and decreases in blood flow, which indicate changes in brain activity. Through this work, he has located areas of the brain that respond to language but not to control sounds of similar complexity, revealing new details about the organization of the brain's hearing and language-processing regions.


In this sound file, a native English speaker repeats words produced by a Slovak speaker. She cannot accurately reproduce words containing non-native consonant sequences such as /vd/ and /zn/ ("vdalay" and "znasho"). However, she has no trouble pronouncing the middle word, "zegano," because a vowel follows the initial z, a pattern that is acceptable in English.

Credit: Lisa Davidson, New York University.

Cues From Foreign Words

Few people master the accent and pronunciation of foreign words, often despite years of exposure and training. Why do foreign language learners, and those who "borrow" phrases from foreign languages, pronounce words differently from native speakers? NSF-sponsored researcher Lisa Davidson of New York University thinks language-specific differences in the timing patterns of speech production are part of the answer.

To pronounce words properly, speakers must learn the timing of speech in a particular language, including the duration of consonant and vowel sounds and the coordination between adjacent sounds. Some mispronunciations likely involve perception: a person's brain is "tuned" to recognize the familiar, in this case the subtleties and patterns of the native language. Non-native speakers often cannot hear, or will misinterpret, the differences between sounds in foreign languages.

For example, non-native combinations such as the /vl/ of "Vlasic" present a substantial challenge for native English speakers. Because the /vl/ sequence is not found at the beginnings of English words, they may compensate by dropping a consonant or inserting a vowel sound between the first two consonants, pronouncing the word as "Velasic." In other cases, according to Davidson, non-native speakers know how a foreign word should be pronounced but cannot achieve the proper timing and coordination because they have not mastered the tongue motions used by native speakers. She has verified this by comparing the tongue motions of native and foreign-language speakers using ultrasound.

Davidson is also interested in how speakers incorporate "borrowed" foreign words into their own language. She is investigating how mispronounced words get passed on through generations, and how this influences language change.
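This vowel-insertion repair can be pictured as a simple rewrite rule: when a word begins with a consonant cluster the language does not allow, insert a vowel after the first consonant. The short Python sketch below illustrates the idea only; the cluster list and the choice of "e" as the inserted vowel are simplifying assumptions made for illustration, not a model from Davidson's research.

    # Illustrative sketch of vowel epenthesis in disallowed word-initial
    # clusters. ILLEGAL_ONSETS is a hypothetical example set, not an
    # exhaustive inventory of English phonotactics.
    ILLEGAL_ONSETS = {"zg", "vd", "zn", "vl"}

    def epenthesize(word: str, vowel: str = "e") -> str:
        """Repair a disallowed onset by inserting a vowel after the
        first consonant, the way "zgomu" becomes "zegomu"."""
        if word[:2] in ILLEGAL_ONSETS:
            return word[0] + vowel + word[1:]
        return word

    print(epenthesize("zgomu"))   # zegomu
    print(epenthesize("vlasic"))  # velasic, like "Vlasic" -> "Velasic"
    print(epenthesize("zegano"))  # zegano (legal onset, left unchanged)

Real speakers are not applying a lookup table, of course; the sketch only captures the observable pattern, in which illegal clusters are repaired while words like "zegano" pass through intact.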

Language and Linguistics: A Special Report