Research Article  |   October 01, 1994
Effects of Phonetic Context on Audio-Visual Intelligibility of French
 
Author Affiliations & Notes
  • Christian Benoît
    Institut de la Communication Parlée, URA CNRS n° 368 INPG-ENSERG/Université Stendhal, Grenoble, France
  • Tayeb Mohamadi
    Institut de la Communication Parlée, URA CNRS n° 368 INPG-ENSERG/Université Stendhal, Grenoble, France
  • Sonia Kandel
    Institut de la Communication Parlée, URA CNRS n° 368 INPG-ENSERG/Université Stendhal, Grenoble, France
  • Contact author: Christian Benoît, PhD, Institut de la Communication Parlée, URA CNRS n° 368, INPG-ENSERG/Université Stendhal, BP 25X-38040. Grenoble Cedex 9, France.
Article Information
Journal of Speech, Language, and Hearing Research, October 1994, Vol. 37, 1195-1203. doi:10.1044/jshr.3705.1195
History: Received August 16, 1993 , Accepted May 3, 1994
 

Bimodal perception leads to better speech understanding than auditory perception alone. We evaluated the overall benefit of lip-reading on natural utterances of French produced by a single speaker. Eighteen French subjects with normal hearing and vision were administered a closed-set identification test of VCVCV nonsense words consisting of three vowels [i, a, y] and six consonants [b, v, z, ʒ, r, l]. Stimuli were presented under both auditory and audio-visual conditions with white noise added at various signal-to-noise ratios. Identification scores were higher in the bimodal condition than in the auditory-alone condition, especially in situations where acoustic information was reduced. The auditory and audio-visual intelligibility of the three vowels [i, a, y], averaged over the six consonantal contexts, was evaluated as well. Two different hierarchies of intelligibility were found. Auditorily, [a] was most intelligible, followed by [i] and then by [y]; visually, [y] was most intelligible, followed by [a] and [i]. We also quantified the contextual effects of the three vowels on the auditory and audio-visual intelligibility of the consonants. Both the auditory and the audio-visual intelligibility of surrounding consonants were highest in the [a] context, followed by the [i] context and lastly the [y] context.

Acknowledgments
We gratefully acknowledge Christian Abry, Marie-Agnès Cathiard, Tahar Lallouache, Tom Sawallis, and Shelley Peery for friendly, technical, scientific, and/or linguistic support, and Rebecca Eilers and an anonymous reviewer for enlightening comments on an earlier version of this manuscript and for the many improvements they kindly brought to our article throughout the reviewing process.