Research Article | June 01, 1999
Attention to Facial Regions in Segmental and Prosodic Visual Speech Perception Tasks
 
Author Affiliations & Notes
  • Charissa R. Lansing
    University of Illinois at Urbana-Champaign
  • George W. McConkie
    University of Illinois at Urbana-Champaign
  • Contact author: Charissa R. Lansing, PhD, Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, 901 South Sixth Street, Champaign, IL 61821.
  • Corresponding author email: crl@uiuc.edu
Article Information
Journal of Speech, Language, and Hearing Research, June 1999, Vol. 42, 526-539. doi:10.1044/jslhr.4203.526
History: Received August 12, 1998; Accepted January 8, 1999
 

Two experiments were conducted to test the hypothesis that visual information related to segmental versus prosodic aspects of speech is distributed differently on the face of the talker. In the first experiment, eye gaze was monitored for 12 observers with normal hearing. Participants made decisions about segmental and prosodic categories for utterances presented without sound. Observers spent more time looking at, and directed more gazes toward, the upper part of the talker's face when making decisions about intonation patterns than when making decisions about the words being spoken. The second experiment tested the Gaze Direction Assumption underlying Experiment 1, namely that people direct their gaze to the stimulus region containing the information required for their task. In this experiment, 18 observers with normal hearing made decisions about segmental and prosodic categories under conditions in which face motion was restricted to selected areas of the face. The results indicate that information in the upper part of the talker's face is more critical for intonation pattern decisions than for decisions about word segments or primary sentence stress, thus supporting the Gaze Direction Assumption. Proficiency in visual speech perception requires learning where to direct visual attention for cues related to different aspects of speech.

Acknowledgments
Charissa R. Lansing, Department of Speech and Hearing Science. George W. McConkie, Department of Educational Psychology and Beckman Institute for Advanced Science and Technology. Portions of this research were presented at the 130th meeting of the Acoustical Society of America, St. Louis, MO, November 1995, and at the Annual Convention of the American Speech-Language-Hearing Association, Orlando, FL, December 1995. This work was supported in part by Research Grants 1 R29 DC 022050 and 1 R03-DC01600 from the National Institute on Deafness and Other Communication Disorders, National Institutes of Health. The authors would like to thank Heather Minch for her assistance in data collection and figure preparation. The authors are grateful to L. E. Bernstein, K. W. Grant, and D. W. Massaro for their comments on an earlier version of this manuscript.