Research Article  |   February 01, 1995
Linking Visual and Kinesthetic Imagery in Lipreading Instruction
 
Author Affiliations & Notes
  • Carol Lee De Filippo
    Department of Communication Research National Technical Institute for the Deaf Rochester Institute of Technology Rochester, NY
  • Donald G. Sims
    Department of Communication Research National Technical Institute for the Deaf Rochester Institute of Technology Rochester, NY
  • Linda Gottermeier
    Department of Audiology National Technical Institute for the Deaf Rochester Institute of Technology Rochester, NY
Article Information
Journal of Speech, Language, and Hearing Research, February 1995, Vol. 38, 244-256. doi:10.1044/jshr.3801.244
History: Received September 16, 1993; Accepted September 22, 1994
 

The purpose of this study was to replicate van Uden’s (1983) finding that watching oneself speak improves lipreading of visually confusable nonsense words. Specifically, this replication focused on an older group of subjects whose educational experience varied widely in the emphasis given to spoken communication. Four groups of 12 young-adult subjects who are deaf participated in evaluating two aspects of training: (a) source of video feedback (self or trainer), and (b) timing of feedback (during speech production or after speech production). Mean posttest results indicated significantly increased accuracy in identifying items that had been trained. The group that viewed self-speech after speech-production practice also demonstrated generalization to test items that were not trained. On the combined list of both trained and untrained items, both groups that viewed their own speech achieved significant gains compared to pretest scores, but those that viewed the trainer’s speech did not. Response time (RT) during pre- and posttesting was measured using a computer-generated waveform display to calculate the interval between stimulus offset and response onset. Results are reported for 13 subjects with ≥ 50% speech intelligibility for words in sentences. Although there were no differences attributable to training conditions, there was an overall increase in the regularity of the identification responses after training (measured by the standard deviation of RTs) and a generalization of the improvement to the untrained items. The results of this study substantiate the beneficial effects of multisensory feedback by practicing lipreading of one’s own speech production. This finding appears to apply even to young-adult subjects who are deaf and whose habituated speech patterns may be quite distinct from those of talkers with normal hearing.
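The response-time measure described above — the interval between stimulus offset and response onset, with response regularity summarized as the standard deviation of the RTs — can be sketched in a few lines. This is a hypothetical illustration only; the timestamps, function name, and values below are invented for the example and are not data or code from the study.

```python
# Hypothetical sketch of the RT computation described in the abstract:
# RT = response onset time minus stimulus offset time, and response
# regularity = the standard deviation of those intervals.
from statistics import mean, stdev

def response_times(stimulus_offsets, response_onsets):
    """Pair each stimulus offset with its response onset (seconds)."""
    return [on - off for off, on in zip(stimulus_offsets, response_onsets)]

# Illustrative timestamps in seconds (not data from the study).
offsets = [1.00, 4.00, 7.00, 10.00]
onsets = [1.80, 4.65, 7.90, 10.75]

rts = response_times(offsets, onsets)
print(f"mean RT = {mean(rts):.3f} s, SD = {stdev(rts):.3f} s")
```

A smaller SD across trials after training would correspond to the increased regularity of identification responses reported for the posttest.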

Acknowledgments
Part of this work was supported through an agreement between the Rochester Institute of Technology and the U.S. Office of Education. Grateful acknowledgment is made to Vincent J. Samar and Sr. Margaret Walubuka for their assistance in analyzing the data generated in this study.