Research Article  |   April 01, 2003
Talker and Lexical Effects on Audiovisual Word Recognition by Adults With Cochlear Implants
 
Author Affiliations & Notes
  • Adam R. Kaiser
    Indiana University School of Medicine, Indianapolis
  • Karen Iler Kirk
    Indiana University School of Medicine, Indianapolis
  • Lorin Lachs
    Indiana University, Bloomington
  • David B. Pisoni
    Indiana University, Bloomington
  • Contact author: Karen Iler Kirk, Department of Otolaryngology-Head and Neck Surgery, Indiana University School of Medicine, 699 West Drive, RR 044, Indianapolis, IN 46202. E-mail: kkirk@iupui.edu
Article Information
Journal of Speech, Language, and Hearing Research, April 2003, Vol. 46, 390-404. doi:10.1044/1092-4388(2003/032)
History: Received August 7, 2001; Accepted December 3, 2002
 

The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, Ra, was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
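The visual enhancement measure described above can be sketched as follows. This is an illustrative assumption, not the article's own code: the abstract states only that Ra expresses the audiovisual gain relative to the maximum possible improvement over auditory-only performance, which corresponds to the standard relative-gain formula Ra = (AV − A) / (1 − A) for proportion-correct scores.

```python
def visual_enhancement(av_score, a_score):
    """Relative visual enhancement Ra, assuming the standard
    relative-gain form: audiovisual gain (AV - A) normalized by
    the room left for improvement above auditory-only (1 - A).
    Both scores are proportions correct in [0, 1]."""
    if a_score >= 1.0:
        return 0.0  # auditory-only at ceiling; no room to improve
    return (av_score - a_score) / (1.0 - a_score)

# Hypothetical scores: 40% correct auditory-only, 70% audiovisual
print(visual_enhancement(0.70, 0.40))  # 0.5
```

With this normalization, a listener who gains 30 percentage points from a 40% baseline (Ra = 0.5) shows stronger audiovisual integration than one who gains the same 30 points from a 10% baseline (Ra ≈ 0.33), because the former closes half of the remaining distance to perfect performance.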

Acknowledgments
This work was supported by National Institutes of Health/National Institute on Deafness and Other Communication Disorders Grants K23 DC00126, R01 DC00111, and T32 DC00012. Support also was provided by Psi Iota Xi National Sorority. We thank Marcia Hay-McCutcheon and Stacey Yount for their assistance in data collection and management. We are also grateful to Luis Hernandez and Marcelo Areal for their development of the software used for stimulus presentation and data collection. Finally, we thank Sujuan Gao for her assistance with the power analyses reported here.