Research Article | April 01, 2007
Auditory Speech Recognition and Visual Text Recognition in Younger and Older Adults: Similarities and Differences Between Modalities and the Effects of Presentation Rate
 
Author Affiliations & Notes
  • Larry E. Humes
    Indiana University, Bloomington
  • Matthew H. Burk
    Indiana University, Bloomington
  • Maureen P. Coughlin
    Indiana University, Bloomington
  • Thomas A. Busey
    Indiana University, Bloomington
  • Lauren E. Strauser
    Indiana University, Bloomington
  • Contact author: Larry E. Humes, Department of Speech and Hearing Sciences, Indiana University, Bloomington, IN 47405. E-mail: humes@indiana.edu.
Article Information
Journal of Speech, Language, and Hearing Research, April 2007, Vol. 50, 283-303. doi:10.1044/1092-4388(2007/021)
History: Received November 23, 2005; Revised June 12, 2006; Accepted August 11, 2006
 

Purpose: To examine age-related differences in auditory speech recognition and visual text recognition performance for parallel sets of stimulus materials in the auditory and visual modalities. In addition, the effects of varying the presentation rate of the stimuli in each modality were investigated in each age group.

Method: A mixed-model design was used in which 3 independent groups (13 young adults with normal hearing, 10 elderly adults with normal hearing, and 16 elderly hearing-impaired adults) listened to auditory speech tests (a sentence-in-noise task, time-compressed monosyllables, and a speeded-spelling task) and viewed visual text-based analogs of the auditory tests. All auditory speech materials were presented so that the amplitude of the speech signal was at least 15 dB above threshold through 4000 Hz.
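To make the time-compressed condition concrete, the following is a minimal sketch, not the authors' stimulus-generation procedure, of how a recorded monosyllable could be time-compressed to a faster presentation rate while preserving pitch. It assumes the librosa and soundfile libraries; the file names and the 2x compression factor are hypothetical and are not taken from the study.

```python
# Illustrative sketch only: time-compressing a speech recording.
# File names and the compression factor are hypothetical.
import librosa
import soundfile as sf

y, sr = librosa.load("word.wav", sr=None)            # load at the native sample rate
y_fast = librosa.effects.time_stretch(y, rate=2.0)   # play back 2x faster (half the duration)
sf.write("word_compressed.wav", y_fast, sr)          # save the time-compressed token
```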

Results: Analyses of the group data revealed that, when baseline levels of performance were used as covariates, the only significant group difference was that both elderly groups performed worse than the young group on the auditory speeded-speech tasks. Analysis of the individual data, using correlations, factor analysis, and linear regression, was generally consistent with the group data and revealed significant, moderate correlations of performance for similar tasks across modalities, but stronger correlations across tasks within a modality. This suggests that performance on these tasks was mediated both by a common underlying factor, such as cognitive processing, and by modality-specific processing.
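As a rough illustration of the type of group analysis described above (a comparison of groups with baseline performance entered as a covariate, plus cross-task correlations), the sketch below uses the pandas and statsmodels libraries with hypothetical column names (group, baseline, speeded_score, and the task-score columns); it is not the authors' analysis code.

```python
# Illustrative sketch only: group comparison with a baseline covariate
# and cross-modality correlations. Data file and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("scores.csv")  # hypothetical file: one row per participant

# Group effect on a speeded-speech score, controlling for baseline performance
model = smf.ols("speeded_score ~ C(group) + baseline", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Correlations between parallel auditory and visual tasks
print(df[["auditory_speeded", "visual_speeded",
          "auditory_context", "visual_context"]].corr())
```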

Conclusion: Performance on the measures of auditory processing of speech examined here was closely associated with performance on parallel measures of the visual processing of text obtained from the same participants. Young and older adults demonstrated comparable abilities in the use of contextual information in each modality, but older adults, regardless of hearing status, had more difficulty with fast presentation of auditory speech stimuli than young adults. There were no differences among the 3 groups with regard to the effects of presentation rate for the visual recognition of text, at least for the rates of presentation used here.

Acknowledgments
Portions of the data presented in this article were presented by Larry E. Humes at “Aging and Speech Communication: An International and Interdisciplinary Research Conference” at Indiana University, Bloomington, in October 2005 and at the 2005 annual meeting of the American Speech-Language-Hearing Association in San Diego, California.
This work was supported, in part, by Research Grant R01 AG08293 awarded by the National Institute on Aging to Larry E. Humes. The authors express their gratitude to Paul Bauer for his help in developing some of the test materials and procedures used in this study.