Research Article  |   March 15, 2018
The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences
 
Author Affiliations & Notes
  • Margaret A. Koeritzer
    Program in Audiology and Communication Sciences, Washington University in St. Louis, MO
  • Chad S. Rogers
    Department of Otolaryngology, Washington University in St. Louis, MO
  • Kristin J. Van Engen
    Department of Psychological and Brain Sciences and Program in Linguistics, Washington University in St. Louis, MO
  • Jonathan E. Peelle
    Department of Otolaryngology, Washington University in St. Louis, MO
  • Disclosure: The authors have declared that no competing interests existed at the time of publication.
  • Correspondence to Dr. Jonathan Peelle: jpeelle@wustl.edu
  • Editor-in-Chief: Frederick (Erick) Gallun
  • Editor: Daniel Fogerty
Article Information
Journal of Speech, Language, and Hearing Research, March 2018, Vol. 61, 740-751. doi:10.1044/2017_JSLHR-H-17-0077
History: Received February 27, 2017; Revised August 28, 2017; Accepted September 20, 2017
 

Purpose The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension.

Method We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously.
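For readers unfamiliar with how speech-in-babble conditions like these are typically constructed, the sketch below mixes a sentence recording with multitalker babble at a fixed signal-to-noise ratio. It is a minimal illustration, not the authors' procedure: the function name, the use of NumPy, and the mean-power definition of SNR are assumptions, since the abstract does not describe the mixing method.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio equals `snr_db`
    (in dB), then add it to `speech`. Both inputs are 1-D float waveforms at
    the same sampling rate. Illustrative sketch only."""
    # Tile or truncate the babble so it covers the full sentence.
    if len(noise) < len(speech):
        reps = int(np.ceil(len(speech) / len(noise)))
        noise = np.tile(noise, reps)
    noise = noise[: len(speech)]

    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)

    # SNR (dB) = 10 * log10(P_speech / P_noise); solve for the noise gain.
    target_ratio = 10 ** (snr_db / 10)
    gain = np.sqrt(speech_power / (noise_power * target_ratio))

    return speech + gain * noise

# Example corresponding to the two noise conditions in the study:
# mixed_plus15 = mix_at_snr(sentence_wav, babble_wav, snr_db=15)
# mixed_plus5  = mix_at_snr(sentence_wav, babble_wav, snr_db=5)
```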

Results Recognition memory (indexed by d′) was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise.
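As a point of reference for the d′ index reported above, the sketch below computes d′ = Z(hit rate) − Z(false-alarm rate) for an old/new recognition test. The log-linear correction for hit or false-alarm rates of 0 or 1 and the SciPy-based implementation are assumptions made for illustration; the abstract does not state the authors' exact computation.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = Z(hit rate) - Z(false-alarm rate) for an old/new recognition test.
    A log-linear (+0.5) correction keeps extreme rates from producing
    infinite z scores; the correction used in the article is not specified."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical example with 40 "old" and 40 "new" test sentences:
# print(d_prime(hits=32, misses=8, false_alarms=10, correct_rejections=30))
```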

Conclusions Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences.

Supplemental Materials https://doi.org/10.23641/asha.5848059

Acknowledgments
The work reported here was supported by NIH Grant R01DC014281 and the Dana Foundation, both awarded to Jonathan E. Peelle. We thank Antje Heinrich for providing the multitalker babble. We are grateful to Brianne Noud, Sarah McConkey, Carol Iskiwitch, and Nina Punyamurthy for their help in data collection and to our volunteers for their participation.
Margaret A. Koeritzer, Chad S. Rogers, Kristin J. Van Engen, and Jonathan E. Peelle designed the study. Margaret A. Koeritzer collected the data. Chad S. Rogers performed the statistical analyses. Margaret A. Koeritzer and Jonathan E. Peelle drafted the manuscript with input from Chad S. Rogers and Kristin J. Van Engen. All authors discussed the results and implications and provided critical input to the manuscript at all stages.