Research Article | August 16, 2017
Working Memory and Speech Recognition in Noise Under Ecologically Relevant Listening Conditions: Effects of Visual Cues and Noise Type Among Adults With Hearing Loss
 
Author Affiliations & Notes
  • Christi W. Miller
    Department of Speech and Hearing Sciences, University of Washington, Seattle
  • Erin K. Stewart
    Department of Speech and Hearing Sciences, University of Washington, Seattle
  • Yu-Hsiang Wu
    Department of Communication Sciences and Disorders, University of Iowa, Iowa City
  • Christopher Bishop
    Department of Speech and Hearing Sciences, University of Washington, Seattle
  • Ruth A. Bentler
    Department of Communication Sciences and Disorders, University of Iowa, Iowa City
  • Kelly Tremblay
    Department of Speech and Hearing Sciences, University of Washington, Seattle
  • Disclosure: The authors have declared that no competing interests existed at the time of publication.
  • Correspondence to Christi W. Miller: christim@u.washington.edu
  • Editor: Frederick Gallun
  • Associate Editor: Mitchell Sommers
Article Information
Journal of Speech, Language, and Hearing Research, August 2017, Vol. 60, 2310-2320. doi:10.1044/2017_JSLHR-H-16-0284
History: Received July 12, 2016; Revised September 23, 2016; Accepted February 4, 2017

Purpose This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues.

Method Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2 measures of WM were taken: a reading span measure and the Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was completed under unaided conditions.

Results A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect.

Conclusion The contribution of WM to explaining unaided speech recognition in noise was negligible and was not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. Larger effect sizes would be needed before these findings could influence clinical practice.

Acknowledgments
We would like to thank the community practitioners for advertising our study, the participants for their time, and our funding sources for making this work possible: National Institute on Deafness and Other Communication Disorders (NIDCD) Grants R01 DC012769-04 (awarded to Kelly Tremblay and Ruth A. Bentler) and P30 DC004661 (awarded to the NIDCD Research Core Center [Rubel, PI]). The Institute for Clinical and Translational Science at the University of Iowa is supported by the National Institutes of Health (NIH) Clinical and Translational Science Award (CTSA) program, Grant U54TR001356. We also thank Ashley Moore, Kelley Trapp, Elizabeth Stangl, and Kelsey Dumanch for data collection and entry, and Xuyang Zhang for statistical assistance.