Research Article  |   April 01, 2007
Ranking Hearing Aid Input–Output Functions for Understanding Low-, Conversational-, and High-Level Speech in Multitalker Babble
 
Author Affiliations & Notes
  • King Chung
    Northwestern University
  • Mead C. Killion
    Etymotic Research; Northwestern University; Rush University; City University of New York
  • Laurel A. Christensen
    Etymotic Research and Northwestern University
  • Contact author: King Chung, who is now with the Department of Speech, Language, and Hearing Sciences, Purdue University, Heavilon Hall B32, West Lafayette, IN 47907. E-mail: kingchung@purdue.edu.
  • Mead C. Killion is no longer affiliated with Rush University. Laurel A. Christensen is no longer affiliated with Etymotic Research. Her current affiliations include GN ReSound, Glenview, IL; Northwestern University; and Rush University.
Article Information
Journal of Speech, Language, and Hearing Research, April 2007, Vol. 50, 304-322. doi:10.1044/1092-4388(2007/022)
History: Received April 27, 2004; Revised October 24, 2005; Accepted August 16, 2006
 

Purpose To determine the rankings of 6 input–output functions for understanding low-level, conversational, and high-level speech in multitalker babble, without manipulation of the volume control, for listeners with normal hearing, flat sensorineural hearing loss, and mildly sloping sensorineural hearing loss.

Method Peak clipping, compression limiting, and 4 wide dynamic range compression (WDRC) input–output functions were compared in a repeated-measures design. Interactions among the compression characteristics were minimized. Speech and babble were processed and recorded at 3 input levels: 45, 65, and 90 dB sound pressure level. Speech recognition of 3 groups of listeners (n = 6/group) was tested for speech processed by each input–output function at each input level.
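An input–output function of the kind compared here relates the input level of a signal to the output level the hearing aid delivers. The following is a rough illustrative sketch only, not the study's processing chain: the gain, kneepoint, compression ratio, and limiting threshold are hypothetical values chosen to show the general shape of a generic WDRC curve with compression limiting.

```python
def wdrc_output_level(input_db, gain_db=20.0, kneepoint_db=45.0,
                      ratio=2.0, limit_db=100.0):
    """Return output level (dB SPL) for a given input level (dB SPL).

    Below the kneepoint: linear amplification (1:1 slope plus gain).
    Above the kneepoint: compressed growth (slope = 1/ratio).
    Output is capped at a compression-limiting threshold.
    All parameter values are hypothetical, for illustration only.
    """
    if input_db <= kneepoint_db:
        output = input_db + gain_db  # linear region
    else:
        output = kneepoint_db + gain_db + (input_db - kneepoint_db) / ratio
    return min(output, limit_db)     # compression limiting

# The study's three input levels:
for level in (45, 65, 90):
    print(level, "dB SPL in ->", wdrc_output_level(level), "dB SPL out")
```

With these hypothetical settings, a 45 dB SPL input receives full linear gain, a 65 dB SPL input falls in the compressed region, and a 90 dB SPL input remains below the limiting threshold; raising the kneepoint or the limiting threshold changes which inputs are compressed or clipped, which is the dimension along which the 6 functions in the study differ.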

Results Input–output functions that made low-level speech audible and high-level speech less distorted by avoiding peak clipping or severe compression yielded higher speech recognition scores. These results are consistent with previous findings in the literature.

Conclusion WDRC functions with the low-compression-ratio region extended to a high input level, or with a high compression-limiting threshold, were best for speech recognition in babble when the hearing aid user cannot, or does not want to, manipulate the volume control. Future studies on subjective preferences for different input–output functions are needed.

Acknowledgments
This study was conducted at Northwestern University in partial fulfillment of the doctoral degree requirements for the first author. We would like to thank Etymotic Research for sponsoring the equipment and providing technical support to make this project possible and the American Academy of Audiology for the Student Investigator Grant. We would also like to thank Larry Revit for his guidance in making recordings of the speech testing materials and Greg Shaw and Dan Mapes-Riordan for programming support. In addition, thanks go to Rachael Fischer and Tarez Graban for editorial help.