Research Article  |   October 17, 2017
Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid
 
Author Affiliations & Notes
  • Gerald Kidd, Jr.
    Department of Speech, Language, and Hearing Sciences and Hearing Research Center, Boston University, MA
  • Disclosure: The author has declared that no competing interests existed at the time of publication.
  • Presented at the ASHA Research Symposium, November 19, 2016, Philadelphia, PA
  • Correspondence to Gerald Kidd, Jr.: gkidd@bu.edu
  • Editor-in-Chief: Frederick (Erick) Gallun
  • Editor: Karen Helfer
Article Information
Journal of Speech, Language, and Hearing Research, October 2017, Vol. 60, 3027-3038. doi:10.1044/2017_JSLHR-H-17-0071
History: Received February 22, 2017; Revised July 28, 2017; Accepted July 31, 2017
 

Purpose Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired “target” talker while ignoring the speech from unwanted “masker” talkers and other sources of sound. This listening situation forms the classic “cocktail party problem” described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed.

Method This approach, embodied in a prototype “visually guided hearing aid” (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources.
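The VGHA's actual beamforming and eye-tracking hardware are described in the full article; purely to illustrate the concept of a gaze-steered acoustic "look direction," the following Python sketch implements a generic delay-and-sum beamformer for a uniform linear microphone array whose steering angle is supplied by an eye-gaze estimate. The array geometry, sample rate, and function names are illustrative assumptions, not the VGHA's implementation.

```python
# Minimal sketch of gaze-steered delay-and-sum beamforming (illustrative only;
# not the VGHA's actual signal processing). Assumes a uniform linear array.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s


def delay_and_sum(mic_signals, fs, mic_spacing, gaze_angle_deg):
    """Steer a uniform linear microphone array toward gaze_angle_deg.

    mic_signals    : (n_mics, n_samples) array of microphone signals
    fs             : sample rate in Hz
    mic_spacing    : spacing between adjacent microphones in meters
    gaze_angle_deg : steering angle (0 deg = broadside), e.g., from an eye tracker
    """
    n_mics, n_samples = mic_signals.shape
    theta = np.deg2rad(gaze_angle_deg)

    # Per-microphone delays that align a plane wave arriving from the gaze direction.
    delays = np.arange(n_mics) * mic_spacing * np.sin(theta) / SPEED_OF_SOUND

    # Apply the (fractional) delays as phase shifts in the frequency domain.
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(mic_signals, axis=1)
    phase_shifts = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])

    # Average the time-aligned channels to form the beamformed output.
    return np.fft.irfft(np.mean(spectra * phase_shifts, axis=0), n=n_samples)


if __name__ == "__main__":
    # Example: steer a 4-mic array toward a talker at 30 degrees, as if that
    # angle had been reported by an eye tracker (placeholder noise input).
    fs = 16000
    mics = np.random.randn(4, fs)
    enhanced = delay_and_sum(mics, fs, mic_spacing=0.02, gaze_angle_deg=30.0)
```

In a real device the gaze angle would be updated continuously, so the steering delays would be recomputed frame by frame rather than once per signal, as in this simplified sketch.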

Results The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based “spatial filter” operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally cannot use this spatial filter as effectively as listeners with normal hearing, especially in conditions high in “informational masking.” The VGHA enhances auditory spatial attention for speech-on-speech masking and improves the signal-to-noise ratio in conditions high in “energetic masking.” Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations.

Conclusions Listeners with normal hearing and listeners with sensorineural hearing loss may both benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement in listening conditions where the target source changes frequently over time, as often occurs during turn-taking in a conversation.

Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601621

Acknowledgments
The author acknowledges NIH/NIDCD Grant Awards DC013286 and DC004545 and AFOSR Award FA9550-16-1-0372 for supporting portions of the work described here. The author is grateful to Christine R. Mason for her contributions to this work and to the preparation of this article. He also is grateful to his other colleagues for their collaborations on much of the research described here. The Research Symposium is supported by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under Award Number R13DC003383. The content is solely the responsibility of the author and does not necessarily represent the official views of the National Institutes of Health.