Article  |   June 01, 2011
Cross-Frequency Integration for Consonant and Vowel Identification in Bimodal Hearing
 
Author Affiliations & Notes
  • Ying-Yee Kong
    Northeastern University, Boston, MA
  • Louis D. Braida
    Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA
  • Correspondence to Ying-Yee Kong: yykong@neu.edu
  • Editor: Robert Schlauch
  • Associate Editor: Christopher Turner
Article Information
Hearing & Speech Perception / Hearing Aids, Cochlear Implants & Assistive Technology / Hearing
Journal of Speech, Language, and Hearing Research, June 2011, Vol. 54, 959-980. doi:10.1044/1092-4388(2010/10-0197)
History: Received July 14, 2010; Accepted October 13, 2010

Purpose
Improved speech recognition in binaurally combined acoustic–electric stimulation (otherwise known as bimodal hearing) could arise when listeners integrate speech cues from acoustic and electric hearing. The aims of this study were (a) to identify the speech cues extracted in electric hearing and in low-frequency residual acoustic hearing and (b) to investigate cochlear implant (CI) users' ability to integrate speech cues across frequencies.

Method
Normal-hearing (NH) and CI subjects participated in consonant and vowel identification tasks. Each subject was tested in 3 listening conditions: CI alone (vocoder speech for NH), hearing aid (HA) alone (low-pass filtered speech for NH), and both. Integration ability for each subject was evaluated using a model of optimal integration, the PreLabeling integration model (Braida, 1991).
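For readers unfamiliar with these acoustic simulations, the sketch below illustrates the two NH processing chains named in the Method: a noise-band vocoder standing in for CI listening and a low-pass filter standing in for aided low-frequency residual hearing. The channel count, band spacing, envelope cutoff, and 500-Hz low-pass cutoff are illustrative assumptions; the abstract does not specify the study's actual processing parameters.

```python
# Sketch of the two NH listening simulations named in the Method:
# a noise-band vocoder ("CI alone") and a low-pass filter ("HA alone").
# All parameter values here are assumptions for illustration only.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def bandpass(sig, lo, hi, fs):
    """4th-order Butterworth band-pass filter."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, sig)

def noise_vocoder(speech, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Noise-band vocoder: extract each analysis band's temporal
    envelope, modulate band-limited noise with it, and sum channels.
    Assumes fs is at least 16 kHz so f_hi is below Nyquist."""
    speech = np.asarray(speech, dtype=float)
    # Log-spaced channel edges (assumed; Greenwood or ERB spacing
    # is also common in vocoder studies).
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass(speech, lo, hi, fs)
        env = np.abs(hilbert(band))  # Hilbert temporal envelope
        # Smooth the envelope (160-Hz cutoff is an assumption)
        env = sosfilt(butter(2, 160.0, fs=fs, output="sos"), env)
        carrier = bandpass(rng.standard_normal(len(speech)), lo, hi, fs)
        out += env * carrier
    return out / np.max(np.abs(out)) * np.max(np.abs(speech))

def lowpass_speech(speech, fs, cutoff=500.0):
    """'HA alone' simulation: keep only the low-frequency region
    (the 500-Hz cutoff is an assumption, not the study's value)."""
    speech = np.asarray(speech, dtype=float)
    sos = butter(6, cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfilt(sos, speech)
```

In the combined condition, the two outputs would be presented dichotically, one to each ear, to approximate bimodal listening in NH subjects.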

Results
Only a few CI listeners demonstrated bimodal benefit for phoneme identification in quiet. Speech cues extracted from the CI and the HA were highly redundant for consonants but were complementary for vowels. CI listeners also exhibited reduced integration ability for both consonant and vowel identification compared with their NH counterparts.

Conclusion
These findings suggest that reduced bimodal benefits in CI listeners are due to insufficient complementary speech cues across ears, a decrease in integration ability, or both.

Acknowledgments
This work was supported by the National Organization for Hearing Research Foundation (Principal Investigator [PI]: Ying-Yee Kong) and National Institute on Deafness and Other Communication Disorders Grants R03 DC009684-01 (PI: Ying-Yee Kong) and R01 DC007152-02 (PI: Louis D. Braida). We are grateful to all subjects for their participation in these experiments. We would like to thank Ken Grant and Joshua Bernstein for their helpful comments and suggestions. We also thank Qian-Jie Fu for allowing us to use his MATLAB programs for performing information transmission analysis.
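The information transmission analysis mentioned above follows Miller and Nicely (1955): the mutual information between stimulus and response, normalized by stimulus entropy, is computed from a confusion matrix. The minimal Python re-implementation below is a generic sketch, not Qian-Jie Fu's MATLAB code, and the example confusion matrix is hypothetical.

```python
# Minimal sketch of information transmission analysis (Miller & Nicely,
# 1955): relative information transmitted T(x;y) / H(x) from a
# stimulus-response confusion matrix.
import numpy as np

def relative_info_transmitted(confusions):
    """confusions[i, j] = count of stimulus i identified as response j."""
    n = np.asarray(confusions, dtype=float)
    p_ij = n / n.sum()                       # joint probabilities
    p_i = p_ij.sum(axis=1, keepdims=True)    # stimulus probabilities
    p_j = p_ij.sum(axis=0, keepdims=True)    # response probabilities
    nz = p_ij > 0
    # Mutual information T(x;y) in bits
    t = np.sum(p_ij[nz] * np.log2(p_ij[nz] / (p_i @ p_j)[nz]))
    # Stimulus entropy H(x) in bits
    h_x = -np.sum(p_i[p_i > 0] * np.log2(p_i[p_i > 0]))
    return t / h_x

# Hypothetical 3-phoneme confusion matrix (rows: stimuli, cols: responses)
m = [[18, 1, 1],
     [2, 16, 2],
     [1, 3, 16]]
print(f"relative information transmitted: {relative_info_transmitted(m):.2f}")
```

In consonant studies this analysis is typically also run on matrices collapsed by phonetic feature (e.g., voicing, manner, place) to estimate how well each feature is transmitted.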