Research Note  |   November 08, 2018
Spontaneous Otoacoustic Emissions Reveal an Efficient Auditory Efferent Network
 
Author Affiliations & Notes
  • Viorica Marian
    Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL
  • Tuan Q. Lam
    Department of Psychological Sciences, Loyola University, New Orleans, LA
  • Sayuri Hayakawa
    Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL
  • Sumitrajit Dhar
    Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL
  • Disclosure: The authors have declared that no competing interests existed at the time of publication.
  • Correspondence to Viorica Marian: v-marian@northwestern.edu
  • Editor-in-Chief: Frederick (Erick) Gallun
  • Editor: Steve Aiken
Article Information
Journal of Speech, Language, and Hearing Research, November 2018, Vol. 61, 2827-2832. doi:10.1044/2018_JSLHR-H-18-0025
History: Received January 24, 2018; Revised April 30, 2018; Accepted June 5, 2018
 

Purpose Understanding speech often involves processing input from multiple modalities. The availability of visual information may make auditory input less critical for comprehension. This study examines whether the auditory system is sensitive to the presence of complementary sources of input when exerting top-down control over the amplification of speech stimuli.

Method Auditory gain in the cochlea was assessed by monitoring spontaneous otoacoustic emissions (SOAEs), which are by-products of the amplification process. SOAEs were recorded while 32 participants (23 women, 9 men; M age = 21.13 years) identified speech sounds such as “ba” and “ga.” The speech sounds were presented either alone or with complementary visual input, and either in quiet or in 6-talker babble.
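
To make the measurement concrete, below is a minimal sketch of how an SOAE level might be estimated from an ear-canal recording. The research note does not publish analysis code, so the sampling parameters, window length, and function names here are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import welch

def soae_level_db(recording, fs, soae_freq, half_band_hz=50.0):
    """Estimate the level (dB SPL) of an SOAE near a known frequency.

    recording    : 1-D array of ear-canal microphone samples, in pascals
    fs           : sampling rate in Hz
    soae_freq    : approximate emission frequency in Hz
    half_band_hz : half-width of the search band around soae_freq
    """
    # High-resolution power spectral density (Pa^2/Hz) of the recording
    freqs, psd = welch(recording, fs=fs, nperseg=8192)
    bin_width = freqs[1] - freqs[0]
    # Restrict to a narrow band around the expected emission frequency
    band = (freqs >= soae_freq - half_band_hz) & (freqs <= soae_freq + half_band_hz)
    # Treat the strongest bin in the band as the emission component
    peak_power = psd[band].max() * bin_width  # approximate band power, Pa^2
    # Convert to dB SPL re 20 micropascals
    return 10.0 * np.log10(peak_power / (20e-6) ** 2)
```

Comparing this estimate during a pre-trial baseline against the same estimate during stimulus presentation would yield the kind of change in cochlear amplification the study tracks.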

Results Analyses revealed a greater reduction in the amplification of auditory stimuli presented in noise than of those presented in quiet. This reduced amplification may aid speech perception by improving the signal-to-noise ratio. Critically, amplification was reduced more when speech sounds were presented bimodally with visual information than when they were presented unimodally. This effect was evidenced by larger changes in SOAE levels from baseline to stimulus presentation in audiovisual trials than in audio-only trials.
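
The condition contrast reduces to a within-subject comparison of baseline-to-stimulus SOAE level changes. A hedged sketch of that comparison is below; the values are randomly generated stand-ins for illustration only, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 32  # participants, matching the study's sample size

# Hypothetical per-participant SOAE level changes (dB) from baseline to
# stimulus presentation; more negative values mean a greater reduction
# in amplification. These are simulated, not the reported data.
delta_audio_only = rng.normal(loc=-0.5, scale=0.6, size=n)
delta_audiovisual = rng.normal(loc=-0.9, scale=0.6, size=n)

# Paired comparison: is the reduction larger on audiovisual trials?
t_stat, p_value = ttest_rel(delta_audiovisual, delta_audio_only)
print(f"t({n - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
```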

Conclusions The results suggest that even the earliest stages of speech comprehension are modulated by top-down influences, with SOAE levels changing depending on whether input is unimodal or bimodal. The neural processes responsible for changes in cochlear function are sensitive to redundancy across auditory and visual input channels and coordinate activity to maximize efficiency in the auditory periphery.

Acknowledgments
This project was funded in part by the National Institute on Deafness and Other Communication Disorders Training Grant T32-DC009399-04 to Tuan Lam and by Grant R01HD059858 to Viorica Marian. The authors thank Ken Grant for sharing his audiovisual speech stimuli and Jungmee Lee for her input on the initial design of this study. The authors would also like to thank Peter Kwak and Jaeryoung Lee for their assistance in recruiting participants and collecting data for this experiment.
Author Contributions
V. M., S. D., and T. L. designed the study. T. L. collected the data. S. H. and T. L. analyzed the data and drafted the research note. V. M., S. D., and S. H. edited and finalized the research note. All authors contributed to the interpretation of the results.