Research Article  |   August 08, 2018
Reliability and Repeatability of the Speech Cue Profile
 
Author Affiliations & Notes
  • Pamela Souza
    Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL
    Knowles Hearing Center, Northwestern University, Evanston, IL
  • Richard Wright
    Department of Linguistics, University of Washington, Seattle
  • Frederick Gallun
    National Center for Rehabilitative Auditory Research, Portland VA Medical Center, Oregon
    Otolaryngology–Head and Neck Surgery, Oregon Health and Science University, Portland
  • Paul Reinhart
    Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL
  • Disclosure: The authors have declared that no competing interests existed at the time of publication.
  • Correspondence to Pamela Souza: p-souza@northwestern.edu
  • Editor-in-Chief: Julie Liss
  • Editor: Megan McAuliffe
Article Information
Journal of Speech, Language, and Hearing Research, August 2018, Vol. 61, 2126-2137. doi:10.1044/2018_JSLHR-H-17-0341
History: Received September 9, 2017; Revised January 13, 2018; Accepted April 8, 2018

Purpose Researchers have long noted speech recognition variability that is not explained by the pure-tone audiogram. Previous work (Souza, Wright, Blackburn, Tatman, & Gallun, 2015) demonstrated that a small group of listeners with sensorineural hearing loss differed in the types of acoustic cues they used to identify speechlike stimuli, specifically in the extent to which each participant relied on spectral (or temporal) information for identification. Consistent with recent calls for data rigor and reproducibility, the primary aims of this study were to replicate the pattern of cue use in a larger cohort and to verify the stability of the cue profiles over time.

Method Cue-use profiles were measured for adults with sensorineural hearing loss using a syllable identification task consisting of synthetic speechlike stimuli in which spectral and temporal dimensions were manipulated along continua. For the first set, a static spectral shape varied from alveolar to palatal, and a temporal envelope rise time varied from affricate to fricative. For the second set, formant transitions varied from labial to alveolar, and a temporal envelope rise time varied from approximant to stop. A discriminant feature analysis was used to determine the degree to which spectral and temporal information contributed to stimulus identification. A subset of participants completed a second visit using the same stimuli and procedures.
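For readers who want a concrete sense of how relative cue weights can be estimated from identification responses, the sketch below is an illustrative stand-in, not the analysis code used in the study. It assumes a hypothetical two-dimensional stimulus grid (spectral step by envelope rise-time step) with binary identification responses, and it uses a logistic regression in place of the discriminant feature analysis to index how strongly each cue dimension drives a listener's responses.

# Illustrative sketch only: NOT the authors' analysis. Assumes a hypothetical
# 7 x 7 stimulus continuum (spectral step x rise-time step), 10 presentations
# per stimulus, and binary identification responses; logistic regression is
# used as a stand-in for a discriminant feature analysis of cue weighting.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Build the stimulus grid and repeat each stimulus 10 times.
spectral_step, temporal_step = np.meshgrid(np.arange(1, 8), np.arange(1, 8))
X = np.column_stack([spectral_step.ravel(), temporal_step.ravel()])
X = np.repeat(X, 10, axis=0)

# Simulate a "spectral listener": responses driven mainly by the spectral cue.
logit = 1.5 * (X[:, 0] - 4) + 0.3 * (X[:, 1] - 4)
y = rng.random(len(logit)) < 1 / (1 + np.exp(-logit))

# Standardize predictors so the fitted coefficients are comparable cue weights.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
model = LogisticRegression().fit(Xz, y)
w_spec, w_temp = np.abs(model.coef_[0])
print(f"spectral weight: {w_spec:.2f}, temporal weight: {w_temp:.2f}")
print(f"relative reliance on spectral cue: {w_spec / (w_spec + w_temp):.2f}")

In this toy setup, the ratio w_spec / (w_spec + w_temp) plays the role of a cue profile: values near 1 describe a listener driven mainly by spectral information, and values near 0 describe a listener driven mainly by the temporal envelope.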

Results When spectral information was static, most participants were more influenced by spectral than by temporal information. When spectral information was dynamic, participants demonstrated a balanced distribution of cue-use patterns, with nearly equal numbers of individuals influenced primarily by spectral or by temporal cues. Individual cue profiles were repeatable over a period of several months.

Conclusion In combination with previously published data, these results indicate that listeners with sensorineural hearing loss are influenced by different cues to identify speechlike sounds and that those patterns are stable over time.

Acknowledgments
This work was supported by National Institutes of Health Grant R01 DC006014 (awarded to P. Souza). The research reported in this publication was supported, in part, by National Center for Advancing Translational Sciences Grant UL1TR001422, which supports the Northwestern Biostatistics Cores. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors thank Laura Mathews and Rachel Ellinger for their assistance in data collection and Lauren Balmert for her guidance regarding data analysis.