Research Article  |   December 01, 2002
Spectral Contributions to the Benefit From Spatial Separation of Speech and Noise
 
Author Affiliations & Notes
  • Judy R. Dubno, PhD
    Medical University of South Carolina, Charleston
  • Jayne B. Ahlstrom
    Medical University of South Carolina, Charleston
  • Amy R. Horwitz
    Medical University of South Carolina, Charleston
  • Contact author: Judy R. Dubno, PhD, Department of Otolaryngology–Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, P.O. Box 250550, Charleston, SC 29425. E-mail: dubnojr@musc.edu
Article Information
Journal of Speech, Language, and Hearing Research, December 2002, Vol. 45, 1297-1310. doi:10.1044/1092-4388(2002/104)
History: Received May 1, 2001; Accepted April 16, 2002
 

Speech recognition in noise improves when speech and noise sources are separated in space. This benefit has two components whose effects are strongest in different frequency regions: (1) interaural level differences (e.g., head shadow), which are largest at higher frequencies, and (2) interaural time differences, which have their greatest contribution at lower frequencies. Binaural interactions enhance the separation of signals from noise through the use of these interaural differences. Here, the benefit attributable to spatial separation was measured as a function of the low- and high-pass cutoff frequency of speech and noise. Listeners were younger adults with normal hearing, older adults with normal hearing, and older adults with hearing loss. Binaural thresholds for narrowband noises were measured in quiet and in a speech-shaped masker as a function of masker low-pass cutoff frequency. Speech levels corresponding to 50% correct recognition of sentences from the Hearing in Noise Test (HINT) were measured in a 65-dB SPL speech-shaped noise. Thresholds for narrowband noises and for speech were measured with two loudspeaker configurations: (1) signals and speech-shaped noise at 0° azimuth (in front of the listener) and (2) signals at 0° azimuth and speech-shaped noise at 90° azimuth (at the listener's side). The criterion measure was spatial separation benefit, or the difference in thresholds for the two conditions. Benefit of spatial separation for unfiltered speech averaged 6.1 dB for younger listeners with normal hearing, 4.9 dB for older listeners with normal hearing, and 2.7 dB for older listeners with hearing loss. Benefit was differentially affected by low-pass and high-pass filtering, suggesting a trade-off of the contributions of higher frequency interaural level differences and lower frequency interaural timing cues. 
As expected, older listeners with hearing loss benefited little from the improved signal-to-noise ratios in the higher frequencies resulting from head shadow, but showed some benefit from lower frequency cues. Spatial benefit for older listeners with normal hearing was reduced relative to benefit for younger listeners. This result may be related to older listeners' elevated thresholds at frequencies above 6.0 kHz.
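The criterion measure described above, spatial separation benefit, is simply the difference between thresholds in the co-located (signal and noise both at 0° azimuth) and separated (noise at 90° azimuth) conditions. A minimal sketch of this computation, using hypothetical speech reception thresholds rather than the study's actual data:

```python
# Sketch only: spatial separation benefit as defined in the abstract,
# i.e., the difference in thresholds between the two loudspeaker
# configurations. Threshold values below are hypothetical, not the
# article's data.

def spatial_separation_benefit(srt_colocated_db, srt_separated_db):
    """Benefit in dB. A lower (better) threshold in the separated
    condition yields a positive benefit:
    benefit = co-located threshold - separated threshold."""
    return srt_colocated_db - srt_separated_db

# Hypothetical SRTs (dB) for one listener: co-located -2.0, separated -8.1,
# giving a benefit of about 6.1 dB, comparable in size to the reported
# average for younger listeners with normal hearing.
benefit = spatial_separation_benefit(-2.0, -8.1)
print(benefit)
```

Averaging this per-listener difference within each group yields the group benefits reported in the abstract (6.1, 4.9, and 2.7 dB).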

Acknowledgments
This work was supported (in part) by grants P50 DC00422 and R01 DC00184 from NIH/NIDCD and from the MUSC General Clinical Research Center (M01 RR 01070). The authors thank Chris Ahlstrom for computer and signal-processing support, Johanna Larsen for assistance with data collection, and John H. Mills for editorial comments.