Research Article  |   January 01, 2017
Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension
 
Author Affiliations & Notes
  • Linda Drijvers
    Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
    Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
  • Asli Özyürek
    Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
    Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
    Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
  • Disclosure: The authors have declared that no competing interests existed at the time of publication.
  • Correspondence to Linda Drijvers: linda.drijvers@mpi.nl
  • Editor: Nancy Tye-Murray
  • Associate Editor: Karen Kirk
Article Information
Journal of Speech, Language, and Hearing Research, January 2017, Vol. 60, 212-222. doi:10.1044/2016_JSLHR-H-16-0101
History: Received March 14, 2016; Revised June 22, 2016; Accepted June 22, 2016
 

Purpose This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately.

Method Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture).

Results Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions.

Conclusions When perceiving degraded speech in a visual context, listeners benefit more from having both visual articulators present compared with 1. This benefit was larger at 6-band than 2-band noise-vocoding, where listeners can benefit from both phonological cues from visible speech and semantic cues from iconic gestures to disambiguate speech.

Acknowledgments
This research was supported by Gravitation Grant 024.001.006 of the Language in Interaction Consortium from the Netherlands Organization for Scientific Research. We thank two anonymous reviewers for their helpful comments and suggestions that helped to improve the article. We are very grateful to Nick Wood, for helping us in editing the video stimuli, and to Gina Ginos, for being the actress in the videos.