Article  |   February 2010
Evaluating the Effort Expended to Understand Speech in Noise Using a Dual-Task Paradigm: The Effects of Providing Visual Speech Cues
Author Affiliations & Notes
  • Sarah Fraser
    Concordia University and the Center for Research in Human Development, Montréal, Québec, Canada
  • Jean-Pierre Gagné
    Université de Montréal and the Centre de Recherche de l’Institut Universitaire de Gériatrie de Montréal, Montréal, Québec, Canada
  • Majolaine Alepins
    Université de Montréal and the Centre de Recherche de l’Institut Universitaire de Gériatrie de Montréal, Montréal, Québec, Canada
  • Pascale Dubois
    Université de Montréal and the Centre de Recherche de l’Institut Universitaire de Gériatrie de Montréal, Montréal, Québec, Canada
  • Contact author: Sarah Fraser, Department of Psychology, Concordia University, 7141 Sherbrooke Street West, Montreal, Quebec H4B 1R6, Canada. E-mail: sfraser@live.concordia.ca.
Journal of Speech, Language, and Hearing Research February 2010, Vol.53, 18-33. doi:10.1044/1092-4388(2009/08-0140)
History: Received 10 Jul 2008, Revised 23 Dec 2008, Accepted 05 Jun 2009

Purpose: Using a dual-task paradigm, 2 experiments were conducted to assess differences in the amount of listening effort expended to understand speech in noise in the audiovisual (AV) and audio-only (A-only) modalities. Experiment 1 used equivalent noise levels in both modalities, whereas Experiment 2 equated speech recognition performance across modalities by increasing the noise level in the AV modality relative to the A-only modality.

Method: Sixty adults were randomly assigned to Experiment 1 or Experiment 2. Participants performed speech and tactile recognition tasks separately (single task) and concurrently (dual task). The speech tasks were performed in both modalities. Accuracy and reaction time data were collected as well as ratings of perceived accuracy and effort.

Results: In Experiment 1, speech recognition in the AV modality was rated as less effortful than in the A-only modality, and accuracy scores were higher. In Experiment 2, reaction times were slower, tactile task performance was poorer, and rated listening effort was greater in the AV modality than in the A-only modality.

Conclusions: At equivalent noise levels, speech recognition performance was enhanced, and subjectively less effortful, in the AV modality than in the A-only modality. At equivalent accuracy levels, the dual-task performance decrements (on both tasks) suggest that the noisier AV modality was more effortful than the A-only modality.
