The Identification of Affective-Prosodic Stimuli by Left- and Right-Hemisphere-Damaged Subjects: All Errors Are Not Created Equal
Research Article  |  October 01, 1992
 
Author Affiliations & Notes
  • Diana Van Lancker
    Veterans Affairs Outpatient Clinic and Department of Neurology, University of Southern California, Los Angeles
  • John J. Sidtis
    Department of Neurology, University of Minnesota Medical School, Minneapolis
  • Contact Author: Diana Van Lancker, PhD, Audiology and Speech Pathology (126), VA Outpatient Clinic, 425 South Hill Street, Los Angeles, CA 90013.
Article Information
Journal of Speech, Language, and Hearing Research, October 1992, Vol. 35, 963-970. doi:10.1044/jshr.3505.963
History: Received June 25, 1991; Accepted December 12, 1991

Impairments in listening tasks that require subjects to match affective-prosodic speech utterances with appropriate facial expressions have been reported after both left- and right-hemisphere damage. In the present study, both left- and right-hemisphere-damaged patients were found to perform poorly compared to a nondamaged control group on a typical affective-prosodic listening task using four emotional types (happy, sad, angry, surprised). To determine if the two brain-damaged groups were exhibiting a similar pattern of performance with respect to their use of acoustic cues, the 16 stimulus utterances were analyzed acoustically, and the results were incorporated into an analysis of the errors made by the patients. A discriminant function analysis using acoustic cues alone indicated that fundamental frequency (F0) variability, mean F0, and syllable durations most successfully distinguished the four emotional sentence types. A similar analysis that incorporated the misclassifications made by the patients revealed that the left-hemisphere-damaged and right-hemisphere-damaged groups were utilizing these acoustic cues differently. The results of this and other studies suggest that rather than being lateralized to a single cerebral hemisphere in a fashion analogous to language, prosodic processes are made up of multiple skills and functions distributed across cerebral systems.
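
To illustrate the kind of analysis the abstract describes, the sketch below shows a linear discriminant analysis that classifies utterances into the four emotional types from three acoustic cues (mean F0, F0 variability, and syllable duration). This is a minimal sketch, not the authors' procedure: the feature values are invented for illustration, and scikit-learn's LinearDiscriminantAnalysis stands in for the study's discriminant function analysis.

```python
# Hedged sketch of a discriminant function analysis over acoustic cues.
# All feature values below are illustrative assumptions, not the study's
# stimulus measurements.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Columns: mean F0 (Hz), F0 variability (e.g., SD of F0, Hz),
# mean syllable duration (ms) -- the three cues the abstract reports as
# most discriminating among the four emotional sentence types.
X = np.array([
    [220, 55, 180],   # happy-sounding utterance (toy values)
    [150, 10, 260],   # sad
    [200, 40, 150],   # angry
    [240, 70, 200],   # surprised
    [215, 50, 175],
    [155, 12, 250],
    [195, 35, 155],
    [235, 65, 210],
])
y = np.array(["happy", "sad", "angry", "surprised"] * 2)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# Classify a new utterance from its acoustic measurements.
print(lda.predict([[160, 15, 255]]))  # likely "sad" under these toy data
```

Comparing where such a classifier's predictions diverge from listeners' labels, separately for each patient group, is analogous to the error analysis the study uses to ask whether the two groups weight the acoustic cues differently.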

Acknowledgments
We appreciate the assistance of Anne Curry in recording the stimuli, Nancy Monson and Sara Jensen-Fritz in statistical analysis, Kris DeBruin for assistance with the acoustic analysis, and Tami Ballew and Sandy Dooley in manuscript preparation. The acoustic analysis was conducted at the Laboratory of Quantitative Neurology, Department of Neurology, University of Minnesota. The assistance of John Mertus in providing and implementing the BLISS system is gratefully acknowledged, as is the assistance of Philip Lieberman for sharing the initial version of BLISS. The critical comments of Jack Gandour and Donald A. Robin are also appreciated. This study was supported in part by the Veterans Administration.