Article  |   April 01, 2010
Effects of Lips and Hands on Auditory Learning of Second-Language Speech Sounds
 
Author Affiliations & Notes
  • Yukari Hirata
    Colgate University, Hamilton, NY
  • Spencer D. Kelly
    Colgate University, Hamilton, NY
  • Contact author: Yukari Hirata, Department of East Asian Languages and Literatures, 9B Lawrence Hall, Colgate University. E-mail: yhirata@mail.colgate.edu.
Article Information
Special Populations / Cultural & Linguistic Diversity / Attention, Memory & Executive Functions / Speech, Voice & Prosody / Speech
Journal of Speech, Language, and Hearing Research, April 2010, Vol. 53, 298-310. doi:10.1044/1092-4388(2009/08-0243)
History: Received November 18, 2008; Accepted August 24, 2009

Purpose
Previous research has found that auditory training helps native English speakers to perceive phonemic vowel length contrasts in Japanese, but their performance did not reach native levels after training. Given that multimodal information, such as lip movement and hand gesture, influences many aspects of native language processing, the authors examined whether multimodal input helps to improve native English speakers' ability to perceive Japanese vowel length contrasts.

Method
Sixty native English speakers participated in 1 of 4 types of training: (a) audio-only, (b) audio-mouth, (c) audio-hands, and (d) audio-mouth-hands. Before and after training, participants were given phoneme perception tests that measured their ability to identify short and long vowels in Japanese (e.g., /kato/ vs. /katoː/).

Results
Although all 4 groups improved from pre- to posttest (replicating previous research), the participants in the audio-mouth condition improved more than those in the audio-only condition, whereas the 2 conditions involving hand gestures did not.

Conclusions
Seeing lip movements during training significantly helps learners to perceive difficult second-language phonemic contrasts, but seeing hand gestures does not. The authors discuss possible benefits and limitations of using multimodal information in second-language phoneme learning.

Acknowledgments
This study was supported by the Harvey M. Picker Institute for Interdisciplinary Studies in the Sciences and Mathematics at Colgate University. We thank Emily Cullings, Jason Demakakos, Jackie Burch, Jen Simester, and Grace Baik for their involvement at various stages of this project. Portions of this study were presented at the conferences Acoustics '08 in Paris and WorldCALL 2008 in Fukuoka, as well as at invited colloquia at the Advanced Telecommunications Research Institute International in Kyoto in 2008 and the Max Planck Institute for Psycholinguistics in Nijmegen (the Netherlands) in 2009.