Research Article  |   November 14, 2016
Visual Phonemic Ambiguity and Speechreading
 
Author Affiliations & Notes
  • Björn Lidestam
    Linköping University, Linköping, Sweden
  • Jonas Beskow
    Centre for Speech Technology, KTH, Stockholm, Sweden
  • Contact author: Björn Lidestam, Department of Behavioural Sciences, Linköping University, SE-581 83 Linköping, Sweden. E-mail: bjli@ibv.liu.se
Article Information
Journal of Speech, Language, and Hearing Research, 2006, Vol. 49, 835-847. doi:10.1044/1092-4388(2006/059)
History: Received September 21, 2004; Revised April 18, 2005; Accepted January 21, 2006

Purpose To study how the visual perception of phonemes contributes to the visual perception of sentences and words among normal-hearing individuals.

Method Twenty-four normal-hearing adults identified consonants, words, and sentences spoken by either a human or a synthetic talker. The synthetic talker was programmed with identical parameters within phoneme groups, which was hypothesized to result in simplified articulation. The proportion of correctly identified phonemes per participant, condition, and task was measured, as was sensitivity to single consonants and to clusters of consonants. Groups of mutually exclusive consonants were used for the sensitivity analyses and hierarchical cluster analyses; a sketch of the clustering step follows below.
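To make the clustering step concrete, here is a minimal sketch of a hierarchical cluster analysis over a consonant confusion matrix. This is our illustration, not the authors' actual pipeline: the consonant subset, the confusion counts, and the dissimilarity transform (1 minus symmetrized confusion probability) are all assumptions made for the example.

```python
# A minimal sketch, assuming identification responses tallied as a
# confusion matrix; the consonant subset, counts, and dissimilarity
# transform below are illustrative assumptions, not study data.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

consonants = ["p", "b", "m", "f", "v", "t", "d"]  # hypothetical subset

# Hypothetical confusion counts: rows = stimulus, columns = response.
counts = np.array([
    [20,  8,  7,  1,  1,  2,  1],   # /p/
    [ 7, 21,  8,  1,  1,  1,  1],   # /b/
    [ 8,  7, 22,  1,  0,  1,  1],   # /m/
    [ 1,  1,  1, 25, 10,  1,  1],   # /f/
    [ 1,  1,  0, 11, 24,  2,  1],   # /v/
    [ 2,  1,  1,  1,  2, 18, 15],   # /t/
    [ 1,  1,  1,  1,  1, 14, 21],   # /d/
], dtype=float)

# Row-normalize to confusion probabilities, then symmetrize and turn
# similarity into dissimilarity: consonants often confused are "close".
p = counts / counts.sum(axis=1, keepdims=True)
sim = (p + p.T) / 2.0
dist = 1.0 - sim
np.fill_diagonal(dist, 0.0)

# Average-linkage hierarchical clustering; the dendrogram's leaf order
# shows which consonants group into visually cohesive clusters.
Z = linkage(squareform(dist, checks=False), method="average")
print(dendrogram(Z, labels=consonants, no_plot=True)["ivl"])
```

With these made-up counts, the bilabials /p b m/ and the labiodentals /f v/ each merge early, mirroring the kind of cluster structure the analysis is designed to reveal.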

Results Consonant identification performance did not differ as a function of talker, nor did average sensitivity to single consonants. The bilabial and labiodental clusters were the most readily identified and the most cohesive for both talkers. Word and sentence identification was better for the human talker than for the synthetic talker. Participants were more sensitive to the clusters of the least visible consonants with the human talker than with the synthetic talker.

Conclusions It is suggested that the ability to distinguish between clusters of the least visually distinct phonemes is important in speechreading. Specifically, it reduces the number of candidates and thereby facilitates lexical identification.
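As a toy illustration of this candidate-reduction argument (our example, not material from the article), the sketch below maps words to viseme strings under a coarse and a finer phoneme grouping. The viseme classes and the five-word lexicon are hypothetical; the point is that finer discrimination among the least visible consonants splits one large homophene class into smaller candidate sets.

```python
from collections import defaultdict

# Hypothetical viseme mappings; these classes are illustrative,
# not the clusters reported in the article.
coarse = {"p": "B", "b": "B", "m": "B",
          "t": "X", "d": "X", "n": "X", "k": "X", "g": "X"}
fine   = {"p": "B", "b": "B", "m": "B",
          "t": "T", "d": "T", "n": "T",
          "k": "K", "g": "K"}

# Toy lexicon of words as phoneme sequences.
lexicon = {
    "pat":  ["p", "a", "t"],
    "bad":  ["b", "a", "d"],
    "man":  ["m", "a", "n"],
    "pack": ["p", "a", "k"],
    "bag":  ["b", "a", "g"],
}

def homophene_groups(mapping):
    """Group words whose phonemes map to the same viseme string."""
    groups = defaultdict(list)
    for word, phones in lexicon.items():
        key = tuple(mapping.get(ph, ph) for ph in phones)
        groups[key].append(word)
    return groups

for name, mapping in [("coarse", coarse), ("fine", fine)]:
    sizes = [len(g) for g in homophene_groups(mapping).values()]
    print(f"{name}: {len(sizes)} visual forms, "
          f"mean candidates per form = {sum(sizes) / len(sizes):.2f}")
```

With this hypothetical lexicon, the coarse mapping collapses all five words into a single visual form (five candidates per form), whereas the finer mapping yields two forms averaging 2.5 candidates, which is the sense in which finer cluster discrimination facilitates lexical identification.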

Acknowledgments
This study was in part funded by a grant from the Swedish Transport and Communication Research Board (1997-0603), awarded to Björn Lyxell.
We thank Björn Lyxell, Ulrich Olofsson, Jerker Rönnberg, Ulf Andersson, and Henrik Danielsson for comments on the manuscript; Mary Rudner for editing, help with phonetic transcriptions, and translation of stimuli into English; and all participants for their kind participation.