Research Note  |   September 01, 1987
Perception of Synthetic Visual Consonant-Vowel Articulations
 
Author Affiliations & Notes
  • Brian E. Walden
    Army Audiology and Speech Center, Walter Reed Army Medical Center, Washington, DC
  • Allen A. Montgomery
    Army Audiology and Speech Center, Walter Reed Army Medical Center, Washington, DC
  • Robert A. Prosek
    Army Audiology and Speech Center, Walter Reed Army Medical Center, Washington, DC
Article Information
Journal of Speech, Language, and Hearing Research, September 1987, Vol. 30, 418-424. doi:10.1044/jshr.3003.418
History: Received July 10, 1986; Accepted March 2, 1987
 

Synthetic speech-like articulations were presented to adult subjects via the visual modality, following the classic categorical perception experimental paradigm (Liberman, Harris, Hoffman, & Griffith, 1957). Animations were generated on a computer-based graphics system. Stimuli consisted of representations of the syllables /ba/, /va/, and /wa/, as well as 6 linearly interpolated intermediate stimuli between each of the possible exemplar pairs, resulting in three 8-item continua. Three sets of observations were obtained for these stimuli. First, for each continuum, labeling data were obtained in which the subject assigned one or the other exemplar label to each of the stimuli. Next, ABX discrimination data were obtained for each continuum. In the final task, subjects assigned a rating of one through nine to each animation, indicating the extent to which it was like the exemplar syllables. Although the labeling functions showed rather abrupt transitions from one response category to the other, the peaks in the discrimination functions did not coincide with the category boundaries. Further, the mean rating functions were relatively linear, and the distributions of rating responses were unimodal, with peak locations that differed depending on the stimulus.
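The continuum construction described above can be sketched as follows. This is a hypothetical illustration only, not the authors' stimulus-generation code: it assumes each exemplar articulation can be summarized as a vector of mouth-shape parameters, and that blending two exemplars with six intermediate interpolation weights yields an 8-item continuum.

```python
import numpy as np

def make_continuum(exemplar_a, exemplar_b, n_items=8):
    """Return n_items stimuli: the two exemplars plus linearly
    interpolated blends between them (6 intermediates when n_items=8)."""
    weights = np.linspace(0.0, 1.0, n_items)  # 0 = pure A, 1 = pure B
    return [(1 - w) * exemplar_a + w * exemplar_b for w in weights]

# Placeholder two-parameter mouth shapes (e.g., lip opening, lip rounding);
# the actual animation parameters are not specified in the abstract.
ba = np.array([1.0, 0.0])
wa = np.array([0.2, 1.0])
continuum = make_continuum(ba, wa)
print(len(continuum))  # 8 items: 2 exemplars + 6 intermediates
```

With three exemplars, the three possible pairs (each blended this way) give the three 8-item continua used in the labeling, ABX discrimination, and rating tasks.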
