Research Article  |   June 01, 1995
Speechreading Supplemented by Single-Channel and Multichannel Tactile Displays of Voice Fundamental Frequency
 
Author Affiliations & Notes
  • Robin S. Waldstein
    Center for Research in Speech and Hearing Sciences, Graduate School and University Center, City University of New York
  • Arthur Boothroyd
    Center for Research in Speech and Hearing Sciences, Graduate School and University Center, City University of New York
  • Contact author: Robin S. Waldstein, PhD, Center for Research in Speech and Hearing Sciences, Graduate School and University Center, City University of New York, 33 West 42nd Street, New York, NY 10036.
Article Information
Audiologic / Aural Rehabilitation / Hearing / Research Articles
Journal of Speech, Language, and Hearing Research, June 1995, Vol. 38, 690-705. doi:10.1044/jshr.3803.690
History: Received December 6, 1993; Accepted January 9, 1995
 

Abstract
The benefits of two tactile codes of voice fundamental frequency (Fo) were evaluated as supplements to the speechreading of sentences in two short-term training studies, each using 12 adults with normal hearing. In Experiment 1, a multichannel spatiotemporal display of Fo, known as Portapitch, was used to stimulate the index finger. In an attempt to improve on past performance with this display, the coding scheme was modified to better cover the Fo range of the talker in the training materials. For Experiment 2, to engage kinesthetic/proprioceptive pathways, a novel single-channel positional display was built, in which Fo was coded as the vertical displacement of a small finger-rest. Input to both displays consisted of synthesized replicas of the Fo contours of the sentences, prepared and perfected off-line. Training with the two tactile Fo displays included auditory presentation of the synthesized Fo contours in conjunction with the tactile patterns on alternate trials. Speechreading enhancement provided by the two tactile Fo displays was compared with the enhancement obtained from three reference conditions: auditory presentation of the Fo contour in conjunction with the tactile patterns, auditory presentation of a sinusoidal indicator of the presence or absence of voicing, and a single-channel tactile display of the speech waveform presented to the index finger. Despite the modified coding strategy, the multichannel Portapitch provided a mean tactile speechreading enhancement of 7 percentage points, which was no greater than that found in previous studies. The novel positional Fo display provided only a 4 percentage point enhancement. Neither Fo display was better than the simple single-channel tactile transform of the full speech waveform, which gave a 7 percentage point enhancement effect. Auditory speechreading enhancement effects were 17 percentage points with the voicing indicator and approximately 35 percentage points when the auditory Fo contour was provided in conjunction with the tactile displays. The findings are consistent with the hypothesis that subjects were not taking full advantage of the Fo variation information available in the outputs of the two experimental tactile displays.
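To make the two tactile coding schemes concrete, the Python sketch below shows one plausible way an Fo value could be mapped to (a) one of several channels of a multichannel spatiotemporal display and (b) a normalized vertical displacement of a single-channel positional display. This is an illustration only, not the coding used by Portapitch or the positional display in these experiments; the talker Fo range (80-300 Hz), the channel count (16), and the logarithmic scaling are all assumed values.

import math

# Assumed parameters for illustration; the actual displays used
# talker-specific values chosen to cover that talker's Fo range.
F0_MIN_HZ = 80.0    # assumed lower edge of the talker's Fo range
F0_MAX_HZ = 300.0   # assumed upper edge of the talker's Fo range
N_CHANNELS = 16     # assumed number of tactile channels

def _normalized_log_f0(f0_hz):
    """Clamp Fo to the assumed range and map it to [0, 1] on a log scale."""
    f0 = min(max(f0_hz, F0_MIN_HZ), F0_MAX_HZ)
    return math.log(f0 / F0_MIN_HZ) / math.log(F0_MAX_HZ / F0_MIN_HZ)

def f0_to_channel(f0_hz):
    """Map Fo (Hz) to a channel index, 0 .. N_CHANNELS - 1, low to high."""
    return min(int(_normalized_log_f0(f0_hz) * N_CHANNELS), N_CHANNELS - 1)

def f0_to_displacement(f0_hz):
    """Map Fo (Hz) to a normalized finger-rest displacement in [0, 1]."""
    return _normalized_log_f0(f0_hz)

if __name__ == "__main__":
    for f0 in (100.0, 150.0, 225.0):
        print(f0, "Hz -> channel", f0_to_channel(f0),
              "displacement", round(f0_to_displacement(f0), 2))

Either mapping preserves the shape of the Fo contour; the two displays differ mainly in whether that contour is rendered as a moving locus of stimulation across the skin of the index finger or as a continuous positional cue to kinesthetic/proprioceptive receptors.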

Acknowledgments
We thank Gary Chant, Charlie Chen, Nina Guerrero, Anita Haravon, Mark Weiss, and Eddy Yeung for their invaluable contributions to the completion of this work. This research was supported by NIH Program Project Grant #5P50DC00178 from the National Institute on Deafness and Other Communication Disorders to the City University of New York.