Research Article  |   August 01, 2002
A Comparison of the Speech Understanding Provided by Acoustic Models of Fixed-Channel and Channel-Picking Signal Processors for Cochlear Implants
 
Author Affiliations & Notes
  • Michael F. Dorman, PhD
    Arizona State University, Tempe, and University of Utah Health Sciences Center, Salt Lake City
  • Philipos C. Loizou
    University of Texas at Dallas
  • Anthony J. Spahr
    Arizona State University, Tempe
  • Erin Maloff
    Arizona State University, Tempe
  • Contact author: M. F. Dorman, PhD, Department of Speech and Hearing Science, Arizona State University, Tempe, AZ 85287-0102. E-mail: mdorman@asu.edu
Article Information
Journal of Speech, Language, and Hearing Research, August 2002, Vol. 45, 783-788. doi:10.1044/1092-4388(2002/063)
History: Received July 25, 2001; Accepted February 8, 2002
 
Abstract

Vowels, consonants, and sentences were processed by two cochlear-implant signal-processing strategies—a fixed-channel strategy and a channel-picking strategy—and the resulting signals were presented to listeners with normal hearing for identification. At issue was the number of channels of stimulation needed in each strategy to achieve an equivalent level of speech recognition in quiet and in noise. In quiet, 8 fixed channels allowed a performance maximum for the most difficult stimulus material. A similar level of performance was reached with a 6-of-20 channel-picking strategy. In noise, 10 fixed channels allowed a performance maximum for the most difficult stimulus material. A similar level of performance was reached with a 9-of-20 strategy. Both strategies are capable of providing a very high level of speech recognition. Choosing between the two strategies may, ultimately, depend on issues that are independent of speech recognition—such as ease of device programming.
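The core difference between the two strategies can be illustrated with a minimal sketch. A fixed-channel processor stimulates every channel on every frame, whereas an n-of-m (channel-picking) processor analyzes m bands but stimulates only the n with the largest envelope amplitude in each frame. The band counts, the envelope representation, and the function below are illustrative assumptions for exposition, not the study's actual signal chain.

```python
import numpy as np

def pick_channels(band_envelopes, n):
    """Per analysis frame, keep only the n bands with the largest
    envelope amplitude and zero the rest (an n-of-m scheme).
    A fixed-channel scheme would instead keep every band on every
    frame. `band_envelopes` has shape (m_bands, n_frames)."""
    out = np.zeros_like(band_envelopes)
    for t in range(band_envelopes.shape[1]):
        top = np.argsort(band_envelopes[:, t])[-n:]  # indices of the n largest
        out[top, t] = band_envelopes[top, t]
    return out

# Toy example: 20 analysis bands, 5 frames of random envelope energy,
# selecting 6 of 20 as in the study's quiet-listening condition.
rng = np.random.default_rng(0)
env = rng.random((20, 5))
picked = pick_channels(env, 6)
assert (np.count_nonzero(picked, axis=0) == 6).all()
```

In an acoustic simulation for normal-hearing listeners, each retained envelope would then modulate a carrier (noise band or sine) before the bands are summed, but that synthesis step is independent of the channel-selection logic shown here.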

Acknowledgment
This work was supported by grants from the NIDCD to the first author (R01 DC00654-9) and the second author (R01 DC03421-2). The data on sentence recognition described in this article were presented at the Ninth IEEE DSP Workshop (Dorman, Loizou, Spahr, & Maloff, 2000).