Article  |   April 01, 2011
Corrected High–Frame Rate Anchored Ultrasound With Software Alignment
 
Author Affiliations & Notes
  • Amanda L. Miller
    The Ohio State University, Columbus
  • Kenneth B. Finch
    Ithaca, New York
  • Correspondence to Amanda L. Miller: amiller@ling.osu.edu
  • Editor: Anne Smith
  • Associate Editor: Maureen Stone
Article Information
Hearing & Speech Perception / Acoustics / Research Issues, Methods & Evidence-Based Practice / Speech, Voice & Prosody / Speech
Journal of Speech, Language, and Hearing Research, April 2011, Vol. 54, 471-486. doi:10.1044/1092-4388(2010/09-0103)
History: Received May 29, 2009; Revised December 22, 2009; Accepted September 9, 2010
 

Purpose
To improve lingual ultrasound imaging with the Corrected High Frame Rate Anchored Ultrasound with Software Alignment (CHAUSA; Miller, 2008) method.

Method
A production study of the IsiXhosa alveolar click is presented. Articulatory-to-acoustic alignment is demonstrated using a Tri-Modal 3-ms pulse generator. Images from 2 simultaneous data collection paths, using dominant ultrasound technology and the CHAUSA method, are compared. The probe stabilization and head-movement correction paradigm is demonstrated.

Results
The CHAUSA method increases the frame rate from the standard National Television System Committee (NTSC) video rate of 29.97 frames per second (fps) to the ultrasound machine's internal rate, in this case 124 fps, by using Digital Imaging and Communications in Medicine (DICOM; National Electrical Manufacturers Association, 2008) data transfer. DICOM transfer avoids the spatiotemporal inaccuracies introduced by dominant ultrasound export techniques. The data demonstrate alignment of the acoustic and articulatory signals to the correct high-frame-rate (FR) frame (±4 ms at 124 fps).
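As a rough illustration of the frame-rate arithmetic reported above (a minimal sketch, not part of the published CHAUSA implementation; the event time and function names are hypothetical), the short Python snippet below maps an acoustic event time to the nearest ultrasound frame at the NTSC export rate and at the 124-fps DICOM rate. The worst-case alignment error is half a frame period: roughly ±17 ms at 29.97 fps versus about ±4 ms at 124 fps.

# Hypothetical sketch: nearest-frame alignment error at two frame rates.
NTSC_FPS = 29.97      # standard NTSC video export rate
CHAUSA_FPS = 124.0    # ultrasound internal machine rate reported in the study

def nearest_frame(event_time_s, fps):
    """Return (frame index, alignment error in ms) for an acoustic event time."""
    frame = round(event_time_s * fps)
    error_ms = (event_time_s - frame / fps) * 1000.0
    return frame, error_ms

if __name__ == "__main__":
    burst_time_s = 0.1234  # illustrative click-burst time in seconds
    for label, fps in (("NTSC video", NTSC_FPS), ("CHAUSA/DICOM", CHAUSA_FPS)):
        frame, err = nearest_frame(burst_time_s, fps)
        half_period_ms = 1000.0 / (2.0 * fps)
        print(f"{label}: frame {frame}, error {err:+.2f} ms "
              f"(worst case ±{half_period_ms:.2f} ms)")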

Conclusions
CHAUSA produces high-FR, high-spatial-quality ultrasound images that are head-movement corrected to within 1 mm. The method reveals tongue dorsum retraction during the posterior release of the alveolar click and tongue tip recoil following the anterior release, both of which were previously undetectable. CHAUSA visualizes most of the tongue in studies of dynamic consonants with a major reduction in field problems, opening up important areas of speech research.

Acknowledgments
Development of the CHAUSA method was supported by National Science Foundation grants BCS-0726200 (Amanda Miller, principal investigator) and BCS-0726198 (Bonny Sands, principal investigator), titled “Collaborative Research: Phonetic and Phonological Structures of Post-Velar Constrictions in Clicks and Laterals” to Cornell University and Northern Arizona University. Any opinions, findings, and conclusions or recommendations expressed in this material are ours and do not necessarily reflect the views of the National Science Foundation. We thank Abigail Scott, who assisted with the tri-modal proof-of-alignment data collection. We also thank our IsiXhosa speaker, Luxolo Lengs, and our Mangetti Dune !Xung speaker, Jenggu Rooi Fransisko.