Technical Report  |   June 22, 2017
Enhancing Intervention for Residual Rhotic Errors Via App-Delivered Biofeedback: A Case Study
 
Author Affiliations & Notes
  • Tara McAllister Byun
    Steinhardt School of Culture, Education, and Human Development, New York University
  • Heather Campbell
    Steinhardt School of Culture, Education, and Human Development, New York University
  • Helen Carey
    Tandon School of Engineering, New York University
  • Wendy Liang
    Steinhardt School of Culture, Education, and Human Development, New York University
  • Tae Hong Park
    Steinhardt School of Culture, Education, and Human Development, New York University
  • Mario Svirsky
    Langone Medical Center, New York University
  • Disclosure: The authors have declared that no competing interests existed at the time of publication.
  • Correspondence to Tara McAllister Byun: tara.byun@nyu.edu
  • Editor: Yana Yunusova
  • Associate Editor: Ignatius Nip
Article Information
Journal of Speech, Language, and Hearing Research, June 2017, Vol. 60, 1810-1817. doi:10.1044/2017_JSLHR-S-16-0248
History: Received June 15, 2016; Revised September 29, 2016; Accepted November 16, 2016
 

Purpose Recent research suggests that visual-acoustic biofeedback can be an effective treatment for residual speech errors, but adoption remains limited due to barriers including high cost and lack of familiarity with the technology. This case study reports results from the first participant to complete a course of visual-acoustic biofeedback using a not-for-profit iOS app, Speech Therapist's App for /r/ Treatment.

Method App-based biofeedback treatment for rhotic misarticulation was provided in weekly 30-min sessions for 20 weeks. Within-treatment progress was documented using clinician perceptual ratings and acoustic measures. Generalization gains were assessed using acoustic measures of word probes elicited during baseline, treatment, and maintenance sessions.

Results Both clinician ratings and acoustic measures indicated that the participant significantly improved her rhotic production accuracy in trials elicited during treatment sessions. However, these gains did not transfer to generalization probes.

Conclusions This study provides a proof-of-concept demonstration that app-based biofeedback is a viable alternative to costlier dedicated systems. Generalization of gains to contexts without biofeedback remains a challenge that requires further study. App-delivered biofeedback could enable clinician–research partnerships that would strengthen the evidence base while providing enhanced treatment for children with residual rhotic errors.

Supplemental Material https://doi.org/10.23641/asha.5116318

Acknowledgments
This project was supported by NIH NIDCD grant R03DC012883 and by funding from the American Speech-Language-Hearing Foundation (Clinical Research Grant), New York University (Research Challenge Fund), and Steinhardt School of Culture, Education, and Human Development (Technology Award). The authors gratefully acknowledge the contributions of the following individuals: Gui Bueno, R. Luke DuBois, Jonathan Forsyth, Timothy Sanders, and Nikolai Steklov.