Research Article | September 18, 2017
Semantic and Phonological Encoding in Adults Who Stutter: Silent Responses to Pictorial Stimuli
 
Author Affiliations & Notes
  • Irena Vincent
    State University of New York College at Cortland
  • Disclosure: The author has declared that no competing interests existed at the time of publication.
  • Correspondence to Irena Vincent: irena.vincent@cortland.edu
  • Editor: Julie Liss
  • Associate Editor: Susanne Fuchs
Article Information
Journal of Speech, Language, and Hearing Research, September 2017, Vol. 60, 2537-2550. doi:10.1044/2017_JSLHR-S-16-0323
History: Received August 9, 2016; Revised March 14, 2017; Accepted May 13, 2017

Purpose Research on language planning in adult stuttering is relatively sparse and offers diverging arguments about a potential causative relationship between semantic and phonological encoding and fluency breakdowns. This study further investigated semantic and phonological encoding efficiency in adults who stutter (AWS) by means of silent category and phoneme identification, respectively.

Method Fifteen AWS and 15 age- and sex-matched adults who do not stutter (ANS) participated. The groups were compared on the basis of the accuracy and speed of superordinate category (animal vs. object) and initial phoneme (vowel vs. consonant) decisions, which were indicated manually during silent viewing of pictorial stimuli. Movement execution latency was accounted for, and no other cognitive, linguistic, or motor demands were placed on participants' responses. Therefore, category identification accuracy and speed were considered indirect measures of semantic encoding efficiency, and phoneme identification accuracy and speed indirect measures of phonological encoding efficiency.

Results For category decisions, AWS were slower but not less accurate than ANS, and objects elicited more errors and slower responses than animals in both groups. For phoneme decisions, the groups did not differ in accuracy, although consonant errors outnumbered vowel errors in both groups; AWS were slower than ANS in consonant but not vowel identification, and consonant response time lagged behind vowel response time in AWS only.

Conclusions AWS were less efficient than ANS in semantic encoding, and they may harbor a consonant-specific phonological encoding weakness. Future independent studies are warranted to determine whether these positive findings are replicable and whether they constitute a marker of persistent stuttering.

Acknowledgments
This research was funded by the State University of New York College at Cortland 2010-2011 Faculty Research Program Grant. The author would like to thank Michaela Granato for assistance with stimulus selection, participant recruitment, and speech sample transcription; Mary Emm for assistance with interjudge reliability measurements; and all individuals who participated in the study.