Open Access
Research Article  |   May 24, 2017
Auditory Environment Across the Life Span of Cochlear Implant Users: Insights From Data Logging
 
Author Affiliations & Notes
  • Tobias Busch
    KU Leuven, Belgium
    Cochlear Technology Centre, Mechelen, Belgium
  • Filiep Vanpoucke
    Cochlear Technology Centre, Mechelen, Belgium
  • Astrid van Wieringen
    KU Leuven, Belgium
  • Disclosures: Tobias Busch is an early-stage researcher in the European international training network iCARE. He is pursuing a doctoral degree at the KU Leuven with Astrid van Wieringen. He is employed as a researcher by the company Cochlear, which manufactures the cochlear-implant sound processor and maintains the data-log database used in this study. Filiep Vanpoucke is a research employee of Cochlear.
  • Correspondence to Tobias Busch: tobias.busch@kuleuven.be
  • Editor: Nancy Tye-Murray
  • Associate Editor: Richard Dowell
Article Information
Journal of Speech, Language, and Hearing Research, May 2017, Vol. 60, 1362-1377. doi:10.1044/2016_JSLHR-H-16-0162
History: Received April 21, 2016; Revised August 17, 2016; Accepted August 31, 2016
 

Purpose We describe the natural auditory environment of people with cochlear implants (CIs), how it changes across the life span, and how it varies between individuals.

Method We performed a retrospective cross-sectional analysis of Cochlear Nucleus 6 CI sound-processor data logs. The logs were obtained from 1,501 people with CIs (ages 0–96 years). They covered over 2.4 million hr of implant use and indicated how much time the CI users had spent in various acoustical environments. We investigated exposure to spoken language, noise, music, and quiet, and analyzed variation between age groups, users, and countries.

Results CI users spent a substantial part of their daily life in noisy environments. As a consequence, most speech was presented in background noise. We found significant differences between age groups for all auditory scenes. Yet even within the same age group and country, variability between individuals was substantial.

Conclusions Regardless of their age, people with CIs face challenging acoustical environments in their daily life. Our results underline the importance of supporting them with assistive listening technology. Moreover, we found large differences between individuals' auditory diets that might contribute to differences in rehabilitation outcomes. Their causes and effects should be investigated further.

From learning language in a noisy nursery to tea time at the retirement home, different stages of life are characterized by different environments. Each of these comes with particular learning tasks and social interactions. For 360 million people with hearing loss (World Health Organization, 2015), the environment also provides acoustical challenges that can jeopardize learning and social participation. This holds true for those who perceive sounds through a cochlear implant (CI).
Auditory Environment Throughout Life
During the early years of life, the sounds that children hear are crucial for successful language acquisition (e.g., Hoff & Naigles, 2002; Huttenlocher, Waterfall, Vasilyeva, Vevea, & Hedges, 2010; Weisleder & Fernald, 2013). However, some environments are more conducive than others (Hoff, 2006). For children with a CI in particular, early implantation—and therefore early access to the sound environment—is clearly advantageous for language acquisition (e.g., Connor, Craig, Raudenbush, Heavner, & Zwolan, 2006; Nicholas & Geers, 2006, 2007; Svirsky, Teoh, & Neuburger, 2004). Yet their development of receptive and productive language also depends on the quantity and quality of the language they hear (DesJardin & Eisenberg, 2007; Quittner et al., 2013; Szagun & Stumper, 2012; Vohr, Topol, Watson, St Pierre, & Tucker, 2014). Moreover, high amounts of ambient noise are detrimental for their speech understanding (Caldwell & Nittrouer, 2013; Davidson, Geers, Blamey, Tobey, & Brenner, 2011) and development and health in general (Basner et al., 2014; Evans, 2006).
At day care centers and schools, young children with CIs face a variety of acoustical challenges. Aspects such as classroom size, number of students, and distance to the teacher can affect classroom acoustics to their disadvantage (Chute & Nevins, 2003; Crandell & Smaldino, 2000; Neuman, Wroblewski, Hajicek, & Rubinstein, 2012; Shield & Dockrell, 2003). Although mainstream classrooms often do not cater well to their needs (Neuman et al., 2012), more and more CI users are being placed in mainstream education (De Raeve & Lichtert, 2012; Geers & Brenner, 2003). Poor classroom acoustics might be one reason why the academic performance of children with CIs is often not on a par with that of their peers with unimpaired hearing (Huber, Hitzl, & Albegger, 2008; Mukari, Ling, & Ghani, 2007).
For adults with profound hearing loss, a CI can support integration into the wider world regarding both professional (Huber et al., 2008; Saxon, Holmes, & Spitznagel, 2001) and personal life (Faber & Grøntved, 2000; Hallberg & Ringdahl, 2004; Mäki-Torkko, Vestergren, Harder, & Lyxell, 2015). Conversely, if hearing aids fail to support their users in difficult listening situations, this can limit quality of life, professional development, and social participation and lead to frustration and nonuse (Gygi & Hall, 2016; Ng & Loke, 2015; Zhao, Bai, & Stephens, 2008).
The Need for Naturalistic Observations
Despite all this, knowledge about key aspects of the natural auditory environment of CI users is still limited: It is unclear how much spoken language CI users are exposed to throughout the day, how much of the speech they hear is embedded in noise, and how much time they spend in quiet. Nor do we know whether people in different stages of life differ in these respects.
At the same time, a lot of the variability in rehabilitation outcomes after cochlear implantation remains unexplained (Peterson, Pisoni, & Miyamoto, 2010; van Wieringen & Wouters, 2015). Boons et al. (2012), for instance, investigated the language outcomes of 288 children who received a CI before age 5 years. They were able to trace around half of the variation back to factors such as age at implantation, bilateral stimulation, and parental involvement. It is conceivable that some of the remaining variability originates from the auditory environment—that is, from differential exposure to spoken language (e.g., Vohr et al., 2014). Yet in the study by Boons et al. (2012)  and similar prospective studies (e.g., Davidson et al., 2011; Geers, Strube, Tobey, Pisoni, & Moog, 2011; Niparko et al., 2010; Percy-Smith et al., 2013; Quittner et al., 2013), differences in the everyday auditory environment of children have not been fully taken into account. In adults, the outcomes of cochlear implantation have similarly been shown to depend on several factors, including aspects of the environment (Francis, Yeagle, & Thompson, 2015; Hallberg, Ringdahl, Holmes, & Carver, 2005; Holden et al., 2013). Yet the role of the auditory environment in particular has not been investigated.
Environmental factors are typically assessed by means of self-reports (e.g., Boons et al., 2012; Holt, Beer, Kronenberger, & Pisoni, 2013) or brief behavioral observations (e.g., Niparko et al., 2010; Quittner et al., 2013; Szagun & Stumper, 2012). These methods are invaluable in establishing links between specific environmental factors and performance. However, to get a more complete picture of the natural environment, it is necessary to conduct comprehensive naturalistic observations (Fahrenberg, Myrtek, Pawlik, & Perrez, 2015; Reis & Gosling, 2010; Schwarz, 2007; Trull & Ebner-Priemer, 2013). For users of hearing aids and hearing implants, a means for naturalistic measurement is often already built into their device.
Hearing-Aid Data Logging
Many hearing aids keep records of various events in a so-called data log. Data logging is usually inconspicuous and unobtrusive, and the measurements are independent of the user's judgment or memory. This makes them relatively robust against response biases and reactive behavior and ideal for naturalistic observations. Studies have used data logging to investigate discrepancies between self-reported and device-recorded amounts of hearing-aid use (Laplante-Lévesque, Nielsen, Jensen, & Naylor, 2014; Muñoz, Preston, & Hicken, 2014; Walker et al., 2013) and other aspects of usage behavior (Banerjee, 2011b; Keidser & Alamudi, 2013; Mueller, Hornsby, & Weber, 2008).
In some hearing aids the data logs also allow inferences about the acoustical environment of the user, because they contain the output of an automatic scene classifier. Scene-classification algorithms are used to automatically adapt the signal processing to the user's sound environment. For listening to speech in background noise, for example, the microphone directionality may be increased to help with speech understanding. If there is only noise, the sound might be attenuated to provide a more comfortable listening experience.
On the basis of logged scene-classifier output, Mueller et al. (2008)  found that speech accounted for around 44% of the environments in which adults used their hearing aids. More than half of the speech exposure was in the presence of background noise, and an additional 22% of the sound environment was classified as noise. The remainder of the signal consisted of Music (4%) and Quiet (28%). Banerjee (2011a, 2011b), using a different scene classifier, found that 21% of hearing-aid use took place in loud and noisy environments, whereas as much as half of the users' environment was labeled as Quiet.
With the release of the Nucleus 6 CI sound processor (Cochlear, Sydney, Australia) in 2013, the combination of data logging and environmental scene classification was introduced to CIs (Mauger, Warren, Knight, Goorevich, & Nel, 2014). It is now available to a growing number of Nucleus 6 users. Because many people with CIs use their devices for the greater part of the day (Archbold, Nikolopoulos, & Lloyd-Richmond, 2009; Francis, Chee, Yeagle, Cheng, & Niparko, 2002; Markey et al., 2015; Proops et al., 1999), most relevant exposure to sound will be captured in the CI data logs. This makes data logging suitable for comprehensive naturalistic observations of the auditory environment.
Current Study
The main objective of the current study was to provide a description of the natural auditory environment of CI users using CI data logging. On the basis of the scene-classification data from a large cross-sectional sample of data logs, we built statistical models of the time CI users were exposed to different auditory environments. We addressed the following research questions:
  • Are users from different age groups exposed to different auditory environments—that is, does the exposure to the auditory scenes vary as a function of age group?

  • How much interindividual variability can be attributed to differences between geographical regions?

  • How much variability exists between users after controlling for age and region?

We expected to find differences between age groups and geographical regions because our sample covered a wide age range and users from diverse countries. We expected this diversity to be reflected in the users' daily life: Age groups may differ with respect to lifestyle, history of hearing loss, treatment, and usage behavior. Regions can differ in terms of typical living environments and family sizes, or the availability of medical and educational services. Furthermore, a region's candidacy and reimbursement policies shape the population of CI users regarding, for example, additional disabilities and socioeconomic status. We assumed that the corresponding differences in daily life go hand in hand with regional and age-related differences in the auditory environment, as measured by CI data logging.
We also expected to find variability between users after controlling for age group and region. Previous studies have reported large amounts of unexplained variation in CI users' rehabilitation outcomes (for overviews, see Peterson et al., 2010; van Wieringen & Wouters, 2015). One potential source of this variability could be differences in the user's auditory environments. Here we set out in search for evidence of such differences.
Method
Data Logging
The Nucleus 6 CI sound processor features an environmental scene classifier. As long as the CI is active, this classifier analyzes the user's acoustical environment. It distinguishes between six scenes: Speech in Quiet, Speech in Noise, Noise, Music, Quiet, and Wind. The classifier was trained through supervised machine learning—that is, by means of labeled examples. At an approximate rhythm of once per second, the algorithm determines how much the microphone input resembles each scene. When the resemblance to a particular scene is consistent over multiple seconds, a scene change is issued. The primary purpose of the classifier is to provide an optimal listening experience without the need for manual adjustments: Each scene is associated with optimized signal-processing settings, and CI users can set their device to follow these suggestions automatically (Mauger et al., 2014).
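The switching behavior described above can be pictured as a simple hysteresis rule: track the best-matching scene each second and commit to a change only after it has been stable for several seconds. The following Python sketch is a loose illustration of that idea, not Cochlear's actual algorithm; all names, scores, and the hold length are our own assumptions.

```python
def classify_stream(scores_per_second, hold=3):
    """Toy scene switcher with hysteresis: commit to a new scene only
    after it has been the best match for `hold` consecutive seconds.
    Returns a list of (second, scene) change events."""
    current, candidate, streak = None, None, 0
    changes = []
    for t, scores in enumerate(scores_per_second):
        best = max(scores, key=scores.get)  # scene with highest resemblance
        if best == candidate:
            streak += 1
        else:
            candidate, streak = best, 1
        if candidate != current and streak >= hold:
            current = candidate
            changes.append((t, current))
    return changes


# A stream that starts in Quiet and then turns noisy:
stream = [{"Quiet": 0.9, "Noise": 0.1}] * 3 + [{"Quiet": 0.2, "Noise": 0.8}] * 4
print(classify_stream(stream))  # [(2, 'Quiet'), (5, 'Noise')]
```

The hold period prevents the signal-processing settings from flickering between scenes on every transient sound.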
The sound processor also stores the time spent in each scene in a data log. This is done by means of six time counters—one per scene. The classifier, importantly, works exclusively on the signal from the sound-processor microphone, never on input from other sources (such as personal frequency modulation [FM] systems, t-coil, or audio cable). When other sources are used, the scene counters stop and a single separate counter for the respective source is incremented instead. In addition to the scene-classifier output and the time spent using alternative input sources, the data logs include other information such as the time on air—that is, the total duration of device use.
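The counter logic amounts to a per-second tally in which every second increments time on air plus exactly one other counter: the current scene while the microphone is the source, or a source counter while an accessory (FM, t-coil, audio cable) is active. A minimal illustrative sketch, our own simplification rather than the device firmware:

```python
def log_counters(stream):
    """Toy per-second tally of data-log counters. `stream` holds one
    tag per second: a scene label while the microphone is the input
    source, or an accessory tag (e.g., "FM") while another source is
    active. Scene counters pause during accessory use because the tag
    changes, but time on air keeps running throughout."""
    counters = {"time on air": 0}
    for tag in stream:
        counters["time on air"] += 1
        counters[tag] = counters.get(tag, 0) + 1
    return counters


counts = log_counters(["Quiet", "Quiet", "FM", "Speech in Noise"])
print(counts["time on air"], counts["Quiet"], counts["FM"])  # 4 2 1
```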
Data Collection
During a clinic visit the sound processor is connected to the Custom Sound (Cochlear Ltd.) software and the data log can be exported. Over time, clinics thus accumulate data logs from their CI users. These logs cover the time between consecutive visits to the clinic, which can range from a few days to more than a year.
Some clinics have kindly agreed to share anonymized data logs with Cochlear under a proper legal agreement for the purpose of postmarket research. Before sharing, the data are strictly anonymized: Only the users' year of birth, clinic, and country are retained. The clinic is coded so that the data do not allow identification of the clinic name or clinician. This database was accessed for this study in February 2016. The initial data set contained 7,133 logs from 1,820 Nucleus 6 CI sound-processor users. Before the analysis, data cleaning and preprocessing were performed.
Preprocessing
Short Logs
First, 2,642 logs were removed because they were shorter than 14 days. A short log likely indicates that the users were in the early fitting phase or that there was a problem with the device. Therefore, many short logs might not be representative of typical CI use.
Overlapping Logs
In the next step we merged overlapping logs from the same user. Some people with a unilateral CI wear multiple sound processors for the same implant interchangeably, producing multiple logs that each cover only part of their CI use. For bilateral users, on the other hand, merging was required to avoid data duplication.
We merged logs from the same CI whenever they overlapped by more than 12 hr. We summed the counters for time spent in each scene and time on air. Starting date, end date, and log duration for the merged log were set according to the spanned time. Overlapping bilateral logs were merged mostly following the same procedure, but the summed counters were corrected for potential duplication. For this, the proportion of overlap was calculated for each log and multiplied by the respective counters. The smaller of the two estimates was then subtracted from the summed counters. After overlapping logs were merged, the sample contained 3,724 data points.
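The merging rule can be sketched as follows. The data structure and field names are hypothetical, but the overlap correction follows the description above: each log's share of the overlapping period is estimated proportionally, and the smaller estimate is subtracted from the summed counters.

```python
from dataclasses import dataclass


@dataclass
class Log:
    start: float      # first logged day
    end: float        # last logged day
    counters: dict    # scene / time-on-air counters, in hours


def merge_logs(a, b):
    """Merge two overlapping logs (illustrative sketch). Counters are
    summed, then corrected for the doubly covered period by
    subtracting the smaller proportional estimate, as in the text."""
    overlap = max(0.0, min(a.end, b.end) - max(a.start, b.start))
    merged = {}
    for key in a.counters:
        total = a.counters[key] + b.counters[key]
        # Each log's estimated counter share falling inside the overlap:
        est_a = a.counters[key] * overlap / (a.end - a.start)
        est_b = b.counters[key] * overlap / (b.end - b.start)
        merged[key] = total - min(est_a, est_b)
    return Log(min(a.start, b.start), max(a.end, b.end), merged)
```

For example, merging a 10-day log with 10 hr of speech and a 10-day log with 20 hr of speech that overlap by 5 days yields 30 − min(5, 10) = 25 hr over the spanned 15 days.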
Age and Age Groups
The user's age was calculated for each data log as the difference between the middle of the log and the user's date of birth (for anonymization, all birth dates were rounded to the nearest June). We then divided logs into age groups on the basis of the user's age. The age groups were chosen so that they presumably correspond to major phases of life—that is, different stages of childhood, adolescence, and adulthood. The groups were Early Childhood (less than 6 years old), Primary School (6–11 years), Secondary School (12–17 years), Adult (18–64 years), and Senior (65 and above). It is important to note that the group assignment was based on age alone—it is not guaranteed, for example, that all users in the Primary School or Secondary School group actually went to school.
The motivation for modeling age-group means rather than using age as a continuous variable was twofold. First, age groups allowed us to describe nonlinear age-related trends with a straightforward model. Second, our measure of age was not accurate enough: Age was calculated as the difference between the user's date of birth and the middle of the data log, yet dates of birth were rounded by up to six months, and the logs often spanned months as well. This uncertainty was also one reason why age groups were made relatively wide (at least six years)—narrower age groups would have caused too many false categorizations.
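The grouping rule is a simple threshold mapping from age to group label, transcribed directly from the boundaries above:

```python
def age_group(age_years):
    """Assign a (possibly imprecise) age to one of the study's five
    age groups, using the boundaries given in the text."""
    if age_years < 6:
        return "Early Childhood"
    if age_years < 12:
        return "Primary School"
    if age_years < 18:
        return "Secondary School"
    if age_years < 65:
        return "Adult"
    return "Senior"
```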
All Speech and All Noise
The scene classifier distinguishes between Speech in Quiet, Speech in Noise, and Noise. We summed Speech in Quiet and Speech in Noise to create a measure of the total amount of speech in the environment (All Speech). Because environmental noise can be a source of annoyance, stress, and fatigue, and has adverse effects beyond its interference with communication (Basner et al., 2014; Bess & Hornsby, 2014; Skagerstrand, Stenfelt, Arlinger, & Wikström, 2014), we also created a measure of the total amount of noise in the environment (All Noise). This was done by summing the counters for Noise and Speech in Noise. All five speech and noise variables were used in the analysis.
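The two derived measures are plain sums of logged counters; note that Speech in Noise contributes to both, so All Speech and All Noise overlap rather than partition the day. As a sketch:

```python
def derived_totals(counters):
    """Aggregate scene counters (hours) into the two derived
    measures used in the analysis. Speech in Noise enters both sums,
    so the totals overlap rather than partition the day."""
    return {
        "All Speech": counters["Speech in Quiet"] + counters["Speech in Noise"],
        "All Noise": counters["Noise"] + counters["Speech in Noise"],
    }
```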
Hours per Day
We were interested in the average number of hours per day that users spent using their CI (i.e., time on air) and in each scene. We therefore divided the scene-counter values (in hours) by the duration of the log (in days).
Low Time on Air
Because we were interested in the environment experienced during regular CI use, we excluded logs with extraordinarily low levels of time on air. Low time on air could indicate deliberate nonuse of the CI, medical or technical problems, or a recent implant. We excluded data points for which daily time on air was more than 1.5 times the interquartile range (IQR) below the sample median (median = 11.08, IQR = 5.85)—that is, below 2.31 hr/day (see Figure 1a). Of the 3,724 data points, 283 (7.6%) were excluded—Early Childhood: 66 of 638 (10.3%); Primary School: 22 of 452 (4.9%); Secondary School: 23 of 185 (12.4%); Adult: 118 of 1,492 (7.9%); Senior: 54 of 957 (5.6%).
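The exclusion rule is the standard lower Tukey fence applied to daily time on air; with the reported sample statistics it reproduces the 2.31-hr/day cutoff:

```python
def low_use_cutoff(median, iqr, k=1.5):
    """Lower Tukey fence used for the exclusion: median - k * IQR."""
    return median - k * iqr


cutoff = low_use_cutoff(11.08, 5.85)  # ~2.31 hr/day, as in the text
```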
Figure 1.

Average daily duration of device use (time on air), on the basis of user-averaged data. (a) Time on air against user age in the middle of the averaged logs, fitted with locally weighted polynomial regression (LOESS; blue line). (b) Device use by age group.
Final Sample
The preprocessed sample contained 3,441 observations from 1,501 users, with age ranging from less than 1 to over 96 years (median = 44.7, IQR = 56.0). The data came from 69 clinics in 13 countries. The number of clinics, users, and data logs in the final sample is shown in Table 1. The measurements covered extended periods of time, from two weeks to more than 14 months (median = 48.9 days, IQR = 61.8). The combined time on air of all observations amounted to over 2.4 million hr.
Table 1. Final sample after preprocessing.
Country Clinics Data logs (users)
Early childhood Primary school Secondary school Adult Senior Total
Australia 4 11 (6) 33 (20) 5 (3) 157 (45) 183 (46) 389 (118)
Belgium 11 131 (45) 134 (51) 20 (13) 163 (60) 86 (26) 534 (191)
Canada 3 1 (1) 2 (1) 7 (4) 8 (3) 18 (9)
France 2 43 (19) 16 (9) 17 (11) 19 (12) 3 (2) 98 (53)
Germany 3 93 (32) 81 (36) 35 (24) 461 (242) 120 (78) 790 (407)
Hong Kong 1 2 (1) 2 (1)
India 10 46 (13) 22 (9) 14 (8) 22 (6) 104 (36)
Malaysia 2 8 (2) 4 (1) 12 (3)
New Zealand 3 59 (20) 21 (13) 7 (7) 150 (58) 142 (50) 379 (146)
Spain 1 4 (3) 1 (1) 9 (6) 2 (2) 16 (12)
Switzerland 1 1 (1) 2 (1) 3 (2)
Netherlands 2 58 (21) 80 (32) 40 (16) 197 (80) 143 (60) 518 (201)
United States 26 118 (64) 41 (22) 14 (10) 189 (107) 216 (121) 578 (323)
Total 69 572 (225) 430 (194) 162 (96) 1,374 (620) 903 (388) 3,441 (1,501)
Note. Some users contributed logs to multiple age groups or countries. See total for the number of unique users in each country or age group.
Statistical Analysis
Our main analysis concerned the time that CI users spent in five acoustical scenes (Speech in Quiet, Speech in Noise, Noise, Music, Quiet) and the aggregated total amount of speech (All Speech—i.e., Speech in Quiet + Speech in Noise) and noise (All Noise—i.e., Noise + Speech in Noise). Exposure to Wind was not analyzed, because of its low prevalence: In the user-averaged data, 95% of the users had no more than 2.51 min of Wind exposure per day; median exposure was below 2 s (IQR = 19 s).
Because the scene-classifier output does not account for time when assistive listening devices (such as FM or t-coil) are used, we analyzed how often this was the case. Furthermore, we investigated the users' daily duration of CI use (time on air).
There were multiple data logs in the sample from 58.0% of users (users with two logs: 24.6%; three logs: 14.6%; four logs: 9.4%; five logs: 4.1%; six logs: 2.1%; seven to 13 logs: 3.2%). Such repeated measurements do not carry independent information. This has to be taken into account in the statistical analysis.
User Averages
For descriptive statistics and data visualization we aggregated repeated measures into user averages. We used the procedure we had used to merge overlapping logs (see earlier) and recalculated hours per day and age groups. For statistical inference, the clustering of observations by user and region was dealt with more thoroughly by using hierarchical linear modeling (HLM).
HLM
HLM was used to model the effect of age group on CI users' exposure to the different scenes and the contribution of clustering by country and user to the variability among observations (i.e., data logs). A separate HLM model was fitted for each outcome variable. In these models, dependencies between observations from the same user and country were accounted for by random effects. Age group was entered into the model as a predictor on Level 1, that is, for each observation. The resulting three-level random-intercept model can be formalized as

y_{luc} = β · AgeGroup_{luc} + w_{00c} + r_{0uc} + e_{luc},

with

w_{00c} ~ N(0, σ²_country), r_{0uc} ~ N(0, σ²_user), e_{luc} ~ N(0, σ²_residual),
where y_{luc} is the time spent in the auditory scene in question, recorded in log l from user u in country c. The five-dimensional vector β contains the age-group means, and AgeGroup_{luc} is a corresponding five-dimensional vector of indicator variables coding which age group the user belonged to at the time of the recording. The age groups were cell-means coded to obtain the group means directly (hence removing the intercept from the model).
Three variance components represent deviation from the age-group mean: Two random effects model the variation shared by all observations from the same country (w_{00c}) and user (r_{0uc}); the residual e_{luc} captures the unexplained variation. They are modeled as normally distributed random variables with mean zero and variance σ²_country, σ²_user, and σ²_residual, respectively.
Restricted maximum-likelihood estimates of the model parameters and bootstrapped 95% confidence intervals were obtained using the R programming language (R Core Team, 2015) with the lme4 package (Bates, Mächler, Bolker, & Walker, 2015). The model residuals were inspected for violations of the assumptions of normality, homoscedasticity, and independence.
Whether adding age group brought significant improvement to the model fit was determined by running likelihood-ratio tests against the null model (i.e., a model containing only an intercept and the random effects). As suggested by Nakagawa and Schielzeth (2013), we report two types of R²: R²_marginal, the proportion of variance explained by the fixed effect (age group) alone, and R²_conditional, the proportion of variance explained by age group and the random effects for user and country.
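For a Gaussian model such as this one, the two R² statistics of Nakagawa and Schielzeth (2013) reduce to ratios of variance components. Given an estimate of the variance attributable to the fixed effects, a sketch is:

```python
def nakagawa_r2(var_fixed, var_random, var_residual):
    """Marginal and conditional R² for a Gaussian mixed model
    (Nakagawa & Schielzeth, 2013). Marginal counts only the variance
    explained by the fixed effects; conditional adds the random-effect
    variances listed in `var_random` (here: country and user)."""
    total = var_fixed + sum(var_random) + var_residual
    return {
        "marginal": var_fixed / total,
        "conditional": (var_fixed + sum(var_random)) / total,
    }
```

For example, with fixed-effect variance 2, random-effect variances [1, 1], and residual variance 6, R²_marginal = 0.2 and R²_conditional = 0.4.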
We further analyzed the contribution of each variance component to the variation unexplained by age group. This so-called residual intraclass correlation coefficient (ICC) is the ratio of the variance explained by one variance component (here σ²_country, σ²_user, or σ²_residual) to the variance explained by all of them (here σ²_country + σ²_user + σ²_residual). The ICC also indicates whether a random effect should be included in the model. In fact, an earlier version of the model included a variance component for clinics; however, its ICC was so small that we decided to exclude it from the final analysis. As a last matter, Tukey's pairwise comparisons of all age-group means were performed using R's multcomp package (Hothorn, Bretz, & Westfall, 2008) and a familywise error rate of α = .05.
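The residual ICCs are thus each component's share of the variance left unexplained by age group. A minimal sketch, with hypothetical variance estimates for illustration:

```python
def residual_icc(variances):
    """Residual intraclass correlations: each variance component's
    share of the variance left unexplained by the fixed effects."""
    total = sum(variances.values())
    return {name: v / total for name, v in variances.items()}


# Hypothetical variance components (not the study's estimates):
icc = residual_icc({"country": 1.0, "user": 2.0, "residual": 7.0})
print(icc["user"])  # 0.2
```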
Results
The user-averaged data showed that CI users in our sample used their devices for an average of 10.74 hr/day. Results for the different age groups are shown in Figure 1b.
On average, users spent 1.27 hr/day (SD = 0.80) in environments that were classified as Speech in Quiet and 2.48 hr/day (SD = 1.20) in Speech in Noise. On average, CI users spent 3.76 hr/day (SD = 1.70) in either of the two speech environments (i.e., All Speech). An additional 1.47 hr/day were classified as Noise (SD = 1.10). On average, the time spent in either of the two noise environments (i.e., All Noise) amounted to 3.96 hr/day (SD = 1.89). In addition, an average of 0.59 hr/day were classified as Music (SD = 0.57) and 4.91 hr/day as Quiet (SD = 2.75). The distribution of the user-averaged data is shown in Table 2 and in the boxplots in Figures 1–5.
Table 2. Exposure to environmental scenes and device use (time on air) in average hours per day, on the basis of the user-averaged data (n = 1,501).
M SD Percentile
5 25 50 75 95
Speech in quiet 1.27 0.80 0.26 0.65 1.12 1.74 2.82
Speech in noise 2.48 1.20 0.70 1.61 2.35 3.28 4.71
All speech 3.76 1.70 1.19 2.47 3.63 4.97 6.74
Noise 1.47 1.10 0.29 0.75 1.24 1.88 3.48
All noise 3.96 1.89 1.19 2.59 3.77 5.06 7.46
Music 0.59 0.57 0.06 0.19 0.4 0.84 1.83
Quiet 4.91 2.75 0.99 2.69 4.58 6.92 9.84
Time on air 10.74 3.45 4.30 8.35 11.31 13.41 15.22
Figure 2.

Exposure to spoken-language environments by age group: (a) Speech in Quiet; (b) Speech in Noise; (c) All Speech (Speech in Quiet + Speech in Noise). Left: The distribution of the user-averaged data is shown in gray as annotated box plots. The whiskers extend to the 5th and 95th percentiles. The modeled age-group means and bootstrapped 95% confidence intervals are shown in blue. Right: The estimated distribution of each variance component is depicted as a density plot. Individual random intercepts are depicted as rug plots (horizontal lines). Random intercepts outside of the axis limits are not shown.
Figure 3.

Exposure to noisy environments by age group: (a) Noise; (b) All Noise (Noise + Speech in Noise). The distribution of the user-averaged data is shown in gray. Modeled age-group means and variance components are shown in blue. See Figure 2 for a detailed explanation.

Figure 4.

Exposure to Music by age group. The distribution of the user-averaged data is shown in gray. Modeled age-group means and variance components are shown in blue. See Figure 2 for a detailed explanation.

Figure 5.

Exposure to Quiet by age group. The distribution of the user-averaged data is shown in gray. Modeled age-group means and variance components are shown in blue. See Figure 2 for a detailed explanation.

The scene-classifier output accounts only for sounds that were picked up by the sound-processor microphone when no other input sources were used. Assistive listening devices (such as FM, t-coil, or audio cable) are logged separately, without scene classification. In our sample, 75% of users had used such devices, but only 32% had used them for an average of at least 5 min/day. Table 3 shows their use by age group. Other remote input sources, such as Bluetooth streaming, were used by less than 1% of users in all age groups and are therefore not listed.
Table 3. Use of assistive listening devices, on the basis of the user-averaged data (n = 1,501).
Assistive listening device   Age group          % of users^a   Percentile (hr/day)
                                                               5th    25th   50th   75th   95th
Any                          Early childhood    18             0.10   0.19   0.29   0.87   1.63
                             Primary school     52             0.10   0.21   0.47   1.04   3.01
                             Secondary school   42             0.10   0.27   0.64   1.45   3.82
                             Adult              32             0.09   0.17   0.39   1.07   3.97
                             Senior             29             0.10   0.15   0.27   0.94   3.66
                             Total              32             0.10   0.17   0.38   1.05   3.67
FM or t-coil^b               Early childhood    17             0.10   0.19   0.27   0.71   1.49
                             Primary school     46             0.10   0.21   0.49   1.08   2.87
                             Secondary school   23             0.09   0.21   0.53   1.11   3.68
                             Adult              21             0.09   0.15   0.30   0.64   3.60
                             Senior             26             0.10   0.15   0.30   0.93   3.53
                             Total              25             0.10   0.16   0.34   0.89   3.35
Audio cable                  Early childhood    1.8            0.09   0.09   0.11   0.18   0.32
                             Primary school     8.2            0.12   0.15   0.23   0.46   1.26
                             Secondary school   23             0.17   0.27   0.61   1.37   3.57
                             Adult              14             0.10   0.21   0.54   1.17   3.21
                             Senior             2.1            0.11   0.15   0.18   0.36   6.29
                             Total              8.9            0.09   0.17   0.43   1.07   3.28
Note. Percentiles are based on cochlear implant (CI) users who used the respective device for more than 5 min/day.
^a CI users who have used the respective device for an average of at least 5 min/day.
^b A common form of personal frequency modulation (FM) systems uses a neck loop that is connected to the sound processor via t-coil and will thus be counted as such. Because it is not possible to distinguish between the two, FM and t-coil are presented together.
We used HLM to further investigate the effect of age group on exposure to the different scenes and the contribution of clustering by country and user to the variability among observations. The results are reported in the following sections.
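Each likelihood-ratio test below compares the model with the age-group fixed effect against a null model without it; the statistic is referred to a χ² distribution with 4 degrees of freedom (five age groups, hence four contrasts). The reported p-values can be spot-checked with scipy:

```python
from scipy.stats import chi2

def lrt_p_value(statistic: float, df: int = 4) -> float:
    """p-value of a likelihood-ratio test: the chi-square survival function."""
    return chi2.sf(statistic, df)

# Statistic reported for the Speech in Quiet model: chi2(4) = 179.52
print(lrt_p_value(179.52))  # far below .001
```

The survival function `chi2.sf` is the complement of the CDF, i.e., the probability of a statistic at least this large under the null model.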
Exposure to Spoken Language
Speech in Quiet
Analysis with HLM confirmed age group as a significant predictor of the time spent in Speech in Quiet (likelihood-ratio test, χ²(4) = 179.52, p < .001). Age group explained 10% of the observed variance (R²_marginal = .10). Together with the random effects for country and user, the model explained 83% of the variance (R²_conditional = .83). Further analysis of the ICCs of the variance components showed that 3.3% of the remaining variance was attributable to differences between countries (σ²_country = 0.02, SD = 0.14); variability between users contributed 77.7% (σ²_user = 0.48, SD = 0.69), and 19.0% was unexplained (σ²_residual = 0.12, SD = 0.34).
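The ICCs reported here are each variance component divided by the total variance left unexplained by the fixed effect. Recomputing them from the rounded variances above reproduces the published percentages up to rounding:

```python
def icc_shares(**variances):
    """Proportion of total unexplained variance contributed by each component."""
    total = sum(variances.values())
    return {name: v / total for name, v in variances.items()}

# Variance components reported for the Speech in Quiet model
shares = icc_shares(country=0.02, user=0.48, residual=0.12)
print(shares)  # roughly 3.2% country, 77.4% user, 19.4% residual
```

Small discrepancies from the published 3.3%, 77.7%, and 19.0% arise because the variances above are themselves rounded.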
The modeled age-group means ranged from 1.02 hr/day in the Senior group to 1.80 hr/day in the Primary School group (see Figure 2a). All group differences were statistically significant at the α = .05 significance level, except for the difference between the Early Childhood and Secondary School groups and the difference between the Adult and Senior groups.
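The bootstrapped 95% confidence intervals for the age-group means (as shown in Figure 2) can be approximated with a plain nonparametric percentile bootstrap. This sketch uses synthetic data and is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_mean_ci(x, n_boot=2000, alpha=0.05, rng=rng):
    """Percentile-bootstrap confidence interval for the mean of x."""
    means = np.array([
        rng.choice(x, size=len(x), replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Synthetic daily Speech in Quiet hours for one age group
hours = rng.gamma(shape=2.0, scale=0.8, size=150)
lo, hi = bootstrap_mean_ci(hours)
print(lo, hours.mean(), hi)
```

Resampling users with replacement and recomputing the mean each time yields an interval that requires no normality assumption about the exposure distribution.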
Speech in Noise
Exposure to Speech in Noise was significantly predicted by age group (likelihood-ratio test, χ²(4) = 191.38, p < .001). Age group explained 8.6% of the observed variance (R²_marginal = .086). Including the variation explained by the random effects, the model explained 78% of the variance (R²_conditional = .78).
The random effect for country accounted for 10.0% of the variation not explained by age group (σ²_country = 0.14, SD = 0.38); variation between users explained 66.4% (σ²_user = 0.96, SD = 0.98). The remaining 23.6% of the variation between observations was unexplained (σ²_residual = 0.34, SD = 0.58).
Mean exposure to Speech in Noise ranged from 2.37 hr/day in the Senior group to 3.46 hr/day in the Secondary School group (see Figure 2b). The differences between the Early Childhood and Adult groups, the Early Childhood and Senior groups, and the Primary School and Secondary School groups were not statistically significant. All other pairwise comparisons were significant.
To get a better understanding of when the Speech in Noise scene would be activated, we played LIST sentences (van Wieringen & Wouters, 2008) at 70 dBA to the Nucleus 6 sound processor. Cocktail-party noise was mixed in at decreasing signal-to-noise ratios (SNRs). We found that at 15 dB SNR and lower, the classifier switched from the Speech scene to the Speech in Noise scene.
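Mixing speech and noise at a prescribed SNR, as in our classifier check, amounts to scaling the noise so that the speech-to-noise RMS ratio matches the target. A minimal sketch with synthetic signals (stand-ins for the LIST sentences and cocktail-party noise, not the actual test setup):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise RMS ratio equals `snr_db` dB,
    then return the mixture and the applied noise scale factor."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    scale = (rms(speech) / rms(noise)) * 10 ** (-snr_db / 20)
    return speech + scale * noise, scale

rng = np.random.default_rng(1)
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 220 * t)   # stand-in for a sentence
noise = rng.standard_normal(fs)        # stand-in for cocktail-party noise

mix, scale = mix_at_snr(speech, noise, snr_db=15.0)
rms = lambda x: np.sqrt(np.mean(x ** 2))
achieved = 20 * np.log10(rms(speech) / rms(scale * noise))
print(achieved)  # 15.0 by construction
```

Decreasing `snr_db` step by step reproduces the procedure we used to find the level at which the classifier switches from Speech to Speech in Noise.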
All Speech (Speech in Quiet + Speech in Noise)
Age group was a significant predictor of the total amount of speech in users' environments (i.e., Speech in Quiet + Speech in Noise; likelihood-ratio test, χ²(4) = 225.78, p < .001). Age group explained 12% of the observed variance (R²_marginal = .12). Together with the random effects for country and user, the model explained 80% of the variance (R²_conditional = .80). The ICCs of the variance components showed that, after taking age group into account, between-countries variability explained 2.57% of the remaining variability in exposure to speech (σ²_country = 0.07, SD = 0.26), variation between users explained 75.2% (σ²_user = 1.99, SD = 1.41), and 22.3% was unexplained (σ²_residual = 0.59, SD = 0.77).
Mean exposure to speech environments ranged from 3.34 hr/day in the Senior group to 5.17 hr/day in the Primary School group (see Figure 2c). The differences between all groups were statistically significant at the α = .05 significance level, except for the difference between the Primary School and Secondary School groups.
Exposure to Noise
Noise
Exposure to environments classified as Noise varied significantly between age groups (likelihood-ratio test, χ²(4) = 303.45, p < .001). Age group explained 14% of the observed variance (R²_marginal = .14). Taking into account the variance explained by the random effects for user and country, the model explained 84% (R²_conditional = .84). Of the variability not explained by age group, the variance component for country contributed 27.6% (σ²_country = 0.37, SD = 0.61), between-users variation accounted for 53.7% (σ²_user = 0.72, SD = 0.85), and 18.7% was unexplained (σ²_residual = 0.25, SD = 0.50).
Average exposure to Noise ranged from 0.95 hr/day in the Early Childhood group to 2.19 hr/day in the Adult group (see Figure 3a). All group differences were statistically significant except for the difference between the Senior and Secondary School groups.
All Noise (Noise + Speech in Noise)
Age group was a significant predictor of the amount of exposure to noisy environments (i.e., environments classified as either Noise or Speech in Noise; likelihood-ratio test, χ²(4) = 146.79, p < .001). The fixed effect for age group explained 6.2% of the observed variance (R²_marginal = .062). Together with the variance explained by the random effects, the model accounted for 81% of the variance (R²_conditional = .81). The residual ICCs revealed that variation between countries contributed 22.2% of the variance not explained by age group (σ²_country = 0.91, SD = 0.95), 57.9% was attributable to differences between users (σ²_user = 2.36, SD = 1.54), and 20.0% was unexplained (σ²_residual = 0.82, SD = 0.90).
Mean daily exposure to noisy environments ranged from 3.41 hr/day in Early Childhood to 5.12 hr/day in Secondary School (see Figure 3b). The Primary School, Secondary School, and Adult groups' means did not differ significantly from each other. All other pairwise comparisons were statistically significant.
Exposure to Music
Age group was a significant predictor of the time people were exposed to Music (likelihood-ratio test, χ²(4) = 683.15, p < .001). Age group explained 34% of the observed variance (R²_marginal = .34). The variance explained by age group and the random effects for user and country together was 86% (R²_conditional = .86).
After taking age group into account, variation between countries explained 14.7% of the remaining variability (σ²_country = 0.03, SD = 0.18) and variation between users explained 64.6% (σ²_user = 0.15, SD = 0.38). The remaining 20.7% was unexplained (σ²_residual = 0.05, SD = 0.22).
Exposure to Music ranged from 0.33 hr/day in the Senior group to 1.22 hr/day in the Early Childhood group (see Figure 4). All age-group means were significantly different at the α = .05 significance level.
Exposure to Quiet
The time spent in environments classified as Quiet varied significantly between age groups (likelihood-ratio test, χ²(4) = 421.91, p < .001). Age group explained 24% of the observed variance (R²_marginal = .24). Together with the variance explained by the random effects for user and country, the model accounted for 88% of the variance between observations (R²_conditional = .88).
Of the variation unexplained by age group, the variance component for country accounted for 9.6% (σ²_country = 0.59, SD = 0.77) and variation between users explained 74.2% (σ²_user = 4.60, SD = 2.14). The remaining 16.3% was unexplained (σ²_residual = 1.01, SD = 1.01).
Mean daily exposure to Quiet ranged from 2.27 hr/day in the Early Childhood group to 5.95 hr/day in the Senior group (see Figure 5). All group differences were statistically significant.
Discussion
We explored the natural auditory environment of CI users through data logs of the Cochlear Nucleus 6 CI sound processor. To be specific, we analyzed the output of the environmental scene classifier stored in these logs. Our sample contained 3,441 observations from 1,501 users in 13 countries. The logs covered time periods from two weeks to more than one year, and contained the amount of time the CI users had spent in various acoustical scenes.
As expected, the auditory environment varied between age groups. Additional variation was attributable to differences between users and countries.
Differences Between Age Groups
Exposure to Spoken Language
We found the amount of spoken language (i.e., Speech in Quiet + Speech in Noise) in the environment of CI users to be relatively high, especially for young CI users (Early Childhood: M = 3.97 hr/day; Primary School: M = 5.17 hr/day; Secondary School: M = 4.77 hr/day; Adult: M = 3.62 hr/day; Senior: M = 3.34 hr/day). Granting access to spoken language is an important motivation for cochlear implantation across age groups. In early life, sufficient exposure to spoken language is important for successful language acquisition (Vohr et al., 2014; Weisleder & Fernald, 2013). For adult users, regaining access to speech communication, and social participation along with it, increases quality of life (Contrera et al., 2016; Hallberg & Ringdahl, 2004; Mäki-Torkko et al., 2015; Zhao et al., 2008). Other hypothesized benefits of cochlear implantation in older adults, such as reduced cognitive decline and reduced depression (e.g., Choi et al., 2016; Mosnier et al., 2015), could in part also be driven by improved social functioning (Lin et al., 2012).
Our results show that CI users across the life span are navigating a world full of speech. However, when interpreting the amount of speech in the data logs, it should be noted that many of the CI user's own utterances will count toward speech, as might certain TV or radio programs. A separate class for the user's own voice would be preferable, because it would make it possible to assess speech input on its own, as well as the user's engagement in conversations. TV consumption is likewise an interesting factor in and of itself in child development and aging (e.g., Courage & Howe, 2010; Eggermont & Vandebosch, 2001). On the other hand, some intelligible, albeit soft or distant, speech might be classified as Quiet, because sound pressure overrules the other acoustical features, so that all sounds below 50 dB SPL are labeled as Quiet (this is the case for all soft sounds, e.g., soft noise and soft music). Regarding language development in particular, it should also be kept in mind that some important features of communication are not captured by data logging, such as the amount of turn-taking (Ambrose, VanDam, & Moeller, 2014), the complexity of parents' speech (DesJardin & Eisenberg, 2007; Szagun & Rüter, 2009; Szagun & Stumper, 2012), parental engagement (Niparko et al., 2010; Quittner et al., 2013), and the use of facilitative language techniques (DesJardin, Ambrose, & Eisenberg, 2009, 2011; DesJardin & Eisenberg, 2007; Szagun & Rüter, 2009; Szagun & Stumper, 2012).
Because no performance data were available, we cannot tell whether the amount of spoken language in the environment was sufficient. Whether the data logs' count of spoken language, despite its limitations, is a good predictor of language development and rehabilitation will have to be the subject of further studies.
We also analyzed the exposure to Speech in Noise and Speech in Quiet separately. We found high daily amounts of Speech in Noise across all age groups (Early Childhood: M = 2.46 hr/day; Adults: M = 2.56 hr/day; Senior: M = 2.37 hr/day), with a clear peak in school age (Primary School: M = 3.42 hr/day; Secondary School: M = 3.46 hr/day). Compared with Speech in Noise, the exposure to Speech in Quiet was low: It lay between 1 and 2 hr/day for all age groups (Early Childhood: M = 1.53 hr/day; Primary School: M = 1.80 hr/day; Secondary School: M = 1.35 hr/day; Adults: M = 1.11 hr/day; Senior: M = 1.02 hr/day).
It has been shown that understanding Speech in Noise is challenging for people with CIs (e.g., Caldwell & Nittrouer, 2013; Hazrati & Loizou, 2012) and can, for instance, jeopardize academic performance (Chute & Nevins, 2003). We found that the classifier categorized speech as Speech in Noise at an SNR of less than 15 dB. This is in line with the calculation of the Speech Intelligibility Index, where SNRs below 15 dB are considered to have a negative impact on intelligibility (American National Standards Institute, 1997, section 4.7).
Exposure to Noisy Environments
Many CI users spent large parts of their day in noisy environments. All age groups spent an average of more than 4 hr/day in Noise or Speech in Noise, except for the Early Childhood group (M = 3.41). In the Secondary School group, noisy environments even amounted to 5.12 hr/day. Yet we cannot infer how much the users in our sample struggled with the noise in their environment: The effects of noise are mediated by aspects that are not covered by the data logs, such as the difficulty of the listening task and subjective factors (Plyler, Bahng, & von Hapsburg, 2008; Whitmal & Poissant, 2009).
Regardless, our data emphasize that listening in noise is the rule rather than the exception for people with CIs, and are a reminder to support these people with sound-cleaning technologies such as directional microphones and noise reduction. Furthermore, assistive listening technologies (such as FM or t-coil) can support speech understanding in difficult listening situations (Fitzpatrick, Séguin, Schramm, Armstrong, & Chénier, 2009; Iglehart, 2004; Wolfe et al., 2013).
We were surprised to find that adoption of assistive listening technologies was low in terms of both number of users and duration of use: Only 32% of CI users had used remote sound sources for a noteworthy amount of time; this was more common in the Primary School and Secondary School groups (52% and 42% of users, respectively). Among those who did use assistive listening devices, the amount of use varied, and there were again differences between age groups, but for the vast majority in all age groups it barely exceeded 1 hr/day (see Table 3). Our data do not reveal whether assistive listening technologies were used whenever they would have been beneficial, but considering the sheer amount of noise picked up by the scene classifier (which is only logged when no remote sound source is used), it seems likely that there is potential for improvement.
Exposure to Music
Music exposure peaked in the Early Childhood group, with a mean of 1.22 hr/day, and declined with increasing age (Primary School: M = 1.07; Secondary School: M = 0.69; Adult: M = 0.46; Senior: M = 0.33). It is unsurprising that younger age groups are exposed more to Music—after all, Music is certainly more common in preschools and schools than in most work environments. Furthermore, it has been reported that adults with preimplantation experience with music enjoy music less after the implantation (Gfeller et al., 2000). Our findings could be a reflection of this.
However, there are confounding factors: First, whether Music is recognized as such (instead of, for example, as Noise or Quiet) also depends on genre and playback level. Preferences for those could be confounded with age.
Second, some users have reported that children's voices are occasionally classified as Music. Although we have no formal evidence of it, future studies should be aware of this potential confound.
Last, if music is played back any other way than through the sound-processor microphone (e.g., through audio cable or Bluetooth streaming), it will not be counted by the classifier. Although streaming was not at all common, audio cables were indeed more often used by older age groups (see Table 3). However, this is unlikely to account for the entire decline in Music exposure, because even at its peak in the Secondary School group, only 23% of users had used an audio cable for more than 5 min/day, with a median of 0.61 hr/day. Moreover, it is unclear how much of the input through the audio cable actually was music.
Exposure to Quiet
With increasing age, more of the users' environment was classified as Quiet. Average exposure to Quiet more than doubled between the youngest (Early Childhood: M = 2.27 hr/day) and oldest (Senior: M = 5.95 hr/day) age groups. This increase in Quiet could be the result of lifestyle changes, about which we can only speculate. Perhaps more time is spent with contemplative activities (e.g., studying or reading) or in quiet environments (e.g., in the office rather than on the playground).
Differences Between Users
We examined the variability between users through the modeled random effect of user (r_0uc). We were specifically interested in its ICC and estimated variance (σ²_user). The ICC quantifies the proportion of the variance unexplained by the age-group effect that can be attributed to between-users differences. The user-specific effect r_0uc of user u in country c comes from a normal distribution of user-specific effects with mean zero and variance σ²_user. Thus, roughly speaking, σ²_user indicates how far individual users are expected to diverge from the mean of other users in the same country and age group.
The high ICC of the user effect in all models shows that a substantial amount of the remaining variability was explained by differences between users. Furthermore, the large σ²_user indicated a wide diversity of auditory environments, even when controlling for age group and country.
For example, in the model of exposure to spoken language (i.e., Speech in Quiet + Speech in Noise), the random effect for user explained 75.2% of the remaining variation between observations. The spread of the user-specific intercepts was estimated as σ²_user = 1.99. Because the user effects are modeled to be normally distributed, about 50% of the users would be expected to have a daily speech exposure within ±0.95 hr of their respective age-group mean. This is in line with the quartiles we obtained from the user-averaged data: In all age groups, the central 50% were spread over approximately 2 hr.
In the model of exposure to noisy environments (Noise + Speech in Noise), 57.9% of the remaining variability was attributable to between-users differences. The variance of the user effect was estimated as σ²_user = 2.36. Therefore, 50% of the users would be expected to have a noise exposure within ±1.04 hr of their respective age-group mean. Again, this is comparable to the quartiles observed from the user-averaged data, and shows that the 25% of the CI users with the highest amount of exposure to spoken language hear roughly 2 hr more speech per day than those at the other end of the spectrum. The same is true for exposure to noisy environments.
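The ±0.95 hr and ±1.04 hr figures follow directly from the normal assumption on the user effects: half of a normal distribution lies within ±Φ⁻¹(.75) ≈ 0.674 standard deviations of its mean. They can be recomputed from the reported variances:

```python
import math
from scipy.stats import norm

def central_half_width(variance):
    """Half-width of the interval containing the central 50% of a
    zero-mean normal distribution with the given variance."""
    return norm.ppf(0.75) * math.sqrt(variance)

print(round(central_half_width(1.99), 2))  # 0.95 (All Speech, user variance)
print(round(central_half_width(2.36), 2))  # 1.04 (All Noise, user variance)
```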
The between-users differences were not only quite pronounced, they were also persistent: The logs are recorded over weeks and months rather than days and thus indicate ongoing trends rather than singular outliers. Such differences in environment likely affect the users' rehabilitation. For example, it has been shown that a lack of spoken language input can be detrimental for language development (Vohr et al., 2014), and an abundance of noise can affect academic performance (Chute & Nevins, 2003).
However, the reverse could also be the case: Previous studies have shown large interindividual variability in rehabilitation outcomes (e.g., Boons et al., 2012; Holden et al., 2013). Such differences could in turn affect the environment, because users might select their environments according to their abilities and needs. For instance, those who have little trouble listening in noise might more often expose themselves to noisy situations, whereas others might avoid them by, for example, attending special schools or avoiding certain social interactions. Thus, it remains unclear to what extent the environmental differences that we found do actually cause differences in rehabilitation, and to what extent they are yet another manifestation of such rehabilitation differences, without additional explanatory value.
It should also be noted that some of the between-users variability could come from age-related changes that are not captured by the age groups. For example, in the first 6 years of life—that is, the Early Childhood group—hearing-aid use increases (Walker et al., 2013, 2015; see also Figure 1a), which creates opportunities for exposure to speech and other environments. A rise in speech exposure might also come from the child's own language development (e.g., Faes, Gillis, & Gillis, 2015), whereas a surge of environmental noise can be expected by the time children enter preschool (Grebennikov, 2006; Sjödin, Kjellberg, Knutsson, Landström, & Lindberg, 2012). The rationale behind categorizing age and our choice of age groups were explained earlier. Nevertheless, more complex models might be able to fit age trends more closely and thereby explain more of the between-users variability.
Differences Between Countries
After taking age group into account, a proportion of the remaining variation between observations was attributable to differences between countries. For some environmental scenes, the variance explained by the random effect for country (w_00c) was rather small (e.g., All Speech: ICC = 2.57%). For exposure to noisy environments, however, the country effect was quite pronounced (All Noise: ICC = 27.2%). This could lead to the conclusion that some countries are noisier than others. Although not an unreasonable hypothesis, it should be kept in mind that our sample was unbalanced and uncontrolled. In particular, most countries contributed relatively little data from a small number of clinics (see Table 1). Between-countries differences could be due to sampling bias (e.g., if only clinics in large and noisy cities have been sampled) or to the larger standard errors of these smaller samples. Without more knowledge about our sample, the random effect for country can hardly be interpreted, and little credibility can be given to individual countries' random intercepts. Thus, we merely used the random effect for country to account for the similarities of users within countries—whatever their origins may be.
Limitations
Because of its size and diversity, we believe that our sample provides a good representation of the population of CI users as a whole. This was made possible by the generous collaboration of clinics around the world, the wide availability of Nucleus 6 data logging, and the ease of collecting data logs. Because the scene classifier records whenever the CI is in use, it provides insights into the natural auditory environment of people with CIs, which would otherwise be very difficult to get.
On the downside, a certain degree of measurement bias is to be expected from the scene classifier as a consequence of its original purpose: The algorithm was not designed to describe the acoustical environment accurately; its primary purpose is to react to circumstances that call for adjustments to sound processing. For example, the classifier reacts slowly to changes in the sound environment. This makes the listening experience comfortable for the user but also means that temporary changes of the environment evade data logging. Some other potential technical biases have been noted above. Clearly, further validation of the scene classifier is necessary to establish its usefulness as a research tool.
The main limitation of this study was the scarce demographic information. More information about the individuals in our sample could help us understand differences between users or countries and the effects of the auditory diet on rehabilitation.
Conclusions
We used data logs as a window into the natural auditory environment of people with CIs. Our results highlight some of the challenges these people faced in their daily life. For example, they spent a substantial part of their time in noisy environments. A lot of speech, accordingly, was presented in background noise; the amount of speech presented in quiet was rather low.
We also found that different age groups were characterized by different auditory environments. However, even within the same age group and country, environments varied widely. Such interindividual differences could explain some of the great variability in rehabilitation outcomes. Further studies are needed to explore the origins and effects of these differences and show whether the Nucleus 6 scene classifier is a good indicator of the conduciveness of a user's auditory environment.
Here we have made one step toward a better understanding of CI users' natural auditory environments. Understanding the acoustical challenges that people with hearing impairment encounter throughout their life will help guide research and intervention, identify risks, and provide the best possible support.
Acknowledgment
The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under Grant Agreement FP7-607139 (iCARE).
References
Ambrose, S. E., VanDam, M., & Moeller, M. P. (2014). Linguistic input, electronic media, and communication outcomes of toddlers with hearing loss. Ear and Hearing, 35, 139–147. https://doi.org/10.1097/AUD.0b013e3182a76768
American National Standards Institute. (1997). American national standard methods for calculation of the speech intelligibility index (ANSI S3.5-1997 R2012) . New York, NY: Author.
American National Standards Institute. (1997). American national standard methods for calculation of the speech intelligibility index (ANSI S3.5-1997 R2012) . New York, NY: Author.×
Archbold, S. M., Nikolopoulos, T. P., & Lloyd-Richmond, H. (2009). Long-term use of cochlear implant systems in paediatric recipients and factors contributing to non-use. Cochlear Implants International, 10, 25–40. https://doi.org/10.1002/cii.363 [Article] [PubMed]
Archbold, S. M., Nikolopoulos, T. P., & Lloyd-Richmond, H. (2009). Long-term use of cochlear implant systems in paediatric recipients and factors contributing to non-use. Cochlear Implants International, 10, 25–40. https://doi.org/10.1002/cii.363 [Article] [PubMed]×
Banerjee, S. (2011a). Hearing aids in the real world: Typical automatic behavior of expansion, directionality, and noise management. Journal of the American Academy of Audiology, 22, 34–48. https://doi.org/10.3766/jaaa.22.1.5 [Article]
Banerjee, S. (2011a). Hearing aids in the real world: Typical automatic behavior of expansion, directionality, and noise management. Journal of the American Academy of Audiology, 22, 34–48. https://doi.org/10.3766/jaaa.22.1.5 [Article] ×
Banerjee, S. (2011b). Hearing aids in the real world: Use of multimemory and volume controls. Journal of the American Academy of Audiology, 22, 359–374. https://doi.org/10.3766/jaaa.22.6.5 [Article]
Banerjee, S. (2011b). Hearing aids in the real world: Use of multimemory and volume controls. Journal of the American Academy of Audiology, 22, 359–374. https://doi.org/10.3766/jaaa.22.6.5 [Article] ×
Basner, M., Babisch, W., Davis, A., Brink, M., Clark, C., Janssen, S., & Stansfeld, S. (2014). Auditory and non-auditory effects of noise on health. The Lancet, 383, 1325–1332. https://doi.org/10.1016/S0140-6736(13)61613-X [Article]
Basner, M., Babisch, W., Davis, A., Brink, M., Clark, C., Janssen, S., & Stansfeld, S. (2014). Auditory and non-auditory effects of noise on health. The Lancet, 383, 1325–1332. https://doi.org/10.1016/S0140-6736(13)61613-X [Article] ×
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01 [Article]
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01 [Article] ×
Bess, F. H., & Hornsby, B. W. Y. (2014). Commentary: Listening can be exhausting—Fatigue in children and adults with hearing loss. Ear and Hearing, 35, 592–599. https://doi.org/10.1097/AUD.0000000000000099 [Article] [PubMed]
Bess, F. H., & Hornsby, B. W. Y. (2014). Commentary: Listening can be exhausting—Fatigue in children and adults with hearing loss. Ear and Hearing, 35, 592–599. https://doi.org/10.1097/AUD.0000000000000099 [Article] [PubMed]×
Boons, T., Brokx, J. P. L., Dhooge, I., Frijns, J. H. M., Peeraer, L., Vermeulen, A., … van Wieringen, A. (2012). Predictors of spoken language development following pediatric cochlear implantation. Ear and Hearing, 33, 617–639. https://doi.org/10.1097/AUD.0b013e3182503e47 [Article] [PubMed]
Boons, T., Brokx, J. P. L., Dhooge, I., Frijns, J. H. M., Peeraer, L., Vermeulen, A., … van Wieringen, A. (2012). Predictors of spoken language development following pediatric cochlear implantation. Ear and Hearing, 33, 617–639. https://doi.org/10.1097/AUD.0b013e3182503e47 [Article] [PubMed]×
Caldwell, A., & Nittrouer, S. (2013). Speech perception in noise by children with cochlear implants. Journal of Speech, Language, and Hearing Research, 56, 13–30. https://doi.org/10.1044/1092-4388(2012/11-0338) [Article]
Caldwell, A., & Nittrouer, S. (2013). Speech perception in noise by children with cochlear implants. Journal of Speech, Language, and Hearing Research, 56, 13–30. https://doi.org/10.1044/1092-4388(2012/11-0338) [Article] ×
Choi, J. S., Betz, J., Li, L., Blake, C. R., Sung, Y. K., Contrera, K. J., & Lin, F. R. (2016). Association of using hearing aids or cochlear implants with changes in depressive symptoms in older adults. JAMA Otolaryngology—Head & Neck Surgery, 142, 652–657. https://doi.org/10.1001/jamaoto.2016.0700 [Article] [PubMed]
Choi, J. S., Betz, J., Li, L., Blake, C. R., Sung, Y. K., Contrera, K. J., & Lin, F. R. (2016). Association of using hearing aids or cochlear implants with changes in depressive symptoms in older adults. JAMA Otolaryngology—Head & Neck Surgery, 142, 652–657. https://doi.org/10.1001/jamaoto.2016.0700 [Article] [PubMed]×
Chute, P. M., & Nevins, M. E. (2003). Educational challenges for children with cochlear implants. Topics in Language Disorders, 23, 57–67. https://doi.org/10.1097/00011363-200301000-00008 [Article]
Chute, P. M., & Nevins, M. E. (2003). Educational challenges for children with cochlear implants. Topics in Language Disorders, 23, 57–67. https://doi.org/10.1097/00011363-200301000-00008 [Article] ×
Connor, C. M., Craig, H. K., Raudenbush, S. W., Heavner, K., & Zwolan, T. A. (2006). The age at which young deaf children receive cochlear implants and their vocabulary and speech-production growth: Is there an added value for early implantation? Ear and Hearing, 27, 628–644. https://doi.org/10.1097/01.aud.0000240640.59205.42 [Article] [PubMed]
Connor, C. M., Craig, H. K., Raudenbush, S. W., Heavner, K., & Zwolan, T. A. (2006). The age at which young deaf children receive cochlear implants and their vocabulary and speech-production growth: Is there an added value for early implantation? Ear and Hearing, 27, 628–644. https://doi.org/10.1097/01.aud.0000240640.59205.42 [Article] [PubMed]×
Contrera, K. J., Betz, J., Li, L., Blake, C. R., Sung, Y. K., Choi, J. S., & Lin, F. R. (2016). Quality of life after intervention with a cochlear implant or hearing aid. The Laryngoscope, 126, 2110–2115. https://doi.org/10.1002/lary.25848 [Article] [PubMed]
Contrera, K. J., Betz, J., Li, L., Blake, C. R., Sung, Y. K., Choi, J. S., & Lin, F. R. (2016). Quality of life after intervention with a cochlear implant or hearing aid. The Laryngoscope, 126, 2110–2115. https://doi.org/10.1002/lary.25848 [Article] [PubMed]×
Courage, M. L., & Howe, M. L. (2010). To watch or not to watch: Infants and toddlers in a brave new electronic world. Developmental Review, 30, 101–115. https://doi.org/10.1016/j.dr.2010.03.002 [Article]
Courage, M. L., & Howe, M. L. (2010). To watch or not to watch: Infants and toddlers in a brave new electronic world. Developmental Review, 30, 101–115. https://doi.org/10.1016/j.dr.2010.03.002 [Article] ×
Crandell, C. C., & Smaldino, J. J. (2000). Classroom acoustics for children with normal hearing and with hearing impairment. Language, Speech, and Hearing Services in Schools, 31, 362–370. https://doi.org/10.1044/0161-1461.3104.362 [Article] [PubMed]
Crandell, C. C., & Smaldino, J. J. (2000). Classroom acoustics for children with normal hearing and with hearing impairment. Language, Speech, and Hearing Services in Schools, 31, 362–370. https://doi.org/10.1044/0161-1461.3104.362 [Article] [PubMed]×
Davidson, L. S., Geers, A. E., Blamey, P. J., Tobey, E. A., & Brenner, C. A. (2011). Factors contributing to speech perception scores in long-term pediatric cochlear implant users. Ear and Hearing, 32, 19S–26S. https://doi.org/10.1097/AUD.0b013e3181ffdb8b [Article] [PubMed]
Davidson, L. S., Geers, A. E., Blamey, P. J., Tobey, E. A., & Brenner, C. A. (2011). Factors contributing to speech perception scores in long-term pediatric cochlear implant users. Ear and Hearing, 32, 19S–26S. https://doi.org/10.1097/AUD.0b013e3181ffdb8b [Article] [PubMed]×
De Raeve, L., & Lichtert, G. (2012). Changing trends within the population of children who are deaf or hard of hearing in Flanders (Belgium): Effects of 12 years of universal newborn hearing screening, early intervention, and early cochlear implantation. The Volta Review, 112, 131–148.
De Raeve, L., & Lichtert, G. (2012). Changing trends within the population of children who are deaf or hard of hearing in Flanders (Belgium): Effects of 12 years of universal newborn hearing screening, early intervention, and early cochlear implantation. The Volta Review, 112, 131–148.×
DesJardin, J. L., Ambrose, S. E., & Eisenberg, L. S. (2009). Literacy skills in children with cochlear implants: The importance of early oral language and joint storybook reading. Journal of Deaf Studies and Deaf Education, 14, 22–43. https://doi.org/10.1093/deafed/enn011 [Article] [PubMed]
DesJardin, J. L., Ambrose, S. E., & Eisenberg, L. S. (2009). Literacy skills in children with cochlear implants: The importance of early oral language and joint storybook reading. Journal of Deaf Studies and Deaf Education, 14, 22–43. https://doi.org/10.1093/deafed/enn011 [Article] [PubMed]×
DesJardin, J. L., Ambrose, S. E., & Eisenberg, L. S. (2011). Maternal involvement in the home literacy environment: Supporting literacy skills in children with cochlear implants. Communication Disorders Quarterly, 32, 135–150. https://doi.org/10.1177/1525740109340916 [Article]
DesJardin, J. L., Ambrose, S. E., & Eisenberg, L. S. (2011). Maternal involvement in the home literacy environment: Supporting literacy skills in children with cochlear implants. Communication Disorders Quarterly, 32, 135–150. https://doi.org/10.1177/1525740109340916 [Article] ×
DesJardin, J. L., & Eisenberg, L. S. (2007). Maternal contributions: Supporting language development in young children with cochlear implants. Ear and Hearing, 28, 456–469. https://doi.org/10.1097/AUD.0b013e31806dc1ab [Article] [PubMed]
DesJardin, J. L., & Eisenberg, L. S. (2007). Maternal contributions: Supporting language development in young children with cochlear implants. Ear and Hearing, 28, 456–469. https://doi.org/10.1097/AUD.0b013e31806dc1ab [Article] [PubMed]×
Eggermont, S., & Vandebosch, H. (2001). Television as a substitute: Loneliness, need intensity, mobility, life-satisfaction and the elderly television viewer. Communicatio, 27(2), 10–18. https://doi.org/10.1080/02500160108537902 [Article]
Eggermont, S., & Vandebosch, H. (2001). Television as a substitute: Loneliness, need intensity, mobility, life-satisfaction and the elderly television viewer. Communicatio, 27(2), 10–18. https://doi.org/10.1080/02500160108537902 [Article] ×
Evans, G. W. (2006). Child development and the physical environment. Annual Review of Psychology, 57, 423–451. https://doi.org/10.1146/annurev.psych.57.102904.190057 [Article] [PubMed]
Evans, G. W. (2006). Child development and the physical environment. Annual Review of Psychology, 57, 423–451. https://doi.org/10.1146/annurev.psych.57.102904.190057 [Article] [PubMed]×
Faber, C. E., & Grøntved, A. M. (2000). Cochlear implantation and change in quality of life. Acta Oto-Laryngologica, 120(543), 151–153. https://doi.org/10.1080/000164800750000801-1 [Article] [PubMed]
Faber, C. E., & Grøntved, A. M. (2000). Cochlear implantation and change in quality of life. Acta Oto-Laryngologica, 120(543), 151–153. https://doi.org/10.1080/000164800750000801-1 [Article] [PubMed]×
Faes, J., Gillis, J., & Gillis, S. (2015). Syntagmatic and paradigmatic development of cochlear implanted children in comparison with normally hearing peers up to age 7. International Journal of Pediatric Otorhinolaryngology, 79, 1533–1540. https://doi.org/10.1016/j.ijporl.2015.07.005 [Article] [PubMed]
Faes, J., Gillis, J., & Gillis, S. (2015). Syntagmatic and paradigmatic development of cochlear implanted children in comparison with normally hearing peers up to age 7. International Journal of Pediatric Otorhinolaryngology, 79, 1533–1540. https://doi.org/10.1016/j.ijporl.2015.07.005 [Article] [PubMed]×
Fahrenberg, J., Myrtek, M., Pawlik, K., & Perrez, M. (2015). Ambulatory assessment—Monitoring behavior in daily life settings. European Journal of Psychological Assessment, 23, 206–213. https://doi.org/10.1027/1015-5759.23.4.206 [Article]
Fahrenberg, J., Myrtek, M., Pawlik, K., & Perrez, M. (2015). Ambulatory assessment—Monitoring behavior in daily life settings. European Journal of Psychological Assessment, 23, 206–213. https://doi.org/10.1027/1015-5759.23.4.206 [Article] ×
Fitzpatrick, E. M., Séguin, C., Schramm, D. R., Armstrong, S., & Chénier, J. (2009). The benefits of remote microphone technology for adults with cochlear implants. Ear and Hearing, 30, 590–599. https://doi.org/10.1097/AUD.0b013e3181acfb70 [Article] [PubMed]
Fitzpatrick, E. M., Séguin, C., Schramm, D. R., Armstrong, S., & Chénier, J. (2009). The benefits of remote microphone technology for adults with cochlear implants. Ear and Hearing, 30, 590–599. https://doi.org/10.1097/AUD.0b013e3181acfb70 [Article] [PubMed]×
Francis, H. W., Chee, N., Yeagle, J., Cheng, A., & Niparko, J. K. (2002). Impact of cochlear implants on the functional health status of older adults. The Laryngoscope, 112, 1482–1488. https://doi.org/10.1097/00005537-200208000-00028 [Article] [PubMed]
Francis, H. W., Chee, N., Yeagle, J., Cheng, A., & Niparko, J. K. (2002). Impact of cochlear implants on the functional health status of older adults. The Laryngoscope, 112, 1482–1488. https://doi.org/10.1097/00005537-200208000-00028 [Article] [PubMed]×
Francis, H. W., Yeagle, J. A., & Thompson, C. B. (2015). Clinical and psychosocial risk factors of hearing outcome in Older adults with cochlear implants. The Laryngoscope, 125, 695–702. https://doi.org/10.1002/lary.24921 [Article] [PubMed]
Francis, H. W., Yeagle, J. A., & Thompson, C. B. (2015). Clinical and psychosocial risk factors of hearing outcome in Older adults with cochlear implants. The Laryngoscope, 125, 695–702. https://doi.org/10.1002/lary.24921 [Article] [PubMed]×
Geers, A., & Brenner, C. (2003). Background and educational characteristics of prelingually deaf children implanted by five years of age. Ear and Hearing, 24(Suppl. 1), 2S–14S. https://doi.org/10.1097/01.AUD.0000051685.19171.BD [Article] [PubMed]
Geers, A., & Brenner, C. (2003). Background and educational characteristics of prelingually deaf children implanted by five years of age. Ear and Hearing, 24(Suppl. 1), 2S–14S. https://doi.org/10.1097/01.AUD.0000051685.19171.BD [Article] [PubMed]×
Geers, A. E., Strube, M. J., Tobey, E. A., Pisoni, D. B., & Moog, J. S. (2011). Epilogue: Factors contributing to long-term outcomes of cochlear implantation in early childhood. Ear and Hearing, 32(Suppl.), 84S–92S. https://doi.org/10.1097/AUD.0b013e3181ffd5b5 [Article] [PubMed]
Geers, A. E., Strube, M. J., Tobey, E. A., Pisoni, D. B., & Moog, J. S. (2011). Epilogue: Factors contributing to long-term outcomes of cochlear implantation in early childhood. Ear and Hearing, 32(Suppl.), 84S–92S. https://doi.org/10.1097/AUD.0b013e3181ffd5b5 [Article] [PubMed]×
Gfeller, K., Christ, A., Knutson, J. F., Witt, S., Murray, K. T., & Tyler, R. S. (2000). Musical backgrounds, listening habits, and aesthetic enjoyment of adult cochlear implant recipients. Journal of the American Academy of Audiology, 11, 390–406. [PubMed]
Gfeller, K., Christ, A., Knutson, J. F., Witt, S., Murray, K. T., & Tyler, R. S. (2000). Musical backgrounds, listening habits, and aesthetic enjoyment of adult cochlear implant recipients. Journal of the American Academy of Audiology, 11, 390–406. [PubMed]×
Grebennikov, L. (2006). Preschool teachers' exposure to classroom noise. International Journal of Early Years Education, 14, 35–44. https://doi.org/10.1080/09669760500446382 [Article]
Grebennikov, L. (2006). Preschool teachers' exposure to classroom noise. International Journal of Early Years Education, 14, 35–44. https://doi.org/10.1080/09669760500446382 [Article] ×
Gygi, B., & Hall, D. A. (2016). Background sounds and hearing-aid users: A scoping review. International Journal of Audiology, 55, 1–10. https://doi.org/10.3109/14992027.2015.1072773 [Article] [PubMed]
Gygi, B., & Hall, D. A. (2016). Background sounds and hearing-aid users: A scoping review. International Journal of Audiology, 55, 1–10. https://doi.org/10.3109/14992027.2015.1072773 [Article] [PubMed]×
Hallberg, L. R.-M., & Ringdahl, A. (2004). Living with cochlear implants: Experiences of 17 adult patients in Sweden. International Journal of Audiology, 43, 115–121. https://doi.org/10.1080/14992020400050016 [Article] [PubMed]
Hallberg, L. R.-M., & Ringdahl, A. (2004). Living with cochlear implants: Experiences of 17 adult patients in Sweden. International Journal of Audiology, 43, 115–121. https://doi.org/10.1080/14992020400050016 [Article] [PubMed]×
Hallberg, L. R.-M., Ringdahl, A., Holmes, A., & Carver, C. (2005). Psychological general well-being (quality of life) in patients with cochlear implants: Importance of social environment and age. International Journal of Audiology, 44, 706–711. https://doi.org/10.1080/14992020500266852 [Article] [PubMed]
Hallberg, L. R.-M., Ringdahl, A., Holmes, A., & Carver, C. (2005). Psychological general well-being (quality of life) in patients with cochlear implants: Importance of social environment and age. International Journal of Audiology, 44, 706–711. https://doi.org/10.1080/14992020500266852 [Article] [PubMed]×
Hazrati, O., & Loizou, P. C. (2012). The combined effects of reverberation and noise on speech intelligibility by cochlear implant listeners. International Journal of Audiology, 51, 437–443. https://doi.org/10.3109/14992027.2012.658972 [Article] [PubMed]
Hazrati, O., & Loizou, P. C. (2012). The combined effects of reverberation and noise on speech intelligibility by cochlear implant listeners. International Journal of Audiology, 51, 437–443. https://doi.org/10.3109/14992027.2012.658972 [Article] [PubMed]×
Hoff, E. (2006). How social contexts support and shape language development. Developmental Review, 26, 55–88. https://doi.org/10.1016/j.dr.2005.11.002 [Article]
Hoff, E. (2006). How social contexts support and shape language development. Developmental Review, 26, 55–88. https://doi.org/10.1016/j.dr.2005.11.002 [Article] ×
Hoff, E., & Naigles, L. (2002). How children use input to acquire a lexicon. Child Development, 73, 418–433. https://doi.org/10.1111/1467-8624.00415 [Article] [PubMed]
Hoff, E., & Naigles, L. (2002). How children use input to acquire a lexicon. Child Development, 73, 418–433. https://doi.org/10.1111/1467-8624.00415 [Article] [PubMed]×
Holden, L. K., Finley, C. C., Firszt, J. B., Holden, T. A., Brenner, C., Potts, L. G., … Skinner, M. W. (2013). Factors affecting open-set word recognition in adults with cochlear implants. Ear and Hearing, 34, 342–360. https://doi.org/10.1097/AUD.0b013e3182741aa7 [Article] [PubMed]
Holden, L. K., Finley, C. C., Firszt, J. B., Holden, T. A., Brenner, C., Potts, L. G., … Skinner, M. W. (2013). Factors affecting open-set word recognition in adults with cochlear implants. Ear and Hearing, 34, 342–360. https://doi.org/10.1097/AUD.0b013e3182741aa7 [Article] [PubMed]×
Holt, R. F., Beer, J., Kronenberger, W. G., & Pisoni, D. B. (2013). Developmental effects of family environment on outcomes in pediatric cochlear implant recipients. Otology & Neurotology, 34, 388–395. https://doi.org/10.1097/MAO.0b013e318277a0af [Article]
Holt, R. F., Beer, J., Kronenberger, W. G., & Pisoni, D. B. (2013). Developmental effects of family environment on outcomes in pediatric cochlear implant recipients. Otology & Neurotology, 34, 388–395. https://doi.org/10.1097/MAO.0b013e318277a0af [Article] ×
Hothorn, T., Bretz, F., & Westfall, P. (2008). Simultaneous inference in general parametric models. Biometrical Journal, 50, 346–363. https://doi.org/10.1002/bimj.200810425 [Article] [PubMed]
Hothorn, T., Bretz, F., & Westfall, P. (2008). Simultaneous inference in general parametric models. Biometrical Journal, 50, 346–363. https://doi.org/10.1002/bimj.200810425 [Article] [PubMed]×
Huber, M., Hitzl, W., & Albegger, K. (2008). Education and training of young people who grew up with cochlear implants. International Journal of Pediatric Otorhinolaryngology, 72, 1393–1403. https://doi.org/10.1016/j.ijporl.2008.06.002 [Article] [PubMed]
Huber, M., Hitzl, W., & Albegger, K. (2008). Education and training of young people who grew up with cochlear implants. International Journal of Pediatric Otorhinolaryngology, 72, 1393–1403. https://doi.org/10.1016/j.ijporl.2008.06.002 [Article] [PubMed]×
Huttenlocher, J., Waterfall, H., Vasilyeva, M., Vevea, J., & Hedges, L. V. (2010). Sources of variability in children's language growth. Cognitive Psychology, 61, 343–365. https://doi.org/10.1016/j.cogpsych.2010.08.002 [Article] [PubMed]
Huttenlocher, J., Waterfall, H., Vasilyeva, M., Vevea, J., & Hedges, L. V. (2010). Sources of variability in children's language growth. Cognitive Psychology, 61, 343–365. https://doi.org/10.1016/j.cogpsych.2010.08.002 [Article] [PubMed]×
Iglehart, F. (2004). Speech perception by students with cochlear implants using sound-field systems in classrooms. American Journal of Audiology, 13, 62–72. https://doi.org/10.1044/1059-0889(2004/009) [Article] [PubMed]
Iglehart, F. (2004). Speech perception by students with cochlear implants using sound-field systems in classrooms. American Journal of Audiology, 13, 62–72. https://doi.org/10.1044/1059-0889(2004/009) [Article] [PubMed]×
Keidser, G., & Alamudi, K. (2013). Real-life efficacy and reliability of training a hearing aid. Ear and Hearing, 34, 619–629. https://doi.org/10.1097/AUD.0b013e31828d269a [Article] [PubMed]
Keidser, G., & Alamudi, K. (2013). Real-life efficacy and reliability of training a hearing aid. Ear and Hearing, 34, 619–629. https://doi.org/10.1097/AUD.0b013e31828d269a [Article] [PubMed]×
Laplante-Lévesque, A., Nielsen, C., Jensen, L. D., & Naylor, G. (2014). Patterns of hearing aid usage predict hearing aid use amount (data logged and self-reported) and overreport. Journal of the American Academy of Audiology, 25, 187–198. https://doi.org/10.3766/jaaa.25.2.7 [Article] [PubMed]
Laplante-Lévesque, A., Nielsen, C., Jensen, L. D., & Naylor, G. (2014). Patterns of hearing aid usage predict hearing aid use amount (data logged and self-reported) and overreport. Journal of the American Academy of Audiology, 25, 187–198. https://doi.org/10.3766/jaaa.25.2.7 [Article] [PubMed]×
Lin, F. R., Chien, W. W., Li, L., Clarrett, D. M., Niparko, J. K., & Francis, H. W. (2012). Cochlear implantation in older adults. Medicine, 91, 229–241. https://doi.org/10.1097/MD.0b013e31826b145a [Article] [PubMed]
Lin, F. R., Chien, W. W., Li, L., Clarrett, D. M., Niparko, J. K., & Francis, H. W. (2012). Cochlear implantation in older adults. Medicine, 91, 229–241. https://doi.org/10.1097/MD.0b013e31826b145a [Article] [PubMed]×
Mäki-Torkko, E. M., Vestergren, S., Harder, H., & Lyxell, B. (2015). From isolation and dependence to autonomy—Expectations before and experiences after cochlear implantation in adult cochlear implant users and their significant others. Disability and Rehabilitation, 37, 541–547. https://doi.org/10.3109/09638288.2014.935490 [Article] [PubMed]
Mäki-Torkko, E. M., Vestergren, S., Harder, H., & Lyxell, B. (2015). From isolation and dependence to autonomy—Expectations before and experiences after cochlear implantation in adult cochlear implant users and their significant others. Disability and Rehabilitation, 37, 541–547. https://doi.org/10.3109/09638288.2014.935490 [Article] [PubMed]×
Markey, A. L., Nichani, J., Lockley, M., Melling, C., Ramsden, R. T., Green, K. M. J., & Bruce, I. A. (2015). Cochlear implantation in adolescents: Factors influencing compliance. Cochlear Implants International, 16, 186–194. https://doi.org/10.1179/1754762813Y.0000000033 [Article] [PubMed]
Markey, A. L., Nichani, J., Lockley, M., Melling, C., Ramsden, R. T., Green, K. M. J., & Bruce, I. A. (2015). Cochlear implantation in adolescents: Factors influencing compliance. Cochlear Implants International, 16, 186–194. https://doi.org/10.1179/1754762813Y.0000000033 [Article] [PubMed]×
Mauger, S. J., Warren, C. D., Knight, M. R., Goorevich, M., & Nel, E. (2014). Clinical evaluation of the Nucleus® 6 cochlear implant system: Performance improvements with SmartSound iQ. International Journal of Audiology, 53, 564–576. https://doi.org/10.3109/14992027.2014.895431 [Article] [PubMed]
Mauger, S. J., Warren, C. D., Knight, M. R., Goorevich, M., & Nel, E. (2014). Clinical evaluation of the Nucleus® 6 cochlear implant system: Performance improvements with SmartSound iQ. International Journal of Audiology, 53, 564–576. https://doi.org/10.3109/14992027.2014.895431 [Article] [PubMed]×
Mosnier, I., Bebear, J.-P., Marx, M., Fraysse, B., Truy, E., Lina-Granade, G., … Sterkers, O. (2015). Improvement of cognitive function after cochlear implantation in elderly patients. JAMA Otolaryngology—Head & Neck Surgery, 141, 442–450. https://doi.org/10.1001/jamaoto.2015.129 [Article] [PubMed]
Mosnier, I., Bebear, J.-P., Marx, M., Fraysse, B., Truy, E., Lina-Granade, G., … Sterkers, O. (2015). Improvement of cognitive function after cochlear implantation in elderly patients. JAMA Otolaryngology—Head & Neck Surgery, 141, 442–450. https://doi.org/10.1001/jamaoto.2015.129 [Article] [PubMed]×
Mueller, H. G., Hornsby, B. W. Y., & Weber, J. E. (2008). Using trainable hearing aids to examine real-world preferred gain. Journal of the American Academy of Audiology, 19, 758–773. https://doi.org/10.3766/jaaa.19.10.4 [Article] [PubMed]
Mueller, H. G., Hornsby, B. W. Y., & Weber, J. E. (2008). Using trainable hearing aids to examine real-world preferred gain. Journal of the American Academy of Audiology, 19, 758–773. https://doi.org/10.3766/jaaa.19.10.4 [Article] [PubMed]×
Mukari, S. Z., Ling, L. N., & Ghani, H. A. (2007). Educational performance of pediatric cochlear implant recipients in mainstream classes. International Journal of Pediatric Otorhinolaryngology, 71, 231–240. https://doi.org/10.1016/j.ijporl.2006.10.005 [Article] [PubMed]
Mukari, S. Z., Ling, L. N., & Ghani, H. A. (2007). Educational performance of pediatric cochlear implant recipients in mainstream classes. International Journal of Pediatric Otorhinolaryngology, 71, 231–240. https://doi.org/10.1016/j.ijporl.2006.10.005 [Article] [PubMed]×
Muñoz, K., Preston, E., & Hicken, S. (2014). Pediatric hearing aid use: How can audiologists support parents to increase consistency? Journal of the American Academy of Audiology, 25, 380–387. https://doi.org/10.3766/jaaa.25.4.9 [Article] [PubMed]
Muñoz, K., Preston, E., & Hicken, S. (2014). Pediatric hearing aid use: How can audiologists support parents to increase consistency? Journal of the American Academy of Audiology, 25, 380–387. https://doi.org/10.3766/jaaa.25.4.9 [Article] [PubMed]×
Nakagawa, S., & Schielzeth, H. (2013). A general and simple method for obtaining R 2 from generalized linear mixed-effects models. Methods in Ecology and Evolution, 4, 133–142. https://doi.org/10.1111/j.2041-210x.2012.00261.x [Article]
Nakagawa, S., & Schielzeth, H. (2013). A general and simple method for obtaining R 2 from generalized linear mixed-effects models. Methods in Ecology and Evolution, 4, 133–142. https://doi.org/10.1111/j.2041-210x.2012.00261.x [Article] ×
Neuman, A. C., Wroblewski, M., Hajicek, J., & Rubinstein, A. (2012). Measuring speech recognition in children with cochlear implants in a virtual classroom. Journal of Speech, Language, and Hearing Research, 55, 532–540. https://doi.org/10.1044/1092-4388(2011/11-0058) [Article]
Neuman, A. C., Wroblewski, M., Hajicek, J., & Rubinstein, A. (2012). Measuring speech recognition in children with cochlear implants in a virtual classroom. Journal of Speech, Language, and Hearing Research, 55, 532–540. https://doi.org/10.1044/1092-4388(2011/11-0058) [Article] ×
Ng, J. H.-Y., & Loke, A. Y. (2015). Determinants of hearing-aid adoption and use among the elderly: A systematic review. International Journal of Audiology, 54, 291–300. https://doi.org/10.3109/14992027.2014.966922 [Article] [PubMed]
Ng, J. H.-Y., & Loke, A. Y. (2015). Determinants of hearing-aid adoption and use among the elderly: A systematic review. International Journal of Audiology, 54, 291–300. https://doi.org/10.3109/14992027.2014.966922 [Article] [PubMed]×
Nicholas, J. G., & Geers, A. E. (2006). Effects of early auditory experience on the spoken language of deaf children at 3 years of age. Ear and Hearing, 27, 286–298. https://doi.org/10.1097/01.aud.0000215973.76912.c6 [Article] [PubMed]
Nicholas, J. G., & Geers, A. E. (2006). Effects of early auditory experience on the spoken language of deaf children at 3 years of age. Ear and Hearing, 27, 286–298. https://doi.org/10.1097/01.aud.0000215973.76912.c6 [Article] [PubMed]×
Nicholas, J. G., & Geers, A. E. (2007). Will they catch up? The role of age at cochlear implantation in the spoken language development of children with severe to profound hearing loss. Journal of Speech, Language, and Hearing Research, 50, 1048–1062. https://doi.org/10.1044/1092-4388(2007/073) [Article]
Nicholas, J. G., & Geers, A. E. (2007). Will they catch up? The role of age at cochlear implantation in the spoken language development of children with severe to profound hearing loss. Journal of Speech, Language, and Hearing Research, 50, 1048–1062. https://doi.org/10.1044/1092-4388(2007/073) [Article] ×
Niparko, J. K., Tobey, E. A., Thal, D. J., Eisenberg, L. S., Wang, N.-Y., Quittner, A. L., & Fink, N. E. (2010). Spoken language development in children following cochlear implantation. JAMA, 303, 1498–1506. https://doi.org/10.1001/jama.2010.451 [Article] [PubMed]
Niparko, J. K., Tobey, E. A., Thal, D. J., Eisenberg, L. S., Wang, N.-Y., Quittner, A. L., & Fink, N. E. (2010). Spoken language development in children following cochlear implantation. JAMA, 303, 1498–1506. https://doi.org/10.1001/jama.2010.451 [Article] [PubMed]×
Percy-Smith, L., Busch, G., Sandahl, M., Nissen, L., Josvassen, J. L., Lange, T., … Cayé-Thomasen, P. (2013). Language understanding and vocabulary of early cochlear implanted children. International Journal of Pediatric Otorhinolaryngology, 77, 184–188. https://doi.org/10.1016/j.ijporl.2012.10.014
Peterson, N. R., Pisoni, D. B., & Miyamoto, R. T. (2010). Cochlear implants and spoken language processing abilities: Review and assessment of the literature. Restorative Neurology and Neuroscience, 28, 237–250. https://doi.org/10.3233/RNN-2010-0535
Plyler, P. N., Bahng, J., & von Hapsburg, D. (2008). The acceptance of background noise in adult cochlear implant users. Journal of Speech, Language, and Hearing Research, 51, 502–515. https://doi.org/10.1044/1092-4388(2008/036)
Proops, D. W., Donaldson, I., Cooper, H. R., Thomas, J., Burrell, S. P., Stoddart, R. L., … Cheshire, I. M. (1999). Outcomes from adult implantation, the first 100 patients. The Journal of Laryngology & Otology, 113(24), 5–13. https://doi.org/10.1017/S0022215100146018
Quittner, A. L., Cruz, I., Barker, D. H., Tobey, E., Eisenberg, L. S., Niparko, J. K., & Childhood Development after Cochlear Implantation Investigative Team. (2013). Effects of maternal sensitivity and cognitive and linguistic stimulation on cochlear implant users' language development over four years. The Journal of Pediatrics, 162, 343–348.e3. https://doi.org/10.1016/j.jpeds.2012.08.003
R Core Team. (2015). R: A language and environment for statistical computing [Computer software]. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from https://www.r-project.org/
Reis, H. T., & Gosling, S. D. (2010). Social psychological methods outside the laboratory. In Fiske, S. T., Gilbert, D. T., & Lindzey, G. (Eds.), Handbook of social psychology (pp. 82–114). Hoboken, NJ: Wiley. https://doi.org/10.1002/9780470561119.socpsy001003
Saxon, J. P., Holmes, A. E., & Spitznagel, R. J. (2001). Impact of a cochlear implant on job functioning. Journal of Rehabilitation, 67(3), 49–54.
Schwarz, N. (2007). Retrospective and concurrent self-reports: The rationale for real-time data capture. In Stone, A. A., Shiffman, S., Atienza, A. A., & Nebeling, L. (Eds.), The science of real-time data capture: Self-reports in health research (pp. 11–26). New York, NY: Oxford University Press.
Shield, B. M., & Dockrell, J. E. (2003). The effects of noise on children at school: A review. Building Acoustics, 10, 97–116. https://doi.org/10.1260/135101003768965960
Sjödin, F., Kjellberg, A., Knutsson, A., Landström, U., & Lindberg, L. (2012). Noise exposure and auditory effects on preschool personnel. Noise & Health, 14(57), 72–82. https://doi.org/10.4103/1463-1741.95135
Skagerstrand, Å., Stenfelt, S., Arlinger, S., & Wikström, J. (2014). Sounds perceived as annoying by hearing-aid users in their daily soundscape. International Journal of Audiology, 53, 259–269. https://doi.org/10.3109/14992027.2013.876108
Svirsky, M. A., Teoh, S.-W., & Neuburger, H. (2004). Development of language and speech perception in congenitally, profoundly deaf children as a function of age at cochlear implantation. Audiology and Neuro-Otology, 9, 224–233. https://doi.org/10.1159/000078392
Szagun, G., & Rüter, M. (2009). The influence of parents' speech on the development of spoken language in German-speaking children with cochlear implants. Revista de Logopedia, Foniatría y Audiología, 29, 165–173. https://doi.org/10.1016/S0214-4603(09)70025-7
Szagun, G., & Stumper, B. (2012). Age or experience? The influence of age at implantation and social and linguistic environment on language development in children with cochlear implants. Journal of Speech, Language, and Hearing Research, 55, 1640–1654. https://doi.org/10.1044/1092-4388(2012/11-0119)
Trull, T. J., & Ebner-Priemer, U. (2013). Ambulatory assessment. Annual Review of Clinical Psychology, 9, 151–176. https://doi.org/10.1146/annurev-clinpsy-050212-185510
van Wieringen, A., & Wouters, J. (2008). LIST and LINT: Sentences and numbers for quantifying speech understanding in severely impaired listeners for Flanders and the Netherlands. International Journal of Audiology, 47, 348–355. https://doi.org/10.1080/14992020801895144
van Wieringen, A., & Wouters, J. (2015). What can we expect of normally-developing children implanted at a young age with respect to their auditory, linguistic and cognitive skills? Hearing Research, 322, 171–179. https://doi.org/10.1016/j.heares.2014.09.002
Vohr, B. R., Topol, D., Watson, V., St Pierre, L., & Tucker, R. (2014). The importance of language in the home for school-age children with permanent hearing loss. Acta Pædiatrica, 103, 62–69. https://doi.org/10.1111/apa.12441
Walker, E. A., McCreery, R. W., Spratford, M., Oleson, J. J., Van Buren, J., Bentler, R., … Moeller, M. P. (2015). Trends and predictors of longitudinal hearing aid use for children who are hard of hearing. Ear and Hearing, 36(Suppl. 1), 38S–47S. https://doi.org/10.1097/AUD.0000000000000208
Walker, E. A., Spratford, M., Moeller, M. P., Oleson, J., Ou, H., Roush, P., & Jacobs, S. (2013). Predictors of hearing aid use time in children with mild-to-severe hearing loss. Language, Speech, and Hearing Services in Schools, 44, 73–88. https://doi.org/10.1044/0161-1461(2012/12-0005)
Weisleder, A., & Fernald, A. (2013). Talking to children matters: Early language experience strengthens processing and builds vocabulary. Psychological Science, 24, 2143–2152. https://doi.org/10.1177/0956797613488145
Whitmal, N. A., III, & Poissant, S. F. (2009). Effects of source-to-listener distance and masking on perception of cochlear implant processed speech in reverberant rooms. The Journal of the Acoustical Society of America, 126, 2556–2569. https://doi.org/10.1121/1.3216912
Wolfe, J., Morais, M., Schafer, E., Mills, E., Mülder, H. E., Goldbeck, F., … Lianos, L. (2013). Evaluation of speech recognition of cochlear implant recipients using a personal digital adaptive radio frequency system. Journal of the American Academy of Audiology, 24, 714–724. https://doi.org/10.3766/jaaa.24.8.8
World Health Organization. (2015, March). Deafness and hearing loss [Fact sheet 300]. Retrieved from http://www.who.int/mediacentre/factsheets/fs300/
Zhao, F., Bai, Z., & Stephens, D. (2008). The relationship between changes in self-rated quality of life after cochlear implantation and changes in individual complaints. Clinical Otolaryngology, 33, 427–434. https://doi.org/10.1111/j.1749-4486.2008.01773.x
Figure 1.

Average daily duration of device use (time on air), on the basis of user-averaged data. (a) Time on air against user age in the middle of the averaged logs, fitted with locally weighted polynomial regression (LOESS; blue line). (b) Device use by age group.
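The LOESS curve in Figure 1a summarizes how time on air varies with age by fitting a weighted regression in a sliding neighbourhood around each point. The authors used R for their analysis; the sketch below is a minimal, illustrative Python version (tricube weights, local straight-line fits), not their exact smoother, and the function name `loess` and the `frac` parameter are assumptions of this sketch.

```python
import numpy as np

def loess(x, y, frac=0.5):
    """Minimal LOESS sketch: tricube-weighted local linear fit at each point.

    frac is the fraction of the data treated as each point's neighbourhood.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(2, int(np.ceil(frac * n)))   # neighbourhood size in points
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        h = np.sort(d)[k - 1]            # distance to the k-th nearest point
        # tricube kernel: weight 1 at the point, falling to 0 at distance h
        w = (1 - np.clip(d / max(h, 1e-12), 0, 1) ** 3) ** 3
        # weighted least-squares line through the neighbourhood
        coef = np.polyfit(x, y, 1, w=np.sqrt(w))
        fitted[i] = np.polyval(coef, x[i])
    return fitted
```

On exactly linear data the smoother reproduces the line; on a noisy age-versus-hours scatter it traces a smooth trend comparable to the blue curve in Figure 1a.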
Figure 2.

Exposure to spoken-language environments by age group: (a) Speech in Quiet; (b) Speech in Noise; (c) All Speech (Speech in Quiet + Speech in Noise). Left: The distribution of the user-averaged data is shown in gray as annotated box plots. The whiskers extend to the 5th and 95th percentiles. The modeled age-group means and bootstrapped 95% confidence intervals are shown in blue. Right: The estimated distribution of each variance component is depicted as a density plot. Individual random intercepts are depicted as rug plots (horizontal lines). Random intercepts outside of the axis limits are not shown.
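The bootstrapped 95% confidence intervals for the age-group means in Figures 2–5 can be illustrated with a percentile bootstrap: resample the group's user averages with replacement many times and take percentiles of the resampled means. The function below is a hedged sketch of that general technique, not the authors' implementation; the name `bootstrap_ci` and the `n_boot` default are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_ci(values, n_boot=2000, level=0.95):
    """Percentile-bootstrap confidence interval for a group mean (sketch)."""
    values = np.asarray(values, float)
    # resample with replacement and collect the mean of each resample
    means = np.array([rng.choice(values, size=len(values), replace=True).mean()
                      for _ in range(n_boot)])
    alpha = (1 - level) / 2
    lo, hi = np.percentile(means, [100 * alpha, 100 * (1 - alpha)])
    return lo, hi
```

Applied per age group to, say, daily Speech in Quiet hours, this yields intervals like the blue error bars in the left panels of Figure 2.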
Figure 3.

Exposure to noisy environments by age group: (a) Noise; (b) All Noise (Noise + Speech in Noise). The distribution of the user-averaged data is shown in gray. Modeled age-group means and variance components are shown in blue. See Figure 2 for a detailed explanation.
Figure 4.

Exposure to Music by age group. The distribution of the user-averaged data is shown in gray. Modeled age-group means and variance components are shown in blue. See Figure 2 for a detailed explanation.
Figure 5.

Exposure to Quiet by age group. The distribution of the user-averaged data is shown in gray. Modeled age-group means and variance components are shown in blue. See Figure 2 for a detailed explanation.
Table 1. Final sample after preprocessing.
Country Clinics Data logs (users)
Early childhood Primary school Secondary school Adult Senior Total
Australia 4 11 (6) 33 (20) 5 (3) 157 (45) 183 (46) 389 (118)
Belgium 11 131 (45) 134 (51) 20 (13) 163 (60) 86 (26) 534 (191)
Canada 3 1 (1) 2 (1) 7 (4) 8 (3) 18 (9)
France 2 43 (19) 16 (9) 17 (11) 19 (12) 3 (2) 98 (53)
Germany 3 93 (32) 81 (36) 35 (24) 461 (242) 120 (78) 790 (407)
Hong Kong 1 2 (1) 2 (1)
India 10 46 (13) 22 (9) 14 (8) 22 (6) 104 (36)
Malaysia 2 8 (2) 4 (1) 12 (3)
New Zealand 3 59 (20) 21 (13) 7 (7) 150 (58) 142 (50) 379 (146)
Spain 1 4 (3) 1 (1) 9 (6) 2 (2) 16 (12)
Switzerland 1 1 (1) 2 (1) 3 (2)
Netherlands 2 58 (21) 80 (32) 40 (16) 197 (80) 143 (60) 518 (201)
United States 26 118 (64) 41 (22) 14 (10) 189 (107) 216 (121) 578 (323)
Total 69 572 (225) 430 (194) 162 (96) 1,374 (620) 903 (388) 3,441 (1,501)
Note. Some users contributed logs to multiple age groups or countries. See total for the number of unique users in each country or age group.
Table 2. Exposure to environmental scenes and device use (time on air) in average hours per day, on the basis of the user-averaged data (n = 1,501).
M SD Percentile
5 25 50 75 95
Speech in quiet 1.27 0.80 0.26 0.65 1.12 1.74 2.82
Speech in noise 2.48 1.20 0.70 1.61 2.35 3.28 4.71
All speech 3.76 1.70 1.19 2.47 3.63 4.97 6.74
Noise 1.47 1.10 0.29 0.75 1.24 1.88 3.48
All noise 3.96 1.89 1.19 2.59 3.77 5.06 7.46
Music 0.59 0.57 0.06 0.19 0.40 0.84 1.83
Quiet 4.91 2.75 0.99 2.69 4.58 6.92 9.84
Time on air 10.74 3.45 4.30 8.35 11.31 13.41 15.22
Table 3. Use of assistive listening devices, on the basis of the user-averaged data (n = 1,501).
Assistive listening device Age group % of users a Percentile (hr/day)
5 25 50 75 95
Any Early childhood 18 0.10 0.19 0.29 0.87 1.63
Primary school 52 0.10 0.21 0.47 1.04 3.01
Secondary school 42 0.10 0.27 0.64 1.45 3.82
Adult 32 0.09 0.17 0.39 1.07 3.97
Senior 29 0.10 0.15 0.27 0.94 3.66
Total 32 0.10 0.17 0.38 1.05 3.67
FM or t-coil b Early childhood 17 0.10 0.19 0.27 0.71 1.49
Primary school 46 0.10 0.21 0.49 1.08 2.87
Secondary school 23 0.09 0.21 0.53 1.11 3.68
Adult 21 0.09 0.15 0.30 0.64 3.60
Senior 26 0.10 0.15 0.30 0.93 3.53
Total 25 0.10 0.16 0.34 0.89 3.35
Audio cable Early childhood 1.8 0.09 0.09 0.11 0.18 0.32
Primary school 8.2 0.12 0.15 0.23 0.46 1.26
Secondary school 23 0.17 0.27 0.61 1.37 3.57
Adult 14 0.10 0.21 0.54 1.17 3.21
Senior 2.1 0.11 0.15 0.18 0.36 6.29
Total 8.9 0.09 0.17 0.43 1.07 3.28
Note. Percentiles are based on cochlear implant (CI) users who used the respective device for more than 5 min/day.
a CI users who used the respective device for an average of at least 5 min/day.
b A common form of personal frequency modulation (FM) system uses a neck loop that is connected to the sound processor via t-coil and is therefore counted as such. Because it is not possible to distinguish between the two, FM and t-coil are presented together.