Effect of Channel Interaction and Presentation Level on Speech Recognition in Simulated Bilateral Cochlear Implants

Article information

Clin Arch Commun Disord. 2016;1(1):77-86
Publication date (electronic) : 2016 December 29
doi : https://doi.org/10.21849/cacd.2016.00031
1Department of Speech, Language, & Hearing Sciences, Texas Tech University Health Sciences Center, Lubbock, TX, USA
2Department of Otolaryngology, Head and Neck Surgery, Tongji Hospital, Tongji Medical College of Huazhong University of Science and Technology, Wuhan, Hubei Province, People’s Republic of China
3Soree Ear Clinic, Korea
4Center for Counseling and Family Studies, Liberty University, Lynchburg, VA, USA
5Department of Head and Neck Surgery, University of California, Los Angeles, CA, USA
Correspondence: Yang-soo Yoon, Department of Speech, Language, and Hearing Sciences, Texas Tech University Health Sciences Center, 3601 4th St STOP 6073, Lubbock, Texas 79430, USA, Tel: +806-743-5660, Fax: +806-743-5670, E-mail: yang-soo.yoon@ttuhsc.edu
Received 2016 September 28; Revised 2016 November 19; Accepted 2016 November 21.

Abstract

Purpose

The present study used the acoustic simulation of bilateral cochlear implants to investigate the effect of presentation level (EXP 1) and channel interaction (EXP 2) on binaural summation benefit in speech recognition.

Methods

Acoustic 6-channel processors were constructed, and the envelope of each band was used to modulate a sinusoid (EXP 1) or band-pass filtered white noise (EXP 2) with a frequency matching the center frequency of the carrier band. The presentation level was fixed for one ear and varied for the other. Channel interaction was simulated by altering filter slopes while keeping interaural spectral peaks matched in the carrier bands. Two separate groups of ten adult subjects with normal hearing participated in EXP 1 and EXP 2. Sentence recognition was measured with the left ear alone, the right ear alone, and both ears in quiet and in noise at a +5 dB signal-to-noise ratio.

Results

A significant binaural summation benefit occurred only in noise, regardless of interaural mismatches in channel interaction and presentation level.

Conclusions

Results suggest that factors other than channel interaction are important and that exactly matched interaural loudness is not a critical factor in binaural summation benefit in noise. For both EXP 1 and EXP 2, the data suggest that speech information perceived in quiet is fully coded by the better performing ear alone, which leads to no binaural summation benefit. The results of current and future studies could have implications for the programming of bilateral cochlear implant users.

INTRODUCTION

Evidence exists that binaural benefit, defined as the difference between bilateral performance and unilateral performance with the better ear on tasks of speech perception [1–3], is greater when unilateral performance is similar across ears [4]. This functional relationship was observed in nine adult bilateral cochlear implant (CI) participants across different speech materials and signal-to-noise ratios, even though there was substantial intra- and inter-subject variability [1]. Despite this clear link to the similarity of unilateral performance across CI users’ ears, very little evidence exists on how varying insertion depths of CI arrays affect speech perception across ears.

When programming CIs, the most critical factor is the patient’s own perception of sound through the device. The two most notable components of the sound signal are loudness and sound quality. Sound quality is heavily dependent upon spectral processing and is primarily influenced by channel interaction within and between ears in CI users [5–7]. In the unilateral CI listening condition, the effects of loudness [8–10] and channel interaction [11,12] on speech perception are well documented, but little information exists for binaural CI listening. Because the current procedure for mapping bilateral CIs is to program each device individually, which in many cases results in different stimulation levels between ears, we must form a better understanding of how level differences between CIs affect binaural CI users.

There are no data available on how unmatched loudness across ears influences the binaural summation benefit in speech perception for bilateral CI users. Busby and Arora [9] suspected that improving overall audibility may be the determining factor in overcoming the impact of noise on speech perception in noise in unilateral CI users. Although they tested speech perception in noise, the presentation levels were the same for both ears. As binaural summation occurs in binaural CI users [13], researchers have made efforts to ensure that the perceived loudness in each ear of binaural CI users is matched during experimental conditions [14]. To create the best possible CI mappings for binaural CI users, we must understand in depth how variable perceived loudness between ears affects patients with two cochlear implants, in order to ensure that binaural benefit is maximized.

Yoon et al. [2] investigated the effects of binaural spectral mismatch on binaural benefits in the context of bilateral CIs using acoustic simulations. Binaural spectral mismatch was systematically manipulated by simulating changes in the relative insertion depths across ears; the results suggested that binaural benefit depends on redundant speech information perceived across the implanted ears. In a separate study, Yoon et al. [3] found that bilateral spectral mismatch may have a negative impact on the binaural benefits of squelch and redundancy for bilateral CI users. Those results also suggest that clinical mapping should be carefully administered for bilateral CI users to minimize the difference in spectral patterns between the two CIs. Wackym et al. [15] showed that the largest binaural summation benefit in speech perception in bilateral CI users occurred in more challenging listening conditions (e.g., at a soft presentation level in noise). However, that study used the same presentation level for both ears, leaving it unclear how unmatched presentation levels across ears influence binaural summation benefit in speech perception for bilateral CI users. This gap in the current literature is the impetus for our first experiment (EXP 1), which measured the binaural summation benefit in speech perception when four different combinations of presentation levels across ears were presented in quiet and noise.

Channel interaction is one of the most important factors that limit CI performance in speech perception [3,16,17], vocal emotion recognition [18], and music perception [19,20]. Although channel interactions are well documented for unilateral CI listening [16,21–25], these effects have not been studied extensively in bilateral users. One potential reason may be the confounding variables associated with CI use, such as acoustic frequency allocation, electrode location, pattern of nerve survival, and experience [6]. These variables are doubled when examining bilateral CI users, as each CI’s fundamental characteristics may differ. It is highly likely that each implanted ear processes sounds quite differently due to differences in insertion depth and possible interrelationships among the aforementioned confounding factors across ears [26]. Unmatched insertion depths across ears can cause spectral mismatches, leading to less effective use of bilateral CIs in speech perception [3]. Our understanding of how channel interaction affects the benefit of using bilateral CIs in speech perception is substantially lacking. Characterization of the effect of interaural channel interactions on speech perception should therefore be carried out in more controlled listening environments that limit confounding variables.

The method used in this study to limit confounding factors in evaluating channel interaction was acoustic simulation of CIs. Such simulations have been shown to provide results consistent with testing outcomes of real CI users [3,17], even though they carry some limitations [27,28]. The most common approach to modeling channel interaction in acoustic simulation is to change the slope of each carrier band-pass filter; a steep filter slope represents less channel interaction, while a shallow slope represents more channel interaction. Bingabr et al. [17] modeled channel interaction with physiologically based parameters that describe the exponential decay rate of electrical current dispersed in the cochlea. Their findings revealed that sentence perception was most sensitive to changes in filter slopes between 14 and 50 dB/octave and remained unchanged with filter slopes beyond 50 dB/octave. While that study indicated that performance could be affected by channel interaction created with a filter slope of less than 14 dB/octave, that possibility was not tested. Crew et al. [20] investigated the effect of channel interaction on melodic pitch perception, using output filter slopes of 24, 12, or 6 dB/octave to simulate “slight,” “moderate,” and “severe” channel interaction, respectively. Their CI simulations were vocoded using sine-wave carriers with 16 channels and presented to normal-hearing (NH) listeners in both ears. Again, no data are available regarding the effect of binaural channel interaction on binaural summation benefit in speech perception.

In the present study, we quantified the effect of binaural presentation levels and binaural channel interactions on speech recognition in simulated bilateral CI listening conditions. The purpose of the first experiment (EXP 1) was to measure the binaural summation benefit in speech perception when four different combinations of presentation levels across ears were presented in quiet and noise. The purpose of the second experiment (EXP 2) was to measure the binaural summation benefit in speech perception when the amount of channel interaction was systematically manipulated across ears in quiet and noise. Measuring the binaural summation effect is one way to evaluate how speech signals processed by each CI in bilateral CI listening are integrated, while avoiding other confounding binaural factors that could be involved when spatial cues such as head shadow or squelch are tested.

METHODS

Subjects

Two separate groups of ten adult subjects with normal hearing participated in EXP 1 (6 females and 4 males; average age: 41.9±12.9 years) and EXP 2 (7 females and 3 males; average age: 45.7±9.7 years). Subjects were native speakers of American English, were between 20 and 57 years of age, and had thresholds of 20 dB HL or better at audiometric frequencies from 0.25 to 8 kHz. All subjects provided informed consent, and all procedures were approved by the local Institutional Review Board.

Stimuli

Sentence recognition was measured in quiet (Q) and in noise at a +5 dB signal-to-noise ratio (SNR) using IEEE sentences [29] under three listening modes: left ear alone (L), right ear alone (R), and both ears (L+R). Speech and noise were presented from the front. A steady-state speech-shaped noise was generated as a masker with a low-pass cutoff of 2 kHz (−12 dB/octave) to provide a generic long-term speech spectrum. This noise masker was combined with speech stimuli to generate the designated SNR level before vocoding.
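As a concrete illustration of this stimulus preparation, the sketch below (Python with NumPy/SciPy; function names are ours) generates a speech-shaped masker with the stated 2-kHz, −12 dB/octave low-pass characteristic and mixes it with speech at a target SNR before vocoding. A second-order Butterworth low-pass is assumed, since its rolloff matches −12 dB/octave.

```python
import numpy as np
from scipy.signal import butter, lfilter

def speech_shaped_noise(n_samples, fs, cutoff=2000.0):
    """White noise low-pass filtered at 2 kHz; a 2nd-order Butterworth
    gives the -12 dB/octave rolloff described in the text."""
    noise = np.random.randn(n_samples)
    b, a = butter(2, cutoff / (fs / 2), btype="low")
    return lfilter(b, a, noise)

def mix_at_snr(speech, noise, snr_db):
    """Scale the masker so the speech-to-noise power ratio equals snr_db,
    then add the two signals (done before vocoding, as in the study)."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + gain * noise
```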

Signal processing

The acoustic 6-channel processors were constructed using TigerSpeech Technology from the Emily Shannon Fu Foundation [30]. The 6-band processor was chosen based on results from Bingabr et al. demonstrating that sentence-recognition performance in quiet changed dynamically when acoustic simulation was performed with 6-band processors [17]. For both EXP 1 and EXP 2, the input acoustic signal was band-pass filtered into six frequency bands using 4th-order Butterworth filters. The attenuation at the crossover point of adjacent filter bands was −3 dB. A fixed input frequency range (200–7,000 Hz) was used for the analysis bands. The corner frequencies of each band were determined by the Greenwood function [31] with a 22-mm, 6-electrode array (3.65-mm electrode spacing) and a simulated insertion depth of 29.7 mm for a 35-mm-long cochlea. The temporal envelope in each band was extracted by half-wave rectification and low-pass filtering (eighth-order Butterworth; −24 dB/octave) with a 160-Hz cutoff frequency. The envelope of each band was used to modulate a sinusoid (EXP 1) or band-pass filtered white noise (EXP 2) with a frequency matching the center frequency of the carrier band. The output carrier bands were upwardly shifted to simulate a 25-mm insertion depth for both ears with a 16-mm-long (i.e., the length of a typical electrode array for the Cochlear Americas Nucleus, AB (Advanced Bionics) HiRes 90K, and Med-El Sonata or Pulsar medium), 6-electrode array (2.67-mm electrode spacing). The 25-mm insertion depth was selected as a reference because the recommended insertion depth for commercialized cochlear implant devices is about 24–25 mm, which positions the electrode array at the middle of the cochlea (assuming that the total length of the cochlea is 35 mm and the length of the electrode array is 16 mm). Hence, the spectral envelope from the analysis frequency range (200–7,000 Hz with a 22-mm electrode array) was compressed onto the carrier frequency range (513–5,862 Hz with a 16-mm electrode array).
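The corner frequencies in Table 1 follow from the Greenwood function with the common human parameters (A = 165.4, a = 2.1, k = 0.88). A minimal sketch of this computation and of the core vocoding steps is given below; the helper names, the sampling rate, and the geometric-mean choice of carrier center frequency are our assumptions, not details from the paper.

```python
import numpy as np
from scipy.signal import butter, lfilter

COCHLEA_MM = 35.0  # total cochlear length assumed in the text

def greenwood_hz(dist_from_base_mm):
    """Greenwood (1990) human map, f = A*(10**(a*x) - k), with x the
    proportion of cochlear length measured from the apex."""
    x = (COCHLEA_MM - np.asarray(dist_from_base_mm, dtype=float)) / COCHLEA_MM
    return 165.4 * (10.0 ** (2.1 * x) - 0.88)

def corner_freqs(insertion_depth_mm, spacing_mm, n_bands=6):
    """n_bands + 1 corner frequencies: the deepest corner sits at the
    insertion depth, the rest step basally by the electrode spacing."""
    depths = insertion_depth_mm - spacing_mm * np.arange(n_bands + 1)
    return np.sort(greenwood_hz(depths))

analysis = corner_freqs(29.7, 3.65)  # ~[200, 427, ..., 7000] Hz (Table 1)
carrier = corner_freqs(25.0, 2.67)   # ~[513, 806, ..., 5862] Hz (Table 1)

def sine_vocode(signal, fs, corners_in, corners_out):
    """Core EXP 1 processing: band-pass analysis, half-wave rectification
    plus 160-Hz low-pass for the envelope, and a sinusoidal carrier at the
    center of the corresponding (compressed/shifted) carrier band."""
    t = np.arange(len(signal)) / fs
    out = np.zeros(len(signal))
    b_lp, a_lp = butter(4, 160.0 / (fs / 2), btype="low")  # envelope smoother
    for lo, hi, c_lo, c_hi in zip(corners_in[:-1], corners_in[1:],
                                  corners_out[:-1], corners_out[1:]):
        # SciPy band-pass of order N has 2N poles; N=2 gives a 4th-order filter
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = lfilter(b, a, signal)
        env = lfilter(b_lp, a_lp, np.maximum(band, 0.0))  # half-wave rectify
        fc = np.sqrt(c_lo * c_hi)  # geometric band center (our assumption)
        out += env * np.sin(2.0 * np.pi * fc * t)
    return out
```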

Note that for EXP 1, sine-wave vocoders were used to constrain channel interaction. Although phoneme and sentence recognition performance has been shown to be similar with sine-wave and noise-band vocoders [32], sine-wave vocoders have been shown to better emulate real CI performance for pitch-related speech tasks [18]. Sine-wave carriers also offer better stimulation site specificity and better temporal envelope representation than noise-band carriers. For EXP 1, a presentation level of 65 dB(A) was chosen as a reference for the left ear (L65) while the level was varied at 65 (R65), 60 (R60), 55 (R55), and 45 (R45) dB(A) for the right ear. Finally, the outputs from all bands were summed.

For EXP 2, noise-band vocoders were used to better simulate the spread of excitation for each channel within the cochlea. Channel interaction was simulated by altering the slope of the carrier noise bands from steep (36 dB/octave, the least channel interaction) to shallow (6 dB/octave, the greatest channel interaction). A filter slope of 36 dB/octave was chosen as a reference for the left ear (L36); for the right ear, filter slopes in the carrier bands varied at 36 (R36), 24 (R24), 18 (R18), 12 (R12), and 6 (R6) dB/octave. This manipulation simulated the different degrees of speech smearing that would be produced by channel interaction. Spectral peaks, however, remained intact and matched across ears because the carrier bands were identical in both ears. As discussed in the Introduction, these filter slopes were selected based on the results of the Bingabr et al. study [17]. Parameters applied to the simulations for both experiments are given in Table 1.

Table 1. Detailed parameters used in the design of the 6-band vocoder in EXP 1 and EXP 2

| Parameter | EXP 1 | EXP 2 |
| --- | --- | --- |
| Number of channels | 6 | 6 |
| Analysis bands: corner frequencies [Hz] | 200, 427, 803, 1,426, 2,458, 4,167, 7,000 | 200, 427, 803, 1,426, 2,458, 4,167, 7,000 |
| Analysis bands: insertion depth | 29.7 mm | 29.7 mm |
| Analysis bands: length of electrode array | 22 mm | 22 mm |
| Analysis bands: linear spacing | 3.65 mm | 3.65 mm |
| Analysis bands: filter slope [dB/octave] | 24 | 36 |
| Carrier bands: corner frequencies [Hz] | 513, 806, 1,230, 1,843, 2,729, 4,009, 5,862 | 513, 806, 1,230, 1,843, 2,729, 4,009, 5,862 |
| Carrier bands: insertion depth | 25 mm | 25 mm |
| Carrier bands: length of electrode array | 16 mm | 16 mm |
| Carrier bands: linear spacing between bands | 2.66 mm | 2.66 mm |
| Carrier type | Sine wave | White noise |
| Carrier bands: filter slope [dB/octave] | 24 | Left ear: 36; Right ear: 36, 24, 18, 12, 6 |
| Presentation level [dB(A)] | Left ear: 65; Right ear: 65, 60, 55, 45 | 65 |
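One way to realize a given carrier-band skirt slope is to shape white noise in the frequency domain. The sketch below is an assumed approach, not the paper’s exact implementation: it attenuates energy outside a band at a chosen dB/octave rate, so a 6 dB/octave skirt lets noise energy spill into neighboring bands (more channel interaction) while a 36 dB/octave skirt keeps it contained.

```python
import numpy as np

def sloped_band_noise(n_samples, fs, f_lo, f_hi, slope_db_oct):
    """White-noise carrier band whose skirts roll off at slope_db_oct
    outside [f_lo, f_hi]; shallower slopes simulate greater channel
    interaction by letting energy overlap adjacent bands."""
    spec = np.fft.rfft(np.random.randn(n_samples))
    f = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    atten_db = np.zeros_like(f)
    below = (f > 0) & (f < f_lo)
    above = f > f_hi
    atten_db[below] = slope_db_oct * np.log2(f_lo / f[below])  # octaves below
    atten_db[above] = slope_db_oct * np.log2(f[above] / f_hi)  # octaves above
    spec *= 10.0 ** (-atten_db / 20.0)
    spec[0] = 0.0  # discard DC
    return np.fft.irfft(spec, n=n_samples)

# Example: the most apical carrier band with the shallowest (R6) skirts
band = sloped_band_noise(n_samples=16000, fs=16000,
                         f_lo=513.0, f_hi=806.0, slope_db_oct=6.0)
```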

Procedure

To familiarize the subjects with vocoded stimuli, all subjects received familiarization with 2 lists of IEEE sentences for each of L65 alone, R45 alone, and L65+R45 in EXP 1 (6 lists in total), or 2 lists for each of L36 alone, R6 alone, and L36+R6 in EXP 2 (6 lists in total). Sentence lists used for familiarization were excluded from the formal test. In this article, “binaural summation benefit” refers to the difference between bilateral performance and the performance of the better performing ear alone when speech and noise are presented from the front.

For all listening modes, stimuli were delivered via an audio interface (Edirol UA-25) to headphones (Sennheiser HDA 200). Subjects were tested in a double-walled, sound-treated booth (IAC). During testing, a sentence list was randomly selected from a total of 60 possible lists (excluding the 12 lists used only for familiarization), and sentences were also randomly selected from within the list and presented to the subject. The subject was instructed to repeat the sentence as accurately as possible and to guess if unsure, but was cautioned not to provide the same response for every stimulus. Speech performance was measured over 2 runs. No training or trial-by-trial feedback was provided during testing. Performance scores were computed by dividing the number of correctly identified words by the total number of words presented.
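For clarity, a minimal sketch of this scoring and of the benefit measure defined above follows; the literal word-matching rule is our assumption, since the paper does not spell out how word variants were scored.

```python
def percent_words_correct(responses, targets):
    """Percent correct: correctly identified words divided by the total
    number of words presented (literal word matching assumed)."""
    correct = total = 0
    for response, target in zip(responses, targets):
        resp_words = response.lower().split()
        for word in target.lower().split():
            total += 1
            if word in resp_words:
                correct += 1
                resp_words.remove(word)  # credit each response word once
    return 100.0 * correct / total

def summation_benefit(score_left, score_right, score_both):
    """Binaural summation benefit: bilateral score minus the score of the
    better performing ear alone (speech and noise from the front)."""
    return score_both - max(score_left, score_right)
```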

For EXP 1, the total experimental conditions per listener included two SNRs (Q and +5 dB), one reference presentation level for the left ear (L65), 4 presentation levels for the right ear (R65, R60, R55, and R45), and 4 combined (L+R) listening modes across ears (36 IEEE lists were used per listener). The lists of IEEE sentences were randomized across subjects.

For EXP 2, the total experimental conditions per listener included two SNRs (Q and +5 dB), one setting for channel interaction for the left ear (L36), 5 settings for channel interactions for the right ear (R36, R24, R18, R12, and R6), and 5 combined (i.e., L+R) listening modes across ears (44 IEEE lists were used per listener). Some of the same sentence lists were presented in each experiment; however, this repetition did not affect the results of the study as subjects varied between experiments (i.e., no one subject heard the same sentence list more than once).

RESULTS

Figure 1 shows the effect of presentation levels across ears on sentence recognition (EXP 1). For each panel, the unilateral performance for the reference presentation level (L65) is shown with a filled circle, the unilateral performance for each presentation level in the right ear (R45–R65) is shown with an open square, and binaural performance is denoted with a dotted line. The benefit was consistently greater in noise than in quiet, regardless of interaural presentation level conditions. To avoid violating the assumptions of analysis of variance, an arcsine transformation was performed. Two-way repeated measures analyses of variance showed a significant effect of SNR (p<0.001) and listening mode (p<0.001) for each panel in Figure 1. The analyses also showed no significant interactions between SNR and listening mode (p>0.05) for all presentation levels. Post-hoc pair-wise comparisons (Sidak method) showed a significant binaural summation benefit (11 to 20 percentage points) in noise across presentation level conditions, as indicated by the asterisks (p<0.05). In contrast, there was no significant benefit (<5 percentage points) in quiet across presentation level conditions (p>0.05).

Figure 1.

Binaural presentation levels and binaural benefit in sentence perception. Data are shown for both ears (dotted lines), the right ear alone (filled squares), and the left ear alone (reference at 65 dB(A); filled circles) for all conditions. The error bars show standard error. The asterisks show significant differences between the better ear alone and both ears (p<0.05).
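The arcsine transformation applied before the analyses of variance is a standard variance-stabilizing step for proportion scores; a sketch of the common form (our assumption, as the paper does not state the exact variant) is:

```python
import numpy as np

def arcsine_transform(proportion):
    """Variance-stabilizing transform, 2*arcsin(sqrt(p)), applied to
    proportion-correct scores before repeated-measures ANOVA."""
    p = np.clip(np.asarray(proportion, dtype=float), 0.0, 1.0)
    return 2.0 * np.arcsin(np.sqrt(p))
```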

Figure 2 shows the effect of channel interactions across ears on sentence recognition (EXP 2). For each panel, unilateral performance for the reference channel interaction (L36) is shown with a filled circle, the unilateral performance for each channel interaction in the right ear (R6–R36) is shown with an open square, and binaural performance is denoted with a dotted line. The binaural summation benefit was consistently greater in noise than in quiet. After an arcsine transformation, two-way repeated measures analyses of variance were performed, with SNR (+5 dB and quiet) and listening mode (L alone, R alone, and L+R) as factors and the test run as the repeated measure. Speech recognition scores were significantly higher in quiet than in noise (p<0.0001) for all channel interaction conditions. There was also a significant effect of listening mode (p<0.0001) for all channel interaction conditions. The analyses showed no significant interactions between SNR and listening mode (p>0.05) except for the L36+R6 condition (top left panel) (p<0.001). Post-hoc pair-wise comparisons (Sidak method) showed that a significant binaural summation benefit (10 to 16 percentage points) occurred in noise (p<0.05) across channel interaction conditions, as indicated by the asterisks. In quiet, however, only two binaural channel interaction conditions (L36+R18 and L36+R24) produced a significant binaural summation benefit (4 to 7 percentage points) (p<0.05).

Figure 2.

Binaural channel interactions and binaural benefit in sentence perception. The left and right ears are denoted by L and R, and the numbers represent the amount of channel interaction (36 dB/octave, the least channel interaction; 6 dB/octave, the greatest). Data are shown for both ears (dotted lines), the right ear alone (filled squares), and the left ear alone as a reference with a 36-dB/octave filter slope (filled circles) for all conditions. The error bars show standard error. The asterisks show significant differences between the better ear alone and both ears (p<0.05).

DISCUSSION

The purpose of the current study was to determine the impact on binaural summation benefit when presentation levels and channel interaction were varied across ears. This topic is important, given that binaural summation benefit can have implications for programming in bilateral CI users. Possible implications for programming stem from previous studies in which researchers found that perceptual loudness doubled for certain stimuli in binaural CI users [13]. In this simulation study of bilateral CIs, a significant binaural summation benefit occurred in noise while little or inconsistent benefit occurred in quiet, irrespective of differences in the presentation level (EXP 1) and the degree of channel interaction (EXP 2) across ears. As the current experiment held other factors constant and ensured that no interaural spectral mismatch was present in the stimuli, the results of EXP 1 suggest that, in noise, exactly matched interaural loudness might not be a factor in binaural summation. The results of EXP 2 indicate that factors other than channel interaction are important for binaural summation benefit in noise.

The results of EXP 1 show that balancing interaural overall loudness was not a major factor in creating a significant binaural benefit in noise. This finding is consistent with the results of Wackym et al. [15], who investigated the effect of matched presentation levels on word and sentence recognition in unilateral and bilateral CI users. They found that as testing conditions became more challenging (in noise at a softer presentation level), there were steady increases in binaural benefit. The results of EXP 1 suggest that speech information processed by each ear in noise is double-coded due to summation or redundancy, but that in quiet the speech cues coded by the better performing ear alone are dominant. This result also suggests that binaural summation benefit in speech perception might not be affected by loudness adjustment across ears in bilateral CI users. However, binaural summation processing might be affected if loudness mapping were performed for each electrode, which the present study did not simulate. For example, stimulation within an array is loudness balanced by sweeping across sets of 5 adjacent electrodes at the most comfortable loudness level, and the stimulation level is adjusted for any electrode that is not perceptually matched in loudness to its neighbors. Stimulation levels are then loudness-balanced across ears by adjusting the stimulation level of, say, electrode 10 on the right array (in 0.5-dB steps) to match the loudness of electrode 10 on the left array; the stimulation levels of all electrodes in the right array are then globally adjusted according to this loudness adjustment.

In EXP 2, the simulation of channel interaction generated speech smearing, but spectral peaks in each carrier band were intact and matched across ears because no interaural spectral mismatch existed. These findings may indicate that binaural summation benefit relies on processing matched spectral peaks across ears rather than on processing spectral details. This interpretation is consistent with the results of Yoon et al. [2], who found that significant binaural summation interference (a decrease in binaural performance relative to the better performing ear alone) occurred when the frequencies at spectral peaks across ears were separated by more than 894 Hz toward the base. In a separate study, Yoon et al. [3] observed significant binaural benefits of squelch, redundancy, and head shadow for the spectrally matched condition (25-mm insertion depth) across ears compared with the mismatched condition (25-mm insertion depth in one ear and 22-mm in the opposite ear), regardless of masker locations and SNRs.

The results of EXP 2, together with the evidence from previous studies on the importance of spectral matching, suggest that binaural benefit was affected by the presence of noise but not by channel interaction. Overall, the results indicate that matching spectral bands and peaks across ears may be as important a goal as increasing the number of independent stimulation sites for bilateral cochlear implant users.

In the study of Crew et al. [20], melodic pitch perception was negatively affected when spectral envelope cues were weakened by CI signal processing and further by channel interaction. As such, increasing the number of channels may not sufficiently enhance spectral contrasts between notes. The amount of channel interaction was constant across subjects and channels within each condition; in an implanted CI, channel interaction may vary greatly across CI users and across electrode locations within a CI user. Interestingly, mean performance of real CI users for exactly the same task and stimuli was 61% correct [4], most comparable to mean simulation performance with slight channel interaction. Note that NH subjects in such simulations have no prior experience listening to vocoded sounds, compared with years of experience with electric stimulation for real CI users. With more experience, NH performance would probably improve, but the general trend across conditions would most likely remain. It is possible that the effects of channel interaction may explain some of the larger variability observed in Zhu et al. [4], with some CI users experiencing moderate-to-severe channel interaction and others experiencing very little. Of course, many other factors can contribute to CI users’ melodic pitch perception (e.g., acoustic frequency allocation, electrode location, pattern of nerve survival, and experience).

One common finding from both EXP 1 and EXP 2 is the robustness of the binaural summation benefit in noise, which has been evidenced in previous studies. Yoon et al. [1] measured consonant, vowel, and sentence recognition in bilateral CI users and reported that mean binaural summation advantages at +5 dB and +10 dB SNRs were approximately 7 percentage points larger than those in quiet (statistically significant). Wackym et al. [15] reported a comparable difference in binaural summation benefit between quiet and noise (+8 dB SNR with speech-weighted and talker-babble maskers) in bilateral CI users. A similar trend was reported for binaural amplification by Boymans et al. [33], who found that binaural amplification provided not only a greater binaural summation benefit for speech perception in noise than in quiet, but also subjective benefits such as reduced listening effort. In bimodal hearing (a hearing aid in one ear and a CI in the contralateral ear), multiple studies have reported that the benefit for sentence recognition is approximately 7% to 15% higher in noise than in quiet [34–38], even though the magnitude of the benefit varies with the type of noise masker (i.e., speech-shaped versus talker-babble) [39].

It is possible that, had different reference values been used for the two experiments, the trends in the current results would be different. In the current study, a single selection was used as a reference: the left ear alone at a presentation level of 65 dB(A) (L65) for EXP 1, and the left ear alone with a filter slope of 36 dB/octave (L36) for EXP 2. For EXP 1, since unilateral performance measured with R65 and R60 is comparable with that measured with the current reference (L65), it is reasonable to expect similar results if 60 or 65 dB(A) were used as a reference. However, because performance with R45 alone or R55 alone differed from performance with L65, the unmatched presentation levels across ears might have different effects on binaural benefit if those levels were used as references.

While the 6-band condition is representative, it would be informative to test other conditions, such as 12 and 16 bands. A 6-channel acoustic simulation of a CI already takes into account the effect of channel interaction; if further channel interaction is imposed on 6-band processing, the spectral resolution is reduced to only 3–4 effective channels, which would not be a true reflection of current CIs. There may be an optimal tradeoff between the number of channels and the degree of channel interaction, as spread of excitation also occurs to some extent in acoustic hearing.

The results of the present study should be interpreted with caution because, as discussed in the Introduction, acoustic simulation of CI processing has several limitations. It is possible that this simulation study overestimates the binaural summation benefit, because NH listeners have extensive experience with two-ear listening and almost none with single-ear listening; CI listeners, with extensive experience in single-ear listening, cannot be expected to show the same amount of binaural summation benefit as the NH listeners in the present study. Another issue is that CI users have a smaller dynamic range (3–5 dB in terms of stimulation intensity) than normal-hearing listeners (20–120 dB) [40]. A nonlinear compression operation is normally used in CIs to map the wider acoustic dynamic range (30–80 dB) into the 3–5 dB electric dynamic range, and this compression, with its logarithmic growth function, also introduces distortions. Finally, CI users have poorer temporal processing ability, as measured with amplitude modulation detection: they detect temporal modulation at modulation frequencies below 300 Hz and are most sensitive to 80- to 100-Hz modulation [41], whereas normal-hearing listeners show a low-pass characteristic for modulation frequencies up to about 800 Hz [42].
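A minimal sketch of the kind of logarithmic acoustic-to-electric mapping referred to above follows; the function name, the input normalization, and the steepness constant are assumptions for illustration, as compression details are device specific.

```python
import numpy as np

def log_compress(envelope, t_level, c_level, steepness=512.0):
    """Map acoustic envelope values (normalized to [0, 1]) onto the narrow
    electric range [t_level, c_level] with a logarithmic growth function,
    as in typical CI amplitude mapping; steepness is an assumed constant."""
    env = np.clip(np.asarray(envelope, dtype=float), 0.0, 1.0)
    gain = np.log1p(steepness * env) / np.log1p(steepness)
    return t_level + (c_level - t_level) * gain
```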

CONCLUSIONS

The data collected in this study match previous findings that binaural summation is affected by the presence of noise [15], but not significantly by interaural channel interaction or by the matching of interaural loudness. In quiet, however, speech information is fully coded by the better performing ear alone, which leads to little or no binaural summation benefit. While the current study was conducted using simulations of these factors, future studies should be conducted with bilateral cochlear implant users. Repeating this study with bilateral cochlear implant users would provide the data needed to compare the less-than-perfect binaural summation found in normal-hearing listeners [43] with the binaural summation of cochlear implant users with matched and unmatched perceptual loudness between ears. The results of current and future studies could have implications for the programming of bilateral cochlear implant users.

Notes

The authors have no conflicts of interest.

Acknowledgements

We would like to thank our participants for their time and effort. We would also like to thank Justin Aronoff for his editorial assistance. This work was supported by NIH grants R01-DC004993 and R01-DC004792.

References

1. Yoon YS, Li YX, Kang HY, Fu QJ. The relationship between binaural benefit and difference in unilateral speech recognition performance for bilateral cochlear implant users. Int J Audiol [Internet] 2011a;50(8):554–565.
2. Yoon YS, Liu A, Fu QJ. Binaural benefit for speech recognition with spectral mismatch across ears in simulated electric hearing. J Acoust Soc Am [Internet] 2011b;130(2):EL94–EL100.
3. Yoon YS, Shin YR, Fu QJ. Binaural benefit with and without a bilateral spectral mismatch in acoustic simulations of cochlear implant processing. Ear Hear [Internet] 2013;34(3):273–279.
4. Zhu M, Chen B, Galvin JJ, Fu QJ. Influence of pitch, timbre and timing cues on melodic contour identification with a competing masker. J Acoust Soc Am [Internet] 2011;130:3562–3565.
5. Snel-Bongers J, Briaire JJ, Vanpoucke FJ, Frijns JH. Spread of excitation and channel interaction in single- and dual-electrode cochlear implant stimulation. Ear Hear [Internet] 2012;33(3):367–376.
6. Tang Q, Benitez R, Zeng FG. Spatial channel interactions in cochlear implants. J Neural Eng [Internet] 2011;8(4):046029.
7. Won JH, Humphrey EL, Yeager KR, Martinez AA, Robinson CH, Mills KE, et al. Relationship among the physiologic channel interactions, spectral-ripple discrimination, and vowel identification in cochlear implant users. J Acoust Soc Am [Internet] 2014;136(5):2714–2725.
8. Baudhuin J, Cadieux J, Firszt JB, Reeder RM, Maxson JL. Optimization of programming parameters in children with the Advanced Bionics cochlear implant. J Am Acad Audiol [Internet] 2012;23(5):302–312.
9. Busby PA, Arora K. Effects of threshold adjustment on speech perception in Nucleus cochlear implant recipients. Ear Hear [Internet] 2015;37(3):303–311.
10. Plant KL, van Hoesel RJ, McDermott HJ, Dawson PW, Cowan RS. Clinical outcomes for adult cochlear implant recipients experiencing loss of usable acoustic hearing in the implanted ear. Ear Hear [Internet] 2015;36(3):338–356. Erratum in: Ear Hear 2015;36(5):618.
11. Baskent D. Speech recognition in normal hearing and sensorineural hearing loss as a function of the number of spectral channels. J Acoust Soc Am [Internet] 2006;120(5 Pt 1):2908–2925.
12. Kwon BJ. Effects of electrode separation between speech and noise signals on consonant identification in cochlear implants. J Acoust Soc Am[Internet] 2009;126(6):3258–3267.
13. van Hoesel RJ. Exploring the benefits of bilateral cochlear implants. Audiol Neurootol [Internet] 2004;9:234–246.
14. Litovsky RY, Parkinson A, Arcaroli J. Spatial hearing and speech intelligibility in bilateral cochlear implant users. Ear Hear [Internet] 2009;30(4):419–431.
15. Wackym PA, Runge-Samuelson CL, Firszt JB, Alkaf FM, Burg LS. More challenging speech-perception tasks demonstrate binaural benefit in bilateral cochlear implant users. Ear Hear [Internet] 2007;28(2 Suppl):80S–85S.
16. Friesen LM, Shannon RV, Baskent D, Wang X. Speech recognition in noise as a function of the number of spectral channels: Comparison of acoustic hearing and cochlear implants. J Acoust Soc Am [Internet] 2001;110(2):1150–1163.
17. Bingabr M, Espinoza-Varas B, Loizou PC. Simulating the effect of spread of excitation in cochlear implants. Hear Res [Internet] 2008;241(1–2):73–79.
18. Luo X, Fu QJ, Galvin JJ III. Vocal emotion recognition by normal-hearing listeners and cochlear implant users. Trends Amplif [Internet] 2007;11(4):301–315.
19. Kong YY, Cruz R, Jones JA, Zeng FG. Music perception with temporal cues in acoustic and electric hearing. Ear Hear [Internet] 2004;25:173–185.
20. Crew JD, Galvin JJ III, Fu QJ. Channel interaction limits melodic pitch perception in simulated cochlear implants. J Acoust Soc Am [Internet] 2012;132(5):EL429–EL435.
21. Favre E, Pelizzone M. Channel interactions in patients using the Ineraid multichannel cochlear implant. Hear Res [Internet] 1993;66(2):150–156.
22. Gfeller K, Turner C, Mehr M, Woodworth G, Fearn R, Knutson J, et al. Recognition of familiar melodies by adult cochlear implant recipients and normal-hearing adults. Cochlear Implants Int [Internet] 2002;3:29–53.
23. McDermott HJ. Music perception with cochlear implants: A review. Trends Amplif [Internet] 2004;8:49–82.
24. Bierer JA. Probing the electrode-neuron interface with focused cochlear implant stimulation. Trends Amplif [Internet] 2010;14(2):84–95.
25. Shannon RV, Cruz RJ, Galvin JJ III. Effect of stimulation rate on cochlear implant users’ phoneme, word and sentence recognition in quiet and in noise. Audiol Neurootol [Internet] 2011;16(2):113–123.
26. Litovsky R, Parkinson A, Arcaroli J, Sammeth C. Simultaneous bilateral cochlear implantation in adults: a multicenter clinical study. Ear Hear [Internet] 2006;27:714–731.
27. Fu QJ, Nogaki G, Galvin JJ III. Auditory training with spectrally shifted speech: implications for cochlear implant patient auditory rehabilitation. J Assoc Res Otolaryngol [Internet] 2005;6(2):180–9.
28. Liu C, Fu QJ. Estimation of vowel recognition with cochlear implant simulations. IEEE Trans Biomed Eng [Internet] 2007;54(1):74–81.
29. Rothauser EH, Chapman ND, Guttman N, Hecker MHL, Nordby KS, Silbiger HR, et al. IEEE recommended practice for speech quality measurements. IEEE Trans Audio Electroacoust [Internet] 1969;17:227–246.
30. TigerSpeech Technology. Innovative speech software [Internet]. Los Angeles: Emily Shannon Fu Foundation; 2005–2012. Available from: http://www.tigerspeech.com/.
31. Greenwood DD. A cochlear frequency-position function for several species - 29 years later. J Acoust Soc Am [Internet] 1990;87:2592–2605.
32. Dorman MF, Loizou PC, Rainey D. Speech intelligibility as a function of the number of channels of stimulation for signal processors using sine-wave and noise-band outputs. J Acoust Soc Am [Internet] 1997;102:2403–2411.
33. Boymans M, Goverts ST, Kramer SE, Festen JM, Dreschler WA. A prospective multi-centre study of the benefits of bilateral hearing aids. Ear Hear [Internet] 2008;29(6):930–941.
34. Kong YY, Stickney GS, Zeng FG. Speech and melody recognition in binaurally combined acoustic and electric hearing. J Acoust Soc Am [Internet] 2005;117(3 Pt 1):1351–1361.
35. James CJ, Fraysse B, Deguine O, Lenarz T, Mawman D, Ramos A, et al. Combined electroacoustic stimulation in conventional candidates for cochlear implantation. Audiol Neurootol [Internet] 2006;11(Suppl 1):57–62.
36. Chang J, Bai J, Zeng FG. Unintelligible low-frequency sound enhances simulated cochlear-implant speech recognition in noise. IEEE Trans Biomed Eng [Internet] 2006;53:2598–2601.
37. Dorman MF, Gifford RH, Spahr AJ, McKarns SA. The benefits of combining acoustic and electric stimulation for the recognition of speech, voice and melodies. Audiol Neurootol [Internet] 2008;13(2):105–112.
38. Gifford RH, Dorman MF, McKarns SA, Spahr AJ. Combined electric and contralateral acoustic hearing: word and sentence recognition with bimodal hearing. J Speech Lang Hear Res [Internet] 2007;50(4):835–843.
39. Brown CA, Bacon SP. Low-frequency speech cues and simulated electric-acoustic hearing. J Acoust Soc Am [Internet] 2009;125(3):1658–1665.
40. Nie K, Drennan W, Rubinstein J. In: Ballenger’s Otorhinolaryngology Head and Neck Surgery. 17th ed. Shelton, CT: People’s Medical Publishing House; 2009. p. 389–394.
41. Shannon RV. Temporal modulation transfer functions in patients with cochlear implants. J Acoust Soc Am [Internet] 1992;91:2156–2164.
42. Viemeister NF. Temporal modulation transfer functions based upon modulation thresholds. J Acoust Soc Am [Internet] 1979;66:1364–1380.
43. Moore BCJ, Glasberg BR. Modeling binaural loudness. J Acoust Soc Am [Internet] 2007;121(3):1604–1612.

