Original Article

Performance of Subjects Fit With the Advanced Bionics CII and Nucleus 3G Cochlear Implant Devices

Anthony J. Spahr, MS; Michael F. Dorman, PhD

From the Department of Speech and Hearing Sciences, Arizona State University, Tempe. The authors have no relevant financial interest in this article.


Arch Otolaryngol Head Neck Surg. 2004;130(5):624-628. doi:10.1001/archotol.130.5.624.

Objective  To determine if subjects who used different cochlear implant devices and who were matched on consonant-vowel-consonant (CNC) word identification in quiet would show differences in performance on speech-based tests of spectral and temporal resolution, speech understanding in noise, or speech understanding at low sound levels.

Design  The performance of 15 subjects fit with the CII Bionic Ear System (CII Bionic Ear behind-the-ear speech processor with the Hi-Resolution sound processing strategy; Advanced Bionics Corporation) was compared with the performance of 15 subjects fit with the Nucleus 24 electrode array and ESPrit 3G behind-the-ear speech processor with the Advanced Combination Encoder speech coding strategy (Cochlear Corporation).

Subjects  Thirty adults with late-onset deafness and above-average speech perception abilities who used cochlear implants.

Main Outcome Measures  Vowel recognition, consonant recognition, sentences in quiet (74, 64, and 54 dB SPL [sound pressure level]) and in noise (+10 and +5 dB SNR [signal-to-noise ratio]), voice discrimination, and melody recognition.

Results  Group differences in performance were significant in 4 conditions: vowel identification, difficult sentence material at +5 dB and +10 dB SNR, and a measure that quantified performance in noise and low input levels relative to performance in quiet.

Conclusions  We have identified tasks on which there are between-group differences in performance for subjects matched on CNC word scores in quiet. We suspect that the differences in performance are due to differences in signal processing. Our next goal is to uncover the signal processing attributes of the speech processors that are responsible for the differences in performance.

The cochlear implants available in the United States differ in terms of signal processing strategies and in the hardware that implements those strategies. It is likely that some of these differences in hardware and software lead to differences in subject performance. Herein we report the outcomes of an experiment in which the perception of speech, voice, and music was assessed (1) in subjects who used the Advanced Bionics Corporation (Sylmar, Calif) Hi-Focus electrode array with positioner, CII Bionic Ear behind-the-ear speech processor, and Hi-Resolution sound processing strategy (CII subjects) and (2) in subjects who used the Cochlear Corporation (Englewood, Colo) Nucleus 24 electrode array, ESPrit 3G behind-the-ear speech processor (3G), and Advanced Combination Encoder (ACE) speech coding strategy (3G subjects). Our aim was to find aspects of speech understanding that are better conveyed by one speech processing scheme than the other and to use that information to design better speech processors.

To achieve this goal, we matched CII and 3G subjects on single-syllable word scores in quiet (consonant-vowel-consonant [CNC] words). We then tested these subjects on material that targeted detailed spectral resolution (vowel recognition and place of consonant articulation), detailed resolution of low frequencies (voice discrimination and melody recognition), sentence understanding in noise, and sentence understanding at low signal levels. At issue was whether devices that provided equal levels of speech understanding in quiet would provide different levels of speech understanding for specific frequency-based or temporally based tasks or for speech understanding in more difficult listening environments.

It is relevant to point out the differences between our design and the usual design for a device comparison experiment, ie, that of a prospective randomized clinical trial. In a randomized clinical trial the issue is whether one device provides, on average, a higher level of performance than another device. A common outcome measure is the mean speech perception score for the population samples. When using this research design, biographic variables, such as duration of deafness, preimplant speech understanding, and duration of device use, are relevant because they are correlated with performance on tests of speech understanding.

In our experimental design, subjects who used the CII and 3G processors were matched on performance in quiet, ie, on CNC word score, and on age. The CNC word test has proved to be sensitive to performance level and serves as a reasonable criterion for matching subjects. Age is a relevant matching factor because there is evidence that age has a large effect on speech perception in noise.1 Other biographic variables known to affect performance are less likely to be an issue in this design because their influence is expressed through performance, and performance level was our primary matching criterion. That said, matching on age and CNC word scores yielded groups that did not differ, on average, in duration of deafness, preimplant hearing, preimplant speech understanding, or duration of experience with electrical hearing.

By matching subjects on a very difficult test of speech understanding in quiet, ie, CNC words, we biased our experiment against finding any difference in performance on other test materials. However, we did find differences in performance and those outcomes are described below.

This study was approved by the institutional review board at Arizona State University. All subjects gave informed consent prior to participation.

STUDY POPULATION

Fifteen subjects who used the CII processor and 15 subjects who used the 3G processor were selected for this study. All CII subjects were implanted with the Hi-Focus electrode array with positioner and used the CII Bionic Ear behind-the-ear speech processor with the Hi-Resolution system. The number of active electrodes varied from 14 to 16. Pulse rates varied from 725 to 5155 pulses per second (pps) per channel (mean [SD], 211 [1410]). Six subjects were programmed in continuous interleaved sampler (CIS) mode; 9 were programmed in paired CIS mode.

All of the 3G subjects were implanted with the Nucleus 24 electrode array and used an ACE encoding scheme with either an 8- or 12-of-20 channel-picking scheme. The pulse rates varied between 500 and 1200 pps (mean [SD], 913 [246]). All subjects used the ESPrit 3G behind-the-ear speech processor.

All subjects were asked to use the program they most commonly used and were not allowed to change programs, or device settings, during the course of the experiments. Subjects were recruited by letter from clinics in the United States and Canada. Eleven US clinics and 3 Canadian clinics provided subjects for this project. All subjects were flown to Arizona for testing. All subjects were tested by a single examiner using one set of test equipment.

The 2 groups of 15 subjects each were created from a larger sample of 59 subjects. The first matching variable was CNC score. As shown in Figure 1, scores were matched within 4 percentage points (2 words on a 50-item CNC list). The mean score was 66% correct for both groups. The range of CNC word scores was 46% to 86% correct for CII subjects and 46% to 88% correct for 3G subjects. The second matching variable was age. Because we had a larger sample of 3G subjects than CII subjects, we found the closest age matches for the CII subjects from the larger group of 3G subjects. The mean (SD) age of subjects in the 2 groups did not differ significantly (CII: 56.1 [12.2] years; range, 36.3-83.4 years; 3G: 56.7 [13.5] years; range, 28.9-75.7 years). Although we did not explicitly match on other biographic variables, there were no differences between the groups in the mean (SD) duration of experience with electrical stimulation (CII: 1.46 [0.38] years; range, 1.0-2.5 years; 3G: 2.03 [1.53] years; range, 0.2-5.0 years), duration of deafness (CII: 17.5 [19.7] years; range, 0.2-53.9 years; 3G: 14.1 [10.4] years; range, 1.1-32.8 years), preimplant pure-tone thresholds for the implanted ear, averaged over 0.5, 1, and 2 kHz (CII: 100.8 [9.24] dB HL [hearing level]; range, 83-113 dB HL; 3G: 100.3 [16.24] dB HL; range, 60-120 dB HL), and the preimplant level of speech understanding (CNC words) (CII: 9.79 [11.6] words; range, 0-36 words; 3G: 4.86 [4.75] words; range, 0-12 words).
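
For readers who want the pairing logic concretely, the selection procedure described above can be sketched as a greedy match: require CNC scores within 4 percentage points, then take the closest available age from the larger 3G pool. This is a minimal illustration of our reading of the procedure, not the authors' actual software; the cnc and age field names are hypothetical.

```python
def match_groups(cii_subjects, g3_pool, max_cnc_gap=4.0):
    """Pair each CII subject with an unused 3G subject whose CNC score
    is within max_cnc_gap percentage points, preferring the closest age.
    Subjects are dicts with hypothetical 'cnc' and 'age' keys."""
    pairs, used = [], set()
    for s in cii_subjects:
        candidates = [
            (abs(s["age"] - t["age"]), i)
            for i, t in enumerate(g3_pool)
            if i not in used and abs(s["cnc"] - t["cnc"]) <= max_cnc_gap
        ]
        if not candidates:
            raise ValueError("no 3G subject matches within the CNC window")
        _, best = min(candidates)  # smallest age gap among CNC matches
        used.add(best)
        pairs.append((s, g3_pool[best]))
    return pairs
```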

Figure 1.

Consonant-vowel-consonant word scores for 15 subjects using the Advanced Bionics Corporation Hi-Focus electrode array with positioner, CII Bionic Ear behind-the-ear speech processor, and Hi-Resolution sound processing strategy (CII) and 15 subjects using the Cochlear Corporation Nucleus 24 electrode array, ESPrit 3G behind-the-ear speech processor, and Advanced Combination Encoder speech coding strategy (3G). Each circle indicates the score for 1 subject.

TEST MATERIALS

To ensure that subjects from one group did not have more experience with the test materials than subjects from the other group, a new battery of tests was created.

AzBio Sentences. Five talkers (2 male and 3 female) recorded 500 sentences ranging in length from 6 to 10 words. The talkers were instructed to speak in a conversational fashion and to avoid both the "clear speech" mode used for the Hearing In Noise Test (HINT) sentences and the exaggerated "clear speech" mode used for the City University of New York (CUNY) sentences. All sentences were normalized to approximately equal sound pressure level (SPL). The sentences were then processed with a 5-channel cochlear implant simulation and presented to 10 normal-hearing subjects for identification. Mean percent correct scores were calculated for each of the 500 sentences. Based on those intelligibility scores, lists of 40 sentences each were constructed using sentences produced by 4 talkers (2 male and 2 female). The mean intelligibility of the lists (for normal-hearing subjects listening to the 5-channel simulation) was 89% correct. The lists differed in intelligibility by less than 1%.

One 40-sentence list was used for each of the following conditions: sentences in quiet at 74, 64, and 54 dB SPL and sentences in noise at +10 and +5 dB SNR (signal-to-noise ratio). Subjects were asked to repeat back the sentences and were encouraged to guess when unsure. All individual sentences were scored as words correct and an overall percent correct score was computed.
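
As a rough sketch of words-correct scoring over a sentence list, the function below computes an overall percent correct score; it is illustrative only and assumes a stricter matching rule (exact, in-order word match) than a clinician would apply when scoring spoken responses.

```python
def percent_words_correct(responses, targets):
    """Overall words-correct score for a sentence list: total words
    repeated correctly divided by total words presented, times 100."""
    correct = total = 0
    for said, target in zip(responses, targets):
        target_words = target.lower().split()
        said_words = said.lower().split()
        total += len(target_words)
        correct += sum(a == b for a, b in zip(said_words, target_words))
    return 100.0 * correct / total

# Example: 3 of 4 target words repeated correctly -> 75.0
percent_words_correct(["the boy ran home"], ["the boy walked home"])
```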

Talker Discrimination.2 Subjects were asked to discriminate between male and female voices and between same-sex voices. The stimuli were drawn from a digital database developed at the Speech Research Laboratory at Indiana University, Bloomington. A total of 108 words produced by 5 males and 5 females were selected. Subjects were presented with pairs of words. Within each condition, half of the pairings were produced by the same talker and half were produced by different talkers. The words in the pairings always differed, eg, one male talker might say "ball" and the other male talker might say "brush." Across the different talker pairs, each talker was paired with every other talker an equal number of times. Subjects responded "same" or "different" by pressing 1 of 2 buttons. Responses were scored as the percent of correct responses. Chance is 50% correct on this task.

Melody Recognition. Each subject selected 5 familiar melodies from a list of 33 simple melodies. Each melody consisted of 16 equal-duration notes and was synthesized with MIDI software that used samples of a grand piano.3 The frequencies of the notes ranged from 277 to 622 Hz; the average note was concert A (440 Hz) ± 1 semitone. The melodies were created without distinctive rhythmic information. Following familiarization with the 5 selected melodies, the melodies were put into a randomized test sequence and subjects were asked to identify them. After the presentation of a melody, subjects responded by pressing a button from a list containing the 5 preselected melodies. Chance is 20% correct on this task.
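
Stimuli of this kind are easy to sketch. The code below uses the third-party pretty_midi package (an assumption; the authors do not name their MIDI software) to render 16 equal-duration piano notes with no rhythmic cues. MIDI pitches 61 through 75 span roughly the 277- to 622-Hz range described above; the 0.35-second note duration is arbitrary.

```python
import pretty_midi  # third-party: pip install pretty_midi

def render_melody(pitches, note_dur=0.35, path="melody.mid"):
    """Write a melody of equal-duration notes played by a MIDI grand
    piano (program 0), with no rhythmic variation between notes."""
    pm = pretty_midi.PrettyMIDI()
    piano = pretty_midi.Instrument(program=0)  # Acoustic Grand Piano
    for i, pitch in enumerate(pitches):
        start = i * note_dur
        piano.notes.append(
            pretty_midi.Note(velocity=90, pitch=pitch,
                             start=start, end=start + note_dur))
    pm.instruments.append(piano)
    pm.write(path)

# A 16-note ascending/descending figure within the 61-75 pitch range.
render_melody([61, 63, 65, 66, 68, 70, 72, 73,
               75, 73, 72, 70, 68, 66, 65, 63])
```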

Vowel Recognition Without Duration Cues. Thirteen vowels were created with KLATT software4 in /bVt/ format (bait, Bart, bat, beet, Bert, bet, bit, bite, boat, boot, bought, bout, but). The vowel presentations were brief (90 milliseconds) and of equal duration so that vowel length would not be a cue to identity.5 There were 5 repetitions of each stimulus. The order of the items was randomized in the test list.

Consonants in /e/ Environment. Twenty consonants were recorded in /eCe/ format, eg, "a bay," "a day," "a gay." A single male talker made 5 productions of each token. The pitch and vocalic portion of each token were intentionally varied. The order of items was randomized in the test list.

Standard Test Material. (1) CNC words: All subjects were tested with the same 50-item CNC word list in our laboratory; (2) HINT: all 250 HINT sentences were presented in quiet in random order6; (3) CUNY: CUNY7 sentences were presented in quiet and at +10 dB SNR. Two lists (24 sentences) were used in each condition. All standard test materials were taken from commercial CD recordings or from the original sources (HINT sentences).

Noise. The noise was 4-talker babble from an Auditec CD. The noise started 100 milliseconds before the onset of the signal and ended 100 milliseconds after the end of the signal.
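
A minimal sketch of how a masker can be scaled and padded to produce these presentation conditions follows (assuming the speech and babble are 1-dimensional NumPy float arrays at a common sampling rate; this is an illustration, not the authors' test software).

```python
import numpy as np

def mix_at_snr(speech, babble, snr_db, fs, pad_ms=100):
    """Scale a babble masker so that the speech-to-noise RMS ratio is
    snr_db, with the noise starting pad_ms before speech onset and
    ending pad_ms after speech offset. Assumes the babble array is at
    least as long as the padded speech."""
    pad = int(fs * pad_ms / 1000)
    noise = babble[: len(speech) + 2 * pad].astype(float)
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    # Desired noise RMS is speech RMS reduced by snr_db decibels.
    noise *= rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    noise[pad:pad + len(speech)] += speech
    return noise
```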

New Measures of Performance in Difficult Listening Conditions. Many subjects achieve high levels of performance on sentences in quiet but perform less well in difficult listening conditions, eg, in noise or when speech is presented at a low level. We created 2 measures to quantify performance in these difficult listening conditions. One measure, termed balance, is the difference between the score in noise (AzBio sentences at 74 dB SPL with +10 dB SNR) and the score at a low input level (54 dB SPL). A small difference score indicates that a subject is able to understand speech with similar accuracy in the 2 difficult listening conditions, ie, performance is balanced across the 2 conditions. A large difference score indicates that a subject understands speech much better in 1 of the 2 conditions.

To provide a measure of performance in the 2 difficult listening conditions (AzBio sentences at +10 dB SNR and AzBio sentences at 54 dB SPL) relative to performance in quiet, scores for the difficult conditions were averaged and then divided by performance in quiet. This number was multiplied by 100 to create a score for a measure we term robustness. A high score on the robustness index indicates little drop in performance between the 74 dB, quiet condition and the 2 difficult listening environments. A low score indicates a large drop in performance between the 74 dB, quiet condition and 1 or both of the difficult listening environments.
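
In code, the 2 measures reduce to a couple of lines; the functions below are a direct restatement of the definitions in the text, with percent-correct scores as inputs.

```python
def balance(score_noise, score_low_level):
    """Difference between the +10 dB SNR score and the 54 dB SPL score
    (percent correct); small values indicate balanced performance."""
    return abs(score_noise - score_low_level)

def robustness(score_noise, score_low_level, score_quiet):
    """Mean of the two difficult-condition scores as a percentage of
    the 74 dB SPL score in quiet; high values indicate little drop."""
    return 100.0 * (score_noise + score_low_level) / 2.0 / score_quiet
```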

As shown in Figure 2, CII subjects recognized sentences at +5 dB SNR with higher accuracy than 3G subjects (CII mean [SD], 36.7% [19.5%]; 3G mean [SD], 14.4% [12.2%]; F1,23 = 11.0, P = .003, power = .89). At +10 dB SNR, CII subjects recognized sentences with higher accuracy than 3G subjects (CII mean [SD], 51.6% [23.9%]; 3G mean [SD], 32.9% [20.3%]; F1,28 = 5.33, P = .03, power = .61). As shown in Figure 3, CII subjects recognized brief synthetic vowels with higher accuracy than 3G subjects (CII mean [SD], 67.8% [16.4%]; 3G mean [SD], 50.4% [13.3%]; F1,28 = 10.15, P = .004, power = .87).
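
The group comparisons reported here have the form of one-way analyses of variance. The sketch below reproduces that style of test in SciPy on synthetic scores drawn from the reported +5 dB SNR means and SDs; the numbers are illustrative, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic percent-correct scores: 15 per group, sampled from the
# reported +5 dB SNR means/SDs (36.7 [19.5] vs 14.4 [12.2]).
cii = np.clip(rng.normal(36.7, 19.5, 15), 0, 100)
g3 = np.clip(rng.normal(14.4, 12.2, 15), 0, 100)

f, p = stats.f_oneway(cii, g3)  # one-way ANOVA; df = (1, 28) for 2 x 15
print(f"F(1,28) = {f:.2f}, p = {p:.4f}")
```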

Figure 2.

Test performance for subjects using CII and 3G devices as a function of type of test (see legend to Figure 1 for description of devices). Asterisk indicates a significant difference in performance. Error bars indicate ±1 SD. HINT indicates Hearing In Noise Test sentences; CUNY, City University of New York sentences; AzBio, AzBio sentences; and SNR, signal-to-noise ratio.

Figure 3.

Test performance for subjects using CII (A) and 3G (B) devices as a function of type of test (see legend to Figure 1 for description of devices). Asterisk indicates a significant difference in performance. Error bars indicate ±1 SD.

As shown in Figure 4, CII subjects tended to have similar scores in noise and in the low signal level condition, ie, the majority of scores differed by 20 percentage points or less. In contrast, 3G subjects generally showed a greater difference between conditions; for 4 of the 15 subjects, the scores differed by more than 50 percentage points. In most cases performance in noise was poorer than performance at a low signal level. These outcomes can be summarized in the following way: the CII subjects showed a more "balanced" performance in the 2 difficult listening situations.

Figure 4.

Difference in percent correct scores for sentences at +10 dB signal-to-noise ratio and at 54 dB SPL (sound pressure level) in quiet in subjects using CII (A) and 3G (B) devices (see legend to Figure 1 for description of devices). Individual bars indicate a single subject's difference score.

The scores in Figure 4 do not take into account the level of performance in quiet. The measure of robustness, shown in Figure 5, takes this into account. On this measure, CII subjects scored higher than 3G subjects (CII mean [SD], 75.2% [17.0%]; 3G mean [SD], 60.8% [12.6%]; F1,28 = 6.90, P = .01, power = .72).

Figure 5.

Mean scores for the measure of robustness, ie, the averaged performance in difficult listening situations (low sound pressure level and in noise) divided by performance in quiet and multiplied by 100 for subjects fit with the CII and 3G devices (see legend to Figure 1 for description of devices). The asterisk indicates a significant difference in performance.

For all other conditions (consonant place, manner, and voicing; sentences in quiet; voice discrimination; and melody recognition), the differences in levels of performance between groups were not significant.

For most tasks, the group mean scores for the 2 groups did not differ significantly. However, group mean scores did differ significantly on 3 tasks and 1 measure: recognition of brief vowels, recognition of relatively difficult sentences at +10 and +5 dB SNR, and the measure of robustness. We believe that the differences in performance are due, most likely, to differences in signal processing. Other accounts, eg, an unrepresentative sample of patients in one group or a failure to match patients on a variable that affects performance in noise to a different extent than performance in quiet, seem to us less likely.

At this early point in our research, we have achieved our first goal—the identification of tasks on which there are between-group differences in performance. The reasons, in terms of processor design or strategy implementation, for the between-group differences are not clear. We will know a great deal more when we have tested subjects who use the MedEl Combi 40+ cochlear implant with TEMPO+ behind-the-ear speech processor (MedEl Corporation, Durham, NC). From a signal processing standpoint this system functions more like the CII device than the 3G device. If differences in performance are attributable to differences in signal processing, then individuals using the TEMPO+ device should behave in a fashion similar to CII subjects.

Finally, our results speak only to those who score in the upper half of the population of individuals who receive implants, ie, those with CNC scores of 45% or better. We chose not to include subjects with CNC scores lower than 45% in our analyses because when tested in noise with difficult sentence materials, the subjects' scores "fell to the floor" and we lost the ability to accurately measure the drop in performance.

Corresponding author and reprints: Michael F. Dorman, PhD, Department of Speech and Hearing Sciences, Arizona State University, Tempe, AZ 85287-0102 (e-mail: mdorman@asu.edu).

Submitted for publication September 3, 2003; final revision received February 6, 2004; accepted February 10, 2004.

This study was presented at the Ninth Symposium on Cochlear Implants in Children; April 25, 2003; Washington, DC.

REFERENCES

1. Dubno JR, Horwitz AR, Ahlstrom JB. Benefit of modulated maskers for speech recognition by younger and older adults with normal hearing. J Acoust Soc Am. 2002;111:2897-2907.
2. Kirk KI, Houston DM, Pisoni DB, Sprunger AB, Kim-Lee Y. Talker discrimination and spoken word recognition by adults with cochlear implants. Presented at: Mid-Winter Meeting of the Association for Research in Otolaryngology; January 27-31, 2002; St Petersburg Beach, Fla.
3. Hartmann WM, Johnson D. Stream segregation and peripheral channeling. Music Percept. 1991;9:155-184.
4. Klatt D. Software for a cascade/parallel formant synthesizer. J Acoust Soc Am. 1980;67:971-995.
5. Dorman M, Dankowski K, McCandless G, Smith L. Identification of synthetic vowels by subjects using the Symbion multichannel cochlear implant. Ear Hear. 1989;10:40-43.
6. Nilsson M, Soli S, Sullivan J. Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. J Acoust Soc Am. 1994;95:1085-1099.
7. Boothroyd A, Hanin L, Hnath T. A Sentence Test of Speech Perception: Reliability, Set Equivalence, and Short-term Learning. New York: City University of New York; 1985. Internal Report RCI 10.
