Contribution to advancement of knowledge / technological progress

From model to optimizing agent

The described automated fitting method was recently tested in a pilot study (Govaerts1, Govaerts2, Govaerts3, Nun4). The preliminary results are very encouraging, as a number of case studies suggest that the proposed automated procedure yields better results than those obtained through manual fitting. These good results obtained by the algorithm indicate that manual fitting, even when performed by an expert practitioner, is suboptimal. Based on these insights, it indeed seems possible to use systematic outcome-based modifications of the different electrical parameters to obtain a direct and positive effect on performance outcomes. Currently, in order to automate, optimize and validate the fitting process, the input-output functions of the numerous electrical parameters and their measurable auditory effects are being listed and analyzed, with special attention to complex inter-parametric interactions. This results in a theoretical mathematical fitting model that describes the relation between modifications of the processor parameters and the expected outcome. In the current version of FOX®, this model is based on deterministic logic, i.e. a number of preset rules, each setting the value of one of the following parameters: electrical thresholds (T levels) and upper loudness limits (M levels), input dynamic range, gain, electrode (de)activation, processing strategy (HiRes, HiRes 120, etc.), pulse rate, bandpass filter boundaries, automatic gain control, sensitivity and volume.

In a paper recently submitted to the leading international journal Otology & Neurotology, Govaerts et al. describe a case example of a 22-year-old woman who was diagnosed with a 60 dB sensorineural hearing loss of unknown aetiology at the age of 3. The patient, who had received hearing aids, had hearing thresholds that further deteriorated to a loss of 90 dB by the age of 12. The processor of her cochlear implant was fitted with FOX® in three postoperative sessions, each time resulting in substantial improvements in outcomes measured by means of the Auditory Speech Sounds Evaluation test (A§E®; Govaerts et al. 1996).
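To make the deterministic, outcome-driven logic more concrete, the sketch below expresses a few fitting rules in Python. This is only a minimal illustration of the "if outcome X, adjust parameter Y" idea; the rule contents, thresholds, parameter names and function names are hypothetical assumptions and do not reproduce the actual FOX® rule base.

# Minimal sketch of an outcome-driven, rule-based fitting step.
# All rules, thresholds and names are hypothetical illustrations,
# not the proprietary FOX(R) rule base.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class MapParameters:
    t_levels: Dict[int, float] = field(default_factory=dict)   # electrical thresholds per electrode
    m_levels: Dict[int, float] = field(default_factory=dict)   # upper loudness limits per electrode
    gain_db: float = 0.0
    input_dynamic_range_db: float = 60.0
    sensitivity: float = 0.0

def apply_rules(params: MapParameters, outcomes: Dict[str, float]) -> MapParameters:
    """Apply preset deterministic rules to the current map, given measured outcomes.

    'outcomes' could contain psychoacoustic results such as aided soundfield
    thresholds (dB HL) or a phoneme-discrimination score (%), as obtained with
    an A§E-like test battery.
    """
    # Rule 1 (hypothetical): if aided soundfield thresholds are too high,
    # raise the T levels on all electrodes by a small fixed step.
    if outcomes.get("aided_threshold_dBHL", 0.0) > 35.0:
        for e in params.t_levels:
            params.t_levels[e] += 2.0

    # Rule 2 (hypothetical): if loudness scaling indicates discomfort,
    # lower the M levels rather than the overall gain.
    if outcomes.get("loudness_discomfort", 0.0) > 0.0:
        for e in params.m_levels:
            params.m_levels[e] -= 3.0

    # Rule 3 (hypothetical): if speech scores are poor while thresholds are
    # adequate, widen the input dynamic range up to a ceiling value.
    if outcomes.get("phoneme_score_pct", 100.0) < 70.0 and \
       outcomes.get("aided_threshold_dBHL", 0.0) <= 35.0:
        params.input_dynamic_range_db = min(params.input_dynamic_range_db + 5.0, 80.0)

    return params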

From steady to fluctuating speech noise

With respect to Innovation strategy 2, the development of a language-independent speech-in-noise perception test, initial piloting addressed the effect of different types of interfering noise on speech perception in listeners with sensorineural hearing impairment. Sounds interfering with speech perception may range from steady-state noise through fluctuating noise to one or more competing voices. Studies comparing speech identification in steady speech noise and fluctuating speech noise demonstrate that normally hearing listeners perform better in fluctuating than in steady-state backgrounds (e.g. Baer & Moore 1994; Miller & Licklider 1950). Listeners have a capacity called “dip listening”: they are able to glimpse speech in the valleys of the background noise and to decide whether a speech signal in the dips of the noise is part of the target speech (Moore 2008). They have been said to do so thanks to information derived from fluctuations in the temporal fine structure (TFS) of speech sounds. Cochlear damage degrades the ability to code TFS information (Lorenzi et al. 2006; Buss et al. 2004). This implies that listeners with sensorineural hearing loss do not benefit from the dips in fluctuating noise to achieve better speech understanding. Cochlear implants are not able to restore the information obtained from TFS. Therefore, it can be expected that speech understanding in steady noise and in fluctuating noise will be similar in cochlear implant users, because of the lack of the “dip listening” capacity.

In a pilot study with a group of 10 normally hearing subjects and four groups of 10 cochlear implant subjects each (using the Nucleus Freedom, Digisonic SP, Clarion HiRes and Clarion HiRes 120 device respectively), we tested speech identification (CVC words) in three types of background noise: steady speech noise, spectrally modulated speech noise (with “dips” in the frequency spectrum of 3/6 ERB) and temporally modulated speech noise (with “dips” in amplitude at a modulation rate of 8 Hz). The results confirm the prediction: whereas in normally hearing listeners speech perception clearly improves when tested in spectrally modulated noise in comparison to steady noise, in CI users elevated thresholds are obtained without an appreciable effect of noise fluctuations as compared to steady noise. These results are in line with the expectation that listeners with a well-functioning cochlea use TFS cues to glimpse speech in background noise. When these TFS cues are missing, as in CI users with a damaged cochlea, no such beneficial effects of noise modulation are found. However, some deaf patients equipped with particular brands of cochlear implants, such as the Clarion HiRes and HiRes 120 users, did demonstrate “dip listening” effects which cannot yet be explained. Device-specific stimulation parameters, such as the speech coding algorithm, a large dynamic range or a high stimulation rate, could play a role. This needs further investigation within the proposed research activity.
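For readers less familiar with such maskers, the sketch below shows, in Python, how a steady masker can be turned into a temporally modulated masker with 8 Hz amplitude “dips”. The sampling rate, modulation depth and the use of white noise as a stand-in for speech-shaped noise are illustrative assumptions, not the exact stimuli used in the pilot study.

# Sketch: deriving an 8 Hz temporally modulated masker from a steady masker.
# Sampling rate, modulation depth and the white-noise stand-in are assumptions
# for illustration only.

import numpy as np

fs = 44100                      # sampling rate in Hz (assumed)
duration_s = 2.0
t = np.arange(int(fs * duration_s)) / fs

# Stand-in for steady speech-shaped noise (a real test would filter the noise
# to the long-term average speech spectrum).
steady_noise = np.random.randn(t.size)

# Sinusoidal amplitude modulation at 8 Hz creates periodic "dips" in level,
# which normally hearing listeners can exploit for dip listening.
mod_rate_hz = 8.0
mod_depth = 1.0                 # full modulation (assumed)
envelope = (1.0 + mod_depth * np.sin(2.0 * np.pi * mod_rate_hz * t)) / 2.0
modulated_noise = steady_noise * envelope

# Equalize RMS so both maskers are presented at the same nominal level,
# keeping the steady vs. modulated comparison fair.
modulated_noise *= np.sqrt(np.mean(steady_noise**2) / np.mean(modulated_noise**2))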

Intra-subject improvement