News

OTOCUBE is looking for Distributors

If you are interested in becoming an OTOCUBE...

A milestone in Fitting

Get the best out of a cochlear implant by using...

Nomination Innovation Award

Nominated for the Herman Wijffels Innovation...

Contact

E-mail: sales@otocube.com

TELEPHONE: +31 (0)182-353310

P.O. BOX 120
4930 AC  GEERTRUIDENBERG
THE NETHERLANDS

Facebook: www.facebook.com/otocubenl

Hearing and deafness.

In healthy humans, the perception of sound is the result of a complex mechanism in which acoustic waves are transformed into nerve impulses on their way from the outer ear to the brain. The outer ear picks up sound waves that travel through the ear canal and cause the eardrum and the three bones of the middle ear to vibrate. In the cochlea, or inner ear, a fluid-filled cavity, these vibrations displace a flexible membrane (the basilar membrane). The tiny hair cells attached to this membrane are mechanoreceptors: when stimulated, they release a chemical neurotransmitter that causes neurons to fire, and information about the acoustic signal is transmitted along the auditory nerve to the brain.

If the link between sound and brain is broken at some point in this auditory chain, the result can be different types, degrees and configurations of hearing loss. A conductive hearing loss occurs when sound cannot reach the inner ear (occlusion of the ear canal, perforation of the tympanic membrane, middle ear pathology, etc.); a sensori-neural hearing loss is often caused by damaged hair cells in the cochlea, which prevent acoustic pressure waves from being transformed into nerve impulses. The two types can also occur together, resulting in a mixed hearing loss. Depending on the degree of hearing loss, as measured by hearing thresholds (in dB) on pure-tone audiometry, the impairment is classified as mild (< 30 dB), moderate (30-50 dB), moderately severe (50-70 dB), severe (70-90 dB) or profound (> 90 dB).

The configuration of the hearing loss gives an overall picture of hearing ability at different frequencies, distinguishing high-frequency (> 1000 Hz), low-frequency (< 1000 Hz) and flat (similar loss at all frequencies) configurations.
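To make these two classifications concrete, the following sketch expresses them in code. It is an illustrative simplification, not a clinical tool: the function names, the four-frequency pure-tone average and the 15 dB flatness tolerance are our own assumptions; the dB boundaries are those given above.

# A minimal sketch, assuming a four-frequency pure-tone average (PTA)
# and a 15 dB flatness tolerance; both are illustrative choices, not a
# clinical standard.

def degree_of_loss(pta_db: float) -> str:
    """Map a pure-tone average (dB HL) to a degree of hearing loss."""
    if pta_db < 30:
        return "mild"
    elif pta_db < 50:
        return "moderate"
    elif pta_db < 70:
        return "moderately severe"
    elif pta_db < 90:
        return "severe"
    return "profound"

def configuration(thresholds: dict) -> str:
    """Label the audiogram configuration by comparing mean thresholds
    below and above 1000 Hz. 'thresholds' maps frequency (Hz) to
    threshold (dB HL)."""
    low = [t for f, t in thresholds.items() if f < 1000]
    high = [t for f, t in thresholds.items() if f >= 1000]
    mean_low, mean_high = sum(low) / len(low), sum(high) / len(high)
    if abs(mean_high - mean_low) < 15:   # tolerance is an assumption
        return "flat"
    return "high-frequency" if mean_high > mean_low else "low-frequency"

# Example audiogram: mild low-frequency loss sloping to severe highs.
audiogram = {250: 20, 500: 25, 1000: 40, 2000: 65, 4000: 80}
pta = sum(audiogram[f] for f in (500, 1000, 2000, 4000)) / 4
print(degree_of_loss(pta))       # -> moderately severe
print(configuration(audiogram))  # -> high-frequency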

Communicating with current hearing devices.

A scientific survey (Shield 2006) shows that the regular use of an adequate hearing aid increases the chance of keeping hearing-impaired patients communicatively, socially and economically active. Today, advances in hearing devices enable intervention by means of a conventional hearing aid (HA) or a cochlear implant in the inner ear. The choice of device depends, among other factors, on the type, degree and configuration of the patient's hearing loss: (i) individuals with mild to moderate hearing loss can usually benefit from a classical hearing aid, a sophisticated acoustic amplifier that typically fits in or behind the user's ear and amplifies and modulates sounds. This acoustic stimulation is particularly effective at low frequencies. More severe hearing losses (> 70 dB HL) at higher frequencies are usually beyond the amplification range of classical acoustic stimulation;

(ii) patients with sensori-neural hearing loss caused by damaged hair cells in the cochlea and diagnosed as profoundly deaf are potential candidates for cochlear implantation. The development of such devices is based on the observation that in most deaf patients, despite the damaged cochlea, enough auditory nerve fibres remain that can be stimulated directly, bypassing the normal hearing mechanism. Several devices have been developed over the last decades. They pick up sound with an external microphone [1], convert it into an electronic code in a speech processor [2], and send it to an internal receiver [3] and on to an array of electrodes [4].

The electrodes electrically stimulate multiple loci in the cochlea, based on the characteristics of the code: high-frequency signals stimulate the base of the cochlea, low-frequency signals the apex. The combination of amplitude and place of stimulation in the cochlea enables the brain of deaf individuals to interpret the incoming signal as having a particular pitch and loudness.
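As an illustration of this frequency-to-place ("tonotopic") principle, the sketch below assigns incoming frequencies to electrodes along a simulated array. It is a minimal sketch under our own assumptions (12 electrodes, a 200-8000 Hz analysis range, logarithmic band spacing); real coding strategies are considerably more sophisticated.

NUM_ELECTRODES = 12            # assumption; real arrays differ
F_LOW, F_HIGH = 200.0, 8000.0  # assumed analysis range in Hz

def band_edges(n: int, f_low: float, f_high: float) -> list:
    """Logarithmically spaced band edges, mimicking the cochlea's
    roughly logarithmic frequency-to-place map."""
    ratio = (f_high / f_low) ** (1.0 / n)
    return [f_low * ratio ** i for i in range(n + 1)]

def electrode_for(freq_hz: float) -> int:
    """Electrode index for a frequency: 1 = most apical (lowest band),
    NUM_ELECTRODES = most basal (highest band)."""
    edges = band_edges(NUM_ELECTRODES, F_LOW, F_HIGH)
    for i in range(NUM_ELECTRODES):
        if freq_hz < edges[i + 1]:
            return i + 1
    return NUM_ELECTRODES

# Low frequencies land on apical electrodes, high on basal ones.
for f in (250, 1000, 4000):
    print(f, "Hz -> electrode", electrode_for(f))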


© Prof. Dr. Paul Govaerts, PhD

Healthy hearing and language, a vector for communication.

All auditory communication, in animals as well as in humans, crucially depends on sound. Human hearing is attuned to the sounds of language: over a large range of audible sound, healthy individuals can discriminate minor differences in loudness and in spectral and temporal structure, differences that carry vital distinctions of meaning in words and sentences. It is well known that children with congenital or early-acquired hearing impairment have little access to the speech signal, making them particularly prone to deficits in the development of their oral language. As strong language skills are crucial to literacy development, which is in turn the foundation for further academic success (Chute & Nevins 2003, Geers 2003), the negative effect of hearing impairment on the developing child does not stop at spoken language itself: hearing-impaired children are known to have more difficulty building up the strong oral language skills necessary for mainstreaming, for reading and writing tasks, and for access to mathematical concepts (Spencer et al. 2003, Brannon 2005).

Moreover, a growing body of research shows that the language delays typically observed in deaf children are also causally related to delays in major aspects of the development of cognitive functions that are unique to the human species (Schick, de Villiers, de Villiers & Hoffmeister 2007). Children who lack the auditory skills necessary to pick up "incidental" linguistic information are often unable to understand how their own thoughts and beliefs may differ from those of the people around them. However, the day-to-day situations in which listeners are confronted with such "incidental" linguistic information are often difficult listening situations: people do not stop talking in the presence of background music, in the car, in a restaurant, in noisy classrooms, and so on.

The clinical assessment of hearing in these situations poses a major problem. The typical tests used are speech-in-noise tests. The speech lists used should be phonetically balanced and intensity-equilibrated, and they should come with normative data. Such lists are available in only a few languages. Even then, the results show large intra- and inter-individual variability, because they depend on the linguistic, dialectal and cognitive capacities of the test subject. Finally, the noise used is typically steady noise (white noise, pink noise, narrow-band noise or speech noise). This is not representative of typical environmental noise, which modulates in the time and frequency domains. Hearing subjects are known to use the spectral and temporal troughs in such noise to improve their speech detection and intelligibility.

New tests that solve these issues are badly needed. They should be as independent of language and cognition as possible, and they should take into account the masking differences between steady and modulated noise.
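The masking difference between steady and modulated noise can be illustrated with a few lines of signal generation. The sketch below is ours, not an OTOCUBE test procedure; the sample rate, modulation rate and modulation depth are arbitrary assumptions.

import numpy as np

FS = 16000        # sample rate in Hz (assumption)
MOD_RATE = 8.0    # modulation rate in Hz, roughly syllabic (assumption)
MOD_DEPTH = 0.9   # modulation depth, 0..1 (assumption)

t = np.arange(FS) / FS                 # one second of samples
steady = np.random.randn(t.size)       # steady white-noise masker

# Sinusoidally amplitude-modulated masker, rescaled to the same
# long-term power as the steady one.
envelope = 1.0 + MOD_DEPTH * np.sin(2 * np.pi * MOD_RATE * t)
modulated = steady * envelope
modulated *= np.sqrt(np.mean(steady ** 2) / np.mean(modulated ** 2))

# In its envelope troughs the modulated masker is far quieter than the
# steady masker, although both have the same overall level. Listeners
# with normal hearing exploit these gaps ("glimpsing") to understand
# speech; tests that use only steady noise miss this effect.
trough = envelope < 0.3
print("overall RMS   steady: %.2f   modulated: %.2f"
      % (steady.std(), modulated.std()))
print("RMS inside troughs of the modulated masker: %.2f"
      % modulated[trough].std())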