By Daniel Furman
If you were standing on the tarmac of Ben Gurion Airport as planes took to the skies all around you, what would you think? Probably, that it is loud. The roar of a jet engine is indeed among the loudest sounds a person can ever be exposed to, and at 150 decibels, it is almost unfathomable for someone with healthy hearing to imagine a nearby engine sounding like the buzz of a distant fly. Yet this is precisely the scale of the sound as it is experienced by the severely deaf, who can only faintly perceive the difference between a jet at take-off and a jet at rest.
As if by miracle, however, more than 300,000 deaf individuals with this level of hearing loss are able to have conversations, to produce and understand speech sounds that are more than one million times quieter than those generated by jet engines. How is this possible? Magic may seem at play in this marvelous scenario, but in truth, it is simply the progress of modern neuroscience and engineering.
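The decibel scale behind that comparison is logarithmic, which is why the numbers span such a vast range. A minimal sketch of the arithmetic, assuming the article's 150 dB jet and a conversational speech level of roughly 60 dB SPL (the speech figure is an assumed, typical value):

```python
import math

def intensity_ratio(db_a: float, db_b: float) -> float:
    """Ratio of acoustic intensities for two levels in dB SPL.
    Every 10 dB corresponds to a tenfold change in intensity."""
    return 10 ** ((db_a - db_b) / 10)

jet_db = 150.0     # jet engine at take-off (figure from the article)
speech_db = 60.0   # conversational speech, an assumed typical level

ratio = intensity_ratio(jet_db, speech_db)
print(f"A jet is {ratio:.0e} times more intense than speech")
```

A 90 dB gap works out to a billionfold difference in intensity, comfortably "more than one million times."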
The foundation for the technological innovations that provide a sense of sound to the severely deaf starts with fundamental physics, and the concept that all sound is, at its core, waves of pressure. Anatomical understanding complements this concept by contributing the knowledge that pressure waves are amplified by the contours of our hearing system, beginning with the curls of the outer ear.
Our outer ear’s curling exterior, the cartilaginous pinna, welcomes each wave with its swirling curvature, smoothes the mixture of frequencies, and gently propels the processed wave into the ear canal. This canal acts as a resonant hallway, funneling the wave from one end to the other and amplifying it all the while, until it collides with the biological door named the tympanic membrane.
The tympanic membrane, commonly called the eardrum, is continuously knocked at and pounded by these amplified pressure waves, whose vibratory momentum shakes the drum and rattles three small bones attached to its opposite side. These three bones, the malleus, incus, and stapes, then clatter and percuss in coded rhythm upon the hearing system’s most significant component, the cochlea, amplifying the vibration further.
The cochlea is a nautilus shell-like spiral checkered with microscopic protrusions, the tentacle-like stereocilia. When the three small bones beat upon the cochlea, stereocilia spanning the entire structure sway like kelp in a current. This swaying motion is the method by which pressure waves are transformed into the sensation of sound.
If aligned side by side, four hundred stereocilia would be needed to span the diameter of an average strand of human hair. They are a beautiful, natural nanotechnology. In the cochlea, these nanotechnological tentacles cluster together into groups on the microscale, aptly named hair cells for their resemblance to the hairs atop our heads.
When sound waves arrive, stereocilia sway, and their activity summates across an orchestra of nearly 20,000 hair cells, whose behavior produces electrical fluctuations that strike the auditory nerve. From here the signal is transmitted upstream to the many dimensions of consciousness.
During its travels through the world to the pinna, up the undulatory path of the ear canal to the tympanic membrane and the three small bones, sound remains a mechanical entity until it reaches the translation station of the cochlea. At this critical point, the rhythms of the three small bones dictate the spatial and temporal activity of the stereocilia clustered into hair cells, and the hair cell’s collective dynamics transduce the pressure waves into electrical impulses. Which is to say, the hair cells translate from the mechanical domain of sound into the electrical language of the brain’s neurons.
Sensorineural hearing loss, which affects 140 million people worldwide, is typically caused by dysfunctional hair cells. An individual’s pinna is fine, the ear canal is fine, the tympanic membrane is fine, and the three small bones are fine. The sound pressure waves are processed perfectly, but at the last stage, when sound is encoded into the language of neurons, there is a malfunction.
When Shimon Peres envisioned Israel as a “Brain Nation,” he surely saw academics, investors, entrepreneurs, engineers and members of the medical community all coming together to focus on malfunctions such as those occurring in hair cells that cause deafness.
Currently, the leading solution is an extraordinary electronic device called a cochlear implant, which uses a miniature microphone to record sound from the environment, and algorithms to translate the recording into patterns of electric pulses. The normal channels are then bypassed as these pulses are delivered directly to the neurons in the cochlea through an array of electrodes wound along that essential structure. The pulses functionally supplant those that healthy hair cells would otherwise produce, and in doing so, provide a sense of sound.
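The core idea of that translation step can be illustrated in a few lines. The toy sketch below is not a clinical algorithm (real processors use far more sophisticated strategies, such as continuous interleaved sampling); it simply splits a recorded snippet into frequency bands, one band per electrode, and compresses each band's energy into a stimulation level. The electrode count, sample rate, and level range are all illustrative assumptions.

```python
import numpy as np

N_ELECTRODES = 12     # typical arrays carry on the order of 12-22 contacts
SAMPLE_RATE = 16_000  # Hz, assumed microphone sampling rate

def band_energies(snippet: np.ndarray) -> np.ndarray:
    """Energy in N_ELECTRODES equal-width frequency bands of the snippet."""
    spectrum = np.abs(np.fft.rfft(snippet)) ** 2
    bands = np.array_split(spectrum, N_ELECTRODES)
    return np.array([band.sum() for band in bands])

def to_pulse_levels(energies: np.ndarray) -> np.ndarray:
    """Compress band energies logarithmically into 0-255 pulse levels,
    loosely mimicking the loudness compression implant processors apply."""
    db = 10 * np.log10(energies + 1e-12)
    db -= db.min()
    return np.round(255 * db / max(db.max(), 1e-12)).astype(int)

# Example: a 440 Hz tone should excite mostly the lowest-frequency band.
t = np.arange(0, 0.02, 1 / SAMPLE_RATE)
tone = np.sin(2 * np.pi * 440 * t)
levels = to_pulse_levels(band_energies(tone))
print(levels)
```

In a real implant the band-to-electrode mapping follows the cochlea's own frequency layout, with low frequencies stimulated deepest in the spiral.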
This solution is an emblem of neurotechnological advance, and has brought untold improvement to the lives of more than 300,000 individuals who, with their cochlear implants, are able to hear everything from airplanes to people speaking. But cochlear implants are by no means a perfect solution.
Most glaring, perhaps, is the fact that approximately one in every three people with a cochlear implant suffers symptoms of vertigo due to surgery-induced damage to the vestibular organs that neighbor the cochlea. Can a superior surgical approach be devised and adopted? Can more malleable, delicate devices be deployed?
The capacity to encode the direction a sound is coming from is also still absent from current implants. Without the ability to localize sound, implanted individuals lack a major evolutionary advantage of healthy hearing: environmental awareness. Consider the Ben Gurion tarmac, where the directionality of sounds intrinsically provides warning of where a fast-moving metal object is and where it is heading. The safest place to stand is quickly known, and survival made easier.
Or for a lighter, yet much more common, example, consider a cocktail party, where it’s nice to be able to differentiate the words spoken by the individual in front of you from those popping out of the background noise. It’s actually beyond “nice”; it is necessary if one wants to fully participate in the festivities and socialize seamlessly with those who have healthy hearing. Despite its importance, sound localization remains a functionality that most cochlear implant systems are unable to provide.
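One of the cues healthy binaural hearing exploits for localization is the interaural time difference (ITD): the sub-millisecond delay between a sound reaching one ear and the other. A minimal sketch of estimating that delay by cross-correlating the two ear signals; the sample rate and the simulated delay are illustrative assumptions, and current implants do not encode this cue.

```python
import numpy as np

SAMPLE_RATE = 44_100  # Hz, assumed recording rate

def estimate_itd(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate the delay (seconds) of the right-ear signal relative to
    the left-ear signal, as the lag maximizing their cross-correlation."""
    corr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(corr)) - (len(left) - 1)
    return lag / SAMPLE_RATE

# Simulate a click arriving at the right ear 10 samples (~0.23 ms) later
# than at the left, consistent with a source off to the listener's left.
click = np.zeros(400)
click[100] = 1.0
delayed = np.roll(click, 10)
print(f"Estimated ITD: {estimate_itd(click, delayed) * 1e6:.0f} microseconds")
```

The brain performs an analogous comparison continuously and automatically; bilateral fitting is a step toward restoring the two-eared input that makes it possible.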
The work of Dr. Michal Luntz, Clinical Associate Professor of Otolaryngology at the Technion – Israel Institute of Technology, is helping to forge the way forward towards new systems that do provide implanted individuals with the ability to localize sound. Technological strategies include bimodal fitting, in which a cochlear implant is installed in one ear, and a standard hearing aid in the other, as well as bilateral fitting, in which each ear receives a cochlear implant.
A related issue is that cochlear implant systems commonly process and transmit sound with a crude pitch resolution, making voices more robotic than human and music more menacing than enjoyable. Can better algorithms be developed for translating recorded sounds into electric pulses? Or electrode arrays made to deliver the pulses to the cochlea in a more targeted manner? The skill sets of electrical engineers and materials scientists are needed to address these specific challenges.
In parallel, the work of aural rehabilitation experts is similarly required for the field to optimally tend to the need for improved hearing experiences. Aural rehabilitation is post-surgery training that aims to help individuals adapt to the new inputs being provided by their cochlear implant. Innovations in this space must accompany any new hardware solution, and even with current implants, innovations in aural rehabilitation strategies can enhance the neurotechnology’s benefit.
A clear example of such enhancement can be found in the research of Dr. Dikla Kerem, a Music Therapist at Meir Shfeya Youth Village and Lecturer at Oranim Academic College of Education, who found that spontaneous communicative interactions improve in implanted toddlers whose aural rehabilitation also includes musical therapy. Can therapeutic services that build upon this finding be created and offered in the clinic? Can EdTech companies go further and design products that enable parents and children to rehabilitate their own hearing at their own home?
It must at the same time be acknowledged that in the clinic deafness presents in complex ways, and cochlear implantation followed by aural rehabilitation is not always appropriate. For this reason, poor outcomes are at times attributable not to deficiencies in a cochlear implant’s functionality or to ineffectual rehabilitation efforts, but rather to the very initial phase of diagnosis. Dr. Rafi Shemesh, a Clinical Audiologist and Professor at the University of Haifa’s Department of Communication Sciences and Disorders, offers a wonderful review of this domain in his International Encyclopedia of Rehabilitation article. Can information technologies provide additional support to practicing physicians who must identify and manage the many multifaceted issues present in hard-of-hearing patients?
And what about the promise of gene therapy? It has been observed since the 1500s that deafness has a hereditary component, and so isn’t it reasonable to think that by now, in 2014, deafness could be predicted and addressed before it has a chance to present in the clinic? Unfortunately, the promise of gene therapy for deafness remains in the future of medical practice, though it may be not so distant.
Karen Avraham’s laboratory within the Department of Human Molecular Genetics and Biochemistry at Tel Aviv University is a leader in the application of genomic technologies to the study of inherited hearing loss, and their work exemplifies the type of efforts that will eventually bring deafness-ameliorating gene therapies to life. Through the group’s article, “Hearing loss: a common disorder caused by many rare alleles” one can appreciate the depth of insight that has been achieved thus far in the field, as well as the need for patience while this insight is developed into applications.
It is tempting to finally wonder whether achievements in synthetic biology will soon make moot the limitations of current cochlear implants and the challenges of clinical practice. Can structures simply be synthesized that mimic components of the healthy hearing system and replace those that are faulty? Would aural rehabilitation still be necessary in this case? Could not the entire cochlea, with its orchestra of hair cells, be made artificially? This seems more straightforward than complex gene therapies, but is it in actuality?
These questions and many more are open and active, with at least 140 million deaf individuals waiting worldwide for new neurotechnological solutions.