Assistive Technologies and AI: Improving Hearing Aids Using Artificial Intelligence

In recent years, there has been significant discourse surrounding the development and application of AI-powered assistive technologies (AT). In a 2020 conference led by the Global Disability Innovation Hub (GDI Hub), a panel consisting of industry leaders, policymakers, and assistive technology users concluded that “advances in AI offer the potential to develop and enhance [assistive technologies]” as well as “enhance inclusion, participation, and independence for people with disabilities” (1). Hearing aids, an assistive technology marketed toward those with mild to moderate hearing loss, can serve as a valuable precedent for the successful integration of AI into assistive technologies.

According to the National Institute on Deafness and Other Communication Disorders (NIDCD), over 28 million adults in the United States could benefit from the use of hearing aids (2). However, fewer than 30% of the people who would benefit from hearing aids regularly use them (2), a particularly worrying statistic in light of the cognitive, social, and emotional toll untreated hearing loss has been shown to take (3). A collation of studies released by the American Speech-Language-Hearing Association (ASHA) found that untreated hearing loss is positively correlated with increased rates of social isolation, depression, anxiety, and comorbidity with cognitive disorders - namely, Alzheimer’s disease and dementia (3). The relationship between hearing, cognition, and emotional wellbeing is complex, and claiming that treating hearing loss would unequivocally solve these problems would gloss over the individual nature of human health. Nevertheless, there is growing evidence that the use of assistive technologies such as hearing aids can improve quality of life and slow cognitive decline in those with hearing loss (4)(5).

Why, then, is there such reluctance to use hearing aids? Individual reasoning varies, but there is a persistent through line among the responses to this question: a lack of confidence in the value of hearing aids, owing specifically to perceived poor sound quality and underperformance in noisy environments (6). Although not yet broadly implemented in hearing aids, AI provides many innovative means of refining the user experience and improving acoustic performance.

The human ear, working in tandem with the auditory cortex, is particularly adept at isolating conversations or specific threads of auditory information in the presence of noise pollution, but those who use hearing aids often struggle to separate competing streams of auditory information (7). The difficulty stems, in part, from symptoms associated with hearing loss, but limitations of current digital hearing aid technology also play a significant role. Sensorineural hearing loss is the most common form of hearing loss and is generally caused by damage to the inner ear or auditory nerve (8). It might be tempting to think of sensorineural hearing loss simply as an overall loss of audibility, and while that is not inherently wrong, the symptoms are far more nuanced and may also include a reduction in dynamic range and a distortion of the auditory periphery (9). The former results in the reduced audibility of soft sounds alongside the uninhibited perception of loud sounds, while the latter results in the inability of the auditory system to distinguish audio signals from noise (9). Amplifying all sound does little to accommodate these challenges and, in fact, is likely to compound them (9).

While it is not uncommon for hearing aids to incorporate some form of acoustic balancing or noise reduction technology as a means of addressing these problems, many of the solutions have distinct limitations. For example, in order to balance the dynamic range, many hearing aids utilize wide dynamic range compression (WDRC), a system designed to boost the gain of soft sounds and decrease the gain of loud sounds (9). However, because this solution necessarily reduces contrast in the sound spectrum, it can further distort the auditory periphery by blurring the distinction between audio signals and noise (9).
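
To make the tradeoff concrete, here is a minimal sketch of a textbook-style WDRC gain curve in Python: below a knee point, soft sounds receive the full linear gain, while above it the gain shrinks according to a compression ratio. The knee point, ratio, and gain values are illustrative assumptions, not parameters taken from any particular hearing aid.

```python
def wdrc_gain(input_level_db, knee_db=45.0, compression_ratio=2.0, linear_gain_db=20.0):
    """Gain (dB) applied at a given input level under simple WDRC.

    Below the knee point, soft sounds get the full linear gain; above it,
    each additional `compression_ratio` dB of input yields only 1 dB more
    output, so loud sounds receive progressively less gain.
    All parameter values here are hypothetical.
    """
    excess = max(input_level_db - knee_db, 0.0)
    return linear_gain_db - excess * (1.0 - 1.0 / compression_ratio)

for level_db in (30, 50, 70, 90):
    gain = wdrc_gain(level_db)
    print(f"{level_db} dB SPL in -> {gain:+.1f} dB gain -> {level_db + gain:.1f} dB out")
```

Note how a 60 dB spread of inputs is squeezed into a much narrower spread of outputs: exactly the loss of level contrast that can blur the line between signal and noise.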

Noise reduction technologies are similarly imperfect. Digital noise reduction (DNR) is based on the assumption that speech has a greater amplitude and a lower modulation frequency than noise - i.e., speech contains many slow-paced volume rises and falls while noise does not - and is designed to amplify speech and attenuate noise (10). However, this binary form of classification is inherently flawed: background noise is exceptionally varied and can include sounds whose modulation patterns resemble speech, while other important audio signals, like music, can have modulation patterns that resemble noise. Some studies have even gone so far as to suggest that DNR provides no observable benefit to the user’s perception of speech or music (11).
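
The sketch below illustrates the modulation-based reasoning behind DNR under a deliberately simplified assumption: a frequency band is judged speech-like or noise-like purely by the depth of its slow envelope fluctuations. The frame length, threshold, and attenuation values are hypothetical and not taken from any commercial DNR system.

```python
import numpy as np

def band_attenuation_db(band_signal, sample_rate, frame_ms=8,
                        depth_threshold=0.5, max_atten_db=10.0):
    """Estimate how speech-like a band is from its envelope modulation and
    return an attenuation (in dB) to apply to it. Strongly modulated
    envelopes are left alone; shallow, steady envelopes are turned down."""
    frame_len = max(1, int(sample_rate * frame_ms / 1000))
    n_frames = len(band_signal) // frame_len
    if n_frames < 2:
        return 0.0
    frames = np.asarray(band_signal)[: n_frames * frame_len].reshape(n_frames, frame_len)
    envelope = np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12      # per-frame RMS
    depth = (envelope.max() - envelope.min()) / envelope.mean()   # modulation depth
    if depth >= depth_threshold:
        return 0.0                                                # speech-like
    return max_atten_db * (1.0 - depth / depth_threshold)         # noise-like

# A 4 Hz amplitude-modulated tone (speech-like) versus steady noise.
fs = 16_000
t = np.arange(fs) / fs
speechy = np.sin(2 * np.pi * 300 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
steady = np.random.default_rng(0).normal(size=fs) * 0.5
print(band_attenuation_db(speechy, fs), band_attenuation_db(steady, fs))
```

The flaw described above falls straight out of this simplification: any noise that happens to fluctuate slowly will be spared, and any music with a steady envelope will be attenuated.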

Artificial intelligence avoids some of these pitfalls by moving beyond one-size-fits-all WDRC and DNR processing in favor of more robust sound recognition algorithms. Arguably the most important factor contributing to better audibility in noisy environments is acoustic environment classification (AEC) (9). AEC is a machine learning approach in which an algorithm learns to identify patterns in audio features in order to categorize different sounds (9). Working alongside noise reduction algorithms, AEC allows the most important sounds to be brought to the foreground while those that carry little auditory information are pushed into the background (9).
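
As a rough illustration of the idea, the sketch below trains a generic classifier on placeholder feature vectors standing in for labeled acoustic scenes. The feature set, class list, and choice of model are assumptions made for demonstration, not the algorithm any manufacturer actually ships.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder training data: each row stands in for a feature vector
# (e.g., spectral shape, envelope modulation, level statistics) extracted
# from a short labeled recording of an acoustic environment.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 12))
y_train = rng.choice(["speech", "speech_in_noise", "music", "quiet"], size=500)

classifier = RandomForestClassifier(n_estimators=100, random_state=0)
classifier.fit(X_train, y_train)

# At runtime, the same features would be extracted from incoming audio and
# the predicted class would steer the noise-management settings.
features_now = rng.normal(size=(1, 12))
print(classifier.predict(features_now)[0])
```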

Where machine learning-powered AEC differs from other sound separation technologies like DNR is in its more nuanced evaluation of incoming sound. Instead of dividing sounds into two broad categories - speech or noise - companies including Starkey Hearing Technologies use AEC to subdivide acoustic environments into eight categories: speech, noise, quiet, wind, music, machines, and varying levels of speech-in-noise (12). Each category receives “discrete adjustments in gain, compression, directionality, noise management, and other parameters appropriate for each specific class” (12). With specific settings corresponding to each category, there is a lower chance that important audio like music will be unnecessarily attenuated or otherwise obscured by noise suppression.
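
A minimal sketch of what such per-class processing might look like is shown below. The class names echo the categories described above, but every numeric setting is invented for illustration and does not reflect Starkey’s actual parameters.

```python
# Hypothetical per-class parameter sets (values are illustrative only).
CLASS_SETTINGS = {
    "speech":          {"gain_db": 6, "noise_reduction_db": 2,  "directionality": "omni"},
    "speech_in_noise": {"gain_db": 6, "noise_reduction_db": 8,  "directionality": "front"},
    "music":           {"gain_db": 4, "noise_reduction_db": 0,  "directionality": "omni"},
    "wind":            {"gain_db": 2, "noise_reduction_db": 10, "directionality": "front"},
    "machines":        {"gain_db": 3, "noise_reduction_db": 8,  "directionality": "omni"},
    "quiet":           {"gain_db": 3, "noise_reduction_db": 0,  "directionality": "omni"},
}

DEFAULT = {"gain_db": 4, "noise_reduction_db": 4, "directionality": "omni"}

def settings_for(environment: str) -> dict:
    """Return the processing parameters for the detected environment,
    falling back to a neutral default for unrecognized classes."""
    return CLASS_SETTINGS.get(environment, DEFAULT)

print(settings_for("music"))   # music keeps noise reduction off, so it is not attenuated
```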

In order for an AEC model to accurately classify incoming sound, it must be trained on vast datasets consisting of varied acoustic scenes (9). With enough time and data, machine learning algorithms have been shown to classify sounds with up to 90% accuracy (9). While 90% accuracy is sufficient to navigate many situations, it is worth considering the prospect of even stronger AEC algorithms. Machine learning lends itself to continued refinement, so with enough data and enough audio feature parameters, there is considerable potential for AEC systems capable of classifying sounds into additional categories and with greater accuracy.

Oticon, a prominent hearing aid manufacturer, has brought hearing aids closer to this ideal by implementing deep neural networks (DNNs) in a method that attempts to mimic the human auditory system (13). When our brain registers sound, it does not immediately identify the source; instead, a hierarchical structure in the auditory cortex analyzes the audio features in increasing depth, beginning with basic features like loudness, pitch, and frequency, and eventually ending with source identification (14). This process happens almost instantaneously, certainly too fast for us to be actively aware of, but analysis of this high-functioning system has provided an ideal basis for modeling deep neural networks in hearing aids.

Similar to the biological networks they imitate, artificial DNNs exhibit a hierarchical structure in which audio information is received at one end and conveyed along layers of nodes (called neurons) that apply “weights” to audio features - the more heavily a feature is weighted, the more influence it has on the next layer of nodes - until superfluous information is discarded and the most important information from all inputs is composited into a clear output (15). Based on the disparate elements it extracts from the input data, the DNN effectively arrives at a conclusion as to what sound it is “hearing”, in much the same way as our own brains would. Afterwards, the DNN receives feedback on its accuracy, and the process is repeated until the DNN can reliably identify a given sound (13). Training a DNN in this way is computationally intense and requires extensive amounts of input data in the form of audio files. For instance, the DNN utilized in Oticon hearing aids was trained on 12 million “sound scenes”, each consisting of a number of sounds one might encounter in daily life (13).
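
The sketch below shows, in miniature, the kind of training loop being described: a small feedforward network (built here with PyTorch) maps placeholder feature vectors to one of several sound classes and nudges its weights according to feedback about its errors. The layer sizes, class count, and random data are stand-ins, not Oticon’s actual network or sound scenes.

```python
import torch
from torch import nn

# A small stand-in for the hierarchical DNN described above.
model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),   # low-level audio features in
    nn.Linear(128, 64), nn.ReLU(),   # intermediate representations
    nn.Linear(64, 8),                # one score per (arbitrary) sound class
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder "sound scenes": random feature vectors with random labels.
features = torch.randn(256, 64)
labels = torch.randint(0, 8, (256,))

for epoch in range(10):
    optimizer.zero_grad()
    scores = model(features)
    loss = loss_fn(scores, labels)   # feedback on how wrong the guesses were
    loss.backward()                  # propagate that feedback to every weight
    optimizer.step()                 # adjust the weights and try again
```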

Since most noise reduction technologies are designed to prioritize a single aspect of the sound scene - usually speech directly in front of the listener - and reduce all others, the listener receives only a narrow sample of auditory information (16). Conversely, Oticon’s AI is trained to adjust the gain on all the sounds it can recognize, which in addition to speech may include a range of pertinent environmental sounds like a glass clinking, a dog barking, or a car passing by (16). Put differently, Oticon no longer treats noise as an all-encompassing category; it instead considers which individual sounds make up “noise” and how those sounds should be balanced to paint a more complete picture of the user’s surroundings.

Additionally, as Oticon describes, the DNN is tested to ensure that it can make generalized decisions when confronted with unknown sources of sound (17). For example, if the DNN is trained to recognize and adjust the audio of a seagull in a specific way, it should be able to similarly adjust the audio of a pigeon (17). The result is a powerful AI, capable of not only balancing a wide array of recognizable sounds, but also making informed decisions when presented with new stimuli.

The world is composed of multi-faceted soundscapes in which the sounds present can change at any moment. Consider, for instance, the radically different listening experiences of talking with a friend, shopping at a grocery store, or walking along a busy street. In order for hearing aid users to receive the full scope of audio information, incoming audio must not only be clear, but also readily adjustable. Widex, a well-established hearing aid manufacturer based in Denmark, offers a range of AI hearing aids designed with ease of adjustment in mind.

Building on its earlier SoundSense Learn model, Widex’s system has two equally important components: the hearing aids themselves and the My Sound app (18). Widex hearing aids provide users with a staggering number of customizable audio settings; however, this level of personalization comes at the expense of a protracted initial fitting and similarly tedious subsequent adjustments. For example, if there are “three acoustic parameters: low, mid, and high frequencies, and they can each be set to 13 different levels…that totals 2,197 combinations of the three settings” (18). Manually sifting through this immense quantity of combinations to find the perfect settings for each scenario is understandably daunting, but the My Sound app’s machine learning algorithms promise to drastically reduce the time required to reach that ideal.

Through the My Sound interface, users are prompted to select a recommended sound setting based on their listening intention - e.g., relaxing at home, eating in a restaurant, walking through a park, etc. (19). If the user wants more control, they can select Create Your Own and interact with an AB comparison model, in which they pick between a succession of recommended sound settings (19). It might be easier to imagine this AB testing as a tournament-style bracket: the user picks a preferred sound setting, which is then compared against another candidate - across a maximum of 20 comparisons - until a winner, the user’s optimal sound setting, is established.
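
The toy example below mimics that tournament structure: a stand-in “user” function picks the preferred of two candidate settings and the winner advances, for at most 20 comparisons. The candidate pool, the three-band parameterization, and the preference function are all hypothetical.

```python
import random

def ab_tournament(candidates, prefer, max_comparisons=20):
    """Run a sequence of A/B comparisons and return the surviving setting.
    `prefer(a, b)` stands in for the user's choice in the app."""
    best = candidates[0]
    for challenger in candidates[1:max_comparisons + 1]:
        best = prefer(best, challenger)   # the winner advances to the next round
    return best

# 13 levels for each of three frequency bands gives 13**3 = 2,197 combinations.
levels = range(13)
all_settings = [{"low": l, "mid": m, "high": h}
                for l in levels for m in levels for h in levels]
candidates = random.sample(all_settings, 21)

# A stand-in "user" who happens to prefer boosted low frequencies.
prefer = lambda a, b: a if a["low"] >= b["low"] else b
print(ab_tournament(candidates, prefer))
```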

Behind the scenes, the machine learning model draws on abundant data stores consisting of the sound preferences of other Widex hearing aid users (20). Once a user completes the Create Your Own process, their unique sound settings for a given scenario enter the data pool. On its own, the accumulated data offers no obvious insights, but by applying a grouping method known as mean-shift clustering, the algorithm can ascertain the most common preferences across all listeners for a specific listening scenario (20). For example, in the “dining” scenario, the algorithm might notice that a significant portion of users prefer reduced middle and treble frequencies and slightly increased bass, thereby informing the AI to recommend similar settings to future users (20). Instead of predetermined sound settings, the recommendations are the culmination of informed decisions made by users who have been in similar scenarios. The reciprocal nature of this process ensures that the data pool is continuously updated, which in turn accounts for greater levels of personalization as well as changes in listening preferences over time.
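
The sketch below runs mean-shift clustering (via scikit-learn) on fabricated preference data for a hypothetical “dining” scenario, showing how dense regions of shared preferences can surface as candidate recommendations. The data and the three-band parameterization are assumptions, not Widex’s actual pipeline.

```python
import numpy as np
from sklearn.cluster import MeanShift

# Fabricated preferences: each row is one user's chosen (bass, mid, treble)
# gain in dB for the "dining" scenario.
rng = np.random.default_rng(1)
dining_preferences = np.vstack([
    rng.normal(loc=[3.0, -2.0, -3.0], scale=1.0, size=(80, 3)),  # bass up, mid/treble down
    rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(20, 3)),    # a smaller neutral group
])

# Mean-shift finds the densest regions of the preference space without
# being told in advance how many clusters to expect.
clustering = MeanShift().fit(dining_preferences)
print("cluster centers (bass, mid, treble in dB):")
print(np.round(clustering.cluster_centers_, 1))
```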

Machine learning has also shown proficiency in addressing a related but less often explored complaint from hearing aid users: the unnatural perception of their own voice while wearing hearing aids (21). While it may seem trivial, we are scarcely aware of our own voice when speaking, which is why being confronted with a recording of it can feel uncanny. Since hearing aids are generally designed to amplify any significant source of incoming audio, with little regard for its origin, users tend to hear an unfamiliar, amplified version of their own voice whenever they speak (not unlike the aforementioned recording). The perceived unnaturalness that users report is effectively a result of the hearing aids’ failure to distinguish sounds that originate from the user from those that originate from external sources (22).

Signia, a hearing aid manufacturer owned by the same parent company as Widex, developed a solution whereby the user’s own voice is processed separately from all other incoming sound (22). Signia’s Own Voice Processing (OVP) technology - recently upgraded to OVP 2.0 - operates on a deceptively simple design. Rather than training a machine learning model to recognize each user’s voice, a lengthy endeavor that would require considerable quantities of data, the OVP algorithm is trained to recognize the location and distance of incoming sound signals (23).

When getting fitted with a Signia AX hearing aid, the user is prompted to speak for a few seconds, during which time the OVP algorithm utilizes built-in sensors to construct a “3D acoustic model” of the area around the user’s head, and charts a path from the user’s mouth to the microphones located within the hearing aids (23). According to Signia, “this path arises due to the interaction of the wearer’s voice with the unique physical characteristics of the head and associated wearing position of the hearing aids” (23). Being able to differentiate the source of the user’s own voice from all other incoming sound allows the OVP algorithm to automatically switch between settings and adjust gain whenever the user is speaking (23). Studies regarding OVP’s efficacy have been exceptionally positive, with one study finding that 88% of users were satisfied with the “naturalness” of their voice when OVP was activated, as opposed to 52% when OVP was deactivated (24).
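
A simplified sketch of the underlying idea might look like the following: features describing the spatial path of the incoming sound are compared against a signature captured at the fitting session, and the gain profile switches when they match. The feature representation, similarity measure, threshold, and gain values are hypothetical stand-ins for Signia’s proprietary processing.

```python
import numpy as np

def is_own_voice(frame_features, own_voice_template, threshold=0.9):
    """Return True if the current frame's spatial features match the stored
    own-voice signature (cosine similarity above a threshold)."""
    a = np.asarray(frame_features, dtype=float)
    b = np.asarray(own_voice_template, dtype=float)
    similarity = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return similarity >= threshold

def process_frame(frame_features, own_voice_template):
    """Switch gain profiles depending on whether the wearer is speaking."""
    if is_own_voice(frame_features, own_voice_template):
        return {"gain_db": -6, "profile": "own_voice"}   # soften the wearer's voice
    return {"gain_db": 4, "profile": "external"}         # normal amplification

template = np.array([0.9, 0.1, 0.4])                 # captured during fitting
print(process_frame([0.88, 0.12, 0.41], template))   # close match -> own-voice profile
print(process_frame([0.10, 0.90, 0.20], template))   # different path -> external profile
```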

Although artificial intelligence has primarily been used to improve audio quality and hearing aid performance, that is far from its only application: AI is also transforming hearing aids into comprehensive tools for monitoring health and wellness.

As previously mentioned, there is an increasing body of medical literature suggesting that cognitive decline in later life is more prevalent among those with hearing loss (25). While the correlation is relatively clear, the mechanism by which one leads to the other is not entirely understood, and many different hypotheses have been posited. Among the most prevalent theories is the idea that sensory deprivation causes portions of the brain to become under-stimulated while others overcompensate, leading to a reorganization of brain function that is not conducive to ideal cognitive performance (25). Another popular theory is that hearing loss leads to social isolation, which in turn is linked to uncommonly high rates of cognitive decline (26).

With that in mind, it becomes paramount not only to use one's hearing aids, but to use them in a variety of social situations (27). Using AI-driven acoustic environment classification (AEC), Starkey hearing aids can identify numerous listening conditions, including the presence of wind, music, silence, speech, and varying levels of speech-in-noise (12). Where Starkey’s classification model differs from more traditional AEC systems is in the addition of an automatic data logging component that actively tracks the environments in which the hearing aids are being used (12). Through the companion Thrive app, users can monitor their daily hearing aid use, the time spent using hearing aids in the presence of speech, and the diversity of their listening environments (based on the AEC classifications), all of which is aggregated into a “social engagement score” (12). The hope is that the convenience of this system will both incentivize users to participate in social situations and provide them with the information required to monitor their own health. Furthermore, as AEC improves and more specific sources of sound become classifiable, it may be possible to pinpoint the listening situations that give a user the most trouble, thereby providing the information needed to optimize their hearing aid settings.
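
As a rough illustration of how such logging could be distilled into a single number, the sketch below combines wear time, speech exposure, and environment diversity into a 0-100 score. The weights, targets, and formula are invented for illustration; they are not Starkey’s actual scoring method.

```python
from dataclasses import dataclass

@dataclass
class DailyLog:
    hours_worn: float
    hours_with_speech: float
    environments_visited: set   # AEC classes logged during the day

def engagement_score(log: DailyLog, known_classes: int = 8) -> float:
    """Blend daily wear time, conversation time, and listening diversity
    into a single 0-100 score (hypothetical weights and targets)."""
    use = min(log.hours_worn / 12.0, 1.0)                      # wear time
    speech = min(log.hours_with_speech / 4.0, 1.0)             # conversation exposure
    variety = len(log.environments_visited) / known_classes    # environment diversity
    return round(100 * (0.4 * use + 0.4 * speech + 0.2 * variety), 1)

print(engagement_score(DailyLog(10.5, 3.0, {"speech", "speech_in_noise", "music"})))
```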

While artificial intelligence can help allay many concerns with existing hearing aid technology, hearing aids are ultimately a tool in service of the user, and are only helpful insofar as the user decides to employ them - a formidable challenge given the aforementioned low rates of use (2). However, there is hope that the observable improvements achieved through AI integration by numerous high-profile hearing aid developers will inspire greater confidence in those currently unsure of the value of hearing aids. Taken as a microcosm of assistive technologies as a whole, AI hearing aids showcase the advantages of marrying existing assistive technologies with artificial intelligence. Ideally, this will pave the way for the development of similarly innovative products and encourage further adoption of assistive technologies by those who stand to benefit from them the most.

SOURCES:

1. Holloway, C., Shawe-Taylor, J., & Moledo, A. (2021). Powering Inclusion: Artificial Intelligence and Assistive Technology. UCL Department of Science, Technology, Engineering, and Public Policy. https://www.ucl.ac.uk/steapp/collaborate/policy-impact-unit/current-projects/policy-brief-powering-inclusion-artificial

2. U.S. Department of Health and Human Services. (2021, March 25). Quick Statistics About Hearing. National Institute on Deafness and Other Communication Disorders. Retrieved May 31, 2022, from https://www.nidcd.nih.gov/health/statistics/quick-statistics-hearing#8

3. Oyler, A. (2012, January). Untreated Hearing Loss in Adults. American Speech-Language-Hearing Association. Retrieved May 31, 2022, from https://www.asha.org/articles/untreated-hearing-loss-in-adults/  

4. Sarant, J., Harris, D., Busby, P., Maruff, P., Schembri, A., Lemke, U., & Launer, S. (2020). The Effect of Hearing Aid Use on Cognition in Older Adults: Can We Delay Decline or Even Improve Cognitive Function?. Journal of clinical medicine, 9(1), 254. https://doi.org/10.3390/jcm9010254 

5. Bucholc, M, Bauermeister, S, Kaur, D, McClean, PL, Todd, S. (2022, February 22). The impact of hearing impairment and hearing aid use on progression to mild cognitive impairment in cognitively healthy adults: An observational cohort study. Alzheimer's Dement. 8(1). https://doi.org/10.1002/trc2.12248  

6. McCormack, A., & Fortnum, H. (2013, March 11). Why do people fitted with hearing aids not wear them? International Journal of Audiology, 52(5), 360-368. https://doi.org/10.3109/14992027.2013.769066

7. Health24. (2017, October 20). Selective hearing: How humans focus on what they want to hear. Health24. Retrieved May 31, 2022, from https://www.news24.com/health24/medical/hearing-management/news/selective-hearing-how-humans-focus-on-what-they-want-to-hear-20171020  

8. Sensorineural hearing loss. (n.d.). American Speech-Language-Hearing Association. Retrieved May 31, 2022, from https://www.asha.org/public/hearing/sensorineural-hearing-loss/  

9. Alexander J. M. (2021). Hearing Aid Technology to Improve Speech Intelligibility in Noise. Seminars in hearing, 42(3), 175–185. https://doi.org/10.1055/s-0041-1735174

10. Digital Noise Reduction Processing in hearing aids: How much and where? (2008, March 4). The Hearing Review. Retrieved May 31, 2022, from https://hearingreview.com/practice-building/practice-management/digital-noise-reduction-processing-in-hearing-aids-how-much-and-where 

11. Kim, H. J., Lee, J. H., & Shim, H. J. (2020). Effect of Digital Noise Reduction of Hearing Aids on Music and Speech Perception. Journal of audiology & otology, 24(4), 180–190. https://doi.org/10.7874/jao.2020.00031

12. Fabry, D. A., & Bhowmik, A. K. (2021). Improving Speech Understanding and Monitoring Health with Hearing Aids Using Artificial Intelligence and Embedded Sensors. Seminars in hearing, 42(3), 295–308. https://doi.org/10.1055/s-0041-1735136 

13. What is a Deep Neural Network? (n.d.) Oticon. Retrieved May 31, 2022, from https://www.oticon.com/blog/what-is-a-deep-neural-network-dnn 

14. Dambroski, S. (2019, March 15). Understanding how the brain makes sense of sound. NSF. Retrieved May 31, 2022, from https://www.nsf.gov/discoveries/disc_summ.jsp?cntn_id=297993&org=NSF&from=news 

15. Moolayil, J. J. (2019, July 24). A Layman’s Guide to Deep Neural Networks. Towards Data Science. https://towardsdatascience.com/a-laymans-guide-to-deep-neural-networks-ddcea24847fb

16. Santurrete, S., & Behrens, T. (2020). The Audiology of Oticon More [White paper]. Oticon. https://wdh01.azureedge.net/-/media/oticon/main/pdf/master/whitepaper/69619uk_wp_oticon_more_audiology.pdf?la=en&rev=BDC0&hash=D3C850D07BA5CD049F38E0FB06CDA2A6

17. Brændgaard, M. (2020). An Introduction to MoreSound Intelligence [White paper]. Oticon. https://wdh01.azureedge.net/-/media/oticon/main/pdf/master/whitepaper/69674uk_tech_paper_moresound_intelligence.pdf?la=en&rev=3F19&hash=6A1037B3951F262345E45C4A725D3CC7

18. Would You Like Some AI Or Machine Learning With Your Hearing Aids? (2018, November 23). Widex Pro. Retrieved May 31, 2022, from https://www.widexpro.com/en-us/blog/global/ai-machine-learning-with-hearing-aids/

19. Townsend O., Nielsen J.B., Balslev D. (2018, May 22). Soundsense Learn-Listening Intention and Machine Learning. The Hearing Review. https://hearingreview.com/hearing-products/hearing-aids/real-life-applications-machine-learning-hearing-aids-2#comments

20. The Difference Is In The Data. (2021, September 1). Widex Pro. Retrieved May 31, 2022, from https://www.widexpro.com/en/blog/global/2021-09-01-my-sound-the-difference-is-in-the-data/

21. Froehlich, M., Powers, T.A., Branda, E., Weber, J. (2018, April 30). Perception of Own Voice Wearing Hearing Aids: Why “Natural” is the New Normal. Audiologyonline. https://www.audiologyonline.com/articles/perception-own-voice-wearing-hearing-22822

22. How Signia Improved Machine Learning in Hearing Aids. (2019, June 1). Signia. Retrieved May 31, 2022, from https://www.signia-pro.com/en-ca/blog/global/2019-06-01-how-signia-improved-machine-learning-in-hearing-aids/

23. Signia AX Own Voice Processing 2.0. (2022, May). Signia. Retrieved May 31, 2022, from https://www.signia-library.com/scientific_marketing/signia-ax-own-voice-processing-2-0/

24. Høydel, E.H. (2017, October 31). A New Own Voice Processing System for Optimizing Communication. The Hearing Review. https://hearingreview.com/practice-building/marketing/new-voice-processing-system-optimizing-communication

25. Powell, D.S., Oh, E.S., Reed, N.S., Lin, F.R., & Deal, J.A. (2022, February 28). Hearing Loss and Cognition: What We Know and Where We Need to Go. Frontiers in Aging Neuroscience. https://doi.org/10.3389/fnagi.2021.76940

26. National Academies of Sciences, Engineering, and Medicine. (2020). Social Isolation and Loneliness in Older Adults: Opportunities for the Health Care System. The National Academies Press. https://doi.org/10.17226/25663