
    Smartphone Apps in the Context of Tinnitus: Systematic Review

    Smartphones containing sophisticated high-end hardware and offering high computational capabilities at affordable cost have become mainstream and an integral part of users' lives. Widespread adoption of smartphones has encouraged the development of many applications, resulting in a well-established ecosystem that is easily discoverable and accessible via the marketplaces of the different mobile platforms. These smartphone applications are no longer limited to entertainment purposes but are increasingly established in the scientific and medical fields. In the context of tinnitus, the ringing in the ear, these apps range from relief, management, and self-help all the way to interfacing with external sensors to better understand the phenomenon. In this paper, we survey the smartphone applications in and around tinnitus. Based on the PRISMA guidelines, we systematically analyze and investigate the current state of smartphone apps that are directly applied in the context of tinnitus. In particular, we explore Google Scholar, CiteSeerX, Microsoft Academic, and Semantic Scholar to identify scientific contributions. Additionally, we search the Google Play and Apple App Stores to identify relevant smartphone apps and their respective properties. This review (1) gives an up-to-date overview of existing apps and (2) lists and discusses the scientific literature pertaining to smartphone apps used in the context of tinnitus.

    Deep learning-based denoising streamed from mobile phones improves speech-in-noise understanding for hearing aid users

    The hearing loss of almost half a billion people is commonly treated with hearing aids. However, current hearing aids often do not work well in real-world noisy environments. We present a deep learning-based denoising system that runs in real time on the iPhone 7 and Samsung Galaxy S10 (25 ms algorithmic latency). The denoised audio is streamed to the hearing aid, resulting in a total delay of around 75 ms. In tests with hearing aid users having moderate to severe hearing loss, our denoising system improves audio across three tests: 1) listening for subjective audio ratings, 2) listening for objective speech intelligibility, and 3) live conversations in a noisy environment for subjective ratings. Subjective ratings increase by more than 40% for both the listening test and the live conversation compared to a fitted hearing aid as a baseline. Speech reception thresholds, which measure speech understanding in noise, improve by 1.6 dB SRT. Ours is the first denoising system implemented on a mobile device and streamed directly to users' hearing aids using only a single audio input channel while improving user satisfaction on all tested aspects, including speech intelligibility. This includes overall preference for the denoised and streamed signal over the hearing aid alone, showing that users accept the higher latency in exchange for the significant improvement in speech understanding.
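The 25 ms algorithmic and roughly 75 ms total delay figures above imply a latency budget split across the processing block, the phone's audio stack, and the wireless hop to the aid. A minimal sketch of such a budget, where the block size and the non-algorithmic components are illustrative assumptions (the abstract only reports the 25 ms and ~75 ms endpoints):

```python
# Latency budget sketch for a phone-based denoiser streaming to a hearing
# aid. Only the 25 ms algorithmic and ~75 ms total figures come from the
# abstract; the sample rate, block size, and remaining split are assumptions.
SAMPLE_RATE = 16_000   # Hz, a typical speech-processing rate (assumption)
BLOCK = 400            # samples buffered per processing block (assumption)

algorithmic_ms = BLOCK / SAMPLE_RATE * 1000   # delay from block buffering
io_and_os_ms = 20                             # phone audio I/O stack (assumption)
wireless_ms = 30                              # streaming hop to the aid (assumption)

total_ms = algorithmic_ms + io_and_os_ms + wireless_ms
print(f"algorithmic: {algorithmic_ms:.0f} ms, total: {total_ms:.0f} ms")
```

With these assumed numbers, a 400-sample block at 16 kHz reproduces the reported 25 ms algorithmic latency, and the remaining 50 ms is absorbed by I/O and transmission.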

    Automatic User Preferences Selection of Smart Hearing Aid Using BioAid

    Noisy environments, changes and variations in the volume of speech, and non-face-to-face conversations impair the user experience with hearing aids. Generally, a hearing aid amplifies sounds so that a hearing-impaired person can listen, converse, and actively engage in daily activities. Presently, there are sophisticated hearing aid algorithms available that operate on numerous frequency bands to not only amplify but also provide tuning and noise filtering to minimize background distractions. One of these is the BioAid assistive hearing system, an open-source, freely downloadable app with twenty-four tuning settings. Critically, with this device, a person suffering from hearing loss must manually alter the settings of their hearing device whenever their surroundings change in order to attain a comfortable level of hearing. This manual switching among multiple tuning settings is inconvenient and cumbersome, since the user is forced to switch to the state that best matches the scene every time the auditory environment changes. The goal of this study is to eliminate this manual switching and automate BioAid with a scene classification algorithm so that the system automatically applies the user-selected preferences after adequate training. The aim of acoustic scene classification is to recognize the audio signature of one of the predefined scene classes that best represents the environment in which a recording was made. BioAid, an open-source, biologically inspired hearing aid algorithm, is used after conversion to Python. The proposed method consists of two main parts: classification of auditory scenes and selection of hearing aid tuning settings based on user experience. The DCASE2017 dataset is utilized for scene classification. Among the many classifiers that were trained and tested, random forests achieved the highest accuracy of 99.7%.
In the second part, clean speech audio from the LJ Speech dataset is combined with the scenes, and the user is asked to listen to the resulting audio and adjust the presets and subsets. A CSV file stores the presets and subsets at which the user can hear clearly for each scene. Various classifiers are trained on this dataset of user preferences. After training, clean speech audio is convolved with a scene and fed to the scene classifier, which predicts the scene. The predicted scene is then fed to the preset classifier, which predicts the user's choice of preset and subset. BioAid is automatically tuned to the predicted selection. The accuracy of the random forest in predicting presets and subsets was 100%. This approach has great potential to eliminate the tedious manual switching of hearing assistive device parameters, allowing hearing-impaired individuals to participate actively in daily life while hearing aid settings are adjusted automatically based on the acoustic scene.
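The two-stage flow described above (scene classifier feeding a preset classifier) can be sketched with toy stand-in models. This is a minimal illustration only: nearest-centroid lookup replaces the random forests used in the study, and the feature vectors, scene names, and preset labels are all invented:

```python
# Hedged sketch of the two-stage auto-tuning pipeline: stage 1 predicts the
# acoustic scene, stage 2 returns the user's logged preset/subset for that
# scene. Toy nearest-centroid models stand in for the paper's random forests.
from math import dist

# Stage 1: scene classifier (one centroid per scene over toy 2-D features).
scene_centroids = {"cafe": (0.8, 0.2), "street": (0.3, 0.9), "home": (0.1, 0.1)}

def classify_scene(features):
    return min(scene_centroids, key=lambda s: dist(features, scene_centroids[s]))

# Stage 2: preset classifier learned from the user's CSV-like preference log.
user_log = [("cafe", ("preset3", "subsetB")),
            ("street", ("preset1", "subsetA")),
            ("home", ("preset2", "subsetC"))]
preset_for_scene = dict(user_log)

def auto_tune(features):
    scene = classify_scene(features)        # predict the scene
    return scene, preset_for_scene[scene]   # look up the user's tuning choice

print(auto_tune((0.75, 0.25)))   # a cafe-like feature vector
```

In the study proper, both stages are trained classifiers, so stage 2 can generalize beyond an exact scene-to-preset lookup; the sketch only shows the data flow.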

    Connected Hearing Devices and Audiologists: The User-Centered Development of Digital Service Innovations

    Today, medical technology manufacturers enter the service market through the development of digital service innovations. In the field of audiology, these developments increasingly shift service capacities from audiologists to manufacturers and technical systems. However, the technology-driven developments of manufacturers lack acceptance among hearing device users and undermine the important role of audiologists in service provision. By following a user-centered design approach to deal with the technological and social challenges of disruptive services, we aim to develop service innovations on an integrated service platform in the field of tele-audiology. To ensure the acceptance of technology-driven service innovations among hearing device users and audiologists, we systematically integrated these actors in a participatory innovation process. With qualitative and quantitative data, we identified several requirements and preferences for different service innovations in the field of tele-audiology. According to the preferences of the different actors, we proposed a service platform approach based on a connected hearing device with three pillars of application: 1) one-to-one (1:1) service innovations based on a remote fitting concept directly improve the availability of services offered by audiologists without requiring physical presence. Building on this, 2) one-to-many (1:N) service innovations use the connected hearing device as an indirect data source for training a machine learning algorithm that empowers users through the automation of service processes; a centralized server system collects the data and trains this algorithm. As a future outlook, we show the potential of using the connected hearing device for 3) cross-industry (N:M) service innovations in contexts outside the healthcare domain and give practical implications for the market launch of successful service innovations in the field of tele-audiology.

    Improvement of Speech Perception for Hearing-Impaired Listeners

    Hearing impairment is a prevalent health problem affecting about 5% of the world's adult population. Hearing aids and cochlear implants have played an essential role in helping patients for decades, but several open problems still prevent them from providing the maximum benefit. Financial and comfort concerns lead only one in four patients to use hearing aids, and cochlear implant users often have trouble understanding speech in noisy environments. In this dissertation, we addressed the limitations of hearing aids by proposing a new signal processing system named the Open-source Self-fitting Hearing Aids System (OS SF hearing aids). The proposed system adopts state-of-the-art digital signal processing technologies, combined with accurate hearing assessment and a machine learning-based self-fitting algorithm, to further improve speech perception and comfort for hearing aid users. Informal testing with hearing-impaired listeners showed that results from the proposed system differed by less than 10 dB on average from those obtained with a clinical audiometer. In addition, sixteen-channel filter banks with an adaptive differential microphone array provide up to 6 dB of SNR improvement in noisy environments, and the machine learning-based self-fitting algorithm provides more suitable hearing aid settings. To maximize cochlear implant users' speech understanding in noise, sequential (S) and parallel (P) coding strategies were proposed by integrating high-rate desynchronized pulse trains (DPT) into the continuous interleaved sampling (CIS) strategy. Ten participants with severe hearing loss took part in two rounds of cochlear implant testing. The results showed that the CIS-DPT-S strategy significantly improved speech perception in background noise (11%), while the CIS-DPT-P strategy yielded significant improvements in both quiet (7%) and noisy (9%) environments.
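The SNR improvements quoted above are differences of signal-to-noise ratios expressed in decibels. A minimal sketch of that computation, using toy signals and an assumed processing stage that simply halves the noise amplitude (chosen so the gain works out to about 6 dB):

```python
# Hedged sketch: the dB arithmetic behind an "up to 6 dB SNR improvement"
# claim. Signals are toy lists; halving noise amplitude quarters its power,
# which corresponds to a 10*log10(4) ~= 6 dB gain.
from math import log10

def snr_db(signal, noise):
    p_sig = sum(x * x for x in signal) / len(signal)    # mean signal power
    p_noise = sum(x * x for x in noise) / len(noise)    # mean noise power
    return 10 * log10(p_sig / p_noise)

clean = [1.0, -1.0] * 100
noise_before = [0.5, -0.5] * 100     # noise before processing
noise_after = [0.25, -0.25] * 100    # assumed: processing halves the amplitude

improvement = snr_db(clean, noise_after) - snr_db(clean, noise_before)
print(f"SNR gain: {improvement:.1f} dB")
```

The same logarithmic scale underlies the 1.6 dB SRT figure: each dB of improvement corresponds to understanding speech at a proportionally worse noise level.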

    Developing a Conversation Assistant for the Hearing Impaired Using Automatic Speech Recognition

    Understanding and participating in conversations has been reported as one of the biggest challenges hearing-impaired people face in their daily lives. These communication problems have been shown to have wide-ranging negative consequences, affecting their quality of life and the opportunities available to them in education and employment. A conversational assistance application was investigated to alleviate these problems. The application uses automatic speech recognition technology to provide real-time speech-to-text transcriptions to the user, with the goal of helping deaf and hard-of-hearing persons in conversational situations. To validate the method and investigate its usefulness, a prototype application was developed for testing purposes using open-source software. A user test was designed and performed with participants representing the target user group. The results indicate that the Conversation Assistant method is valid, meaning it can help the hearing impaired to follow and participate in conversational situations. Speech recognition accuracy, especially in noisy environments, was identified as the primary target for further development to increase the usefulness of the application. Conversely, recognition speed was deemed sufficient, already surpassing the transcription speed of human transcribers.
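The core loop of such an assistant consumes a stream of recognizer hypotheses and displays only the finalized text. A minimal sketch with a stub recognizer standing in for the open-source ASR engine (the partial/final hypothesis pattern is typical of streaming ASR APIs, but the interface here is invented):

```python
# Hedged sketch of a conversation-assistant display loop: a streaming ASR
# engine emits (is_final, text) hypothesis pairs; partial hypotheses are
# discarded and only finalized segments are appended to the transcript.
# The recognizer is a stub; a real app would plug in an actual ASR engine.
def stub_recognizer(audio_chunks):
    # Mimic a streaming recognizer: one partial then one final per chunk.
    for i, final_text in enumerate(audio_chunks):
        yield (False, f"partial hypothesis {i}")
        yield (True, final_text)

transcript = []
for is_final, text in stub_recognizer(["hello", "how are you"]):
    if is_final:
        transcript.append(text)   # show only stabilized text to the user

print(" ".join(transcript))
```

A real implementation would additionally overwrite the last displayed line with each partial hypothesis to keep perceived latency low, then freeze it when the final result arrives.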

    Digital Filter Banks for Contemporary Audio Signal Processing Tasks

    The paper reviews techniques of digital filter bank synthesis that can be applied to contemporary speech processing challenges. It describes practical experience with digital filter banks in original sound processing systems, namely a music player with noise-aware audio enhancement and a hearing aid application for a smartphone.
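The simplest relative of the filter banks surveyed here is a complementary two-band split: a lowpass filter plus its complement, which by construction sums back to the input. A minimal sketch (the moving-average lowpass and tap count are illustrative choices, not from the paper):

```python
# Hedged sketch: a complementary two-band analysis filter bank. The highpass
# band is defined as input minus lowpass, so low + high reconstructs the
# input exactly; real hearing-aid banks use many bands and sharper filters.
def lowpass(x, taps=4):
    # Causal moving average, padding with the first sample at the start.
    padded = [x[0]] * (taps - 1) + list(x)
    return [sum(padded[i:i + taps]) / taps for i in range(len(x))]

def two_band(x):
    low = lowpass(x)
    high = [xi - li for xi, li in zip(x, low)]   # complementary band
    return low, high

x = [0.0, 1.0, 0.0, -1.0] * 8          # toy oscillating input
low, high = two_band(x)
reconstructed = [l + h for l, h in zip(low, high)]
```

Per-band gain applied between analysis and summation is what lets such a bank amplify only the frequency regions where a user has a hearing loss.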

    First steps towards solving the café problem

    Hearing loss, and assistive technologies to compensate for it, are becoming more and more common. Hearing aids have improved the quality of life for many people with hearing loss but are still insufficient in some social settings. The café problem arises when a group of people is talking in a relatively noisy environment and one person wears hearing aids. Even with modern advancements, such as speech recognition and noise cancellation, people using hearing aids have difficulty distinguishing the group's conversation from other noises. This thesis provides the architecture, design, implementation, and evaluation of a mobile application as a first step toward a system that can counter the café problem. A critical factor in such a system is reducing audio latency to a minimum. We investigate where latency is introduced in the system by creating an experimental setup and evaluating the system. We implement a prototype system and use the experimental setup to identify latency-inducing components. We discuss how this latency can be reduced and lay out the future steps needed to complete the system.
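Identifying latency-inducing components amounts to timing each stage of the audio pipeline separately. A minimal sketch of such a measurement harness (stage names, the 48 kHz rate, and the stand-in stage bodies are all illustrative assumptions, not the thesis's actual pipeline):

```python
# Hedged sketch: per-stage latency measurement for a toy audio pipeline,
# the kind of breakdown used to find where delay is introduced. The stages
# are trivial stand-ins for capture, processing, and wireless transmission.
import time

def capture(buf):               # stand-in for microphone capture
    return list(buf)

def denoise(buf):               # stand-in for the processing stage
    return [0.9 * x for x in buf]

def transmit(buf):              # stand-in for the wireless hop
    return buf

pipeline = [("capture", capture), ("denoise", denoise), ("transmit", transmit)]

buf = [0.1] * 480               # one 10 ms frame at 48 kHz (assumption)
report = {}
for name, stage in pipeline:
    t0 = time.perf_counter()
    buf = stage(buf)
    report[name] = (time.perf_counter() - t0) * 1000.0   # ms per stage

for name, ms in report.items():
    print(f"{name}: {ms:.3f} ms")
```

In a real measurement the dominant terms are usually buffering and transport rather than computation, which is why the stage boundaries matter more than the stage bodies here.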

    Crescendo: Developing a smartphone based hearing system

    Undergraduate thesis submitted to the Department of Computer Science, Ashesi University, in partial fulfillment of the Bachelor of Science degree in Computer Science, May 2020.
    The ability to hear is an integral part of being a human being. It plays a vital role in various facets of the human experience: communication, listening to music, and even being aware of one's environment. Unfortunately, not everyone is born with this gift, and the hard of hearing or deaf are thus isolated from society, simply because they cannot communicate and build social connections in the same way as the hearing population. Hearing aids were developed to bridge that gap; however, their considerable expense and limited production mean that these assistive technologies are not accessible to those who really need them. Although this has continuously been an issue, the adoption of smartphones across the globe means that technology is more accessible to people no matter what part of the world they live in. Advances in digital signal processing algorithms also suggest that sound can be manipulated to augment users' hearing. The combination of these accessible devices and improvements in technology means that there should be a way to provide low-cost, accessible hearing devices to all individuals, even those in low-income countries. This research explores how a functional application may be created both to test a user's hearing and to function as a hearing aid.

    Exploring the use of speech in audiology: A mixed methods study

    This thesis aims to advance the understanding of how speech testing is, and can be, used for hearing device users within the audiological test battery. To address this, I engaged with clinicians and patients to understand the current role that speech testing plays in audiological testing in the UK, and developed a new listening test that combined speech testing with localisation judgments in a dual-task design. Normal-hearing listeners and hearing aid users were tested, and a series of technical measurements were made to understand how advanced hearing aid settings might determine task performance. A questionnaire was completed by public and private sector hearing healthcare professionals in the UK to explore the use of speech testing. Overall, results revealed this assessment tool was underutilised by UK clinicians, but there was significantly greater use in the private sector. Through a focus group and semi-structured interviews with hearing aid users, I identified a mismatch between their common listening difficulties and the assessment tools used in audiology, and highlighted a lack of deaf awareness in UK adult audiology. The Spatial Speech in Noise Test (SSiN) is a dual-task paradigm that simultaneously assesses relative localisation and word identification performance. Testing normal-hearing listeners to investigate the impact of the dual-task design found that the SSiN increases cognitive load and therefore better reflects challenging listening situations. A comparison of relative localisation and word identification performance showed that hearing aid users benefitted less than normal-hearing listeners from spatially separating speech and noise in the SSiN. To investigate how the SSiN could be used to assess advanced hearing aid features, a subset of hearing aid users were fitted with the same hearing aid type and completed the SSiN once with adaptive directionality and once with omnidirectionality.
The SSiN results differed between conditions, but a larger sample size is needed to confirm these effects. Hearing aid technical measurements were used to quantify how the hearing aid output changed in response to the SSiN paradigm.