
    Exploring the use of speech in audiology: A mixed methods study

    This thesis aims to advance the understanding of how speech testing is, and can be, used for hearing device users within the audiological test battery. To address this, I engaged with clinicians and patients to understand the current role that speech testing plays in audiological testing in the UK, and developed a new listening test, which combined speech testing with localisation judgments in a dual-task design. Normal-hearing listeners and hearing aid users were tested, and a series of technical measurements were made to understand how advanced hearing aid settings might determine task performance. A questionnaire was completed by public and private sector hearing healthcare professionals in the UK to explore the use of speech testing. Overall, results revealed this assessment tool was underutilised by UK clinicians, but there was significantly greater use in the private sector. Through a focus group and semi-structured interviews with hearing aid users, I identified a mismatch between their common listening difficulties and the assessment tools used in audiology, and highlighted a lack of deaf awareness in UK adult audiology. The Spatial Speech in Noise Test (SSiN) is a dual-task paradigm that simultaneously assesses relative localisation and word identification performance. Testing normal-hearing listeners to investigate the impact of the dual-task design found that the SSiN increases cognitive load and therefore better reflects challenging listening situations. A comparison of relative localisation and word identification performance showed that hearing aid users benefitted less from spatially separating speech and noise in the SSiN than normal-hearing listeners. To investigate how the SSiN could be used to assess advanced hearing aid features, a subset of hearing aid users were fitted with the same hearing aid type and completed the SSiN once with adaptive directionality and once with omnidirectionality. The SSiN results differed between conditions, but a larger sample size is needed to confirm these effects. Hearing aid technical measurements were used to quantify how hearing aid output changed in response to the SSiN paradigm.

    Sensory Communication

    Contains table of contents for Section 2, an introduction, and reports on fifteen research projects. Supported by:
    National Institutes of Health Grant RO1 DC00117
    National Institutes of Health Grant RO1 DC02032
    National Institutes of Health Contract P01-DC00361
    National Institutes of Health Contract N01-DC22402
    National Institutes of Health/National Institute on Deafness and Other Communication Disorders Grant 2 R01 DC00126
    National Institutes of Health Grant 2 R01 DC00270
    National Institutes of Health Contract N01 DC-5-2107
    National Institutes of Health Grant 2 R01 DC00100
    U.S. Navy - Office of Naval Research/Naval Air Warfare Center Contract N61339-94-C-0087
    U.S. Navy - Office of Naval Research/Naval Air Warfare Center Contract N61339-95-K-0014
    U.S. Navy - Office of Naval Research/Naval Air Warfare Center Grant N00014-93-1-1399
    U.S. Navy - Office of Naval Research/Naval Air Warfare Center Grant N00014-94-1-1079
    U.S. Navy - Office of Naval Research Subcontract 40167
    U.S. Navy - Office of Naval Research Grant N00014-92-J-1814
    National Institutes of Health Grant R01-NS33778
    U.S. Navy - Office of Naval Research Grant N00014-88-K-0604
    National Aeronautics and Space Administration Grant NCC 2-771
    U.S. Air Force - Office of Scientific Research Grant F49620-94-1-0236
    U.S. Air Force - Office of Scientific Research Agreement with Brandeis University

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a form that would enhance its accessibility, interpretability, and applicability for system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.

    Acoustic source separation based on target equalization-cancellation

    Normal-hearing listeners are good at focusing on the target talker while ignoring interferers in a multi-talker environment. Efforts have therefore been devoted to building psychoacoustic models that explain binaural processing in multi-talker environments and to developing bio-inspired source separation algorithms for hearing-assistive devices. This thesis presents a target-Equalization-Cancellation (target-EC) approach to the source separation problem. The idea of the target-EC approach is to use the energy change before and after cancelling the target to estimate a time-frequency (T-F) mask, in which each entry estimates the strength of the target signal in the original mixture. Once the mask is calculated, it is applied to the original mixture to preserve the target-dominant T-F units and to suppress the interferer-dominant T-F units. On the psychoacoustic modeling side, when the output of the target-EC approach is evaluated with the Coherence-based Speech Intelligibility Index (CSII), the predicted binaural advantage closely matches the pattern of the measured data. On the application side, the performance of the target-EC source separation algorithm was evaluated by psychoacoustic measurements using both a closed-set and an open-set speech corpus, and the target-EC cue was shown to be a better cue for source separation than the interaural difference cues.
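    As a rough illustration of the target-EC idea described above, the NumPy sketch below (all names hypothetical, not the thesis implementation) assumes the target sits on the median plane, so equalization is trivial and cancellation reduces to subtracting the two ear signals; each mask entry is then the fractional energy drop after cancellation, which is large exactly where the target dominates.

    ```python
    import numpy as np

    def target_ec_mask(left, right, floor=0.0):
        """Illustrative target-EC time-frequency mask (sketch only).

        left, right: complex STFTs (freq x time) of the binaural mixture.
        Assumes a median-plane target, so cancellation is a simple
        subtraction of the equalized ear signals.
        """
        mix_energy = 0.5 * (np.abs(left) ** 2 + np.abs(right) ** 2)
        residual = left - right                 # target cancelled
        res_energy = np.abs(residual) ** 2
        # A large energy drop after cancellation marks a target-dominant
        # T-F unit, so the mask entry approaches 1 there.
        return np.clip(1.0 - res_energy / (mix_energy + 1e-12), floor, 1.0)

    def apply_mask(left, right, mask):
        """Apply the ratio mask to the averaged (diotic) mixture."""
        return mask * 0.5 * (left + right)
    ```

    With a pure midline target (identical ear signals), the residual is zero and the mask passes everything; a component that cancels poorly is suppressed toward the floor.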

    Deep learning-based denoising streamed from mobile phones improves speech-in-noise understanding for hearing aid users

    The hearing loss of almost half a billion people is commonly treated with hearing aids. However, current hearing aids often do not work well in real-world noisy environments. We present a deep-learning-based denoising system that runs in real time on iPhone 7 and Samsung Galaxy S10 (25 ms algorithmic latency). The denoised audio is streamed to the hearing aid, resulting in a total delay of around 75 ms. In tests with hearing aid users with moderate to severe hearing loss, our denoising system improves audio across three tests: (1) a listening test with subjective audio ratings, (2) a listening test of objective speech intelligibility, and (3) live conversations in a noisy environment with subjective ratings. Subjective ratings increase by more than 40% for both the listening test and the live conversation, compared with a fitted hearing aid as a baseline. Speech reception thresholds, which measure speech understanding in noise, improve by 1.6 dB. Ours is the first denoising system implemented on a mobile device and streamed directly to users' hearing aids, using only a single audio input channel, that improves user satisfaction on all tested aspects, including speech intelligibility. This includes an overall preference for the denoised, streamed signal over the hearing aid alone, accepting the higher latency in exchange for the significant improvement in speech understanding.
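    The end-to-end delay quoted above can be read as a simple latency budget. Only the 25 ms algorithmic figure and the ~75 ms total are stated in the abstract; the split of the remainder into wireless streaming is an assumption for illustration.

    ```python
    # Hypothetical latency budget for phone-based denoising streamed
    # to a hearing aid. Values other than ALGORITHMIC_MS and the total
    # are assumptions, not figures from the paper.
    ALGORITHMIC_MS = 25   # stated: model lookahead + frame buffering
    STREAMING_MS = 50     # assumed: wireless transport, phone -> hearing aid
    TOTAL_MS = ALGORITHMIC_MS + STREAMING_MS   # approx. 75 ms end to end
    ```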

    Understanding sorting algorithms using music and spatial distribution

    This thesis is concerned with the communication of information using auditory techniques. In particular, a music-based interface has been used to communicate the operation of a number of sorting algorithms to users. This auditory interface has been further enhanced by the creation of an auditory scene including a sound wall, which enables the interface to use musical parameters in conjunction with 2D/3D spatial distribution to communicate the essential processes in the algorithms. The sound wall has been constructed from a grid of measurements made with a human head to create a spatial distribution. The algorithm designer can therefore communicate events using pitch, rhythm, and timbre, and associate these with particular positions in space. A number of experiments have been carried out to investigate the usefulness of music and the sound wall in communicating information relevant to the algorithms, and user understanding of the six algorithms has been tested. In all experiments, the effects of previous musical experience have been allowed for. The results show that users can use musical parameters to understand algorithms, and in all cases improvements were observed with the sound wall. Different user performance was observed with different algorithms, and it is concluded that certain types of information lend themselves more readily than others to communication through auditory interfaces. As a result of the experimental analysis, recommendations are given on how to improve the sound wall and user understanding through a better choice of musical mappings.
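    To make the event-to-pitch mapping concrete, the sketch below traces bubble sort and emits a (event, pitch) pair for each comparison and swap, mapping element values onto MIDI-style note numbers. This is an illustrative mapping of my own, not the thesis's actual interface; the base pitch and event vocabulary are assumptions.

    ```python
    def sonify_bubble_sort(data, base_pitch=60):
        """Return the sorted list and a hypothetical event stream.

        Each comparison and swap yields an (event_name, midi_pitch) pair,
        with the element value offset from base_pitch (60 = middle C).
        A real interface would also vary rhythm, timbre, and position.
        """
        a = list(data)
        events = []
        n = len(a)
        for i in range(n):
            for j in range(n - 1 - i):
                events.append(("compare", base_pitch + a[j]))
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
                    events.append(("swap", base_pitch + a[j]))
        return a, events
    ```

    Playing the event stream back lets a listener hear the characteristic "settling" of bubble sort: swap pitches descend within each pass as smaller values bubble forward.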

    Immersive brain entrainment in virtual worlds: actualizing meditative states

    Virtual reality, with its associated hardware and software advances, is becoming a viable tool in neuroscience and related fields. Technology has long been harnessed to modify a user's state of mind through different approaches. Combining this background with merged-reality systems, it is possible to develop intelligent tools that can manipulate brain states and enhance training mechanisms.