
    Level discrimination of speech sounds by hearing-impaired individuals with and without hearing amplification

    Objectives: The current study was designed to examine how hearing-impaired individuals judge level differences between speech sounds with and without hearing amplification. It was hypothesized that hearing aid compression should adversely affect the user's ability to judge level differences. Design: Thirty-eight hearing-impaired participants performed an adaptive tracking procedure to determine their level-discrimination thresholds for different word and sentence tokens, as well as speech-spectrum noise, with and without their hearing aids. Eight normal-hearing participants performed the same task for comparison. Results: Level discrimination for different word and sentence tokens was more difficult than the discrimination of stationary noises. Word level discrimination was significantly more difficult than sentence level discrimination. There were no significant differences, however, between mean performance with and without hearing aids, and no correlations between performance and various hearing aid measurements. Conclusions: There is a clear difficulty in judging the level differences between words or sentences relative to differences between broadband noises, but this difficulty was found for both hearing-impaired and normal-hearing individuals and had no relation to hearing aid compression measures. The lack of a clear adverse effect of hearing aid compression on level discrimination is suggested to be due to the low effective compression ratios of currently fitted hearing aids.
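    The adaptive tracking procedure is not specified in the abstract; a common choice for threshold estimation is a transformed up-down staircase. Below is a minimal sketch of a 2-down/1-up track, which converges on the 70.7%-correct point. The `present_trial` callback, step sizes, and reversal counts are illustrative assumptions, not the study's parameters.

    ```python
    # Minimal 2-down/1-up adaptive staircase for level-discrimination
    # thresholds (Levitt-style). The presentation callback and step sizes
    # are illustrative assumptions, not the parameters used in the study.

    def run_staircase(present_trial, start_db=6.0, step_db=2.0,
                      min_step_db=0.5, n_reversals=8):
        level = start_db          # current level difference in dB
        step = step_db
        direction = -1            # start by making the task harder
        reversals = []
        correct_streak = 0

        while len(reversals) < n_reversals:
            if present_trial(level):          # True if listener was correct
                correct_streak += 1
                if correct_streak == 2:       # 2 correct -> smaller difference
                    correct_streak = 0
                    if direction == +1:       # direction change: a reversal
                        reversals.append(level)
                        step = max(step / 2, min_step_db)
                    direction = -1
                    level = max(level - step, 0.25)
            else:                             # 1 wrong -> larger difference
                correct_streak = 0
                if direction == -1:
                    reversals.append(level)
                    step = max(step / 2, min_step_db)
                direction = +1
                level += step

        # Threshold estimate: mean of the last few reversal levels.
        tail = reversals[-6:]
        return sum(tail) / len(tail)
    ```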

    Manipulation of Auditory Feedback in Individuals with Normal Hearing and Hearing Loss

    Auditory feedback, the hearing of one’s own voice, plays an important role in the detection of speech errors and the regulation of speech production. The limited auditory cues available with a hearing loss can reduce the ability of individuals with hearing loss to use their auditory feedback. Hearing aids are common assistive devices that amplify otherwise inaudible sounds. Hearing aids can also change auditory feedback through digital signal processing, such as frequency lowering. Frequency lowering moves high-frequency information of an incoming auditory stimulus into a lower-frequency region where audibility may be better. This can change how speech sounds are perceived; for example, the high-frequency information of /s/ is moved closer to the lower-frequency area of /ʃ/. In addition, real-time signal processing in a laboratory setting can manipulate various aspects of speech cues, such as intensity and vowel formants. These changes in auditory feedback may result in changes in speech production, as the speech motor control system may perceive these perturbations as speech errors. A series of experiments was carried out to examine changes in speech production as a result of perturbations in the auditory feedback of individuals with normal hearing and hearing loss. Intensity and vowel formant perturbations were conducted using real-time signal processing in the laboratory. Changes in speech production were also measured using auditory feedback that was processed with frequency lowering technology in hearing aids. Acoustic characteristics of the intensity of vowels, sibilant fricatives, and first and second formants were analyzed. The results showed that the speech motor control system is sensitive to changes in auditory feedback, because perturbations in auditory feedback can result in changes in speech production. However, speech production is not completely controlled by auditory feedback; other feedback systems, such as the somatosensory system, are also involved. An impairment of the auditory system can reduce the ability of the speech motor control system to use auditory feedback in the detection of speech errors, even when aided with hearing aids. The effects of frequency lowering in hearing aids on speech production depend on the parameters used and on acclimatization time.
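    Frequency lowering as described above can be approximated offline with an STFT-based frequency-compression sketch: spectral energy above a cutoff is remapped toward lower bins, so that, for example, /s/ energy moves toward the /ʃ/ region. This is a simplified stand-in for proprietary hearing-aid algorithms; the cutoff frequency and compression ratio below are illustrative assumptions.

    ```python
    # Simplified STFT-based frequency compression (illustrative, not a
    # hearing-aid vendor's algorithm): magnitude above `cutoff_hz` is
    # remapped to lower bins by `ratio`.
    import numpy as np
    from scipy.signal import stft, istft

    def frequency_lower(x, fs, cutoff_hz=2000.0, ratio=2.0, nfft=512):
        f, t, X = stft(x, fs=fs, nperseg=nfft)
        Y = np.zeros_like(X)
        k_cut = np.searchsorted(f, cutoff_hz)
        Y[:k_cut] = X[:k_cut]                  # pass-through below cutoff
        for k in range(k_cut, len(f)):         # compress bins above cutoff
            k_new = k_cut + int((k - k_cut) / ratio)
            Y[k_new] += X[k]                   # fold energy downward
        _, y = istft(Y, fs=fs, nperseg=nfft)
        return y
    ```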

    Speech with pauses sounds deceptive to listeners with and without hearing impairment

    Purpose: Communication is as much persuasion as it is the transfer of information. This creates a tension between the interests of the speaker and those of the listener, as dishonest speakers naturally attempt to hide deceptive speech and listeners are faced with the challenge of sorting truths from lies. Hearing-impaired listeners in particular may have differing levels of access to the acoustical cues that give away deceptive speech. A greater tendency towards speech pauses has been hypothesised to result from the cognitive demands of lying convincingly; higher vocal pitch has also been hypothesised to mark the increased anxiety of a dishonest speaker. Method: Listeners with or without hearing impairments heard short utterances from natural conversations, some of which had been digitally manipulated to contain either increased pausing or raised vocal pitch. Listeners were asked to guess whether each statement was a lie in a two-alternative forced-choice task. Participants were also asked explicitly which cues they believed had influenced their decisions. Results: Statements were more likely to be perceived as a lie when they contained pauses, but not when vocal pitch was raised. This pattern held regardless of hearing ability. In contrast, both groups of listeners self-reported using vocal pitch cues to identify deceptive statements, though at lower rates than pauses. Conclusions: Listeners may have only partial awareness of the cues that influence their impression of dishonesty. Hearing-impaired listeners may place greater weight on acoustical cues according to the differing degrees of access provided by hearing aids.
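    The two manipulations, inserting pauses and raising vocal pitch, can be reproduced offline with standard audio tooling. A minimal sketch, assuming librosa is available; the pause position, pause length, pitch shift, and file name are illustrative assumptions, since the study's exact manipulation parameters are not given in the abstract.

    ```python
    # Two illustrative stimulus manipulations: inserting a silent pause
    # and raising vocal pitch. All parameter values are assumptions for
    # demonstration, not the study's settings.
    import numpy as np
    import librosa

    def insert_pause(y, sr, at_s=0.8, pause_s=0.5):
        """Splice `pause_s` seconds of silence into `y` at time `at_s`."""
        i = int(at_s * sr)
        silence = np.zeros(int(pause_s * sr), dtype=y.dtype)
        return np.concatenate([y[:i], silence, y[i:]])

    def raise_pitch(y, sr, semitones=2.0):
        """Shift pitch up without changing duration (phase-vocoder based)."""
        return librosa.effects.pitch_shift(y, sr=sr, n_steps=semitones)

    y, sr = librosa.load("utterance.wav", sr=None)  # hypothetical file
    paused = insert_pause(y, sr)
    higher = raise_pitch(y, sr)
    ```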

    Pilot Study Validation of hEAR Mobile Hearing Screening Application in the General Population

    The purpose of this pilot study was to test the quality of data collected by a mobile hearing screening application (hEAR) against the gold standard of pure-tone audiometry administered by a certified audiologist. hEAR used seven preset frequencies (125 Hz, 250 Hz, 500 Hz, 1000 Hz, 2000 Hz, 4000 Hz, and 8000 Hz), which were the independent variables, and recorded measurements as sound pressure levels in decibels (dB) over three trials. In total, 30 subjects were recruited from the general population at Texas A&M University. Subjects were randomly assigned, with counterbalancing, to a “quiet” room and a “noisy” room. Subjects used hEAR to self-administer hearing screening tests and also had hearing screening examinations performed by a certified audiologist at the same preset frequencies. Data were analyzed using a mixed-effects model with repeated measures at 95% confidence intervals, and results were separated by room. The hEAR trials differed from the audiologist trial at almost all frequencies in the noisy environment, but only at 2000 Hz and 8000 Hz in the quiet environment. The three app trials were statistically similar to one another in the noisy environment, while in the quiet environment they differed from one another at almost all frequencies except 125 Hz. Further research is needed to develop hEAR into an effective alternative to an audiologist-administered pure-tone hearing test, which could in turn support better compliance with OSHA’s hearing screening requirements.
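    The repeated-measures analysis described above maps naturally onto a mixed-effects model with a random intercept per subject. A minimal sketch using statsmodels; the column names (`threshold_db`, `frequency_hz`, `method`, `room`, `subject`) and data file are hypothetical, since the study's actual data layout is not given.

    ```python
    # Mixed-effects model with a random intercept per subject, echoing the
    # repeated-measures analysis in the abstract. Column names and the
    # data file are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("hear_screening.csv")  # hypothetical data file

    # Fit separately by room, as results were separated by room.
    for room, sub in df.groupby("room"):
        model = smf.mixedlm("threshold_db ~ C(frequency_hz) * C(method)",
                            data=sub, groups=sub["subject"])
        result = model.fit()
        print(room)
        print(result.summary())
    ```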

    Effect of the number of amplitude-compression channels and compression speed on speech recognition by listeners with mild to moderate sensorineural hearing loss

    The use of a large number of amplitude-compression channels in hearing aids has potential advantages, such as the ability to compensate for variations in loudness recruitment across frequency and to provide appropriate frequency-response shaping. However, sound quality and speech intelligibility could be adversely affected by the reduction of spectro-temporal contrast and by distortion, especially when fast-acting compression is used. This study assessed the effect of the number of channels and compression speed on speech recognition when the multichannel processing was used solely to implement amplitude compression, and not for frequency-response shaping. Computer-simulated hearing aids were used. The frequency-dependent insertion gains for speech with a level of 65 dB sound pressure level were applied using a single filter before the signal was filtered into compression channels. Fast-acting (attack, 10 ms; release, 100 ms) or slow-acting (attack, 50 ms; release, 3000 ms) compression using 3, 6, 12, and 22 channels was applied subsequently. Twenty adults with sensorineural hearing loss were tested using a sentence recognition task with speech in two- and eight-talker babble at three different signal-to-babble ratios (SBRs). The number of channels and compression speed had no significant effect on speech recognition, regardless of babble type or SBR. This work was supported by the H. B. Allen Trust and the Engineering and Physical Sciences Research Council (UK; Grant No. RG78536). M.A.S. was co-funded by the National Institute for Health Research Manchester Biomedical Research Centre and Trust Charitable funds of the Central Manchester University Hospitals National Health Service Foundation Trust.
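    The fast- and slow-acting conditions can be illustrated with a one-pole attack/release envelope follower driving a gain computer. A minimal single-channel sketch using the study's fast-acting time constants; the threshold and compression ratio are illustrative assumptions. In the multichannel conditions, the signal would first be band-split into 3, 6, 12, or 22 channels and each band compressed independently in this way before summation.

    ```python
    # One-channel dynamic range compressor with attack/release smoothing,
    # illustrating the fast-acting condition (attack 10 ms, release 100 ms).
    # Threshold and ratio are illustrative assumptions. `x` is float audio.
    import numpy as np

    def compress(x, fs, attack_ms=10.0, release_ms=100.0,
                 threshold_db=-40.0, ratio=3.0):
        a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
        a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
        env_db = -100.0                      # smoothed level estimate in dB
        y = np.empty_like(x)
        for n, s in enumerate(x):
            level_db = 20 * np.log10(abs(s) + 1e-9)
            a = a_att if level_db > env_db else a_rel
            env_db = a * env_db + (1 - a) * level_db
            over = max(env_db - threshold_db, 0.0)
            gain_db = -over * (1 - 1 / ratio)    # static compression curve
            y[n] = s * 10 ** (gain_db / 20)
        return y
    ```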

    TBI-Apps.com: Teaching Caregivers How to Use Mobile Applications as Compensatory Cognitive Aids for Traumatic Brain Injury

    Long-term cognitive deficits resulting from traumatic brain injury (TBI) can profoundly impact a person’s role competence and ability to perform daily activities (AOTA, 2014a). Mobile technologies, including smartphones and tablets, have shown potential as effective compensatory aids for memory and executive functioning in individuals with TBI (Waite, 2012). A website was created to provide caregivers with tools to independently select, program, and use Apple iOS devices with TBI survivors. The website featured five tutorials for iOS applications, one tutorial for an iOS accessibility feature, and tips for teaching application use to individuals with TBI. It also included general information on the effects of TBI and ways iOS devices might be adapted for TBI survivors. The website was piloted with five people to assess its effectiveness. Pilot users completed a quiz on website content and provided feedback and suggestions for expansion. Resources that encourage using everyday technology to improve the match between a person’s abilities, the environment, and occupational demands may help individuals with TBI increase occupational engagement and performance.

    Prediction of perceptual audio reproduction characteristics


    Shaping the auditory peripersonal space with motor planning in immersive virtual reality

    Immersive audio technologies require personalized binaural synthesis through headphones to provide perceptually plausible virtual and augmented reality (VR/AR) simulations. We introduce and apply, for the first time in VR contexts, the quantitative measure called premotor reaction time (pmRT) for characterizing sonic interactions between humans and the technology through motor planning. In the proposed basic virtual acoustic scenario, listeners are asked to react to a virtual sound approaching from different directions and stopping at different distances within their peripersonal space (PPS). PPS is highly sensitive to embodied and environmentally situated interactions, anticipating the activation of the motor system in prompt preparation for action. Since immersive VR applications benefit from spatial interactions, modeling the PPS around the listeners is crucial to revealing individual behaviors and performances. Our methodology, centered on the pmRT, provides a compact description and approximation of the spatiotemporal PPS processing and boundaries around the head by replicating several well-known neurophysiological phenomena related to PPS, such as auditory asymmetry, front/back calibration and confusion, and ellipsoidal action fields.
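    One generic way to model a PPS boundary from such data is to fit a sigmoid to reaction time as a function of the distance at which the sound stops, taking the inflection point as the boundary estimate. This is a modeling sketch with made-up example values, not the authors' published analysis.

    ```python
    # Estimate a peripersonal-space boundary by fitting a sigmoid to
    # reaction time vs. sound-stopping distance; the inflection point x0
    # serves as the boundary estimate. Generic sketch with example data,
    # not the authors' analysis.
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(d, rt_far, drop, x0, slope):
        """RT speeds up as sounds stop closer to the head (d in meters)."""
        return rt_far - drop / (1 + np.exp((d - x0) / slope))

    distances = np.array([0.2, 0.5, 1.0, 1.5, 2.0, 3.0])   # meters (example)
    mean_rt = np.array([310, 325, 355, 380, 392, 398])     # ms (example)

    params, _ = curve_fit(sigmoid, distances, mean_rt,
                          p0=[400.0, 80.0, 1.0, 0.3])
    print(f"Estimated PPS boundary: {params[2]:.2f} m")
    ```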

    Does knowing speaker sex facilitate vowel recognition at short durations?

    A man, a woman, and a child saying the same vowel do so with very different voices. The auditory system solves the complex problem of extracting what the man, woman, or child has said despite substantial differences in the acoustic properties of their voices. Much of the acoustic variation between the voices of men and women is due to differences in the underlying anatomical mechanisms for producing speech. If the auditory system knew the sex of the speaker, it could potentially correct for speaker-sex-related acoustic variation, thus facilitating vowel recognition. This study measured the minimum stimulus duration necessary to accurately discriminate whether a brief vowel segment was spoken by a man or a woman, and the minimum stimulus duration necessary to accurately recognise which vowel was spoken. Results showed that reliable vowel recognition precedes reliable speaker-sex discrimination, thus questioning the use of speaker-sex information in compensating for speaker-sex-related acoustic variation in the voice. Furthermore, the pattern of performance across experiments in which the fundamental frequency and formant frequency information of speakers’ voices were systematically varied was markedly different depending on whether the task was speaker-sex discrimination or vowel recognition. This argues for there being little relationship between perception of speaker sex (indexical information) and perception of what has been said (linguistic information) at short durations.
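    Measuring a minimum usable duration is typically done by gating: truncating each token at increasing durations with a short offset ramp to avoid clicks, then finding the duration at which identification reaches criterion. A minimal stimulus-preparation sketch; the ramp length and gate durations are illustrative assumptions, not the study's values.

    ```python
    # Gate a vowel token to a target duration with a raised-cosine offset
    # ramp (avoids spectral splatter from abrupt truncation). Ramp length
    # and gate durations are illustrative assumptions.
    import numpy as np

    def gate(y, fs, duration_ms, ramp_ms=5.0):
        n = int(fs * duration_ms / 1000.0)
        seg = y[:n].astype(float)
        n_ramp = min(int(fs * ramp_ms / 1000.0), n)
        ramp = 0.5 * (1 + np.cos(np.linspace(0, np.pi, n_ramp)))
        seg[-n_ramp:] *= ramp                 # fade out the final samples
        return seg

    # Example: gated versions of one token from 10 ms up to 80 ms.
    # gated = {d: gate(vowel, fs, d) for d in (10, 20, 40, 80)}
    ```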