
    Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder

    OBJECTIVES: Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training.

    DESIGN: This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month, 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediately postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the listening in spatialized noise test, which assesses sentence perception in various configurations of masking speech, and in which the target speakers and test materials were unrelated to the training materials; (2) the Children's Auditory Performance Scale, which assesses listening skills, completed by the children's teachers; and (3) the Clinical Evaluation of Language Fundamentals-4 pragmatic profile, which assesses pragmatic language use, completed by parents.

    RESULTS: All outcome measures improved significantly at immediate postintervention in the intervention group only, with effect sizes ranging from 0.76 to 1.7. Improvements in speech-in-noise performance correlated with improved scores on the Children's Auditory Performance Scale questionnaire in the trained group only. Baseline language and cognitive assessments did not predict better training outcomes. Improvements in speech-in-noise performance were sustained 3 months postintervention.

    CONCLUSIONS: Broad speech-based auditory training led to improved auditory processing skills, as reflected in speech-in-noise test performance, and to better functional listening in real life. The observed correlation between improved functional listening and improved speech-in-noise perception in the trained group suggests that improved listening was a direct generalization of the auditory training.
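
    The effect sizes quoted above (0.76 to 1.7) are standardized mean differences. Below is a minimal sketch of how such a value can be computed for paired pre/post scores using Cohen's d; the SRT values are hypothetical, not data from the study.

```python
import numpy as np

def cohens_d_paired(pre, post):
    """Cohen's d for paired scores: mean change divided by the
    standard deviation of the difference scores."""
    diff = np.asarray(post) - np.asarray(pre)
    return diff.mean() / diff.std(ddof=1)

# Hypothetical SRTs in dB (lower is better) for five trained children.
pre_srt = np.array([2.1, 3.4, 1.8, 4.0, 2.9])
post_srt = np.array([1.9, 2.2, 0.9, 4.1, 2.1])

# Order the arguments so a positive d reflects improvement (lower SRT).
print(f"d = {cohens_d_paired(post_srt, pre_srt):.2f}")  # ~1.1
```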

    The factors affecting the psychometric function for speech intelligibility

    Older listeners often report difficulties understanding speech in noisy environments. Increasing the level of the speech relative to the background - e.g. by way of a hearing aid - usually leads to an increase in intelligibility. The amount of perceptual benefit that can be gained from a given improvement in signal-to-noise ratio (SNR), however, is not fixed: it instead depends entirely on the slope of the psychometric function. The shallower the slope, the less benefit the listener will receive. The aim of the research presented in this thesis was to better understand the factors which lead to shallow slopes. A systematic survey of published psychometric functions considered the factors which affect slope. Speech maskers, modulated-noise maskers, and target/masker confusability were all found to contribute to shallow slopes. Experiment 1 examined the role of target/masker confusion by manipulating masker intelligibility. Intelligible maskers were found to give shallower slopes than unintelligible ones but subsequent acoustic analysis demonstrated that modulation differences between the maskers were responsible for this effect. This was supported by the fact that the effect was seen at low SNRs. Experiment 2 confirmed that the effects of modulation and target/masker confusion occur at different SNRs. Experiments 3 and 4 demonstrated that directing attention to the target speech could "undo" the effects of target/masker confusion. In Experiments 5 and 6 a new method was developed to study whether slope effects are relevant to "real-world" situations. The results suggested that using continuous speech targets gave shallower slopes than standard speech-in-noise tests. There was little evidence found to suggest that shallow slopes are exacerbated for older or hearing-impaired listeners. It is concluded that in the complex demands of everyday listening environments the perceptual benefit received from a given gain in SNR may be considerably less than would be predicted by standard speech-in-noise paradigms.
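
    To make the slope argument concrete, here is a minimal sketch assuming a logistic psychometric function (a common choice for illustration, not necessarily the exact form fitted in the thesis). The same 3 dB SNR improvement yields far less intelligibility gain when the slope is shallow.

```python
import numpy as np

def intelligibility(snr_db, srt_db, slope):
    """Logistic psychometric function. `slope` is the gradient
    (proportion correct per dB) at the SRT, where performance = 0.5."""
    return 1.0 / (1.0 + np.exp(-4.0 * slope * (snr_db - srt_db)))

# Benefit of the same 3 dB SNR improvement, starting at the SRT (0 dB),
# for a steep versus a shallow function.
for s in (0.10, 0.03):
    gain = intelligibility(3.0, 0.0, s) - intelligibility(0.0, 0.0, s)
    print(f"slope {s:.2f}/dB: +{gain:.2f} proportion correct")
# steep: +0.27, shallow: +0.09; the same hearing-aid gain helps far less.
```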

    The Effect of Visual Perceptual Load on Auditory Processing

    Many fundamental aspects of auditory processing occur even when we are not attending to the auditory environment. This has led to a popular belief that auditory signals are analysed in a largely pre-attentive manner, allowing hearing to serve as an early warning system. However, models of attention highlight that even processes that occur by default may rely on access to perceptual resources, and so can fail in situations when demand on sensory systems is particularly high. If this is the case for auditory processing, the classic paradigms employed in auditory attention research are not sufficient to distinguish between a process that is truly automatic (i.e., will occur regardless of any competing demands on sensory processing) and one that occurs passively (i.e., without explicit intent) but is dependent on resource availability. An approach that explicitly addresses whether an aspect of auditory analysis is contingent on access to capacity-limited resources is to control the resources available to the process; this can be achieved by actively engaging attention in a different task that depletes perceptual capacity to a greater or lesser extent. If the critical auditory process is affected by manipulating the perceptual demands of the attended task, this suggests that it is subject to the availability of processing resources; in contrast, a process that is automatic should not be affected by the level of load in the attended task. This approach has been firmly established within vision, but has been used relatively little to explore auditory processing. In the experiments presented in this thesis, I use MEG, pupillometry and behavioural dual-task designs to explore how auditory processing is impacted by visual perceptual load. The MEG data presented illustrate that both the overall amplitude of auditory responses and the computational capacity of the auditory system are affected by the degree of perceptual load in a concurrent visual task. These effects are mirrored by the pupillometry data, in which pupil dilation is found to reflect both the degree of load in the attended visual task (with larger pupil dilation in the high compared to the low visual load task) and the sensory processing of irrelevant auditory signals (with reduced dilation to sounds under high versus low visual load). The data highlight that previous assumptions that auditory processing can occur automatically may be too simplistic; in fact, though many aspects of auditory processing occur passively and benefit from the allocation of spare capacity, they are not strictly automatic. Moreover, the data indicate that the impact of visual load can be seen even on the early sensory cortical responses to sound, suggesting not only that cortical processing of auditory signals is dependent on the availability of resources, but also that these resources are part of a global pool shared between vision and audition.

    Individual differences in supra-threshold auditory perception: mechanisms and objective correlates

    Thesis (Ph.D.)--Boston University
    To extract content and meaning from a single source of sound in a quiet background, the auditory system can use a small subset of a very redundant set of spectral and temporal features. In stark contrast, communication in a complex, crowded scene places enormous demands on the auditory system. Spectrotemporal overlap between sounds reduces modulations in the signals at the ears and causes masking, with problems exacerbated by reverberation. Consistent with this idea, many patients seeking audiological treatment seek help precisely because they notice difficulties in environments requiring auditory selective attention. In the laboratory, even listeners with normal hearing thresholds exhibit vast differences in the ability to selectively attend to a target. Understanding the mechanisms causing these supra-threshold differences, the focus of this thesis, may enable research that leads to advances in treating communication disorders that affect an estimated one in five Americans. Converging evidence from human and animal studies points to one potential source of these individual differences: differences in the fidelity with which supra-threshold sound is encoded in the early portions of the auditory pathway. Electrophysiological measures of sound encoding by the auditory brainstem in humans and animals support the idea that the temporal precision of the early auditory neural representation can be poor even when hearing thresholds are normal. Concomitantly, animal studies show that noise exposure and early aging can cause a loss (cochlear neuropathy) of a large percentage of the afferent population of auditory nerve fibers innervating the cochlear hair cells without any significant change in measured audiograms. Using behavioral, otoacoustic and electrophysiological measures in conjunction with computational models of sound processing by the auditory periphery and brainstem, a detailed examination of temporal coding of supra-threshold sound is carried out, focusing on characterizing and understanding individual differences in listeners with normal hearing thresholds and normal cochlear mechanical function. Results support the hypothesis that cochlear neuropathy may reduce encoding precision of supra-threshold sound, and that this manifests as deficits both behaviorally and in subcortical electrophysiological measures in humans. Based on these results, electrophysiological measures are developed that may yield sensitive, fast, objective measures of supra-threshold coding deficits that arise as a result of cochlear neuropathy.

    Contributions of visual speech, visual distractors, and cognition to speech perception in noise for younger and older adults

    Older adults report that understanding speech in noisy situations (e.g., a restaurant) is difficult. Repeated experiences of frustration in noisy situations may cause older adults to withdraw socially, increasing their susceptibility to mental and physical illness. Understanding the factors that contribute to older adults' difficulty in noise, and in turn, what might be able to alleviate this difficulty, is therefore an important area of research. The experiments in this thesis investigated how sensory and cognitive factors, in particular attention, affect older and younger adults' ability to understand speech in noise. First, the performance of older as well as younger adults on a standardised speech perception in noise task and on a series of cognitive and hearing tasks was assessed. A correlational analysis indicated that there was no reliable association between pure-tone audiometry and speech perception in noise performance, but that there was some evidence of an association between auditory attention and speech perception in noise performance for older adults. Next, a series of experiments was conducted to investigate the role of attention in gaining a visual speech benefit in noise. These auditory-visual experiments were largely motivated by the idea that, as the visual speech benefit is the largest benefit available to listeners in noisy situations, any reduction in this benefit, particularly for older adults, could exacerbate difficulties understanding speech in noise. The first auditory-visual experiments tested whether increasing the number of visual distractors displayed affected the visual speech benefit in noise for younger and older adults when the SNR was -6 dB (Experiment 1) and when the SNR was -1 dB (Experiment 2). For both SNRs, the magnitude of older adults' visual speech benefit reduced by approximately 50% each time an additional visual distractor was presented. Younger adults showed the same pattern when the SNR was -6 dB but, unlike older adults, were able to get a full visual speech benefit when one distractor was presented and the SNR was -1 dB. As discussed in Chapter 3, a possible interpretation of these results is that combining auditory and visual speech requires attentional resources. To follow up the finding that visual distractors had a detrimental impact on the visual speech benefit, particularly for older adults, the experiment in Chapter 4 tested whether presenting a salient visual cue that indicated the location of the target talker would help older adults get a visual speech benefit. The results showed that older adults did not benefit from the cue, whereas younger adults did. As older adults should have had sufficient time to switch their gaze and/or attention to the location of the target talker, the failure to find a cueing effect suggests that age-related declines in inhibition likely affected older adults' ability to ignore the visual distractor. The final experiment tested whether the visual speech benefit and the visual distraction effect found for older adults in Chapter 4 transferred to a conversation-comprehension-style task (i.e., the Question-and-Answer Task). The results showed that younger and older adults' performance improved in an auditory-visual condition in comparison to an auditory-only condition, and that this benefit did not reduce when a visual distractor was presented. To explain the absence of a distraction effect, several properties of the visual distractor presented were discussed.
    Together, the experiments in this thesis suggest that the roles of attention and visual distraction should be considered when trying to understand the communication difficulties that older adults experience in noisy situations.
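
    As a toy illustration of the halving pattern reported for Experiments 1 and 2: the 20% baseline benefit below is an assumed figure for the sketch, not one reported in the thesis.

```python
# Toy illustration: the visual speech benefit roughly halves with
# each additional visual distractor, as reported for older adults.
base_benefit = 0.20  # assumed benefit with no distractors (hypothetical)
for n_distractors in range(4):
    benefit = base_benefit * 0.5 ** n_distractors
    print(f"{n_distractors} distractor(s): ~{benefit:.0%} benefit")
```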

    Auditory Displays and Assistive Technologies: the use of head movements by visually impaired individuals and their implementation in binaural interfaces

    Visually impaired people rely upon audition for a variety of purposes, among these the use of sound to identify the position of objects in their surrounding environment. This is not limited to localising sound-emitting objects, but extends to obstacles and environmental boundaries, thanks to their ability to extract information from reverberation and sound reflections, all of which can contribute to effective and safe navigation, as well as serving a function in certain assistive technologies thanks to the advent of binaural auditory virtual reality. It is known that head movements in the presence of sound elicit changes in the acoustical signals which arrive at each ear, and these changes can reduce common auditory localisation problems in headphone-based auditory virtual reality, such as front-to-back reversals. The goal of the work presented here is to investigate whether the visually impaired naturally engage head movement to facilitate auditory perception, and to what extent this may be applicable to the design of virtual auditory assistive technology. Three novel experiments are presented: a field study of head movement behaviour during navigation; a questionnaire assessing the self-reported use of head movement in auditory perception by visually impaired individuals (each comparing visually impaired and sighted participants); and an acoustical analysis of interaural differences and cross-correlations as a function of head angle and sound source distance. It is found that visually impaired people self-report using head movement for auditory distance perception. This is supported by the head movements observed during the field study, whilst the acoustical analysis showed that interaural correlations for sound sources within 5 m of the listener were reduced as head angle or distance to the sound source increased, and that interaural differences and correlations in reflected sound were generally lower than those of direct sound. Subsequently, relevant guidelines for designers of assistive auditory virtual reality are proposed.
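
    For readers unfamiliar with the measure, the interaural correlations described can be estimated as the peak of a normalized cross-correlation between the two ear signals. The sketch below is one standard way to compute this, not the thesis's exact analysis pipeline.

```python
import numpy as np

def interaural_correlation(left, right, fs, max_lag_ms=1.0):
    """Peak of the normalized cross-correlation between the two ear
    signals, searched over plausible interaural lags (about +/-1 ms)."""
    max_lag = int(fs * max_lag_ms / 1000)
    left = (left - left.mean()) / (left.std() * len(left))
    right = (right - right.mean()) / right.std()
    corr = np.correlate(left, right, mode="full")
    mid = len(corr) // 2  # zero-lag index for equal-length inputs
    return corr[mid - max_lag : mid + max_lag + 1].max()

# Usage: white noise with a pure 0.5 ms interaural time difference
# should remain almost perfectly correlated within the lag window.
fs = 48000
noise = np.random.randn(fs)
shift = int(0.0005 * fs)  # 24 samples at 48 kHz
print(interaural_correlation(noise[shift:], noise[:-shift], fs))  # ~1.0
```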

    The genetic contribution to solving the cocktail-party problem

    Communicating in everyday situations requires solving the cocktail-party problem: segregating the acoustic mixture into its constituent sounds and attending to those of most interest. Humans show dramatic variation in this ability, leading some to experience real-world problems irrespective of whether they meet criteria for clinical hearing loss. Here, we estimated the genetic contribution to cocktail-party listening by measuring speech-reception thresholds (SRTs) in 425 people from large families, ranging in age from 18 to 91 years. Roughly half the variance of SRTs was explained by genes (h² = 0.567). The genetic correlation between SRTs and hearing thresholds (HTs) was medium (ρG = 0.392), suggesting that the genetic factors influencing cocktail-party listening were partially distinct from those influencing sound sensitivity. Aging and socioeconomic status also strongly influenced SRTs. These findings may represent a first step toward identifying genes for hidden hearing loss, or hearing problems in people with normal HTs.
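
    For context, the reported h² can be read through the standard variance decomposition Var(P) = Var(G) + Var(E). The sketch below simply applies the abstract's estimate to a total variance normalized to 1, which is an illustrative convention rather than a value from the paper.

```python
# Reading the reported estimate with the standard decomposition
# Var(P) = Var(G) + Var(E); the unit total variance is illustrative.
h2 = 0.567            # heritability of speech-reception thresholds
var_total = 1.0       # total SRT variance, normalized
var_genetic = h2 * var_total
var_env = var_total - var_genetic
print(f"genetic: {var_genetic:.3f}  environment + error: {var_env:.3f}")
```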