Auditory verbal working memory as a predictor of speech perception in modulated maskers in normal-hearing listeners
Purpose: Background noise can interfere with our ability to understand speech. Working memory capacity (WMC) has been shown to contribute to the perception of speech in modulated noise maskers. WMC has been assessed with a variety of auditory and visual tests, often pertaining to different components of working memory. This study assessed the relationship between speech perception in modulated maskers and components of auditory verbal working memory (AVWM) over a range of signal-to-noise ratios. Method: Speech perception in noise and AVWM were measured in 30 listeners (age range 31–67 years) with normal hearing. AVWM was estimated using forward digit recall, backward digit recall, and non-word repetition. Results: After controlling for the effects of age and average pure-tone hearing threshold, speech perception in modulated maskers was related to individual differences in the phonological component of working memory (as assessed by non-word repetition), but only in the least favorable SNR. The executive component of working memory (as assessed by backward digit recall) was not predictive of speech perception in any condition. Conclusions: AVWM is predictive of the ability to benefit from temporal dips in modulated maskers: Listeners with greater phonological WMC are better able to correctly identify sentences in modulated noise backgrounds.
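The analysis described above (predictors of interest evaluated after controlling for age and pure-tone threshold) follows the logic of hierarchical regression: fit the control variables first, then check how much variance a new predictor adds. A minimal numpy sketch with synthetic data; the variable names, effect sizes, and sample values are illustrative, not the study's actual data:

```python
import numpy as np

def r_squared(X, y):
    """In-sample R^2 of an ordinary least-squares fit (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Synthetic stand-ins for the study's variables (n = 30 listeners).
rng = np.random.default_rng(0)
n = 30
age = rng.uniform(31, 67, n)
pta = rng.normal(10, 5, n)        # average pure-tone threshold (dB HL)
nonword = rng.normal(0, 1, n)     # non-word repetition score
speech = 0.5 * nonword - 0.02 * age + rng.normal(0, 0.5, n)

# Step 1: control variables only; step 2: add the predictor of interest.
r2_controls = r_squared(np.column_stack([age, pta]), speech)
r2_full = r_squared(np.column_stack([age, pta, nonword]), speech)
print(f"R^2 change when adding non-word repetition: {r2_full - r2_controls:.3f}")
```

The R² change quantifies the predictor's contribution beyond the controls, which is the quantity the significance test in such an analysis evaluates.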
DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool
Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to the simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low-level C++ functions from Python. Users interact with data via an intuitive interface that uses Python to bind wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE™ and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and ease of use of Python allow rapid integration of additional format support and user development. DV3D has been tested on Mac OS X, RedHat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data.
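Supporting the NIfTI-1 format, as DV3D does, means parsing the format's fixed 348-byte header. A minimal, self-contained sketch (not DV3D's actual code) that builds a synthetic header and parses it using only the field offsets documented in the NIfTI-1 specification:

```python
import struct

# Build a minimal 348-byte NIfTI-1 header for a hypothetical 64x64x30 volume.
hdr = bytearray(348)
struct.pack_into('<i', hdr, 0, 348)                          # sizeof_hdr must be 348
struct.pack_into('<8h', hdr, 40, 3, 64, 64, 30, 1, 1, 1, 1)  # dim[0..7]: ndim, then sizes
struct.pack_into('<h', hdr, 70, 16)                          # datatype code 16 = float32
hdr[344:348] = b'n+1\x00'                                    # magic: data in same file

def parse_nifti1_header(raw):
    """Return (dims, datatype_code) from a raw NIfTI-1 header, validating
    the header size field and the magic string."""
    (sizeof_hdr,) = struct.unpack_from('<i', raw, 0)
    if sizeof_hdr != 348 or raw[344:347] not in (b'n+1', b'ni1'):
        raise ValueError('not a NIfTI-1 header')
    dim = struct.unpack_from('<8h', raw, 40)
    (datatype,) = struct.unpack_from('<h', raw, 70)
    return dim[1:1 + dim[0]], datatype

dims, dtype_code = parse_nifti1_header(bytes(hdr))
print(dims, dtype_code)   # (64, 64, 30) 16
```

Real readers must additionally honor the affine/orientation fields and scaling slopes, but the size-check-then-unpack pattern above is the core of format detection.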
Litter Use in an Aviary Laying Hen Housing System
Litter use by hens was investigated by recording the number of hens moving to and from the litter in an aviary housing system. Findings showed a difference in litter use between different times and pens. These findings are being contributed as one component of a comprehensive assessment of an aviary laying hen housing system.
Improving the provision of hearing care to long-term care home residents with dementia: developing a behaviour change intervention for care staff
Context: Hearing loss disproportionately affects long-term care home (LTCH) residents with dementia, impacting their quality of life. Most residents with dementia rely on LTCH staff to provide hearing care. However, previous research shows provision is inconsistent. The Behaviour Change Wheel (BCW) can be used for developing behaviour-change interventions. Objective: To describe the structured, multistage development of an intervention to help LTCH staff provide hearing care to residents with dementia. Method: Using results from qualitative and quantitative studies and patient and public involvement sessions, we outlined problems associated with hearing care and determined the changes that should be made using the Capability, Opportunity, Motivation–Behaviour (COM-B) model. We then selected and specified five target behaviours for intervention, and identified relevant intervention functions, behaviour change techniques (BCTs), and modes of delivery. Findings: The multi-component intervention is designed to boost the psychological capability, reflective motivation, and physical opportunity of care assistants. The intervention functions deemed most appropriate were education, modelling, incentivisation, and environmental restructuring, alongside several specific BCTs. Limitations: Some of the larger-scale issues relating to hearing care, such as collaborations between LTCHs and audiology services and the costs of hearing devices, could not be addressed in this intervention. Conclusions: This study is the first to use the BCW to develop an intervention targeting staff's provision of hearing care to LTCH residents with dementia. This intervention addresses the wide-ranging barriers that staff experience when providing hearing care. Trialling this intervention will provide insight into its effectiveness and acceptability for residents and staff.
Binaural summation of amplitude modulation involves weak interaural suppression
The brain combines sounds from the two ears, but what is the algorithm used to achieve this summation of signals? Here we combine psychophysical amplitude modulation discrimination and steady-state electroencephalography (EEG) data to investigate the architecture of binaural combination for amplitude-modulated tones. Discrimination thresholds followed a 'dipper' shaped function of pedestal modulation depth, and were consistently lower for binaural than monaural presentation of modulated tones. The EEG responses were greater for binaural than monaural presentation of modulated tones, and when a masker was presented to one ear, it produced only weak suppression of the response to a signal presented to the other ear. Both data sets were well fit by a computational model originally derived for visual signal combination, but with suppression between the two channels (ears) being much weaker than in binocular vision. We suggest that the distinct ecological constraints on vision and hearing can explain this difference, if it is assumed that the brain avoids over-representing sensory signals originating from a single object. These findings position our understanding of binaural summation in a broader context of work on sensory signal combination in the brain, and delineate the similarities and differences between vision and hearing.
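The vision-derived signal-combination model referred to above belongs to a family of two-stage gain-control models: each channel's excitation is divisively suppressed by a pool that includes the other channel, the two channel outputs are summed, and a late nonlinearity is applied. A minimal sketch of that architecture; the exact equation form and all parameter values here are illustrative assumptions, not the paper's fitted model:

```python
def channel(own, other, m=1.3, S=1.0, w=0.1):
    """One ear's output: excitation raised to exponent m, divided by a gain
    pool containing its own input plus weak cross-ear suppression (weight w).
    A small w corresponds to the weak interaural suppression reported."""
    return own**m / (S + own + w * other)

def binaural_response(left, right, p=8.0, q=6.5, Z=0.01):
    """Sum the two channel outputs, then apply a late expansive/compressive
    nonlinearity of the kind used in binocular-combination models."""
    summed = channel(left, right) + channel(right, left)
    return summed**p / (Z + summed**q)
```

With weak cross-channel suppression, presenting the same modulation depth to both ears yields a larger response than presenting it to one ear alone, mirroring the binaural advantage in both the thresholds and the EEG amplitudes described above.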
Divergent effects of listening demands and evaluative threat on listening effort in online and laboratory settings
Objective: Listening effort (LE) varies as a function of listening demands, motivation and resource availability, among other things. Motivation is posited to have a greater influence on listening effort under high, compared to low, listening demands. Methods: To test this prediction, we manipulated the listening demands of a speech recognition task using tone vocoders to create moderate and high listening demand conditions. We manipulated motivation using evaluative threat, i.e., informing participants that they must reach a particular “score” for their results to be usable. Resource availability was assessed by means of working memory span and included as a fixed-effects predictor. Outcome measures were indices of LE, including reaction times (RTs), self-rated work and self-rated tiredness, in addition to task performance (correct response rates). Given the recent popularity of online studies, we also wanted to examine the effect of experimental context (online vs. laboratory) on the efficacy of manipulations of listening demands and motivation. We carried out two highly similar experiments with two groups of 37 young adults: a laboratory experiment and an online experiment. To make listening demands comparable between the two studies, vocoder settings had to differ. All results were analysed using linear mixed models. Results: Under laboratory conditions, listening demands affected all outcomes, with significantly lower correct response rates, slower RTs and greater self-rated work under higher listening demands. In the online study, listening demands only affected RTs. In addition, motivation affected self-rated work. Resource availability was a significant predictor only for RTs in the online study. Discussion: These results show that the influence of motivation and listening demands on LE depends on the type of outcome measures used and on the experimental context; it may also depend on the exact vocoder settings. A controlled laboratory setting and/or particular vocoder settings may be necessary to observe all expected effects of listening demands and motivation.
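The tone-vocoding manipulation mentioned above (fewer channels produce a more degraded, higher-demand signal) can be sketched in a few lines: split the signal into frequency bands, extract each band's amplitude envelope, and use the envelopes to modulate sine carriers at the band centres. This FFT-based implementation and its parameters are illustrative, not the study's actual vocoder:

```python
import numpy as np

def tone_vocode(signal, fs, n_channels=8, fmin=100.0, fmax=8000.0):
    """Crude tone vocoder: log-spaced FFT bands, envelope by rectification
    plus smoothing, sine carrier at each band's geometric centre.
    Reducing n_channels degrades intelligibility (raises listening demand)."""
    edges = np.geomspace(fmin, fmax, n_channels + 1)
    t = np.arange(len(signal)) / fs
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    out = np.zeros(len(signal))
    win = int(0.016 * fs) | 1                # ~16 ms odd-length smoothing window
    kernel = np.ones(win) / win
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_spec = np.where((freqs >= lo) & (freqs < hi), spectrum, 0)
        band = np.fft.irfft(band_spec, n=len(signal))
        env = np.convolve(np.abs(band), kernel, mode='same')   # band envelope
        carrier = np.sin(2 * np.pi * np.sqrt(lo * hi) * t)     # tone carrier
        out += env * carrier
    return out
```

Research vocoders typically use proper filterbanks and calibrated envelope cutoffs, but the band-envelope-carrier structure is the same.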
Magnified neural envelope coding predicts deficits in speech perception in noise
Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction in fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations in intensity in amplitude-modulated sounds. SNHL-induced changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of SNHL listeners to understand speech in the presence of modulated background noise. To date, direct evidence for a link between magnified envelope coding and deficits in speech identification in modulated noise has been absent. Here, magnetoencephalography was used to quantify the effects of SNHL on phase locking to the temporal envelope of modulated noise (envelope coding) in human auditory cortex. Our results show that SNHL enhances the amplitude of envelope coding in posteromedial auditory cortex, whereas it enhances the fidelity of envelope coding in posteromedial and posterolateral auditory cortex. This dissociation was more evident in the right hemisphere, demonstrating functional lateralization in enhanced envelope coding in SNHL listeners. However, enhanced envelope coding was not perceptually beneficial. Our results also show that both hearing thresholds and, to a lesser extent, magnified cortical envelope coding in left posteromedial auditory cortex predict speech identification in modulated background noise. We propose a framework in which magnified envelope coding in posteromedial auditory cortex disrupts the segregation of speech from background noise, leading to deficits in speech perception in modulated background noise.
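Phase locking to a masker's temporal envelope, of the kind quantified above with MEG, is commonly summarized as inter-trial phase coherence at the modulation frequency: project each trial's response onto that frequency, normalize to unit phasors, and take the magnitude of their mean. A minimal numpy sketch of that measure (illustrative, not the study's exact analysis pipeline):

```python
import numpy as np

def envelope_phase_locking(trials, fs, mod_freq):
    """Inter-trial phase coherence at mod_freq for an array of shape
    (n_trials, n_samples): 1.0 = identical phase on every trial (strong
    envelope locking), near 0 = random phase across trials."""
    n_samples = trials.shape[1]
    bin_idx = int(round(mod_freq * n_samples / fs))   # FFT bin of the modulation rate
    coeffs = np.fft.rfft(trials, axis=1)[:, bin_idx]
    unit_phasors = coeffs / np.abs(coeffs)            # discard amplitude, keep phase
    return np.abs(unit_phasors.mean())
```

A magnified envelope response, as reported for the SNHL listeners, would show up in this framework as larger coefficient amplitudes at the modulation frequency; the coherence above isolates the fidelity (phase-consistency) aspect.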
The association between subcortical and cortical fMRI and lifetime noise exposure in listeners with normal hearing thresholds
In animal models, exposure to high noise levels can cause permanent damage to hair-cell synapses (cochlear synaptopathy) for high-threshold auditory nerve fibers without affecting sensitivity to quiet sounds. This has been confirmed in several mammalian species, but the hypothesis that lifetime noise exposure affects auditory function in humans with normal audiometric thresholds remains unconfirmed, and current evidence from human electrophysiology is contradictory. Here we report the auditory brainstem response (ABR), and both transient (stimulus onset and offset) and sustained functional magnetic resonance imaging (fMRI) responses throughout the human central auditory pathway, as a function of lifetime noise exposure. Healthy young individuals aged 25–40 years were recruited into high (n = 32) and low (n = 30) lifetime noise exposure groups, stratified for age and balanced for audiometric threshold up to 16 kHz. fMRI demonstrated robust broadband noise-related activity throughout the auditory pathway (cochlear nucleus, superior olivary complex, nucleus of the lateral lemniscus, inferior colliculus, medial geniculate body and auditory cortex). fMRI responses in the auditory pathway to broadband noise onset were significantly enhanced in the high noise exposure group relative to the low exposure group, differences in sustained fMRI responses did not reach significance, and no significant group differences were found in the click-evoked ABR. Exploratory analyses found no significant relationships between the neural responses and self-reported tinnitus or reduced sound-level tolerance (symptoms associated with synaptopathy). In summary, although the effect was small, these fMRI results suggest that lifetime noise exposure may be associated with central hyperactivity in young adults with normal hearing thresholds.