122 research outputs found

    Contributions of Sensory Coding and Attentional Control to Individual Differences in Performance in Spatial Auditory Selective Attention Tasks

    Listeners with normal hearing thresholds differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials from the scalp (ERPs, reflecting cortical responses to sound), and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited by fine stimulus details vs. by control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs.
These results support the hypothesis that differences in behavioral ability amongst listeners with normal hearing thresholds can arise from both subcortical coding differences and differences in attentional control, depending on stimulus characteristics and task demands.
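The subcortical coding-strength measure described above (comparing how the EFR degrades from partially to fully masked tones) reduces to comparing narrowband response amplitudes across masking conditions. A minimal sketch follows; the function names and the ratio-of-amplitudes form are our own illustrative choices, not the paper's exact derivation:

```python
import numpy as np

def efr_amplitude(response, fs, f0):
    """Spectral amplitude of an envelope following response (EFR)
    at the stimulus envelope frequency f0 (Hz), sampling rate fs (Hz)."""
    n = len(response)
    spectrum = np.abs(np.fft.rfft(response)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Amplitude at the FFT bin closest to the envelope frequency
    return spectrum[np.argmin(np.abs(freqs - f0))]

def coding_strength(efr_partial_mask, efr_full_mask, fs, f0):
    """Ratio of EFR amplitude under full masking to that under partial
    masking: near 1 means the response survives masking (strong
    subcortical coding); near 0 means it is badly degraded."""
    return (efr_amplitude(efr_full_mask, fs, f0) /
            efr_amplitude(efr_partial_mask, fs, f0))
```

In practice each response would be an average over many trials; the sketch only shows how the metric compares response strength across masking conditions.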

    An Investigation of the Effects of Categorization and Discrimination Training on Auditory Perceptual Space

    Psychophysical phenomena such as categorical perception and the perceptual magnet effect indicate that our auditory perceptual spaces are warped for some stimuli. This paper investigates the effects of two different kinds of training on auditory perceptual space. It is first shown that categorization training, in which subjects learn to identify stimuli within a particular frequency range as members of the same category, can lead to a decrease in sensitivity to stimuli in that category. This phenomenon is an example of acquired similarity and apparently has not been previously demonstrated for a category-relevant dimension. Discrimination training with the same set of stimuli was shown to have the opposite effect: subjects became more sensitive to differences in the stimuli presented during training. Further experiments investigated some of the conditions that are necessary to generate the acquired similarity found in the first experiment. The results of these experiments are used to evaluate two neural network models of the perceptual magnet effect. These models, in combination with our experimental results, are used to generate an experimentally testable hypothesis concerning changes in the brain's auditory maps under different training conditions. Funded by the Alfred P. Sloan Foundation, the National Institute on Deafness and Other Communication Disorders (R29 02852), and the Air Force Office of Scientific Research (F49620-98-1-0108).

    Spatial Auditory Display: Comments on Shinn-Cunningham et al.

    Spatial auditory displays have received a great deal of attention in the community investigating how to present information through sound. This short commentary discusses our 2001 ICAD paper (Shinn-Cunningham, Streeter, and Gyss), which explored whether it is possible to provide enhanced spatial auditory information in an auditory display. The discussion provides some historical context and describes how work on representing information in spatial auditory displays has progressed over the last five years.

    HISTORICAL CONTEXT

    The next time you find yourself in a noisy, crowded environment like a cocktail party, plug one ear. Suddenly, your ability to sort out and understand the sounds in the environment collapses. This simple demonstration of the importance of spatial hearing to everyday behavior has motivated research in spatial auditory processing for decades. Perhaps unsurprisingly, spatial auditory displays have received a great deal of attention in the ICAD community. Sound source location is one stimulus attribute that can be easily manipulated; thus, spatial information can be used to represent arbitrary information in an auditory display. In addition to being used directly to encode data in an auditory display, spatial cues also are important in allowing a listener to focus attention on a source of interest when there are multiple sound sources competing for auditory attention. Although it is theoretically easy to produce accurate spatial cues in an auditory display, the signal processing required to render natural spatial cues in real time (and the amount of care required to render realistic cues) is prohibitive even with current technologies.
    Given both the important role that spatial auditory information can play in conveying acoustic information to a listener and the practical difficulties encountered when trying to include realistic spatial cues in a display, spatial auditory perception and technologies for rendering virtual auditory space have both been well-represented areas of research at every ICAD conference held to date. Even with a good virtual auditory display, the amount of spatial auditory information that a listener can extract is limited compared to other senses. For instance, auditory localization accuracy is orders of magnitude worse than visual spatial resolution. The study reprinted here, originally reported at ICAD 2001, was motivated by a desire to increase the amount of spatial information a listener could extract from a virtual auditory display. The original idea was to see if spatial resolution could be improved in a virtual auditory display by emphasizing spatial acoustic cues. The questions we were interested in were: 1) Can listeners learn to accommodate a new mapping between exocentric location and acoustic cues, so that they do not mislocalize sounds after training? and 2) Do such remappings lead to improved spatial resolution, or is there some other factor limiting performance?

    RESEARCH PROCESS

    The reprinted study was designed to test a model that accounted for results from previous experiments investigating remapped spatial cues. The model predicted that spatial performance is restricted by central memory constraints, not by a low-level sensory limitation on spatial auditory resolution. However, the model failed for the experiments reported: listeners actually achieved better-than-normal spatial resolution following training with the remapped auditory cues (unlike in any previous studies). These results were encouraging on the one hand, as they suggested

    Individual Differences Reveal Correlates of Hidden Hearing Deficits

    Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of “normal hearing.”

    Cochlear neuropathy and the coding of supra-threshold sound

    Many listeners with hearing thresholds within the clinically normal range nonetheless complain of difficulty hearing in everyday settings and understanding speech in noise. Converging evidence from human and animal studies points to one potential source of such difficulties: differences in the fidelity with which supra-threshold sound is encoded in the early portions of the auditory pathway. Measures of auditory subcortical steady-state responses (SSSRs) in humans and animals support the idea that the temporal precision of the early auditory representation can be poor even when hearing thresholds are normal. In humans with normal hearing thresholds (NHTs), paradigms that require listeners to make use of the detailed spectro-temporal structure of supra-threshold sound, such as selective attention and discrimination of frequency modulation (FM), reveal individual differences that correlate with subcortical temporal coding precision. Animal studies show that noise exposure and aging can cause a loss of a large percentage of auditory nerve fibers (ANFs) without any significant change in measured audiograms. Here, we argue that cochlear neuropathy may reduce encoding precision of supra-threshold sound, and that this manifests both behaviorally and in SSSRs in humans. Furthermore, recent studies suggest that noise-induced neuropathy may be selective for higher-threshold, lower-spontaneous-rate nerve fibers. Based on our hypothesis, we suggest some approaches that may yield particularly sensitive, objective measures of supra-threshold coding deficits that arise due to neuropathy. Finally, we comment on the potential clinical significance of these ideas and identify areas for future investigation.

    Near-infrared spectroscopy as a tool for marine mammal research and care

    This project was partially funded by the Department for Business, Energy and Industrial Strategy Offshore Energy Strategic Environmental Assessment Programme. Supplementary funding supporting JM was provided by the US Office of Naval Research (ONR) grant nos. N00014-18-1-2062 and N00014-20-1-2709. Supplementary funding supporting AF and JM was provided by ONR grant no. N00014-19-1-2560. Supplementary funding supporting BS-C, JK, and AR was provided by ONR grant no. N00014-19-1-1223.

    Developments in wearable human medical and sports health trackers have offered new solutions to challenges encountered by eco-physiologists attempting to measure physiological attributes in freely moving animals. Near-infrared spectroscopy (NIRS) is one such solution that has potential as a powerful physio-logging tool to assess physiology in freely moving animals. NIRS is a non-invasive optics-based technology that uses non-ionizing radiation to illuminate biological tissue and measures changes in oxygenated and deoxygenated hemoglobin concentrations inside tissues such as skin, muscle, and the brain. The overall footprint of the device is small enough to be deployed in wearable physio-logging devices. We show that changes in hemoglobin concentration can be recorded from bottlenose dolphins and gray seals with signal quality comparable to that achieved in human recordings. We further discuss functionality, benefits, and limitations of NIRS as a standard tool for animal care and wildlife tracking for the marine mammal research community.
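The core NIRS computation, turning measured light-attenuation changes at two wavelengths into oxy- and deoxy-hemoglobin concentration changes, is conventionally done with the modified Beer-Lambert law. A minimal sketch follows; the extinction coefficients and pathlength parameters in the usage lines are illustrative placeholders, not calibrated values for any species:

```python
import numpy as np

def mbll_hemoglobin(d_od, ext_coeffs, distance_cm, dpf):
    """Modified Beer-Lambert law: solve d_od = (E * d * DPF) @ dC
    for dC = (dHbO, dHbR).

    d_od        -- optical-density changes at two wavelengths
    ext_coeffs  -- 2x2 extinction-coefficient matrix
                   (rows: wavelengths, cols: HbO, HbR)
    distance_cm -- source-detector separation
    dpf         -- differential pathlength factor (scattering correction)
    """
    E = np.asarray(ext_coeffs, dtype=float)
    return np.linalg.solve(E * distance_cm * dpf,
                           np.asarray(d_od, dtype=float))

# Illustrative (placeholder) extinction coefficients at two wavelengths
E = [[1486.0, 3843.0],   # shorter wavelength: (HbO, HbR)
     [2526.0, 1798.0]]   # longer wavelength: (HbO, HbR)
d_hbo, d_hbr = mbll_hemoglobin([0.01, 0.02], E, distance_cm=5.0, dpf=6.0)
```

The long source-detector separations discussed for dolphins would mainly change the effective pathlength term (distance times DPF), not the structure of this computation.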

    Evaluating feasibility of functional near-infrared spectroscopy in dolphins

    SIGNIFICANCE: Using functional near-infrared spectroscopy (fNIRS) in bottlenose dolphins (Tursiops truncatus) could help to understand how echolocating animals perceive their environment and how they focus on specific auditory objects, such as fish, in noisy marine settings.

    AIM: To test the feasibility of near-infrared spectroscopy (NIRS) in medium-sized marine mammals, such as dolphins, we modeled the light propagation with computational tools to determine the wavelengths, optode locations, and separation distances that maximize sensitivity to brain tissue.

    APPROACH: Using frequency-domain NIRS, we measured the absorption and reduced scattering coefficients of dolphin sculp. We assigned muscle, bone, and brain optical properties from the literature and modeled light propagation in a spatially accurate and biologically relevant model of a dolphin head, using finite-element modeling. We assessed tissue sensitivities for a range of wavelengths (600 to 1700 nm), source-detector distances (50 to 120 mm), and animal sizes (juvenile model 25% smaller than adult).

    RESULTS: We found that the wavelengths most suitable for imaging the brain fell into two ranges: 700 to 900 nm and 1100 to 1150 nm. The optimal location for brain sensing positioned the center point between source and detector 30 to 50 mm caudal of the blowhole and at an angle 45 to 90 deg lateral off the midsagittal plane. Brain tissue sensitivity comparable to human measurements appears achievable only for smaller animals, such as juvenile bottlenose dolphins or smaller species of cetaceans, such as porpoises, or with source-detector separations ≫ 100 mm in adult dolphins.

    CONCLUSIONS: Brain measurements in juvenile or subadult dolphins, or smaller dolphin species, may be possible using specialized fNIRS devices that support optode separations of > 100 mm. We speculate that many measurement repetitions will be required to overcome hemodynamic signals originating predominantly from the muscle layer above the skull. NIRS measurements of muscle tissue are feasible today with source-detector separations of 50 mm, or even less.

    Hearing the light: neural and perceptual encoding of optogenetic stimulation in the central auditory pathway

    Optogenetics provides a means to dissect the organization and function of neural circuits. Optogenetics also offers the translational promise of restoring sensation, enabling movement or supplanting abnormal activity patterns in pathological brain circuits. However, the inherent sluggishness of evoked photocurrents in conventional channelrhodopsins has hampered the development of optoprostheses that adequately mimic the rate and timing of natural spike patterning. Here, we explore the feasibility and limitations of a central auditory optoprosthesis by photoactivating mouse auditory midbrain neurons that either express channelrhodopsin-2 (ChR2) or Chronos, a channelrhodopsin with ultra-fast channel kinetics. Chronos-mediated spike fidelity surpassed ChR2 and natural acoustic stimulation to support a superior code for the detection and discrimination of rapid pulse trains. Interestingly, this midbrain coding advantage did not translate to a perceptual advantage, as behavioral detection of midbrain activation was equivalent with both opsins. Auditory cortex recordings revealed that the precisely synchronized midbrain responses had been converted to a simplified rate code that was indistinguishable between opsins and less robust overall than acoustic stimulation. These findings demonstrate the temporal coding benefits that can be realized with next-generation channelrhodopsins, but also highlight the challenge of inducing variegated patterns of forebrain spiking activity that support adaptive perception and behavior.
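Spike-timing fidelity to rapid pulse trains of the kind compared here is commonly quantified with vector strength, which measures how tightly spikes phase-lock to a periodic stimulus. A minimal sketch (vector strength is a standard metric; we are not claiming it is the exact analysis used in the study):

```python
import numpy as np

def vector_strength(spike_times_s, pulse_rate_hz):
    """Vector strength of spike times relative to a periodic pulse train:
    1.0 = every spike at the same phase of the cycle (perfect locking),
    ~0.0 = spikes spread uniformly across the cycle."""
    phases = 2.0 * np.pi * pulse_rate_hz * np.asarray(spike_times_s,
                                                      dtype=float)
    # Magnitude of the mean resultant vector of spike phases
    return float(np.abs(np.mean(np.exp(1j * phases))))
```

Under a metric like this, the faster channel kinetics of Chronos would be expected to yield higher phase locking at high pulse rates than ChR2, consistent with the midbrain coding advantage described above.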