Dissociable neural correlates of multisensory coherence and selective attention
Previous work has demonstrated that performance in an auditory selective attention task can be enhanced or impaired, depending on whether a task-irrelevant visual stimulus is temporally coherent with a target auditory stream or with a competing distractor. However, it remains unclear how audiovisual (AV) temporal coherence and auditory selective attention interact at the neurophysiological level. Here, we measured neural activity using electroencephalography (EEG) while human participants (men and women) performed an auditory selective attention task, detecting deviants in a target audio stream. The amplitude envelope of the two competing auditory streams changed independently, while the radius of a visual disc was manipulated to control the audiovisual coherence. Analysis of the neural responses to the sound envelope demonstrated that auditory responses were enhanced independently of the attentional condition: both target and masker stream responses were enhanced when temporally coherent with the visual stimulus. In contrast, attention enhanced the event-related potential (ERP) evoked by the transient deviants, independently of AV coherence. Finally, in an exploratory analysis, we identified a spatiotemporal component of the ERP in which temporal coherence enhanced the deviant-evoked responses only in the unattended stream. These results provide evidence for dissociable neural signatures of bottom-up (coherence) and top-down (attention) effects in AV object formation.
Significance Statement: Temporal coherence between auditory stimuli and task-irrelevant visual stimuli can enhance behavioral performance in auditory selective attention tasks. However, how audiovisual temporal coherence and attention interact at the neural level has not been established. Here, we measured EEG during a behavioral task designed to independently manipulate AV coherence and auditory selective attention. While some auditory features (sound envelope) could be coherent with visual stimuli, other features (timbre) were independent of visual stimuli. We find that audiovisual integration can be observed independently of attention for sound envelopes temporally coherent with visual stimuli, while the neural responses to unexpected timbre changes are most strongly modulated by attention. Our results provide evidence for dissociable neural mechanisms of bottom-up (coherence) and top-down (attention) effects on AV object formation.
Relative sound localisation abilities in human listeners
Spatial acuity varies with sound-source azimuth, signal-to-noise ratio, and the spectral characteristics of the sound source. Here, the spatial localisation abilities of listeners were assessed using a relative localisation task. This task tested localisation ability at fixed angular separations throughout space using a two-alternative forced-choice design across a variety of listening conditions. Subjects were required to determine whether a target sound originated to the left or right of a preceding reference in the presence of a multi-source noise background. Experiment 1 demonstrated that subjects' ability to determine the relative location of two sources declined with less favourable signal-to-noise ratios and at peripheral locations. Experiment 2 assessed performance with both broadband and spectrally restricted stimuli designed to limit localisation cues to predominantly interaural level differences or interaural timing differences (ITDs). Predictions generated from topographic, modified topographic, and two-channel models of sound localisation suggest that for low-pass stimuli, where ITD cues were dominant, the two-channel model provides an adequate description of the experimental data, whereas for broadband and high-frequency bandpass stimuli none of the models was able to fully account for performance. Experiment 3 demonstrated that relative localisation performance was uninfluenced by shifts in gaze direction.
Egocentric and allocentric representations in auditory cortex
A key function of the brain is to provide a stable representation of an object's location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves, but that a minority of cells can represent sound location in the world independent of our own position.
Speech Cues Contribute to Audiovisual Spatial Integration
Speech is the most important form of human communication, but ambient sounds and competing talkers often degrade its acoustics. Fortunately, the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech cues interact with audiovisual spatial integration mechanisms. Here, we combine two well-established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech cues can impede integration in space. This suggests a direct but asymmetrical influence between the ventral 'what' and dorsal 'where' pathways.
Grouping by feature of cross-modal flankers in temporal ventriloquism
Signals in one sensory modality can influence perception of another; for example, audition can bias visual timing: temporal ventriloquism. Strong accounts of temporal ventriloquism hold that the sensory representation of visual signal timing changes to that of the nearby sound. Alternatively, the underlying sensory representations do not change; rather, perceptual grouping processes based on spatial, temporal, and featural information produce best estimates of global event properties. In support of this interpretation, when feature-based perceptual grouping conflicts with temporal information in scenarios that reveal temporal ventriloquism, the effect is abolished. However, previous demonstrations of this disruption used long-range visual apparent-motion stimuli. We investigated whether similar manipulations of feature grouping could also disrupt the classical temporal ventriloquism demonstration, which occurs over a short temporal range. We estimated the precision of participants' reports of which of two visual bars occurred first. The bars were accompanied by different cross-modal signals that onset synchronously or asynchronously with each bar. Participants' performance improved with asynchronous relative to synchronous presentation (temporal ventriloquism); however, unlike in the long-range apparent-motion paradigm, this improvement was unaffected by different combinations of cross-modal features, suggesting that featural similarity of cross-modal signals may not modulate cross-modal temporal influences at short time scales.
Encoding Odorant Identity by Spiking Packets of Rate-Invariant Neurons in Awake Mice
Background: How do neural networks encode sensory information? Following sensory stimulation, neural coding is commonly assumed to be based on neurons changing their firing rate. In contrast, both theoretical work and experiments in several sensory systems have shown that neurons could encode information as coordinated cell assemblies by adjusting their spike timing, without changing their firing rate. Nevertheless, in the olfactory system, there is little experimental evidence supporting such a model.
Methodology/Principal Findings: To study these issues, we implanted tetrodes in the olfactory bulb of awake mice to record the odorant-evoked activity of mitral/tufted (M/T) cells. We showed that following odorant presentation, most M/T neurons do not significantly change their firing rate over a breathing cycle but rather respond to odorant stimulation by redistributing their firing activity within respiratory cycles. In addition, we showed that sensory information can be encoded by cell assemblies composed of such neurons, supporting the idea that coordinated populations of globally rate-invariant neurons could be efficiently used to convey information about odorant identity. We showed that different coding schemes can convey a high amount of odorant information for specific read-out time windows. Finally, we showed that the optimal read-out time window corresponds to the duration of gamma oscillation cycles.
Conclusion: We propose that odorants can be encoded by populations of cells that exhibit fine temporal tuning of spiking activity while displaying weak or no firing rate change. These cell assemblies may transfer sensory information in spiking packets.
Encoding of Temporal Information by Timing, Rate, and Place in Cat Auditory Cortex
A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: 1) the event-locked spike-timing precision, 2) the mean firing rate, and 3) the interspike interval (ISI). To determine how well these response aspects capture information about stimulus repetition rate, we measured local group responses of cortical neurons in cat anterior auditory field (AAF) to click trains and calculated their mutual information based on these different codes. ISIs of the multiunit responses carried substantially higher information about low repetition rates than either spike-timing precision or firing rate. Combining firing rate and ISI codes was synergistic and captured modestly more repetition information. Spatial distribution analyses showed distinct local clustering properties for each encoding scheme for repetition information, indicative of a place code. Diversity in local processing emphasis and distribution of different repetition rate codes across AAF may give rise to concurrent feed-forward processing streams that contribute differently to higher-order sound analysis.
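The abstract above compares spike-timing, firing-rate, and ISI codes by the mutual information each carries about stimulus repetition rate. As a minimal, hypothetical sketch of that kind of comparison (not the study's actual analysis pipeline; all names and data below are invented for illustration), mutual information between a discrete stimulus variable and a discretised response code can be estimated from joint counts:

```python
# Plug-in mutual-information estimate (in bits) between a discrete
# stimulus variable (e.g. click-train repetition rate per trial) and a
# discretised neural response code (e.g. a binned ISI statistic).
from collections import Counter
from math import log2

def mutual_information(stim, resp):
    """MI(stim; resp) in bits from paired discrete observations."""
    n = len(stim)
    joint = Counter(zip(stim, resp))   # joint counts over (stimulus, response)
    p_s = Counter(stim)                # marginal stimulus counts
    p_r = Counter(resp)                # marginal response counts
    mi = 0.0
    for (s, r), c in joint.items():
        p_sr = c / n
        mi += p_sr * log2(p_sr / ((p_s[s] / n) * (p_r[r] / n)))
    return mi

# A code that perfectly separates two repetition rates carries 1 bit;
# a code that never changes carries 0 bits.
rates     = [4, 4, 8, 8, 4, 8]   # hypothetical repetition rate per trial
isi_code  = [0, 0, 1, 1, 0, 1]   # e.g. binned median ISI per trial
flat_code = [1, 1, 1, 1, 1, 1]   # an uninformative response code
print(mutual_information(rates, isi_code))   # 1.0
print(mutual_information(rates, flat_code))  # 0.0
```

Real analyses of this kind additionally correct for the upward bias of the plug-in estimator at small trial counts, which the sketch above omits.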
Auditory Cortex Tracks Both Auditory and Visual Stimulus Dynamics Using Low-Frequency Neuronal Phase Modulation
How is naturalistic multisensory information combined in the human brain? Based on MEG data, we show that phase modulation of visual and auditory signals captures the dynamics of complex scenes.
The auditory cortex of the bat Phyllostomus discolor: Localization and organization of basic response properties
Background: The mammalian auditory cortex can be subdivided into various fields characterized by neurophysiological and neuroarchitectural properties and by connections with different nuclei of the thalamus. Besides the primary auditory cortex, echolocating bats have cortical fields for the processing of temporal and spectral features of the echolocation pulses. This paper reports on the location, neuroarchitecture and basic functional organization of the auditory cortex of the microchiropteran bat Phyllostomus discolor (family: Phyllostomidae).
Results: The auditory cortical area of P. discolor is located at parieto-temporal portions of the neocortex. It covers a rostro-caudal range of about 4800 μm and a medio-lateral distance of about 7000 μm on the flattened cortical surface. The auditory cortices of ten adult P. discolor were electrophysiologically mapped in detail. Responses of 849 units (single neurons and neuronal clusters of up to three neurons) to pure tone stimulation were recorded extracellularly. Cortical units were characterized and classified depending on their response properties, such as best frequency, auditory threshold, first spike latency, response duration, width and shape of the frequency response area, and binaural interactions. Based on neurophysiological and neuroanatomical criteria, the auditory cortex of P. discolor could be subdivided into anterior and posterior ventral fields and anterior and posterior dorsal fields. The representation of response properties within the different auditory cortical fields was analyzed in detail. The two ventral fields were distinguished by their tonotopic organization with opposing frequency gradients. The dorsal cortical fields were not tonotopically organized but contained neurons that were responsive to high frequencies only.
Conclusion: The auditory cortex of P. discolor resembles the auditory cortex of other phyllostomid bats in size and basic functional organization. The tonotopically organized posterior ventral field might represent the primary auditory cortex, and the tonotopically organized anterior ventral field seems to be similar to the anterior auditory field of other mammals. As most energy of the echolocation pulse of P. discolor is contained in the high-frequency range, the non-tonotopically organized high-frequency dorsal region seems to be particularly important for echolocation.
Transdermal drug delivery in horses: an in vitro comparison of skin structure and permeation of two model drugs at various anatomical sites
Abstract
Background: Oral and parenteral drug delivery in horses can be difficult. Equine-specific transdermal drug formulations offer improved ease of treatment; development of such formulations requires a deeper understanding of the structural and chemical tissue barrier of horse skin.
Hypothesis/Objectives: To compare the structural composition and barrier properties of equine skin.
Animals: Six warmblood horses (two males, four females) with no skin diseases.
Materials and Methods: Routine histological and microscopic analyses were carried out with image analysis for skin from six different anatomical locations. In vitro drug permeation was analysed using a standard Franz diffusion cell protocol coupled with reversed-phase high-performance liquid chromatography, detailing flux, lag times and tissue partitioning ratios of two model drug compounds.
Results: Epidermal and dermal thicknesses varied between sites. The dermal and epidermal thicknesses of the croup were 1764 ± 115 μm and 36 ± 3.6 μm, respectively, and were significantly different (p < 0.05) from the inner thigh thicknesses, which were 824 ± 35 μm and 49 ± 3.6 μm. Follicular density and size also varied. The highest flux for the model hydrophilic molecule (caffeine) was for the flank (3.22 ± 0.36 μg/cm2/h), while that for the lipophilic molecule (ibuprofen) was for the inner thigh (0.12 ± 0.02 μg/cm2/h).
Conclusions and Clinical Relevance: Anatomical location differences in equine skin structure and small-molecule permeability were demonstrated. These results can aid in the development of transdermal therapies for horses.