Audio-visual detection benefits in the rat
Human psychophysical studies have described multisensory perceptual benefits such as enhanced detection rates and faster reaction times in great detail. However, the neural circuits and mechanisms underlying multisensory integration remain difficult to study in the primate brain. While rodents offer the advantage of a range of experimental methodologies to study the neural basis of multisensory processing, rodent studies are still limited due to the small number of available multisensory protocols. We here demonstrate the feasibility of an audio-visual stimulus detection task for rats, in which the animals detect lateralized uni- and multi-sensory stimuli in a two-response forced choice paradigm. We show that animals reliably learn and perform this task. Reaction times were significantly faster and behavioral performance levels higher in multisensory compared to unisensory conditions. This benefit was strongest for dim visual targets, in agreement with classical patterns of multisensory integration, and was specific to task-informative sounds, while uninformative sounds speeded reaction times with little cost for detection performance. Importantly, multisensory benefits for stimulus detection and reaction times appeared at different levels of task proficiency and training experience, suggesting distinct mechanisms inducing these two multisensory benefits. Our results demonstrate behavioral multisensory enhancement in rats in analogy to behavioral patterns known from other species, such as humans. In addition, our paradigm enriches the set of behavioral tasks on which future studies can rely, for example to combine behavioral measurements with imaging or pharmacological studies in the behaving animal or to study changes of integration properties in disease models
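The multisensory reaction-time benefit described in this abstract is typically quantified as the speed-up of the multisensory condition relative to the fastest unisensory condition. A minimal sketch of that comparison, using hypothetical reaction times rather than data from the study:

```python
import numpy as np

# Hypothetical per-trial reaction times in ms; names and values are
# illustrative only, not taken from the study.
rt_visual = np.array([412.0, 398.0, 430.0, 405.0, 421.0])
rt_auditory = np.array([365.0, 372.0, 358.0, 380.0, 369.0])
rt_audiovisual = np.array([331.0, 340.0, 325.0, 338.0, 329.0])

def multisensory_rt_benefit(rt_multi, *rt_uni):
    """Benefit = fastest mean unisensory RT minus mean multisensory RT."""
    fastest_uni = min(np.mean(rt) for rt in rt_uni)
    return fastest_uni - np.mean(rt_multi)

benefit = multisensory_rt_benefit(rt_audiovisual, rt_visual, rt_auditory)
print(round(benefit, 1))  # prints 36.2 for these toy values
```

A positive benefit indicates that multisensory stimulation speeds responses beyond the best unisensory condition, the pattern the abstract reports for dim visual targets.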
Selective Attention and Audiovisual Integration: Is Attending to Both Modalities a Prerequisite for Early Integration?
Interactions between multisensory integration and attention were studied using a combined audiovisual streaming design and a rapid serial visual presentation paradigm. Event-related potentials (ERPs) following audiovisual objects (AV) were compared with the sum of the ERPs following auditory (A) and visual objects (V). Integration processes were expressed as the difference between these AV and (A + V) responses and were studied while attention was directed to one or both modalities or directed elsewhere. Results show that multisensory integration effects depend on the multisensory objects being fully attended—that is, when both the visual and auditory senses were attended. In this condition, a superadditive audiovisual integration effect was observed on the P50 component. When unattended, this effect was reversed; the P50 components of multisensory ERPs were smaller than the unisensory sum. Additionally, we found an enhanced late frontal negativity when subjects attended the visual component of a multisensory object. This effect, bearing a strong resemblance to the auditory processing negativity, appeared to reflect late attention-related processing that had spread to encompass the auditory component of the multisensory object. In conclusion, our results shed new light on how the brain processes multisensory auditory and visual information, including how attention modulates multisensory integration processes
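The additive-model comparison described above, contrasting the audiovisual ERP with the sum of the unisensory ERPs, can be sketched as follows (toy single-channel waveforms, not data from the study):

```python
import numpy as np

# Toy ERP amplitudes (microvolts over successive time samples);
# values are purely illustrative.
erp_a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])   # auditory-only response
erp_v = np.array([0.0, 0.5, 1.5, 0.5, 0.0])   # visual-only response
erp_av = np.array([0.0, 2.0, 4.5, 2.0, 0.0])  # audiovisual response

# Additive-model prediction and the integration difference wave:
# positive values indicate superadditivity (AV > A + V),
# negative values indicate subadditivity.
predicted_sum = erp_a + erp_v
difference_wave = erp_av - predicted_sum
print(difference_wave)
```

In this toy case the difference wave is positive around the response peak, the superadditive pattern the abstract reports for the P50 under full attention; the reversed (subadditive) pattern would appear as negative values.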
The COGs (context, object, and goals) in multisensory processing
Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and “top-down” control processes influencing sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have been traditionally studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer’s goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and the cognitive level. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and categorical attributes of naturalistic objects (e.g. tools, animals). In this review we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature review with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications
Cathodal Occipital tDCS is unable to modulate The Sound Induced Flash Illusion in migraine
Migraine is a highly disabling disease characterized by recurrent pain. Despite intensive effort, the mechanisms of migraine pathophysiology remain an unsolved issue. Evidence from both animal and human studies suggests that migraine is characterized by hyperresponsivity or hyperexcitability of sensory cortices, especially the visual cortex. This phenomenon, in turn, may affect multisensory processing. Indeed, migraineurs present with an abnormally reduced perception of the Sound-induced Flash Illusion (SiFI), a crossmodal illusion that relies on optimal integration of visual and auditory stimuli by the occipital visual cortex. Decreasing visual cortical excitability with transcranial direct current stimulation (tDCS) can increase the SiFI in healthy subjects. Building on these findings, we applied cathodal tDCS over the visual cortex of migraineurs, with and without aura, in order to decrease cortical excitability and thus physiologically restore the perception of a reliable SiFI. Contrary to our expectations, tDCS was unable to reliably modulate the SiFI in migraine. The chronic, relatively excessive visual cortex hyperexcitability featuring the migraineur brain may render tDCS ineffective for restoring multisensory processing in this disease
Single-trial multisensory memories affect later auditory and visual object discrimination.
Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. This study focused on whether such effects generalize and whether they are equivalent when memory discrimination is performed in the visual vs. auditory modality. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories upon unisensory object discrimination was greater when the task was performed in the auditory vs. visual modality. Fourth, there was no evidence for correlation between effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand
Being first matters: topographical representational similarity analysis of ERP signals reveals separate networks for audiovisual temporal binding depending on the leading sense
In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Inter-sensory timing is crucial in this process as only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window (TBW), revealing asymmetries in its size and plasticity depending on the leading input (auditory-visual, AV; visual-auditory, VA). We here tested whether separate neuronal mechanisms underlie this AV-VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV/VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV/VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between AV- and VA-maps at each time point (500 ms post-stimulus window) and then correlated with two alternative similarity model matrices: AVmaps=VAmaps vs. AVmaps≠VAmaps. The tRSA results favored the AVmaps≠VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems
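The core step of the topographical comparison described above, indexing the similarity between AV and VA scalp maps at each time point, can be sketched as follows. Everything here (array shapes, random toy topographies, the choice of Pearson correlation as the similarity index) is an illustrative assumption, not the study's actual pipeline:

```python
import numpy as np

# Toy scalp topographies: one (time points x electrodes) array per
# leading-sense condition; random values stand in for ERP maps.
rng = np.random.default_rng(42)
n_times, n_electrodes = 4, 8
av_maps = rng.standard_normal((n_times, n_electrodes))
va_maps = rng.standard_normal((n_times, n_electrodes))

def map_similarity(av, va):
    """Pearson correlation between AV and VA scalp maps at each time point."""
    return np.array([np.corrcoef(a, v)[0, 1] for a, v in zip(av, va)])

similarity = map_similarity(av_maps, va_maps)
# One similarity value per time point; under the AVmaps != VAmaps model,
# these values are expected to stay low across the post-stimulus window.
print(similarity.shape)
```

In the actual tRSA, such per-time-point similarity structure is compared against the two candidate model matrices (AVmaps=VAmaps vs. AVmaps≠VAmaps) to decide which better fits the data.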
Review: Do the Different Sensory Areas within the Cat Anterior Ectosylvian Sulcal Cortex Collectively Represent a Network Multisensory Hub?
Current theory holds that the numerous functional areas of the cerebral cortex are organized and function as a network. Using connectional databases and computational approaches, the cerebral network has been demonstrated to exhibit a hierarchical structure composed of areas, clusters and, ultimately, hubs. Hubs are highly connected, higher-order regions that also facilitate communication between different sensory modalities. One computationally identified network hub is the visual area of the Anterior Ectosylvian Sulcal cortex (AESc) of the cat. The Anterior Ectosylvian Visual area (AEV) is but one component of the AESc, which also includes auditory (Field of the Anterior Ectosylvian Sulcus - FAES) and somatosensory (Fourth somatosensory representation - SIV) areas. To better understand the nature of cortical network hubs, the present report reviews the biological features of the AESc. Within the AESc, each area has extensive external cortical connections as well as connections with one another. Each of these core representations is separated by a transition zone characterized by bimodal neurons that share sensory properties of both adjoining core areas. Finally, core and transition zones are underlain by a continuous sheet of layer 5 neurons that project to common output structures. Altogether, these shared properties suggest that the collective AESc region represents a multiple sensory/multisensory cortical network hub. Ultimately, such an interconnected, composite structure adds complexity and biological detail to the understanding of cortical network hubs and their function in cortical processing
Distortions of Subjective Time Perception Within and Across Senses
Background: The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood.
Methodology/Findings: We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information, and visual events were never perceived as shorter than their actual durations.
Conclusions/Significance: These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions
