
    Audio-visual detection benefits in the rat

    Human psychophysical studies have described multisensory perceptual benefits such as enhanced detection rates and faster reaction times in great detail. However, the neural circuits and mechanisms underlying multisensory integration remain difficult to study in the primate brain. While rodents offer the advantage of a range of experimental methodologies for studying the neural basis of multisensory processing, rodent studies are still limited by the small number of available multisensory protocols. Here we demonstrate the feasibility of an audio-visual stimulus detection task for rats, in which the animals detect lateralized uni- and multisensory stimuli in a two-response forced-choice paradigm. We show that animals reliably learn and perform this task. Reaction times were significantly faster and behavioral performance levels higher in multisensory compared to unisensory conditions. This benefit was strongest for dim visual targets, in agreement with classical patterns of multisensory integration, and was specific to task-informative sounds, while uninformative sounds speeded reaction times with little cost to detection performance. Importantly, multisensory benefits for stimulus detection and reaction times appeared at different levels of task proficiency and training experience, suggesting distinct mechanisms inducing these two multisensory benefits. Our results demonstrate behavioral multisensory enhancement in rats analogous to behavioral patterns known from other species, such as humans. In addition, our paradigm enriches the set of behavioral tasks on which future studies can rely, for example to combine behavioral measurements with imaging or pharmacological studies in the behaving animal or to study changes in integration properties in disease models.
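
    The reaction-time benefit described above is commonly quantified against the bound set by two independent unisensory detection processes (the race-model inequality, which the decision-rate abstract further down also invokes). The abstract itself does not spell out this analysis; the following is a minimal sketch with made-up reaction times and illustrative variable names only.

        # Hedged sketch: compare the multisensory reaction-time distribution with the
        # bound predicted by an independent race between the two unisensory processes.
        # All numbers are toy data, not results from the study above.
        import numpy as np

        def ecdf(rts, t_grid):
            # Empirical cumulative distribution of reaction times on a common time grid.
            rts = np.sort(np.asarray(rts))
            return np.searchsorted(rts, t_grid, side="right") / len(rts)

        rng = np.random.default_rng(0)
        rt_a = rng.normal(0.42, 0.05, 200)    # auditory-only reaction times (s)
        rt_v = rng.normal(0.48, 0.06, 200)    # visual-only reaction times (s)
        rt_av = rng.normal(0.38, 0.05, 200)   # audio-visual reaction times (s)

        t = np.linspace(0.2, 0.7, 101)
        race_bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
        violation = ecdf(rt_av, t) - race_bound

        # Positive values mean multisensory responses are faster than an independent
        # race between the unisensory processes would allow.
        print("max race-model violation:", violation.max())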

    Selective Attention and Audiovisual Integration: Is Attending to Both Modalities a Prerequisite for Early Integration?

    Interactions between multisensory integration and attention were studied using a combined audiovisual streaming design and a rapid serial visual presentation paradigm. Event-related potentials (ERPs) following audiovisual objects (AV) were compared with the sum of the ERPs following auditory (A) and visual objects (V). Integration processes were expressed as the difference between these AV and (A + V) responses and were studied while attention was directed to one or both modalities or directed elsewhere. Results show that multisensory integration effects depend on the multisensory objects being fully attended—that is, when both the visual and auditory senses were attended. In this condition, a superadditive audiovisual integration effect was observed on the P50 component. When unattended, this effect was reversed; the P50 components of multisensory ERPs were smaller than the unisensory sum. Additionally, we found an enhanced late frontal negativity when subjects attended the visual component of a multisensory object. This effect, bearing a strong resemblance to the auditory processing negativity, appeared to reflect late attention-related processing that had spread to encompass the auditory component of the multisensory object. In conclusion, our results shed new light on how the brain processes multisensory auditory and visual information, including how attention modulates multisensory integration processes.
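
    The additive-model comparison used above amounts to computing a difference wave, ERP(AV) minus the sum of ERP(A) and ERP(V), and inspecting it around the P50. The sketch below illustrates that computation; the sampling rate, toy data, and P50 window are assumptions for illustration, not parameters from the study.

        # Minimal sketch of the AV vs. (A + V) comparison on trial-averaged ERPs.
        import numpy as np

        fs = 500                              # sampling rate in Hz (assumed)
        t = np.arange(-0.1, 0.5, 1.0 / fs)    # epoch from -100 ms to +500 ms

        rng = np.random.default_rng(1)
        erp_av = rng.normal(0.0, 0.1, t.size)   # audiovisual ERP at one electrode (toy data)
        erp_a = rng.normal(0.0, 0.1, t.size)    # auditory-only ERP
        erp_v = rng.normal(0.0, 0.1, t.size)    # visual-only ERP

        diff_wave = erp_av - (erp_a + erp_v)    # > 0 indicates super-additive integration

        p50_window = (t >= 0.04) & (t <= 0.06)  # ~40-60 ms post-stimulus (assumed window)
        print("mean AV - (A + V) amplitude in the P50 window:",
              diff_wave[p50_window].mean())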

    Investigating human audio-visual object perception with a combination of hypothesis-generating and hypothesis-testing fMRI analysis tools

    Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and of using independent datasets to test hypotheses generated from a data-driven analysis.
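
    The max-criterion quoted above (A < AV > V) can be stated compactly: a region counts as integrative only if its audiovisual response exceeds the stronger of the two unisensory responses. A hedged sketch on placeholder response estimates follows; the beta values and their layout are assumptions, not the study's data.

        # Illustrative test of the max criterion on per-subject response estimates
        # (e.g. GLM betas) for one candidate region; the numbers are placeholders.
        import numpy as np

        # Rows = subjects, columns = conditions A, V, AV (assumed layout).
        betas = np.array([
            [0.8, 0.6, 1.1],
            [0.7, 0.9, 1.2],
            [0.5, 0.4, 0.9],
        ])
        a, v, av = betas[:, 0], betas[:, 1], betas[:, 2]

        # Max criterion: the mean AV response must exceed the larger mean unisensory response.
        meets_max = av.mean() > max(a.mean(), v.mean())
        print("max criterion (A < AV > V) satisfied:", meets_max)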

    Integration of auditory and visual communication information in the primate ventrolateral prefrontal cortex

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O’Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication.
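
    The enhancement-versus-suppression classification described above is often expressed as a contrast between the multisensory response and the stronger unisensory response. The index, threshold, and toy spike counts below are a common convention used here for illustration, not the paper's exact measure.

        # Sketch: classify a neuron's audiovisual interaction as enhancement or
        # suppression relative to its best unisensory response (illustrative only).
        import numpy as np

        def multisensory_index(spikes_a, spikes_v, spikes_av):
            # (AV - max(A, V)) / (AV + max(A, V)) computed on mean firing rates.
            best_uni = max(np.mean(spikes_a), np.mean(spikes_v))
            av = np.mean(spikes_av)
            return (av - best_uni) / (av + best_uni)

        # Toy spike counts per trial for one neuron.
        idx = multisensory_index(
            spikes_a=[12, 10, 14, 11],
            spikes_v=[5, 7, 6, 4],
            spikes_av=[8, 7, 9, 6],
        )
        print("suppression" if idx < 0 else "enhancement", f"(index = {idx:.2f})")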

    Maximizing decision rate in multisensory integration

    Effective decision-making in an uncertain world requires making use of all available information, even if distributed across different sensory modalities, as well as trading off the speed of a decision with its accuracy. In tasks with a fixed stimulus presentation time, animal and human subjects have previously been shown to combine information from several modalities in a statistically optimal manner. Furthermore, for easily discriminable stimuli and under the assumption that reaction times result from a race-to-threshold mechanism, multimodal reaction times are typically faster than predicted from unimodal conditions when assuming independent (parallel) races for each modality. However, due to a lack of adequate ideal observer models, it has remained unclear whether subjects perform optimal cue combination when they are allowed to choose their response times freely.
    Based on data collected from human subjects performing a visual/vestibular heading discrimination task, we show that the subjects exhibit worse discrimination performance in the multimodal condition than predicted by standard cue combination criteria, which relate multimodal discrimination performance to sensitivity in the unimodal conditions. Furthermore, multimodal reaction times are slower than those predicted by a parallel race model, opposite to what is commonly observed for easily discriminable stimuli.
    Despite violating the standard criteria for optimal cue combination, we show that subjects still accumulate evidence optimally across time and cues, even when the strength of the evidence varies with time. Additionally, subjects adjust their decision bounds, which control the trade-off between the speed and accuracy of a decision, such that their rates of correct decisions are close to the maximum achievable value.
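
    The "standard cue combination criteria" referenced above predict the multimodal discrimination threshold from the two unimodal thresholds. A short worked sketch with made-up visual and vestibular thresholds follows; the specific numbers are not from the study.

        # Predicted optimal combined threshold under standard cue combination:
        # 1 / sigma_comb^2 = 1 / sigma_vis^2 + 1 / sigma_vest^2.  Numbers are illustrative.
        import numpy as np

        sigma_vis = 3.0    # visual heading discrimination threshold (deg), assumed
        sigma_vest = 4.0   # vestibular heading discrimination threshold (deg), assumed

        sigma_pred = np.sqrt((sigma_vis**2 * sigma_vest**2) /
                             (sigma_vis**2 + sigma_vest**2))
        print(f"optimal combined threshold: {sigma_pred:.2f} deg")  # 2.40 deg here

        # A measured multimodal threshold larger than sigma_pred corresponds to the
        # worse-than-predicted multimodal performance reported above.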

    LeviSense: a platform for the multisensory integration in levitating food and insights into its effect on flavour perception

    Eating is one of the most multisensory experiences in everyday life. All of our five senses (i.e. taste, smell, vision, hearing and touch) are involved, even if we are not aware of it. However, while multisensory integration has been well studied in psychology, there is no single platform for systematically testing the effects of different stimuli. This lack results in unresolved challenges for the design of taste-based immersive experiences. Here, we present LeviSense: the first system designed for multisensory integration in gustatory experiences based on levitated food. Our system enables the systematic exploration of different sensory effects on eating experiences. It also opens up new opportunities for other professionals (e.g., molecular gastronomy chefs) looking for innovative taste-delivery platforms. We describe the design process behind LeviSense and conduct two experiments to test a subset of the crossmodal combinations (i.e., taste and vision, taste and smell). Our results show how different lighting and smell conditions affect the perceived taste intensity, pleasantness, and satisfaction. We discuss how LeviSense creates new technical, creative, and expressive possibilities in a series of emerging design spaces within Human-Food Interaction.

    Cerebral correlates and statistical criteria of cross-modal face and voice integration

    Perception of faces and voices plays a prominent role in human social interaction, making multisensory integration of cross-modal speech a topic of great interest in cognitive neuroscience. How to define potential sites of multisensory integration using functional magnetic resonance imaging (fMRI) is currently under debate, with three statistical criteria frequently used (i.e., the super-additive, max and mean criteria). In the present fMRI study, 20 participants were scanned in a block design under three stimulus conditions: dynamic unimodal face, unimodal voice and bimodal face–voice. Using this single dataset, we examine all these statistical criteria in an attempt to define loci of face–voice integration. While the super-additive and mean criteria essentially revealed regions in which one of the unimodal responses was a deactivation, the max criterion appeared stringent and only highlighted the left hippocampus as a potential site of face–voice integration. Psychophysiological interaction analysis showed that connectivity between occipital and temporal cortices increased during bimodal compared to unimodal conditions. We concluded that, when investigating multisensory integration with fMRI, all these criteria should be used in conjunction with manipulation of stimulus signal-to-noise ratio and/or cross-modal congruency.
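
    The three criteria named above differ only in the bar the bimodal response has to clear: the sum of the unimodal responses (super-additive), the larger of the two (max), or their average (mean). A hedged sketch on placeholder response estimates makes the comparison explicit; the beta values are invented for illustration.

        # Contrast the super-additive, max and mean criteria on one region's
        # condition estimates; the beta values are placeholders, not study data.
        beta_face, beta_voice, beta_fv = 0.6, 0.5, 0.9   # unimodal and bimodal responses (assumed)

        super_additive = beta_fv > beta_face + beta_voice            # AV > A + V
        max_criterion = beta_fv > max(beta_face, beta_voice)         # AV > max(A, V)
        mean_criterion = beta_fv > (beta_face + beta_voice) / 2.0    # AV > mean(A, V)

        # With both unimodal responses positive the bars are ordered mean <= max <= sum,
        # so here the mean and max criteria are met while the super-additive one is not.
        print({"super-additive": super_additive, "max": max_criterion, "mean": mean_criterion})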

    Caveat Emptor: The Meaning of Perception and Integration in Speech Perception

    A recent letter^1^ claimed integration of auditory and tactile information in speech perception. Although I have been an advocate of multisensory integration, neither perception nor integration was sufficiently formalized, operationalized, or tested to support this claim.