
    Comparing Bayesian models for multisensory cue combination without mandatory integration

    Bayesian models of multisensory perception traditionally address the problem of estimating an underlying variable that is assumed to be the cause of the two sensory signals. The brain, however, has to solve a more general problem: it must also establish which signals come from the same source and should be integrated, and which do not and should be segregated. In recent years, several models have been proposed to solve this problem in a Bayesian fashion. One of these has the strength that it explicitly formalizes the causal structure of the sensory signals. We first compare these models on a formal level. We then report a psychophysics experiment testing human performance in an auditory-visual spatial localization task in which integration is not mandatory. We find that the causal Bayesian inference model accounts for the data better than the other models.
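
    A minimal sketch of the kind of causal inference computation discussed above may help make the model class concrete. It assumes Gaussian likelihoods and a Gaussian spatial prior centred at zero; all variable names and parameter values are illustrative and are not taken from the paper.

```python
import numpy as np

def causal_inference_estimate(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common):
    """Bayesian causal inference for one audio-visual trial (model averaging).

    x_a, x_v  : noisy auditory and visual measurements
    sigma_a/v : sensory noise standard deviations
    sigma_p   : width of the Gaussian spatial prior centred at 0
    p_common  : prior probability that both signals share one cause
    Returns the posterior probability of a common cause and the auditory estimate.
    """
    var_a, var_v, var_p = sigma_a**2, sigma_v**2, sigma_p**2

    # Likelihood of the measurement pair under a common cause (C = 1):
    # both measurements and the prior refer to a single source location.
    denom_c1 = var_a * var_v + var_a * var_p + var_v * var_p
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * var_p
                             + x_a**2 * var_v
                             + x_v**2 * var_a) / denom_c1) / (2 * np.pi * np.sqrt(denom_c1))

    # Likelihood under independent causes (C = 2): each measurement has its own source.
    like_c2 = np.exp(-0.5 * (x_a**2 / (var_a + var_p)
                             + x_v**2 / (var_v + var_p))) \
              / (2 * np.pi * np.sqrt((var_a + var_p) * (var_v + var_p)))

    # Posterior probability of a common cause (Bayes' rule).
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Location estimates under each causal structure (reliability-weighted means).
    s_common = (x_a / var_a + x_v / var_v) / (1 / var_a + 1 / var_v + 1 / var_p)
    s_indep_a = (x_a / var_a) / (1 / var_a + 1 / var_p)

    # Model averaging: weight the two estimates by the posterior over causal structures.
    s_hat_a = post_c1 * s_common + (1 - post_c1) * s_indep_a
    return post_c1, s_hat_a

print(causal_inference_estimate(x_a=5.0, x_v=2.0, sigma_a=4.0, sigma_v=1.0,
                                sigma_p=10.0, p_common=0.5))
```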

    Causal inference in multisensory perception and the brain

    To build coherent and veridical multisensory representations of the environment, human observers consider the causal structure of multisensory signals: if they infer a common source of the signals, observers integrate them weighted by their reliability; otherwise, they segregate the signals. Generally, observers infer a common source if the signals correspond structurally and spatiotemporally. In six projects, this PhD thesis investigated the causal inference model using audiovisual spatial signals presented to human observers in a ventriloquist paradigm. A first psychophysical study showed that sensory reliability determines causal inference via two mechanisms: sensory reliability modulates how observers infer the causal structure from spatial signal disparity, and it determines the weighting of the audiovisual signals if observers integrate them under the assumption of a common source. Using multivariate decoding of fMRI signals, three PhD projects revealed that the auditory and visual cortical hierarchies jointly implement causal inference, with specific regions of the hierarchies representing the constituent spatial estimates of the causal inference model. In line with this model, anterior regions of the intraparietal sulcus (IPS) represented audiovisual signals depending on visual reliability, task relevance, and the spatial disparity of the signals. However, even for small signal discrepancies suggesting a common source, reliability weighting in IPS was suboptimal compared with a Maximum Likelihood Estimation model. By manipulating visual reliability over time, the fifth PhD project demonstrated that human observers learn sensory reliability from current and past signals in order to weight audiovisual signals, consistent with a Bayesian learner. Finally, the sixth project showed that when visual flashes were suppressed from awareness by continuous flash suppression, the visual bias on the perceived auditory location was strongly reduced but still significant; the reduced ventriloquist effect was presumably mediated by the drop in visual reliability that accompanies perceptual unawareness. In conclusion, the thesis suggests that human observers integrate multisensory signals according to their causal structure and temporal regularity: they integrate the signals when a common source is likely, weighting them in proportion to reliabilities learnt from the signals' history. Crucially, specific regions of the cortical hierarchies jointly implement these multisensory processes.
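
    The claim that observers learn sensory reliability from current and past signals can be illustrated with a toy sketch of a reliability-tracking observer. The exponential discounting of past evidence and all constants below are assumptions made for illustration, not the fitted model from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def track_visual_reliability(visual_errors, decay=0.9):
    """Estimate visual reliability (precision) from a history of visual errors.

    A leaky accumulator of squared error: recent trials count more than older
    ones, so the estimate follows changes in stimulus reliability over time.
    """
    var_est = 1.0
    for e in visual_errors:
        var_est = decay * var_est + (1 - decay) * e ** 2
    return 1.0 / var_est

def fuse(x_aud, x_vis, rel_aud, rel_vis):
    """Reliability-weighted (precision-weighted) audiovisual fusion."""
    w_vis = rel_vis / (rel_aud + rel_vis)
    return w_vis * x_vis + (1 - w_vis) * x_aud

# Simulate a block in which visual noise doubles halfway through;
# the tracked reliability mostly reflects the recent, noisier trials.
sigma_vis_history = np.r_[np.full(50, 2.0), np.full(50, 4.0)]
visual_errors = rng.normal(0.0, sigma_vis_history)

rel_vis = track_visual_reliability(visual_errors)
rel_aud = 1.0 / 3.0 ** 2  # assumed fixed auditory reliability
print(fuse(x_aud=4.0, x_vis=1.0, rel_aud=rel_aud, rel_vis=rel_vis))
```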

    The Speed, Precision and Accuracy of Human Multisensory Perception following Changes to the Visual Sense

    Human adults can combine information from multiple senses to improve their perceptual judgments. Visual and multisensory experience plays an important role in the development of multisensory integration; however, it is unclear to what extent changes in vision affect multisensory processing later in life. In particular, it is not known whether adults account for changes to the relative reliability of their senses following sensory loss, treatment, or training. Using psychophysical methods, this thesis studied the multisensory processing of individuals experiencing changes to the visual sense. Chapters 2 and 3 assessed whether patients implanted with a retinal prosthesis (having been blinded by a retinal degenerative disease) could use this new visual signal together with non-visual information to improve their speed or precision on multisensory tasks. Owing to large differences between the reliabilities of the visual and non-visual cues, patients were not always able to benefit from the new visual signal. Chapter 4 assessed whether patients with degenerative visual loss adjust the weight given to visual and non-visual cues during audio-visual localization as the cues' relative reliabilities change. Although some patients adjusted their reliance on vision across the visual field in line with predictions based on relative cue reliability, others (patients whose visual loss was limited to the central visual field) did not. Chapter 5 assessed whether training with more reliable or less reliable visual feedback could enable normally sighted adults to overcome an auditory localization bias. The findings suggest that visual information, irrespective of its reliability, can be used to overcome at least some non-visual biases. In summary, this thesis documents multisensory changes that follow changes to the visual sense. The results improve our understanding of adult multisensory plasticity and have implications for successful treatment and rehabilitation following sensory loss.
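
    As a back-of-the-envelope illustration of why large reliability differences can abolish a measurable multisensory benefit, the sketch below computes the ideal-observer (maximum-likelihood) variance reduction for a range of visual-to-non-visual noise ratios. The numbers are purely illustrative and are not the patients' data.

```python
import numpy as np

def multisensory_gain(sigma_vis, sigma_nonvis):
    """Predicted benefit of optimally combining a visual and a non-visual cue.

    Under maximum-likelihood integration the combined variance is
    var = sigma_vis^2 * sigma_nonvis^2 / (sigma_vis^2 + sigma_nonvis^2),
    never larger than the better single cue's variance. Returns the combined
    SD divided by the better single-cue SD (1.0 means no benefit; 1/sqrt(2)
    ~= 0.71 is the largest possible gain, obtained with matched cues).
    """
    var_combined = (sigma_vis**2 * sigma_nonvis**2) / (sigma_vis**2 + sigma_nonvis**2)
    return np.sqrt(var_combined) / min(sigma_vis, sigma_nonvis)

# Visual cue 1x, 2x, 4x, 8x, 16x noisier than the non-visual cue.
for ratio in [1, 2, 4, 8, 16]:
    print(ratio, round(multisensory_gain(sigma_vis=ratio * 1.0, sigma_nonvis=1.0), 3))
# With matched cues the predicted SD drops by ~29%; at an 8-16x mismatch the
# predicted benefit shrinks to ~1% or less, too small to detect behaviourally.
```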

    Bayesian Cognitive Science, Unification, and Explanation

    It is often claimed that the greatest value of the Bayesian framework in cognitive science consists in its unifying power. Several Bayesian cognitive scientists assume that unification is obviously linked to explanatory power. But this link is not obvious, as unification in science is a heterogeneous notion, which may have little to do with explanation. While a crucial feature of most adequate explanations in cognitive science is that they reveal aspects of the causal mechanism that produces the phenomenon to be explained, the kind of unification afforded by the Bayesian framework to cognitive science does not necessarily reveal aspects of a mechanism. Bayesian unification can nonetheless place fruitful constraints on causal-mechanical explanation.

    Bayesian priors are encoded independently from likelihoods in human multisensory perception

    It has been shown that the human combination of crossmodal information is highly consistent with an optimal Bayesian model performing causal inference. These findings have shed light on the computational principles governing crossmodal integration and segregation. Intuitively, in a Bayesian framework priors represent a priori information about the environment, i.e., information available prior to encountering the given stimuli, and are thus not dependent on the current stimuli. While many consider this interpretation a defining characteristic of Bayesian computation, Bayes' rule per se does not require that priors remain constant when the stimulus changes substantially; therefore, demonstrating the Bayes-optimality of a task does not imply that the priors are invariant to varying likelihoods. This issue had not been addressed before. Here, we empirically investigated the independence of priors from likelihoods by strongly manipulating the presumed likelihoods (using two drastically different sets of stimuli) and examining whether the estimated priors change or remain the same. The results suggest that the estimated prior probabilities are indeed independent of the immediate input and, hence, of the likelihood.
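
    A small numerical sketch of the distinction being tested: in Bayes' rule the prior and the likelihood enter as separate factors, so one and the same prior can be combined with drastically different likelihoods. Below, a posterior over a spatial variable is computed with a fixed Gaussian prior and two likelihood widths standing in for the two stimulus sets; all values are illustrative.

```python
def posterior_mean_var(x, sigma_like, mu_prior, sigma_prior):
    """Gaussian prior x Gaussian likelihood: conjugate posterior mean and variance."""
    prec_like, prec_prior = 1 / sigma_like**2, 1 / sigma_prior**2
    var_post = 1 / (prec_like + prec_prior)
    mu_post = var_post * (prec_like * x + prec_prior * mu_prior)
    return mu_post, var_post

# One and the same prior...
mu_prior, sigma_prior = 0.0, 5.0

# ...combined with a reliable stimulus set (narrow likelihood)...
print(posterior_mean_var(x=10.0, sigma_like=1.0, mu_prior=mu_prior, sigma_prior=sigma_prior))
# ...and with an unreliable stimulus set (broad likelihood).
print(posterior_mean_var(x=10.0, sigma_like=8.0, mu_prior=mu_prior, sigma_prior=sigma_prior))
# The prior's parameters never change; only its weight in the posterior does.
# The study asks the converse question: when priors are *estimated* from behaviour
# under the two stimulus sets, do the estimates agree?
```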

    Moving in time: Bayesian causal inference explains movement coordination to auditory beats.

    Many everyday skilled actions depend on moving in time with signals that are embedded in complex auditory streams (e.g., musical performance, dancing, or simply holding a conversation). Such behaviour is apparently effortless; however, it is not known how humans combine auditory signals to support movement production and coordination. Here, we test how participants synchronize their movements when there are potentially conflicting auditory targets to guide their actions. Participants tapped their fingers in time with two simultaneously presented metronomes of equal tempo but differing in phase and temporal regularity. Synchronization therefore depended on either integrating the two timing cues into a single event estimate or treating the cues as independent and selecting one signal over the other. We show that a Bayesian inference process explains the situations in which participants choose to integrate or separate the signals, and predicts their motor timing errors. Simulations of this causal inference process demonstrate that the model provides a better description of the data than other plausible models. Our findings suggest that humans exploit a Bayesian inference process to control movement timing in situations where the origin of auditory signals needs to be resolved.

    Multisensory Uncertainty Reduction for Hand Localization in Children and Adults

    Adults can integrate multiple sensory estimates to reduce their uncertainty in perceptual and motor tasks. In recent studies, children did not show this ability until after 8 years of age. Here we investigated the development of the ability to integrate vision with proprioception to localize the hand. We tested 109 4- to 12-year-olds and adults on a simple pointing task. Participants used an unseen hand beneath a table to point to targets on top of the table that were presented to vision alone, proprioception alone, or both together. Overall, 7- to 9-year-olds' and adults' points were significantly less variable given vision and proprioception together than given either alone. However, this variance reduction was present at all ages in the subset of participants whose proprioceptive estimates were less than two times more variable than their visual estimates. These results, together with analyses of cue weighting, indicate that all groups integrated vision and proprioception, but only 7- to 9-year-olds and adults consistently selected cue weights appropriate to their own single-cue reliabilities. The cue weights used at 4–6 and 10–12 years still allowed over half of the participants at these ages to reduce their pointing variability. One explanation for the poorer group-level cue weighting at 10–12 years is that this age represents a period of relatively rapid physical growth. An existing Bayesian model of hand localization did not describe either adults' or children's data well, but the results suggest future improvements to the model.
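
    For reference, the ideal-observer (maximum-likelihood) predictions against which cue weights and variance reduction are typically evaluated can be sketched as follows. The noise levels are illustrative examples, not the children's data; the "less than two times more variable" criterion from the abstract is echoed in the comments.

```python
def mle_predictions(sigma_vision, sigma_prop):
    """Ideal-observer (MLE) predictions for combining vision and proprioception.

    Optimal weights are proportional to each cue's reliability (1/variance);
    the combined variance is below either single-cue variance.
    Returns (optimal visual weight, combined variance).
    """
    rel_v, rel_p = 1 / sigma_vision**2, 1 / sigma_prop**2
    w_vision = rel_v / (rel_v + rel_p)
    var_combined = 1 / (rel_v + rel_p)
    return w_vision, var_combined

# Proprioceptive SD less than twice the visual SD: weights are not extreme
# and the predicted variance reduction is large enough to measure.
print(mle_predictions(sigma_vision=1.0, sigma_prop=1.5))

# Much noisier proprioception: the optimal observer relies almost entirely
# on vision, so any failure to reduce variance is hard to detect.
print(mle_predictions(sigma_vision=1.0, sigma_prop=4.0))
```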

    Self-motion leads to mandatory cue fusion across sensory modalities

    When perceiving properties of the world, we effortlessly combine multiple sensory cues into optimal estimates. Estimates derived from the individual cues are generally retained once the multisensory estimate is produced and are discarded only if the cues stem from the same sensory modality (i.e., mandatory fusion). Does multisensory integration differ in this respect when the object of perception is one's own body rather than an external variable? We quantified how humans combine visual and vestibular information for perceiving own-body rotations and specifically tested whether such idiothetic cues are subject to mandatory fusion. Participants made extensive size comparisons between successive whole-body rotations using only visual, only vestibular, and both senses together. Probabilistic descriptions of the subjects' perceptual estimates were compared with a Bayes-optimal integration model. The similarity between model predictions and experimental data indicated a statistically optimal mechanism of multisensory integration. Most importantly, size discrimination data for rotations composed of both stimuli were best accounted for by a model in which only the bimodal estimator is accessible for perceptual judgments, as opposed to an independent or additive use of all three estimators (visual, vestibular, and bimodal). Indeed, subjects' thresholds for detecting two multisensory rotations as different from one another were, in pertinent cases, larger than those measured using either single-cue estimate alone. Rotations that differed in terms of the individual visual and vestibular inputs but were quasi-identical in terms of the integrated bimodal estimate became perceptual metamers. This reveals an exceptional case of mandatory fusion of cues stemming from two different sensory modalities.
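
    The perceptual-metamer result can be made concrete with a small sketch: if only the reliability-weighted bimodal estimate is available for the size judgment, two rotations whose visual and vestibular components differ but whose fused estimates coincide should be indistinguishable. The weight and rotation magnitudes below are illustrative, not the study's fitted values.

```python
def fused_rotation(vis_deg, vest_deg, w_vis=0.6):
    """Reliability-weighted bimodal estimate of a whole-body rotation (degrees).

    Under mandatory fusion this is the only quantity available for the
    size-comparison judgment; the unimodal estimates are discarded.
    """
    return w_vis * vis_deg + (1 - w_vis) * vest_deg

# Two physically different rotations...
rot_a = fused_rotation(vis_deg=50.0, vest_deg=60.0)  # smaller visual, larger vestibular component
rot_b = fused_rotation(vis_deg=56.0, vest_deg=51.0)  # larger visual, smaller vestibular component

# ...whose fused estimates coincide: a perceptual metamer under mandatory fusion.
print(rot_a, rot_b)  # both 54.0 degrees
```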