    Vestibular Facilitation of Optic Flow Parsing

    Simultaneous object motion and self-motion give rise to complex patterns of retinal image motion. In order to estimate object motion accurately, the brain must parse this complex retinal motion into self-motion and object motion components. Although this computational problem can be solved, in principle, through purely visual mechanisms, extra-retinal information that arises from the vestibular system during self-motion may also play an important role. Here we investigate whether combining vestibular and visual self-motion information improves the precision of object motion estimates. Subjects were asked to discriminate the direction of object motion in the presence of simultaneous self-motion, depicted either by visual cues alone (i.e. optic flow) or by combined visual/vestibular stimuli. We report a small but significant improvement in object motion discrimination thresholds with the addition of vestibular cues. This improvement was greatest for eccentric heading directions and negligible for forward movement, a finding that could reflect increased relative reliability of vestibular versus visual cues for eccentric heading directions. Overall, these results are consistent with the hypothesis that vestibular inputs can help parse retinal image motion into self-motion and object motion components
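
    The reliability argument in the last sentence can be made concrete with inverse-variance-weighted cue combination. The Python sketch below is not the authors' analysis; the single-cue heading thresholds are hypothetical numbers chosen only to show why combining cues helps most when their reliabilities are comparable (eccentric headings) and barely at all when one cue dominates (forward headings).

```python
import numpy as np

# Hypothetical single-cue heading discrimination thresholds (deg). These are
# illustrative stand-ins for cue reliability, not values from the study.
sigma_visual = {"forward": 2.0, "eccentric": 6.0}
sigma_vestibular = {"forward": 8.0, "eccentric": 7.0}

for heading in ("forward", "eccentric"):
    sv, sb = sigma_visual[heading], sigma_vestibular[heading]
    # Maximum-likelihood cue combination: precisions (inverse variances) add.
    combined = np.sqrt(1.0 / (1.0 / sv**2 + 1.0 / sb**2))
    gain = 100 * (1 - combined / min(sv, sb))
    print(f"{heading:9s}: best single cue {min(sv, sb):.1f} deg, "
          f"combined {combined:.1f} deg ({gain:.0f}% improvement)")
```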

    Multisensory causal inference in the brain

    At any given moment, our brain processes multiple inputs from its different sensory modalities (vision, hearing, touch, etc.). In deciphering this array of sensory information, the brain has to solve two problems: (1) which of the inputs originate from the same object and should be integrated and (2) for the sensations originating from the same object, how best to integrate them. Recent behavioural studies suggest that the human brain solves these problems using optimal probabilistic inference, known as Bayesian causal inference. However, how and where the underlying computations are carried out in the brain have remained unknown. By combining neuroimaging-based decoding techniques and computational modelling of behavioural data, a new study now sheds light on how multisensory causal inference maps onto specific brain areas. The results suggest that the complexity of neural computations increases along the visual hierarchy and link specific components of the causal inference process with specific visual and parietal regions
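
    For readers unfamiliar with the model class, below is a minimal Python sketch of Bayesian causal inference for one visual and one auditory cue, using the standard model-averaging formulation (a posterior-weighted mix of the "common cause" and "separate causes" estimates). The noise levels, prior width and prior probability of a common cause are illustrative assumptions, not parameters from the study discussed.

```python
import numpy as np
from scipy.stats import norm

def causal_inference_estimate(x_v, x_a, sigma_v=1.0, sigma_a=2.0,
                              sigma_p=10.0, p_common=0.5):
    """Model-averaged estimate of the visual source location from noisy
    visual (x_v) and auditory (x_a) measurements, zero-mean Gaussian prior."""
    # Likelihood of both measurements under a single common cause
    # (the latent source location is marginalised out analytically).
    var_c1 = (sigma_v**2 * sigma_a**2 + sigma_v**2 * sigma_p**2
              + sigma_a**2 * sigma_p**2)
    like_c1 = (np.exp(-0.5 * ((x_v - x_a)**2 * sigma_p**2
                              + x_v**2 * sigma_a**2
                              + x_a**2 * sigma_v**2) / var_c1)
               / (2 * np.pi * np.sqrt(var_c1)))
    # Likelihood under two independent causes.
    like_c2 = (norm.pdf(x_v, 0, np.sqrt(sigma_v**2 + sigma_p**2))
               * norm.pdf(x_a, 0, np.sqrt(sigma_a**2 + sigma_p**2)))
    # Posterior probability that the cues share a common cause.
    post_c1 = (like_c1 * p_common
               / (like_c1 * p_common + like_c2 * (1 - p_common)))
    # Optimal location estimates under each causal structure.
    s_c1 = ((x_v / sigma_v**2 + x_a / sigma_a**2)
            / (1 / sigma_v**2 + 1 / sigma_a**2 + 1 / sigma_p**2))
    s_c2 = (x_v / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_p**2)
    # Model averaging: weight each estimate by its posterior probability.
    return post_c1 * s_c1 + (1 - post_c1) * s_c2, post_c1

estimate, p_c1 = causal_inference_estimate(x_v=3.0, x_a=8.0)
print(f"visual estimate {estimate:.2f}, P(common cause) = {p_c1:.2f}")
```

    With widely separated cues the posterior probability of a common cause falls and the estimate stays close to the unisensory value; with nearby cues it rises and the estimate is pulled towards the fused value, which is the qualitative signature the decoding study maps onto specific visual and parietal regions.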

    A neural circuit model of decision uncertainty and change-of-mind

    Decision-making is often accompanied by a degree of confidence about whether a choice is correct. Decision uncertainty, or lack of confidence, may lead to change-of-mind. Studies have identified the behavioural characteristics associated with decision confidence or change-of-mind, and their neural correlates. Although several theoretical accounts have been proposed, there is no neural model that can compute decision uncertainty and explain its effects on change-of-mind. We propose a neuronal circuit model that computes decision uncertainty while accounting for a variety of behavioural and neural data on decision confidence and change-of-mind, including testable model predictions. Our theoretical analysis suggests that change-of-mind occurs due to the presence of a transient uncertainty-induced choice-neutral stable steady state and noisy fluctuation within the neuronal network. Our distributed network model indicates that the neural basis of change-of-mind is more distinctively identified in motor-based neurons. Overall, our model provides a framework that unifies decision confidence and change-of-mind
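
    The proposed mechanism is a specific attractor circuit, which is beyond a short example, but the basic behavioural phenomenon can be illustrated with a toy bounded-accumulation sketch in which noisy evidence continues to arrive after the initial commitment and occasionally reverses it. All parameters below are arbitrary illustrative values; this is not the neuronal circuit model the abstract proposes.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial(drift=0.8, noise=1.0, thresh=1.0, dt=1e-3, t_max=1.5, t_post=0.3):
    """One simulated trial: noisy evidence x drifts to a +/- bound for the
    initial choice; accumulation continues briefly afterwards (evidence still
    in the processing pipeline) and can reverse the choice."""
    x, t = 0.0, 0.0
    while abs(x) < thresh and t < t_max:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    initial = 1 if x >= 0 else -1          # initial commitment
    for _ in range(int(t_post / dt)):
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
    final = 1 if x >= 0 else -1            # choice actually reported
    return initial, final

n_trials = 2000
outcomes = [trial() for _ in range(n_trials)]
changes = [o for o in outcomes if o[0] != o[1]]
corrective = sum(1 for first, last in changes if first < 0 < last)
print(f"change-of-mind rate {len(changes) / n_trials:.1%}, "
      f"corrective {corrective}/{len(changes)}")
```

    In this toy version most reversals correct an initial error, consistent with the empirical observation that changes of mind tend to improve accuracy.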

    The threshold for the McGurk effect in audio-visual noise decreases with development

    Across development, vision increasingly influences audio-visual perception. This is evidenced in illusions such as the McGurk effect, in which a seen mouth movement changes the perceived sound. The current paper assessed the effects of manipulating the clarity of the heard and seen signal upon the McGurk effect in children aged 3–6 (n=29), 7–9 (n=32) and 10–12 (n=29) years, and adults aged 20–35 years (n=32). Auditory noise increased, and visual blur decreased, the likelihood of vision changing auditory perception. Based upon a proposed developmental shift from auditory to visual dominance we predicted that younger children would be less susceptible to McGurk responses, and that adults would continue to be influenced by vision in higher levels of visual noise and with less auditory noise. Susceptibility to the McGurk effect was higher in adults compared with 3–6-year-olds and 7–9-year-olds but not 10–12-year-olds. Younger children required more auditory noise, and less visual noise, than adults to induce McGurk responses (i.e. adults and older children were more easily influenced by vision). Reduced susceptibility in childhood supports the theory that sensory dominance shifts across development and reaches adult-like levels by 10 years of age

    Haptic adaptation to slant: No transfer between exploration modes

    Human touch is an inherently active sense: to estimate an object’s shape humans often move their hand across its surface. In this way the object is sampled in both a serial fashion (sampling different parts of the object across time) and a parallel fashion (sampling using different parts of the hand simultaneously). Both the serial (moving a single finger) and parallel (static contact with the entire hand) exploration modes provide reliable and similar global shape information, suggesting the possibility that this information is shared early in the sensory cortex. In contrast, here we show the opposite. Using an adaptation-and-transfer paradigm, a change in haptic perception was induced by slant-adaptation using either the serial or parallel exploration mode. A unified shape-based coding would predict that this would equally affect perception using other exploration modes. However, we found that adaptation-induced perceptual changes did not transfer between exploration modes. Instead, serial and parallel exploration components adapted simultaneously, but to different kinaesthetic aspects of exploration behaviour rather than object shape per se. These results indicate that a potential combination of information from different exploration modes can only occur at downstream cortical processing stages, at which adaptation is no longer effective

    A nonlinear updating algorithm captures suboptimal inference in the presence of signal-dependent noise

    Bayesian models have advanced the idea that humans combine prior beliefs and sensory observations to optimize behavior. How the brain implements Bayes-optimal inference, however, remains poorly understood. Simple behavioral tasks suggest that the brain can flexibly represent probability distributions. An alternative view is that the brain relies on simple algorithms that can implement Bayes-optimal behavior only when the computational demands are low. To distinguish between these alternatives, we devised a task in which Bayes-optimal performance could not be matched by simple algorithms. We asked subjects to estimate and reproduce a time interval by combining prior information with one or two sequential measurements. In the domain of time, measurement noise increases with duration. This property takes the integration of multiple measurements beyond the reach of simple algorithms. We found that subjects were able to update their estimates using the second measurement but their performance was suboptimal, suggesting that they were unable to update full probability distributions. Instead, subjects’ behavior was consistent with an algorithm that predicts upcoming sensory signals, and applies a nonlinear function to errors in prediction to update estimates. These results indicate that the inference strategies employed by humans may deviate from Bayes-optimal integration when the computational demands are high
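
    To make the ideal-observer benchmark concrete: with scalar (signal-dependent) noise, each measurement's likelihood width grows with the duration being estimated, so optimally combining two measurements requires carrying a full posterior rather than applying a fixed weighted average. The Python sketch below computes that benchmark on a grid; the Weber fraction, prior range and sample measurements are assumptions, and it is not the nonlinear updating algorithm the subjects appear to use.

```python
import numpy as np

def bayes_estimate(measurements, w=0.15, prior_lo=0.6, prior_hi=1.0, n=2000):
    """Posterior-mean estimate of an interval t (s) from measurements
    m_i ~ N(t, (w*t)^2), i.e. scalar noise, under a uniform prior on
    [prior_lo, prior_hi]. All numbers are illustrative."""
    t = np.linspace(prior_lo, prior_hi, n)        # grid over the prior support
    log_post = np.zeros_like(t)
    for m in measurements:
        sigma = w * t                             # noise width grows with duration
        log_post += -0.5 * ((m - t) / sigma) ** 2 - np.log(sigma)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return float(np.sum(t * post))                # posterior mean

print(bayes_estimate([0.85]))        # estimate from a single measurement
print(bayes_estimate([0.85, 0.78]))  # optimal update after a second measurement
```

    As in Bayesian interval reproduction generally, the posterior-mean estimate is biased towards the middle of the prior, and the second measurement shifts and sharpens it in a duration-dependent way that a fixed-weight average cannot reproduce.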

    The COGs (context, object, and goals) in multisensory processing

    Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and “top-down” control processes influencing sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have been traditionally studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer’s goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and the cognitive level. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and categorical attributes of naturalistic objects (e.g. tools, animals). In this review we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature review with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications

    Multisensory effects on somatosensation: a trimodal visuo-vestibular-tactile interaction

    Vestibular information about self-motion is combined with other sensory signals. Previous research described both visuo-vestibular and vestibular-tactile bilateral interactions, but the simultaneous interaction between all three sensory modalities has not been explored. Here we exploit a previously reported visuo-vestibular integration to investigate multisensory effects on tactile sensitivity in humans. Tactile sensitivity was measured during passive whole body rotations alone or in conjunction with optic flow, creating either purely vestibular or visuo-vestibular sensations of self-motion. Our results demonstrate that tactile sensitivity is modulated by perceived self-motion, as provided by a combined visuo-vestibular percept and not by the visual and vestibular cues independently. We propose a hierarchical multisensory interaction that underpins somatosensory modulation: visual and vestibular cues are first combined to produce a multisensory self-motion percept. Somatosensory processing is then enhanced according to the degree of perceived self-motion
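
    A very small sketch of the two-stage scheme proposed in the closing sentences, with illustrative numbers: visual and vestibular self-motion signals are first fused by reliability weighting, and tactile sensitivity is then scaled by the fused percept rather than by either unisensory signal alone. The functional forms, gain and values are assumptions made for illustration only.

```python
def self_motion_percept(vis, vest, sigma_vis=2.0, sigma_vest=4.0):
    """Stage 1 (illustrative): reliability-weighted fusion of visual and
    vestibular self-motion estimates (e.g. rotation speed in deg/s)."""
    w_vis = sigma_vest**2 / (sigma_vis**2 + sigma_vest**2)
    return w_vis * vis + (1 - w_vis) * vest

def tactile_dprime(baseline, percept, gain=0.02):
    """Stage 2 (illustrative): tactile sensitivity is enhanced in proportion
    to the fused self-motion percept, not to the individual cues."""
    return baseline * (1 + gain * percept)

percept = self_motion_percept(vis=30.0, vest=20.0)   # hypothetical deg/s
print(f"fused self-motion {percept:.1f} deg/s, "
      f"tactile d' {tactile_dprime(1.5, percept):.2f} (baseline 1.50)")
```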

    Visual area V5/hMT+ contributes to perception of tactile motion direction: a TMS study

    Human imaging studies have reported activations associated with tactile motion perception in visual motion area V5/hMT+, primary somatosensory cortex (SI) and posterior parietal cortex (PPC; Brodmann areas 7/40). However, such studies cannot establish whether these areas are causally involved in tactile motion perception. We delivered double-pulse transcranial magnetic stimulation (TMS) while moving a single tactile point across the fingertip, and used signal detection theory to quantify perceptual sensitivity to motion direction. TMS over both SI and V5/hMT+, but not the PPC site, significantly reduced tactile direction discrimination. Our results show that V5/hMT+ plays a causal role in tactile direction processing, and strengthen the case for V5/hMT+ serving multimodal motion perception. Further, our findings are consistent with a serial model of cortical tactile processing, in which higher-order perceptual processing depends upon information received from SI. By contrast, our results do not provide clear evidence that the PPC site we targeted (Brodmann areas 7/40) contributes to tactile direction perception
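
    The sensitivity measure referred to above is the signal-detection index d'. A minimal computation is sketched below; the trial counts are hypothetical and simply illustrate how a drop in d' under TMS over V5/hMT+ would be quantified.

```python
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate), with a
    log-linear correction so extreme proportions stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical direction-discrimination counts for two stimulation conditions.
print(f"no-TMS baseline d' = {dprime(42, 8, 10, 40):.2f}")
print(f"V5/hMT+ TMS d'     = {dprime(33, 17, 15, 35):.2f}")  # lower = reduced sensitivity
```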