
    Distortions of Subjective Time Perception Within and Across Senses

Background: The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings: We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual, and auditory-visual presentations, despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perceived duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by conflicting visual information; however, the perceived duration of visual events was seldom distorted by auditory information, and visual events were never perceived as shorter than their actual durations. Conclusions/Significance: These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Because the distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual, or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions.

    Being first matters: topographical representational similarity analysis of ERP signals reveals separate networks for audiovisual temporal binding depending on the leading sense

In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Inter-sensory timing is crucial in this process, as only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window (TBW), revealing asymmetries in its size and plasticity depending on the leading input (auditory-visual, AV; visual-auditory, VA). Here we tested whether separate neuronal mechanisms underlie this AV-VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV/VA asynchronies and unisensory control conditions (visual-only, auditory-only), and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of the AV/VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing AV and VA ERP maps. Spatial cross-correlation matrices were built from the real data to index the similarity between AV and VA maps at each time point (500 ms post-stimulus window) and then correlated with two alternative similarity model matrices: AVmaps = VAmaps vs. AVmaps ≠ VAmaps. The tRSA results favored the AVmaps ≠ VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate the different information parsing strategies of the auditory and visual sensory systems.
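To make the tRSA logic concrete, here is a minimal sketch in Python of the core computation described above: spatial correlation between AV and VA topographies at each time point, compared against the two candidate similarity models. The array shapes, condition counts, and random data are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Illustrative tRSA sketch: compare AV- and VA-led ERP topographies at
# each time point and test which similarity model fits better.
# Shapes are assumed: (n_conditions, n_electrodes, n_timepoints).
rng = np.random.default_rng(0)
n_cond, n_elec, n_time = 4, 64, 250        # e.g. 64 channels, 500 ms at 500 Hz
av_maps = rng.standard_normal((n_cond, n_elec, n_time))
va_maps = rng.standard_normal((n_cond, n_elec, n_time))

def cross_correlation_matrix(a, b, t):
    """Spatial correlation between every AV and VA topography at time t."""
    m = np.empty((a.shape[0], b.shape[0]))
    for i in range(a.shape[0]):
        for j in range(b.shape[0]):
            m[i, j], _ = pearsonr(a[i, :, t], b[j, :, t])
    return m

# Two alternative similarity models: AV maps equal VA maps (identity
# structure) vs. AV maps differ from VA maps (no shared structure).
model_same = np.eye(n_cond)
model_diff = 1.0 - np.eye(n_cond)

for t in range(0, n_time, 50):             # evaluate a few time points
    rdm = cross_correlation_matrix(av_maps, va_maps, t)
    fit_same, _ = spearmanr(rdm.ravel(), model_same.ravel())
    fit_diff, _ = spearmanr(rdm.ravel(), model_diff.ravel())
    print(f"t={t * 2:3d} ms  same: {fit_same:+.2f}  diff: {fit_diff:+.2f}")
```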

    Drifting perceptual patterns suggest prediction errors fusion rather than hypothesis selection: replicating the rubber-hand illusion on a robot

Humans can experience fake body parts as their own through simple synchronous visuo-tactile stimulation. This body illusion is accompanied by a drift in the perceived position of the real limb towards the fake limb, suggesting that the stimulation updates the body estimate. This work compares the limb-drift patterns of human participants in a rubber hand illusion experiment with the end-effector estimation displacement of a multisensory robotic arm equipped with predictive processing perception. Results show similar drift patterns in the human and robot experiments, and suggest that the perceptual drift is due to prediction error fusion rather than hypothesis selection. We present body inference through prediction error minimization as a single process that unites predictive coding and causal inference and is responsible for the perceptual effects observed when we are subjected to intermodal sensory perturbations. (Comment: Proceedings of the 2018 IEEE International Conference on Development and Learning and Epigenetic Robotics.)
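The contrast between prediction error fusion and hypothesis selection can be illustrated with a toy one-dimensional body-estimation update. The sketch below is an assumption-laden illustration (all observations, variances, and gains are invented), not the authors' robot model: fusion drifts the limb estimate to a precision-weighted compromise between the modalities, whereas selection commits entirely to one hypothesis.

```python
# Toy 1-D body-estimation update: where is the hand believed to be,
# given proprioception (real hand) and vision (fake hand)?
prop_obs, vis_obs = 0.0, 10.0    # cm; fake hand placed 10 cm away (assumed)
var_prop, var_vis = 4.0, 1.0     # sensory noise variances (assumed)

x = prop_obs                      # initial belief: at the real hand
lr = 0.1                          # gradient step size (assumed)

for _ in range(100):
    # Precision-weighted prediction errors from each modality.
    err_prop = (prop_obs - x) / var_prop
    err_vis = (vis_obs - x) / var_vis
    # FUSION: minimize both errors jointly -> the belief drifts to a
    # precision-weighted compromise between the two hand positions.
    x += lr * (err_prop + err_vis)

fusion_estimate = x               # converges to 8.0 cm here

# SELECTION: commit to the single more reliable hypothesis and ignore
# the other; the belief jumps to one hand with no partial drift.
selection_estimate = vis_obs if var_vis < var_prop else prop_obs

print(f"fusion drift:   {fusion_estimate:.2f} cm")    # intermediate value
print(f"selection jump: {selection_estimate:.2f} cm")  # all-or-none
```

A graded, intermediate drift like the fusion output is what both the human and robot data showed, which is why the authors favor fusion over selection.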

    Multisensory integration in dynamical behaviors: maximum likelihood estimation across bimanual skill learning

Optimal integration of different sensory modalities weights each modality as a function of its degree of certainty (maximum likelihood). Humans rely on near-optimal integration in decision-making tasks (involving, e.g., auditory, visual, and/or tactile afferents), and some support for these processes has also been provided for discrete sensorimotor tasks. Here, we tested optimal integration during the continuous execution of a motor task, using a cyclical bimanual coordination pattern in which feedback was provided by proprioception and by augmented visual feedback (AVF; the position of both wrists displayed as the orthogonal coordinates of a single cursor). Assuming maximum likelihood integration, the following predictions were addressed: (1) coordination variability with both AVF and proprioception available is smaller than with only one of the two modalities, and should reach an optimal level; (2) if the AVF is artificially corrupted by noise, variability should increase but saturate toward the level without AVF; (3) if the AVF is imperceptibly phase shifted, the stabilized pattern should be partly adapted to compensate for this phase shift, with the amount of compensation reflecting the weight assigned to the AVF in the computation of the integrated signal. Whereas performance variability gradually decreased over five days of practice, these model-based predictions were already borne out on the first day. This suggests not only that performers integrated proprioceptive feedback and AVF online during task execution, tending to optimize the signal statistics, but also that this occurred before an asymptotic performance level was reached.
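For reference, the maximum-likelihood scheme behind these predictions weights each cue by its inverse variance. The short sketch below (with illustrative variances, not values from the study) shows how prediction (1), the variance reduction, and prediction (3), the partial compensation of a phase shift, both follow from the same reliability weights.

```python
# Standard maximum-likelihood (minimum-variance) cue combination.
# Variances below are illustrative, not values from the study.
var_prop = 9.0   # proprioceptive variance (deg^2), assumed
var_avf = 4.0    # augmented-visual-feedback variance (deg^2), assumed

# Each cue is weighted by its relative reliability (inverse variance).
w_avf = (1 / var_avf) / (1 / var_avf + 1 / var_prop)
w_prop = 1 - w_avf

# Prediction (1): combined variance falls below either unimodal variance.
var_combined = (var_avf * var_prop) / (var_avf + var_prop)
print(f"weights: AVF={w_avf:.2f}, prop={w_prop:.2f}")
print(f"combined variance: {var_combined:.2f} < min({var_avf}, {var_prop})")

# Prediction (3): an imperceptible phase shift applied to the AVF alone
# should be compensated in proportion to the weight assigned to the AVF.
phase_shift = 15.0                        # deg, applied to AVF only (assumed)
expected_compensation = w_avf * phase_shift
print(f"expected compensation: {expected_compensation:.1f} of {phase_shift} deg")
```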

    Multisensory perception and decision-making with a new sensory skill

It is clear that people can learn a new sensory skill – a new way of mapping sensory inputs onto world states. It remains unclear how flexibly a new sensory skill can become embedded in multisensory perception and decision-making. To address this, we trained typically-sighted participants (N=12) to use a new echo-like auditory cue to distance in a virtual world, together with a noisy visual cue. Using model-based analyses, we tested for key markers of efficient multisensory perception and decision-making with the new skill. We found that twelve of fourteen participants learned to judge distance using the novel auditory cue. Their use of this new sensory skill showed three key features: (1) it enhanced the speed of timed decisions; (2) it largely resisted interference from a simultaneous digit span task; and (3) it integrated with vision in a Bayes-like manner to improve precision. We also show some limits following this relatively short training: precision benefits were lower than the Bayes-optimal prediction, and there was no forced fusion of signals. We conclude that people can embed new sensory skills in flexible multisensory perception and decision-making after a short training period. A key application of these insights is the development of sensory augmentation systems that can enhance human perceptual abilities in novel ways. The limitations we reveal (sub-optimality, lack of fusion) provide a foundation for further investigations of the limits of these abilities and their brain basis.

A mechanistic approach to postural development in children

Upright standing is intrinsically unstable and requires active control. This active control is provided by the central nervous system's feedback processes, which integrate multi-sensory information to generate appropriate motor commands for the plant (the body with its musculotendon actuators). Maintaining standing balance is not trivial for a developing child, because the feedback and the plant are both developing and the sensory inputs used for feedback are continually changing. Knowledge gaps exist in characterizing the critical ability of adaptive multi-sensory reweighting for standing balance control in children. Furthermore, the separate contributions of the plant and feedback, and their relationship, are poorly understood in children, especially considering that the body is multi-jointed and feedback is multi-sensory. The purpose of this dissertation is to use a mechanistic approach to study the multi-sensory abilities of typically developing (TD) children and children with Developmental Coordination Disorder (DCD). The specific aims are: 1) to characterize postural control under different multi-sensory conditions in TD children and children with DCD; 2) to characterize the development of adaptive multi-sensory reweighting in TD children and children with DCD; and 3) to identify the plant and feedback for postural control in TD children and how they change in response to visual reweighting.

In the first experiment (Aim 1), TD children, adults, and 7-year-old children with DCD were tested under four sensory conditions (no touch/no vision, touch/no vision, no touch/vision, and touch/vision). We found that touch robustly attenuated standing sway in all age groups. Children with DCD used touch less effectively than their TD peers, although they did benefit from vision to reduce sway.

In the second experiment (Aim 2), TD children (4 to 10 years old) and children with DCD (6 to 11 years old) were presented with simultaneous small-amplitude touch-bar and visual-scene movement at 0.28 and 0.2 Hz, respectively, within five conditions that independently varied the amplitude of the stimuli. We found that TD children could reweight to both touch and vision from 4 years of age onward, and that the amount of reweighting increased with age; multisensory fusion (i.e., inter-modal reweighting), however, was only observed in the older children. Children with DCD reweighted to both touch and vision at a later age (10.8 years) than their TD peers, and even the older children with DCD did not show advanced multisensory fusion. Two signature deficits of multisensory reweighting in DCD were weak visual reweighting and a general phase lag to both sensory modalities.

The final experiment (Aim 3) involved closed-loop system identification of the plant and feedback, using electromyography (EMG) and kinematic responses to a high- or low-amplitude visual perturbation and two mechanical perturbations, in six- and ten-year-old children and adults. We found that the plant differs between children and adults: children demonstrated a smaller phase difference between trunk and leg than adults at higher frequencies. Feedback in children was qualitatively similar to that of adults; quantitatively, children showed less phase advance at the peak of the feedback curve, which may be due to a longer time delay. Under the high- and low-amplitude visual conditions, children showed less gain change (interpreted as reweighting) than adults in both the kinematic and EMG responses. The observed kinematic and EMG reweighting was mainly due to the central nervous system's differential use of visual information, as measured by the open-loop mapping from visual scene angle to EMG activity; the plant and the feedback did not contribute to reweighting.
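The gain and phase measures used in the final aim come from standard frequency-response estimation. Below is a minimal sketch on synthetic data (the sampling rate, trial length, and signal parameters are assumptions, not the dissertation's values) showing how gain and phase at the stimulus frequency can be read off the estimated transfer function between visual scene angle and sway.

```python
import numpy as np
from scipy.signal import csd, welch

fs = 100.0                                 # sampling rate (Hz), assumed
t = np.arange(0, 120, 1 / fs)              # one 2-minute trial, assumed
f_stim = 0.2                               # visual scene frequency (Hz)

# Synthetic data: a sinusoidal scene angle and a sway response that is
# attenuated (gain 0.5) and lags by 30 degrees, plus measurement noise.
scene = np.sin(2 * np.pi * f_stim * t)
sway = 0.5 * np.sin(2 * np.pi * f_stim * t - np.deg2rad(30))
sway += 0.2 * np.random.default_rng(1).standard_normal(t.size)

# Frequency-response function H(f) = S_xy(f) / S_xx(f).
f, s_xy = csd(scene, sway, fs=fs, nperseg=4096)
_, s_xx = welch(scene, fs=fs, nperseg=4096)
h = s_xy / s_xx

k = np.argmin(np.abs(f - f_stim))          # bin nearest the stimulus
gain = np.abs(h[k])                        # ~0.5; reweighting = gain change
phase = np.degrees(np.angle(h[k]))         # ~ -30; lag behind the scene
print(f"gain at {f[k]:.2f} Hz: {gain:.2f}, phase: {phase:.1f} deg")
```

A smaller gain change across high- and low-amplitude visual conditions, computed this way per condition, is the "less reweighting" result reported for children.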

    The impact of joint attention on the sound-induced flash illusions

Humans coordinate their focus of attention with others, either by gaze following or by prior agreement. Though the effects of joint attention on perceptual and cognitive processing tend to be examined in purely visual environments, they should also manifest in multisensory settings. According to a prevalent hypothesis, joint attention enhances visual information encoding and processing, over and above individual attention. If two individuals jointly attend to the visual components of an audiovisual event, this should affect the weighting of visual information during multisensory integration. We tested this prediction in this preregistered study, using the well-documented sound-induced flash illusions, in which integrating an incongruent number of visual flashes and auditory beeps results in a single flash being seen as two (fission illusion) or two flashes being seen as one (fusion illusion). Participants were asked to count flashes either alone or together, and were expected to be less prone to both fission and fusion illusions when they jointly attended to the visual targets. However, illusions were equally frequent whether participants attended to the flashes alone or with someone else, even though they responded faster during joint attention. Our results reveal the limits of the theory that joint attention enhances visual processing, as it did not affect temporal audiovisual integration.

    Sensor Fusion in the Perception of Self-Motion

This dissertation was written at the Max Planck Institute for Biological Cybernetics (Max-Planck-Institut für Biologische Kybernetik) in Tübingen, in the department of Prof. Dr. Heinrich H. Bülthoff. The work was academically supported by Prof. Dr. Günther Palm (University of Ulm, Abteilung Neuroinformatik). The main evaluators were Prof. Dr. Günther Palm, Prof. Dr. Wolfgang Becker (University of Ulm, Sektion Neurophysiologie), and Prof. Dr. Heinrich Bülthoff.

The goal of this thesis was to investigate the integration of different sensory modalities in the perception of self-motion, using psychophysical methods. Experiments with healthy human participants were designed for and performed in the Motion Lab, which is equipped with a simulator platform and projection screen. Results from the psychophysical experiments were used to refine models of the multisensory integration process, with an emphasis on Bayesian (maximum likelihood) integration mechanisms.

To put the psychophysical experiments into the larger framework of research on multisensory integration in the brain, results of neuroanatomical and neurophysiological experiments on multisensory integration are also reviewed.