
    Predictability is necessary for closed-loop visual feedback delay adaptation

    Rohde M, van Dam L, Ernst MO. Predictability is necessary for closed-loop visual feedback delay adaptation. Journal of Vision. 2014;14(3):4.
    When visual feedback is delayed during visuomotor tasks, as in some sluggish computer games, humans can modulate their behavior to compensate for the delay. However, opinions on the nature of this compensation diverge. Some studies suggest that humans adapt to feedback delays with lasting changes in motor behavior (aftereffects) and a recalibration of time perception. Other studies have shown little or no evidence for such semipermanent recalibration in the temporal domain. We hypothesize that predictability of the reference signal (the target to be tracked) is necessary for semipermanent delay adaptation. To test this hypothesis, we trained participants with a 200 ms visual feedback delay in a visually guided manual tracking task, varying the predictability of the reference signal between conditions while keeping reference motion and feedback delay constant. In Experiment 1, we focused on motor behavior. Only training in the predictable condition brought about all of the adaptive changes and aftereffects expected from delay adaptation. In Experiment 2, we used a synchronization task to investigate perceived simultaneity (perceptuomotor learning). Supporting the hypothesis, participants recalibrated subjective visuomotor simultaneity only when trained in the predictable condition. Such a shift in perceived simultaneity was also observed in Experiment 3, which used an interval estimation task. These results show that delay adaptation in motor control can modulate the perceived temporal alignment of vision and kinesthetically sensed movement. The coadaptation of motor prediction and target prediction (reference extrapolation) seems necessary for such genuine delay adaptation. This offers an explanation for the divergent results in the literature.

    The Rubber Hand Illusion: Feeling of Ownership and Proprioceptive Drift Do Not Go Hand in Hand

    In the Rubber Hand Illusion, the feeling of ownership of a rubber hand displaced from a participant's real occluded hand is evoked by synchronously stroking both hands with paintbrushes. A change of perceived finger location towards the rubber hand (proprioceptive drift) has been reported to correlate with this illusion. To measure the time course of proprioceptive drift during the Rubber Hand Illusion, we regularly interrupted stroking (performed by robot arms) to measure perceived finger location. Measurements were made by projecting a probe dot into the field of view (using a semi-transparent mirror) and asking participants to judge whether the dot was to the left or to the right of their invisible hand (Experiment 1) or to adjust the position of the dot to that of their invisible hand (Experiment 2). We varied both the measurement frequency (every 10 s, 40 s, or 120 s) and the mode of stroking (synchronous, asynchronous, just vision). Surprisingly, with frequent measurements, proprioceptive drift occurs not only in the synchronous stroking condition but also in the two control conditions (asynchronous stroking, just vision). Proprioceptive drift in the synchronous stroking condition is never higher than in the just vision condition. Only continuous exposure to asynchronous stroking prevents proprioceptive drift and thus replicates the differences in drift reported in the literature. By contrast, complementary subjective ratings (questionnaire) show that the feeling of ownership requires synchronous stroking and is not present in the asynchronous stroking condition. Thus, subjective ratings and drift are dissociated. We conclude that different mechanisms of multisensory integration are responsible for proprioceptive drift and the feeling of ownership. Proprioceptive drift relies on visuoproprioceptive integration alone, a process that is inhibited by asynchronous stroking, the most common control condition in Rubber Hand Illusion experiments. This dissociation implies that conclusions about feelings of ownership cannot be drawn from measuring proprioceptive drift alone.

    Learning to integrate arbitrary signals from vision and touch

    When different perceptual signals of the same physical property are integrated, for example, an object's size, which can be seen and felt, they form a more reliable sensory estimate (e.g., M. O. Ernst & M. S. Banks, 2002). This, however, implies that the sensory system already knows which signals belong together and how they relate. In other words, the system has to know the mapping between the signals. In a Bayesian model of cue integration, this prior knowledge can be made explicit. Here, we ask whether such a mapping between two arbitrary sensory signals from vision and touch can be learned from their statistical co-occurrence such that they become integrated. In the Bayesian framework, this means changing the belief about the distribution of the stimuli. To this end, we trained subjects with stimuli that are usually unrelated in the world: the luminance of an object (visual signal) and its stiffness (haptic signal). In the training phase, we presented subjects with combinations of these two signals that were artificially correlated, thus introducing a new mapping between them: for example, the stiffer the object, the brighter it was. We measured the influence of learning by comparing discrimination performance before and after training. The prediction is that integration makes discrimination worse for stimuli that are incongruent with the newly learned mapping, because integration would cause this incongruency to disappear perceptually. The more certain subjects are about the new mapping, the stronger the influence on discrimination performance should be. Thus, learning in this context is about acquiring beliefs. We found a significant change in discrimination performance before and after training when comparing trials with congruent and incongruent stimuli. After training, discrimination thresholds for the incongruent stimuli were increased relative to thresholds for congruent stimuli, suggesting that subjects effectively learned to integrate the two formerly unrelated signals.
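
    The prediction above follows from reliability-weighted (maximum-likelihood) fusion: once two cues are treated as redundant, the fused percept pulls conflicting cues toward each other, which is what makes incongruent pairs harder to tell apart. A minimal sketch of this computation (illustrative numbers, not the paper's model or fitted parameters):

        def fuse(mu_v, var_v, mu_h, var_h):
            """Maximum-likelihood fusion of two Gaussian cue estimates:
            weights are proportional to each cue's reliability (1/variance)."""
            w_v = (1 / var_v) / (1 / var_v + 1 / var_h)
            mu = w_v * mu_v + (1 - w_v) * mu_h         # fused estimate
            var = (var_v * var_h) / (var_v + var_h)    # always below either variance
            return mu, var

        # Hypothetical luminance (visual) and stiffness (haptic) estimates that
        # conflict with the newly learned mapping: fusion absorbs the conflict,
        # which is exactly why discrimination of incongruent pairs suffers.
        print(fuse(mu_v=0.8, var_v=0.04, mu_h=0.2, var_h=0.09))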

    No need to touch this: Bimanual haptic slant adaptation does not require touch

    In our daily life, we often interact with objects using both hands, which raises the question to what extent information is shared between the hands. It has, for instance, been shown that curvature adaptation aftereffects can transfer from the adapted hand to the non-adapted hand. However, this transfer only occurred for dynamic exploration, e.g., moving a single finger over a surface, but not for static exploration, i.e., keeping static contact with the surface and combining the information from different parts of the hand. This raises the question to what extent adaptation to object shape is shared between the hands when both hands are used in a static fashion simultaneously and the object shape estimate requires information from both hands. Here we addressed this question in three experiments using a slant adaptation paradigm. In Experiment 1, we investigated whether an aftereffect of static bimanual adaptation occurs at all and whether it transfers to conditions in which one hand was moving. In Experiment 2, participants adapted either to a felt slanted surface or simply by holding their hands in mid-air at similar positions, to investigate to what extent the effects of static bimanual adaptation are posture-based rather than object-based. Experiment 3 further explored the idea that bimanual adaptation is largely posture-based. We found that bimanual adaptation using static touch did lead to aftereffects when the same static exploration mode was used for testing. However, the aftereffect did not transfer to any exploration mode that included a dynamic component. Moreover, we found similar aftereffects both with and without a haptic surface. Thus, we conclude that static bimanual adaptation is proprioceptive in nature and does not occur at the level at which the object is represented.

    Visual experience shapes the Bouba‑Kiki effect and the size‑weight illusion upon sight restoration from congenital blindness

    The Bouba-Kiki effect is the systematic mapping between round/spiky shapes and speech sounds (“Bouba”/“Kiki”). In the size-weight illusion, participants judge the smaller of two equally weighted objects as being heavier. Here we investigated the contribution of visual experience to the development of these phenomena. We compared three groups: early blind individuals (no visual experience), individuals treated for congenital cataracts years after birth (late visual experience), and typically sighted controls (visual experience from birth). We found that, in cataract-treated participants (tested visually/visuo-haptically), both phenomena are absent shortly after sight onset, just as in blind individuals (tested haptically). However, they emerge within months following surgery, becoming statistically indistinguishable from the sighted controls. This suggests a pivotal role of visual experience and refutes the existence of an early sensitive period: a short period of experience, even when gained only years after birth, is sufficient for participants to visually pick up regularities in the environment, contributing to the development of these phenomena.

    Hearing in slow-motion: Humans underestimate the speed of moving sounds.

    Senna I, Parise C, Ernst MO. Hearing in slow-motion: Humans underestimate the speed of moving sounds. Scientific Reports. 2015;5(1):14054.
    Perception can often be described as a statistically optimal inference process whereby noisy and incomplete sensory evidence is combined with prior knowledge about natural scene statistics. Previous evidence has shown that humans tend to underestimate the speed of unreliable moving visual stimuli. This finding has been interpreted in terms of a Bayesian prior favoring low speed, given that in natural visual scenes objects are mostly stationary or slowly moving. Here we investigated whether an analogous tendency to underestimate speed also occurs in audition: even if the statistics of the visual environment seem to favor low speed, the statistics of the stimuli reaching the individual senses may differ across modalities, potentially leading to different priors. We observed a systematic bias for underestimating the speed of unreliable moving sounds. This finding suggests the existence of a slow-motion prior in audition, analogous to the one previously found in vision. The nervous system might encode the overall statistics of the world, rather than the specific properties of the signals reaching the individual senses.
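
    The slow-motion-prior account reduces to a standard Gaussian-Bayes calculation: a zero-mean prior over speed pulls the estimate toward zero, and the pull grows as the sensory measurement gets noisier. A minimal sketch with illustrative numbers (not the paper's fitted parameters):

        def perceived_speed(measured, var_likelihood, var_prior):
            """Posterior mean speed for a Gaussian likelihood centered on the
            measured speed, combined with a zero-mean Gaussian prior
            favoring low speeds."""
            shrinkage = var_prior / (var_prior + var_likelihood)  # in (0, 1)
            return shrinkage * measured                           # underestimation

        # The less reliable the moving sound, the stronger the underestimation:
        for var_likelihood in (0.1, 1.0, 10.0):
            print(perceived_speed(10.0, var_likelihood, var_prior=4.0))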

    Finding home. Landmark ambiguity in human navigation

    Jetzschke S, Fröhlich J, Ernst MO, Boeddeker N. Finding home. Landmark ambiguity in human navigation. Frontiers in Behavioral Neuroscience. 2017;11:132.
    Memories of places often include landmark cues, i.e., information provided by the spatial arrangement of distinct objects with respect to the target location. To study how humans combine landmark information for navigation, we conducted two experiments: participants were provided either with auditory landmarks while walking in a large sports hall or with visual landmarks while walking on a virtual-reality treadmill setup. We found that participants cannot reliably locate their home position when only one or two uniform landmarks provide cues with respect to the target, due to ambiguities in the spatial arrangement. With three visual landmarks that look alike, the task is solved without ambiguity, while auditory landmarks need to play three unique sounds for similar performance. This reduction in ambiguity through the integration of information from 1, 2, and 3 landmarks is well modeled using a probabilistic approach based on maximum likelihood estimation. Unlike any deterministic model of human navigation (based, e.g., on distance or angle information), this probabilistic model predicted both the precision and the accuracy of human homing performance. To further examine how landmark cues are integrated, we introduced systematic conflicts in the visual landmark configuration between training of the home position and tests of homing performance. Participants integrated the spatial information from each landmark near-optimally to reduce spatial variability. When the conflict becomes large, this integration breaks down and precision is sacrificed for accuracy. That is, participants return closer to the home position because they start ignoring the deviant third landmark. Relying on two instead of three landmarks, however, comes with responses that are scattered over a larger area, and thus with higher variability. To model the breakdown of integration with increasing conflict, the probabilistic model based on a simple Gaussian distribution used for Experiment 1 needed a slight extension in the form of a mixture of Gaussians. All parameters of the mixture model were fixed based on the homing performance in the baseline condition, which contained a single landmark (the 1-Landmark Condition). In this way, the mixture model could predict the integration performance and its breakdown with no additional free parameters. Overall, these data suggest that humans use similar optimal probabilistic strategies in visual and auditory navigation, integrating landmark information to improve homing precision and to balance homing precision with homing accuracy.
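
    The precision gain from adding landmarks, and the breakdown under large conflicts, follow the usual precision-summation logic of maximum-likelihood integration. A minimal one-dimensional sketch (hypothetical numbers; the paper's model is fit to 2-D homing data):

        import numpy as np

        def integrate(mus, variances):
            """ML integration of independent Gaussian landmark estimates:
            precisions (1/variance) add, so variance shrinks with each landmark."""
            precisions = 1.0 / np.asarray(variances, dtype=float)
            var = 1.0 / precisions.sum()
            mu = var * (precisions * np.asarray(mus, dtype=float)).sum()
            return mu, var

        # Variance with 1, 2, and 3 consistent landmarks: 1.0, 0.5, 0.33...
        for n in (1, 2, 3):
            print(integrate([0.0] * n, [1.0] * n))

        # A large conflict: the third landmark votes for a distant home position.
        # Plain ML integration keeps averaging it in (estimate pulled to ~1.67);
        # the mixture-of-Gaussians extension instead discounts it as an outlier.
        print(integrate([0.0, 0.0, 5.0], [1.0, 1.0, 1.0]))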

    Illusory Tactile Motion Perception: An Analog of the Visual Filehne Illusion

    Moscatelli A, Hayward V, Wexler M, Ernst MO. Illusory Tactile Motion Perception: An Analog of the Visual Filehne Illusion. Scientific Reports. 2015;5(1):14584.
    We continually move our body and our eyes when exploring the world, causing our sensory surfaces, the skin and the retina, to move relative to external objects. In order to estimate object motion consistently, an ideal observer would transform estimates of motion acquired from the sensory surface into fixed, world-centered estimates by taking the motion of the sensor into account. This ability is referred to as spatial constancy. Human vision does not follow this rule strictly and is therefore subject to perceptual illusions during eye movements, where immobile objects can appear to move. Here, we investigated whether one of these, the Filehne illusion, had a counterpart in touch. To this end, observers estimated the movement of a surface from tactile slip, with a moving or with a stationary finger. We found the perceived movement of the surface to be biased if the surface was sensed while moving. This effect exemplifies a failure of spatial constancy that is similar to the Filehne illusion in vision. We quantified this illusion using a Bayesian model with a prior for stationarity, applied previously in vision. The analogy between vision and touch points to a modality-independent solution to the spatial constancy problem.
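
    One common reading of such stationarity-prior models is that the prior effectively under-weights the sensor-motion estimate, so compensation for the moving finger is incomplete and a stationary surface appears to drift. A sketch under that assumption (the gain value is illustrative, not the paper's estimate):

        def perceived_world_motion(slip_on_skin, finger_motion, gain):
            """Spatial constancy: world motion = slip on the sensor + sensor motion.
            A stationarity prior effectively reduces the compensation gain below 1,
            biasing the percept whenever the sensor itself is moving."""
            return slip_on_skin + gain * finger_motion

        # A stationary surface explored with a finger moving at 5 cm/s produces
        # slip of -5 cm/s on the skin; perfect compensation (gain = 1) yields 0.
        print(perceived_world_motion(slip_on_skin=-5.0, finger_motion=5.0, gain=1.0))
        # With gain = 0.8, the stationary surface appears to drift at -1 cm/s,
        # opposite to the finger: the tactile analog of the Filehne illusion.
        print(perceived_world_motion(slip_on_skin=-5.0, finger_motion=5.0, gain=0.8))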

    Exploration mode affects visuohaptic integration of surface orientation

    Plaisier M, van Dam L, Glowania C, Ernst MO. Exploration mode affects visuohaptic integration of surface orientation. Journal of Vision. 2014;14(13):22.
    We experience the world mostly in a multisensory fashion, using a combination of all of our senses. Depending on the modality, we can select different exploration strategies for extracting perceptual information. For instance, using touch we can enclose an object in our hand to explore its parts in parallel. Alternatively, we can trace the object with a single finger to explore its parts in a serial fashion. In this study we investigated whether the exploration mode (parallel versus serial) affects the way sensory signals are combined. To this end, participants visually and haptically explored surfaces that varied in roll angle and indicated which side of the surface was perceived as higher. In Experiment 1, the exploration mode was the same for both modalities (i.e., both parallel or both serial). In Experiment 2, we introduced a difference in exploration mode between the two modalities (visual exploration was parallel while haptic exploration was serial, or vice versa). The results showed that visual and haptic signals were combined in a statistically optimal fashion only when the exploration modes were the same. In case of an asymmetry in exploration modes across modalities, integration was suboptimal. This indicates that spatiotemporal discrepancies in the acquisition of information between the two senses (i.e., haptic and visual) can lead to the breakdown of sensory integration.
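
    The benchmark for "statistically optimal" here is the maximum-likelihood prediction of the bimodal discrimination threshold from the two unimodal thresholds. A minimal sketch with made-up threshold values (the actual thresholds are measured psychophysically):

        import math

        def predicted_bimodal_threshold(t_visual, t_haptic):
            """MLE prediction: the visuohaptic threshold implied by the unimodal
            visual and haptic thresholds (thresholds scale with estimator noise)."""
            return math.sqrt((t_visual**2 * t_haptic**2) / (t_visual**2 + t_haptic**2))

        # Hypothetical roll-angle thresholds in degrees:
        print(predicted_bimodal_threshold(3.0, 4.0))  # 2.4: better than either cue alone
        # Empirical bimodal thresholds at this bound indicate optimal integration
        # (same exploration mode); thresholds above it indicate a breakdown
        # (mismatched exploration modes).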