3 research outputs found
Perceived Duration of Visual and Tactile Stimuli Depends on Perceived Speed
It is known that the perceived duration of visual stimuli is strongly influenced by speed: faster-moving stimuli appear to last longer. To test whether this is a general property of sensory systems, we asked participants to reproduce the duration of visual, tactile, and visuo-tactile gratings moving at variable speed (3.5–15 cm/s) for three durations (400, 600, and 800 ms). For both modalities, the apparent duration of the stimulus increased strongly with stimulus speed, more so for tactile than for visual stimuli. In addition, visual stimuli were perceived to last approximately 200 ms longer than tactile stimuli. The apparent duration of visuo-tactile stimuli lay between the unimodal estimates, as the Bayesian account predicts, but the bimodal precision of the reproduction did not show the theoretically predicted improvement. A cross-modal speed-matching task revealed that visual stimuli were perceived to move faster than tactile stimuli. To test whether the large difference in the perceived duration of visual and tactile stimuli resulted from the difference in their perceived speed, we repeated the time-reproduction task with visual and tactile stimuli matched in apparent speed. This reduced, but did not completely eliminate, the difference in apparent duration. These results show that, for both vision and touch, perceived duration depends on speed, pointing to common strategies of time perception.
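For reference, the "Bayesian account" invoked above is usually the standard maximum-likelihood cue-combination model; a minimal sketch of its two predictions, written with generic symbols (duration estimates \hat{D} and variances \sigma^2) that are not taken from the paper itself:

\hat{D}_{VT} = w_V \hat{D}_V + w_T \hat{D}_T, \qquad w_V = \frac{\sigma_T^2}{\sigma_V^2 + \sigma_T^2}, \quad w_T = \frac{\sigma_V^2}{\sigma_V^2 + \sigma_T^2},

\sigma_{VT}^2 = \frac{\sigma_V^2\,\sigma_T^2}{\sigma_V^2 + \sigma_T^2} \;\le\; \min(\sigma_V^2, \sigma_T^2).

The first relation predicts a bimodal estimate lying between the unimodal ones, weighted by reliability, which the data supported; the second is the theoretical improvement in precision that the reproduction data did not show.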
Development of visuo-auditory integration in space and time
Adults integrate multisensory information optimally (e.g., Ernst & Banks, 2002), whereas children are unable to integrate visual-haptic cues until 8–10 years of age (e.g., Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for visual-haptic judgments of size and orientation, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, using for the spatial task a child-friendly version of the ventriloquist stimuli of Alais and Burr (2004), and for the temporal task a child-friendly version of the stimulus of Burr, Banks, and Morrone (2009). Unimodal and bimodal (with or without conflict) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults, audition dominates the bimodal visuo-auditory task, in both perceived time and precision thresholds. By contrast, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group do bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time that we found may suggest a cross-sensory comparison driven by vision in the spatial visuo-auditory task and by audition in the temporal visuo-auditory task.
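As a reminder, the "Bayesian predictions" for the spatial conflict (ventriloquist) task follow the same optimal-integration scheme (Ernst & Banks, 2002; Alais & Burr, 2004); a sketch with generic symbols, where S_A and S_V denote the auditory and visual positions presented in conflict:

\hat{S}_{AV} = w_A S_A + w_V S_V, \qquad w_i = \frac{1/\sigma_i^2}{1/\sigma_A^2 + 1/\sigma_V^2},

\sigma_{AV}^2 = \frac{\sigma_A^2\,\sigma_V^2}{\sigma_A^2 + \sigma_V^2}.

Under this model the bimodal PSE should be drawn toward each modality only in proportion to its reliability weight, and the bimodal threshold should fall below both unimodal thresholds; PSEs pinned to one modality and bimodal thresholds above this bound, as found here in children, are signatures of unisensory dominance rather than optimal integration.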
Investigating the ability to read others' intentions using humanoid robots
The ability to interact with other people hinges crucially on the ability to anticipate how their actions will unfold. Recent evidence suggests that this skill may be grounded in the fact that we perform an action differently depending on the intention that drives it. Human observers can detect these differences and use them to predict the purpose behind the action. Although intention reading from movement observation is receiving growing interest in research, the experimental paradigms currently applied have important limitations. Here, we describe a new approach to studying intention understanding that takes advantage of robots, and especially of humanoid robots. We posit that this choice may overcome the drawbacks of previous methods by guaranteeing an ideal trade-off between controllability and naturalness of the interactive scenario. Robots can indeed establish an interaction in a controlled manner while sharing the same action space and guaranteeing contingent behaviors. To conclude, we discuss the advantages of this research strategy and the aspects to be taken into consideration when attempting to define which human (and robot) motion features allow for intention reading during social interactive tasks.