    Bodily awareness and novel multisensory features

    According to the decomposition thesis, perceptual experiences resolve without remainder into their different modality-specific components. Contrary to this view, I argue that certain cases of multisensory integration give rise to experiences representing features of a novel type. Through the coordinated use of bodily awareness—understood here as encompassing both proprioception and kinaesthesis—and the exteroceptive sensory modalities, one becomes perceptually responsive to spatial features whose instances couldn’t be represented by any of the contributing modalities functioning in isolation. I develop an argument for this conclusion focusing on two cases: 3D shape perception in haptic touch and experiencing an object’s egocentric location in crossmodally accessible environmental space.

    Haptics Rendering and Applications

    There has been significant progress in haptic technologies, but the incorporation of haptics into virtual environments is still in its infancy. A wide range of human activities, including communication, education, art, entertainment, commerce, and science, would change forever if we learned how to capture, manipulate, and reproduce haptic sensory stimuli that are nearly indistinguishable from reality. For the field to move forward, many commercial and technological barriers need to be overcome. By rendering how objects feel through haptic technology, we can communicate information in a physically based language that has never been explored before. Given the constant improvement in haptic technology and increasing research into and development of haptics-related algorithms, protocols, and devices, there is reason to believe that haptics technology has a promising future.
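
    As an illustration of what "rendering how objects feel" involves at the algorithmic level, the sketch below shows penalty-based force rendering, one classical technique: the device commands a restoring force proportional to how far the haptic probe has penetrated a virtual surface. This is a minimal sketch under assumed values; the sphere object, stiffness constant, and function name are illustrative, not taken from the book.

```python
import numpy as np

def render_contact_force(probe_pos, sphere_center, sphere_radius,
                         stiffness=800.0):
    """Penalty-based haptic rendering against a rigid virtual sphere.

    Returns the force (in newtons) to command on the device: zero in
    free space, and a Hooke's-law restoring force along the surface
    normal, proportional to penetration depth, once the probe enters
    the sphere. Positions are in metres; stiffness is in N/m.
    """
    offset = probe_pos - sphere_center
    dist = np.linalg.norm(offset)
    penetration = sphere_radius - dist
    if dist == 0.0 or penetration <= 0.0:
        return np.zeros(3)                  # no contact: render free space
    normal = offset / dist                  # outward surface normal
    return stiffness * penetration * normal

# One step of a (typically ~1 kHz) servo loop, with the probe 2 mm
# inside a 5 cm sphere centred at the origin:
force = render_contact_force(np.array([0.0, 0.0, 0.048]), np.zeros(3), 0.05)
```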

    Optimal visual-haptic integration with articulated tools

    When we feel and see an object, the nervous system integrates visual and haptic information optimally, exploiting the redundancy in multiple signals to estimate properties more precisely than is possible from either signal alone. We examined whether optimal integration is similarly achieved when using articulated tools. Such tools (tongs, pliers, etc.) are a defining characteristic of human hand function, but complicate the classical sensory ‘correspondence problem’ underlying multisensory integration. Optimal integration requires establishing the relationship between signals acquired by different sensors (hand and eye) and therefore expressed in fundamentally unrelated units. The system must also determine when signals refer to the same property of the world—seeing and feeling the same thing—and only integrate those that do. This could be achieved by comparing the pattern of current visual and haptic input to known statistics of their normal relationship. Articulated tools disrupt this relationship, however, by altering the geometrical relationship between object properties and hand posture (the haptic signal). We examined whether different tool configurations are taken into account in visual–haptic integration. We indexed integration by measuring the precision of size estimates, and compared our results to optimal predictions from a maximum-likelihood integrator. Integration was near optimal, independent of tool configuration/hand posture, provided that visual and haptic signals referred to the same object in the world. Thus, sensory correspondence was determined correctly (trial-by-trial), taking tool configuration into account. This reveals highly flexible multisensory integration underlying tool use, consistent with the brain constructing internal models of tools’ properties. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s00221-017-4896-5) contains supplementary material, which is available to authorized users.
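
    The maximum-likelihood integrator used as the optimal benchmark here is the standard inverse-variance-weighted fusion rule: each cue is weighted by its reliability, and the fused estimate has lower variance than either cue alone. A minimal sketch of that rule (the function name and the numeric values are illustrative assumptions, not the paper's code):

```python
def mle_integrate(s_vis, var_vis, s_hap, var_hap):
    """Fuse a visual and a haptic size estimate by inverse-variance
    weighting (maximum-likelihood integration for Gaussian noise)."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_hap)
    w_hap = 1.0 - w_vis
    s_hat = w_vis * s_vis + w_hap * s_hap
    # Variance of the fused estimate: always <= min(var_vis, var_hap).
    var_hat = (var_vis * var_hap) / (var_vis + var_hap)
    return s_hat, var_hat

# A 50 mm object: vision says 51 mm (sigma 2 mm), touch says 48 mm
# (sigma 4 mm). The fused estimate is 50.4 mm with sigma ~1.8 mm,
# more precise than either cue alone.
s_hat, var_hat = mle_integrate(51.0, 2.0**2, 48.0, 4.0**2)
```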

    Spatial-Temporal Characteristics of Multisensory Integration

    We experience spatial separation and temporal asynchrony between visual and haptic information in many virtual-reality, augmented-reality, and teleoperation systems. Three studies were conducted to examine the spatial and temporal characteristics of multisensory integration. Participants interacted with virtual springs using both visual and haptic senses, and their perception of stiffness and ability to differentiate stiffness were measured. The results revealed that a constant visual delay increased the perceived stiffness, while a variable visual delay made participants depend more on the haptic sensations in stiffness perception. We also found that participants judged stiffness to be higher when they interacted with virtual springs at faster speeds, and that interaction speed was positively correlated with stiffness overestimation. In addition, participants could learn an association between visual and haptic inputs even when the two were spatially separated, resulting in improved typing performance. These results show the limitations of the maximum-likelihood estimation model and suggest that a Bayesian inference model should be used instead.
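
    To make that contrast concrete, here is a minimal sketch of the kind of Bayesian alternative the abstract argues for: a Gaussian prior over stiffness is combined with Gaussian visual and haptic likelihoods, so prior expectations can bias the percept in ways plain maximum-likelihood fusion cannot. All names and numbers below are illustrative assumptions, not the dissertation's actual model.

```python
import numpy as np

def bayes_stiffness(k_vis, var_vis, k_hap, var_hap, k_prior, var_prior):
    """Posterior over spring stiffness from Gaussian visual and haptic
    likelihoods plus a Gaussian prior. With a flat prior (var_prior ->
    infinity) this reduces to plain maximum-likelihood integration."""
    means = np.array([k_vis, k_hap, k_prior])
    precisions = np.array([1.0 / var_vis, 1.0 / var_hap, 1.0 / var_prior])
    post_var = 1.0 / precisions.sum()                 # combined precision
    post_mean = post_var * (precisions * means).sum() # precision-weighted mean
    return post_mean, post_var

# Illustrative values in N/m: a delayed visual cue inflates the visual
# stiffness estimate, and a prior built from past interactions pulls
# the percept further from the true haptic value.
mean, var = bayes_stiffness(400.0, 50.0**2, 300.0, 30.0**2,
                            450.0, 100.0**2)
```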

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
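
    The spoke manipulation amounts to a radial shift of each rectangle toward or away from fixation. A hedged sketch, assuming stimulus positions are expressed in degrees of visual angle with central fixation at the origin; the helper name and example coordinates are hypothetical, not the study's code:

```python
import numpy as np

def shift_along_spoke(pos_deg, shift_deg=1.0, rng=None):
    """Shift a stimulus by +/- shift_deg along the imaginary spoke
    joining it to central fixation (the origin). Assumes the stimulus
    is not at fixation (nonzero eccentricity)."""
    rng = np.random.default_rng() if rng is None else rng
    pos = np.asarray(pos_deg, dtype=float)
    ecc = np.linalg.norm(pos)         # eccentricity in degrees
    direction = pos / ecc             # unit vector along the spoke
    sign = rng.choice([-1.0, 1.0])    # shift outward or inward
    return pos + sign * shift_deg * direction

# Shift each rectangle centre for the second presentation:
centres = [np.array([4.0, 0.0]), np.array([2.83, 2.83])]  # example only
shifted = [shift_along_spoke(c) for c in centres]
```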

    Effect of Terminal Haptic Feedback on the Sensorimotor Control of Visually and Tactile-Guided Grasping

    When grasping a physical object, the sensorimotor system is able to specify grip aperture via absolute sensory information. In contrast, grasping toward a location previously occupied by an object (no-target pantomime-grasp) or adjacent to one (spatially dissociated pantomime-grasp) results in the specification of grip aperture via relative sensory information. It is important to recognize that grasping a physical object and pantomime-grasping differ not only in terms of their spatial properties but also with respect to the availability of haptic feedback. Thus, the objective of this dissertation was to investigate how terminal haptic feedback influences the underlying mechanisms that support goal-directed grasping in visual- and tactile-based settings. In Chapter Two I sought to determine whether absolute haptic feedback influences tactile-based cues supporting grasps performed to the location previously occupied by an object. Results demonstrated that when haptic feedback was presented at the end of the response, absolute haptic signals were incorporated in grasp production. Such a finding indicates that haptic feedback supports the absolute calibration between a tactile-defined object and the required motor output. In Chapter Three I examined whether haptic feedback influences the information supporting visually guided no-target pantomime-grasps in a manner similar to tactile-guided grasping. Results showed that haptic sensory signals support no-target pantomime-grasping when provided at the end of the response. Accordingly, my findings demonstrated that a visuo-haptic calibration supports the absolute specification of object size, highlighting the role of multisensory integration in no-target pantomime-grasping. Importantly, however, Chapter Four demonstrated that a priori knowledge of haptic feedback is necessary to support the aforementioned calibration process. In Chapter Five I demonstrate that, unlike no-target pantomime-grasps, spatially dissociated pantomime-grasps precluded a visuo-haptic calibration. Accordingly, I propose that the top-down demands of decoupling stimulus-response relations in spatially dissociated pantomime-grasping result in aperture shaping via a visual percept that is immutable to the integration of haptic feedback. In turn, the decreased top-down demands of no-target pantomime-grasps allow haptic feedback to serve as a reliable sensory resource supporting an absolute visuo-haptic calibration.