125 research outputs found

    Visuotactile Integration for Depth Perception in Augmented Reality

    Augmented reality applications using stereo head-mounted displays are not capable of perfectly blending real and virtual objects. For example, depth in the real world is perceived through cues such as accommodation and vergence. In stereo head-mounted displays, however, these cues are decoupled: virtual content is generally projected at a fixed focal distance, while vergence changes with depth. This conflict can result in biased depth estimation of virtual objects in a real environment. In this research, we examined whether redundant tactile feedback can reduce the bias in perceived depth in a reaching task. In particular, our experiments showed that a tactile mapping of distance to vibration intensity, or to vibration position on the skin, can be used to determine a virtual object's depth. Depth estimation using only tactile feedback was more accurate than using only visual feedback, and visuotactile feedback was more precise and faster than unimodal feedback. Our work demonstrates the value of multimodal feedback in augmented reality applications that require correct depth perception, and provides insights into possible visuotactile implementations.
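
    The core idea above, redundantly encoding depth as vibration, can be pictured as two simple mappings. The sketch below is a hypothetical illustration, not the authors' implementation: the function names, reach range, and actuator count are all assumptions.

```python
# Hypothetical sketch of the two distance-to-vibration codes described above:
# (1) distance -> vibration intensity, (2) distance -> tactor position on the skin.

def intensity_for_depth(distance_m: float,
                        min_d: float = 0.2, max_d: float = 0.7) -> float:
    """Map an object's distance within reach to a normalized motor intensity.

    Closer objects vibrate more strongly; out-of-range distances are clamped.
    """
    t = (distance_m - min_d) / (max_d - min_d)
    t = min(max(t, 0.0), 1.0)
    return 1.0 - t  # 1.0 at the nearest reachable distance, 0.0 at the farthest


def tactor_for_depth(distance_m: float, n_tactors: int = 5,
                     min_d: float = 0.2, max_d: float = 0.7) -> int:
    """Map distance to the index of a vibrotactile actuator placed along the arm."""
    t = (distance_m - min_d) / (max_d - min_d)
    t = min(max(t, 0.0), 1.0)
    return min(int(t * n_tactors), n_tactors - 1)
```

    Either code gives the user a second, non-visual depth estimate that is immune to the vergence-accommodation conflict; the study's finding is that combining such a code with vision improves both precision and speed.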

    Enhancing the Immersion Experience with Virtual Reality Glasses and the Out-of-Body Illusion (Immersioelämyksen kasvattaminen virtuaalilasien ja ruumiistairtaantumisilluusion avulla)

    This thesis investigates the possibilities of immersive virtual reality as a tool in cognitive research and in the game industry. Immersion is experienced at the level of self-awareness. The purpose of virtual reality is to simulate reality by tricking our cognitive mechanisms with artificial stimuli, and these cognitive manipulations can alter the neurophysiological processes in our bodies. To investigate whether a change in experienced immersion can be produced and measured, we created an experimental set-up that induces an out-of-body illusion through synchronized visuotactile stimulation. The hypothesis is that repeated visuotactile stimulation induces an out-of-body illusion as a psychophysiological response. The study examined the psychophysiological response to a visual threat under this illusory state of mind. The participants' electrodermal activity was recorded, and the subjective experience was evaluated using a questionnaire. The results were not statistically significant due to the limited number of participants, but they were in line with the hypothesis that an out-of-body illusion can be induced, and the questionnaire results supported this.

    Tactile Perception And Visuotactile Integration For Robotic Exploration

    As the close perceptual sibling of vision, the sense of touch has historically received less attention than it deserves in both human psychology and robotics. In robotics, this may be attributed to at least two reasons. First, tactile sensing suffers from a vicious cycle: sensor technology is immature, so industry demand stays low, which in turn leaves little incentive to make the sensors built in research labs easy to manufacture and marketable. Second, the field has long feared making contact with the environment, avoiding it in every way so that visually perceived states do not change before a carefully estimated and ballistically executed physical interaction. Fortunately, the latter viewpoint is starting to change: work in interactive perception and contact-rich manipulation is on the rise, and good reasons are steering the manipulation and locomotion communities' attention towards deliberate physical interaction with the environment prior to, during, and after a task. We approach the problem of perception prior to manipulation, using the sense of touch, for the purpose of understanding the surroundings of an autonomous robot. The overwhelming majority of work in perception for manipulation is based on vision. While vision is a fast and global modality, it is insufficient as the sole modality, especially where the ambient light or the objects themselves do not lend themselves to vision: darkness, smoky or dusty rooms in search and rescue, underwater scenes, transparent and reflective objects, or retrieving items inside a bag. Even in normal lighting conditions, the target object and fingers are usually occluded from view by the gripper during a manipulation task. Moreover, vision-based grasp planners, typically trained in simulation, often make errors that cannot be foreseen until contact. As a step towards addressing these problems, we first present a global shape-based feature descriptor for object recognition using non-prehensile tactile probing alone. We then investigate making the tactile modality, local and slow by nature, more efficient by predicting the most cost-effective moves through active exploration. Finally, to combine the local and physical advantages of touch with the fast and global advantages of vision, we propose and evaluate a learning-based method for visuotactile integration for grasping.
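
    The active-exploration step can be made concrete with a standard greedy information-gain criterion: probe wherever the expected reduction in uncertainty about the object, per unit movement cost, is highest. The sketch below illustrates that general idea under an assumed discrete observation model; it is not the dissertation's actual method, and all names and structures are hypothetical.

```python
# Minimal sketch of active tactile exploration: pick the next probe location
# that maximizes expected information gain per unit cost (assumes positive costs).
import math

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

def expected_info_gain(posterior, likelihoods_at_loc):
    """posterior[c]: P(class c); likelihoods_at_loc[o][c]: P(obs o | class c, loc)."""
    h_prior = entropy(posterior)
    gain = 0.0
    for obs_lik in likelihoods_at_loc:  # iterate over possible observations
        p_obs = sum(l * p for l, p in zip(obs_lik, posterior))
        if p_obs == 0:
            continue
        post = [l * p / p_obs for l, p in zip(obs_lik, posterior)]
        gain += p_obs * (h_prior - entropy(post))  # expected entropy reduction
    return gain

def next_probe(posterior, likelihoods, costs):
    """Choose the probe location with the best information gain per movement cost."""
    scores = {loc: expected_info_gain(posterior, lik) / costs[loc]
              for loc, lik in likelihoods.items()}
    return max(scores, key=scores.get)
```

    Greedy information gain is myopic but cheap to compute, which suits a modality where every probe is a slow physical motion.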

    Visual consciousness and bodily self-consciousness

    Purpose of review: In recent years, consciousness has become a central topic in cognitive neuroscience. This review focuses on the relation between bodily self-consciousness (the feeling of being a subject in a body) and visual consciousness (the subjective experience associated with the perception of visual signals). Recent findings: Findings from clinical and experimental work have shown that bodily self-consciousness depends on specific brain networks and is related to the integration of signals from multiple sensory modalities, including vision. In addition, recent experiments have shown that visual consciousness is shaped by the body, including vestibular, tactile, proprioceptive, and motor signals. Summary: Several lines of evidence suggest reciprocal relationships between vision and bodily signals, indicating that a comprehensive understanding of visual consciousness and bodily self-consciousness requires studying them in unison.

    The rubber hand universe: On the impact of methodological differences in the rubber hand illusion

    The rubber hand illusion (RHI) is a widely applied paradigm for investigating changes in body representations. Extensive scientific interest has produced great variability in the observed results, and many contradictory findings have been reported. Taking into account the numerous variations in the experimental implementation of the RHI, many of these contradictory findings can be reconciled, but to date a thorough analysis of the methodological differences between RHI studies has been lacking. Here we summarize and analyse the methodological differences between RHI studies. In contrast to other reviews, which focus on integrating findings from various studies, the present paper is devoted to the differences in (i) the experimental setup, (ii) the method used to induce the RHI, (iii) the quantification of its effects, and (iv) aspects of the experimental design and data analysis. This approach provides a reference frame for the interpretation of previous studies as well as for the design of future studies.

    From rubber hands to neuroprosthetics: Neural correlates of embodiment

    Our interaction with the world rests on the knowledge that we are a body in space and time which can interact with the environment. This awareness is usually referred to as the sense of embodiment. For the better part of the past 30 years, the rubber hand illusion (RHI) has been a prime tool for studying embodiment in healthy individuals and in people with a variety of clinical conditions. In this paper, we provide a critical overview of this research, with a focus on the RHI paradigm as a tool to study prosthesis embodiment in individuals with amputation. The RHI relies on well-documented multisensory integration mechanisms based on sensory precision, in which parietal areas are involved in resolving the visuo-tactile conflict and premotor areas in updating the conscious bodily representation. This mechanism may be transferable to prosthesis ownership in amputees. We discuss how these results might inform the technological development of sensorised prostheses, which in turn might improve their acceptability to users.

    Effect of Avatar Anthropomorphism on Body Ownership, Attractiveness and Collaboration in Immersive Virtual Environments

    Effective collaboration in immersive virtual environments requires the ability to communicate flawlessly using both verbal and non-verbal communication. We present an experiment investigating the impact of anthropomorphism on the sense of body ownership, avatar attractiveness, and performance in an asymmetric collaborative task. Using three avatars with different facial properties, participants had to solve a construction game according to their partner's instructions. Results reveal no significant difference in body ownership, but demonstrate significant differences in attractiveness and in the completion time of the collaborative task. However, the relative duration of verbal interaction appears unaffected by the avatars' level of anthropomorphism, meaning that participants were able to interact verbally regardless of how their character physically expressed their words in the virtual environment. Unexpectedly, correlation analyses also reveal a link between attractiveness and performance: the more attractive the avatar, the shorter the completion time of the game. One could argue that, in the context of this experiment, avatar attractiveness led to an improvement in non-verbal communication, as users may have been more prone to observe their partner, which would translate into better performance in collaborative tasks. Further experiments using gaze tracking are needed to test this hypothesis.

    Haptic and Audio-visual Stimuli: Enhancing Experiences and Interaction


    Personalized Digital Body: Enhancing Body Ownership and Spatial Presence in Virtual Reality

    A person's sense of acceptance of a virtual body as his or her own is generally called the virtual body ownership illusion (VBOI). Having such a mental model of one's own body transferred to a virtual human surrogate is known to play a critical role in one's sense of presence in a virtual environment. Our focus in this dissertation is on top-down processing based on visual perception, in both the visuomotor and the visuotactile domains, using visually personalized body cues. The visual cues we study range from ones we refer to as direct to others we classify as indirect. Direct cues are associated with body parts that play a central role in the task being performed; such parts typically dominate a person's foveal view and include one or both hands. Indirect body cues come from body parts normally seen in our peripheral view, e.g., the legs and torso, which are often observed through some mediation and are not directly associated with the current task. This dissertation studies how, and to what degree, direct and indirect cues affect a person's sense of VBOI, even when some of the cues they receive are inaccurate, and investigates the relationship between enhanced virtual body ownership and task performance. Our experiments support the importance of a personalized representation, even for indirect cues. Additionally, we studied gradual versus instantaneous transition between one's own body and a virtual surrogate body, and between one's real-world environment and a virtual environment, and demonstrate that gradual transition has a significant influence on virtual body ownership and presence. In a follow-on study, we increase fidelity by using a personalized hand and demonstrate that a personalized hand significantly improves dominant visual illusions, resulting in more accurate perception of virtual object sizes.
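
    The gradual-transition manipulation studied above can be pictured as a timed blend between the view of one's real body and the virtual surrogate. The sketch below is purely illustrative; the easing curve, duration, and compositing step are assumptions rather than the dissertation's implementation.

```python
# Hypothetical sketch of a gradual real-to-virtual transition: instead of
# swapping views instantaneously, blend over a few seconds.

def blend_factor(elapsed_s: float, duration_s: float = 5.0) -> float:
    """Return alpha in [0, 1]: 0 = fully real view, 1 = fully virtual body."""
    t = min(max(elapsed_s / duration_s, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)  # smoothstep easing avoids an abrupt onset

# Per-frame usage (engine-agnostic, names assumed):
#   alpha = blend_factor(time_since_transition_start)
#   composite = (1 - alpha) * passthrough_frame + alpha * virtual_body_frame
```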