
    DEPTH PERCEPTION IN VIRTUAL PERIPERSONAL SPACE: AN INVESTIGATION OF MOTION PARALLAX ON PERCEPTION- VS ACTION-ESTIMATIONS

    The goal of the current experiment was to investigate whether the addition of motion parallax would allow participants to make more accurate distance estimations, in both the real and virtual worlds, and to determine whether perception- and action-estimations were affected similarly. Due to the rising number of COVID-19 cases in 2020, all in-person testing had to cease, with only one participant tested on the full set of conditions in the final experimental configuration and one participant having completed only the motion parallax conditions. As a result, the data from the two participants were combined and only the motion parallax conditions were analyzed. Due to low statistical power, no significant main effects or interactions were found. Once the COVID-19 pandemic has subsided, I intend to collect data from all twenty-four participants across the full array of conditions in order to complete the current project. An increase in distance-estimation accuracy, especially in the virtual reality conditions, is still expected.
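    As illustration of the cue under study, the sketch below shows the first-order geometry motion parallax relies on: during lateral head translation, nearer points sweep across the retina faster than farther ones, so depth can be recovered from relative motion. This is a textbook simplification under the stated assumptions, not the experiment's analysis; the function name and sample values are hypothetical.

```python
def parallax_depth(head_speed_mps: float, angular_velocity_rps: float) -> float:
    """First-order motion-parallax depth estimate.

    For lateral head translation at speed v (m/s) while fixating a very
    distant point, a point at depth d sweeps across the retina at roughly
    omega = v / d rad/s, so d can be recovered as v / omega.
    """
    return head_speed_mps / angular_velocity_rps

# Example: a point drifting at 0.5 rad/s during a 0.25 m/s head sway
# lies at roughly 0.5 m, i.e. well within peripersonal space.
print(parallax_depth(0.25, 0.5))  # 0.5
```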

    Influence of hand position on the near-effect in 3D attention

    Voluntary reorienting of attention in real depth situations is characterized by an attentional bias to locations near the viewer once attention is deployed to a spatially cued object in depth. Previously, this effect (initially referred to as the ‘near-effect’) was attributed to access to a 3D viewer-centred spatial representation for guiding attention in 3D space. The aim of this study was to investigate whether the near-bias could have been associated with the position of the response-hand, which was always near the viewer in previous studies investigating endogenous attentional shifts in real depth. In Experiment 1, the response-hand was placed at either the near or far target depth in a depth cueing task. Placing the response-hand at the far target depth abolished the near-effect, but failed to bias spatial attention to the far location. Experiment 2 showed that the response-hand effect was not modulated by the presence of an additional passive hand, whereas Experiment 3 confirmed that attentional prioritization of the passive hand was not masked by the influence of the responding hand on spatial attention in Experiment 2. The pattern of results is most consistent with the idea that response preparation can modulate spatial attention within a 3D viewer-centred spatial representation.
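    As a minimal sketch of how a near-effect of this kind can be quantified, the snippet below computes the reorienting cost (invalid-cue minus valid-cue reaction time) at each target depth from a hypothetical trial log; the file name and column names are assumptions, not the paper's actual data format.

```python
import pandas as pd

# Hypothetical trial log: one row per valid response in a depth-cueing task,
# with 'target_depth' in {'near', 'far'}, 'cue_validity' in {'valid',
# 'invalid'}, and 'rt' (reaction time in ms).
trials = pd.read_csv("cueing_trials.csv")

# Reorienting cost at each depth: invalid-cue RT minus valid-cue RT.
rt = trials.groupby(["target_depth", "cue_validity"])["rt"].mean()
cost_near = rt["near", "invalid"] - rt["near", "valid"]
cost_far = rt["far", "invalid"] - rt["far", "valid"]

# A near-effect shows up as a smaller cost for reorienting to near targets:
# attention returns to locations near the viewer more readily.
print(f"cost near: {cost_near:.1f} ms, far: {cost_far:.1f} ms")
print(f"near-effect index: {cost_far - cost_near:.1f} ms")
```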

    Peripersonal Space in the Humanoid Robot iCub

    Developing behaviours for interaction with objects close to the body is a primary goal for any organism to survive in the world. Being able to develop such behaviours will be an essential feature of autonomous humanoid robots in order to improve their integration into human environments. Adaptable spatial abilities will make robots safer and improve their social skills and their human-robot and robot-robot collaboration abilities. This work investigated how a humanoid robot can explore and create action-based representations of its peripersonal space, the region immediately surrounding the body where reaching is possible without displacing the body. It presents three empirical studies based on peripersonal space findings from psychology, neuroscience and robotics. The experiments used a visual perception system based on active vision and biologically inspired neural networks. The first study investigated the contribution of binocular vision in a reaching task. Results indicated that the vergence signal is a useful embodied depth-estimation cue in the peripersonal space of humanoid robots. The second study explored the influence of morphology and postural experience on confidence levels in reaching assessment. Results showed a decrease in confidence when assessing targets located farther from the body, possibly in accordance with errors in depth estimation from vergence at longer distances. Additionally, it was found that a proprioceptive arm-length signal extends the robot's peripersonal space. The last experiment modelled the development of the reaching skill by implementing motor synergies that progressively unlock degrees of freedom in the arm. The model was advantageous when compared to one that included no developmental stages. The contribution to knowledge of this work is extending the research on biologically inspired methods for building robots, presenting new ways to further investigate the robotic properties involved in the dynamic adaptation to body and sensing characteristics, vision-based action, morphology and confidence levels in reaching assessment. Funded by CONACyT, Mexico (National Council of Science and Technology).
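    A minimal sketch of the vergence-based depth cue the first study builds on, assuming symmetric fixation (both cameras rotate equally toward the target); the 0.068 m baseline is a placeholder, not the iCub's actual inter-camera distance.

```python
import math

def depth_from_vergence(vergence_deg: float, baseline_m: float = 0.068) -> float:
    """Estimate fixation distance from the vergence angle of two cameras.

    Under symmetric fixation the target sits at
    d = baseline / (2 * tan(vergence / 2)).
    """
    theta = math.radians(vergence_deg)
    return baseline_m / (2.0 * math.tan(theta / 2.0))

# A fixed angular error maps onto a larger depth range at far fixations,
# consistent with the reported drop in reaching confidence for far targets.
for v in (8.0, 4.0, 2.0, 1.0):
    print(f"vergence {v:4.1f} deg -> ~{depth_from_vergence(v):.2f} m")
```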

    Wearable augmented reality platform for aiding complex 3D trajectory tracing

    Augmented reality (AR) Head-Mounted Displays (HMDs) are emerging as the most efficient output medium to support manual tasks performed under direct vision. Despite that, technological and human-factor limitations still hinder their routine use for aiding high-precision manual tasks in the peripersonal space. To overcome such limitations, in this work we present the results of a user study aimed at validating, qualitatively and quantitatively, a recently developed AR platform specifically conceived for guiding complex 3D trajectory tracing tasks. The AR platform comprises a new-concept AR video see-through (VST) HMD and a dedicated software framework for the effective deployment of the AR application. In the experiments, the subjects were asked to perform 3D trajectory tracing tasks on 3D-printed replicas of planar structures or more elaborate bony anatomies. The accuracy of the trajectories traced by the subjects was evaluated using templates designed ad hoc to match the surface of the phantoms. The quantitative results suggest that the AR platform can be used to guide high-precision tasks: on average, more than 94% of the traced trajectories stayed within an error margin below 1 mm. The results confirm that the proposed AR platform can foster the adoption of AR HMDs to guide high-precision manual tasks in the peripersonal space.
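    A minimal sketch of the accuracy metric reported above, assuming both the traced and the reference trajectory are available as 3D point sets; nearest-point matching is a simplification of the physical templates the study actually used.

```python
import numpy as np

def fraction_within_margin(traced: np.ndarray, reference: np.ndarray,
                           margin_mm: float = 1.0) -> float:
    """Fraction of traced points lying within margin_mm of the reference.

    traced: (N, 3) and reference: (M, 3) point arrays in millimetres.
    Each traced point is scored against its nearest reference point.
    """
    # Pairwise distances (N, M), then the closest reference point per sample.
    d = np.linalg.norm(traced[:, None, :] - reference[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) < margin_mm))

# A result like 0.94 would correspond to the reported "more than 94% of
# traced trajectories within a 1 mm error margin".
```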

    An exploratory fNIRS study with immersive virtual reality: a new method for technical implementation

    For over two decades, Virtual Reality (VR) has been used as a valuable tool in several fields, from medical and psychological treatments to industrial and military applications. Only in recent years have researchers begun to study the neural correlates that underlie VR experiences. Although functional Magnetic Resonance Imaging (fMRI) is the most commonly used technique, it suffers from several limitations and problems. Here we present a methodology that involves the use of a new and growing brain imaging technique, functional Near-Infrared Spectroscopy (fNIRS), while participants experience immersive VR. In order to allow proper application of the fNIRS probes, a custom-made VR helmet was created. To test the adapted helmet, a virtual version of the line bisection task was used. Participants bisected lines in a virtual peripersonal or extrapersonal space by manipulating a Nintendo Wiimote® controller to move a virtual laser pointer. Although no neural correlates of the dissociation between peripersonal and extrapersonal space were found, significant hemodynamic activity with respect to baseline was present in the right parietal and occipital areas. Both advantages and disadvantages of the presented methodology are discussed.
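    A minimal sketch of the standard line-bisection score such a task typically yields, assuming per-trial marked positions and line endpoints along one axis; the signature and units are assumptions, not the paper's pipeline.

```python
import numpy as np

def bisection_bias(marked_x: np.ndarray, line_left: np.ndarray,
                   line_right: np.ndarray) -> np.ndarray:
    """Signed bisection error as a percentage of line half-length.

    Positive values indicate a rightward bias, negative a leftward one;
    all inputs are per-trial coordinates along the line's axis.
    """
    midpoint = (line_left + line_right) / 2.0
    half_length = (line_right - line_left) / 2.0
    return 100.0 * (marked_x - midpoint) / half_length

# Comparing the mean bias between peripersonal and extrapersonal blocks is
# the behavioural counterpart of the fNIRS contrast described above.
```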

    Tactile and kinesthetic feedback improve distance perception in virtual reality

    Research spanning psychology, neuroscience and HCI has found that depth perception distortion is a common problem in virtual reality. This distortion results in depth compression, where users perceive objects as closer than their intended distance. Studies have suggested that cues such as audio and haptics help address this issue. We focus on haptic feedback and investigate how force feedback compares to tactile feedback within peripersonal space in reducing depth perception distortion. Our study (N=12) compares the use of haptic force feedback, vibration haptic feedback, a combination of both, and no feedback. Our results show that both vibration and force feedback reduce depth perception distortion relative to no feedback: distance estimation was 8.3 times better than with no haptic feedback when the two were combined, versus 1.4 to 1.5 times better with either vibration or force feedback on its own. Participants also subjectively preferred using force feedback, or a combination of force and vibration feedback, over no feedback.
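    A minimal sketch of one way the "n times better" ratios above could be computed, as the ratio of mean absolute distance-estimation errors between conditions; this reading of the reported numbers is an assumption, not the paper's stated formula.

```python
import numpy as np

def improvement_ratio(err_no_feedback: np.ndarray,
                      err_feedback: np.ndarray) -> float:
    """Ratio of mean absolute estimation errors: no-feedback over feedback.

    A value of 8.3 would mean estimates are 8.3 times closer to the true
    distance with the tested feedback than with no haptic feedback.
    """
    return float(np.mean(np.abs(err_no_feedback)) /
                 np.mean(np.abs(err_feedback)))
```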

    The Perception/Action loop: A Study on the Bandwidth of Human Perception and on Natural Human Computer Interaction for Immersive Virtual Reality Applications

    Virtual Reality (VR) is an innovative technology which, in the last decade, has had widespread success, mainly thanks to the release of low-cost devices, which have contributed to the diversification of its domains of application. In particular, the current work mainly focuses on the general mechanisms underlying the perception/action loop in VR, in order to improve the design and implementation of applications for training and simulation in immersive VR, especially in the context of Industry 4.0 and the medical field. On the one hand, we want to understand how humans gather and process all the information presented in a virtual environment, through the evaluation of the visual system bandwidth. On the other hand, since the interface has to be a sort of transparent layer allowing trainees to accomplish a task without directing any cognitive effort to the interaction itself, we compare two state-of-the-art solutions for selection and manipulation tasks: a touch-based one, the HTC Vive controllers, and a touchless vision-based one, the Leap Motion. To this aim we have developed ad hoc frameworks and methodologies. The software frameworks consist of VR scenarios in which the experimenter can choose the modality of interaction and the headset to be used and set experimental parameters, guaranteeing experiment repeatability and controlled conditions. The methodology includes the evaluation of performance, user experience and preferences, considering both quantitative and qualitative metrics derived from the collection and analysis of heterogeneous data, such as physiological and inertial sensor measurements, timings and self-assessment questionnaires. In general, VR has been found to be a powerful tool able to simulate specific situations in a realistic and involving way, eliciting the user's sense of presence without causing severe cybersickness, at least when interaction is limited to the peripersonal and near-action space. Moreover, when designing a VR application, it is possible to manipulate its features in order to trigger or avoid triggering specific emotions and voluntarily create potentially stressful or relaxing situations. Considering the ability of trainees to perceive and process information presented in an immersive virtual environment, results show that, when people are given enough time to build a gist of the scene, they are able to recognize a change with 0.75 accuracy when up to 8 elements are in the scene. For interaction, instead, when selection and manipulation tasks do not require fine movements, the controllers and the Leap Motion ensure comparable performance; whereas, when tasks are complex, the first solution turns out to be more stable and efficient, also because the visual and audio feedback provided as a substitute for haptic feedback does not substantially improve performance in the touchless case.
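    A minimal sketch of how the change-detection accuracy cited above can be tabulated against scene complexity, assuming a hypothetical per-trial log; the file and column names are illustrative.

```python
import pandas as pd

# Hypothetical change-detection log: 'set_size' (number of elements in the
# scene) and 'correct' (1 if the change was reported correctly, else 0).
df = pd.read_csv("change_detection.csv")

# Accuracy by scene complexity; the thesis reports ~0.75 accuracy for up to
# 8 elements when viewers had time to build a gist of the scene.
print(df.groupby("set_size")["correct"].mean())
```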

    A Comparison of Neural Activity for Peripersonal and Extrapersonal Viewing in Real and Virtual Environments

    This study involved a series of tests comparing the similarities and differences in neural activity in a subject's peripersonal and extrapersonal space in the real environment and in virtual reality. We hypothesized that there would be similar brain activity in each of these environments depending on the focal distance of an object from the participant. Peripersonal space is the visual space within reachable distance of a person; its overall neural pattern lies in the dorsal stream of the brain. Extrapersonal space is the visual space that does not directly surround a person and cannot be directly acted on; the overall neural pattern involved lies in the ventral stream of the brain. In virtual reality, a person is able to interact with the virtual world presented just as they would with the real-world environment. They experience peripersonal and extrapersonal space even though the device they wear is only inches away from their eyes. Data in this experiment were collected using electroencephalography (EEG) and were processed and analyzed using EEGLAB, a toolbox for the MATLAB computing environment. After data analysis, it was found that in both the real environment and virtual reality, viewing in a person's peripersonal space produced neural activity in the intraparietal cortex and along the dorsal pathway of the brain. When studying visual perception in a person's extrapersonal space, it was found that in both the real environment and virtual reality there was neural activity in the ventral occipital cortex and in the medial temporal cortex of the subject's brain. This is consistent with what we anticipated after studying previous research on visual perception in the real world. In the future, we hope to collect data from a greater variety of participants and to add trials studying the effect of augmented reality. This will give us an even better idea of the way the brain reacts when seeing objects at different focal distances in a variety of environments.
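    The study's pipeline used EEGLAB in MATLAB; as a rough stand-in, the sketch below computes a per-channel band-power measure with numpy/scipy, the kind of spectral summary that could be compared between peripersonal and extrapersonal viewing conditions. The sampling rate, band limits and channel selection are all assumptions.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg: np.ndarray, fs: float, lo: float, hi: float) -> np.ndarray:
    """Mean power of each channel in a frequency band, via Welch's method.

    eeg: (channels, samples) array of one epoch or an epoch average.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))
    band = (freqs >= lo) & (freqs <= hi)
    return psd[:, band].mean(axis=1)

# e.g. alpha-band (8-13 Hz) power over parietal channels, contrasted
# between peripersonal and extrapersonal viewing epochs:
# p_near = band_power(epochs_near, fs=256.0, lo=8.0, hi=13.0)
# p_far = band_power(epochs_far, fs=256.0, lo=8.0, hi=13.0)
```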

    Comparing attention and eye movements towards real objects versus image displays

    Images of objects are commonly used as proxies for real objects in studies testing attention and eye movements. However, much recent research has discovered neural and behavioral differences in the perception of real objects and their pictorial representations. The goal of the current investigation is to verify whether covert attentional orienting and patterns of eye movements are influenced by properties of real objects such as stereoscopic cues and tangibility. In the first experiment, a modified version of the Posner cueing task was used to test for differences in spatial orienting between real objects (tools, fruits and vegetables) and their pictorial representations. The results showed that participants were faster to detect a target on the left side of real objects than of objects displayed as images, but only if the real objects were presented at a reachable distance. Therefore, the first study showed that the graspability of a stimulus magnifies the leftward bias of visuospatial attention, also known as ‘pseudoneglect’. The second study compared patterns of eye movements in categorization and grasping tasks with real familiar tools, their images, and stereoscopic displays. The results showed that when participants were asked to categorize objects, the display format of those items did not affect patterns of eye movements. However, when the participants were asked to grasp the objects, their eye movements were focused on the handles of real objects more than in any other display format. Therefore, both experiments showed the importance of stimulus tangibility for perception. Moreover, the two studies used novel stimulus presentation systems that can be used in future research testing other aspects of the perception of real objects and their pictorial representations.
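    A minimal sketch of how the pseudoneglect result from the first experiment can be indexed, as the right-minus-left reaction-time difference per display format and viewing distance; the trial log and its columns are hypothetical.

```python
import pandas as pd

# Hypothetical Posner-task log: 'format' in {'real', 'image'}, 'distance' in
# {'reachable', 'out_of_reach'}, 'target_side' in {'left', 'right'}, 'rt' (ms).
df = pd.read_csv("posner_objects.csv")

# Mean RT per condition, with target side spread into columns.
rt = df.groupby(["format", "distance", "target_side"])["rt"].mean().unstack()

# Pseudoneglect = faster left-target detection, so a larger right-minus-left
# difference indicates a stronger leftward bias; the reported effect is a
# larger bias for real, reachable objects than for images.
print(rt["right"] - rt["left"])
```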

    Convex Interaction: Extending Spatial Interaction through Action Compression Using VR
