15 research outputs found

    Perception and Action in Peripersonal Space: A Comparison between Video and Optical See-Through Augmented Reality Devices

    In this paper, we analyze how the peripersonal space is perceived during a reaching task in an Augmented Reality (AR) environment. In particular, we aim to quantify whether distortions in the perception of the spatial layout of the scene occur, considering two different AR wearable devices, namely head-mounted displays (HMDs). We performed two tests and compared the results between subjects who used an Optical See-Through (OST) HMD (Metavision Meta 2) and those who used a Video See-Through (VST) HMD (a smartphone mounted in a headset such as the Google Cardboard). Data were collected from a total of 45 volunteer participants. In the experiment, the subjects performed a precision reaching task by overlapping their hand on the perceived target position. We then observed how the presence or absence of internal feedback influenced homing performance. Our results revealed better depth estimation, and thus more precise interaction, with the OST device, which also had a lower impact on eye strain and fatigue.
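
    As a rough illustration of the error measure such a reaching task yields, the sketch below computes the offset between the final hand position and the perceived target once both are expressed in the same tracking frame. The function name, the isolation of the depth (z) component, and the sample coordinates are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch of a reaching-error metric: the offset between the
# hand's final (homing) position and the perceived target, both given
# in the same tracking frame. Names and data are illustrative.
import numpy as np

def reaching_errors(hand_positions, target_positions):
    """Per-trial 3D error vectors: Euclidean norm and signed depth error."""
    hand = np.asarray(hand_positions, dtype=float)      # shape (n_trials, 3)
    target = np.asarray(target_positions, dtype=float)  # shape (n_trials, 3)
    error_vec = hand - target
    error_norm = np.linalg.norm(error_vec, axis=1)
    # Depth (z) isolated, since depth estimation is what typically
    # differs between OST and VST displays.
    depth_error = error_vec[:, 2]
    return error_norm, depth_error

# Example: two trials, targets at 40 cm and 55 cm depth (meters).
norms, depth = reaching_errors(
    hand_positions=[[0.02, -0.01, 0.43], [0.00, 0.01, 0.52]],
    target_positions=[[0.00, 0.00, 0.40], [0.00, 0.00, 0.55]],
)
print(norms, depth)  # overall error and signed over/underestimation of depth
```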

    A Registration Framework for the Comparison of Video and Optical See-Through Devices in Interactive Augmented Reality

    In this paper, we present a registration framework for developing augmented reality environments in which all real elements (including the users) and all virtual elements are co-localized and registered in a common reference frame. The software is provided together with this paper as a contribution to the research community. The framework allows us to perform a quantitative assessment of interaction and egocentric perception in Augmented Reality (AR) environments. We assess perception and interaction in the peripersonal space through a 3D blind reaching task in a simple scenario and an interaction task in a kitchen scenario, using both video see-through (VST) and optical see-through (OST) head-worn technologies. Moreover, we carry out the same 3D blind reaching task in a real condition (without a head-mounted display, reaching real targets), which provides a baseline performance against which to compare the two AR technologies. The blind reaching task results show an underestimation of distances with OST devices, and smaller estimation errors in frontal spatial positions when the depth does not change; this happens with both OST and VST devices, compared with the real-world baseline. Such errors are compensated in the kitchen interaction task: thanks to the egocentric viewing geometry and the specific task, which constrain position perception to a table, both VST and OST achieve comparable and effective performance. Our results thus show that, although these technologies have perceptual issues, they can be used effectively in specific real tasks. The findings do not allow us to choose between VST and OST devices, but they provide a baseline and a registration framework for further studies and emphasize the specificity of perception in interactive AR.
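
    A core step in any such framework is estimating the rigid transform that brings points measured in one device's native frame into the common reference frame. The sketch below shows the standard Kabsch (SVD-based) solution, assuming paired 3D correspondences such as fiducial markers seen by two trackers; it illustrates the general technique, not the paper's actual implementation.

```python
# Minimal sketch of the co-localization step such a framework needs:
# recover the rotation R and translation t mapping points from a device's
# native frame onto the common frame, given paired 3D correspondences
# (Kabsch algorithm). Illustrative only; not the paper's code.
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform with dst ~= (R @ src.T).T + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # avoid an improper (mirror) fit
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: four fiducials, pure translation between the two frames.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
dst = src + np.array([0.1, 0.2, 0.3])
R, t = rigid_register(src, dst)    # R ~ identity, t ~ (0.1, 0.2, 0.3)
```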

    ASO Visual Abstract: New Robotic System with Wristed Micro-Instruments Allows Precise Reconstructive Microsurgery

    A new microsurgery robot with small, wristed microinstruments successfully performs anastomoses with better precision than the manual technique. Its design may enhance surgeons’ natural dexterity by extending the range of motion beyond the ability of the human hand. This video abstract summarizes the main results of the original research on the use of this new robot for oncologic reconstructive microsurgery (Ballestín A, et al. New robotic system with wristed micro-instruments allows precise reconstructive microsurgery: preclinical study).

    New Robotic System with Wristed Microinstruments Allows Precise Reconstructive Microsurgery: Preclinical Study

    Background: Microsurgery allows complex reconstruction of tissue defects after oncological resections or severe trauma. Performing these procedures may be limited by human tremor, precision, and manual dexterity. A new robot designed specifically for microsurgery, with wristed microinstruments and motion scaling, may reduce human tremor and thus enhance precision. This randomized controlled preclinical trial investigated whether this new robotic system can successfully perform microsurgical needle driving, suturing, and anastomosis. Methods: Expert microsurgeons and novices completed six needle passage exercises and performed six anastomoses by hand and six with the new robot. Experienced microsurgeons blindly assessed the quality of the procedures. Precision in microneedle driving and stitch placement was assessed by calculating suturing distances and angulation. Performance of microsurgical anastomoses was assessed by time, learning curves, and the Anastomosis Lapse Index score for objective performance assessment. Results: Greater precision in suturing was achieved with the robot than with the manual technique, in terms of both suture distances (p = 0.02) and angulation (p < 0.01). The time required to perform microsurgical anastomoses was longer with the robot; however, both expert and novice microsurgeons reduced their times with practice. The objective evaluation of the anastomoses performed by novices showed better results with the robot. Conclusions: This study demonstrated the feasibility of performing precise microsutures and anastomoses using a new robotic system. Compared with standard manual techniques, robotic procedures took longer but showed greater precision.
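
    The two precision measures named in the results, suture distances and angulation, can be computed from digitized stitch coordinates. The sketch below assumes 2D entry and exit points per stitch, e.g. extracted from calibrated images; all names, conventions (angulation measured against the direction perpendicular to the incision), and sample points are illustrative, not the study's actual protocol.

```python
# Minimal sketch of the two precision measures: spacing between
# consecutive stitches and each stitch's angulation with respect to
# the ideal direction (perpendicular to the incision line).
import numpy as np

def stitch_metrics(entry_pts, exit_pts, incision_dir):
    """Inter-stitch distances and per-stitch angulation in degrees."""
    entry = np.asarray(entry_pts, float)
    exit_ = np.asarray(exit_pts, float)
    centers = (entry + exit_) / 2.0
    spacing = np.linalg.norm(np.diff(centers, axis=0), axis=1)
    stitch_vec = exit_ - entry
    ix, iy = np.asarray(incision_dir, float)
    perp = np.array([-iy, ix])                 # ideal stitch direction
    cosang = (stitch_vec @ perp) / (
        np.linalg.norm(stitch_vec, axis=1) * np.linalg.norm(perp))
    angulation = np.degrees(np.arccos(np.clip(np.abs(cosang), 0.0, 1.0)))
    return spacing, angulation

# Example: two stitches across a horizontal incision, coordinates in mm.
spacing, ang = stitch_metrics(
    entry_pts=[[0.0, 1.0], [1.0, 1.1]],
    exit_pts=[[0.0, -1.0], [1.2, -0.9]],
    incision_dir=[1.0, 0.0],
)
```

    Group differences like those reported (p = 0.02, p < 0.01) could then be tested on the robot and manual metric sets with a standard two-sample test, e.g. scipy.stats.mannwhitneyu.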

    A VR game-based system for multimodal emotion data collection

    The rising popularity of learning techniques in data analysis has recently led to an increased need for large-scale datasets. In this study, we propose a system consisting of a VR game and a software platform designed to collect the player’s multimodal data, synchronized with the VR content, with the aim of creating a dataset for emotion detection and recognition. The game was implemented ad hoc to elicit joy and frustration, following the emotion elicitation process described by Roseman’s appraisal theory. In this preliminary study, 5 participants played our VR game along with pre-existing games and self-reported the emotions they experienced.
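
    The key engineering requirement in such a platform is that every modality be stamped against one shared clock, so that the signals can later be aligned with the VR events. Below is a minimal sketch of that idea under simple polled sensors; the stream names, rates, and placeholder readers are invented for illustration and do not reflect the platform's actual design.

```python
# Minimal sketch of timestamp-synchronized multimodal logging: every
# sample from every stream is stamped against one shared monotonic clock.
import csv
import queue
import threading
import time

t0 = time.monotonic()          # shared clock origin for all streams
samples = queue.Queue()

def log_stream(name, read_sample, rate_hz, stop):
    """Poll one sensor and push (timestamp, stream, value) records."""
    period = 1.0 / rate_hz
    while not stop.is_set():
        samples.put((time.monotonic() - t0, name, read_sample()))
        time.sleep(period)

stop = threading.Event()
# Placeholder sensors; real ones would wrap device SDK calls.
threading.Thread(target=log_stream, args=("heart_rate", lambda: 72, 1, stop),
                 daemon=True).start()
threading.Thread(target=log_stream, args=("head_yaw", lambda: 0.0, 10, stop),
                 daemon=True).start()

time.sleep(1.0)                # stand-in for the play session
stop.set()

with open("session.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["t_seconds", "stream", "value"])
    while not samples.empty():
        writer.writerow(samples.get())
```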