
    A novel experimental design of a real-time VR tracking device

    Virtual Reality (VR) is progressively being adopted at different stages of design and product development. Consequently, meeting the evolving interaction requirements of engineering design and development in VR is essential for technology adoption. One of these requirements is real-time positional tracking. This paper presents an experimental design of a new real-time positional tracking device (tracker) that is more compact than the existing solution while addressing factors such as wearability and connectivity. We compare simulations of the proposed device and the existing solution and discuss the results and limitations. The new experimental shape of the device is tailored towards research, allowing the engineering designer to exploit a new tracker alternative and opening the door to new VR applications in research and product development.

    Evaluation of the Oculus Rift S tracking system in room scale virtual reality

    In specific virtual reality applications that require high accuracy, it may be advisable to replace the built-in tracking system of the Head Mounted Display (HMD) with a third-party solution. The purpose of this research work is to evaluate the accuracy of the built-in tracking system of the Oculus Rift S HMD in room-scale environments against a motion capture system. In particular, an experimental evaluation of the Oculus Rift S inside-out tracking technology was carried out, comparing it to the performance of an outside-in tracking method based on the OptiTrack motion capture system. In order to track the pose of the HMD using the motion capture system, the Oculus Rift S was instrumented with passive retro-reflective markers and calibrated. Experiments were performed on a dataset of multiple paths, including simple motions as well as more complex paths. Each recorded path contained simultaneous changes in both position and orientation of the HMD. Our results indicate that in room-scale environments the average translation error for the Oculus Rift S tracking system is about 1.83 cm and the average rotation error is about 0.77°, which is two orders of magnitude higher than the performance that can be achieved using a motion capture system.
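    Pose errors of this kind are typically computed as the mean Euclidean distance between paired positions and the mean relative-rotation angle between paired orientations. Below is a minimal Python sketch of such a comparison; the function names and the (w, x, y, z) quaternion convention are assumptions for illustration, not the paper's actual code:

```python
import numpy as np

def translation_error(p_hmd, p_mocap):
    """Mean Euclidean distance (same units as the input) between
    paired HMD and motion-capture positions, shape (N, 3)."""
    return np.linalg.norm(p_hmd - p_mocap, axis=1).mean()

def rotation_error_deg(q_hmd, q_mocap):
    """Mean angular difference in degrees between paired unit
    quaternions (w, x, y, z), shape (N, 4)."""
    # Angle of the relative rotation: theta = 2 * acos(|<q1, q2>|)
    dots = np.abs(np.sum(q_hmd * q_mocap, axis=1)).clip(0.0, 1.0)
    return np.degrees(2.0 * np.arccos(dots)).mean()
```

    With synchronised and spatially aligned trajectories, these two numbers correspond directly to the centimetre and degree figures quoted in the abstract.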

    Development of a learning from demonstration environment using ZED 2i and HTC Vive Pro

    Being able to teach complex capabilities, such as folding garments, to a bi-manual robot is a very challenging task, which is often tackled using learning-from-demonstration datasets. The few garment-folding datasets available to the robotics research community today are either gathered from human demonstrations or generated through simulation. The former pose the major problem of perceiving human action and transferring it to the dynamic control of the robot, while the latter require coding human motion into the simulator in open loop, resulting in far-from-realistic movements. In this thesis, a novel virtual reality (VR) framework is proposed, based on Unity's 3D platform and the use of the HTC Vive Pro system, ZED mini and ZED 2i cameras, and Leap Motion's hand-tracking module. The framework is capable of detecting and tracking objects, animals, and human bodies in a 3D environment. Moreover, it can simulate very realistic garments while allowing users to interact with them in real time, either through handheld controllers or with the user's real hands. By doing so, and thanks to the immersive experience, the framework closes the gap between the human and robot perception-action loops, while simplifying data capture and yielding more realistic samples. Finally, using the developed framework, a novel garment manipulation dataset will be recorded, containing samples with data and videos of nineteen different types of manipulation, aimed at supporting robot learning from demonstration.

    Motion Generation and Planning System for a Virtual Reality Motion Simulator: Development, Integration, and Analysis

    In the past five years, the advent of virtual reality devices has significantly influenced research in the field of immersion in a virtual world. In addition to visual input, motion cues play a vital role in the sense of presence and the factor of engagement in a virtual environment. This thesis aims to develop a motion generation and planning system for the SP7 motion simulator. The SP7 is a parallel robotic manipulator in a 6RSS-R configuration. The motion generation system must be able to produce accurate motion data that matches the visual and audio signals. In this research, two different system workflows have been developed: the first for creating custom visual, audio, and motion cues, and the second for extracting the required motion data from an existing game or simulation. Motion data from the motion generation system are not bounded, while motion simulator movements are limited. The motion planning system, commonly known as the motion cueing algorithm, is used to create an effective illusion within the limited capabilities of the motion platform. Appropriate and effective motion cues can be achieved through a proper understanding of human motion perception, in particular the functioning of the vestibular system. A classical motion cueing algorithm has been developed using models of the semicircular canals and otoliths. A procedural implementation of the motion cueing algorithm is described in this thesis. We have integrated all components to turn this robotic mechanism into a VR motion simulator. In general, the performance of a motion simulator is measured by the quality of the motion perceived on the platform by the user. Accordingly, a novel methodology for the systematic subjective evaluation of the SP7 with a pool of jurors was developed to check the quality of motion perception. Based on the results of the evaluation, key issues related to the current configuration of the SP7 were identified.
Minor issues were rectified along the way, so they are not extensively reported in this thesis. Two major issues have been addressed extensively: the parameter tuning of the motion cueing algorithm and the motion compensation of the visual signal in virtual reality devices. The first issue was resolved by developing a tuning strategy with an abstraction-layer concept derived from the outcome of a novel technique for the objective assessment of the motion cueing algorithm. The origin of the second problem was found to be a calibration problem of the Vive lighthouse tracking system, so a thorough experimental study was performed to obtain an optimally calibrated environment. This was achieved by benchmarking the dynamic position tracking performance of the Vive lighthouse tracking system using an industrial serial robot as a ground-truth system. With the resolution of the identified issues, a general-purpose virtual reality motion simulator has been developed that is capable of creating custom visual, audio, and motion cues and of executing motion planning for a robotic manipulator under human motion perception constraints.
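    A classical motion cueing algorithm of the kind described above splits the scaled vehicle acceleration into a high-pass "onset" cue, rendered as platform translation, and a low-pass sustained cue, rendered as a tilt angle that exploits gravity (tilt coordination). A minimal one-axis Python sketch, with illustrative filter constants that are assumptions rather than the thesis's tuned parameters:

```python
import numpy as np

def classical_washout(acc, dt, k_scale=0.5, tau_hp=2.0, tau_lp=1.0, g=9.81):
    """One-axis sketch of a classical motion-cueing channel.

    Returns the high-pass (onset) acceleration to command as platform
    translation, and the tilt angle rendering the sustained component."""
    a = k_scale * np.asarray(acc, dtype=float)   # scale down raw acceleration
    hp = np.zeros_like(a)                        # high-pass -> onset cue
    lp = np.zeros_like(a)                        # low-pass  -> sustained cue
    alpha_hp = tau_hp / (tau_hp + dt)
    alpha_lp = dt / (tau_lp + dt)
    for i in range(1, len(a)):
        hp[i] = alpha_hp * (hp[i - 1] + a[i] - a[i - 1])   # 1st-order high-pass
        lp[i] = lp[i - 1] + alpha_lp * (a[i] - lp[i - 1])  # 1st-order low-pass
    # Tilt so that the gravity component matches the sustained acceleration
    tilt = np.arcsin(np.clip(lp / g, -1.0, 1.0))
    return hp, tilt
```

    For a step in acceleration, the high-pass channel produces a brief onset cue that washes out towards zero, while the tilt channel slowly rotates the platform to sustain the perceived force within the workspace limits.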

    Comparing technologies for conveying emotions through realistic avatars in virtual reality-based metaverse experiences

    With the development of metaverse(s), industry and academia are searching for the best ways to represent users' avatars in shared Virtual Environments (VEs), where real-time communication between users is required. The expressiveness of avatars is crucial for transmitting emotions, which are key to social presence and user experience and are conveyed via verbal and non-verbal facial and body signals. In this paper, two real-time modalities for conveying expressions in Virtual Reality (VR) via realistic, full-body avatars are compared by means of a user study. The first modality uses dedicated hardware (i.e., eye and facial trackers) to map the user's facial expressions and eye movements onto the avatar model. The second modality relies on an algorithm that, starting from an audio clip, approximates the facial motion by generating plausible lip and eye movements. The participants were asked to observe, for both modalities, the avatar of an actor performing six scenes involving as many basic emotions. The evaluation focused mainly on social presence and emotion conveyance. Results showed a clear superiority of facial tracking over lip sync in conveying sadness and disgust. The same was less evident for happiness and fear. No differences were observed for anger and surprise.

    Robustness and static-positional accuracy of the SteamVR 1.0 virtual reality tracking system

    The use of low-cost immersive virtual reality systems is rapidly expanding. Several studies have started to analyse the accuracy of virtual reality tracking systems, but they did not consider in depth the effects of external interference in the working area. Accordingly, this study aimed to explore the static-positional accuracy and the robustness to occlusions inside the capture volume of the SteamVR (1.0) tracking system. To do so, we ran three different tests in which we acquired the position of HTC Vive Pro Trackers (2018 version) at specific points of a grid drawn on the floor, in regular tracking conditions and with partial and total occlusions. The tracking system showed high inter- and intra-rater reliability and detected a surface tilted with respect to the floor plane. Every acquisition was characterised by an initial random offset. We estimated an average accuracy of 0.5 ± 0.2 cm across the entire grid (XY-plane), noticing that the central points were more accurate (0.4 ± 0.1 cm) than the outer ones (0.6 ± 0.1 cm). For the Z-axis, the measurements showed greater variability and the accuracy was 1.7 ± 1.2 cm. The occlusion response was tested using nonparametric Bland-Altman statistics, which highlighted the robustness of the tracking system. In conclusion, our results support the use of the SteamVR system for static measures in the clinical field. The computed error can be considered clinically irrelevant for exercises aimed at the rehabilitation of functional movements, whose motor outcomes are generally measured on the scale of metres.
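    The grid-accuracy figures (mean ± standard deviation of per-point error) and the nonparametric Bland-Altman analysis mentioned above can be reproduced in outline as follows. This is a hedged Python sketch with hypothetical function names, not the study's actual pipeline:

```python
import numpy as np

def accuracy_stats(measured, reference):
    """Mean and std of per-point Euclidean error between measured
    tracker positions and reference grid points, shape (N, 3)."""
    err = np.linalg.norm(measured - reference, axis=1)
    return err.mean(), err.std()

def bland_altman_nonparametric(a, b):
    """Nonparametric Bland-Altman agreement between two paired
    measurement series: median bias plus 2.5th/97.5th percentile
    limits of agreement of the differences."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return np.median(diff), np.percentile(diff, 2.5), np.percentile(diff, 97.5)
```

    Occlusion robustness can then be judged by computing the limits of agreement between the occluded and unoccluded acquisitions at the same grid points: narrow limits around a near-zero median indicate the occlusion had little effect.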