Application and Evaluation of Lighthouse Technology for Precision Motion Capture
This thesis presents the development of a system that can capture and quantify motion for biomechanical and medical applications demanding precision motion tracking, using lighthouse technology. Commercially known as SteamVR tracking, the lighthouse technology is a motion tracking system developed for virtual reality applications that uses patterned infrared light sources to illuminate trackers (objects embedded with photodiodes) and obtain their pose, i.e., spatial position and orientation. Current motion capture systems, such as camera-based motion capture, are expensive and not readily available outside of research labs; this thesis makes a case for low-cost motion capture systems. The technology is applied to quantify motion and draw inferences about biomechanics capture and analysis, quantification of gait, and prosthetic alignment. Possible shortcomings of data acquisition with this system for the stated applications are addressed. The repeatability of the system is established by determining the standard deviation error across multiple trials of a motion trajectory executed by a seven degree-of-freedom robot arm. Accuracy testing is based on cross-validation between the lighthouse tracking data and transformations derived from joint angles through a forward kinematics model of the robot's end-effector pose. The underlying principle of motion capture with this system is that multiple trackers placed on limb segments make it possible to record the position and orientation of the segments relative to a fixed global frame. Joint angles between the segments can then be calculated from the recorded positions and orientations of each tracker using inverse kinematics.
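The repeatability metric described above, the per-sample standard deviation across repeated runs of the same robot trajectory, can be sketched as follows. This is a minimal illustration with simulated jitter; the function name and data are ours, not the thesis code.

```python
import math

def repeatability(trials):
    """Per-sample positional standard deviation across repeated trials.

    trials: list of runs, each a list of (x, y, z) samples taken at the
    same points along a repeated trajectory.
    Returns one standard deviation value per sample index.
    """
    n_samples = len(trials[0])
    out = []
    for i in range(n_samples):
        pts = [t[i] for t in trials]
        mean = tuple(sum(p[k] for p in pts) / len(pts) for k in range(3))
        # variance of the Euclidean deviation from the per-sample mean position
        var = sum(sum((p[k] - mean[k]) ** 2 for k in range(3)) for p in pts) / len(pts)
        out.append(math.sqrt(var))
    return out

# Three simulated runs of the same 2-point trajectory with ~1 mm jitter
trials = [
    [(0.000, 0.0, 0.0), (1.000, 0.0, 0.0)],
    [(0.001, 0.0, 0.0), (0.999, 0.0, 0.0)],
    [(-0.001, 0.0, 0.0), (1.001, 0.0, 0.0)],
]
print(repeatability(trials))  # sub-millimetre deviation at each sample
```

A low, flat profile across sample indices indicates the tracking is repeatable along the whole trajectory, which is what the multi-trial robot-arm test is designed to show.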
In this work, inverse kinematics for rigid bodies was based on calculating homogeneous transforms to the individual trackers in the model's reference frame to find the respective Euler angles, as well as on an analytical approach that solves for joint variables in terms of known geometric parameters. This work was carried out on a phantom prosthetic limb. A custom application-specific motion tracker was also developed using a hardware development kit; it will be further optimized for subsequent studies involving biomechanics motion capture.
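The relative-transform step described above can be sketched in a few lines: given the homogeneous transforms of two trackers in the global frame, express one in the other's frame and read off Euler angles. This is a minimal pure-Python sketch; the ZYX convention and all names are our assumptions, not the thesis implementation.

```python
import math

def mat_mul(a, b):
    """Multiply two 4x4 homogeneous transforms (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_inv(t):
    """Invert a rigid-body transform: rotation transposed, translation -R^T p."""
    r = [[t[j][i] for j in range(3)] for i in range(3)]
    p = [-sum(r[i][k] * t[k][3] for k in range(3)) for i in range(3)]
    return [r[0] + [p[0]], r[1] + [p[1]], r[2] + [p[2]], [0, 0, 0, 1]]

def euler_zyx(t):
    """Extract ZYX (yaw-pitch-roll) Euler angles from a transform."""
    yaw = math.atan2(t[1][0], t[0][0])
    pitch = math.atan2(-t[2][0], math.hypot(t[2][1], t[2][2]))
    roll = math.atan2(t[2][1], t[2][2])
    return yaw, pitch, roll

def rot_z(q):
    c, s = math.cos(q), math.sin(q)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# Tracker A rotated 30 degrees about z; tracker B unrotated, offset 1 m in x
t_a = rot_z(math.radians(30.0))
t_b = [[1, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
# Pose of tracker B expressed in tracker A's frame: T_A^-1 * T_B
rel = mat_mul(mat_inv(t_a), t_b)
yaw, pitch, roll = euler_zyx(rel)
print(math.degrees(yaw))  # relative yaw between the two limb segments
```

With one tracker per limb segment, the relative yaw/pitch/roll between adjacent segments is the joint-angle estimate the thesis computes.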
Evaluation of a Low-Cost Virtual Reality Surround-Screen Projection System
Two of the most popular mediums for virtual reality are head-mounted displays (HMDs) and surround-screen projection systems, such as CAVE Automatic Virtual Environments. In recent years, HMDs have seen a significant reduction in cost and have become widespread consumer products. In contrast, CAVEs are still expensive and remain accessible to a limited number of researchers. This study aims to evaluate both objective and subjective characteristics of a CAVE-like monoscopic low-cost virtual reality surround-screen projection system compared to advanced setups and HMDs. For objective results, we measured the head position estimation accuracy and precision of a low-cost active infrared (IR) tracking system, used in the proposed low-cost CAVE, relative to an infrared marker-based tracking system used in a laboratory-grade CAVE system. For subjective characteristics, we investigated the sense of presence and cybersickness elicited in users during a visual search task outside personal space, beyond arm's reach, where the importance of stereo vision is diminished. Thirty participants rated their sense of presence and cybersickness after performing the VR search task with our CAVE-like system and a modern HMD. The tracking showed an accuracy error of 1.66 cm and 0.4 mm of precision jitter. The system was reported to elicit presence, but at a lower level than the HMD, while causing significantly lower cybersickness. Our results were compared to a previous study performed with a laboratory-grade CAVE and support that a VR system implemented with low-cost devices could be a viable alternative to laboratory-grade CAVEs for visual search tasks outside the user's personal space.
This work was supported by the Fundação para a Ciência e Tecnologia through the AHA project (CMUPERI/HCI/0046/2013), by the INTERREG program through the MACBIOIDI project (MAC/1.1.b/098), LARSyS (UIDB/50009/2020), NOVA-LINCS (UID/CEC/04516/2019), by Fundació la Marató de la TV3 (201701-10), and by the European Union through the Operational Program of the European Regional Development Fund (ERDF) of the Valencian Community 2014-2020 (IDIFEDER/2018/029).
Gonçalves, A.; Borrego, A.; Latorre, J.; Llorens Rodríguez, R.; Bermúdez, S. (2021). Evaluation of a Low-Cost Virtual Reality Surround-Screen Projection System. IEEE Transactions on Visualization and Computer Graphics. 1-12. https://doi.org/10.1109/TVCG.2021.3091485
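The abstract's two tracking metrics measure different things: accuracy is the mean error against a reference position, while precision (jitter) is the spread of the samples around their own mean. A minimal sketch of the distinction, using simulated static samples rather than the study's data:

```python
import math

def accuracy_and_precision(samples, ground_truth):
    """Accuracy: mean Euclidean distance to the reference position.
    Precision (jitter): RMS deviation of the samples from their own mean."""
    n = len(samples)
    mean = tuple(sum(s[k] for s in samples) / n for k in range(3))
    acc = sum(math.dist(s, ground_truth) for s in samples) / n
    jitter = math.sqrt(sum(math.dist(s, mean) ** 2 for s in samples) / n)
    return acc, jitter

# Simulated static measurements of a point whose true position is the origin:
# a consistent ~1.7 cm offset (poor accuracy) with sub-millimetre scatter
samples = [(0.016, 0.0, 0.0004), (0.017, 0.0, -0.0004), (0.0166, 0.0, 0.0)]
acc, jitter = accuracy_and_precision(samples, (0.0, 0.0, 0.0))
print(acc, jitter)  # accuracy on the centimetre scale, jitter sub-millimetre
```

A tracker can thus be very precise yet inaccurate, which is why the study reports both numbers (1.66 cm accuracy error, 0.4 mm jitter) separately.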
Open-World Virtual Reality Headset Tracking
A novel outdoor Virtual Reality (VR) concept called Open-World Virtual Reality (OWVR) is presented that combines precise GNSS positioning with a smartphone-grade inertial sensor to provide globally-referenced centimeter-and-degree-accurate tracking of the VR headset. Unlike existing augmented and virtual reality systems, which perform camera-based inside-out headset tracking relative to a local reference frame (e.g., an ad-hoc frame fixed to a living room), OWVR's globally-referenced tracking enables a novel VR experience in which the user's outdoor exploration is robust to extremes in lighting conditions and local visual texture. This paper introduces the OWVR concept and presents a prototype OWVR system with two candidate sensor fusion architectures, one loosely and one tightly coupled. Comparative performance is evaluated in terms of tracking accuracy and availability of an integer-aperture-test-validated fixed tracking solution. For scenarios with degraded GNSS availability, which will be typical for outdoor VR, the tightly-coupled architecture is shown to offer a critical tracking robustness advantage.
Aerospace Engineering and Engineering Mechanics
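In a loosely coupled architecture like the one described, the inertial sensor dead-reckons the pose between fixes and each already-computed GNSS position fix corrects the drifting inertial estimate. The idea can be sketched as a 1-D scalar Kalman update; all noise values and names here are illustrative assumptions, not the paper's filter design.

```python
# Minimal 1-D loosely coupled GNSS/IMU fusion sketch: IMU acceleration
# propagates position and velocity; each GNSS fix corrects the estimate.
DT = 0.01             # IMU sample period (s), assumed
GNSS_VAR = 0.02 ** 2  # GNSS fix noise variance (m^2), assumed
ACC_VAR = 0.5         # process noise from accelerometer error, assumed

def propagate(x, v, p_var, accel):
    """Dead-reckon one IMU step; position uncertainty grows over time."""
    x += v * DT + 0.5 * accel * DT * DT
    v += accel * DT
    p_var += ACC_VAR * DT
    return x, v, p_var

def gnss_update(x, p_var, z):
    """Scalar Kalman correction of position with a GNSS fix z."""
    k = p_var / (p_var + GNSS_VAR)  # Kalman gain
    x = x + k * (z - x)
    p_var = (1.0 - k) * p_var
    return x, p_var

x, v, p_var = 0.0, 1.0, 1.0   # initial position (m), velocity (m/s), variance
for step in range(100):       # 1 s of constant-velocity motion at 100 Hz IMU
    x, v, p_var = propagate(x, v, p_var, accel=0.0)
    if step % 10 == 9:        # GNSS fix arrives at 10 Hz
        truth = (step + 1) * DT * 1.0
        x, p_var = gnss_update(x, p_var, truth)
print(x)  # tracks the true position of 1.0 m
```

A tightly coupled design instead fuses raw GNSS observables (pseudoranges, carrier phases) inside the filter, which is why it degrades more gracefully when too few satellites are visible to form a standalone position fix.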
Motion Generation and Planning System for a Virtual Reality Motion Simulator: Development, Integration, and Analysis
In the past five years, the advent of virtual reality devices has significantly influenced research in the field of immersion in virtual worlds. In addition to visual input, motion cues play a vital role in the sense of presence and the level of engagement in a virtual environment. This thesis aims to develop a motion generation and planning system for the SP7 motion simulator. SP7 is a parallel robotic manipulator in a 6RSS-R configuration. The motion generation system must be able to produce accurate motion data that matches the visual and audio signals. In this research, two system workflows have been developed: the first creates custom visual, audio, and motion cues, while the second extracts the required motion data from an existing game or simulation. Motion data from the motion generation system are unbounded, while the motion simulator's movements are limited. The motion planning system, commonly known as the motion cueing algorithm, is used to create an effective illusion within the limited capabilities of the motion platform. Appropriate and effective motion cues can be achieved through a proper understanding of human motion perception, in particular the functioning of the vestibular system. A classical motion cueing algorithm has been developed using models of the semicircular canals and otoliths. A procedural implementation of the motion cueing algorithm is described in this thesis. We have integrated all components together to turn this robotic mechanism into a VR motion simulator. In general, the performance of a motion simulator is measured by the quality of the motion perceived on the platform by the user. Accordingly, a novel methodology for the systematic subjective evaluation of the SP7 with a pool of jurors was developed to assess the quality of motion perception. Based on the results of the evaluation, key issues related to the current configuration of the SP7 were identified.
Minor issues were rectified on the fly, so they are not extensively reported in this thesis. Two major issues were addressed extensively: parameter tuning of the motion cueing algorithm and motion compensation of the visual signal in virtual reality devices. The first was resolved by developing a tuning strategy based on an abstraction-layer concept derived from the outcome of a novel technique for the objective assessment of the motion cueing algorithm. The second was traced to a calibration problem in the Vive lighthouse tracking system, so a thorough experimental study was performed to obtain an optimally calibrated environment. This was achieved by benchmarking the dynamic position tracking performance of the Vive lighthouse tracking system against an industrial serial robot used as a ground-truth system. With the identified issues resolved, a general-purpose virtual reality motion simulator has been developed that is capable of creating custom visual, audio, and motion cues and of executing motion planning for a robotic manipulator under a human motion perception constraint.
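The core of a classical motion cueing algorithm is a washout filter: the vehicle's acceleration is high-pass filtered so that only onsets are reproduced and sustained cues are "washed out", letting the platform drift back toward neutral within its limited travel. A minimal first-order discrete sketch; the cutoff frequency and sample rate are illustrative, not the SP7's tuned parameters:

```python
import math

def washout_highpass(signal, dt, cutoff_hz):
    """First-order discrete high-pass filter (classical washout stage).

    Passes acceleration onsets through to the platform and decays
    sustained input toward zero so the simulator returns to neutral.
    """
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    alpha = rc / (rc + dt)
    out = [signal[0]]
    for i in range(1, len(signal)):
        out.append(alpha * (out[-1] + signal[i] - signal[i - 1]))
    return out

# A sustained 2 m/s^2 surge: the cue jumps at onset, then washes out
accel = [0.0] * 10 + [2.0] * 200
cue = washout_highpass(accel, dt=0.01, cutoff_hz=0.5)
print(cue[10], cue[-1])  # large at onset, near zero once sustained
```

In a full classical algorithm this stage is combined with tilt coordination, which leans the platform so gravity mimics the washed-out sustained acceleration below the rider's perception threshold.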
Real Time and High Fidelity Quadcopter Tracking System
This project was conceived from a desire to have an affordable, flexible, and physically compact tracking system for high-accuracy spatial and orientation tracking. Specifically, this implementation focuses on providing a low-cost motion capture system for future research. It is a tool to enable the creation of systems that require the accurate placement of landing pads and payload acquisition and delivery. This system will provide the quadcopter platform a coordinate system that can be used in addition to GPS.
Field research was conducted with quadcopter manufacturers, photographers, and agriculture and research organizations, who were contacted and interviewed for information on what components of a quadcopter system were lacking and what barriers currently limited desired drone operation. After distilling this information and exploring various projects in the fields of quadcopters and autonomous control, the idea emerged to develop a system that could track the motion of quadcopters and jump-start other projects.
Specifically, live feedback was explored for use in hardware-in-the-loop testing, where commands are relayed to the quadcopter and its response can be accurately measured. This can be extremely beneficial when testing new equipment such as propeller designs, motor designs, and frame response. A further stretch objective for this project is to unify input commands to the quadcopter with its physical position in order to train control systems to fly new platforms running "piloted" firmware such as BetaFlight, RaceFlight, and KISS, typically associated with drone racing, as well as hobby-grade semi-autonomous flight controllers such as the ArduPilot Mega (APM) and PixHawk.
Visual and spatial audio mismatching in virtual environments
This paper explores how vision affects spatial audio perception in virtual reality. We created four virtual environments with different reverberation and room sizes, and recorded binaural clicks in each one. We conducted two experiments: one in which participants judged the audio-visual match, and another in which they pointed to the click direction. We found that vision influences spatial audio perception and that congruent audio-visual cues improve accuracy. We suggest some implications for virtual reality design and evaluation.
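A pointing experiment like the one described implies an angular-error metric: the angle between the direction a participant points and the true direction of the click source. A minimal sketch with 3-D direction vectors; the function name and example data are ours, not the paper's analysis code:

```python
import math

def angular_error_deg(pointed, true_dir):
    """Angle in degrees between the pointed direction and the true
    source direction, both given as 3-D vectors from the listener."""
    dot = sum(p * t for p, t in zip(pointed, true_dir))
    norm = (math.sqrt(sum(p * p for p in pointed))
            * math.sqrt(sum(t * t for t in true_dir)))
    # clamp against floating-point drift outside acos's [-1, 1] domain
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Listener points 10 degrees to the right of a source straight ahead (+z)
true_dir = (0.0, 0.0, 1.0)
pointed = (math.sin(math.radians(10.0)), 0.0, math.cos(math.radians(10.0)))
print(angular_error_deg(pointed, true_dir))  # ~10.0 degrees
```

Averaging this error per condition is one way to quantify whether congruent visuals improve localization accuracy, as the study reports.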
Challenges in passenger use of mixed reality headsets in cars and other transportation
This paper examines key challenges in supporting passenger use of augmented and virtual reality headsets in transit. These headsets will allow passengers to break free from the restraints of physical displays placed in constrained environments such as cars, trains, and planes. Moreover, they have the potential to allow passengers to make better use of their time by making travel more productive and enjoyable, supporting both privacy and immersion. However, there are significant barriers to headset usage by passengers in transit contexts. These barriers range from impediments that would entirely prevent safe usage and function (e.g. motion sickness) to those that might impair their adoption (e.g. social acceptability). We identify the key challenges that need to be overcome and discuss the necessary resolutions and research required to facilitate adoption and realize the potential advantages of using mixed reality headsets in transit.