
    Intraoperative Planning and Execution of Arbitrary Orthopedic Interventions Using Handheld Robotics and Augmented Reality

    The focus of this work is a generic, intraoperative, image-free planning and execution application for arbitrary orthopedic interventions using a novel handheld robotic device and optical see-through augmented-reality (AR) glasses. This medical CAD application enables the surgeon to plan the intervention intraoperatively, directly on the patient’s bone. The glasses and all other instruments are accurately calibrated using new techniques. Several example interventions demonstrate the effectiveness of this approach.

    How Wrong Can You Be: Perception of Static Orientation Errors in Mixed Reality


    Infrared and Electro-Optical Stereo Vision for Automated Aerial Refueling

    Currently, Unmanned Aerial Vehicles (UAVs) are unsafe to refuel in-flight due to the communication latency between the UAV's ground operator and the UAV. Providing UAVs with an in-flight refueling capability would improve their functionality by extending their flight duration and increasing their flight payload. Our solution to this problem is Automated Aerial Refueling (AAR) using stereo vision from stereo electro-optical and infrared cameras on a refueling tanker. To simulate a refueling scenario, we use ground vehicles as a pseudo tanker and a pseudo receiver UAV. Imagery of the receiver is collected by the cameras on the tanker and processed by a stereo block matching algorithm to calculate a position and orientation estimate of the receiver. GPS and IMU ground-truth data are then used to validate these results.
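The position estimate described above rests on stereo triangulation: block matching yields a per-pixel disparity, which maps to depth given the rig's focal length and baseline. The following is a minimal sketch of that final triangulation step; the function name and parameters are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth (meters) from stereo disparity (pixels).

    depth = f * B / d, where f is the focal length in pixels and
    B is the baseline between the two cameras in meters.
    Pixels with zero or negative disparity (no match) map to infinity.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(disparity_px > 0,
                        focal_px * baseline_m / disparity_px,
                        np.inf)
```

For example, with an 800 px focal length and a 0.5 m baseline, a 20 px disparity triangulates to a 20 m range, illustrating why small disparity errors dominate at long standoff distances.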

    Towards Naturalistic Interfaces of Virtual Reality Systems

    Interaction plays a key role in achieving a realistic experience in virtual reality (VR). Its realization depends on interpreting the intent of human motions to provide inputs to VR systems. Thus, understanding human motion from a computational perspective is essential to the design of naturalistic interfaces for VR. This dissertation studied three types of human motion in the context of VR: locomotion (walking), head motion, and hand motion. For locomotion, the dissertation presented a machine learning approach to developing a mechanical repositioning technique based on a 1-D treadmill for interacting with a novel large-scale projective display, called the Wide-Field Immersive Stereoscopic Environment (WISE). The usability of the proposed approach was assessed through a user study that asked participants to pursue a ball rolling at variable speed in a virtual scene. In addition, the dissertation studied the role of stereopsis in avoiding virtual obstacles while walking by asking participants to step over obstacles and gaps under both stereoscopic and non-stereoscopic viewing conditions in VR experiments. For head motion, the dissertation presented a head gesture interface that recognizes head gestures in real time on head-mounted displays (HMDs) using Cascaded Hidden Markov Models. Two experiments were conducted to evaluate the proposed approach: the first assessed its offline classification performance, while the second estimated the latency of the algorithm in recognizing head gestures. The dissertation also conducted a user study that investigated the effects of visual and control latency on teleoperation of a quadcopter using head motion tracked by a head-mounted display. As part of the study, a method for objectively estimating the end-to-end latency in HMDs was presented.
For hand motion, the dissertation presented an approach that recognizes dynamic hand gestures, implementing a hand gesture interface for VR based on the head gesture recognition algorithm. The proposed algorithm was evaluated offline in terms of its classification performance. A user study was conducted to compare the performance and usability of the head gesture interface, the hand gesture interface, and a conventional gamepad interface for answering Yes/No questions in VR. Overall, the dissertation makes two main contributions toward more naturalistic interaction in VR systems. First, the interaction techniques presented can be directly integrated into existing VR systems, offering end users of VR technology more choices for interaction. Second, the results of the user studies of the presented VR interfaces serve as guidelines for VR researchers and engineers designing future VR systems.
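At the core of the HMM-based gesture recognition described above is decoding: given a sequence of quantized motion observations, find the most likely hidden-state path. The dissertation's cascaded models are more elaborate, but the underlying step can be sketched as plain Viterbi decoding over a discrete-observation HMM; the states, symbols, and probabilities below are invented for illustration.

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely hidden-state path for a discrete-observation HMM.

    obs     : sequence of observation symbol indices
    start_p : (n_states,) initial state probabilities
    trans_p : (n_states, n_states) transition probabilities
    emit_p  : (n_states, n_symbols) emission probabilities
    Works in the log domain to avoid underflow on long sequences.
    """
    logv = np.log(start_p) + np.log(emit_p[:, obs[0]])
    backpointers = []
    for o in obs[1:]:
        # scores[i, j]: best log-prob of reaching state j from previous state i
        scores = logv[:, None] + np.log(trans_p)
        backpointers.append(scores.argmax(axis=0))
        logv = scores.max(axis=0) + np.log(emit_p[:, o])
    path = [int(logv.argmax())]
    for bp in reversed(backpointers):
        path.append(int(bp[path[-1]]))
    return path[::-1]
```

For example, with two hypothetical states (0 = head still, 1 = nodding) and two motion magnitudes (0 = small, 1 = large), an observation sequence of one small then two large motions decodes to a still-then-nodding path, which is how a gesture onset would be segmented from the state sequence.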

    A Systematic Review of Weight Perception in Virtual Reality: Techniques, Challenges, and Road Ahead

    Weight is perceived through the combination of multiple sensory systems, and a wide range of factors – including touch, visual, and force senses – can influence the perception of heaviness. There have been remarkable advancements in the development of haptic interfaces over the years. However, a number of challenges still limit humans' ability to sense weight in virtual reality (VR). This article presents an overview of the factors that influence how weight is perceived and the phenomena that contribute to various types of weight illusions. A systematic review has been undertaken to assess the development of weight perception in VR, the underlying haptic technology that renders the mass of a virtual object, and the creation of weight perception through pseudo-haptics. We summarize the approaches from the perspective of haptic and pseudo-haptic cues that convey the sense of weight, such as force, skin deformation, vibration, inertia, control–display ratio, velocity, body gestures, and audio–visual representation. The design challenges are underlined, and research gaps are discussed, including accuracy and precision, weight discrimination, heavyweight rendering, and absolute weight simulation. This article is anticipated to aid in the development of more realistic weight perception in VR and to stimulate new research interest in this topic.
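One of the pseudo-haptic cues listed above, the control–display (C/D) ratio, can be illustrated concretely: the virtual object's displayed motion is scaled down relative to the user's real hand motion, so heavier objects appear to lag the hand. The linear mapping and names below are a simple illustrative assumption; the surveyed techniques use a variety of mappings.

```python
def displayed_displacement(hand_displacement_m, virtual_mass_kg,
                           reference_mass_kg=1.0):
    """Pseudo-haptic weight via control-display ratio (illustrative sketch).

    The displayed motion of the grasped virtual object is the real hand
    motion scaled by a C/D ratio inversely proportional to virtual mass:
    a 2 kg object moves half as far on screen as the hand does, which
    users tend to interpret as added heaviness.
    """
    cd_ratio = reference_mass_kg / virtual_mass_kg
    return hand_displacement_m * cd_ratio
```

In practice such mappings are tuned per task, since an overly low C/D ratio breaks the illusion and reads as lag rather than weight.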