
    The Role of Head-Up Display in Computer-Assisted Instruction


    Beaming Displays

    Existing near-eye display designs struggle to balance multiple trade-offs such as form factor, weight, computational requirements, and battery life. These trade-offs are major obstacles on the path towards an all-day-usable near-eye display. In this work, we address them by, paradoxically, removing the display from near-eye displays. We present beaming displays, a new type of near-eye display system that uses a projector and an all-passive wearable headset. We modify an off-the-shelf projector with additional lenses and install it in the environment to beam images from a distance to the passive wearable headset. The beaming projection system tracks the current position of the wearable headset to project distortion-free images with correct perspectives. In our system, the wearable headset guides the beamed images to the user's retina, where they are perceived as an augmented scene within the user's field of view. In addition to providing the system design of the beaming display, we provide a physical prototype and show that the beaming display can provide resolutions as high as consumer-level near-eye displays. We also discuss the different aspects of the design space for our proposal.
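    Projecting a distortion-free, perspective-correct image onto a tracked moving target, as described above, is commonly done by pre-warping the projector image with a planar homography estimated from point correspondences. A minimal numpy sketch of the standard direct linear transform (DLT) estimate follows; this illustrates the general technique only, not the authors' implementation, and the function names are ours:

    ```python
    import numpy as np

    def estimate_homography(src, dst):
        """DLT estimate of the 3x3 homography mapping src points to dst points.

        src, dst: sequences of 4+ (x, y) correspondences.
        """
        A = []
        for (x, y), (u, v) in zip(src, dst):
            # Each correspondence contributes two rows to the linear system A h = 0.
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        # The homography is the null vector of A, i.e. the last right-singular vector.
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]

    def apply_homography(H, pt):
        """Map a 2D point through H using homogeneous coordinates."""
        p = H @ np.array([pt[0], pt[1], 1.0])
        return p[:2] / p[2]
    ```

    In a tracked-projection setting, the four correspondences would come from the headset pose estimate (projector pixels vs. combiner-plane corners), and the projector frame is warped through H before display.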

    Teleportal Face-To-Face System.

    A teleportal system which provides remote communication between at least two users. A projective display and video capture system provides video images to the users. The video system obtains and transmits stereoscopic 3D images to remote users. The projective display unit provides an augmented reality environment to each user, allowing users to view the other local users unobstructed, as well as the local site in which they are located. A screen transmits to the user the images generated by the projective display via a retro-reflective fabric, upon which images are projected and reflected back to the user's eyes.

    Development and preliminary evaluation of a novel low cost VR-based upper limb stroke rehabilitation platform using Wii technology.

    Purpose: This paper proposes a novel system (using the Nintendo Wii remote) that offers customised, non-immersive, virtual reality-based, upper-limb stroke rehabilitation and reports promising preliminary findings with stroke survivors. Method: The system's novelty lies in the highly accurate, real-time kinematic tracking of the full upper-limb movement, offering a strong personal connection between the stroke survivor and a virtual character when executing therapist-prescribed, adjustable exercises/games. It allows the therapist to monitor patient performance and to calibrate the system individually in terms of range of movement, speed and duration. Results: The system was tested for acceptability with three stroke survivors with differing levels of disability. Participants reported an overwhelming connection with the system and avatar. A two-week, single case study with a long-term stroke survivor showed positive changes in all four outcome measures employed, with the participant reporting better wrist control and greater functional use. Activities deemed too challenging or too easy were associated with lower scores of enjoyment/motivation, highlighting the need for activities to be individually calibrated. Conclusions: Given the preliminary findings, it would be beneficial to extend the case study in duration and number of participants and to conduct an acceptability and feasibility study with community-dwelling survivors. Implications for Rehabilitation: Low-cost, off-the-shelf game sensors, such as the Nintendo Wii remote, are acceptable to stroke survivors as an add-on to upper-limb stroke rehabilitation, but must be customised to provide high-fidelity, real-time kinematic tracking of arm movement. Providing therapists with real-time and remote monitoring of the quality of the movement, and not just the amount of practice, is critical for understanding each patient and administering the right amount and type of exercise. The ability to translate therapeutic arm movement into individually calibrated exercises and games accommodates the wide range of movement difficulties seen after stroke, and the ability to adjust these activities (in terms of speed, range of movement and duration) will aid motivation and adherence, key issues in rehabilitation. With increasing pressure on resources and the move to more community-based rehabilitation, the proposed system has the potential to promote the intensity of practice necessary for recovery in both community and acute settings. Funded by the National Health Service (NHS) London Regional Innovation Fund.

    ROBOMIRROR: A SIMULATED MIRROR DISPLAY WITH A ROBOTIC CAMERA

    Simulated mirror displays have promising application prospects due to their capability for virtual visualization. In most existing mirror displays, cameras are placed on top of the displays and are unable to capture the person in front of the display at the highest possible resolution. The lack of a direct frontal capture of the subject's face and the geometric error introduced by image-warping techniques make realistic mirror-image rendering a challenging problem. The objective of this thesis is to explore the use of a robotic camera that tracks the face of the subject in front of the display to obtain a high-quality image capture. Our system uses a Bislide system to control a camera for face capture, while using a separate color-depth camera for accurate face tracking. We construct an optical device in which a one-way mirror is used so that the robotic camera behind it can capture the subject, while the rendered images are displayed by reflecting off the mirror from an overhead projector. A key challenge of the proposed system is the reduction of light caused by the one-way mirror. An optimal 2D Wiener filter is selected to enhance the low-contrast images captured by the camera.
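    The enhancement step mentioned above, an adaptive 2D Wiener filter, follows a standard local mean/variance formulation: flat (low-variance) regions are smoothed toward their local mean, while detailed (high-variance) regions are left largely intact. A small numpy sketch of that textbook filter follows; it is an illustration, not the thesis code, and the 3x3 window and function names are our assumptions:

    ```python
    import numpy as np

    def box_mean_3x3(img):
        """Local 3x3 mean via edge-padded shifts (a simple box filter)."""
        p = np.pad(img, 1, mode="edge")
        h, w = img.shape
        return sum(p[dy:dy + h, dx:dx + w]
                   for dy in range(3) for dx in range(3)) / 9.0

    def wiener_2d(img, noise_var):
        """Adaptive Wiener filter: out = mean + gain * (img - mean).

        gain -> 0 where local variance is near the noise floor (smooth),
        gain -> 1 where local variance dominates the noise (preserve detail).
        """
        mean = box_mean_3x3(img)
        var = box_mean_3x3(img ** 2) - mean ** 2
        gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
        return mean + gain * (img - mean)
    ```

    With a measured or assumed noise variance, this suppresses sensor noise in the dim capture behind the one-way mirror while keeping facial detail.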

    SELF-IMAGE MULTIMEDIA TECHNOLOGIES FOR FEEDFORWARD OBSERVATIONAL LEARNING

    This dissertation investigates the development and use of self-images in augmented reality systems for learning and learning-based activities. This work focuses on self-modeling, a particular form of learning actively employed in various settings for therapy or teaching. In particular, this work aims to develop novel multimedia systems to support the display and rendering of augmented self-images, and to use interactivity (via games) as a means of obtaining imagery for creating augmented self-images. Two multimedia systems are developed, discussed and analyzed. The proposed systems are validated in terms of their technical innovation and their clinical efficacy in delivering behavioral interventions for young children on the autism spectrum.

    REAL-TIME CAPTURE AND RENDERING OF PHYSICAL SCENE WITH AN EFFICIENTLY CALIBRATED RGB-D CAMERA NETWORK

    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. With the recent explosive growth of Augmented Reality (AR) and Virtual Reality (VR) platforms, utilizing RGB-D camera networks to capture and render dynamic physical spaces can enhance immersive experiences for users. To maximize coverage and minimize cost, practical applications often use a small number of RGB-D cameras placed sparsely around the environment for data capture. While sparse color camera networks have been studied for decades, the problems of extrinsic calibration of, and rendering with, sparse RGB-D camera networks are less well understood. Extrinsic calibration is difficult because of inappropriate RGB-D camera models and a lack of shared scene features. Due to significant camera noise and sparse coverage of the scene, the quality of rendered 3D point clouds is much lower than that of synthetic models. Adding virtual objects whose rendering depends on the physical environment, such as those with reflective surfaces, further complicates the rendering pipeline. In this dissertation, I propose novel solutions to these challenges faced by RGB-D camera systems. First, I propose a novel extrinsic calibration algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Second, I propose a novel rendering pipeline that can capture and render, in real time, dynamic scenes in the presence of arbitrarily shaped reflective virtual objects. Third, I demonstrate a teleportation application that uses the proposed system to merge two geographically separated 3D captured scenes into the same reconstructed environment. To provide fast and robust calibration for a sparse RGB-D camera network, first, the correspondences between different camera views are established using a spherical calibration object; we show that this approach outperforms techniques based on planar calibration objects. Second, instead of modeling camera extrinsics with a rigid transformation that is optimal only for pinhole cameras, different view-transformation functions, including rigid transformation, polynomial transformation, and manifold regression, are systematically tested to determine the most robust mapping that generalizes well to unseen data. Third, the celebrated bundle-adjustment procedure is reformulated to minimize the global 3D projection error so as to fine-tune the initial estimates. To achieve realistic mirror rendering, a robust eye detector identifies the viewer's 3D location and the reflective scene is rendered accordingly. The limited field of view of a single camera is overcome by our calibrated RGB-D camera network, which is scalable to capture an arbitrarily large environment. Rendering is accomplished by ray tracing light rays from the viewpoint to the scene as reflected by the virtual curved surface. To the best of our knowledge, the proposed system is the first to render reflective dynamic scenes from real 3D data in large environments. Our scalable client-server architecture is computationally efficient: the calibration of a camera network system, including data capture, can be done in minutes using only commodity PCs.
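    The rigid-transformation option for pairwise extrinsic calibration, estimating R and t between two cameras from matched 3D points (e.g. tracked sphere centers), is classically solved in closed form with the Kabsch/Procrustes algorithm. A compact numpy sketch of that standard method follows; it is illustrative only, and the dissertation additionally evaluates polynomial and manifold-regression mappings not shown here:

    ```python
    import numpy as np

    def estimate_rigid_transform(A, B):
        """Kabsch algorithm: least-squares R, t such that B ~ A @ R.T + t.

        A, B: (N, 3) arrays of corresponding 3D points seen by two cameras.
        """
        ca, cb = A.mean(axis=0), B.mean(axis=0)
        # Cross-covariance of the centered point sets.
        H = (A - ca).T @ (B - cb)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cb - R @ ca
        return R, t
    ```

    Chaining such pairwise estimates gives an initial network calibration, which a bundle-adjustment-style refinement (as the abstract describes) can then fine-tune against the global 3D projection error.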

    How to Build an Embodiment Lab: Achieving Body Representation Illusions in Virtual Reality

    Advances in computer graphics algorithms and virtual reality (VR) systems, together with the reduction in cost of associated equipment, have led scientists to consider VR as a useful tool for conducting experimental studies in fields such as neuroscience and experimental psychology. In particular, virtual body ownership, where the feeling of ownership over a virtual body is elicited in the participant, has become a useful tool in the study of body representation in cognitive neuroscience and psychology, which is concerned with how the brain represents the body. Although VR has been shown to be a useful tool for exploring body ownership illusions, integrating the various technologies necessary for such a system can be daunting. In this paper we discuss the technical infrastructure necessary to achieve virtual embodiment. We describe a basic VR system and how it may be used for this purpose, and then extend this system with the introduction of real-time motion capture, a simple haptics system, and the integration of physiological and brain electrical-activity recordings.