
    Augmented Reality Interfaces for Procedural Tasks

    Procedural tasks involve people performing established sequences of activities while interacting with objects in the physical environment to accomplish particular goals. These tasks span almost all aspects of human life and vary greatly in their complexity. For some simple tasks, little cognitive assistance is required beyond an initial learning session in which a person follows one-time compact directions, or even intuition, to master a sequence of activities. In the case of complex tasks, procedural assistance may be continually required, even for the most experienced users. Approaches for rendering this assistance employ a wide range of written, audible, and computer-based technologies. This dissertation explores an approach in which procedural task assistance is rendered using augmented reality. Augmented reality integrates virtual content with a user's natural view of the environment, combining real and virtual objects interactively and aligning them with each other. Our thesis is that an augmented reality interface can allow individuals to perform procedural tasks more quickly while exerting less effort and making fewer errors than other forms of assistance. This thesis is supported by several significant contributions yielded during the exploration of the following research themes:

    What aspects of AR are applicable and beneficial to the procedural task problem? In answering this question, we developed two prototype AR interfaces that improve procedural task accomplishment. The first prototype was designed to assist mechanics carrying out maintenance procedures under field conditions. An evaluation involving professional mechanics showed that our prototype reduced the time required to locate procedural tasks and resulted in fewer head movements while transitioning between tasks. Following up on this work, we constructed another prototype that focuses on providing assistance in the underexplored psychomotor phases of procedural tasks. This prototype presents dynamic and prescriptive forms of instruction and was evaluated using a demanding and realistic alignment task. This evaluation revealed that the AR prototype allowed participants to complete the alignment more quickly and accurately than when using an enhanced version of currently employed documentation systems.

    How does the user interact with an AR application assisting with procedural tasks? The application of AR to the procedural task problem poses unique user interaction challenges. To meet these challenges, we present and evaluate a novel class of user interfaces that leverage naturally occurring and otherwise unused affordances in the native environment to provide a tangible user interface for augmented reality applications. This class of techniques, which we call Opportunistic Controls, combines hand gestures, overlaid virtual widgets, and passive haptics to form an interface that proved effective and intuitive during quantitative evaluation. Our evaluation of these techniques includes a qualitative exploration of various preferences and heuristics for Opportunistic Control-based designs.
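    The Opportunistic Controls idea lends itself to a small illustration. The sketch below (Python, with entirely hypothetical names and a toy spherical hit test; the dissertation does not specify an implementation) shows how a virtual widget might be bound to a tracked physical affordance, so that a deliberate hand gesture on the real surface drives the widget while the surface itself supplies passive haptic feedback.

```python
# A minimal sketch of an Opportunistic Control; all names are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Affordance:
    """An otherwise-unused physical feature, e.g. a raised bolt head."""
    name: str
    center: tuple[float, float, float]  # tracked position, meters
    radius: float                       # interaction volume, meters

@dataclass
class OpportunisticControl:
    affordance: Affordance
    widget: str                      # overlaid virtual widget, e.g. "button"
    on_activate: Callable[[], None]  # task action bound to this control

    def hit_test(self, fingertip: tuple[float, float, float]) -> bool:
        """True when the tracked fingertip lies inside the affordance volume."""
        dx, dy, dz = (f - c for f, c in zip(fingertip, self.affordance.center))
        return (dx * dx + dy * dy + dz * dz) ** 0.5 <= self.affordance.radius

    def update(self, fingertip, gesture: str) -> None:
        # Fire only when a deliberate gesture lands on the affordance, so
        # incidental contact with the surface is ignored.
        if gesture == "press" and self.hit_test(fingertip):
            self.on_activate()

# Hypothetical usage: a bolt head becomes a tangible "confirm step" button.
control = OpportunisticControl(
    Affordance("bolt_head", center=(0.20, 0.00, 0.50), radius=0.03),
    widget="button", on_activate=lambda: print("step confirmed"))
control.update(fingertip=(0.21, 0.00, 0.50), gesture="press")
```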

    Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants

    The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry researchers from Europe, the US, and Asia with a diverse background, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to reduce the time of talks to one half-day and to leave the rest of the week for hands-on sessions, group work, general discussions, and socialising. The key results of this seminar are 1) the identification of key research challenges and summaries of breakout groups on multimodal eyewear computing, egocentric vision, security and privacy issues, skill augmentation and task guidance, eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4) an article in ACM Interactions entitled “Eyewear Computers for Human-Computer Interaction”, as well as 5) two follow-up workshops on “Egocentric Perception, Interaction, and Computing” at the European Conference on Computer Vision (ECCV) as well as “Eyewear Computing” at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).

    A Review of Multimodal Interaction Technique in Augmented Reality Environment

    Augmented Reality (AR) supports several types of interaction techniques, such as 3D interaction, natural interaction, tangible interaction, spatially aware interaction, and multimodal interaction. Interaction in AR is usually unimodal, allowing the user to interact with AR content through a single modality such as gesture, speech, or touch; the combination of more than one modality is called multimodal interaction. Multimodality can make human-computer interaction more efficient and enhance the user experience, because many issues arise when users rely on a unimodal technique in an AR environment, such as the "fat finger" problem. Recent research has shown that multimodal interfaces (MMIs) have been explored in AR environments and applied in various domains. This paper presents an empirical study of some of the key aspects and issues in multimodal interaction for augmented reality, touching on interaction techniques and system frameworks. We review two questions: which interaction techniques have been used to perform multimodal interaction in AR environments, and which integrated components are applied in multimodal AR frameworks. These two questions are analysed to identify trends in the multimodal field, which is the main contribution of this paper. We find that gesture, speech, and touch are most frequently used to manipulate virtual objects. Most discussions of the integrated components of MMI AR frameworks cover only the concept of the framework components or the information-centred design between the components. Finally, we conclude by providing ideas for future work in this field.
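    As a concrete illustration of the gesture-plus-speech pattern reported as most common, the sketch below (hypothetical names and thresholds; not drawn from any surveyed system) pairs a spoken command with a roughly co-timed pointing gesture using a simple late-fusion time window.

```python
# A minimal late-fusion sketch for gesture + speech input; the names and the
# pairing window are assumptions for illustration, not a surveyed framework.
import time

FUSION_WINDOW_S = 0.8  # assumed maximum gap between gesture and utterance

class MultimodalFusion:
    def __init__(self) -> None:
        self.pending_gesture = None  # (timestamp, target object id)

    def on_gesture(self, object_id: str) -> None:
        """Record which virtual object the user last pointed at."""
        self.pending_gesture = (time.monotonic(), object_id)

    def on_speech(self, command: str):
        """Pair a spoken command with the most recent pointing gesture."""
        if self.pending_gesture is None:
            return None
        stamp, object_id = self.pending_gesture
        if time.monotonic() - stamp > FUSION_WINDOW_S:
            return None  # gesture is too old to belong to this utterance
        self.pending_gesture = None
        return (command, object_id)

fusion = MultimodalFusion()
fusion.on_gesture("engine_cover")   # user points at a virtual object
print(fusion.on_speech("rotate"))   # -> ('rotate', 'engine_cover')
```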

    Spatio-Temporal Registration in Augmented Reality

    The overarching goal of Augmented Reality (AR) is to provide users with the illusion that virtual and real objects coexist indistinguishably in the same space. An effective persistent illusion requires accurate registration between the real and the virtual objects: registration that is spatially and temporally coherent. However, visible misregistration can be caused by many inherent error sources, such as errors in calibration, tracking, and modeling, and system delay. This dissertation focuses on new methods that could be considered part of "the last mile" of spatio-temporal registration in AR: closed-loop spatial registration and low-latency temporal registration.

    1. For spatial registration, the primary insight is that calibration, tracking, and modeling are means to an end; the ultimate goal is registration. In this spirit, I present a novel pixel-wise closed-loop registration approach that can automatically minimize registration errors using a reference model comprised of the real scene model and the desired virtual augmentations. Registration errors are minimized both in global world space, via camera pose refinement, and in local screen space, via pixel-wise adjustments. This approach is presented in the context of Video See-Through AR (VST-AR) and projector-based Spatial AR (SAR), where registration results are measurable using a commodity color camera.

    2. For temporal registration, the primary insight is that the real-virtual relationships evolve throughout the tracking, rendering, scanout, and display steps, and registration can be improved by leveraging fine-grained processing and display mechanisms. In this spirit, I introduce a general end-to-end system pipeline with low latency, and propose an algorithm for minimizing latency in displays (DLP DMD projectors in particular). This approach is presented in the context of Optical See-Through AR (OST-AR), where system delay is the most detrimental source of error.

    I also discuss future steps that may further improve spatio-temporal registration. In particular, I discuss possibilities for using custom virtual or physical-virtual fiducials for closed-loop registration in SAR. The custom fiducials can be designed to elicit desirable optical signals that directly indicate any error in the relative pose between the physical and projected virtual objects.
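    The closed-loop insight can be illustrated with a toy example. The sketch below (a deliberately simplified one-dimensional stand-in, not the dissertation's implementation) renders a reference model at the current pose estimate, measures the pixel-wise discrepancy against what the camera observes, and refines the pose by descending on that measured error; the local per-pixel screen-space adjustments are omitted.

```python
# A toy closed-loop registration sketch: the renderer, camera, and 3-DoF
# "pose" are stand-ins chosen so the example runs self-contained.
import numpy as np

x = np.linspace(0.0, 1.0, 64)           # toy 1-D image coordinates
TRUE_POSE = np.array([0.5, -0.3, 0.2])  # pose the camera actually observes

def render_reference(pose: np.ndarray) -> np.ndarray:
    """Stand-in for rendering the reference model at a pose estimate."""
    return pose[0] * x**2 + pose[1] * x + pose[2]

def capture_camera() -> np.ndarray:
    """Stand-in for the commodity color camera observing the real result."""
    return render_reference(TRUE_POSE)

def registration_error(pose: np.ndarray) -> float:
    """Pixel-wise discrepancy between prediction and observation."""
    return float(np.mean((render_reference(pose) - capture_camera()) ** 2))

def refine_pose(pose: np.ndarray, lr=0.3, eps=1e-4, iters=500) -> np.ndarray:
    """Global correction: numerical gradient descent on the measured error."""
    for _ in range(iters):
        grad = np.array([(registration_error(pose + eps * e)
                          - registration_error(pose)) / eps
                         for e in np.eye(3)])
        pose = pose - lr * grad
    return pose

refined = refine_pose(np.zeros(3))
print(registration_error(np.zeros(3)), "->", registration_error(refined))
# The measured error shrinks as the pose estimate converges on TRUE_POSE.
```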

    TAC System: Collaborative Tele-Assistance. Augmented Reality and ICT in the Service of Operators and Experts in a Supervised Industrial Maintenance Task

    The TAC system was built to propose a new mode of remote communication. When an operator needs assistance with a manual task, classical technical support channels such as the telephone or videoconferencing quickly show their limits in terms of human interaction. By allowing a distant expert to see exactly what the operator sees, the expert can interact with the operator in real time through augmented reality, via an appropriate interaction paradigm we named "Picking Outlining Adding". By guiding the operator with virtual cues, the expert removes the ambiguities of language and avoids communication difficulties, so operations are performed more easily. The experiments we conducted support this, showing the effectiveness of this new mode of communication. Moreover, the interaction principles are general enough to apply to use cases beyond the originally planned industrial maintenance setting.
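    A small sketch can make the "Picking Outlining Adding" paradigm concrete. The message format below is hypothetical (the abstract does not publish one): each expert action, whether picking a point, outlining a region, or adding a virtual object, travels as one annotation message that the operator's side renders as an overlay registered to the shared video view.

```python
# A hypothetical annotation message for remote AR guidance; field names and
# the rendering are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    kind: str                                   # "pick" | "outline" | "add"
    frame_id: int                               # video frame being annotated
    points: list = field(default_factory=list)  # 2-D image coordinates
    payload: str = ""                           # for "add": object to insert

def render_overlay(a: Annotation) -> str:
    """Operator side: turn an expert's message into an AR overlay (toy)."""
    if a.kind == "pick":
        return f"arrow at {a.points[0]}"
    if a.kind == "outline":
        return f"polyline through {len(a.points)} points"
    return f"virtual object '{a.payload}' placed at {a.points[0]}"

# The expert picks the part the operator should act on, in frame 412:
msg = Annotation(kind="pick", frame_id=412, points=[(320, 240)])
print(render_overlay(msg))  # -> arrow at (320, 240)
```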

    2nd annual report

    The Capsule Pipeline Research Center is devoted to research in capsule pipelines so that this emerging technology can be developed for early use in transporting solids, including coal, grain, other agricultural products, solid wastes (including hazardous wastes), machine parts, and a host of other materials and commodities. The mission of the first four years is to focus on coal log pipeline (CLP) technology. The Center is now near completion of its second year of research.

    Areas of research covered under the Core Program in the second year include the hydrodynamics of coal log flow, wear of coal logs in pipelines, pressure transients in capsule pipelines, pumping and control of coal log flow, fabrication and surface treatment of coal logs, hydrophobic binders, and legal research in coal log pipelines. The Non-Core Program, sponsored by the U.S. Department of Energy and the Electric Power Research Institute, explores the economics and commercialization of CLP, and how to handle coal logs and treat CLP effluent water at power plants. Ten faculty members and more than 30 students from both the Columbia and Rolla campuses participated in the second-year research.

    Important research findings and accomplishments during the second year include: success in making durable binderless coal logs by compaction; initial success in binderless-log underwater extrusion; improvement in the injection system and the pump-bypass scheme; advancement in the state of the art of predicting the energy loss (pressure drop) along both stationary and moving capsules; improved understanding of the water absorption properties of coal logs; better control of coal log surface treatment; better understanding of the mechanism of coal log abrasion; and completion of aspects of legal research dealing with water rights, eminent domain rights, and easement rights for using existing oil pipelines for coal log transport. The second-year work also involved significant technology transfer activities, including company seminars, involving companies in CLP research, preparation of a design/operational manual on CLP, issuance of a second newsletter, completion of a video tape on CLP, and presentation of research findings at several national meetings.

    Beaming Displays

    Existing near-eye display designs struggle to balance multiple trade-offs, such as form factor, weight, computational requirements, and battery life. These design trade-offs are major obstacles on the path towards an all-day usable near-eye display. In this work, we address these trade-offs by, paradoxically, removing the display from near-eye displays. We present beaming displays, a new type of near-eye display system that uses a projector and an all-passive wearable headset. We modify an off-the-shelf projector with additional lenses and install it in the environment to beam images from a distance to the passive wearable headset. The beaming projection system tracks the current position of the wearable headset to project distortion-free images with correct perspective. In our system, the wearable headset guides the beamed images to the user's retina, where they are perceived as an augmented scene within the user's field of view. In addition to providing the system design of the beaming display, we provide a physical prototype and show that the beaming display can provide resolutions as high as consumer-level near-eye displays. We also discuss different aspects of the design space for our proposal.
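    The per-frame loop of such a system can be sketched in a few lines (toy numbers and a deliberately simplified warp; the paper's actual calibration and optics are not reproduced here): track the headset pose, derive a pre-warp for the projector raster, and apply it so the beamed image arrives geometrically correct at the passive headset.

```python
# A toy beaming-display loop: the pose-to-warp mapping below is a similarity
# transform standing in for the full projective correction a calibrated
# projector-headset pair would compute.
import numpy as np

def prewarp_for_pose(yaw_rad: float, distance_m: float) -> np.ndarray:
    """Build a toy 3x3 pre-warp for the tracked headset pose."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    scale = 1.0 / max(distance_m, 0.1)  # toy scaling with headset distance
    H = np.eye(3)
    H[:2, :2] = scale * np.array([[c, -s], [s, c]])  # rotate and scale raster
    return H

def warp_point(H: np.ndarray, xy: tuple) -> tuple:
    """Map one raster point through the warp (homogeneous coordinates)."""
    v = H @ np.array([xy[0], xy[1], 1.0])
    return (v[0] / v[2], v[1] / v[2])

# Per frame: a new tracked pose yields a new pre-warp for the projected image.
H = prewarp_for_pose(yaw_rad=0.1, distance_m=2.0)
print(warp_point(H, (100.0, 50.0)))
```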