2,107 research outputs found

    Remote maintenance assistance using real-time augmented reality authoring

    Get PDF
    Maintenance operations and lifecycle engineering are widely considered among the most expensive and time-consuming aspects of industrial equipment ownership. Numerous organizations continually devote large quantities of resources to maintaining equipment, so any optimization that reduces maintenance errors and expenses could yield substantial time and cost savings. Unfortunately, there are often not enough specialists to meet demand, forcing local technicians to perform on-site maintenance on equipment outside their areas of expertise. Augmented reality (AR) is one technology that has already been shown to improve the maintenance process. While powerful, AR has its own set of challenges, from content authoring to spatial perception. This work details a system that puts both the power of AR and the knowledge of a specialist directly into the hands of an on-site technician. An application was developed that enables a specialist to deliver AR instructions in real time to assist a technician performing on-site maintenance. Using a novel, simplified authoring interface, specialists can create AR content in real time with little to no prior knowledge of augmented reality or of the system itself. There has been ample research on individual AR-supported processes, such as real-time authoring, video monitoring, and off-site assistance; much less work has integrated them while leveraging existing personnel knowledge to both author and deliver real-time AR instructions. This work details the development and implementation of such a system. A technical evaluation was also performed to verify real-time connectivity in geographically distributed environments. Three network configurations were evaluated: a high-latency, high-bandwidth network representing a typical modern maintenance facility; a low-bandwidth network mimicking older or more isolated maintenance environments; and a 4G LTE network, demonstrating the system's potential for use across global locations. Under all three network configurations, the system effectively facilitated the complete disassembly of a hydraulic pump assembly.
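    The abstract does not describe the system's transport layer, but the "real-time connectivity" it evaluates amounts to relaying instruction payloads from a specialist's session to a technician's session. A minimal sketch of that idea, using a toy TCP relay on localhost (the message format and role names here are illustrative assumptions, not the authors' protocol):

    ```python
    import socket
    import threading

    def start_relay(host="127.0.0.1"):
        """Start a toy relay that forwards one message from a 'specialist'
        connection to a 'technician' connection, then exits. Returns the
        (host, port) address the relay is listening on."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind((host, 0))   # port 0: let the OS pick a free port
        srv.listen(2)

        def run():
            tech, _ = srv.accept()   # technician connects first
            spec, _ = srv.accept()   # then the specialist
            msg = spec.recv(1024)    # one AR instruction payload
            tech.sendall(msg)        # forward it immediately
            for s in (spec, tech, srv):
                s.close()

        threading.Thread(target=run, daemon=True).start()
        return srv.getsockname()

    addr = start_relay()
    tech = socket.create_connection(addr)
    spec = socket.create_connection(addr)
    spec.sendall(b'{"step": 1, "annotation": "remove bolt A"}')
    print(tech.recv(1024).decode())
    ```

    A production system would of course need authentication, reconnection handling, and a streaming codec for video and AR geometry; the sketch only shows the relay pattern the evaluation's latency and bandwidth tests would exercise.
    
    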

    Tangible user interfaces: past, present and future directions

    Get PDF
    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real, non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from the cognitive sciences, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    INSPIRE Newsletter Spring 2022

    Get PDF

    ARtonomous: Introducing Middle School Students to Reinforcement Learning Through Virtual Robotics

    Full text link
    Typical educational robotics approaches rely on imperative programming for robot navigation. However, with the increasing presence of AI in everyday life, these approaches miss an opportunity to introduce machine learning (ML) techniques grounded in an authentic and engaging learning context. Furthermore, the need for costly specialized equipment and ample physical space is a barrier that limits access to robotics experiences for all learners. We propose ARtonomous, a relatively low-cost, virtual alternative to physical, programming-only robotics kits. With ARtonomous, students employ reinforcement learning (RL) alongside code to train and customize virtual autonomous robotic vehicles. Through a study evaluating ARtonomous, we found that middle-school students developed an understanding of RL, reported high levels of engagement, and demonstrated curiosity for learning more about ML. This research demonstrates the feasibility of an approach like ARtonomous for 1) eliminating barriers to robotics education and 2) promoting student learning and interest in RL and ML.
    Comment: In Proceedings of Interaction Design and Children (IDC '22)
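    The abstract does not say which RL algorithm ARtonomous uses, but the kind of training loop students would drive can be sketched with tabular Q-learning on a toy task: a virtual vehicle on a one-dimensional track learns to reach a goal. All names and parameters here are illustrative assumptions, not part of the ARtonomous system:

    ```python
    import random

    def train_q_learning(n_states=6, n_episodes=500,
                         alpha=0.5, gamma=0.9, epsilon=0.1):
        """Tabular Q-learning on a 1-D track: the 'vehicle' starts at
        state 0 and earns a reward of 1.0 for reaching the goal (the
        last state). Actions: 0 = move left, 1 = move right."""
        q = [[0.0, 0.0] for _ in range(n_states)]
        goal = n_states - 1
        rng = random.Random(42)  # fixed seed for reproducibility
        for _ in range(n_episodes):
            s = 0
            while s != goal:
                # epsilon-greedy action selection
                if rng.random() < epsilon:
                    a = rng.randrange(2)
                else:
                    a = 0 if q[s][0] > q[s][1] else 1
                s_next = max(0, s - 1) if a == 0 else min(goal, s + 1)
                r = 1.0 if s_next == goal else 0.0
                # Q-learning update: bootstrap from the best next action
                q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
                s = s_next
        return q

    q = train_q_learning()
    # Greedy policy per non-goal state: after training, moving right
    # (action 1) should dominate everywhere.
    policy = [0 if q[s][0] > q[s][1] else 1 for s in range(5)]
    print(policy)
    ```

    The pedagogical point matches the paper's framing: students tune reward and exploration rather than hand-coding the navigation rules an imperative robotics kit would require.
    
    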

    INSPIRE Newsletter Fall 2022

    Get PDF

    Task support system by displaying instructional video onto AR workspace

    Full text link
    This paper presents an instructional support system based on augmented reality (AR). The system helps a user to work intuitively by overlaying visual information, in the same way as a navigation system. In usual AR systems, the content to be overlaid onto real space is created with 3D computer graphics, and in most cases such content must be newly created for each application. However, there are many existing 2D videos that show how to take apart or build electric appliances and PCs, how to cook, etc. Therefore, our system employs such existing 2D videos as instructional videos. By transforming an instructional video according to the user's view, and by overlaying the video onto the user's view space, the proposed system intuitively provides the user with visual guidance. To avoid the problem that the display of the instructional video and the user's view may be visually confused, we add various visual effects to the instructional video, such as transparency and enhancement of contours. By dividing the instructional video into sections according to the operations to be carried out to complete a certain task, we ensure that the user can interactively move to the next step in the instructional video after a given operation is completed, so the user can carry on with the task at his or her own pace. In a usability test, users evaluated the instructional video in our system through two tasks: a building-blocks task and an origami task. We found that a user's visibility improves when the instructional video is transformed according to his or her view. Further, from the evaluation of the visual effects, we can classify these effects according to the task and obtain guidelines for the use of our system as instructional support for various other tasks.
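    The paper does not give its exact view-transformation math, but warping a flat 2D video into a user's view is conventionally done with a 3x3 planar homography. A minimal sketch of that mapping, applied to the corners of a video frame (the matrix values below are hypothetical stand-ins for the pose-dependent transform such a system would estimate):

    ```python
    def apply_homography(h, points):
        """Map 2-D points through a 3x3 homography matrix `h`
        (row-major nested lists), dividing out the projective scale."""
        out = []
        for x, y in points:
            xs = h[0][0] * x + h[0][1] * y + h[0][2]
            ys = h[1][0] * x + h[1][1] * y + h[1][2]
            w = h[2][0] * x + h[2][1] * y + h[2][2]
            out.append((xs / w, ys / w))
        return out

    # Corners of a 640x480 instructional-video frame.
    corners = [(0, 0), (640, 0), (640, 480), (0, 480)]

    # Hypothetical transform: uniform 0.5x scale plus a (100, 50) shift,
    # standing in for the user's estimated viewpoint.
    H = [[0.5, 0.0, 100.0],
         [0.0, 0.5, 50.0],
         [0.0, 0.0, 1.0]]

    print(apply_homography(H, corners))
    # → [(100.0, 50.0), (420.0, 50.0), (420.0, 290.0), (100.0, 290.0)]
    ```

    In a real pipeline the homography would come from tracking the workspace plane, and the whole frame (not just its corners) would be warped, e.g. with a GPU texture transform; the corner mapping above just shows where the video quad lands in the user's view.
    
    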