29 research outputs found

    Multi-ring Color Fiducial Systems for Scalable Fiducial Tracking Augmented Reality

    No full text
    In Augmented Reality (AR), a user sees a virtual world as well as the real world. To avoid the registration problem, the user's pose in both worlds must be exactly the same; therefore, exploring a large virtual world requires corresponding movement in the real world, and wide-area tracking is essential. Most existing fiducial-tracking AR systems have very limited tracking ranges. In this paper, we propose multi-ring color fiducial systems for scalable fiducial-tracking AR systems. We analyze the optimal ring width and develop formulas that yield the optimal fiducial set from system-specific inputs. The approach is a simple, low-cost way to achieve wide-area tracking. 1 Introduction In Virtual Reality, where all scenes are computer-generated images, a virtual world can be explored by flying or by steering treadmills, without corresponding physical movement. In Augmented Reality (AR), a user sees a virtual world as well as the real world. Because the user's pose in the real world…
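    The scalability claim rests on simple projective geometry: a ring is only decodable if its projected width spans enough pixels, so the usable fiducial set depends on camera parameters and working distance. A minimal sketch of that sizing trade-off follows, assuming a pinhole camera; the 3-pixel threshold, the parameter values, and both function names are illustrative assumptions, not the paper's formulas.

        # Sketch: ring sizing and ID capacity for multi-ring color
        # fiducials under a pinhole camera model (assumed values only).

        def min_ring_width(distance_m: float, focal_px: float,
                           min_pixels: float = 3.0) -> float:
            """Smallest ring width (m) whose image spans >= min_pixels.

            Pinhole projection: pixels = width * focal_px / distance,
            so width >= min_pixels * distance / focal_px.
            """
            return min_pixels * distance_m / focal_px

        def fiducial_capacity(num_colors: int, num_rings: int) -> int:
            """Distinct IDs from num_rings concentric rings, each drawn
            in one of num_colors colors (no reserved border rings)."""
            return num_colors ** num_rings

        # Example: 640 px focal length, decoding out to 5 m.
        print(f"min ring width at 5 m: {min_ring_width(5.0, 640.0):.3f} m")
        print(f"IDs, 4 colors x 3 rings: {fiducial_capacity(4, 3)}")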

    A Self-Tracking Augmented Reality System for Assembly-Guidance Applications

    No full text
    We present a color video-based augmented reality (AR) system that is designed to be self-tracking: it requires no separate tracking subsystem. Rather, tracking is performed strictly from the video images acquired through the lens of the camera also used to view the real world. The methods for tracking are rooted in prior research in photogrammetry and computer vision. This approach to tracking for AR systems enables a variety of new applications in assembly guidance that are not feasible with current AR technology. Our initial application is in aircraft manufacturing. We outline our approaches to feature detection, correspondence, pose determination, and system calibration. The results obtained thus far are summarized along with the problems we encountered. 1. Introduction Augmented reality (AR) systems display virtual objects embedded in a real environment. The AR concept and implementations date back to Ivan Sutherland's "Ultimate Display" [Sutherland 68]. Recent systems…
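    Of the four pipeline stages the abstract lists, pose determination from known 3D feature positions is the most self-contained. The hedged sketch below uses OpenCV's generic PnP solver as a modern stand-in; it is not the paper's photogrammetric method, and the point coordinates and camera intrinsics are invented.

        # Sketch: camera pose from known 3D fiducial points and their
        # 2D detections via PnP. A stand-in for the paper's method;
        # all numeric values are illustrative.
        import numpy as np
        import cv2

        # 3D feature locations on the assembly (m, object frame).
        object_pts = np.array([[0.0, 0.0, 0.0],
                               [0.1, 0.0, 0.0],
                               [0.1, 0.1, 0.0],
                               [0.0, 0.1, 0.0]])

        # Matching detections in the video frame (px), assumed given by
        # the feature-detection and correspondence stages.
        image_pts = np.array([[320.0, 240.0],
                              [420.0, 242.0],
                              [418.0, 340.0],
                              [322.0, 338.0]])

        # Intrinsics from calibration (assumed; zero lens distortion).
        K = np.array([[640.0, 0.0, 320.0],
                      [0.0, 640.0, 240.0],
                      [0.0, 0.0, 1.0]])

        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, np.zeros(5))
        R, _ = cv2.Rodrigues(rvec)  # rotation matrix, object -> camera
        print("camera translation (m):", tvec.ravel())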

    World embedded interfaces for human-robot interaction

    No full text
    Human interaction with large numbers of robots or distributed sensors presents difficult challenges, including supervisory management, monitoring of individual and collective state, and maintaining situation awareness. Even robots that have no explicit representations or maps of their locale can provide a rich source of information about the environment. To do this, we transform a robot swarm into a distributed interface embedded within the environment. Visually, each robot acts like a pixel within a much larger visual display space, so any robot need only communicate a small amount of information from its current location. Our approach uses Augmented Reality techniques to communicate information to humans from large numbers of small-scale robots, enabling situation awareness, monitoring, and control for surveillance, reconnaissance, hazard detection, and path finding.
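    The "each robot acts like a pixel" idea reduces to sampling a target overlay image at each robot's known position and sending that robot only its one color. A minimal sketch under assumed data structures follows; Robot, render_on_swarm, and the coordinate mapping are hypothetical stand-ins, not the paper's interface.

        # Sketch: treat a robot swarm as a distributed display by
        # sampling a target overlay image at each robot's position.
        # Robot and render_on_swarm are hypothetical stand-ins.
        from dataclasses import dataclass

        @dataclass
        class Robot:
            x: float  # world coordinates (meters)
            y: float
            color: tuple = (0, 0, 0)  # RGB the robot displays

        def render_on_swarm(robots, overlay, world_w, world_h):
            """Assign each robot the overlay pixel under its position.

            overlay: H x W grid of RGB tuples covering a
            world_w x world_h area.
            """
            h, w = len(overlay), len(overlay[0])
            for r in robots:
                col = min(int(r.x / world_w * w), w - 1)
                row = min(int(r.y / world_h * h), h - 1)
                r.color = overlay[row][col]  # one small message per robot

        # Example: a 2x2 "image" displayed across a 10 m x 10 m area.
        swarm = [Robot(2.0, 2.0), Robot(8.0, 8.0)]
        overlay = [[(255, 0, 0), (0, 255, 0)],
                   [(0, 0, 255), (255, 255, 0)]]
        render_on_swarm(swarm, overlay, 10.0, 10.0)
        print([r.color for r in swarm])  # [(255, 0, 0), (255, 255, 0)]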

    HRL Laboratories

    No full text
    Emerging technology now in use in the military enables air vehicles, dismounted soldiers, persistent sensors, and higher headquarters to share multimedia information in the form of alerts that increase the awareness of a dismounted soldier conducting urban operations. The challenge is to manage the complexity of such a tactical alerting system so that soldiers are not overwhelmed or distracted by the new technology. There are many ways to present a soldier with any specific piece of information as an alert, including the choice not to present it at all. An effective tactical alert management system must accurately learn both the soldier's preferences and higher-level command policy, from a small number of examples, in a small amount of time, and through a simple interface. This paper presents the development of an information management engine (IME) that accomplishes this difficult goal. We present the system concept, an ontology for tactical alerts, a supervised learning approach, and results of pilot-study experiments that give preliminary validation for the use of such a system.
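    The supervised-learning requirement, learning presentation preferences from a handful of labeled examples, can be sketched with an off-the-shelf classifier. The features, labels, and training pairs below are invented for illustration and are not the IME's actual ontology or model.

        # Sketch: learn how to present an alert from a few labeled
        # examples, as a small decision tree. Features, labels, and the
        # training set are invented; not the IME's actual ontology.
        from sklearn.tree import DecisionTreeClassifier

        # Per-alert features: [urgency 0-2, soldier_busy 0/1, source 0-2]
        X = [
            [2, 0, 0],  # urgent, idle, from HQ
            [2, 1, 1],  # urgent, busy, from sensor
            [0, 1, 2],  # routine, busy, from air vehicle
            [1, 0, 1],  # moderate, idle, from sensor
            [0, 0, 2],  # routine, idle, from air vehicle
        ]
        # Presentation decision: 0 = suppress, 1 = queue, 2 = present now
        y = [2, 2, 0, 1, 1]

        clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

        # A new urgent alert arriving while the soldier is busy:
        print(clf.predict([[2, 1, 0]]))  # e.g., [2] -> present now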

    Geometry-controllable graphene layers and their application for supercapacitors

    No full text
    A facile and ultrafast method for producing geometry-controllable, vertically transformable 3D graphene architectures is demonstrated. The 2D stacked graphene layers produced by exfoliation of graphite were transformed, e.g., from horizontal to vertical, by applying an electric charge (−2 V at 1–3 μAh/cm²). The three-dimensionally transformed graphene layers have a maximized surface area as well as a high specific capacitance, 410 F g⁻¹ in LiClO4/PC electrolyte, which is 4.4 times higher than that of planar (stacked) graphene layers. Furthermore, they retain a remarkable 87% of their capacitance as the scan rate is increased from 100 to 1000 mV s⁻¹, unlike planar graphene, which displays 61% retention under the same conditions. © 2015 American Chemical Society
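    For context, the quoted figures fix the planar baseline by simple arithmetic; the back-calculation below assumes the stated 4.4× ratio and retention percentages are exact.

        # Back-of-envelope check of the quoted figures (assumes the
        # stated 4.4x ratio and retention percentages are exact).
        c_3d = 410.0                     # F/g, transformed graphene
        c_planar = c_3d / 4.4            # implied planar capacitance
        c_3d_fast = 0.87 * c_3d          # at 1000 mV/s (87% retention)
        c_planar_fast = 0.61 * c_planar  # planar at 1000 mV/s (61%)

        print(f"planar: {c_planar:.0f} F/g")           # ~93 F/g
        print(f"3D @ 1000 mV/s: {c_3d_fast:.0f} F/g")  # ~357 F/g
        print(f"planar @ 1000 mV/s: {c_planar_fast:.0f} F/g")  # ~57 F/g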