
    Mobile augmented reality based 3D snapshots

    We describe a mobile augmented reality application based on 3D snapshotting from multiple photographs. Optical square markers provide the anchor for reconstructed virtual objects in the scene. A novel approach based on pixel flow greatly improves tracking performance. This dual tracking approach also enables a new single-button user interface metaphor for moving virtual objects in the scene. The development of the AR viewer was accompanied by user studies confirming the chosen approach.
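    The abstract does not specify the pixel-flow formulation; as an illustrative sketch (function name and test data are hypothetical), a global inter-frame shift can be estimated by phase correlation, one common way to exploit pixel flow between consecutive frames:

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate a global 2D pixel shift between two grayscale frames via
    phase correlation -- a stand-in for the paper's (unspecified) pixel-flow
    tracker, useful for propagating a marker pose between frames."""
    cross = np.fft.fft2(curr) * np.conj(np.fft.fft2(prev))
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.real(np.fft.ifft2(cross))     # correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # convert wrapped peak coordinates to signed shifts
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# Synthetic frames: the second is the first shifted by (3, 5) pixels.
rng = np.random.default_rng(0)
prev = rng.random((64, 64))
curr = np.roll(prev, (3, 5), axis=(0, 1))
shift = estimate_shift(prev, curr)
```

Such a cheap frame-to-frame estimate lets an expensive marker detector run less often, which is one plausible reading of the reported tracking speed-up.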

    Exploiting Photogrammetric Targets for Industrial AR

    In this work, we advocate the use of photogrammetric targets for object tracking in Industrial Augmented Reality (IAR). Photogrammetric targets, especially uncoded circular targets, are widely used in industry to perform 3D surface measurements. An AR solution based on uncoded circular targets can therefore improve workflow integration by reusing existing targets and saving time. These circular targets carry no coded patterns with which to establish unique 2D-3D correspondences between the targets on the model and their image projections. We solve this particular problem of 2D-3D correspondence of non-coplanar circular targets from a single image. We introduce a conic pair descriptor, which computes Euclidean invariants from circular targets in the model space and in the image space. A three-stage method compares the descriptors and computes the correspondences with up to 100% precision and 89% recall. We achieve tracking performance of 3 FPS (2560x1920 px) to 8 FPS (640x480 px), depending on the camera resolution and the number of targets present in the scene.
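    The descriptor itself is not reproduced in the abstract; the sketch below illustrates the underlying idea with a simplified conic pair invariant (all names and numbers are hypothetical, not the authors' exact formulation). Each circular target is written as a 3x3 conic matrix C; under a transform H, conics map as C -> H^{-T} C H^{-1}, so the product C1^{-1} C2 changes only by a similarity and its eigenvalues can serve as a pair descriptor:

```python
import numpy as np

def circle_conic(cx, cy, r):
    """Homogeneous conic matrix C with p^T C p = 0 for points p on the circle."""
    return np.array([[1.0, 0.0, -cx],
                     [0.0, 1.0, -cy],
                     [-cx, -cy, cx**2 + cy**2 - r**2]])

def pair_descriptor(C1, C2):
    """Eigenvalues of C1^{-1} C2. Since C1^{-1} C2 transforms as
    H (C1^{-1} C2) H^{-1}, the eigenvalues are invariant under H --
    the same idea (simplified) behind a conic pair descriptor."""
    return np.sort_complex(np.linalg.eigvals(np.linalg.inv(C1) @ C2))

# Two circular targets, before and after a Euclidean motion of the plane.
theta, tx, ty = 0.7, 2.0, -3.0
c, s = np.cos(theta), np.sin(theta)
move = lambda x, y: (c * x - s * y + tx, s * x + c * y + ty)

d_model = pair_descriptor(circle_conic(0, 0, 1.0), circle_conic(4, 1, 0.5))
d_image = pair_descriptor(circle_conic(*move(0, 0), 1.0),
                          circle_conic(*move(4, 1), 0.5))
```

Matching such descriptors between model space and image space is what allows correspondences to be established without coded patterns.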

    A step closer to reality: Closed loop dynamic registration correction in SAR

    In Spatial Augmented Reality (SAR) applications, real-world objects are augmented with virtual content by means of a calibrated camera-projector system. A computer-aided design (CAD) model of the real object is used to plan the positions where the virtual content is to be projected. The real object often deviates from its CAD model, resulting in misregistered augmentations. We propose a new method to dynamically correct the planned augmentation by accounting for the unknown deviations in the object geometry. We use a closed-loop approach in which the projected features are detected in the camera image and used as feedback. As a result, the registration misalignment is identified and the augmentations are corrected in the areas affected by the deviation. Our work focuses on SAR applications in the industrial domain, where this problem is omnipresent. We show that our method is effective and beneficial for multiple industrial applications.
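    The correction principle can be sketched as a simple feedback loop: project, observe where the feature actually lands in the camera image, and nudge the projection until it lands where planned. This toy simulation (gain, iteration count, and all values hypothetical) converges to a command that cancels an unknown geometric deviation:

```python
import numpy as np

def closed_loop_correct(target, observe, gain=0.8, iters=50):
    """Iteratively nudge the projected feature until the camera sees it at
    the planned (target) position -- a toy stand-in for closed-loop
    registration correction in SAR."""
    u = target.copy()               # start from the CAD-planned position
    for _ in range(iters):
        err = observe(u) - target   # misregistration seen by the camera
        u = u - gain * err          # correct the projection command
    return u

# Simulated geometry deviation that the CAD model does not know about.
deviation = np.array([3.0, -1.5])
observe = lambda u: u + deviation   # camera sees the feature displaced
target = np.array([100.0, 200.0])
u = closed_loop_correct(target, observe)
```

Because the loop is driven by what the camera actually observes, the correction needs no explicit model of the deviation, which matches the closed-loop spirit of the method.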

    Frustration Free Pose Computation For Spatial AR Devices in Industrial Scenario

    The quest to find the killer application for Industrial Augmented Reality (IAR) is still active. The existing solutions are ingenious, but most cannot be directly integrated into existing industrial workflows. Generally, IAR applications require modifications to the industrial workflow depending on the tracking methodology. These modifications end up being an overhead for the users and deter them from adopting AR solutions. In this poster we propose a resourceful solution that achieves end-to-end workflow integration with minimal effort on the user's part. The solution is suited to laser-guided Spatial Augmented Reality (SAR) systems, which are mainly preferred for industrial manufacturing applications. We also introduce a new concept for pose computation, inspired by the existing mechanical concept of part alignment. The accuracy of our method is comparable to classical marker-based methods. The complete pose computation process, from initialisation to refinement, is designed to be plug and play.
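    The poster does not detail the pose computation; mechanical part alignment typically reduces to rigidly fitting measured reference points to their model counterparts, which can be sketched with the standard Kabsch algorithm (an assumption for illustration, not the authors' exact method):

```python
import numpy as np

def rigid_align(P, Q):
    """Best-fit rotation R and translation t with R @ P + t ~= Q (Kabsch).
    P and Q are 3xN arrays of corresponding points; this is the classical
    closed-form solution used in part-alignment-style pose computation."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (Q - cq) @ (P - cp).T                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflections
    R = U @ D @ Vt
    t = cq - R @ cp
    return R, t

# Recover a known pose from synthetic correspondences (hypothetical data).
rng = np.random.default_rng(1)
P = rng.random((3, 6))
th = 0.3
R0 = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
t0 = np.array([[1.0], [2.0], [3.0]])
R, t = rigid_align(P, R0 @ P + t0)
```

The closed-form fit needs no iterative initialisation, which is consistent with the plug-and-play goal stated above.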

    A Multi-Sensor Platform for Wide-area Tracking

    Indoor tracking scenarios still face challenges in providing continuous tracking support in wide-area workplaces. This is especially the case in Augmented Reality, since such augmentations generally require exact full 6DOF pose measurements in order to continuously display 3D graphics from user-related viewpoints. Many single-sensor systems have been explored, but only few of them can track reliably in wide-area environments. We introduce a mobile multi-sensor platform to overcome the shortcomings of single-sensor systems. The platform is equipped with a detachable optical camera and a rigidly mounted odometric measurement system providing relative positions and orientations with respect to the ground plane. The camera is used for marker-based as well as marker-less (feature-based) inside-out tracking as part of a hybrid approach. We explain the principal tracking technologies in our competitive/cooperative fusion approach and outline possible enhancements for further development. This inside-out approach scales well with increasing tracking range, as opposed to stationary outside-in tracking.
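    As a minimal sketch of the fusion idea (an assumed scheme with hypothetical data, not the platform's actual filter): dead-reckon with odometric increments, and blend in an absolute camera fix whenever one is available, which bounds the odometric drift:

```python
import numpy as np

def fuse_track(odo_steps, cam_fixes, w_cam=0.3):
    """Blend relative odometry with intermittent absolute camera fixes.
    odo_steps: per-frame 2D position increments from the odometric system.
    cam_fixes: per-frame absolute 2D positions from the camera, or None
    when no visual fix is available (e.g. markers out of view)."""
    x = np.zeros(2)
    track = []
    for d, z in zip(odo_steps, cam_fixes):
        x = x + d                              # odometric prediction
        if z is not None:
            x = (1 - w_cam) * x + w_cam * np.asarray(z)  # camera correction
        track.append(x.copy())
    return track

# Odometry with a systematic 10% forward bias; the camera supplies an
# absolute fix only every other frame (all values hypothetical).
odo_steps = [np.array([1.1, 0.0])] * 10        # true step is (1, 0)
cam_fixes = [np.array([i + 1.0, 0.0]) if i % 2 == 0 else None
             for i in range(10)]
track = fuse_track(odo_steps, cam_fixes)
dead_reckoning_error = abs(10 * 1.1 - 10.0)    # 1.0 without fusion
```

Even intermittent absolute fixes keep the fused estimate closer to the true position than pure dead reckoning, which is the core benefit such a hybrid platform targets.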