
    Efficient Line and Patch Feature Characterization and Management for Real-time Camera Tracking

    One of the key problems of augmented reality is tracking the camera position and viewing direction in real time. Current vision-based systems mostly rely on the detection and tracking of fiducial markers. Some markerless approaches exist, based on 3D line models or calibrated reference images, but these methods require considerable manual preprocessing, which is impractical for the efficient development and design of industrial AR applications. This preprocessing overhead is addressed by developing vision-based tracking algorithms that require minimal effort for the preparation of reference data. A novel method for the automatic, view-dependent generation of line models in real time is presented. The tracking system only needs a polygonal model of a reference object, which is often available from the industrial construction process. Analysis-by-synthesis techniques, supported by graphics hardware, are used to link the virtual model to the real object. Point-based methods relying on optical-flow-based template tracking are developed for camera pose estimation in partially known scenes. Supported by robust reconstruction algorithms, a real-time tracking system for augmented reality applications is developed that can operate with only very limited prior knowledge of the scene. Robustness and real-time capability are further improved with a statistical feature management system based on machine learning techniques.
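    A minimal sketch of the analysis-by-synthesis idea behind the view-dependent line model generation, assuming the polygonal reference model has already been rendered into a depth buffer from the current camera pose estimate: depth discontinuities in that synthetic image mark silhouette and crease candidates for the automatically generated line model. The function name `depth_discontinuity_mask`, the toy depth buffer, and the threshold are illustrative assumptions, not the thesis implementation, and the rendering step itself (typically done on graphics hardware) is omitted.

```python
# Sketch: extract view-dependent edge candidates from a synthetic depth rendering.
import numpy as np

def depth_discontinuity_mask(depth: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """Mark pixels whose depth jumps sharply relative to a neighbour,
    i.e. silhouette/crease candidates for a view-dependent line model."""
    dz_x = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    dz_y = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    return (dz_x > threshold) | (dz_y > threshold)

if __name__ == "__main__":
    # Toy depth buffer: a near box rendered in front of a far background plane.
    depth = np.full((120, 160), 5.0, dtype=np.float32)
    depth[40:80, 60:110] = 1.0
    edges = depth_discontinuity_mask(depth)
    print("edge candidate pixels:", int(edges.sum()))
```

    In a full pipeline, such edge pixels would typically be back-projected, using the same depth buffer, into 3D line segments that form the view-dependent line model used for tracking.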

    Precise Depth Image Based Real-Time 3D Difference Detection

    3D difference detection is the task of verifying whether the 3D geometry of a real object exactly corresponds to a 3D model of this object. This thesis introduces real-time 3D difference detection with a hand-held depth camera. In contrast to previous work, with the proposed approach geometric differences can be detected in real time and from arbitrary viewpoints. The scan position can therefore be changed on the fly during the 3D scan, so the user can move closer to the object to inspect details or to bypass occlusions.

    The main research questions addressed by this thesis are:
    Q1: How can 3D differences be detected in real time and from arbitrary viewpoints using a single depth camera?
    Q2: Extending the first question, how can 3D differences be detected with high precision?
    Q3: What accuracy can be achieved with concrete setups of the proposed concept for real-time, depth-image-based 3D difference detection?

    This thesis answers Q1 by introducing a real-time approach for depth-image-based 3D difference detection. The real-time difference detection is based on an algorithm which maps the 3D measurements of a depth camera onto an arbitrary 3D model in real time by fusing computer vision (depth imaging and pose estimation) with a computer-graphics-based analysis-by-synthesis approach. The thesis then answers Q2 by providing solutions for enhancing the 3D difference detection accuracy, both through precise pose estimation and through reduction of depth measurement noise. A precise variant of the 3D difference detection concept is proposed, which combines two main aspects. First, the precision of the depth camera's pose estimation is improved by coupling the depth camera with a very precise coordinate measuring machine. Second, measurement noise in the captured depth images is reduced, and missing depth information is filled in, by extending the 3D difference detection with 3D reconstruction.

    The accuracy of the proposed 3D difference detection is assessed in a quantitative evaluation, which answers Q3. The accuracy is evaluated both for the basic setup and for the variants that focus on high precision. The evaluation on real-world data covers the accuracy that can be achieved both with a time-of-flight camera (SwissRanger 4000) and with a structured-light depth camera (Kinect). With the basic setup and the structured-light depth camera, differences of 8 to 24 millimeters can be detected from one meter measurement distance. With the enhancements proposed for precise 3D difference detection, differences of 4 to 12 millimeters can be detected from the same distance using the same depth camera.

    By solving the challenges described by the three research questions, this thesis provides a solution for precise, real-time 3D difference detection based on depth images. With the proposed approach, dense 3D differences can be detected in real time and from arbitrary viewpoints using a single depth camera. Furthermore, by coupling the depth camera with a coordinate measuring machine and by integrating 3D reconstruction into the 3D difference detection, 3D differences can be detected in real time and with high precision.
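    A minimal sketch of the per-pixel comparison behind depth-image-based 3D difference detection, assuming the reference 3D model has already been rendered into a synthetic depth image from the depth camera's estimated pose (the analysis-by-synthesis step): measured depth is compared against the rendered depth pixel by pixel, and deviations beyond a tolerance count as geometric differences. The function name `detect_differences`, the tolerance value, and the toy data are illustrative assumptions, not the thesis implementation.

```python
# Sketch: per-pixel comparison of measured vs. model-rendered depth (in metres).
import numpy as np

def detect_differences(measured: np.ndarray,
                       rendered: np.ndarray,
                       tolerance: float = 0.008) -> np.ndarray:
    """Return a signed per-pixel deviation map; pixels whose absolute deviation
    exceeds `tolerance` count as geometric differences. Invalid depth readings
    (depth <= 0) are masked out as NaN."""
    deviation = measured - rendered
    deviation[np.abs(deviation) <= tolerance] = 0.0   # within tolerance: no difference
    deviation[measured <= 0.0] = np.nan               # mask missing depth readings
    return deviation

if __name__ == "__main__":
    rendered = np.full((480, 640), 1.0, dtype=np.float32)   # model surface at 1 m
    measured = rendered.copy()
    measured[200:240, 300:360] += 0.02   # region where the real surface lies 20 mm behind the model
    dev = detect_differences(measured, rendered)
    print("difference pixels:", int(np.count_nonzero(np.nan_to_num(dev))))
    print("max deviation [mm]:", float(np.nanmax(np.abs(dev)) * 1000.0))
```

    The sign of the deviation distinguishes missing material (measured surface behind the model) from added material (measured surface in front of the model).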