    Investigation into Self-calibration Methods for the Vexcel UltraCam D Digital Aerial Camera

    This paper provides an investigation into the camera calibration of a Vexcel UltraCam D digital aerial camera, undertaken as part of the EuroSDR Digital Camera Calibration project. It presents results from two flights flown over a test site at Fredrikstad, Norway, using established camera calibration techniques, and it proposes an alternative approach. The "new" multi-cone digital camera systems are geometrically complex: the image used for photogrammetric analysis is assembled from a number of images produced by a cluster of camera cones and possibly several groups of CCD arrays. The resultant image is therefore not based on a traditional single-lens/single-focal-plane camera geometry, but depends on joining images from multiple lenses (different perspectives), handling groups of focal planes, and matching overlapping image areas. Some camera calibration requirements, such as stability, can only be determined through long-term experience and research, while others, such as the calibration parameters themselves, can be determined through short-term investigation. The methodology used in this research for assessing the camera calibration is based on self-calibration using the collinearity equations. The analysis was undertaken to identify any systematic patterns in the resulting image residuals. By identifying and quantifying the systematic residuals, a new calibration method is proposed that recomputes the bundle adjustment based on the analysis of the systematic residual patterns. Only very small systematic patterns could be visually identified, in small areas of the images. The existing self-calibration methods and the new approach both made a small improvement to the results. For the low flight, the new calibration approach was particularly beneficial, improving the RMSE in Z and reducing image residuals; however, it was less successful at improving the high-flown results. The approach has shown potential, but it needs further investigation to fully assess its capabilities.
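    For reference, self-calibration of this kind is built around the standard collinearity equations, extended with correction terms (often written $\Delta x$, $\Delta y$) that absorb systematic image errors; the sketch below is the textbook form, not the paper's specific parameter set:

    $$x - x_0 + \Delta x = -f\,\frac{r_{11}(X - X_c) + r_{12}(Y - Y_c) + r_{13}(Z - Z_c)}{r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}$$
    $$y - y_0 + \Delta y = -f\,\frac{r_{21}(X - X_c) + r_{22}(Y - Y_c) + r_{23}(Z - Z_c)}{r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}$$

    Here $(x_0, y_0)$ is the principal point, $f$ the focal length, $r_{ij}$ the elements of the image orientation matrix, and $(X_c, Y_c, Z_c)$ the perspective centre. The bundle adjustment estimates these jointly, and the residual analysis described above looks for structure left over in $\Delta x$, $\Delta y$.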

    Monitoring Activities from Multiple Video Streams: Establishing a Common Coordinate Frame

    Passive monitoring of large sites typically requires coordination between multiple cameras, which in turn requires methods for automatically relating events between distributed cameras. This paper tackles the problem of self-calibration of multiple cameras which are very far apart, using feature correspondences to determine the camera geometry. The key problem is finding such correspondences. Since the camera geometry and photometric characteristics vary considerably between images, one cannot use brightness and/or proximity constraints. Instead we apply planar geometric constraints to moving objects in the scene in order to align the scene's ground plane across multiple views. We do not assume synchronized cameras, and we show that enforcing geometric constraints enables us to align the tracking data in time. Once we have recovered the homography which aligns the planar structure in the scene, we can compute from the homography matrix the 3D position of the plane and the relative camera positions. This in turn enables us to recover a homography matrix which maps the images to an overhead view. We demonstrate this technique in two settings: a controlled lab setting where we test the effects of errors in internal camera calibration, and an uncontrolled, outdoor setting in which the full procedure is applied to external camera calibration and ground plane recovery. In spite of noise in the internal camera parameters and image data, the system successfully recovers both planar structure and relative camera positions in both settings.
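    A minimal sketch of the two core steps with OpenCV, assuming corresponding ground-plane points (e.g. tracked object positions) and known intrinsics K; the paper's own correspondence-finding and time-alignment machinery is not reproduced here:

    import numpy as np
    import cv2

    def align_ground_plane(pts1, pts2, K):
        """pts1, pts2: Nx2 float arrays of corresponding ground-plane points
        in two views; K: 3x3 intrinsic matrix (assumed known here)."""
        # Inter-view homography induced by the scene's ground plane.
        H, inliers = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
        # Decompose H into candidate relative rotations, translations,
        # and plane normals (up to a four-fold ambiguity).
        n_solutions, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
        return H, Rs, ts, normals

    The decomposition returns up to four candidate solutions; the physically valid one is selected with visibility (cheirality) constraints, after which the overhead-view homography follows from the recovered plane.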

    Camera Motion Estimation for Multi-Camera Systems

    The estimation of the motion of multi-camera systems is one of the most important tasks in computer vision research. Recently, several issues have been raised about general camera models and multi-camera systems. The use of many cameras as a single camera has been studied [60], and the epipolar geometry constraints of general camera models have been theoretically derived. Methods for calibration, including a self-calibration method for general camera models, have been studied [78, 62]. Multi-camera systems are an example of practically implementable general camera models, and they are widely used in many applications nowadays because of both the low cost of digital charge-coupled device (CCD) cameras and the high resolution of multiple images from wide fields of view. To our knowledge, no research has been conducted on the relative motion of multi-camera systems with non-overlapping views to obtain a geometrically optimal solution.

    In this thesis, we solve the camera motion problem for multi-camera systems by using linear methods and convex optimization techniques, and we make five substantial and original contributions to the field of computer vision. ...
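    As background only (the thesis generalizes this to rigidly coupled cameras with non-overlapping views), the single-camera building block is relative pose from the essential matrix; a sketch with OpenCV:

    import numpy as np
    import cv2

    def single_camera_relative_motion(pts1, pts2, K):
        """Classic two-view relative pose from matched image points (Nx2)."""
        E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                       method=cv2.RANSAC, prob=0.999,
                                       threshold=1.0)
        # Disambiguate the four (R, t) factorizations of E by cheirality;
        # t is recovered only up to scale.
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t

    In a multi-camera rig, the known extrinsics between the cameras couple these per-camera constraints, which is what makes it possible to recover the metric scale of the motion even without overlapping views.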

    Single View 3D Reconstruction under an Uncalibrated Camera and an Unknown Mirror Sphere

    In this paper, we develop a novel self-calibration method for single view 3D reconstruction using a mirror sphere. Unlike other mirror-sphere-based reconstruction methods, our method requires neither the intrinsic parameters of the camera nor the position and radius of the sphere to be known. Based on an eigendecomposition of the matrix representing the conic image of the sphere, and by enforcing a repeated-eigenvalue constraint, we derive an analytical solution for recovering the focal length of the camera given its principal point. We then introduce a robust algorithm for estimating both the principal point and the focal length of the camera by minimizing the differences between focal lengths estimated from multiple images of the sphere. We also present a novel approach for estimating both the principal point and the focal length of the camera in the case of just one single image of the sphere. With the estimated camera intrinsic parameters, the position(s) of the sphere can be readily retrieved from the eigendecomposition(s), and a scaled 3D reconstruction follows. Experimental results on both synthetic and real data are presented, which demonstrate the feasibility and accuracy of our approach. © 2016 IEEE.
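    The repeated-eigenvalue constraint can be illustrated numerically: the back-projected cone of a sphere's conic image is right circular, so in normalized coordinates the conic matrix must have two equal same-sign eigenvalues. The paper derives an analytical solution; the sketch below merely scans focal lengths for the one satisfying the constraint, assuming the conic matrix C (from an ellipse fit, in pixel coordinates) and the principal point (u0, v0) are given:

    import numpy as np

    def focal_from_sphere_conic(C, u0, v0, f_range=(300.0, 5000.0), steps=2000):
        """C: 3x3 symmetric conic matrix of the sphere's silhouette."""
        best_f, best_err = None, np.inf
        for f in np.linspace(f_range[0], f_range[1], steps):
            K = np.array([[f, 0.0, u0], [0.0, f, v0], [0.0, 0.0, 1.0]])
            Cn = K.T @ C @ K                 # conic in normalized coordinates
            lam = np.sort(np.linalg.eigvalsh(Cn))
            if lam[0] * lam[2] >= 0:         # need signature (2,1) up to sign
                continue
            # The two same-sign eigenvalues must coincide for a circular cone.
            pair = lam[1:] if lam[1] > 0 else lam[:2]
            err = abs(pair[0] - pair[1]) / (abs(pair[0]) + abs(pair[1]))
            if err < best_err:
                best_f, best_err = f, err
        return best_f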

    3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection

    Cameras are a crucial exteroceptive sensor for self-driving cars, as they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes, such as visual navigation and obstacle detection. We can use a surround multi-camera system to cover the full 360-degree field of view around the car. In this way, we avoid blind spots, which could otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we utilize fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of the availability of multiple cameras rather than treat each camera individually. In addition, processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project. This project seeks to enable automated valet parking for self-driving cars. Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, and detect obstacles based on real-time depth map extraction.
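    The dedicated fisheye handling mentioned above stems from the projection model: a sketch of the equidistant fisheye model with polynomial distortion (the form used by OpenCV's fisheye module, given here only as an illustration, not as the V-Charge calibration itself):

    import numpy as np

    def fisheye_project(X, K, D):
        """Project camera-frame 3D points X (Nx3) using intrinsics K (3x3)
        and distortion coefficients D = (k1, k2, k3, k4)."""
        x, y, z = X[:, 0], X[:, 1], X[:, 2]
        r = np.hypot(x, y)
        theta = np.arctan2(r, z)                 # angle from the optical axis
        k1, k2, k3, k4 = D
        theta_d = theta * (1 + k1*theta**2 + k2*theta**4
                             + k3*theta**6 + k4*theta**8)
        scale = np.where(r > 1e-12, theta_d / r, 1.0)   # guard on-axis points
        u = K[0, 0] * x * scale + K[0, 2]
        v = K[1, 1] * y * scale + K[1, 2]
        return np.stack([u, v], axis=1)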

    Hybrid Focal Stereo Networks for Pattern Analysis in Homogeneous Scenes

    In this paper we address the problem of multiple camera calibration in the presence of a homogeneous scene, and without the possibility of employing calibration-object-based methods. The proposed solution exploits salient features present in a larger field of view; but instead of employing active vision, we replace the cameras with stereo rigs featuring a long-focal analysis camera as well as a short-focal registration camera. Thus, we are able to propose an accurate solution which does not require intrinsic variation models, as in the case of zooming cameras. Moreover, the availability of the two views simultaneously in each rig allows for pose re-estimation between rigs as often as necessary. The algorithm has been successfully validated in an indoor setting, as well as on a difficult scene featuring a highly dense pilgrim crowd in Makkah.

    A multi-projector CAVE system with commodity hardware and gesture-based interaction

    Spatially-immersive systems such as CAVEs provide users with surrounding worlds by projecting 3D models on multiple screens around the viewer. Compared to alternative immersive systems such as HMDs, CAVE systems are a powerful tool for the collaborative inspection of virtual environments, due to better use of peripheral vision, less sensitivity to tracking errors, and greater communication possibilities among users. Unfortunately, traditional CAVE setups require sophisticated equipment, including stereo-ready projectors and tracking systems, with high acquisition and maintenance costs. In this paper we present the design and construction of a passive-stereo, four-wall CAVE system based on commodity hardware. Our system works with any mix of a wide range of projector models that can be replaced independently at any time, and achieves high resolution and brightness at minimum cost. The key ingredients of our CAVE are a self-calibration approach that guarantees continuity across the screens, as well as a gesture-based interaction approach based on a clever combination of skeletal data from multiple Kinect sensors.
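    One plausible building block for combining skeletons from several Kinects (an assumption for illustration; the paper's actual fusion scheme is not detailed in the abstract) is to rigidly align corresponding joint positions into a common frame with the Kabsch algorithm:

    import numpy as np

    def rigid_align(A, B):
        """Find R, t minimizing ||(A @ R.T + t) - B|| for Nx3 point sets,
        e.g. the same skeleton joints seen by two different Kinects."""
        ca, cb = A.mean(axis=0), B.mean(axis=0)
        H = (A - ca).T @ (B - cb)
        U, S, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cb - R @ ca
        return R, t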

    Self-correction of 3D reconstruction from multi-view stereo images

    We present a self-correction approach to improving the 3D reconstruction of a multi-view 3D photogrammetry system; it is able to repair a reconstructed 3D surface damaged by depth discontinuities. Due to self-occlusion, multi-view range images have to be acquired and integrated into a watertight, non-redundant mesh model in order to cover the extended surface of an imaged object. The integrated surface often suffers from “dent” artifacts produced by depth discontinuities in the multi-view range images. In this paper we propose a novel approach to correcting the integrated 3D surface such that the dent artifacts can be repaired automatically. We show examples of 3D reconstruction to demonstrate the improvement that can be achieved by the self-correction approach. This self-correction approach can be extended to integrate range images obtained from alternative range capture devices.