4 research outputs found

    Auto-Calibration and Three-Dimensional Reconstruction for Zooming Cameras

    Get PDF
    This dissertation proposes new algorithms to recover the calibration parameters and 3D structure of a scene, using 2D images taken by uncalibrated stationary zooming cameras. This is a common configuration, usually encountered in surveillance camera networks, stereo camera systems, and event monitoring vision systems. This problem is known as camera auto-calibration (also called self-calibration), and the motivation behind this work is to obtain the Euclidean three-dimensional reconstruction and metric measurements of the scene using only the captured images. Under this configuration, the problem of auto-calibrating zooming cameras differs from the classical auto-calibration problem of a moving camera in two major respects. First, the camera intrinsic parameters change due to zooming. Second, because the cameras are stationary in our case, classical motion constraints, such as pure translation, cannot be used. To reduce the non-linear complexity of this problem, i.e., auto-calibration of zooming cameras, we have followed a geometric stratification approach. In particular, we have taken advantage of the movement of the camera center that results from zooming to locate the plane at infinity and, consequently, to obtain an affine reconstruction. Then, under the assumption that typical cameras have rectangular or square pixels, the camera intrinsic parameters can be calculated, leading to the recovery of the Euclidean 3D structure. Being linear, the proposed algorithms are easily extended to an arbitrary number of images and cameras. Furthermore, we have devised a sufficient constraint for detecting parallel scene planes, useful information for solving other computer vision problems.
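    The stratified upgrade outlined in this abstract (affine reconstruction via the plane at infinity, then metric via the intrinsics) can be sketched with the standard stratification machinery; the intrinsics K and plane-at-infinity coordinates p below are made-up illustrative values, not the dissertation's actual results.

    ```python
    import numpy as np

    def metric_upgrade_homography(K, p):
        """4x4 homography H for the projective-to-metric upgrade.
        Standard form: H = [[K, 0], [-p^T K, 1]], where the plane at
        infinity in the projective frame is pi_inf = (p^T, 1)^T.
        Cameras upgrade as P_M = P_P @ H, points as X_M = inv(H) @ X_P."""
        H = np.eye(4)
        H[:3, :3] = K
        H[3, :3] = -p @ K
        return H

    # Illustrative intrinsics with square pixels, as assumed in the abstract.
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    p = np.array([0.01, -0.02, 0.005])  # hypothetical plane-at-infinity coords

    H = metric_upgrade_homography(K, p)

    # Sanity check: planes map as pi_M = H^T pi_P, so the located plane at
    # infinity should land on its canonical position (0, 0, 0, 1).
    pi_inf_metric = H.T @ np.append(p, 1.0)
    ```

    The key point is that once zooming has revealed the plane at infinity, the remaining upgrade is linear in the unknown intrinsics, which is what makes the multi-camera extension straightforward.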

    The Extraction and Use of Image Planes for Three-dimensional Metric Reconstruction

    Get PDF
    The three-dimensional (3D) metric reconstruction of a scene from two-dimensional images is a fundamental problem in Computer Vision. The major bottleneck in the process of retrieving such structure lies in the task of recovering the camera parameters. These parameters can be calculated either through a pattern-based calibration procedure, which requires accurate knowledge of the scene, or using a more flexible approach, known as camera autocalibration, which exploits point correspondences across images. While pattern-based calibration requires the presence of a calibration object, autocalibration constraints are typically cast as nonlinear optimization problems that are sensitive to both image noise and initialization. In addition, autocalibration fails for certain camera motions. To overcome these problems, we propose to combine scene and autocalibration constraints and address in this thesis (a) the problem of extracting geometric information about the scene from uncalibrated images, (b) the problem of obtaining a robust estimate of the affine calibration of the camera, and (c) the problem of upgrading and refining the affine calibration into a metric one. In particular, we propose a method for identifying the major planar structures in a scene from images and another method for recognizing pairs of parallel planes whenever these are available. The identified parallel planes are then used to obtain a robust estimate of both the affine and metric 3D structure of the scene without resorting to the traditional, error-prone calculation of vanishing points. We also propose a refinement method which, unlike existing ones, is capable of simultaneously incorporating plane-parallelism and perpendicularity constraints in the autocalibration process. Our experiments demonstrate that the proposed methods are robust to image noise and provide satisfactory results.
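    As an illustration of how parallelism and perpendicularity constraints can be folded into a single refinement step, the sketch below jointly adjusts estimated plane normals with a nonlinear least-squares solver. The parametrization, the synthetic data, and the use of SciPy are illustrative assumptions, not the thesis's actual formulation.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def residuals(x, parallel_pairs, perp_pairs):
        """Residuals enforcing the declared geometric constraints."""
        n = x.reshape(-1, 3)
        n = n / np.linalg.norm(n, axis=1, keepdims=True)
        res = []
        for i, j in parallel_pairs:
            res.extend(np.cross(n[i], n[j]))  # zero iff normals are parallel
        for i, j in perp_pairs:
            res.append(n[i] @ n[j])           # zero iff normals are perpendicular
        return np.array(res)

    # Noisy initial normals for three hypothetical scene planes:
    # planes 0 and 1 should be parallel, plane 2 perpendicular to plane 0.
    n_init = np.array([[ 0.05, 0.02,  1.00],
                       [-0.03, 0.04,  1.00],
                       [ 1.00, 0.05, -0.02]])

    sol = least_squares(residuals, n_init.ravel(),
                        args=([(0, 1)], [(0, 2)]))
    n_refined = sol.x.reshape(-1, 3)
    n_refined /= np.linalg.norm(n_refined, axis=1, keepdims=True)
    ```

    Expressing both constraint types as residuals of one problem is what lets them be imposed simultaneously rather than in separate passes.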

    Camera Self-calibration with Parallel Screw Axis Motion by Intersecting Imaged Horopters

    No full text
    http://www.springerlink.com/content/wx70543p17h660wq/
    We present a closed-form method for the self-calibration of a camera (intrinsic and extrinsic parameters) from at least three images acquired with parallel screw axis motion, i.e., the camera rotates about parallel axes while performing general translations. The considered camera motion is more general than pure rotation and planar motion, which are not always easy to produce. The proposed solution is nearly as simple as the existing methods for those motions, and it has been evaluated using both synthetic and real data from acquired images.
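    The parallel screw axis motion model assumed by this paper (rotations about parallel axes combined with general translations) can be sketched as follows; the axis, angles, and translations are illustrative values, and the code only generates poses obeying that model, not the calibration method itself.

    ```python
    import numpy as np

    def rotation_about_axis(axis, theta):
        """Rodrigues' formula: rotation of angle theta about a unit axis."""
        axis = axis / np.linalg.norm(axis)
        S = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])
        return np.eye(3) + np.sin(theta) * S + (1.0 - np.cos(theta)) * (S @ S)

    axis = np.array([0.0, 1.0, 0.0])  # common direction of all rotation axes

    # Three poses: rotations about parallel axes plus general translations,
    # which is exactly the acquisition scenario the paper requires.
    poses = []
    for theta, t in [(0.0, [0.0, 0.0, 0.0]),
                     (0.3, [1.0, 0.2, 0.5]),
                     (0.7, [2.0, -0.1, 1.0])]:
        R = rotation_about_axis(axis, theta)
        poses.append((R, np.array(t)))
    ```

    Because every rotation shares the same axis direction while the translations stay unconstrained, this motion is strictly more general than planar motion, as the abstract notes.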

    Development of a Stereo-System Self-Calibration Algorithm Based on UV-Disparity: Application to Visual Odometry

    Get PDF
    The aim of this project is to present a solution that automatically calibrates the extrinsic parameters of a stereo camera pair whose purpose is to trace the vehicle's trajectory by means of a visual odometry algorithm. The calibration must be performed at every camera frame in order to minimize the errors caused by vehicle vibrations while it is in motion; thanks to this, the results are more accurate and robust. To this end, the formulas relating the environment to the camera coordinate system are introduced, so that the cameras can be calibrated before the algorithm computes the vehicle's trajectory. The algorithm has been designed in CUDA (Compute Unified Device Architecture) in order to meet the low computation time required by the task. The implemented code will be integrated into the intelligent-vehicle prototype of UC3M (Universidad Carlos III de Madrid), ivvi (Intelligent Vehicle Based On Visual Information) 2.0, to form a navigation system that depends only on the vehicle's on-board sensors.
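    As background for the UV-disparity technique this project relies on: in the v-disparity image (a per-row histogram of disparities), a flat road projects to a straight line, and fitting that line exposes the pitch-related extrinsics of the stereo rig. The sketch below illustrates this on synthetic data; it is a generic illustration of the idea, not the project's CUDA implementation.

    ```python
    import numpy as np

    def v_disparity(disparity, max_disp):
        """Histogram of integer disparities for each image row."""
        rows = disparity.shape[0]
        vmap = np.zeros((rows, max_disp + 1), dtype=int)
        for v in range(rows):
            d = disparity[v]
            d = d[(d >= 0) & (d <= max_disp)].astype(int)
            np.add.at(vmap[v], d, 1)  # accumulate counts per disparity bin
        return vmap

    # Synthetic flat road: disparity grows linearly with the row index v.
    rows, cols, max_disp = 100, 80, 64
    v = np.arange(rows)[:, None]
    disparity = np.tile(0.5 * v, (1, cols))  # d = 0.5 * v for every column

    vmap = v_disparity(disparity, max_disp)

    # Fit the dominant line d = a*v + b through each row's strongest bin;
    # its slope and intercept encode the road-plane geometry.
    peak_d = vmap.argmax(axis=1)
    a, b = np.polyfit(np.arange(rows), peak_d, 1)
    ```

    In a real system the line would be fitted robustly (e.g. with RANSAC or a Hough transform) since obstacles also leave traces in the v-disparity map, but the principle of reading the road plane off a single line fit is the same.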