
    A Multi Camera and Multi Laser Calibration Method for 3D Reconstruction of Revolution Parts

    This paper describes a method for calibrating multi-camera and multi-laser 3D triangulation systems, particularly those using Scheimpflug adapters. Under this configuration, the focus plane of the camera is located at the laser plane, making it difficult to use traditional calibration methods such as chessboard-pattern-based strategies. Our method uses a conical calibration object whose intersections with the laser planes generate stepped line patterns that can be used to calculate the camera-laser homographies. The calibration object has been designed to calibrate scanners for revolving surfaces, but it can easily be extended to linear setups. The experiments carried out show that the proposed system has a precision of 0.1 mm.
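
    The homography estimation itself is standard even though the paper's implementation is not shown here. As a minimal sketch, assuming hypothetical correspondences between laser-line points detected in the image and their known coordinates in the laser plane (derived from the cone's geometry), the camera-laser homography could be computed with OpenCV:

```python
import numpy as np
import cv2

# Hypothetical correspondences: pixel coordinates of points detected on the
# stepped laser-line pattern, and their known 2D positions in the laser plane.
image_pts = np.array([[412.3, 188.7], [455.1, 190.2], [497.8, 191.9],
                      [411.9, 240.4], [454.6, 242.0], [497.2, 243.8]])
laser_pts = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0],
                      [0.0, 10.0], [10.0, 10.0], [20.0, 10.0]])  # millimetres

# Estimate the camera-laser homography; RANSAC guards against outlier detections.
H, inliers = cv2.findHomography(image_pts, laser_pts, cv2.RANSAC, 1.0)

# Map a newly detected laser pixel into metric laser-plane coordinates.
pixel = np.array([[[430.0, 215.0]]], dtype=np.float64)
print(cv2.perspectiveTransform(pixel, H))
```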

    Investigation into Self-calibration Methods for the Vexcel UltraCam D Digital Aerial Camera

    This paper provides an investigation into the camera calibration of a Vexcel UltraCam D digital aerial camera, undertaken as part of the EuroSDR Digital Camera Calibration project. It presents results from two flights flown over a test site at Fredrikstad, Norway, using established camera calibration techniques, and proposes an alternative approach. The "new" multi-cone digital camera systems are geometrically complex. The image used for photogrammetric analysis is made up of a number of images produced by a cluster of camera cones and possibly various groups of CCD arrays. The resultant image is therefore not based on traditional single-lens/focal-plane camera geometry, but depends on joining images from multiple lenses (different perspectives), handling groups of focal planes, and matching overlapping image areas. Some requirements of camera calibration, such as stability, can only be determined through long-term experience and research, while others, such as the calibration parameters, can be determined through short-term investigation. The methodology used in this research for assessing the camera calibration is based on self-calibration using the collinearity equations. The analysis was undertaken to identify any systematic patterns in the resulting image residuals. By identifying and quantifying the systematic residuals, a new calibration method is proposed that recomputes the bundle adjustment based on the analysis of the systematic residual patterns. Only very small systematic patterns could be visually identified, in small areas of the images. The existing self-calibration methods and the new approach each made a small improvement to the results. The new calibration approach was particularly beneficial for the low flight, improving the RMSE in Z and reducing image residuals; however, it was less successful at improving the results of the high flight. The approach has shown potential but needs further investigation to fully assess its capabilities.
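
    For reference, the self-calibration the paper builds on is based on the collinearity equations extended with additional parameters. In one common convention (an assumption, since the paper's exact formulation is not reproduced in the abstract), they read:

```latex
\begin{align}
x &= x_0 + \Delta x - f\,\frac{r_{11}(X - X_c) + r_{12}(Y - Y_c) + r_{13}(Z - Z_c)}
                             {r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}, \\
y &= y_0 + \Delta y - f\,\frac{r_{21}(X - X_c) + r_{22}(Y - Y_c) + r_{23}(Z - Z_c)}
                             {r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)},
\end{align}
```

    where f is the focal length, (x_0, y_0) the principal point, r_ij the entries of the world-to-camera rotation matrix, (X_c, Y_c, Z_c) the perspective centre, and (Delta x, Delta y) the additional self-calibration terms fitted to absorb systematic image residuals.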

    A visual servoing path-planning strategy for cameras obeying the unified model

    Recently, a unified camera model has been introduced in visual control systems to describe conventional perspective cameras, fisheye cameras, and catadioptric systems through a single mathematical model. In this paper, a path-planning strategy for visual servoing is proposed for any camera obeying this unified model. The proposed strategy is based on the projection of the available image points onto a virtual plane. This has two benefits. First, it allows one to perform camera pose estimation and 3D object reconstruction using methods for conventional cameras that are not valid for other camera types. Second, it allows one to perform image path-planning for multi-constraint satisfaction using a simplified but equivalent projection model, which in this paper is addressed by introducing polynomial parametrizations of the rotation and translation. The planned image trajectory is then tracked by an IBVS controller. The proposed strategy is validated through simulations with image noise and calibration errors typical of real experiments. It is worth remarking that visual servoing path-planning for non-conventional perspective cameras has not yet been proposed in the literature. © 2010 IEEE. Part of the 2010 IEEE Multi-Conference on Systems and Control; published in Proceedings of the 2010 IEEE International Symposium on Computer-Aided Control System Design (CACSD), Yokohama, Japan, 8-10 September 2010, p. 1795-180
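
    As a minimal sketch of the unified projection model (not the paper's code; names and intrinsics are illustrative), a 3D point is first normalized onto a unit sphere and then projected through a point shifted by the mirror parameter xi along the optical axis; xi = 0 recovers a conventional perspective camera:

```python
import numpy as np

def unified_project(X, xi, K):
    """Project a 3D point under the unified camera model.

    xi = 0 gives a conventional perspective camera, xi = 1 a parabolic
    catadioptric system; intermediate values cover fisheye-like optics.
    """
    rho = np.linalg.norm(X)                  # radius used to place X on the unit sphere
    m = np.array([X[0] / (X[2] + xi * rho),  # projection through (0, 0, -xi)
                  X[1] / (X[2] + xi * rho),
                  1.0])
    u = K @ m                                # apply the intrinsic parameters
    return u[:2]

K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])              # hypothetical intrinsics
X = np.array([0.5, -0.2, 2.0])
print(unified_project(X, xi=0.0, K=K))       # perspective projection
print(unified_project(X, xi=1.0, K=K))       # catadioptric projection
```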

    3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection

    Cameras are a crucial exteroceptive sensor for self-driving cars: they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes, such as visual navigation and obstacle detection. A surround multi-camera system can cover the full 360-degree field of view around the car, avoiding blind spots that could otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we use fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of the availability of multiple cameras rather than treating each camera individually; in addition, processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project, which seeks to enable automated valet parking for self-driving cars. Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, and detect obstacles based on real-time depth map extraction.
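
    The V-Charge calibration model itself is not reproduced here; the equidistant model r = f * theta below is only one common fisheye model, shown to illustrate why pinhole pipelines need adaptation: the image radius grows with the angle from the optical axis, so a field of view of 180 degrees or more stays finite in the image.

```python
import numpy as np

def fisheye_project(X, f, cx, cy):
    """Equidistant fisheye projection r = f * theta (illustrative model only)."""
    x, y, z = X
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    r = f * theta                           # radial distance in the image
    phi = np.arctan2(y, x)                  # azimuth is preserved
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

# A point almost 90 degrees off-axis still projects to a finite pixel.
print(fisheye_project(np.array([1.0, 0.0, 0.1]), f=300.0, cx=640.0, cy=480.0))
```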

    Cross-calibration of Time-of-flight and Colour Cameras

    Time-of-flight cameras provide depth information, which is complementary to the photometric appearance of the scene in ordinary images. It is desirable to merge the depth and colour information in order to obtain a coherent scene representation. However, the individual cameras will have different viewpoints, resolutions and fields of view, which means that they must be mutually calibrated. This paper presents a geometric framework for this multi-view and multi-modal calibration problem. It is shown that three-dimensional projective transformations can be used to align depth and parallax-based representations of the scene, with or without Euclidean reconstruction. A new evaluation procedure is also developed; this allows the reprojection error to be decomposed into calibration and sensor-dependent components. The complete approach is demonstrated on a network of three time-of-flight and six colour cameras. The applications of such a system to a range of automatic scene-interpretation problems are discussed.
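
    As a rough illustration of how the cross-calibrated sensors can be fused (with a plain Euclidean rigid transform standing in for the paper's more general three-dimensional projective transformations, and all parameter names hypothetical), a time-of-flight pixel can be back-projected to 3D and reprojected into a colour camera:

```python
import numpy as np

def reproject_depth_to_colour(u, v, depth, K_tof, K_rgb, R, t):
    """Map a ToF pixel (u, v) with metric depth into the colour image."""
    ray = np.linalg.inv(K_tof) @ np.array([u, v, 1.0])  # back-project the pixel
    X_tof = depth * ray                                 # 3D point in the ToF frame
    X_rgb = R @ X_tof + t                               # move to the colour frame
    p = K_rgb @ X_rgb                                   # project with colour intrinsics
    return p[0] / p[2], p[1] / p[2]

# Hypothetical intrinsics and a 5 cm baseline along the x axis.
K = np.array([[580.0, 0.0, 320.0], [0.0, 580.0, 240.0], [0.0, 0.0, 1.0]])
print(reproject_depth_to_colour(100.0, 120.0, 2.5, K, K,
                                np.eye(3), np.array([0.05, 0.0, 0.0])))
```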

    Forward Vehicle Collision Warning Based on Quick Camera Calibration

    Forward Vehicle Collision Warning (FCW) is one of the most important functions for autonomous vehicles. Vehicle detection and distance measurement are its core components, requiring accurate localization and estimation. In this paper, we propose a simple but efficient forward vehicle collision warning framework that combines monocular distance measurement with precise vehicle detection. To obtain the forward vehicle distance, we use a quick camera calibration method that needs only three physical points to calibrate the relevant camera parameters. For forward vehicle detection, a multi-scale detection algorithm that uses the calibration result as a distance prior is proposed to improve precision. Extensive experiments are conducted on our real-scene dataset, and the results demonstrate the effectiveness of the proposed framework.
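
    The paper's three-point calibration is not detailed in the abstract; as a stand-in, the textbook flat-road pinhole approximation below shows how, once focal length, camera height, and horizon row are calibrated, a monocular detector's bounding-box bottom edge yields a forward distance (all values hypothetical):

```python
def ground_plane_distance(y_bottom, f, camera_height, horizon_y):
    """Distance to a vehicle from the image row of its bottom edge.

    Flat-road pinhole approximation: distance = f * h / (y_bottom - horizon_y),
    with f in pixels and the camera height h in metres.
    """
    dy = y_bottom - horizon_y
    if dy <= 0:
        raise ValueError("the bounding box bottom must lie below the horizon")
    return f * camera_height / dy

# 1200 px focal length, camera 1.4 m above the road, horizon at row 540,
# vehicle's bottom edge detected at row 690 -> roughly 11.2 m ahead.
print(ground_plane_distance(690.0, f=1200.0, camera_height=1.4, horizon_y=540.0))
```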