
    Building an Omnidirectional 3D Color Laser Ranging System through a Novel Calibration Method

    3D color laser ranging technology plays a crucial role in many applications. This paper develops a new omnidirectional 3D color laser ranging system consisting of a 2D laser rangefinder (LRF), a color camera, and a rotating platform. Both the 2D LRF and the camera rotate with the platform to collect line point clouds and images synchronously. The line point clouds and images are then fused into a 3D color point cloud using a novel calibration method for the 2D LRF and camera based on an improved checkerboard pattern with rectangular holes. During calibration, boundary constraints and mean approximation are used to accurately compute the centers of the rectangular holes from the corrected raw sensor data. The data association between the 2D LRF and the camera is then established directly to determine their geometric mapping relationship. These steps make the calibration process simple, accurate, and reliable. Experiments show that the proposed calibration method is accurate, robust to noise, and suitable for different geometric structures, and that the developed 3D color laser ranging system performs well in both indoor and outdoor scenes.
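    The geometry behind such a system is compact enough to sketch. Below is a minimal illustration, not the authors' implementation, of how one scan line from a rotating 2D LRF can be lifted into 3D and colored by a calibrated camera; the extrinsics R_cl, t_cl and intrinsics K stand in for whatever the proposed calibration method would produce.

```python
# Minimal sketch: fuse rotating 2D LRF scans and camera images into a
# 3D color point cloud. R_cl, t_cl (laser-to-camera extrinsics) and the
# camera matrix K are assumed given, e.g. from a calibration like the
# one proposed in the paper.
import numpy as np

def scan_to_3d(ranges, beam_angles, platform_angle):
    """Lift one 2D LRF scan line into 3D using the platform rotation.

    ranges:         (N,) distances measured along each beam [m]
    beam_angles:    (N,) in-plane beam angles of the 2D LRF [rad]
    platform_angle: platform rotation about the vertical axis [rad]
    """
    # Points in the LRF's 2D scanning plane (z = 0).
    x = ranges * np.cos(beam_angles)
    y = ranges * np.sin(beam_angles)
    pts = np.stack([x, y, np.zeros_like(x)], axis=1)
    # Rotate the scanning plane about the platform's vertical axis.
    c, s = np.cos(platform_angle), np.sin(platform_angle)
    R_platform = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return pts @ R_platform.T

def colorize(points, image, K, R_cl, t_cl):
    """Assign each 3D point the color of its projection in the image."""
    cam = points @ R_cl.T + t_cl              # laser frame -> camera frame
    uvw = cam @ K.T                           # project with the intrinsics
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = image.shape[:2]
    valid = ((uv[:, 0] >= 0) & (uv[:, 0] < w) &
             (uv[:, 1] >= 0) & (uv[:, 1] < h) & (cam[:, 2] > 0))
    colors = np.zeros((len(points), 3), dtype=np.uint8)
    colors[valid] = image[uv[valid, 1], uv[valid, 0]]
    return colors, valid
```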

    Simple and efficient method for calibration of a camera and 2D laser rangefinder

    In the last few years, the integration of cameras and laser rangefinders has been applied in much robotics research, notably autonomous navigation vehicles and intelligent transportation systems. Systems built from multiple devices usually require the relative pose of those devices for processing, so calibrating a camera against a laser device is an important task. This paper presents a calibration method for determining the relative position and orientation of a camera with respect to a laser rangefinder. The method exploits depth discontinuities of the calibration pattern, which stand out in the laser beams, to automatically estimate where the laser scans strike the pattern. The laser range scans are also used to estimate the corresponding 3D points in the camera coordinate frame. Finally, the relative parameters between the camera and the laser device are recovered from these corresponding 3D points.
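    Once corresponding 3D points are available in both the laser and camera frames, the final step the abstract describes is a classical absolute orientation problem. A sketch using the standard SVD-based Kabsch/Umeyama solution follows; this solver is an assumption, as the paper may use a different estimator.

```python
# Minimal sketch: recover the rigid transform (R, t) between two frames
# from paired 3D points, via the closed-form SVD (Kabsch) solution.
import numpy as np

def rigid_transform_3d(P, Q):
    """Find R, t minimizing sum ||R @ p + t - q||^2 over paired rows of
    P (laser frame) and Q (camera frame), both of shape (N, 3)."""
    assert P.shape == Q.shape and P.shape[1] == 3
    cp, cq = P.mean(axis=0), Q.mean(axis=0)   # centroids
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Fix a possible reflection so R is a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

    In practice the correspondences coming from noisy laser returns would be filtered (e.g. with RANSAC) before this least-squares fit, since a single gross outlier can bias the closed-form estimate.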

    Global Optimality via Tight Convex Relaxations for Pose Estimation in Geometric 3D Computer Vision

    In this thesis, we address a set of fundamental problems whose core difficulty boils down to optimizing over 3D poses. This includes many geometric 3D registration problems, covering well-known problems with a long research history such as the Perspective-n-Point (PnP) problem and its generalizations, extrinsic sensor calibration, and even the gold standard for Structure from Motion (SfM) pipelines: the relative pose problem from corresponding features. The same holds for a close relative of SLAM, Pose Graph Optimization (also commonly known as Motion Averaging in SfM). The crux of this thesis revolves around the characterization and development of empirically tight (convex) semidefinite relaxations for many of the aforementioned core problems of 3D computer vision. Building upon these empirically tight relaxations, we are able to find and certify the globally optimal solution to these problems with algorithms whose performance ranges, as of today, from efficient and scalable approaches comparable to fast second-order local search techniques to polynomial time in the worst case. In conclusion, our research reveals that an important subset of core problems that has historically been regarded as hard, and thus dealt with mostly by empirical means, is in fact tractable with optimality guarantees.

    Artificial Intelligence (AI) drives many of the services and products we use every day. But for AI to bring its full potential to daily tasks, with technologies such as autonomous driving, augmented reality, or mobile robots, it needs to be not only intelligent but also perceptive. In particular, the ability to see and to construct an accurate model of the environment is essential for building intelligent perceptive systems. The ideas developed in computer vision over the last decades in areas such as multiple view geometry and optimization, put to work in 3D reconstruction algorithms, seem mature enough to nurture a range of emerging applications that already employ 3D computer vision in the background. However, while there is a positive trend in the use of 3D reconstruction tools in real applications, there are also fundamental limitations regarding reliability and performance guarantees that may hinder wider adoption, e.g. in more critical applications involving people's safety, such as autonomous navigation. State-of-the-art 3D reconstruction algorithms typically formulate reconstruction as a Maximum Likelihood Estimation (MLE) instance, which entails solving a high-dimensional, non-convex, non-linear optimization problem. In practice, this is done via fast local optimization methods, which have enabled fast and scalable reconstruction pipelines, yet lack guarantees on most of the building blocks, leaving us with fundamentally brittle pipelines.
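    To make the central idea concrete: once rotations are parametrized by their matrix entries, many of these pose problems take the form of a quadratically constrained quadratic program (QCQP), and the relaxations in question are of the standard Shor type. The following is a generic illustration of that construction, not the specific relaxations derived in the thesis.

```latex
% A generic QCQP and its Shor semidefinite relaxation (SDP).
\begin{align*}
  \text{(QCQP)} \quad
  &\min_{x \in \mathbb{R}^n} \; x^\top Q x
  \quad \text{s.t.} \quad x^\top A_i x = b_i, \quad i = 1,\dots,m, \\
  \text{(SDP)} \quad
  &\min_{X \succeq 0} \; \operatorname{tr}(Q X)
  \quad \text{s.t.} \quad \operatorname{tr}(A_i X) = b_i, \quad i = 1,\dots,m.
\end{align*}
% The SDP arises by substituting X = x x^\top and dropping the nonconvex
% rank-one constraint. The relaxation is "tight" when the SDP optimum
% X^* has rank one: then the factor x^* with X^* = x^* (x^*)^\top solves
% the original QCQP, and the SDP value certifies its global optimality.
```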

    Automatic Extrinsic Self-Calibration of Mobile Mapping Systems Based on Geometric 3D Features

    Mobile Mapping is an efficient technology to acquire spatial data of the environment. The spatial data is fundamental for applications in crisis management, civil engineering or autonomous driving. The extrinsic calibration of the Mobile Mapping System is a decisive factor that affects the quality of the spatial data. Many existing extrinsic calibration approaches require the use of artificial targets in a time-consuming calibration procedure. Moreover, they are usually designed for a specific combination of sensors and are, thus, not universally applicable. We introduce a novel extrinsic self-calibration algorithm, which is fully automatic and completely data-driven. The fundamental assumption of the self-calibration is that the calibration parameters are best estimated when the derived point cloud best represents the real physical circumstances. The cost function we use to evaluate this is based on geometric features derived from the 3D structure tensor of the local neighborhood of each point. We compare different cost functions based on geometric features and a cost function based on the Rényi quadratic entropy to evaluate their suitability for the self-calibration. Furthermore, we perform tests of the self-calibration on synthetic data and two different real datasets. The real datasets differ in terms of the environment, the scale and the utilized sensors. We show that the self-calibration is able to extrinsically calibrate Mobile Mapping Systems with different combinations of mapping and pose estimation sensors, such as a 2D laser scanner to a Motion Capture System, and a 3D laser scanner to a stereo camera and ORB-SLAM2. For the first dataset, the parameters estimated by our self-calibration lead to a more accurate point cloud than two comparative approaches. For the second dataset, which was acquired via vehicle-based mobile mapping, our self-calibration achieves results comparable to a manually refined reference calibration, while being universally applicable and fully automated.
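    A sketch of the kind of cost this describes, assuming the common eigenvalue-based features (linearity, planarity, sphericity) computed from the local 3D structure tensor; the exact feature combination used in the paper may differ.

```python
# Minimal sketch: per-point geometric features from the 3D structure
# tensor (local covariance), and an example self-calibration cost based
# on them. Well-calibrated clouds tend to yield crisp planar surfaces.
import numpy as np
from scipy.spatial import cKDTree

def geometric_features(points, k=20):
    """Linearity/planarity/sphericity for each point of an (N, 3) cloud,
    from the eigenvalues of the k-neighborhood structure tensor."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.zeros((len(points), 3))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)                  # 3D structure tensor
        lam = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
        lam = np.maximum(lam, 1e-12)                  # avoid divide-by-zero
        feats[i] = ((lam[0] - lam[1]) / lam[0],       # linearity
                    (lam[1] - lam[2]) / lam[0],       # planarity
                    lam[2] / lam[0])                  # sphericity
    return feats

def calibration_cost(points, k=20):
    """Example cost (an assumption, not the paper's exact choice): reward
    high mean planarity, so the optimizer minimizes its negative."""
    return -geometric_features(points, k)[:, 1].mean()
```

    In a self-calibration loop, the candidate extrinsic parameters would be applied to re-derive the point cloud from the raw scans, and a derivative-free optimizer would adjust them to minimize such a cost.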