
    External multi-modal imaging sensor calibration for sensor fusion: A review

    Multi-modal data fusion has gained popularity due to its diverse applications, leading to an increased demand for external sensor calibration. Although several proven calibration solutions exist, none fully satisfies all the evaluation criteria, including accuracy, automation, and robustness. This review therefore aims to contribute to this growing field by examining recent research on multi-modal imaging sensor calibration and proposing future research directions. The literature review comprehensively explains the characteristics and conditions of different multi-modal external calibration methods, including traditional motion-based calibration and feature-based calibration. Target-based calibration and targetless calibration, the two types of feature-based calibration, are discussed in detail. Furthermore, the paper highlights systematic calibration as an emerging research direction. Finally, this review identifies crucial factors for evaluating calibration methods and provides a comprehensive discussion of their applications, with the aim of providing valuable insights to guide future research. Future research should focus primarily on online targetless calibration and systematic multi-modal sensor calibration. Ministerio de Ciencia, Innovación y Universidades | Ref. PID2019-108816RB-I0

    Algorithms for the Calibration and Correction of Texel Images Using Inertial Measurement Updates

    A 3D terrain map can be developed to survey the earth's surface and provide scientists with a method of accurately modeling terrain. Applications include agricultural surveys, disaster management, and object classification. In order to develop a 3D terrain map, a sensor must capture all of the data needed to create the map. This 3D mapping tool can be built by combining a sensor that samples points on the earth's surface with a digital camera that captures an image and drapes it over those samples. In order to take data over a large area, the entire system is flown on a small drone and must be small enough to allow the drone to fly for several minutes. While the drone is flying, a GPS sensor tracks the system's location and orientation so the collected data can be correlated with the 3D terrain map. Utah State University has developed a sensor for this application, called a texel camera, which uses laser measurements, digital images, and a GPS system to develop 3D terrain maps. The texel camera includes methods for correcting the terrain maps using the physics of the sensors and the motion of the drone during flight.
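
    As a hedged illustration of the texel idea of fusing laser samples with camera pixels, the sketch below projects lidar points into an image with a pinhole model and picks up a color per return. The extrinsics R_cl and t_cl, the intrinsic matrix K, and the nearest-pixel lookup are illustrative assumptions, not Utah State University's actual texel camera algorithm.

        # Illustrative sketch only: colorize lidar returns by projecting them into a
        # camera image with a pinhole model (not the texel camera's actual pipeline).
        import numpy as np

        def project_points(points_lidar, R_cl, t_cl, K):
            """Project 3D lidar points (N,3) into pixel coordinates."""
            p_cam = points_lidar @ R_cl.T + t_cl          # lidar frame -> camera frame
            front = p_cam[:, 2] > 0                       # keep points ahead of the camera
            uv_h = p_cam[front] @ K.T                     # apply intrinsics
            return uv_h[:, :2] / uv_h[:, 2:3], front      # perspective divide

        def colorize(points_lidar, image, R_cl, t_cl, K):
            """Attach an RGB sample to each lidar point that lands inside the frame."""
            uv, front = project_points(points_lidar, R_cl, t_cl, K)
            h, w = image.shape[:2]
            u = np.round(uv[:, 0]).astype(int)
            v = np.round(uv[:, 1]).astype(int)
            in_frame = (u >= 0) & (u < w) & (v >= 0) & (v < h)
            return image[v[in_frame], u[in_frame]], front, in_frame   # nearest-pixel lookup

    In a full system the drone's GPS/IMU pose would additionally georeference each colorized point in the terrain map; only the fixed lidar-to-camera transform is shown here.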

    Registration and Recognition in 3D

    The simplest computer vision algorithm can tell you what color it sees when you point it at an object, but asking that computer what it is looking at is a much harder problem. Camera and LiDAR (Light Detection And Ranging) sensors generally provide streams of pixel values, and sophisticated algorithms must be engineered to recognize objects or the environment. The computer vision community has expended significant effort on recognizing objects in color images; however, LiDAR sensors, which sense depth values for pixels instead of color, have been studied less. Recently we have seen a renewed interest in depth data with the democratization provided by consumer depth cameras. Detecting objects in depth data is more challenging in some ways because of the lack of texture and the increased complexity of processing unordered point sets. We present three systems that contribute to solving the object recognition problem from the LiDAR perspective: calibration, registration, and object recognition systems. We propose a novel calibration system that works with both line- and raster-based LiDAR sensors and calibrates them with respect to image cameras. Our system can be extended to calibrate LiDAR sensors that do not provide intensity information. We demonstrate a novel system that registers different LiDAR scans by transforming the input point cloud into a Constellation Extended Gaussian Image (CEGI) and then using this CEGI to estimate the rotational alignment of the scans independently. Finally, we present a method for object recognition which uses local (Spin Images) and global (CEGI) information to recognize cars in a large urban dataset. We present real-world results from these three systems. Compelling experiments show that object recognition systems can gain much information using only 3D geometry. There are many object recognition and navigation algorithms that work on images; the work we propose in this thesis is more complementary to those image-based methods than competitive. This is an important step along the way to more intelligent robots.
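
    The CEGI representation builds on the classical Extended Gaussian Image, i.e., a histogram of surface-normal directions over the sphere; a minimal sketch of that underlying representation is given below. The binning scheme and resolution are assumptions for illustration, and the constellation structure that distinguishes a CEGI from a plain EGI is not reproduced here.

        # A minimal sketch of a plain Extended Gaussian Image (EGI): a normalized
        # histogram of surface-normal directions binned on the unit sphere.
        import numpy as np

        def extended_gaussian_image(normals, n_theta=18, n_phi=36):
            """Bin unit normals (N,3) into an n_theta x n_phi spherical histogram."""
            n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
            theta = np.arccos(np.clip(n[:, 2], -1.0, 1.0))       # polar angle in [0, pi]
            phi = np.arctan2(n[:, 1], n[:, 0]) % (2 * np.pi)     # azimuth in [0, 2*pi)
            i = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
            j = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
            hist = np.zeros((n_theta, n_phi))
            np.add.at(hist, (i, j), 1.0)
            return hist / hist.sum()                              # orientation histogram

    Comparing such histograms under candidate rotations is what lets a registration system estimate rotational alignment separately from translation.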

    Toward 3D reconstruction of outdoor scenes using an MMW radar and a monocular vision sensor

    In this paper, we introduce a geometric method for 3D reconstruction of the exterior environment using a panoramic microwave radar and a camera. We rely on the complementarity of these two sensors: the robustness to environmental conditions and the depth detection ability of the radar on the one hand, and the high spatial resolution of the vision sensor on the other. Firstly, geometric modeling of each sensor and of the entire system is presented. Secondly, we address the global calibration problem, which consists of finding the exact transformation between the sensors' coordinate systems. Two implementation methods are proposed and compared, based on the optimization of a non-linear criterion obtained from a set of radar-to-image target correspondences. Unlike existing methods, no special configuration of the 3D points is required for calibration. This makes the methods flexible and easy to use by a non-expert operator. Finally, we present a very simple, yet robust 3D reconstruction method based on the sensors' geometry. This method enables observed features to be reconstructed in 3D from a single acquisition (static sensor), a capability not always available in the state of the art for outdoor scene reconstruction. The proposed methods have been validated with synthetic and real data.
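
    A hedged sketch of the kind of non-linear criterion such a calibration optimizes: find the rigid transform between the radar and camera frames that minimizes the reprojection error over a set of radar-to-image target correspondences. The rotation-vector parameterization, pinhole model, and scipy solver below are assumptions, not the paper's exact formulation.

        # Sketch: estimate the radar-to-camera rigid transform from target
        # correspondences by non-linear least squares on reprojection error.
        import numpy as np
        from scipy.spatial.transform import Rotation
        from scipy.optimize import least_squares

        def reprojection_residuals(params, radar_pts, image_pts, K):
            """params = [rx, ry, rz, tx, ty, tz]; radar_pts (N,3); image_pts (N,2)."""
            R = Rotation.from_rotvec(params[:3]).as_matrix()
            t = params[3:]
            p_cam = radar_pts @ R.T + t                     # radar frame -> camera frame
            uv = p_cam @ K.T
            uv = uv[:, :2] / uv[:, 2:3]                     # pinhole projection
            return (uv - image_pts).ravel()

        def calibrate(radar_pts, image_pts, K):
            x0 = np.zeros(6)                                # start from the identity transform
            sol = least_squares(reprojection_residuals, x0, args=(radar_pts, image_pts, K))
            return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]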

    Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots

    Autonomous robots that assist humans in day-to-day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors, such as LiDAR, radar, ultrasound sensors, and cameras, is utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing need to be positively utilized for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams differ from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically, and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image sensor for free space detection. The outputs of the LiDAR scanner and the image sensor are of different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression-based resolution matching algorithm to interpolate the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of an uncertainty-aware free space detection algorithm.
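
    A minimal sketch of GP regression-based resolution matching, assuming sparse lidar depths indexed by their image pixel locations: the GP interpolates depth onto query pixels and returns a predictive standard deviation as the quantifiable uncertainty. The RBF-plus-noise kernel and its hyperparameters are illustrative choices, not the paper's.

        # Sketch: densify sparse lidar depth over the image grid with a Gaussian
        # Process, reporting a per-pixel standard deviation as uncertainty.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        def densify_depth(sparse_uv, sparse_depth, query_uv):
            """sparse_uv (N,2): pixel locations of lidar returns; sparse_depth (N,);
            query_uv (M,2): pixels to interpolate. Returns mean depth and std dev."""
            kernel = 1.0 * RBF(length_scale=20.0) + WhiteKernel(noise_level=0.01)
            gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
            gp.fit(sparse_uv, sparse_depth)
            mean, std = gp.predict(query_uv, return_std=True)   # std = interpolation uncertainty
            return mean, std

    The predictive standard deviation is what a downstream, uncertainty-aware free space detector can weight or threshold on.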

    Multi-FEAT: Multi-Feature Edge AlignmenT for Targetless Camera-LiDAR Calibration

    The accurate environment perception of automobiles and UAVs (Unmanned Ariel Vehicles) relies on the precision of onboard sensors, which require reliable in-field calibration. This paper introduces a novel approach for targetless camera-LiDAR extrinsic calibration called Multi-FEAT (Multi-Feature Edge AlignmenT). Multi-FEAT uses the cylindrical projection model to transform the 2D(Camera)-3D(LiDAR) calibration problem into a 2D-2D calibration problem, and exploits various LiDAR feature information to supplement the sparse LiDAR point cloud boundaries. In addition, a feature matching function with a precision factor is designed to improve the smoothness of the solution space. The performance of the proposed Multi-FEAT algorithm is evaluated using the KITTI dataset, and our approach shows more reliable results, as compared with several existing targetless calibration methods. We summarize our results and present potential directions for future work
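
    A sketch of the cylindrical projection step that turns a 3D LiDAR point cloud into a 2D range image, so that edge alignment against the camera image becomes a 2D-2D problem. The image resolution and vertical field of view below are assumptions roughly typical of automotive spinning LiDARs, not values taken from the paper.

        # Sketch: project a lidar point cloud onto a cylindrical (azimuth x elevation)
        # range image, keeping the nearest return per pixel.
        import numpy as np

        def cylindrical_projection(points, h=64, w=1024,
                                   fov_up=np.radians(3.0), fov_down=np.radians(-25.0)):
            """Project points (N,3) onto an h x w cylindrical range image."""
            x, y, z = points[:, 0], points[:, 1], points[:, 2]
            r = np.linalg.norm(points, axis=1)
            yaw = np.arctan2(y, x)                               # azimuth around the cylinder
            pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
            u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
            v = (1.0 - (pitch - fov_down) / (fov_up - fov_down)) * h
            v = np.clip(v.astype(int), 0, h - 1)
            img = np.full((h, w), np.inf)
            np.minimum.at(img, (v, u), r)                        # nearest return per pixel
            return img

    Edges extracted from this range image can then be matched against camera image edges entirely in 2D.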

    B.O.G.G.L.E.S.: Boundary Optical GeoGraphic Lidar Environment System

    The purpose of this paper is to describe a pseudo X-ray vision system that pairs a Lidar scanner with a visualization device. The system as a whole is referred to as B.O.G.G.L.E.S. There are several key factors that went into the development of this system, and the background information and design approach are thoroughly described. B.O.G.G.L.E.S. functionality is demonstrated through the use of design constraints and the analysis of test results. Additionally, many possible developments for B.O.G.G.L.E.S. are proposed in the paper, indicating various avenues of improvement that could be implemented in the future.

    Geometric accuracy evaluation of mobile terrestrial LIDAR surveys with supporting algorithms

    Mobile Mapping System (MMS) technology is widely used for many applications, so quantifying its accuracy is an essential task and a primary focus of this research. In general, validation points/features are needed to perform geometric accuracy evaluation of MMS data, and a method is needed to capture a point feature off the roadway in a position where a target on the ground surface would not be visible to the scanner. In this study, eight sphere targets of 14 diameter were placed on the shoulder of the roadway over validation points on the ground. The sphere targets were constructed from injection-molded spherical light fixtures and, through a calibration process, were verified as consistent in size and shape at the 1 mm level. The targets were scanned by four different MMSs (two of design grade and two of asset grade) on two established test sites representing different roadway environments (highway and urban settings). Two selectable data rates (250 kHz and 500 kHz) were exercised in the data collection, as well as two different vehicle driving techniques (with and without acceleration while the vehicle is turning). Both absolute and relative accuracy of the datasets obtained from the MMSs are of interest, and all of these characteristics and factors have been geometrically evaluated through the developed procedures. An automatic sphere target detection/estimation algorithm was developed to detect and extract the scanned sphere target points by eliminating most of the adjacent non-sphere points via a 3D Hough transform process. Following this, the sphere center is robustly located via L1-norm minimization, which allows outliers (e.g., tribrach points) to be detected and automatically eliminated, and the final sphere target center is then estimated through least squares. This procedure is robust to several sources of non-random noise. Through error propagation, the precision of the center point estimation is SE90 = 0.20 cm (radius for spherical error, 90%). Disturbed targets could also be detected from the results of this algorithm. Although such geometric targets have been widely used in static laser scanning, their use in mobile mapping has not been thoroughly studied. Another contribution from this research is that L1-estimation has been applied to all methods of forming condition equations: indirect observations (line fitting), observations only (level network), and mixed model (dependent relative orientation of a stereo image pair). Existing published work has addressed exclusively the indirect observations form of the condition equations. In this test, outliers intentionally added to the observations of all the problems were correctly detected. Additionally, L1-estimation was implemented for each of the problems by two different approaches: 1) a linear programming approach solved by the simplex method, and 2) a brute-force method (an exhaustive search over all possible solution sets). Results from both approaches are identical, verifying that the linear programming approach can be used as a convenient tool for implementing L1-estimation for all methods of forming the condition equations.
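
    As a concrete illustration of L1-estimation posed as a linear program, the sketch below fits a line by minimizing the sum of absolute residuals, i.e., the indirect-observations case mentioned above. The formulation with one slack variable per residual is the standard textbook one, not the thesis's implementation, and it is solved here with scipy's linprog rather than a hand-rolled simplex routine.

        # Sketch: L1-norm line fitting as a linear program. Minimize
        # sum_i |y_i - (a*x_i + b)| so that outliers have limited influence.
        import numpy as np
        from scipy.optimize import linprog

        def l1_line_fit(x, y):
            """Return slope a and intercept b minimizing the sum of absolute residuals."""
            n = len(x)
            # Decision variables: [a, b, u_1..u_n] with u_i >= |residual_i|.
            c = np.concatenate([[0.0, 0.0], np.ones(n)])     # objective: sum of slacks u_i
            A = np.zeros((2 * n, n + 2))
            A[:n, 0], A[:n, 1] = x, 1.0                      #  (a*x_i + b) - y_i <= u_i
            A[:n, 2:] = -np.eye(n)
            A[n:, 0], A[n:, 1] = -x, -1.0                    #  y_i - (a*x_i + b) <= u_i
            A[n:, 2:] = -np.eye(n)
            b_ub = np.concatenate([y, -y])
            bounds = [(None, None), (None, None)] + [(0, None)] * n
            res = linprog(c, A_ub=A, b_ub=b_ub, bounds=bounds, method="highs")
            return res.x[0], res.x[1]

    Because the objective grows only linearly with each residual, a few grossly wrong observations shift the fitted line far less than they would under least squares, which is why they can then be flagged and removed.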

    Joint Intrinsic and Extrinsic LiDAR-Camera Calibration in Targetless Environments Using Plane-Constrained Bundle Adjustment

    This paper introduces a novel targetless method for joint intrinsic and extrinsic calibration of LiDAR-camera systems using plane-constrained bundle adjustment (BA). Our method leverages LiDAR point cloud measurements from planes in the scene, alongside visual points derived from those planes. The core novelty of our method lies in the integration of visual BA with the registration between visual points and LiDAR point cloud planes, which is formulated as a unified optimization problem. This formulation achieves concurrent intrinsic and extrinsic calibration, while also imparting depth constraints to the visual points to enhance the accuracy of intrinsic calibration. Experiments are conducted on both public data sequences and a self-collected dataset. The results show that our approach not only surpasses other state-of-the-art (SOTA) methods but also maintains remarkable calibration accuracy even in challenging environments. For the benefit of the robotics community, we have open-sourced our code.
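
    A minimal sketch of the flavor of residual a plane-constrained bundle adjustment stacks together: visual reprojection errors plus point-to-plane distances that tie the visual points to LiDAR-measured planes. The state layout, weighting, and plain dense evaluation below are illustrative assumptions; the paper's unified optimization and its parameterization of the intrinsics are not reproduced.

        # Sketch: residual vector combining pinhole reprojection errors with
        # point-to-plane constraints from LiDAR-observed planes.
        import numpy as np

        def reprojection_residual(K, R_cw, t_cw, point_w, pixel):
            """Reprojection error of one world point into one camera view."""
            p_c = R_cw @ point_w + t_cw
            uv = (K @ p_c)[:2] / p_c[2]
            return uv - pixel

        def point_to_plane_residual(point_w, plane):
            """Signed distance of a visual point to a LiDAR plane (n, d) with |n| = 1."""
            n, d = plane
            return np.dot(n, point_w) + d

        def joint_residuals(K, poses, points_w, observations, plane_assignments, planes, w_plane=1.0):
            """Stack both residual types; a real system would feed this to a sparse solver."""
            res = []
            for cam_idx, pt_idx, pixel in observations:
                R_cw, t_cw = poses[cam_idx]
                res.extend(reprojection_residual(K, R_cw, t_cw, points_w[pt_idx], pixel))
            for pt_idx, plane_idx in plane_assignments:
                res.append(w_plane * point_to_plane_residual(points_w[pt_idx], planes[plane_idx]))
            return np.array(res)

    The point-to-plane terms are what supply the depth information that a purely visual BA lacks, which is how the joint problem can constrain the camera intrinsics as well as the extrinsics.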