
    Comprehensive Extrinsic Calibration of a Camera and a 2D Laser Scanner for a Ground Vehicle

    Cameras and laser scanners are two important kinds of perceptive sensors, and both are becoming increasingly common on intelligent ground vehicles; calibrating these sensors is a fundamental task. A new method is proposed to perform comprehensive extrinsic calibration of a single camera and 2D laser scanner pair, i.e., the process of revealing all the spatial relationships among the camera coordinate system, the laser scanner coordinate system, the ground coordinate system, and the vehicle coordinate system. The proposed method is based mainly on the convenient and widely used chessboard calibration practice and can be implemented easily. It has been tested in experiments on both synthetic and real data, which validate its effectiveness.
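
    As an aside, the chessboard-plane constraint underlying this family of methods is easy to prototype. Below is a hedged Python/OpenCV sketch, not the paper's implementation: each board pose yields a plane n·X = d in camera coordinates, and laser points mapped by the unknown extrinsics (R, t) must lie on that plane. The pattern size, square size, and the segmentation of laser points onto the board are illustrative assumptions.

```python
# Sketch of camera-to-2D-laser extrinsic calibration from chessboard views
# (in the spirit of plane-constraint methods); names and sizes are assumed.
import numpy as np
import cv2
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def board_plane_in_camera(img, K, dist, pattern=(9, 6), square=0.03):
    """Return (n, d) of the chessboard plane n.X = d in camera coordinates."""
    ok, corners = cv2.findChessboardCorners(img, pattern)
    assert ok, "chessboard not found"
    obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    _, rvec, tvec = cv2.solvePnP(obj, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    n = R[:, 2]                       # board normal = third column of R
    d = float(n @ tvec.ravel())       # plane offset along the normal
    return n, d

def residuals(x, planes, scans):
    """Point-to-plane errors: laser points mapped into the camera frame
    must lie on the observed board plane for every pose."""
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    t = x[3:]
    errs = []
    for (n, d), pts in zip(planes, scans):   # pts: (N,3) laser hits on board
        errs.append((R @ pts.T).T @ n + n @ t - d)
    return np.concatenate(errs)

# planes = [board_plane_in_camera(img, K, dist) for img in images]
# scans  = [...]  # laser points segmented on the board, one array per pose
# sol = least_squares(residuals, x0=np.zeros(6), args=(planes, scans))
```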

    Extrinsic Calibration and Ego-Motion Estimation for Mobile Multi-Sensor Systems

    Autonomous robots and vehicles are often equipped with multiple sensors to perform vital tasks such as localization or mapping. The joint system of various sensors with different sensing modalities can often provide better localization or mapping results than any individual sensor alone, in terms of accuracy or completeness. However, to enable this improved performance, two important challenges have to be addressed when dealing with multi-sensor systems. First, how can the spatial relationships between the individual sensors on the robot be accurately determined? This vital task is known as extrinsic calibration; without this calibration information, measurements from different sensors cannot be fused. Second, how can data from multiple sensors be combined to correct for each sensor's deficiencies and thus provide better estimates? This task is known as data fusion. The core of this thesis is to provide answers to these two questions. The first part of the thesis covers aspects related to improving extrinsic calibration accuracy; the second presents novel data fusion algorithms designed to address the ego-motion estimation problem using data from a laser scanner and a monocular camera.

    In the extrinsic calibration part, we contribute by revealing and quantifying the relative calibration accuracies of three common types of calibration methods, so as to offer insight into choosing the best calibration method when multiple options are available. Following that, we propose an optimization approach for solving common motion-based calibration problems. By exploiting the Gauss-Helmert model, our approach is more accurate and robust than the classical least-squares model.

    In the data fusion part, we focus on camera-laser data fusion and contribute two new ego-motion estimation algorithms that combine complementary information from a laser scanner and a monocular camera. The first algorithm uses camera image information to guide the laser scan-matching; it provides accurate motion estimates yet works in general conditions, requiring neither a field-of-view overlap between the camera and laser scanner nor an initial guess of the motion parameters. The second algorithm combines the camera and laser scanner information in a direct way, assuming a substantial field-of-view overlap between the sensors. By maximizing the information usage of both the sparse laser point cloud and the dense image, the second algorithm achieves state-of-the-art estimation accuracy. Experimental results confirm that both algorithms offer excellent alternatives to state-of-the-art camera-laser ego-motion estimation algorithms.
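
    For context, the motion-based calibration problems mentioned here are usually written as the hand-eye equation AX = XB over paired relative motions. The following is a minimal least-squares baseline sketch in Python, not the Gauss-Helmert formulation the thesis develops; the function name and input format are assumptions.

```python
# Classical linear hand-eye (AX = XB) calibration from paired relative
# motions of two rigidly mounted sensors; a plain least-squares baseline.
import numpy as np
from scipy.spatial.transform import Rotation

def calibrate_hand_eye(motions_a, motions_b):
    """motions_*: lists of (R, t) relative motions of the two sensors over
    the same time intervals (needs >= 2 non-parallel rotation axes).
    Returns (R_x, t_x) satisfying A X = X B."""
    # Rotation: each pair's scaled rotation axes must map onto each other,
    # axis_a = R_x @ axis_b -> Wahba's problem, solved in closed form by SVD.
    axes_a = np.array([Rotation.from_matrix(R).as_rotvec() for R, _ in motions_a])
    axes_b = np.array([Rotation.from_matrix(R).as_rotvec() for R, _ in motions_b])
    U, _, Vt = np.linalg.svd(axes_a.T @ axes_b)
    S = np.diag([1, 1, np.linalg.det(U @ Vt)])   # keep det(R_x) = +1
    R_x = U @ S @ Vt
    # Translation: stack (R_a - I) t_x = R_x t_b - t_a over all pairs.
    A = np.vstack([Ra - np.eye(3) for Ra, _ in motions_a])
    b = np.concatenate([R_x @ tb - ta
                        for (_, ta), (_, tb) in zip(motions_a, motions_b)])
    t_x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return R_x, t_x
```

    A Gauss-Helmert treatment, by contrast, models noise in both sets of motions rather than only on the right-hand side, which is where the accuracy and robustness gains described above come from.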

    High-precision grasping and placing for mobile robots

    This work presents a manipulation system for handling multiple pieces of labware in life science laboratories using H20 mobile robots. The H20 robot is equipped with a Kinect V2 sensor to identify the required labware on the workbench and estimate its position. Local feature recognition based on the SURF algorithm is used; the recognition process is performed both for the labware to be grasped and for the workbench holder. Different grippers and labware containers are designed to manipulate labware of different weights and to ensure safe transportation.
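
    A minimal sketch of the SURF-matching step described above, assuming an OpenCV contrib build with the non-free modules enabled; the function name, thresholds, and the homography-based localization step are illustrative, not taken from the paper.

```python
# Match SURF features between a stored labware template and the scene image,
# then localize the template with a RANSAC homography.
import cv2
import numpy as np

def find_labware(template_gray, scene_gray, min_matches=10):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_t, des_t = surf.detectAndCompute(template_gray, None)
    kp_s, des_s = surf.detectAndCompute(scene_gray, None)
    # Lowe's ratio test on k-NN matches keeps only distinctive correspondences.
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    good = [m for m, n in flann.knnMatch(des_t, des_s, k=2)
            if m.distance < 0.7 * n.distance]
    if len(good) < min_matches:
        return None                       # labware not found
    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H                              # maps template corners into scene
```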

    Global Optimality via Tight Convex Relaxations for Pose Estimation in Geometric 3D Computer Vision

    In this thesis, we address a set of fundamental problems whose core difficulty boils down to optimizing over 3D poses. This includes many geometric 3D registration problems, covering well-known problems with a long research history such as the Perspective-n-Point (PnP) problem and its generalizations, extrinsic sensor calibration, and even the gold standard for Structure from Motion (SfM) pipelines: the relative pose problem from corresponding features. The same holds for a close relative of SLAM, Pose Graph Optimization (also commonly known as Motion Averaging in SfM). The crux of this thesis' contribution is the successful characterization and development of empirically tight (convex) semidefinite relaxations for many of the aforementioned core problems of 3D Computer Vision. Building upon these empirically tight relaxations, we are able to find and certify the globally optimal solution to these problems with algorithms whose performance ranges, as of today, from efficient, scalable approaches comparable to fast second-order local search techniques to polynomial-time (worst case) methods. In conclusion, our research reveals that an important subset of core problems that has historically been regarded as hard, and thus dealt with mostly in empirical ways, is in fact tractable with optimality guarantees.

    Artificial Intelligence (AI) drives many of the services and products we use every day. But for AI to bring its full potential to daily tasks, with technologies such as autonomous driving, augmented reality, or mobile robots, it needs to be not only intelligent but also perceptive. In particular, the ability to see and to construct an accurate model of the environment is an essential capability for building intelligent perceptive systems. The ideas developed in Computer Vision over the last decades in areas such as Multiple View Geometry and Optimization, put to work in 3D reconstruction algorithms, seem mature enough to support a range of emerging applications that already employ 3D Computer Vision in the background. However, while there is a positive trend in the use of 3D reconstruction tools in real applications, there are also fundamental limitations regarding reliability and performance guarantees that may hinder wider adoption, e.g., in more critical applications involving people's safety, such as autonomous navigation. State-of-the-art 3D reconstruction algorithms typically formulate the reconstruction problem as a Maximum Likelihood Estimation (MLE) instance, which entails solving a high-dimensional, non-convex, non-linear optimization problem. In practice, this is done via fast local optimization methods that have enabled fast and scalable reconstruction pipelines, yet most of the building blocks lack guarantees, leaving us with fundamentally brittle pipelines.
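
    To illustrate the flavor of a tight semidefinite relaxation on a toy instance (not one of the thesis' algorithms): rotation registration (Wahba's problem) can be written as a quadratic in a unit quaternion and relaxed to an SDP over Z = qqᵀ; tightness shows up as Z being numerically rank one, which certifies global optimality. A hedged Python/cvxpy sketch:

```python
# Toy certifiably-optimal rotation registration via an SDP relaxation.
import numpy as np
import cvxpy as cp
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
R_true = Rotation.random(random_state=7).as_matrix()
b = rng.normal(size=(20, 3))
a = b @ R_true.T + 1e-3 * rng.normal(size=(20, 3))   # a_i ~= R_true b_i

# Davenport's K matrix: q^T K q equals the registration gain tr(R B^T).
B = a.T @ b
S, sigma = B + B.T, np.trace(B)
z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
K = np.block([[S - sigma * np.eye(3), z[:, None]], [z[None, :], sigma]])

# Convex relaxation: drop the rank-1 constraint on Z = q q^T.
Z = cp.Variable((4, 4), symmetric=True)
prob = cp.Problem(cp.Maximize(cp.trace(K @ Z)), [Z >> 0, cp.trace(Z) == 1])
prob.solve()

w, V = np.linalg.eigh(Z.value)
print("eigenvalues of Z (rank-1 => certified optimum):", np.round(w, 6))
q = V[:, -1]                             # recover q from dominant eigenvector
R1 = Rotation.from_quat(q).as_matrix()   # scipy expects (x, y, z, w) order
# Quaternion conventions differ between references; keep whichever of
# R1, R1.T better explains the data.
R_hat = min((R1, R1.T), key=lambda R: np.linalg.norm(a - b @ R.T))
print("rotation error:", np.linalg.norm(R_hat - R_true))
```

    For this particular problem the relaxation is always tight (it reduces to an eigenvalue problem); the relaxations characterized in the thesis target much harder problems, where tightness is an empirical finding rather than a given.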

    Single and multiple stereo view navigation for planetary rovers

    © Cranfield University. This thesis deals with the challenge of autonomous navigation for the ExoMars rover. The absence of global positioning systems (GPS) in space, added to the limitations of wheel odometry, makes autonomous navigation based on these two techniques alone, as done in the literature, unviable and necessitates other approaches. That, among other reasons, motivates this work to use solely visual data to solve the robot's ego-motion problem. The homogeneity of Mars' terrain makes the robustness of the low-level image processing techniques a critical requirement.

    In the first part of the thesis, novel solutions are presented to tackle this specific problem. Detection of features that are robust against illumination changes, together with unique matching and association of features, is a sought-after capability. A solution for feature robustness against illumination variation is proposed, combining Harris corner detection with a moment-image representation: the former provides efficient feature detection, while the moment images add the necessary brightness invariance. Moreover, a bucketing strategy guarantees that features are homogeneously distributed within the images. The addition of local feature descriptors then guarantees the unique identification of image cues.

    In the second part, reliable and precise motion estimation for the Mars robot is studied. A number of successful approaches are thoroughly analysed. Visual Simultaneous Localisation And Mapping (VSLAM) is investigated, proposing enhancements and integrating it with the robust feature methodology. Linear and nonlinear optimisation techniques are then explored, and alternative photogrammetric reprojection concepts are tested. Lastly, data fusion techniques are proposed to deal with the integration of multiple stereo view data. Our robust visual scheme allows good feature repeatability; because of this, dimensionality reduction of the feature data can be applied without compromising the overall performance of the proposed motion estimation solutions. The developed ego-motion techniques have been extensively validated using both simulated and real data collected at ESA-ESTEC facilities. Multiple stereo view solutions for robot motion estimation are introduced, presenting interesting benefits. The obtained results prove the innovative methods presented here to be accurate and reliable approaches capable of solving the ego-motion problem in a Mars environment.
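
    A hedged Python/OpenCV sketch of the bucketing idea described above: Harris responses are computed once, and only the strongest responses per grid cell are kept so features cover the image homogeneously. The grid size and per-cell count are assumed, and the moment-image and descriptor stages are omitted.

```python
# Harris corner detection with a bucketing grid for homogeneous coverage.
import cv2
import numpy as np

def bucketed_harris(gray, grid=(8, 8), per_cell=10):
    """gray: 2D uint8 image. Returns (x, y) corner locations, at most
    per_cell per grid cell, favoring the strongest Harris responses."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    h, w = gray.shape
    cell_h, cell_w = h // grid[0], w // grid[1]
    keypoints = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            cell = response[gy*cell_h:(gy+1)*cell_h, gx*cell_w:(gx+1)*cell_w]
            # Keep only the strongest responses in each cell; a response
            # threshold could be added to reject corners in flat cells.
            idx = np.argsort(cell, axis=None)[-per_cell:]
            ys, xs = np.unravel_index(idx, cell.shape)
            keypoints += [(gx*cell_w + x, gy*cell_h + y)
                          for x, y in zip(xs, ys)]
    return keypoints
```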