Invariant EKF Design for Scan Matching-aided Localization
Localization in indoor environments estimates the robot's pose by fusing data from onboard motion sensors with readings of the environment, in our case obtained by scan matching point clouds captured by a low-cost Kinect depth camera. We develop both an Invariant Extended Kalman Filter (IEKF)-based and a Multiplicative Extended Kalman Filter (MEKF)-based solution to this problem. Both designs are successfully validated in experiments, which demonstrate the advantage of the IEKF design.
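A minimal sketch of the kind of correction step such a filter performs on SE(2), assuming the scan matcher supplies a full-pose measurement; the function names, the identity measurement Jacobian, and the simple covariance update are illustrative assumptions rather than the paper's exact IEKF or MEKF design:

```python
import numpy as np

def se2_exp(xi):
    """Exponential map se(2) -> SE(2); xi = (vx, vy, theta)."""
    vx, vy, th = xi
    if abs(th) < 1e-9:
        V = np.eye(2)
    else:
        V = np.array([[np.sin(th), -(1.0 - np.cos(th))],
                      [1.0 - np.cos(th), np.sin(th)]]) / th
    T = np.eye(3)
    T[:2, :2] = [[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]]
    T[:2, 2] = V @ np.array([vx, vy])
    return T

def se2_log(T):
    """Logarithm map SE(2) -> se(2), returning (vx, vy, theta)."""
    th = np.arctan2(T[1, 0], T[0, 0])
    if abs(th) < 1e-9:
        V = np.eye(2)
    else:
        V = np.array([[np.sin(th), -(1.0 - np.cos(th))],
                      [1.0 - np.cos(th), np.sin(th)]]) / th
    v = np.linalg.solve(V, T[:2, 2])
    return np.array([v[0], v[1], th])

def invariant_update(X_pred, P, Z_meas, R_meas):
    """One correction step: X_pred, Z_meas are 3x3 SE(2) matrices,
    P and R_meas are 3x3 covariances of the state error and measurement."""
    innovation = se2_log(np.linalg.inv(X_pred) @ Z_meas)   # group-valued error
    H = np.eye(3)                      # full pose measured -> identity Jacobian
    S = H @ P @ H.T + R_meas
    K = P @ H.T @ np.linalg.inv(S)
    X_upd = X_pred @ se2_exp(K @ innovation)               # retract onto SE(2)
    P_upd = (np.eye(3) - K @ H) @ P
    return X_upd, P_upd
```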
Simultaneous maximum-likelihood calibration of odometry and sensor parameters
For a differential-drive mobile robot equipped with an on-board range sensor, there are six parameters to calibrate: three for the odometry (the wheel radii and the distance between the wheels), and three for the pose of the sensor with respect to the robot frame. This paper describes a method for calibrating all six parameters at the same time, without the need for external sensors or devices. Moreover, it is not necessary to drive the robot along particular trajectories. The available data are the measurements of the angular velocities of the wheels and the range sensor readings. The maximum-likelihood calibration solution is found in closed form.
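For orientation, a rough sketch of the forward model in which the six parameters appear; the Euler integration and function names are assumptions of this sketch, not the paper's closed-form maximum-likelihood estimator:

```python
import numpy as np

def odometry_step(pose, w_left, w_right, dt, r_left, r_right, baseline):
    """Integrate one differential-drive step from wheel angular velocities."""
    v = 0.5 * (r_right * w_right + r_left * w_left)        # forward velocity
    w = (r_right * w_right - r_left * w_left) / baseline   # yaw rate
    x, y, th = pose
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])

def sensor_pose(robot_pose, l_x, l_y, l_theta):
    """Compose the robot pose with the sensor mounting pose (robot frame)."""
    x, y, th = robot_pose
    return np.array([x + l_x * np.cos(th) - l_y * np.sin(th),
                     y + l_x * np.sin(th) + l_y * np.cos(th),
                     th + l_theta])

# The calibration chooses (r_left, r_right, baseline, l_x, l_y, l_theta) so
# that the predicted sensor displacement between scans best matches what scan
# matching reports, in a maximum-likelihood sense.
```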
On the Covariance of ICP-based Scan-matching Techniques
This paper considers the problem of estimating the covariance of
roto-translations computed by the Iterative Closest Point (ICP) algorithm. The
problem is relevant for localization of mobile robots and vehicles equipped
with depth-sensing cameras (e.g., Kinect) or Lidar (e.g., Velodyne). The
closed-form formulas for covariance proposed in previous literature generally build upon the fact that the ICP solution is obtained by solving a linear least-squares problem. In this paper, we show that this approach needs caution because the rematching step of the algorithm is not explicitly accounted for, and that applying it to the point-to-point version of ICP leads to completely erroneous covariances. We then provide a formal mathematical proof of why the approach is valid for the point-to-plane version of ICP, which validates the intuition and experimental results of practitioners. Comment: Accepted at the 2016 American Control Conference.
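For concreteness, this is the common closed-form approximation the paper scrutinizes, written here for 2D point-to-plane ICP; the per-point noise model and all names are assumptions of this sketch:

```python
import numpy as np

def point_to_plane_covariance(src_pts, normals, sigma):
    """src_pts: Nx2 matched source points at the converged alignment,
    normals: Nx2 unit normals of the matched target points,
    sigma: assumed per-point measurement noise standard deviation."""
    J = np.zeros((len(src_pts), 3))
    for i, (p, n) in enumerate(zip(src_pts, normals)):
        # Residual r_i = n . (R(theta) p + t - q); Jacobian w.r.t. (tx, ty, theta),
        # linearized at the converged estimate (theta increment = 0).
        J[i, 0] = n[0]
        J[i, 1] = n[1]
        J[i, 2] = n[0] * (-p[1]) + n[1] * p[0]
    H = J.T @ J                                  # Gauss-Newton approximation
    return sigma ** 2 * np.linalg.inv(H)         # 3x3 covariance of (tx, ty, theta)
```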
A Drift-Resilient and Degeneracy-Aware Loop Closure Detection Method for Localization and Mapping In Perceptually-Degraded Environments
Enabling fully autonomous robots capable of navigating and exploring unknown and complex environments has been at the core of robotics research for several decades. Mobile robots rely on a model of the environment for functions such as manipulation, collision avoidance and path planning. In GPS-denied and unknown environments where a prior map is not available, robots need to rely on onboard sensing to obtain locally accurate maps and operate in their local environment. A global map of an unknown environment can be constructed from the fusion of local maps built by temporally or spatially distributed mobile robots in the environment.
Loop closure detection, the ability to assert that a robot has returned to a previously visited location, is crucial for consistent mapping as it reduces the drift caused by error accumulation in the estimated robot trajectory. Moreover, in multi-robot systems, loop closure detection enables finding the correspondences between the local maps obtained by individual robots and merging them into a consistent global map of the environment. In ambiguous and perceptually-degraded environments, robust detection of intra- and inter-robot loop closures is especially challenging. This is due to poor illumination or the lack thereof, self-similarity, and the sparsity of distinctive perceptual landmarks and features sufficient for establishing global position. Overcoming these challenges enables a wide range of terrestrial and planetary applications, ranging from search and rescue and disaster relief in hostile environments to robotic exploration of lunar and Martian surfaces, caves and lava tubes, which are of particular interest as they can provide potential habitats for future manned space missions.
In this dissertation, methods and metrics are developed for resolving location ambiguities to significantly improve loop closures in perceptually-degraded environments with sparse or undifferentiated features. The first contribution of this dissertation is the development of a degeneracy-aware SLAM front-end capable of determining the level of geometric degeneracy in an unknown environment based on the Hessian associated with the optimal transformation computed by lidar scan matching. Using this crucial capability, featureless areas that could lead to data association ambiguity and spurious loop closures are identified and excluded from the search for loop closures. This significantly improves the quality and accuracy of localization and mapping, because the search space for loop closures can be expanded as needed to account for drift while decreasing, rather than increasing, the probability of false loop closure detections.
The second contribution of this dissertation is the development of a drift-resilient loop closure detection method that relies on 2D semantic and 3D geometric features extracted from lidar point cloud data to detect loop closures with increased robustness and accuracy compared to traditional geometric methods. The proposed method achieves higher performance by exploiting the spatial configuration of the local scenes embedded in the 2D occupancy grid maps commonly used in robot navigation to search for putative loop closures in a pre-matching step before geometric verification. The third contribution of this dissertation is an extensive evaluation and analysis of performance, and a comparison with state-of-the-art methods, in simulation and in real-world experiments, including six challenging underground mines across the United States.
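A simplified sketch of the kind of Hessian eigenvalue test a degeneracy-aware front-end can apply; the threshold scheme and the returned fields are assumptions made for illustration, not the dissertation's exact metric:

```python
import numpy as np

def degeneracy_check(H, min_eig_threshold):
    """H: 6x6 Gauss-Newton Hessian of the scan-matching cost at the optimum."""
    eigvals, eigvecs = np.linalg.eigh(H)
    weak = eigvals < min_eig_threshold           # weakly constrained directions
    return {
        "degenerate": bool(np.any(weak)),
        "weak_directions": eigvecs[:, weak],     # columns span the weak subspace
        "eigenvalues": eigvals,
    }

# A SLAM front-end can then exclude loop-closure candidates (or down-weight
# them) whenever the scan pair used for verification reports degenerate == True.
```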
Accurate 3D maps from depth images and motion sensors via nonlinear Kalman filtering
This paper investigates the use of depth images as localization sensors for 3D map building. The localization information is derived from the 3D data by means of the ICP (Iterative Closest Point) algorithm. The covariance of the ICP, and thus of the localization error, is analyzed and described by a Fisher Information Matrix. It is advocated that this error can be much reduced if the data are fused with measurements from other motion sensors, or even with prior knowledge of the motion. The data fusion is performed by a recently introduced extended Kalman filter variant, the so-called Invariant EKF, and is directly based on the estimated covariance of the ICP. The resulting filter is very natural and is proved to possess strong properties. Experiments with a Kinect sensor and a three-axis gyroscope demonstrate a clear improvement in the accuracy of the localization, and thus in the accuracy of the built 3D map. Comment: Submitted to IROS 2012, 8 pages.
Fast Monte-Carlo Localization on Aerial Vehicles using Approximate Continuous Belief Representations
Size-, weight-, and power-constrained platforms impose limits on computational resources that introduce unique challenges in implementing localization algorithms. We present a framework for fast localization on such platforms, enabled by the compressive capabilities of Gaussian Mixture Model representations of point cloud data. Given raw structural data from a depth sensor and pitch and roll estimates from an on-board attitude reference system, a multi-hypothesis particle filter localizes the vehicle by evaluating the likelihood of the data under the mixture model. We analyze this likelihood in the vicinity of the ground-truth pose, detail its use in a particle filter-based vehicle localization strategy, and present results of real-time implementations on a desktop system and an off-the-shelf embedded platform that outperform the localization results of a state-of-the-art algorithm in the same environment.
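A rough sketch of the particle-weighting idea described above, scoring each pose hypothesis by the likelihood of the transformed depth points under a Gaussian-mixture map; the gmm tuple layout and function names are assumptions of this sketch, not the authors' implementation:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_log_likelihood(points, weights, means, covs):
    """Log-likelihood of Nx3 points under a K-component Gaussian mixture."""
    density = np.zeros(len(points))
    for w, mu, cov in zip(weights, means, covs):
        density += w * multivariate_normal.pdf(points, mean=mu, cov=cov)
    return np.sum(np.log(density + 1e-12))

def weight_particles(particles, scan_points, gmm):
    """particles: list of 4x4 pose hypotheses; scan_points: Nx3 in sensor frame;
    gmm: (weights, means, covs) describing the map."""
    log_w = []
    for T in particles:
        pts_map = (T[:3, :3] @ scan_points.T).T + T[:3, 3]   # into map frame
        log_w.append(gmm_log_likelihood(pts_map, *gmm))
    log_w = np.array(log_w)
    w = np.exp(log_w - log_w.max())                          # stable normalization
    return w / w.sum()
```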
Extrinsic Calibration and Ego-Motion Estimation for Mobile Multi-Sensor Systems
Autonomous robots and vehicles are often equipped with multiple sensors to perform vital tasks such as localization or mapping. The joint system of various sensors with different sensing modalities can often provide better localization or mapping results than each individual sensor alone in terms of accuracy or completeness. However, to enable this improved performance, two important challenges have to be addressed when dealing with multi-sensor systems. Firstly, how to accurately determine the spatial relationship between the individual sensors on the robot? This is a vital task known as extrinsic calibration. Without this calibration information, measurements from different sensors cannot be fused. Secondly, how to combine data from multiple sensors to correct for the deficiencies of each sensor and thus provide better estimates? This is another important task known as data fusion. The core of this thesis is to provide answers to these two questions. We cover, in the first part of the thesis, aspects related to improving the extrinsic calibration accuracy, and present, in the second part, novel data fusion algorithms designed to address the ego-motion estimation problem using data from a laser scanner and a monocular camera. In the extrinsic calibration part, we contribute by revealing and quantifying the relative calibration accuracies of three common types of calibration methods, so as to offer insight into choosing the best calibration method when multiple options are available. Following that, we propose an optimization approach for solving common motion-based calibration problems. By exploiting the Gauss-Helmert model, our approach is more accurate and robust than the classical least-squares model. In the data fusion part, we focus on camera-laser data fusion and contribute two new ego-motion estimation algorithms that combine complementary information from a laser scanner and a monocular camera. The first algorithm utilizes camera image information to guide the laser scan-matching. It can provide accurate motion estimates and yet works in general conditions, without requiring a field-of-view overlap between the camera and laser scanner or an initial guess of the motion parameters. The second algorithm combines the camera and laser scanner information in a direct way, assuming the field-of-view overlap between the sensors is substantial. By maximizing the information usage of both the sparse laser point cloud and the dense image, the second algorithm achieves state-of-the-art estimation accuracy. Experimental results confirm that both algorithms offer excellent alternatives to state-of-the-art camera-laser ego-motion estimation algorithms.
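As a rough illustration of the motion-based calibration family discussed in the thesis, here is a minimal rotation-only hand-eye sketch built from paired ego-motion increments; it uses a plain axis-alignment (Kabsch) solution rather than the Gauss-Helmert formulation, and all names are assumptions:

```python
import numpy as np

def rotation_axis(R):
    """Unit rotation axis of a 3x3 rotation matrix (rotation angle assumed nonzero)."""
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return w / np.linalg.norm(w)

def hand_eye_rotation(motions_a, motions_b):
    """motions_a, motions_b: lists of 3x3 relative rotations observed by the two
    sensors over the same time intervals, satisfying A_i X = X B_i.
    The rotation axes then satisfy axis(A_i) = R_x axis(B_i); solve with Kabsch."""
    P = np.array([rotation_axis(B) for B in motions_b])   # sensor-2 axes (rows)
    Q = np.array([rotation_axis(A) for A in motions_a])   # sensor-1 axes (rows)
    H = P.T @ Q                                            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])     # enforce det(R) = +1
    return Vt.T @ D @ U.T                # maps sensor-2 axes onto sensor-1 axes
```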
A robust extended H-infinity filtering approach to multi-robot cooperative localization in dynamic indoor environments
Multi-robot cooperative localization serves as an essential task for a team of mobile robots working within an unknown environment. Based on real-time interaction of laser scanning data, a robust approach is proposed to obtain optimal multi-robot relative observations using the Metric-based Iterative Closest Point (MbICP) algorithm, which makes it possible to use the surrounding environment information directly instead of placing localization markers on the robots. To meet the demand of dealing with the inherent nonlinearities in the multi-robot kinematic models and the relative observations, a robust extended H∞ filtering (REHF) approach is developed for the multi-robot cooperative localization system, which can handle non-Gaussian process and measurement noise with respect to robot navigation in unknown dynamic scenes. Compared with a conventional multi-robot localization system using the extended Kalman filtering (EKF) approach, the proposed filtering algorithm provides superior performance in a dynamic indoor environment with outlier disturbances. Both numerical experiments and experiments conducted on Pioneer3-DX robots show that the proposed localization scheme is effective in improving both the accuracy and reliability of performance within a complex environment. This work was supported in part by the National Natural Science Foundation of China under grants 61075094, 61035005 and 61134009.
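For orientation, a heavily hedged sketch of one step of a generic discrete-time H-infinity (minimax) filter; this is the textbook recursion rather than the paper's robust extended H∞ variant, and it omits the MbICP observation model and the outlier handling:

```python
import numpy as np

def hinf_step(x, P, y, F, H, Q, R, gamma):
    """One minimax filter step. x: state estimate, P: its Gramian-like matrix,
    y: measurement; gamma bounds the worst-case disturbance-to-error gain.
    As gamma -> infinity the recursion approaches the Kalman filter."""
    n = len(x)
    # The bracketed matrix must remain positive definite for the filter to exist.
    core = np.linalg.inv(np.eye(n) - (gamma ** -2) * P
                         + H.T @ np.linalg.solve(R, H) @ P)
    K = P @ core @ H.T @ np.linalg.inv(R)                   # filter gain
    x_next = F @ x + F @ K @ (y - H @ x)                    # predict-correct form
    P_next = F @ P @ core @ F.T + Q
    return x_next, P_next
```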
X-ICP: Localizability-Aware LiDAR Registration for Robust Localization in Extreme Environments
Modern robotic systems are required to operate in challenging environments, which demand reliable localization under adverse conditions. LiDAR-based
localization methods, such as the Iterative Closest Point (ICP) algorithm, can
suffer in geometrically uninformative environments that are known to
deteriorate point cloud registration performance and push optimization toward
divergence along weakly constrained directions. To overcome this issue, this
work proposes i) a robust fine-grained localizability detection module, and ii)
a localizability-aware constrained ICP optimization module, which couples with
the localizability detection module in a unified manner. The proposed
localizability detection is achieved by utilizing the correspondences between
the scan and the map to analyze the alignment strength against the principal
directions of the optimization as part of its fine-grained LiDAR localizability
analysis. In the second part, this localizability analysis is then integrated
into the scan-to-map point cloud registration to generate drift-free pose
updates by enforcing controlled updates or leaving the degenerate directions of
the optimization unchanged. The proposed method is thoroughly evaluated and
compared to state-of-the-art methods in simulated and real-world experiments,
demonstrating the performance and reliability improvement in LiDAR-challenging
environments. In all experiments, the proposed framework demonstrates accurate
and generalizable localizability detection and robust pose estimation without
environment-specific parameter tuning. Comment: 20 pages, 20 figures. Submitted to IEEE Transactions on Robotics. Supplementary video: https://youtu.be/SviLl7q69aA. Project website: https://sites.google.com/leggedrobotics.com/x-ic
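A simplified sketch of the constrained-update idea: restrict each registration step to eigendirections of the optimization Hessian that are deemed localizable and leave degenerate directions untouched; the scalar threshold and interfaces are assumptions, not X-ICP's actual localizability categories:

```python
import numpy as np

def constrained_icp_step(H, g, localizability_threshold):
    """H: 6x6 Gauss-Newton Hessian, g: 6-vector gradient of the ICP cost at the
    current iterate. Returns an update restricted to localizable directions."""
    eigvals, V = np.linalg.eigh(H)
    delta = np.zeros(len(g))
    for lam, v in zip(eigvals, V.T):
        if lam > localizability_threshold:       # direction is well constrained
            delta += (-(v @ g) / lam) * v        # Newton step along that direction
        # else: leave this degenerate direction untouched (zero contribution)
    return delta
```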