21 research outputs found
On the Covariance of ICP-based Scan-matching Techniques
This paper considers the problem of estimating the covariance of
roto-translations computed by the Iterative Closest Point (ICP) algorithm. The
problem is relevant for localization of mobile robots and vehicles equipped
with depth-sensing cameras (e.g., Kinect) or Lidar (e.g., Velodyne). The
closed-form formulas for covariance proposed in previous literature generally
build upon the fact that the solution to ICP is obtained by minimizing a linear
least-squares problem. In this paper, we show that this approach requires
caution because the rematching step of the algorithm is not explicitly
accounted for, and applying it to the point-to-point version of ICP leads to
completely erroneous covariances. We then provide a formal mathematical proof
of why the approach is valid for the point-to-plane version of ICP, which
validates the intuition and experimental results of practitioners.
Comment: Accepted at the 2016 American Control Conference.
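The closed-form covariance discussed in the abstract is the standard linearized least-squares approximation, cov ≈ σ²(JᵀJ)⁻¹, which the paper argues is trustworthy only for the point-to-plane cost. A minimal 2D sketch (assuming fixed correspondences and a pose parameterized as (tx, ty, θ); the function name and noise value are illustrative, not from the paper):

```python
import numpy as np

def point_to_plane_covariance(src, normals, sigma=0.01):
    """Covariance of a 2D point-to-plane ICP estimate (tx, ty, theta),
    approximated as sigma^2 * (J^T J)^{-1} for the least-squares cost
    sum_i (n_i . (R(theta) p_i + t - q_i))^2 linearized at the optimum.
    Correspondences are assumed fixed (the rematching step is ignored)."""
    J = np.zeros((len(src), 3))
    for i, (p, n) in enumerate(zip(src, normals)):
        J[i, 0] = n[0]                        # d residual / d tx
        J[i, 1] = n[1]                        # d residual / d ty
        J[i, 2] = n[1] * p[0] - n[0] * p[1]   # d residual / d theta
    return sigma**2 * np.linalg.inv(J.T @ J)
```

Note that points must constrain all three degrees of freedom (e.g., normals along two non-parallel walls), otherwise JᵀJ is singular and the covariance is unbounded in the unconstrained direction.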
LiDAR based relative pose and covariance estimation for communicating vehicles exchanging a polygonal model of their shape
Invariant EKF Design for Scan Matching-aided Localization
Indoor localization estimates the robot's pose by fusing data from onboard
motion sensors with readings of the environment, in our case obtained by scan
matching point clouds captured by a low-cost Kinect depth camera. We develop
both an Invariant Extended Kalman Filter (IEKF)-based and a Multiplicative
Extended Kalman Filter (MEKF)-based solution to this problem. Both designs are
successfully validated in experiments, which demonstrate the advantage of the
IEKF design.
Probabilistic visual verification for robotic assembly manipulation
In this paper, we present a visual verification approach for robotic assembly manipulation which enables robots to verify their assembly state. Given shape models of objects and their expected placement configurations, our approach estimates the probability of success of the assembled state using a depth sensor. The proposed approach takes into account uncertainties in object pose. Probability distributions of depth and surface normal, conditioned on these uncertainties, are estimated to classify the assembly state in a Bayesian formulation. The effectiveness of our approach is validated in comparative experiments with other approaches.
Boeing Company
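The Bayesian classification described above can be illustrated with a toy posterior over two hypotheses, "assembled" versus "not assembled", with Gaussian likelihoods over depth residuals. All names, noise scales, and the prior below are illustrative assumptions, not the paper's actual model:

```python
import math

def assembly_success_probability(depth_residuals, sigma_ok=0.002,
                                 sigma_fail=0.02, prior_ok=0.5):
    """Toy Bayesian verification: the assembly is deemed successful if
    the observed depth residuals (metres) are better explained by a
    tight 'success' Gaussian than by a broad 'failure' Gaussian.
    Returns the posterior probability of success."""
    def log_lik(r, s):
        return -0.5 * (r / s) ** 2 - math.log(s * math.sqrt(2 * math.pi))
    log_ok = math.log(prior_ok) + sum(log_lik(r, sigma_ok)
                                      for r in depth_residuals)
    log_fail = math.log(1 - prior_ok) + sum(log_lik(r, sigma_fail)
                                            for r in depth_residuals)
    m = max(log_ok, log_fail)  # log-sum-exp trick for numerical stability
    num = math.exp(log_ok - m)
    return num / (num + math.exp(log_fail - m))
```

Small residuals (points lying on the expected surface) drive the posterior toward 1; large residuals drive it toward 0.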
Real-Time Pose Graph SLAM based on Radar
This work presents a real-time pose graph based Simultaneous Localization and Mapping (SLAM) system for automotive Radar.
The algorithm constructs a map from Radar detections using the Iterative Closest Point (ICP) method to match consecutive scans obtained from a single, front-facing Radar sensor.
The algorithm is evaluated on a range of real-world datasets, achieving mean translational errors as low as 0.62 m and demonstrating robustness on long tracks.
Using a single Radar, our proposed system achieves state-of-the-art performance compared to other Radar-based SLAM algorithms that use multiple, higher-resolution Radars.
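The scan-to-scan registration step named in this abstract is the classic point-to-point ICP loop: alternate nearest-neighbour rematching with a closed-form rigid alignment. A minimal 2D sketch (brute-force matching and the SVD-based Kabsch solution; not the paper's implementation):

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Minimal 2D point-to-point ICP: at each iteration, match every
    source point to its nearest destination point, then solve the rigid
    alignment in closed form via SVD (Kabsch). Returns rotation R (2x2)
    and translation t such that R @ p + t maps src points onto dst."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour rematching
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # closed-form rigid alignment of cur onto matched
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_s).T @ (matched - mu_d))
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        Ri = Vt.T @ D @ U.T          # guard against reflections
        ti = mu_d - Ri @ mu_s
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti   # accumulate the incremental pose
    return R, t
```

A real Radar pipeline would replace the O(n²) matching with a k-d tree and add outlier rejection, since Radar detections are sparse and noisy.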
Localisation-safe reinforcement learning for mapless navigation
Most reinforcement learning (RL)-based works for mapless point-goal navigation tasks assume the availability of ground-truth robot poses, which is unrealistic for real-world applications. In this work, we remove this assumption and deploy observation-based localisation algorithms, such as Lidar-based or visual odometry, for robot self-pose estimation. Although these algorithms have achieved promising performance and are robust to various harsh environments, they may fail to track robot locations in many scenarios where the observations perceived along robot trajectories are insufficient or ambiguous. Hence, using such localisation algorithms introduces new, unstudied problems for mapless navigation tasks. This work proposes a new RL-based algorithm with which robots learn to navigate in a way that prevents localisation failures and avoids getting trapped in local minimum regions. This ability is learned through two techniques suggested in this work: a reward metric that penalises behaviours resulting in localisation failures; and a reconfigured state representation, consisting of the current observation and history trajectory information, which transfers the problem from a partially observable Markov decision process (POMDP) to a Markov decision process (MDP) model to avoid local minima.
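The reward metric described in the abstract can be sketched as a step reward that penalises localisation failure alongside the usual goal/collision terms. The signature, weights, and terminal values below are illustrative assumptions, not the paper's exact formulation:

```python
def localisation_safe_reward(reached_goal, collided, localisation_lost,
                             progress, w_progress=1.0, penalty_loc=-10.0):
    """Illustrative step reward for localisation-safe navigation:
    bonus on reaching the goal, large penalties on collision and on
    localisation failure, otherwise dense reward for progress made
    toward the goal this step (metres, positive = closer)."""
    if reached_goal:
        return 10.0
    if collided:
        return -10.0
    if localisation_lost:
        return penalty_loc      # punish behaviours that break tracking
    return w_progress * progress
```

Because the penalty fires on localisation failure rather than on any ground-truth error, the agent learns to prefer trajectories along which its observations stay informative.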