Robust Incremental State Estimation through Covariance Adaptation
Recent advances in the fields of robotics and automation have spurred
significant interest in robust state estimation. To enable robust state
estimation, several methodologies have been proposed. One such technique, which
has shown promising performance, is the concept of iteratively estimating a
Gaussian Mixture Model (GMM), based upon the state estimation residuals, to
characterize the measurement uncertainty model. Through this iterative process,
the measurement uncertainty model is more accurately characterized, which
enables robust state estimation through the appropriate de-weighting of
erroneous observations. This approach, however, has traditionally required a
batch estimation framework to enable the estimation of the measurement
uncertainty model, which is not advantageous to robotic applications. In this
paper, we propose an efficient, incremental extension to the measurement
uncertainty model estimation paradigm. The incremental covariance estimation
(ICE) approach, as detailed within this paper, is evaluated on several
collected data sets, where it is shown to provide a significant increase in
localization accuracy when compared to other state-of-the-art robust,
incremental estimation algorithms.
Comment: 8 pages, 4 figures, 2 tables, submitted to IEEE Robotics and Automation Letters.
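The de-weighting mechanism the abstract relies on can be illustrated with a toy weighted least-squares problem. This is a generic numpy sketch with made-up data, not the paper's ICE algorithm: inflating the assumed variance of a corrupted observation removes its influence on the estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(20, 2))
x_true = np.array([1.0, -2.0])
b = A @ x_true + rng.normal(0.0, 0.01, size=20)
b[0] += 50.0                        # one grossly erroneous observation

def wls(A, b, var):
    """Weighted least squares: influence scales with inverse variance."""
    W = np.diag(1.0 / var)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

naive = wls(A, b, np.ones(20))      # uniform a priori uncertainty model
var = np.ones(20)
var[0] = 1e6                        # adapted model: outlier de-weighted
robust = wls(A, b, var)
assert np.linalg.norm(robust - x_true) < np.linalg.norm(naive - x_true)
```

The adapted model does not need to detect the outlier perfectly; assigning it a large variance is enough to suppress its pull on the solution.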
Fast Optimal Joint Tracking-Registration for Multi-Sensor Systems
Sensor fusion of multiple sources plays an important role in vehicular
systems to achieve refined target position and velocity estimates. In this
article, we address the general registration problem, which is a key module for
a fusion system to accurately correct systematic errors of sensors. A fast
maximum a posteriori (FMAP) algorithm for joint registration-tracking (JRT) is
presented. The algorithm uses a recursive two-step optimization that involves
orthogonal factorization to ensure numerical stability. A statistical
efficiency analysis based on Cramér-Rao lower bound theory is presented to
show the asymptotic optimality of FMAP. Also, Givens rotations are used to derive a
fast implementation with complexity O(n) in the number of tracked
targets. Simulations and experiments are presented to demonstrate the promise
and effectiveness of FMAP.
iSAM: Fast Incremental Smoothing and Mapping with Efficient Data Association
©2007 IEEE. Presented at the 2007 IEEE International Conference on Robotics and Automation (ICRA), 10-14 April 2007, Roma, Italy. DOI: 10.1109/ROBOT.2007.363563
We introduce incremental smoothing and mapping
(iSAM), a novel approach to the problem of simultaneous
localization and mapping (SLAM) that addresses the data association
problem and allows real-time application in large-scale
environments. We employ smoothing to obtain the complete
trajectory and map without the need for any approximations,
exploiting the natural sparsity of the smoothing information
matrix. A QR-factorization of this information matrix is at
the heart of our approach. It provides efficient access to the
exact covariances as well as to conservative estimates that
are used for online data association. It also allows recovery
of the exact trajectory and map at any given time by backsubstitution.
Instead of refactoring in each step, we update
the QR-factorization whenever a new measurement arrives.
We analyze the effect of loops, and show how our approach
extends to the non-linear case. Finally, we provide experimental
validation of the overall non-linear algorithm based on the
standard Victoria Park data set with unknown correspondences.
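The update-instead-of-refactor idea can be sketched in a few lines. This is a generic numpy illustration of folding one new measurement row into an existing QR factor with Givens rotations; it is not the iSAM code itself:

```python
import numpy as np

def qr_update(R, w):
    """Fold one new measurement row w into the upper-triangular factor R
    using Givens rotations, instead of refactoring from scratch."""
    R, w = R.copy(), w.astype(float).copy()
    for i in range(R.shape[0]):
        a, b = R[i, i], w[i]
        if b == 0.0:
            continue
        r = np.hypot(a, b)
        c, s = a / r, -b / r
        Ri = R[i, :].copy()
        R[i, :] = c * Ri - s * w     # rotated factor row
        w = s * Ri + c * w           # entry i of w is now zero
    return R

# updating R is equivalent to refactoring the augmented system [A; w]
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 3))
w = rng.normal(size=3)
R0 = np.linalg.qr(A, mode="r")
R1 = qr_update(R0, w)
assert np.allclose(R1.T @ R1, A.T @ A + np.outer(w, w))
```

Each update touches only the rows involved in the rotations, which is the source of the efficiency gain over refactoring the full information matrix at every step.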
Enabling Robust State Estimation through Covariance Adaptation
Several robust state estimation frameworks have been proposed over the previous decades. Underpinning all of these robust frameworks is one dubious assumption: that an accurate a priori measurement uncertainty model can be provided. As systems become more autonomous, this assumption becomes less valid (i.e., as systems start operating in novel environments, there is no guarantee that the assumed a priori measurement uncertainty model characterizes the sensor's current observation uncertainty).
In an attempt to relax this assumption, a novel robust state estimation framework is proposed. The proposed framework enables robust state estimation through the iterative adaptation of the measurement uncertainty model. The adaptation of the measurement uncertainty model is granted through non-parametric clustering of the estimator's residuals, which enables the characterization of the measurement uncertainty via a Gaussian mixture model. This Gaussian mixture model-based measurement uncertainty characterization can be incorporated into any non-linear least squares optimization routine.
Within this dissertation, the proposed framework is instantiated into three novel robust state estimation algorithms: batch covariance estimation (BCE), batch covariance estimation over an augmented data space (BCE-AD), and incremental covariance estimation (ICE). To verify the proposed framework, three global navigation satellite system (GNSS) data sets were collected. The collected data sets provide varying levels of observation degradation to enable the characterization of the proposed algorithm on a diverse data set. Utilizing these data sets, it is shown that the proposed framework exhibits improved state estimation accuracy compared to other robust estimation techniques when confronted with degraded data quality.
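The clustering step described above can be illustrated with a toy one-dimensional example. This is a minimal EM fit of a two-component Gaussian mixture to synthetic residuals, a generic stand-in for the non-parametric clustering in the dissertation, not the BCE/ICE algorithms themselves:

```python
import numpy as np

def fit_gmm_1d(r, iters=100):
    """Fit a two-component 1-D Gaussian mixture to residuals r with EM."""
    mu = np.array([0.0, 0.0])
    var = np.array([0.5 * np.var(r), 5.0 * np.var(r)])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each residual
        dens = pi / np.sqrt(2 * np.pi * var) \
            * np.exp(-0.5 * (r[:, None] - mu) ** 2 / var)
        g = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means and variances
        nk = g.sum(axis=0)
        pi, mu = nk / len(r), (g * r[:, None]).sum(axis=0) / nk
        var = (g * (r[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var, g

rng = np.random.default_rng(1)
r = np.concatenate([rng.normal(0.0, 0.5, 200),    # inlier residuals
                    rng.normal(0.0, 10.0, 20)])   # gross outliers
pi, mu, var, g = fit_gmm_1d(r)
# de-weight each residual by the variance of its most likely component
w = 1.0 / var[np.argmax(g, axis=1)]
assert w[:200].mean() > w[200:].mean()   # outliers end up down-weighted
```

The mixture separates a tight inlier component from a broad outlier component, and the per-measurement weights derived from the component variances can then feed a weighted least-squares solve.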
Variability Modeling in Mapping and Localization Methods
Automation plays an increasingly important role today; in robotics in particular, new application areas keep emerging in which humans are replaced by autonomous vehicles. Most of the robots deployed today navigate along track markings installed in their operating environments. Such systems, however, require considerable installation effort, which drives the development of robot systems that orient themselves using their on-board sensors. Although a large number of robots could be used for this purpose, the development of the control software is still an open research topic.
The control software requires an environment map by which the robot can orient itself. SLAM methods, which perform simultaneous localization and mapping, are particularly well suited to this: as the robot moves through space, it builds an environment map from its sensor data and localizes itself against it in order to determine its exact position on the map.
In the course of this work, more than 30 different SLAM implementations that solve the SLAM problem were found. Most of them, however, are stand-alone implementations tailored to specific system or environment requirements.
There is no publicly available overview that compares all, or even most, of these methods, e.g., with respect to how they work, their system requirements (sensors, robot platform), environment requirements (indoor, outdoor, ...), accuracy, or speed. Many of these SLAM systems come with implementations and documentation describing their application areas, test conditions, or improvements over other SLAM methods, but given the large number of publications this does not make finding a suitable SLAM method any easier.
Given such a large number of SLAM methods and implementations, the following questions arise from a software engineering perspective:
1. Can individual parts of a SLAM system be reused?
2. Can individual parts of a SLAM system be exchanged dynamically?
This work aims to answer these two questions. To this end, an overview of all SLAM systems found is first compiled in order to distinguish them by their basic properties. From the multitude of methods, the grid-based methods that use laser scanners or depth cameras as sensors are selected as the set to be examined. This subset of SLAM methods is analyzed in more detail with respect to its non-functional properties, and an attempt is made to divide it into components that can be reused across several different implementations. Based on the extracted components, a feature tree is to be built that gives the user an overview and the ability to compose SLAM methods according to specific criteria (system requirements, environments, ...) or to adapt them at runtime. For this purpose, the available SLAM implementations and their documentation must be analyzed with respect to their commonalities and differences.
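The two reuse questions can be made concrete with a small sketch. All class and feature names below are illustrative inventions, not drawn from any surveyed SLAM implementation; the point is only that components behind a common interface can be selected by feature and swapped without touching the rest of the pipeline:

```python
# hypothetical component interfaces; all names are illustrative and not
# taken from any surveyed SLAM implementation
class GridMap:
    def __init__(self):
        self.cells = set()
    def update(self, pose, scan):
        for hit in scan:                 # mark observed cells as occupied
            self.cells.add(round(pose + hit))

class OdometryLocalizer:
    def localize(self, pose, scan, grid):
        return pose + 1.0                # placeholder: dead reckoning

class ScanMatchLocalizer:
    def localize(self, pose, scan, grid):
        return pose                      # placeholder: align scan to grid

REGISTRY = {"odometry": OdometryLocalizer,
            "scan_matching": ScanMatchLocalizer}

def build_slam(localizer_feature):
    """Assemble a pipeline from a feature selection."""
    return REGISTRY[localizer_feature](), GridMap()

loc, grid = build_slam("odometry")       # swap the feature to reconfigure
pose = 0.0
for scan in ([0.5, 1.5], [0.4, 1.6]):
    pose = loc.localize(pose, scan, grid)
    grid.update(pose, scan)
assert pose == 2.0 and len(grid.cells) == 2
```

A feature tree over such a registry is what would let a user compose, or re-compose at runtime, a SLAM variant matching given system and environment requirements.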
Construction of Probabilistic Maps using SLAM Techniques in the ROS Environment
This project is a new version of the final-year project "Construcción de Mapas Probabilísticos mediante sensores de visión y láser a bordo de un robot" [1], in which the SLAM problem is solved for small environments using a robot equipped with two cameras that compose a stereoscopic vision system, together with the available odometry information. The solution given in the cited project is an Extended Kalman Filter (EKF) that maintains an estimate of the robot's position and of the natural landmarks that model the environment.
The original project had some weak points, such as a limited map size. To improve its behaviour, the EKF used as the Bayesian inference method for managing the map has been replaced by a Smoothing and Mapping approach, and the results obtained with the two methods have been compared. The data association method, which had already given good results, is the same in both cases, with just a few modifications to adapt it to the inputs required by Smoothing and Mapping.
The present work is a version of the iSAM [2] algorithm with a
multi-process implementation, combining the strong points of the solution provided in [1] with a more modern approach to the SLAM problem. The GTSAM libraries, which support Smoothing and Mapping by providing classes for handling Bayesian networks, have been used to develop the algorithm. The final solution runs as several nodes of the ROS (Robot Operating System) meta-operating system that communicate with each other and work in parallel. The proposed algorithm has been developed to be adaptable to different robotic platforms and different stereoscopic vision sensors. The robotic platform used is Qbo, from the company TheCorpora; it has been used to validate the results of the two tested solutions, showing that the more recent Smoothing and Mapping approach improves on the classic EKF SLAM both in the map size that can be managed and in the computation time of each iteration of the algorithm.
Máster Universitario en Sistemas Electrónicos Avanzados. Sistemas Inteligentes (M128
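The parallel-node design described above can be mimicked in miniature with two threads talking over a queue. This is only a toy stand-in for two ROS nodes exchanging messages over a topic, not the actual ROS or GTSAM code:

```python
import queue
import threading

# a "front-end" thread publishes measurements; a "back-end" thread folds
# them into a running estimate, like two ROS nodes working in parallel
measurements = queue.Queue()
estimate = []

def frontend():
    for z in [1.0, 2.0, 3.0]:
        measurements.put(z)
    measurements.put(None)          # shutdown signal

def backend():
    total = 0.0
    while True:
        z = measurements.get()
        if z is None:
            break
        total += z
        estimate.append(total)      # estimate after each incremental update

t1 = threading.Thread(target=frontend)
t2 = threading.Thread(target=backend)
t1.start(); t2.start()
t1.join(); t2.join()
assert estimate == [1.0, 3.0, 6.0]
```

Decoupling the producer from the optimizer in this way is what lets the incremental back-end keep running while new sensor data arrives.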
Robust and efficient robotic mapping
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 123-129).
Mobile robots are dependent upon a model of the environment for many of their basic functions. Locally accurate maps are critical to collision avoidance, while large-scale maps (accurate both metrically and topologically) are necessary for efficient route planning. Solutions to these problems have immediate and important applications to autonomous vehicles, precision surveying, and domestic robots. Building accurate maps can be cast as an optimization problem: find the map that is most probable given the set of observations of the environment. However, the problem rapidly becomes difficult when dealing with large maps or large numbers of observations. Sensor noise and non-linearities make the problem even more difficult, especially when using inexpensive (and therefore preferable) sensors. This thesis describes an optimization algorithm that can rapidly estimate the maximum likelihood map given a set of observations. The algorithm, which iteratively reduces map error by considering a single observation at a time, scales well to large environments with many observations. The approach is particularly robust to noise and non-linearities, quickly escaping local minima that trap current methods. Both batch and online versions of the algorithm are described. In order to build a map, however, a robot must first be able to recognize places that it has previously seen. Limitations in sensor processing algorithms, coupled with environmental ambiguity, make this difficult. Incorrect place recognitions can rapidly lead to divergence of the map. This thesis describes a place recognition algorithm that can robustly handle ambiguous data.
We evaluate these algorithms on a number of challenging datasets and provide quantitative comparisons to other state-of-the-art methods, illustrating the advantages of our methods.
by Edwin B. Olson. Ph.D.
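The "single observation at a time" idea can be seen in a toy 1-D pose graph. This is a generic relaxation sketch on invented data, not the thesis's actual algorithm or parameterization: each pass visits one constraint and moves only its two endpoint poses to cancel that constraint's residual.

```python
import numpy as np

# toy 1-D pose graph: five poses, odometry constraints x[j] - x[i] = 1
# between neighbours, plus one loop-closure constraint x[4] - x[0] = 4
constraints = [(i, i + 1, 1.0) for i in range(4)] + [(0, 4, 4.0)]
x = np.array([0.0, 1.2, 1.9, 3.3, 4.5])   # noisy initial guess

def sq_error(x):
    return sum((x[j] - x[i] - d) ** 2 for i, j, d in constraints)

for _ in range(100):
    for i, j, d in constraints:   # one observation at a time
        r = x[j] - x[i] - d       # residual of this constraint alone
        x[j] -= 0.5 * r           # move both endpoints to cancel it
        x[i] += 0.5 * r
assert sq_error(x) < 1e-12
```

Because each step is local and cheap, the total cost per sweep grows only with the number of constraints, which is what makes this style of update attractive for large maps.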
Fast incremental square root information smoothing
We propose a novel approach to the problem of simultaneous localization and mapping (SLAM) based on incremental smoothing that is suitable for real-time applications in large-scale environments. The main advantages over filter-based algorithms are that we solve the full SLAM problem without the need for any approximations, and that we do not suffer from linearization errors. We achieve efficiency by updating the square-root information matrix, a factored version of the naturally sparse smoothing information matrix. We can efficiently recover the exact trajectory and map at any given time by back-substitution. Furthermore, our approach allows access to the exact covariances, as it does not suffer from under-estimation of uncertainties, which is another problem inherent to filters. We present simulation-based results for the linear case, showing constant time updates for exploration tasks. We further evaluate the behavior in the presence of loops, and discuss how our approach extends to the non-linear case. Finally, we evaluate the overall non-linear algorithm on the standard Victoria Park data set.
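The back-substitution step mentioned above can be shown on a small generic least-squares problem (a numpy sketch with made-up data, not the paper's implementation): given the square-root information matrix R and the corresponding right-hand side, the state is recovered by solving the triangular system directly.

```python
import numpy as np

def back_substitute(R, d):
    """Solve R x = d for upper-triangular R by back-substitution."""
    n = len(d)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (d[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
    return x

# square-root information form of a small least-squares problem A x ≈ b
rng = np.random.default_rng(2)
A = rng.normal(size=(8, 4))
b = rng.normal(size=8)
Q, R = np.linalg.qr(A)          # R plays the square-root information role
d = Q.T @ b
x = back_substitute(R, d)
# matches the full least-squares solution
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```

Since R is triangular, recovering the full state costs only one sweep over its non-zero entries, which stays cheap when the factor is sparse.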