
    A Comprehensive Introduction of Visual-Inertial Navigation

    Full text link
    In this article, a tutorial introduction to visual-inertial navigation (VIN) is presented. Visual and inertial perception are two complementary sensing modalities; cameras and inertial measurement units (IMUs) are the corresponding sensors. The low cost and light weight of camera-IMU sensor combinations make them ubiquitous in robotic navigation. Visual-inertial navigation is a state estimation problem that estimates the ego-motion of the sensor platform and its local environment. This paper presents visual-inertial navigation in the classical state estimation framework, first illustrating the estimation problem in terms of state variables and system models, including the parameterization of the relevant quantities, the IMU dynamic and camera measurement models, and the corresponding probabilistic graphical model (factor graph). Secondly, we investigate existing model-based estimation methodologies, covering filter-based and optimization-based frameworks and the related on-manifold operations. We also discuss the calibration of relevant parameters and the initialization of the states of interest in optimization-based frameworks. Then the evaluation and improvement of VIN in terms of accuracy, efficiency, and robustness are discussed. Finally, we briefly mention the recent development of learning-based methods that may become alternatives to traditional model-based methods. Comment: 35 pages, 10 figures
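
    As an illustration of the IMU dynamic model referred to above, the following is a minimal sketch of one step of IMU dead-reckoning, assuming bias-free, noise-free gyroscope and accelerometer readings and simple Euler integration; it is illustrative only, not the paper's formulation.

        import numpy as np

        def skew(w):
            """Skew-symmetric matrix of a 3-vector."""
            return np.array([[0.0, -w[2], w[1]],
                             [w[2], 0.0, -w[0]],
                             [-w[1], w[0], 0.0]])

        def propagate_imu(R, v, p, gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
            """One Euler step of IMU dead-reckoning.

            R : 3x3 rotation (body to world), v : velocity, p : position,
            gyro, accel : body-frame angular rate and specific force (assumed bias-free).
            """
            # Rotation increment via the SO(3) exponential map (Rodrigues' formula).
            dtheta = gyro * dt
            angle = np.linalg.norm(dtheta)
            if angle > 1e-12:
                K = skew(dtheta / angle)
                dR = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
            else:
                dR = np.eye(3) + skew(dtheta)
            a_world = R @ accel + g                  # specific force rotated to world, plus gravity
            p_new = p + v * dt + 0.5 * a_world * dt**2
            v_new = v + a_world * dt
            R_new = R @ dR
            return R_new, v_new, p_new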

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Get PDF
    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
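
    The de-facto standard formulation referred to above is maximum a posteriori (MAP) estimation over a factor graph; in generic notation (mine, not taken verbatim from the paper), with measurements $z_k$, measurement models $h_k$, covariances $\Sigma_k$, and the involved state subsets $\mathcal{X}_k$,

        \mathcal{X}^{\star} \;=\; \arg\max_{\mathcal{X}} \, p(\mathcal{X} \mid \mathcal{Z})
                            \;=\; \arg\min_{\mathcal{X}} \sum_{k} \big\| h_k(\mathcal{X}_k) - z_k \big\|_{\Sigma_k}^{2},

    i.e., under Gaussian noise assumptions the MAP estimate reduces to a sparse nonlinear least-squares problem whose structure is exactly the factor graph.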

    Advancing non-linear methods for coupled data assimilation across the atmosphere-land interface

    Get PDF
    In this thesis, I present two complementary frameworks to improve data assimilation in Earth system models, using the atmosphere-land interface as an exemplary case. As processes and components in the Earth system are coupled via interfaces, we would expect that assimilating observations from one Earth system component into another would improve the initialization of both components. In contrast to this expectation, it is often found that assimilation of atmospheric boundary layer observations into the land surface does not improve the analysis of the latter component. To disentangle the effects on the cross-compartmental assimilation, I take a step back from operational methods and use the coupled atmosphere-land modelling platform TerrSysMP in idealized twin experiments. I synthesize hourly and sparsely-distributed 2-metre-temperature observations from a single "nature" run. I subsequently assimilate these observations into the soil moisture with different types of data assimilation methods. Based on this experimental structure, I test advanced data assimilation methods without model errors or biases. As my first framework, I propose to use localized ensemble Kalman filters for the unification of coupled data assimilation in Earth system models. To validate this framework, I conduct comparison experiments with a localized ensemble transform Kalman filter and a simplified extended Kalman filter, as similarly used at the ECMWF. Based on my developed environment, I find that we can assimilate 2-metre-temperature observations to improve the soil moisture analysis. In addition, updating the soil moisture hourly with an ensemble Kalman filter decreases the error within the soil moisture analysis by up to 50 % compared to daily smoothing with a simplified extended Kalman filter. As a consequence, observations from the atmospheric boundary layer can be directly assimilated into the land surface model without the need for any intermediate interpolation, as normally used in land surface data assimilation. The improvement suggests that the land surface can be updated on the same hourly cycle as used for mesoscale data assimilation. My results therefore prove that a unification of methods for data assimilation across the atmosphere-land interface is possible. As my second framework, I propose to use feature-based data assimilation to stabilize cross-compartmental data assimilation. To validate this framework, I use my implementation of an ensemble Kalman smoother that applies its analysis at the beginning of an assimilation window and resembles 4DEnVar. This smoother takes advantage of temporal dependencies in the atmosphere-land interface and improves the soil moisture analysis by 10 % compared to the ensemble Kalman filter. Subsequently, based on this smoother, I introduce fingerprint operators as an observational feature extractor into cross-compartmental data assimilation. These fingerprint operators take advantage of characteristic fingerprints in the difference between observations and model that point towards forecast errors, possibly in another Earth system component. As a main finding, this concept can condense the information from the diurnal cycle in 2-metre-temperature observations into two observational features. This condensation makes the soil moisture analysis more robust against a mis-specified localization radius and errors in the observational covariance.
Finally, I provide two new theoretical approaches to automatically learn such observational features with machine learning. In the first approach, I generalize the ensemble Kalman filter with observational features to a novel kernelized ensemble transform Kalman filter. This kernelized filter automatically constructs the feature extractor on the basis of the given ensemble data and a chosen kernel function. In the second approach, I show that parameters within the data assimilation can be learned by variational Bayes. In this way, we can find whole distributions for parameters in data assimilation and, thus, determine their uncertainties. Furthermore, I prove that the ensemble transform Kalman filter is a special solution of variational Bayes in the linearized-Gaussian case. These results suggest a possibility to specify the feature extractor as a neural network and to train it with variational Bayes. These two approaches therefore prove that developments in machine learning can be used to extend data assimilation.
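
    For context on the ensemble Kalman filter analysis used throughout this thesis, the following is a minimal, unlocalized sketch of a stochastic (perturbed-observation) EnKF analysis step. It is a generic textbook form with a linear observation operator, not the thesis implementation, and all names are illustrative.

        import numpy as np

        def enkf_analysis(X, y, H, R, rng=np.random.default_rng(0)):
            """Perturbed-observation EnKF analysis step.

            X : (n_state, n_ens) forecast ensemble
            y : (n_obs,) observation vector
            H : (n_obs, n_state) linear observation operator
            R : (n_obs, n_obs) observation-error covariance
            """
            n_ens = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
            P = A @ A.T / (n_ens - 1)                    # sample forecast covariance
            S = H @ P @ H.T + R                          # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
            # Perturb the observations so the analysis ensemble keeps the correct spread.
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
            return X + K @ (Y - H @ X)                   # analysis ensemble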

    Long-Term Localization for Self-Driving Cars

    Get PDF
    Long-term localization is hard due to changing conditions, while relative localization within time sequences is much easier. To achieve long-term localization in a sequential setting, such as for self-driving cars, relative localization should therefore be used to the fullest extent whenever possible. This thesis presents solutions and insights both for long-term sequential visual localization and for localization using global navigation satellite systems (GNSS) that push us closer to the goal of accurate and reliable localization for self-driving cars. It addresses the question: how can accurate and robust, yet cost-effective, long-term localization be achieved for self-driving cars? Starting from this question, the thesis explores how existing sensor suites for advanced driver-assistance systems (ADAS) can be used most efficiently, and how landmarks in maps can be recognized and used for localization even after severe changes in appearance. The findings show that:
    * State-of-the-art ADAS sensors are insufficient to meet the requirements for localizing a self-driving car in less-than-ideal conditions; GNSS and visual localization are identified as areas to improve.
    * Highly accurate relative localization with no convergence delay is possible by using time-relative GNSS observations with a single-band receiver and no base stations.
    * Sequential semantic localization is identified as a promising focus for further research, based on a benchmark study comparing state-of-the-art visual localization methods in challenging autonomous driving scenarios, including day-to-night and seasonal changes.
    * A novel sequential semantic localization algorithm improves accuracy while significantly reducing map size compared to traditional methods based on matching of local image features.
    * Improvements for semantic segmentation in challenging conditions can be made efficiently by automatically generating pixel correspondences between images from a multitude of conditions and enforcing a consistency constraint during training.
    * A segmentation algorithm with automatically defined and more fine-grained classes improves localization performance.
    * The performance advantage of modern local image features over traditional ones in single-image localization is all but erased when considering sequential data with odometry, encouraging future research to focus more on sequential localization rather than pure single-image localization.

    Correlation Flow: Robust Optical Flow Using Kernel Cross-Correlators

    Full text link
    Robust velocity and position estimation is crucial for autonomous robot navigation. Optical flow based methods for autonomous navigation have been receiving increasing attention in tandem with the development of micro unmanned aerial vehicles. This paper proposes a kernel cross-correlator (KCC) based algorithm to determine optical flow using a monocular camera, named correlation flow (CF). Correlation flow is able to provide reliable and accurate velocity estimation and is robust to motion blur. In addition, it can also estimate the altitude velocity and yaw rate, which are not available from traditional methods. Autonomous flight tests on a quadcopter show that correlation flow can provide robust trajectory estimation with very low processing power. The source code is released based on the ROS framework. Comment: 2018 International Conference on Robotics and Automation (ICRA 2018)
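
    As a loose illustration of frequency-domain correlation for motion estimation (a simpler relative of the kernel cross-correlator used in the paper, not the paper's KCC itself), the translational shift between two consecutive grayscale frames can be estimated by phase correlation:

        import numpy as np

        def phase_correlation_shift(frame1, frame2):
            """Estimate the integer-pixel translation between two grayscale frames."""
            F1 = np.fft.fft2(frame1)
            F2 = np.fft.fft2(frame2)
            cross_power = F1 * np.conj(F2)
            cross_power /= np.abs(cross_power) + 1e-12   # normalized cross-power spectrum
            response = np.fft.ifft2(cross_power).real    # correlation surface
            peak = np.unravel_index(np.argmax(response), response.shape)
            # Convert the peak location to signed shifts (handle wrap-around).
            shift = np.array(peak, dtype=float)
            for i, n in enumerate(response.shape):
                if shift[i] > n / 2:
                    shift[i] -= n
            return shift  # (dy, dx) in pixels; the sign convention depends on the frame order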

    Robust state estimation methods for robotics applications

    Get PDF
    State estimation is an integral component of any autonomous robotic system. Finding the correct position, velocity, and orientation of an agent in its environment enables it to perform other tasks such as mapping, interacting with the environment, and collaborating with other agents. State estimation is achieved by using data obtained from multiple sensors and fusing it in a probabilistic framework. These data include inertial measurements from an Inertial Measurement Unit (IMU), images from cameras, range data from lidars, and positioning data from Global Navigation Satellite System (GNSS) receivers. The main challenge faced in sensor-based state estimation is the presence of noisy or erroneous data, or even a lack of informative data. Some common examples of such situations include wrong feature matching between images or point clouds, false loop closures due to perceptual aliasing (different places that look similar can confuse the robot), the presence of dynamic objects in the environment (odometry algorithms assume a static environment), multipath errors for GNSS (satellite signals bouncing off tall structures such as buildings before reaching the receiver), and more. This work studies existing and new ways in which standard estimation algorithms like the Kalman filter and factor graphs can be made robust to such adverse conditions without losing performance in ideal outlier-free conditions. The first part of this work demonstrates the importance of robust Kalman filters for wheel-inertial odometry on high-slip terrain. Next, inertial data is integrated into GNSS factor graphs to improve their accuracy and robustness. Lastly, a combined framework for improving the robustness of non-linear least squares and estimating the inlier noise threshold is proposed and tested with point cloud registration and lidar-inertial odometry algorithms, followed by an algorithmic analysis of optimizing generalized robust cost functions with factor graphs for the GNSS positioning problem.
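
    As a generic illustration of the robust cost functions discussed above (not the specific framework proposed in this work), a nonlinear least-squares problem can be robustified against outliers with a Huber loss, for example via SciPy; the toy range-measurement setup below is purely hypothetical:

        import numpy as np
        from scipy.optimize import least_squares

        # Toy problem: estimate a 2-D position from ranges to known landmarks,
        # with one gross outlier injected into the measurements.
        landmarks = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        true_pos = np.array([3.0, 4.0])
        ranges = np.linalg.norm(landmarks - true_pos, axis=1)
        ranges[2] += 8.0                                 # gross outlier

        def residuals(p):
            """Range residuals for a candidate position p."""
            return np.linalg.norm(landmarks - p, axis=1) - ranges

        # loss="huber" down-weights large residuals, so the outlier barely biases the estimate.
        sol = least_squares(residuals, x0=np.array([5.0, 5.0]), loss="huber", f_scale=1.0)
        print(sol.x)   # close to [3, 4] despite the corrupted measurement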