6 research outputs found

    Visual SLAM from image sequences acquired by unmanned aerial vehicles

    This thesis shows that Kalman-filter-based approaches are sufficient for the task of simultaneous localization and mapping from image sequences acquired by unmanned aerial vehicles. Solving the problem of simultaneous localization and mapping (SLAM) using solely direction measurements is an important capability of autonomous systems. Because of the need for real-time capable systems, recursive estimation techniques, in particular Kalman-filter-based approaches, are the main focus of interest. Unfortunately, the non-linearity of the triangulation based on the direction measurements decreases the accuracy and consistency of the results. The first contribution of this work is a general derivation of the recursive update of the Kalman filter. This derivation is based on implicit measurement equations and contains the classical iterative non-linear as well as the non-iterative linear Kalman filter as specializations. Second, new formulations of linear motion models for the single-camera state model and the sliding-window camera state model are given, which make it possible to compute the prediction in a fully linear manner. The third major contribution is a novel method for the initialization of new object points in the Kalman filter. Empirical studies using synthetic and real data of an image sequence of a photogrammetric strip demonstrate and compare the influence of the initialization methods for new object points in the Kalman filter. Fourth, the accuracy potential of monoscopic image sequences from unmanned aerial vehicles for autonomous localization and mapping is analyzed theoretically, which can be used for planning purposes.

    Visual simultaneous localization and mapping from image sequences of unmanned aerial vehicles: This thesis shows that the Kalman-filter-based solution of the triangulation for localization and mapping from image sequences of unmanned aerial vehicles is feasible. Owing to the real-time requirements of autonomous systems, recursive estimation methods, in particular Kalman-filter-based approaches, have become very popular. Unfortunately, the non-linearity of the triangulation gives rise to effects that significantly affect the consistency and accuracy of the solution with respect to the estimated parameters. The first contribution of this work is the derivation of a general procedure for the recursive update in the Kalman filter with implicit observation equations. We show that the classical Kalman filter procedures are a specialization of our approach. In the second contribution, we extend the classical modelling of a single-camera model to a multi-camera model in the Kalman filter. This extension allows us to compute the prediction for a linear motion model in a fully linear manner. In a third main contribution, we present a new method for the initialization of new object points in the Kalman filter. Using empirical studies on simulated and real data of an image sequence of a photogrammetric strip, we demonstrate and compare the influence of the initialization methods for new object points in the Kalman filter and the accuracies achievable in these scenarios. As a fourth contribution, using image sequences of an unmanned aerial vehicle as an example, we show which accuracy is achievable for localization and mapping by triangulation. This theoretical analysis can in turn be used for planning purposes.
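    A brief sketch of a recursive Kalman filter update with implicit measurement equations, of the kind named in the first contribution, written in standard Gauss-Helmert style; the symbols A, B, w, S and K below are generic notation and not necessarily the notation used in the thesis.

        % Implicit measurement model g(x, l) = 0, predicted state \bar{x} with covariance P,
        % observed measurements l with covariance R. Linearize at (\bar{x}, l):
        A = \frac{\partial g}{\partial x}\Big|_{(\bar{x},\,l)}, \qquad
        B = \frac{\partial g}{\partial l}\Big|_{(\bar{x},\,l)}, \qquad
        w = g(\bar{x}, l) \quad \text{(misclosure)}
        % Treat w as a pseudo-measurement with covariance B R B^{\top}:
        S = A P A^{\top} + B R B^{\top}, \qquad K = P A^{\top} S^{-1}
        \hat{x} = \bar{x} - K w, \qquad \hat{P} = (I - K A)\,P
        % The explicit special case g(x, l) = h(x) - l gives B = -I and recovers the
        % classical (extended) Kalman filter update; iterating the linearization point
        % yields the iterative non-linear variant.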

    A Survey on Odometry for Autonomous Navigation Systems

    The development of a navigation system is one of the major challenges in building a fully autonomous platform. Full autonomy requires a dependable navigation capability not only in a perfect situation with clear GPS signals but also in situations where GPS is unreliable. Therefore, self-contained odometry systems have attracted much attention recently. This paper provides a general and comprehensive overview of the state of the art in the field of self-contained, i.e., GPS-denied, odometry systems and identifies the open challenges that demand further research in the future. Self-contained odometry methods are categorized into five main types, i.e., wheel, inertial, laser, radar, and visual, where the categorization is based on the type of sensor data used for the odometry. Most of the research in the field is focused on analyzing the sensor data exhaustively or partially to extract the vehicle pose. Different combinations and fusions of sensor data, in a tightly or loosely coupled manner and with filtering- or optimization-based fusion methods, have been investigated. We analyze the advantages and weaknesses of each approach in terms of different evaluation metrics, such as performance, response time, energy efficiency, and accuracy, which can serve as a useful guideline for researchers and engineers in the field. Finally, some future research challenges in the field are discussed.
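    As a toy illustration of the loosely coupled, filtering-based fusion the survey refers to, the sketch below fuses two independent 2D pose estimates (e.g., from wheel and visual odometry) by covariance weighting; the function name, the two-source setup, and the example numbers are illustrative assumptions, not taken from the paper.

        import numpy as np

        def fuse_loosely_coupled(x_a, P_a, x_b, P_b):
            """Fuse two independent pose estimates (state vectors with covariances)
            by covariance-weighted averaging, i.e. a one-shot Kalman update that
            treats the second estimate as a direct measurement of the first."""
            S = P_a + P_b                      # innovation covariance
            K = P_a @ np.linalg.inv(S)         # gain weighting towards the more certain source
            x = x_a + K @ (x_b - x_a)          # fused state (angle wrap-around ignored here)
            P = (np.eye(len(x_a)) - K) @ P_a   # fused covariance
            return x, P

        # Example: wheel odometry (tighter covariance) fused with visual odometry (noisier)
        x_wheel = np.array([1.00, 0.00, 0.05]); P_wheel = np.diag([0.04, 0.04, 0.01])
        x_vis   = np.array([0.95, 0.02, 0.03]); P_vis   = np.diag([0.09, 0.09, 0.02])
        x_fused, P_fused = fuse_loosely_coupled(x_wheel, P_wheel, x_vis, P_vis)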

    Comparison of state marginalization techniques in visual inertial navigation filters

    The main focus of this thesis is finding and validating an efficient visual-inertial navigation system (VINS) algorithm for applications in micro aerial vehicles (MAVs). A typical VINS for a MAV consists of a low-cost micro-electro-mechanical system (MEMS) inertial measurement unit (IMU) and a monocular camera, which provides a minimum-payload sensor setup. This setup is highly desirable for the navigation of MAVs because of the tight resource constraints of the platform. However, the bias and noise of low-cost IMUs demand sufficiently accurate VINS algorithms. Accurate VINS algorithms have been developed over the past decade, but they demand higher computational resources. Therefore, resource-limited MAVs demand computationally efficient VINS algorithms. This thesis considers the following computational cost elements of a VINS algorithm: the feature-tracking front-end, the state marginalization technique, and the complexity of the algorithm formulation. Three state-of-the-art feature-tracking front-ends (the VINS-Mono front-end, the MSCKF-Mono feature tracker, and a Matlab-based feature tracker) were compared in terms of accuracy. Four state-of-the-art state marginalization techniques (MSCKF-Generic marginalization, MSCKF-Mono marginalization, MSCKF two-way marginalization, and two-keyframe-based epipolar constraint marginalization) were compared in terms of accuracy and efficiency. The complexity of the VINS algorithm formulation was also compared using the filter execution time. The study then presents a comparative analysis of the algorithms using publicly available MAV benchmark datasets. Based on the results, an efficient VINS algorithm is proposed which is suitable for MAVs.
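    For context, marginalizing old states out of a Gaussian estimate is commonly done with a Schur complement in information form; the sketch below shows that generic operation and is an illustration only, not any of the specific MSCKF variants compared in the thesis.

        import numpy as np

        def marginalize_information(Lambda, eta, keep_idx, drop_idx):
            """Marginalize the states at drop_idx out of a Gaussian in information form
            (Lambda = P^{-1}, eta = Lambda @ x) via the Schur complement, as typically
            done when old poses are dropped from a sliding-window filter."""
            kk = np.ix_(keep_idx, keep_idx)
            kd = np.ix_(keep_idx, drop_idx)
            dd = np.ix_(drop_idx, drop_idx)
            dk = np.ix_(drop_idx, keep_idx)
            Ldd_inv = np.linalg.inv(Lambda[dd])
            Lambda_marg = Lambda[kk] - Lambda[kd] @ Ldd_inv @ Lambda[dk]
            eta_marg = eta[np.asarray(keep_idx)] - Lambda[kd] @ Ldd_inv @ eta[np.asarray(drop_idx)]
            return Lambda_marg, eta_marg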

    Visual-Inertial first responder localisation in large-scale indoor training environments.

    Accurately and reliably determining the position and heading of first responders undertaking training exercises can provide valuable insights into their situational awareness and give a larger context to the decisions made. Measuring first responder movement, however, requires an accurate and portable localisation system. Training exercises often take place in large-scale indoor environments with limited power infrastructure to support localisation. Indoor positioning technologies that use radio or sound waves for localisation require an extensive network of transmitters or receivers to be installed within the environment to ensure reliable coverage. These technologies also need power sources to operate, making their use impractical for this application. Inertial sensors are infrastructure-independent, low-cost, and low-power positioning devices which are attached to the person or object being tracked, but their localisation accuracy deteriorates over long-term tracking due to intrinsic biases and sensor noise. This thesis investigates how inertial sensor tracking can be improved by providing corrections from a visual sensor that uses passive infrastructure (fiducial markers) to calculate accurate position and heading values. Even though using a visual sensor increases the accuracy of the localisation system, combining the two sensor types is not trivial, especially when they are mounted on different parts of the human body and go through different motion dynamics. Additionally, visual sensors have higher energy consumption, requiring more batteries to be carried by the first responder. This thesis presents a novel sensor fusion approach that loosely couples visual and inertial sensors to create a positioning system that accurately localises walking humans in large-scale indoor environments. Experimental evaluation of the devised localisation system indicates sub-metre accuracy for a 250 m long indoor trajectory. The thesis also proposes two methods to improve the energy efficiency of the localisation system. The first is a distance-based error correction approach which uses distance estimation from the foot-mounted inertial sensor to reduce the number of corrections required from the visual sensor. Results indicate a 70% decrease in energy consumption while maintaining sub-metre localisation accuracy. The second method is a motion-type-adaptive error correction approach, which uses the human walking motion type (forward, backward, or sideways) as an input to further optimise the energy efficiency of the localisation system by modulating the operation of the visual sensor. Results of this approach indicate a 25% reduction in the number of corrections required to keep sub-metre localisation accuracy. Overall, this thesis advances the state of the art by providing a sensor fusion solution for long-term sub-metre accurate localisation and methods to reduce its energy consumption, making it more practical for use in first responder training exercises.
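    One plausible reading of the distance-based error correction approach is a scheduler that requests a visual fix only after the foot-mounted inertial sensor has accumulated a threshold distance since the last correction; the class, method names, and the 10 m threshold below are assumptions for illustration, not the thesis's implementation.

        # Minimal sketch of a distance-gated correction scheduler, assuming the visual
        # sensor can be duty-cycled and the foot-mounted INS reports per-step distances.
        class DistanceGatedCorrector:
            def __init__(self, correction_distance_m=10.0):
                self.correction_distance_m = correction_distance_m  # illustrative threshold
                self.distance_since_fix = 0.0

            def on_step(self, step_length_m):
                """Accumulate INS-estimated travelled distance; return True when a
                visual position/heading fix should be requested."""
                self.distance_since_fix += step_length_m
                return self.distance_since_fix >= self.correction_distance_m

            def on_visual_fix(self):
                """Reset the accumulated distance after the visual sensor has corrected
                the inertial estimate."""
                self.distance_since_fix = 0.0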

    Ein Spezialisiertes für Inertialsensor Gestütztes Stereo SLAM

    This thesis aims at the design, implementation and analysis of a method for Simultaneous Localization And Mapping (SLAM). For that purpose a hardware and software system is developed and analyzed that makes use of a stereo camera pair and an inertial measurement unit. This enables the system to perceive its environment and its ego-motion, respectively. This information is used to build up a sparse environment map (mapping) and to estimate the travelled trajectory (localization). Moreover, the relative pose between the camera and the IMU (camera-to-IMU calibration) is estimated. The main focus is on a novel state parameterization that improves the consistency of the estimation, i.e., the assessment of the estimation errors is more reliable than with other representations.

    This work aims at the development, implementation and analysis of a method for simultaneous localization and mapping (SLAM). For this purpose, a hardware and software system is developed and analyzed that perceives the environment and the ego-motion by means of two cameras (stereo system) and an inertial sensor (IMU). This information is used to create landmarks that form a map of the environment. Simultaneously, the position and orientation relative to this map are determined. The core innovation of this work is the development of a new parameterization for Kalman-filter-based SLAM algorithms. It is specialized for use with stereo camera systems and particularly well suited to the use of inertial sensors. In addition, the calibration of the cameras with respect to the attached IMU is integrated into the SLAM method. For validation, the models used are analyzed in depth and compared with state-of-the-art methods. Furthermore, a detailed description of the models used and of the software implementation is given.
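    To make the idea of estimating the camera-to-IMU calibration inside the filter concrete, the sketch below shows a generic stereo visual-inertial filter state that carries the extrinsics as estimated quantities; this layout and all field names are illustrative assumptions, not the specialized parameterization developed in the thesis.

        from dataclasses import dataclass, field
        import numpy as np

        @dataclass
        class VioState:
            """Generic stereo visual-inertial filter state (illustration only)."""
            p_wi: np.ndarray = field(default_factory=lambda: np.zeros(3))                    # IMU position in world frame
            q_wi: np.ndarray = field(default_factory=lambda: np.array([1.0, 0.0, 0.0, 0.0])) # IMU orientation (unit quaternion)
            v_wi: np.ndarray = field(default_factory=lambda: np.zeros(3))                    # IMU velocity in world frame
            b_g:  np.ndarray = field(default_factory=lambda: np.zeros(3))                    # gyroscope bias
            b_a:  np.ndarray = field(default_factory=lambda: np.zeros(3))                    # accelerometer bias
            p_ic: np.ndarray = field(default_factory=lambda: np.zeros(3))                    # camera-to-IMU translation (estimated online)
            q_ic: np.ndarray = field(default_factory=lambda: np.array([1.0, 0.0, 0.0, 0.0])) # camera-to-IMU rotation (estimated online)
            landmarks: dict = field(default_factory=dict)                                    # sparse map: landmark id -> 3D point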

    A stochastically stable solution to the problem of robocentric mapping

    This paper provides a novel solution for robocentric mapping using an autonomous mobile robot. The robot dynamic model is the standard unicycle model, and the robot is assumed to measure both the range and the relative bearing to the landmarks. The algorithm introduced in this paper relies on a coordinate transformation and an extended-Kalman-filter-like algorithm. The coordinate transformation considered in this paper has not previously been used for robocentric mapping applications. Moreover, we provide a rigorous stochastic stability analysis of the filter employed and examine the conditions under which the mean-square estimation error converges to a steady-state value.
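    For reference, the standard unicycle motion model and a range/bearing measurement to a landmark, as named in the abstract, are sketched below; the paper's specific robocentric coordinate transformation is not reproduced here, and the function names are illustrative.

        import numpy as np

        def unicycle_step(x, y, theta, v, omega, dt):
            """Propagate the standard unicycle model one time step with forward
            velocity v and turn rate omega."""
            return (x + v * np.cos(theta) * dt,
                    y + v * np.sin(theta) * dt,
                    theta + omega * dt)

        def range_bearing(x, y, theta, lx, ly):
            """Range and relative bearing from the robot pose (x, y, theta) to a
            landmark at (lx, ly)."""
            dx, dy = lx - x, ly - y
            rng = np.hypot(dx, dy)
            bearing = np.arctan2(dy, dx) - theta               # bearing relative to heading
            bearing = (bearing + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
            return rng, bearing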