
    Nonlinear State and Parameter Estimation Using Iterated Sigma Point Kalman Filter: Comparative Studies

    In this chapter, iterated sigma-point Kalman filter (ISPKF) methods are used for nonlinear state variable and model parameter estimation. Several conventional state estimation methods, namely the unscented Kalman filter (UKF), the central difference Kalman filter (CDKF), the square-root unscented Kalman filter (SRUKF), the square-root central difference Kalman filter (SRCDKF), the iterated unscented Kalman filter (IUKF), the iterated central difference Kalman filter (ICDKF), the iterated square-root unscented Kalman filter (ISRUKF) and the iterated square-root central difference Kalman filter (ISRCDKF), are evaluated through a simulation example in two comparative studies in terms of estimation accuracy, estimation error and convergence. In the first comparative study, the state variables are estimated from noisy measurements with the various estimation methods. In the second comparative study, both states and parameters are estimated, and the methods are compared by computing the estimation root mean square error (RMSE) against the noise-free data. The impact of practical challenges (measurement noise and the number of estimated states and parameters) on the performance of the estimation techniques is investigated. The results of both studies show that the ISRCDKF method provides better estimation accuracy than the IUKF, ICDKF and ISRUKF, and that these iterated methods in turn provide better accuracy than the UKF, CDKF, SRUKF and SRCDKF techniques. The ISRCDKF achieves its advantage by iterating the maximum a posteriori estimate around the updated state, re-linearizing the measurement equation instead of relying on the predicted state. The results also show that estimating more parameters degrades the estimation accuracy and the convergence of the estimated parameters and states. Even with abrupt changes in the estimated states, the ISRCDKF provides better state estimation accuracy than the other techniques.
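
The re-linearization idea the abstract credits for the ISRCDKF's advantage can be illustrated with a minimal iterated measurement update of the extended-Kalman-filter type. The sigma-point and square-root variants studied in the chapter replace the explicit Jacobian with sigma-point statistics and propagate Cholesky factors, so this sketch (with hypothetical `h` and `H_jac` callables) only shows the common iteration structure, not the chapter's algorithms.

```python
import numpy as np

def iterated_measurement_update(x_pred, P_pred, z, h, H_jac, R, n_iter=5, tol=1e-6):
    """Illustrative iterated (IEKF-style) measurement update.

    Re-linearizes the measurement function around the current iterate
    rather than the predicted state. h and H_jac are hypothetical
    callables for the measurement model and its Jacobian.
    """
    x_i = x_pred.copy()
    for _ in range(n_iter):
        H = H_jac(x_i)                          # Jacobian at the current iterate
        S = H @ P_pred @ H.T + R                # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)     # gain from the latest linearization
        # Gauss-Newton step on the maximum a posteriori objective
        x_new = x_pred + K @ (z - h(x_i) - H @ (x_pred - x_i))
        if np.linalg.norm(x_new - x_i) < tol:
            x_i = x_new
            break
        x_i = x_new
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_i, P_upd
```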

    Inertial navigation aided by simultaneous localization and mapping

    Unmanned aerial vehicle technologies are getting smaller and cheaper to use, and the payload limitations of unmanned aerial vehicles are being overcome. Integrated navigation system design requires selecting a set of sensors and computation power that provides reliable and accurate navigation parameters (position, velocity and attitude) with high update rates and bandwidth in a small and cost-effective package. Many of today's operational unmanned aerial vehicle navigation systems rely on inertial sensors as the primary measurement source. Inertial Navigation alone, however, suffers from slow divergence with time. This divergence is often compensated for by employing some additional source of navigation information external to Inertial Navigation. From the 1990s to the present day, the Global Positioning System has been the dominant navigation aid for Inertial Navigation. In a number of scenarios, however, Global Positioning System measurements may be completely unavailable, or they simply may not be precise (or reliable) enough to adequately update the Inertial Navigation, so alternative methods have received great attention. Aiding Inertial Navigation with vision sensors has been the favoured solution over the past several years. Inertial and vision sensors, with their complementary characteristics, have the potential to meet the requirements for reliable and accurate navigation parameters. In this thesis we address Inertial Navigation position divergence. The information for updating the position comes from a combination of vision and motion. When using such a combination, many of the difficulties of vision sensors (relative depth, geometry and size of objects, image blur, etc.) can be circumvented. Motion provides the vision sensors with many cues that help to acquire information about the environment, for instance creating a precise map of the environment and localizing within it. We propose changes to the Simultaneous Localization and Mapping augmented state vector in order to take repeated measurements of a map point. We show that these repeated measurements, combined with certain manoeuvres (motion) around or past the map point, are crucial for constraining the Inertial Navigation position divergence (bounded estimation error) while manoeuvring in the vicinity of the map point. This eliminates some of the uncertainty of the map point estimates, i.e. it reduces the covariance of the map point estimates. This concept leads to a different parameterization (feature initialisation) of the map points in Simultaneous Localization and Mapping, and we refer to it as the concept of aiding Inertial Navigation by Simultaneous Localization and Mapping. We show that building such an integrated navigation system requires coordination with the guidance and control measurements and with the vehicle task itself in order to perform the required vehicle manoeuvres (motion) and achieve better navigation accuracy. This fact brings new challenges to the practical design of these modern, jam-proof, Global Positioning System-free autonomous navigation systems. Building on the concept of aiding Inertial Navigation by Simultaneous Localization and Mapping, we have investigated how a bearing-only sensor such as a single camera can be used for aiding Inertial Navigation, reusing the results of the Inertial Navigation aided by Simultaneous Localization and Mapping concept. A new parameterization of the map point in Bearing Only Simultaneous Localization and Mapping is proposed.
Because of the significant problems that appear when implementing the Extended Kalman Filter in Inertial Navigation aided by Bearing Only Simultaneous Localization and Mapping, other algorithms such as the Iterated Extended Kalman Filter, the Unscented Kalman Filter and Particle Filters were implemented. From the results obtained, the conclusion can be drawn that nonlinear filters should be the estimators of choice for this application.
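
As a rough illustration of the kind of state-vector change the thesis describes, the sketch below augments a generic EKF-SLAM state with a newly initialized map point so that later, repeated observations of that point can constrain the inertial position estimate. The initialisation function `g` and its Jacobians `G_x`, `G_z` are hypothetical placeholders, not the parameterization proposed in the thesis.

```python
import numpy as np

def augment_with_map_point(x, P, g, G_x, G_z, z, R):
    """Illustrative EKF-SLAM state augmentation with one new map point.

    g(x, z) initializes the map point from the current vehicle state and
    measurement; G_x and G_z are its Jacobians with respect to the state
    and the measurement (all hypothetical names).
    """
    m = g(x, z)                         # new map point estimate
    Gx = G_x(x, z)
    Gz = G_z(x, z)
    P_mm = Gx @ P @ Gx.T + Gz @ R @ Gz.T   # map point covariance
    P_xm = P @ Gx.T                        # vehicle/map-point cross-covariance
    x_aug = np.concatenate([x, m])
    P_aug = np.block([[P, P_xm],
                      [P_xm.T, P_mm]])
    return x_aug, P_aug
```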

    Iterated Filters for Nonlinear Transition Models

    A new class of iterated linearization-based nonlinear filters, dubbed dynamically iterated filters, is presented. Contrary to regular iterated filters such as the iterated extended Kalman filter (IEKF), iterated unscented Kalman filter (IUKF) and iterated posterior linearization filter (IPLF), dynamically iterated filters also take nonlinearities in the transition model into account. The general filtering algorithm is shown to essentially be a (locally, over one time step) iterated Rauch-Tung-Striebel smoother. Three distinct versions of the dynamically iterated filters are investigated in particular: analogues to the IEKF, IUKF and IPLF. The developed algorithms are evaluated on 25 different noise configurations of a tracking problem with a nonlinear transition model and a linear measurement model, a scenario where conventional iterated filters are not useful. Even in this "simple" scenario, the dynamically iterated filters are shown to have superior root mean-squared error performance compared with their respective baselines, the EKF and UKF. In particular, even though the EKF diverges in 22 out of 25 configurations, the dynamically iterated EKF remains stable in 20 out of 25 scenarios, only diverging under high noise. Comment: 8 pages. Accepted to IEEE International Conference on Information Fusion 2023 (FUSION 2023). Copyright 2023 IEEE.
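
The "locally iterated Rauch-Tung-Striebel smoother" interpretation can be sketched roughly as follows. This is a simplified reading of the abstract only; the exact linearization points and stopping rules follow the paper and may differ from this version, which merely conveys how the transition model is re-linearized inside the iteration.

```python
import numpy as np

def dynamically_iterated_ekf_step(x_prev, P_prev, z, f, F_jac, Q, h, H_jac, R, n_iter=3):
    """Rough sketch of one time step of a dynamically iterated EKF.

    Each iteration re-linearizes the transition model f at the locally
    smoothed estimate of x_{k-1}, runs an EKF predict/update, and then
    performs a one-step Rauch-Tung-Striebel smoothing pass to refine the
    linearization point for the next iteration.
    """
    xs, Ps = x_prev.copy(), P_prev.copy()          # smoothed iterate at k-1
    for _ in range(n_iter):
        F = F_jac(xs)                              # re-linearize transition at smoothed iterate
        x_pred = f(xs) + F @ (x_prev - xs)         # affine prediction through the prior mean
        P_pred = F @ P_prev @ F.T + Q

        H = H_jac(x_pred)                          # linearize measurement at the prediction
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_upd = x_pred + K @ (z - h(x_pred))
        P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred

        G = P_prev @ F.T @ np.linalg.inv(P_pred)   # one-step RTS smoothing gain
        xs = x_prev + G @ (x_upd - x_pred)         # refined linearization point at k-1
        Ps = P_prev + G @ (P_upd - P_pred) @ G.T
    return x_upd, P_upd, xs, Ps
```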

    Simultaneous localisation and mapping: A stereo vision based approach

    With their limited dynamic range and poor noise performance, cameras still pose considerable challenges when used as range sensors for robotic navigation, especially in the implementation of Simultaneous Localisation and Mapping (SLAM) with sparse features. This paper presents a combination of methods for solving the SLAM problem in a constricted indoor environment using small-baseline stereo vision. The main contributions include a feature selection and tracking algorithm, a stereo noise filter, a robust feature validation algorithm and a multiple-hypothesis adaptive window positioning method for 'closing the loop'. These methods take a novel approach in that information from the image processing and robotic navigation domains is used in tandem, each augmenting the other. Experimental results, including a real-time implementation in an office-like environment, are also presented. © 2006 IEEE
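
For context on why small-baseline stereo is a noisy range sensor, here is a generic rectified-pinhole triangulation sketch (parameter names are illustrative, not taken from the paper): depth varies inversely with disparity, so a fixed pixel error produces a range error that grows roughly quadratically with distance.

```python
import numpy as np

def stereo_point(u_l, u_r, v, f_px, baseline, cx, cy):
    """Minimal triangulation for an assumed rectified stereo pair.

    Returns the 3D point in the left camera frame and a first-order
    depth uncertainty for a one-pixel disparity error.
    """
    d = u_l - u_r                      # disparity in pixels
    Z = f_px * baseline / d            # depth from disparity
    X = (u_l - cx) * Z / f_px
    Y = (v - cy) * Z / f_px
    sigma_Z = f_px * baseline / d**2   # |dZ/dd| for a 1-pixel disparity error
    return np.array([X, Y, Z]), sigma_Z
```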

    Computationally-efficient visual inertial odometry for autonomous vehicle

    This thesis presents the design, implementation, and validation of a novel nonlinear-filtering-based Visual Inertial Odometry (VIO) framework for robotic navigation in GPS-denied environments. The system tracks the vehicle's ego-motion at each time instant while capturing the benefits of both the camera information and the Inertial Measurement Unit (IMU). VIO demands considerable computational resources and processing time, which makes hardware implementation quite challenging for micro- and nano-robotic systems. In many cases, the VIO process selects a small subset of tracked features to reduce the computational cost. VIO estimation also suffers from the inevitable accumulation of error. This limitation makes the estimate gradually diverge and even fail to track the vehicle trajectory over long-term operation. Deploying optimization over the entire trajectory helps to minimize the accumulated error, but increases the computational cost significantly. A VIO hardware implementation can utilize a more powerful processor and specialized hardware computing platforms, such as Field Programmable Gate Arrays, Graphics Processing Units and Application-Specific Integrated Circuits, to accelerate the execution. However, the computation still has to perform identical steps of similar complexity, processing data at a higher frequency increases energy consumption significantly, and the development of advanced hardware systems is expensive and time-consuming. Consequently, developing an efficient algorithm is beneficial with or without hardware acceleration. The research described in this thesis proposes multiple solutions to accelerate the visual inertial odometry computation while maintaining estimation accuracy over long-term operation comparable to state-of-the-art algorithms. This research has resulted in three significant contributions. First, this research involved the design and validation of a novel nonlinear-filtering sensor-fusion algorithm using trifocal tensor geometry and a cubature Kalman filter. The combination handles the system nonlinearity effectively, while reducing the computational cost and system complexity significantly. Second, this research develops two solutions to address the error accumulation issue. For standalone self-localization projects, the first solution applies a local optimization procedure for the measurement update, which performs multiple corrections on a single measurement to optimize the latest filter state and covariance. For larger navigation projects, the second solution integrates VIO with additional pseudo-ranging measurements between the vehicle and multiple beacons in order to bound the accumulated errors. Third, this research develops a novel parallel-processing VIO algorithm to speed up execution on a multi-core CPU, distributing the filtering computation across the cores so that each feature measurement update is processed and optimized independently. The performance of the proposed visual inertial odometry framework is evaluated using publicly available self-localization datasets, for comparison with other open-source algorithms. The results illustrate that the proposed VIO framework is able to improve computational efficiency without the installation of specialized hardware computing platforms or advanced software libraries.
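
The cubature Kalman filter named in the first contribution uses a fixed set of 2n equally weighted cubature points. A minimal sketch of the point generation and time update is shown below, assuming a generic process model `f` and process noise `Q`; it does not reproduce the thesis's trifocal-tensor measurement formulation.

```python
import numpy as np

def cubature_points(x, P):
    """Generate the 2n equally weighted points of the third-degree
    spherical-radial cubature rule used by the cubature Kalman filter."""
    n = len(x)
    S = np.linalg.cholesky(P)                               # square root of the covariance
    unit = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # 2n unit directions, scaled
    return x[:, None] + S @ unit                            # shape (n, 2n)

def ckf_predict(x, P, f, Q):
    """Time update: propagate the cubature points through the process model f."""
    X = cubature_points(x, P)
    Y = np.stack([f(X[:, i]) for i in range(X.shape[1])], axis=1)
    x_pred = Y.mean(axis=1)                                 # equal weights 1/(2n)
    D = Y - x_pred[:, None]
    P_pred = D @ D.T / Y.shape[1] + Q
    return x_pred, P_pred
```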

    A Unified Approach to the Orbital Tracking Problem

    Consider an object in orbit about the Earth for which a sequence of angles-only measurements is made. This paper looks in detail at a one-step update for the filtering problem. Although the problem appears very nonlinear at first sight, it can almost be reduced to the standard linear Kalman filter by a careful formulation. The key features of this formulation are (1) the use of a local or adapted basis, rather than a fixed basis, for three-dimensional Euclidean space, together with the use of structural rather than ambient coordinates to represent the state, (2) the development of a novel "normal:conditional-normal" distribution to describe the propagated position of the state, and (3) the development of a novel "Observation-Centered" Kalman filter to update the state distribution. A major advantage of this unified approach is that it gives a closed-form filter which is highly accurate under a wide range of conditions, including high initial uncertainty, high eccentricity and long propagation times.
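
As a reminder of the raw nonlinearity involved, a generic topocentric angles-only measurement model looks like the sketch below (frame and variable names are assumed for illustration). The paper's contribution is precisely to avoid brute-force linearization of this kind of map through its adapted-basis and observation-centered formulation.

```python
import numpy as np

def angles_only_measurement(r_obj, r_site):
    """Generic angles-only measurement: topocentric right ascension and
    declination of the object, given object and site position vectors
    expressed in a common inertial frame (assumed)."""
    rho = r_obj - r_site                               # line-of-sight vector
    ra = np.arctan2(rho[1], rho[0])                    # right ascension
    dec = np.arcsin(rho[2] / np.linalg.norm(rho))      # declination
    return np.array([ra, dec])
```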

    Vehicle Tracking and Motion Estimation Based on Stereo Vision Sequences

    In this dissertation, a novel approach for estimating the trajectories of road vehicles such as cars, vans, or motorbikes from stereo image sequences is presented. Moving objects are detected and reliably tracked in real time from within a moving car. The resulting information on the pose and motion state of other moving objects with respect to the ego-vehicle is an essential basis for future driver assistance and safety systems, e.g., for collision prediction. The focus of this contribution is on oncoming traffic, while most existing work in the literature addresses tracking of the lead vehicle. The overall approach is generic and scalable to a variety of traffic scenes, including inner-city, country road, and highway scenarios. A considerable part of this thesis addresses oncoming traffic at urban intersections. The parameters to be estimated include the 3D position and orientation of an object relative to the ego-vehicle, as well as the object's shape, dimensions, velocity, acceleration and rotational velocity (yaw rate). The key idea is to derive these parameters, by means of an extended Kalman filter, from a set of tracked 3D points on the object's surface which are registered to a time-consistent object coordinate system. Combining the rigid 3D point-cloud model with the dynamic model of a vehicle is one main contribution of this thesis. Vehicle tracking at intersections requires covering a wide range of object dynamics, since vehicles can turn quickly. Three different approaches for tracking objects during highly dynamic turn maneuvers, up to extreme maneuvers such as skidding, are presented and compared. These approaches allow for online adaptation of the filter parameter values, removing the need for manual parameter tuning that depends on the dynamics of the tracked object in the scene. This is the second main contribution. Further contributions include two initialization methods, robust outlier handling, a probabilistic approach for assigning new points to a tracked object, and a mid-level fusion of the vision-based approach with a radar sensor. The overall system is systematically evaluated both on simulated and real-world data. The experimental results show that the proposed system is able to accurately estimate the object pose and motion parameters in a variety of challenging situations, including night scenes, quick turn maneuvers, and partial occlusions. The limits of the system are also carefully investigated.
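
The vehicle dynamic model that is coupled with the rigid point-cloud model can be thought of along the lines of a textbook constant-turn-rate-and-velocity model. The sketch below is only that generic model; the dissertation's actual state vector additionally carries acceleration, shape and dimensions.

```python
import numpy as np

def ctrv_predict(state, dt):
    """One prediction step of a generic constant-turn-rate-and-velocity
    (CTRV) vehicle motion model with state [x, y, heading, speed, yaw_rate]."""
    x, y, psi, v, w = state
    if abs(w) > 1e-6:
        x += v / w * (np.sin(psi + w * dt) - np.sin(psi))
        y += v / w * (-np.cos(psi + w * dt) + np.cos(psi))
    else:                               # straight-line limit for near-zero yaw rate
        x += v * dt * np.cos(psi)
        y += v * dt * np.sin(psi)
    psi += w * dt
    return np.array([x, y, psi, v, w])
```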

    All Source Sensor Integration Using an Extended Kalman Filter

    The global positioning system (GPS) has become a ubiquitous source for navigation in the modern age, especially since the removal of selective availability at the beginning of this century. The utility of GPS is unmatched; however, GPS is not available in all environments. Heavy reliance on GPS for navigation makes the warfighter increasingly vulnerable as modern warfare continues to evolve. This research provides a method for incorporating measurements from a wide variety of sensors to mitigate GPS dependence. The result is the integration of sensor sets that encompass those examined in recent literature as well as some custom navigation devices. A full-state extended Kalman filter is developed and implemented, accommodating the requirements of the varied sensor sets and scenarios. Some 19 types of sensors are used, in multiple quantities, including inertial measurement units, single cameras and stereo pairs, 2D and 3D laser scanners, altimeters, 3-axis magnetometers, heading sensors, inclinometers, a stop sign sensor, an odometer, a step sensor, a ranging device, a signal-of-opportunity sensor, global navigation satellite system sensors, an air data computer, and radio frequency identification devices. Simulation data for all sensors were generated to test filter performance. Additionally, real data were collected and processed from an aircraft, ground vehicles, and a pedestrian. Measurement equations are developed to relate sensor measurements to the navigation states. Each sensor measurement is incorporated into the filter using the Kalman filter measurement update equations. Measurement types are segregated based on whether they observe instantaneous or accumulated state information. Accumulated-state measurements are incorporated using delayed-state update equations. All other measurements are incorporated using the numerically robust UD update equations.
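
Each sensor type enters the filter through a measurement update of the generic shape sketched below, with its own measurement function `h` and noise covariance `R` (hypothetical callables). The thesis itself uses UD-factorized updates for numerical robustness and delayed-state equations for accumulated measurements, which this simplified Joseph-form sketch does not reproduce.

```python
import numpy as np

def ekf_sensor_update(x, P, z, h, H_jac, R):
    """Generic per-sensor extended Kalman filter measurement update,
    written in the numerically safer Joseph form."""
    H = H_jac(x)
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_upd = x + K @ (z - h(x))
    I_KH = np.eye(len(x)) - K @ H
    P_upd = I_KH @ P @ I_KH.T + K @ R @ K.T    # Joseph form covariance update
    return x_upd, P_upd
```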