The Estimation Methods for an Integrated INS/GPS UXO Geolocation System
This work was supported by a project funded by the US Army Corps of Engineers, Strategic Environmental Research and Development Program, contract number W912HQ-08-C-0044. This report was also submitted to the Graduate School of the Ohio State University in partial fulfillment of the PhD degree in Geodetic Science.

Unexploded ordnance (UXO) refers to explosive weapons such as mines, bombs, bullets, shells, and grenades that failed to explode when they were deployed. In North America, and especially in the US, UXO is largely the result of weapon-system testing and troop training
by the DOD. The traditional UXO detection method employs metal detectors, which measure distortions of the local magnetic field. Based on the detected magnetic signals, holes are dug to remove buried UXO. However, detection and remediation of UXO-contaminated sites using traditional methods are extremely inefficient, because it is difficult to distinguish buried UXO from the noise of geologic magnetic sources or
anthropic clutter items. The discrimination performance of a UXO detection system depends on the employed sensor technology as well as on the data-processing methods that invert the collected data to infer the UXO. The detection systems require very accurate positioning (or geolocation) and orientation of the detection units to discriminate candidate UXO from non-hazardous clutter, because the inversion of magnetic or EMI data relies on precise relative locations, orientations, and depths. The position-accuracy requirements for MEC geolocation and characterization using typical state-of-the-art detection instrumentation are classified into three levels: the screening level, with a position tolerance of 0.5 m (as a standard deviation); the area-mapping level (less than 0.05 m); and the characterize-and-discriminate level (less than 0.02 m).
The primary geolocation system considered is a dual-frequency GPS receiver integrated with a three-dimensional inertial measurement unit (IMU): an INS/GPS system. Selecting the appropriate estimation method is the key problem in obtaining highly precise INS/GPS geolocation for UXO detection in dynamic environments. For this purpose, the Extended Kalman Filter (EKF) has conventionally been used for the optimal integration of the INS/GPS system. However, newly introduced nonlinear filters can deal with the nonlinear nature of the positioning dynamics as well as the non-Gaussian statistics of the instrument errors, and such nonlinear estimation methods (filtering/smoothing) have been developed and proposed. This study therefore focused on optimal estimation methods for highly precise INS/GPS geolocation, using simulations and analyses of two laboratory tests (a cart-based and a handheld geolocation system).
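The nonlinear filters discussed here replace the EKF's Jacobian-based linearization with deterministic sampling. As a minimal illustration (a generic textbook sketch, not the author's implementation), the unscented transform at the core of the UKF can be written in a few lines; the scaling parameters are common defaults, not values taken from this work:

```python
import numpy as np

def sigma_points(mean, cov, alpha=1.0, beta=2.0, kappa=1.0):
    """Generate the 2n+1 sigma points and weights of the unscented transform.
    alpha/beta/kappa are common textbook defaults, not values from this work."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)      # matrix square root of the scaled covariance
    pts = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return pts, wm, wc

def unscented_transform(f, mean, cov):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f
    without linearizing it, as the UKF does at each prediction/update step."""
    pts, wm, wc = sigma_points(mean, cov)
    ys = np.array([f(p) for p in pts])
    y_mean = wm @ ys
    d = ys - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov
```

For a linear function the transform reproduces the exact propagated mean and covariance, which makes it easy to sanity-check before applying it to a nonlinear INS mechanization.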
First, the nonlinear filters (UKF and UPF) were shown to yield performance superior to the EKF in various simulation tests designed to resemble the UXO geolocation environment (highly dynamic, small area). The UKF yields a 50% improvement in position accuracy over the EKF, particularly in the curved sections (medium-grade IMU case). The UPF also performed significantly better than the EKF and shows a comparable improvement over the UKF whether the IMU noise probability density function is symmetric or non-symmetric. Also, since a UXO detection survey does not require real-time operation, each of the developed filters was modified to accommodate the standard Rauch-Tung-Striebel (RTS) smoothing algorithm. The smoothing methods were applied to a typical UXO detection trajectory; the position error was reduced significantly using a minimal number of control points. Finally, these simulation tests confirmed that tactical-grade IMUs (e.g. HG1700 or HG1900) are required to bridge gaps in a high-accuracy ranging solution longer than 1 second.
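Because post-processing is acceptable here, each filter's forward pass can be followed by a backward smoothing sweep. A minimal sketch of the standard RTS backward recursion for a linear-Gaussian model (illustrative only; the work applies RTS-style smoothing to the nonlinear filters above):

```python
import numpy as np

def rts_smoother(means, covs, F, Q):
    """Rauch-Tung-Striebel backward pass over filtered estimates.
    means[k], covs[k] are the Kalman-filter posterior at step k;
    F, Q are an assumed constant transition model and process noise."""
    n = len(means)
    sm = [None] * n
    sP = [None] * n
    sm[-1], sP[-1] = means[-1], covs[-1]        # last smoothed = last filtered
    for k in range(n - 2, -1, -1):
        m_pred = F @ means[k]                   # one-step prediction
        P_pred = F @ covs[k] @ F.T + Q
        G = covs[k] @ F.T @ np.linalg.inv(P_pred)   # smoother gain
        sm[k] = means[k] + G @ (sm[k + 1] - m_pred)
        sP[k] = covs[k] + G @ (sP[k + 1] - P_pred) @ G.T
    return sm, sP
```

The backward pass can only shrink the covariance relative to the filtered estimate, which is why smoothing between sparse control points is effective in this setting.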
Second, the results of the simulation tests were validated through laboratory tests using navigation-grade and medium-grade IMUs. To overcome inaccurate a priori knowledge of the system's process noise, adaptive filtering methods were applied to the EKF and UKF; these are called the AEKS and AUKS. Neural-network-aided adaptive nonlinear filtering/smoothing methods (NN-AEKS and NN-AUKS), augmented with the RTS smoothing method, were compared with the AEKS and AUKS. Each neural-network-aided adaptive filter/smoother improved the position accuracy in both straight and curved sections. The navigation-grade IMU (H764G) can achieve the area-mapping level of accuracy when the gap between control points is about 8 seconds. The medium-grade IMUs (HG1700 and HG1900) with the NN-AUKS can maintain a position error of less than 10 cm under the same conditions. Also, the neural-network aiding decreases the difference in position error between the straight and curved sections.

Third, in the earlier simulation tests the UPF performed better than the other filters. However, since the UPF needs a large number of samples to represent the a posteriori statistics in a high-dimensional space, the Rao-Blackwellized particle filter (RBPF) can be used as an alternative to avoid this inefficiency. The RBPF was tailored to precise geolocation for UXO detection using the IMU/GPS system and yielded improved estimation results with a small number of samples. The handheld geolocation system using the HG1900 with a nonlinear-filter-based smoother can achieve the discrimination level of accuracy if the update rate of control points is less than 0.5 Hz and 1 Hz for the sweep and swing, respectively. Also, the sweep operation is preferred over the swing motion because the position accuracy of the sweep test was better than that of the swing test.
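Adaptive filters of the kind described above typically tune noise covariances from the innovation (measurement residual) sequence. A hedged sketch of one common innovation-based estimator for the measurement noise (a generic textbook variant; the function name and windowing scheme are illustrative, not necessarily the scheme used in this work):

```python
import numpy as np

def innovation_covariance(innovations, window=20):
    """Sample covariance of the most recent `window` innovations."""
    d = np.asarray(innovations[-window:])
    return (d.T @ d) / len(d)

def adapt_R(innovations, H, P_pred, window=20):
    """Innovation-based estimate of the measurement-noise covariance:
    since Cov(innovation) = H P_pred H^T + R, subtract the predicted part
    from the sample innovation covariance to recover an estimate of R."""
    C = innovation_covariance(innovations, window)
    return C - H @ P_pred @ H.T
```

The same idea, applied to the state-correction sequence instead, yields adaptive process-noise estimates; either way the filter is tuned online rather than from an inaccurate a priori model.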
Adaptive Localization and Mapping for Planetary Rovers
Future rovers will be equipped with substantial onboard autonomy as space agencies and industry proceed with mission studies and technology development in preparation for the next planetary exploration missions. Simultaneous Localization and Mapping (SLAM) is a fundamental part of autonomous capabilities and has close connections to robot perception, planning and control. SLAM positively affects rover operations and mission success. The SLAM community has made great progress in the last decade by enabling real-world solutions in terrestrial applications and is nowadays addressing important challenges in robust performance, scalability, high-level understanding, resource awareness and domain adaptation. In this thesis, an adaptive SLAM system is proposed in order to improve rover navigation performance and adapt to navigation demands. This research presents a novel localization and mapping solution following a bottom-up approach. It starts with an Attitude and Heading Reference System (AHRS), continues with a 3D odometry dead-reckoning solution and builds up to a full graph optimization scheme which uses visual odometry and takes into account rover traction performance, bringing scalability to modern SLAM solutions. A design procedure is presented in order to incorporate inertial sensors into the AHRS. The procedure follows three steps: error characterization, model derivation and filter design. A complete kinematics model of the rover locomotion subsystem is developed in order to improve the wheel odometry solution. The parametric model then predicts delta poses by solving a system of equations with weighted least squares. In addition, an odometry error model is learned using Gaussian processes (GPs) in order to predict non-systematic errors induced by poor traction of the rover on the terrain. The odometry error model complements the parametric solution by adding an estimation of the error.
The gained information serves to adapt the localization and mapping solution to the current navigation demands (domain adaptation). The adaptivity strategy is designed to adjust the visual odometry computational load (active perception) and to influence the optimization back-end by including highly informative keyframes in the graph (adaptive information gain). Following this strategy, the solution is adapted to the navigation demands, providing an adaptive SLAM system driven by the navigation performance and the conditions of interaction with the terrain. The proposed methodology is experimentally verified on a representative planetary rover under realistic field-test scenarios. This thesis introduces a modern SLAM system which adapts the estimated pose and map to the predicted error. The system maintains accuracy with fewer nodes, taking the best of both wheel and visual methods in a consistent graph-based smoothing approach.
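The weighted-least-squares step behind the parametric delta-pose prediction can be illustrated generically. This is a sketch of the numerical technique only; the matrix entries below are made up, not the rover's actual kinematic constraint equations:

```python
import numpy as np

def weighted_least_squares(A, b, w):
    """Solve min_x  sum_i w_i * (A_i x - b_i)^2 via the normal equations.
    In a wheel-odometry setting, each row of A would encode one kinematic
    constraint and w would down-weight low-confidence wheel contacts."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```

Down-weighting (rather than discarding) suspect measurements is what lets a slip-aware error model, like the GP model described above, feed directly into the pose solution.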
Inertial navigation aided by simultaneous localization and mapping
Unmanned aerial vehicle technologies are getting smaller and cheaper to use, and the challenges of payload limitation in unmanned aerial vehicles are being overcome. Integrated navigation system design requires selection of a set of sensors and computation power that provides reliable and accurate navigation parameters (position, velocity and attitude) with high update rates and bandwidth in a small and cost-effective manner. Many of today's operational unmanned aerial vehicle navigation systems rely on inertial sensors as a primary measurement source. Inertial Navigation alone, however, suffers from slow divergence with time. This divergence is often compensated for by employing some additional source of navigation information external to Inertial Navigation. From the 1990s to the present day, the Global Positioning System has been the dominant navigation aid for Inertial Navigation. In a number of scenarios, Global Positioning System measurements may be completely unavailable, or they simply may not be precise (or reliable) enough to adequately update the Inertial Navigation; hence alternative methods have seen great attention. Aiding Inertial Navigation with vision sensors has been the favoured solution over the past several years. Inertial and vision sensors, with their complementary characteristics, have the potential to meet the requirements for reliable and accurate navigation parameters.
In this thesis we address Inertial Navigation position divergence. The information for updating the position comes from a combination of vision and motion. When using such a combination, many of the difficulties of the vision sensors (relative depth, geometry and size of objects, image blur, etc.) can be circumvented. Motion grants the vision sensors many cues that help to better acquire information about the environment, for instance creating a precise map of the environment and localizing within it.
We propose changes to the Simultaneous Localization and Mapping augmented state vector in order to take repeated measurements of the map point. We show that these repeated measurements, with certain manoeuvres (motion) around or by the map point, are crucial for constraining the Inertial Navigation position divergence (bounded estimation error) while manoeuvring in the vicinity of the map point. This eliminates some of the uncertainty of the map-point estimates, i.e. it reduces the covariance of the map-point estimates. This concept brings a different parameterization (feature initialisation) of the map points in Simultaneous Localization and Mapping, and we refer to it as the concept of aiding Inertial Navigation by Simultaneous Localization and Mapping.
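The augmented-state idea can be illustrated with the generic EKF-SLAM step that appends a newly initialised map point to the state and correlates it with the vehicle estimate. This is a standard textbook construction sketched under the assumption of an EKF implementation; the thesis's specific parameterization differs:

```python
import numpy as np

def augment_state(x, P, g, Gx, R):
    """Append a newly initialised map point to the SLAM state.
    x, P : current state estimate and covariance
    g    : initial map-point estimate (computed from x and the measurement)
    Gx   : Jacobian of the initialisation function w.r.t. x
    R    : covariance contribution of the measurement noise
    Returns the enlarged state and covariance, including the cross terms
    that let later observations of the point correct the vehicle estimate."""
    x_new = np.concatenate([x, g])
    top = np.hstack([P, P @ Gx.T])
    bottom = np.hstack([Gx @ P, Gx @ P @ Gx.T + R])
    P_new = np.vstack([top, bottom])
    return x_new, P_new
```

It is precisely the off-diagonal blocks `P Gx^T` that allow repeated measurements of the same point to reduce both the map-point covariance and the inertial position error.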
We show that making such an integrated navigation system requires coordination with the guidance and control measurements and the vehicle task itself in order to perform the required vehicle manoeuvres (motion) and achieve better navigation accuracy. This fact brings new challenges to the practical design of these modern jam-proof, Global Positioning System-free autonomous navigation systems.
Further to the concept of aiding Inertial Navigation by Simultaneous Localization and Mapping, we have investigated how a bearing-only sensor such as a single camera can be used for aiding Inertial Navigation. The results of the concept of Inertial Navigation aided by Simultaneous Localization and Mapping were used. A new parameterization of the map point in Bearing-Only Simultaneous Localization and Mapping is proposed. Because of the number of significant problems that appear when implementing the Extended Kalman Filter in Inertial Navigation aided by Bearing-Only Simultaneous Localization and Mapping, other algorithms such as the Iterated Extended Kalman Filter, the Unscented Kalman Filter and Particle Filters were implemented. From the results obtained, the conclusion can be drawn that nonlinear filters should be the estimators of choice for this application.
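For context, the bearing-only measurement model that makes the plain EKF problematic (it is strongly nonlinear in the map-point position) can be sketched in 2D, together with the Jacobian that a Kalman-type update linearizes. This is a generic illustration, not the thesis's map-point parameterization:

```python
import numpy as np

def bearing_measurement(x_vehicle, m):
    """Predicted bearing (rad) from a 2D vehicle position to a map point m."""
    dx, dy = m - x_vehicle
    return np.arctan2(dy, dx)

def bearing_jacobian_map(x_vehicle, m):
    """Jacobian of the bearing w.r.t. the map point, as used by EKF/IEKF
    updates; note it blows up as the range to the point shrinks."""
    dx, dy = m - x_vehicle
    r2 = dx**2 + dy**2
    return np.array([-dy / r2, dx / r2])
```

The 1/range behaviour of this Jacobian, combined with unknown depth at initialisation, is a typical reason iterated, unscented and particle-based estimators outperform the single-linearization EKF here.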
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists in the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
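The "de-facto standard formulation" referred to above is maximum-a-posteriori estimation over a factor graph, solved with iterative least squares. A deliberately minimal 1D sketch of the resulting Gauss-Newton normal equations (an illustrative toy, not a production SLAM back-end):

```python
import numpy as np

def optimize_pose_graph_1d(poses, edges, iters=10):
    """Gauss-Newton over a 1D pose graph. Each edge (i, j, z, w) encodes a
    relative measurement z ~ x_j - x_i with information (weight) w.
    The first pose is softly anchored to fix the gauge freedom."""
    x = np.array(poses, float)
    n = len(x)
    for _ in range(iters):
        H = np.zeros((n, n))
        b = np.zeros(n)
        for i, j, z, w in edges:
            r = (x[j] - x[i]) - z          # residual of this factor
            H[i, i] += w; H[j, j] += w     # J^T W J contributions
            H[i, j] -= w; H[j, i] -= w
            b[i] += w * r; b[j] -= w * r   # -J^T W r
        H[0, 0] += 1e6                     # anchor pose 0
        x += np.linalg.solve(H, b)
    return x
```

Real systems use the same structure with poses on SE(2)/SE(3) and sparse solvers; the sparsity pattern of `H` mirrors the graph's connectivity.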
From Perception to Navigation in Environments with Persons: An Indoor Evaluation of the State of the Art
Research in the field of social robotics is allowing service robots to operate in environments with people. With the aim of realizing the vision of humans and robots coexisting in the same environment, several solutions have been proposed to (1) perceive persons and objects in the immediate environment; (2) predict the movements of humans; and (3) plan navigation in agreement with socially accepted rules. In this work, we discuss the different aspects related to social navigation in the context of our experience in an indoor environment. We describe state-of-the-art approaches and experiment with existing methods to analyze their performance in practice. From this study, we gather first-hand insights into the limitations of current solutions and identify possible research directions to address the open challenges. In particular, this paper focuses on topics related to perception at the hardware and application levels, including 2D and 3D sensors, geometric and mainly semantic mapping, the prediction of people's trajectories (physics-, pattern- and planning-based), and social navigation (reactive and predictive) in indoor environments.
A Comprehensive Review on Autonomous Navigation
The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite important milestones, several challenges are yet to be addressed. Aggregating the achievements of the robotics community in survey papers is vital to keep track of the current state of the art and the challenges that must be tackled in the future. This paper provides a comprehensive review of autonomous mobile robots, covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for presenting a survey paper is twofold. First, the autonomous navigation field evolves quickly, so writing survey papers regularly is crucial to keep the research community well aware of the current status of the field. Second, deep learning methods have revolutionized many fields, including autonomous navigation. Therefore, it is necessary to give an appropriate treatment of the role of deep learning in autonomous navigation as well, which is covered in this paper. Future works and research gaps are also discussed.