Modelling the spatial distribution of DEM Error
Assessment of a DEM's quality is usually undertaken by deriving a measure of DEM accuracy – how close the DEM's elevation values are to the true elevation. Measures such as root mean squared error (RMSE) and the standard deviation of the error are frequently used. These measures summarise the elevation errors in a DEM as a single value. A more detailed description of DEM accuracy would allow better understanding of DEM quality and of the uncertainty that follows when DEMs are used in analytical applications. The research presented addresses the limitations of representing DEM uncertainty with a single RMSE value by developing a new technique for creating a spatially distributed model of DEM quality – an accuracy surface. The technique is based on the hypothesis that the distribution and scale of elevation error within a DEM are at least partly related to the morphometric characteristics of the terrain. It involves generating a set of terrain parameters to characterise terrain morphometry and developing regression models to define the relationship between DEM error and morphometric character. The regression models form the basis for creating standard-deviation surfaces that represent DEM accuracy. The hypothesis is shown to hold, and reliable accuracy surfaces are successfully created. These accuracy surfaces provide more detailed information about DEM accuracy than a single global RMSE estimate.
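A rough illustration of the idea, on synthetic data: the terrain, the slope-dependent error model, and the linear regression form below are illustrative assumptions, not the paper's exact method. The sketch regresses local DEM error magnitude against two morphometric parameters and predicts a spatially distributed accuracy surface:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" terrain and a DEM whose error scale grows with slope
# (an illustrative assumption matching the hypothesis above).
x = np.linspace(0.0, 4.0 * np.pi, 64)
true_elev = 50.0 * np.sin(x)[:, None] * np.cos(x)[None, :]

gy, gx = np.gradient(true_elev)
slope = np.hypot(gx, gy)               # morphometric parameter: slope magnitude
rough = np.abs(np.gradient(slope)[0])  # morphometric parameter: crude roughness

dem = true_elev + rng.normal(0.0, 0.5 + 0.8 * slope)

# Regress local absolute error on the terrain parameters (checkpoints with
# known reference elevations would play this role in practice).
abs_err = np.abs(dem - true_elev).ravel()
X = np.column_stack([np.ones(abs_err.size), slope.ravel(), rough.ravel()])
coef, *_ = np.linalg.lstsq(X, abs_err, rcond=None)

# The fitted model turns terrain parameters into an accuracy surface:
# a per-cell estimate of error scale instead of one global RMSE.
accuracy_surface = (X @ coef).reshape(true_elev.shape)
```

The point of the sketch is the output shape: a surface of error estimates, one per cell, rather than a single summary statistic.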
Bayesian Optimisation for Safe Navigation under Localisation Uncertainty
In outdoor environments, mobile robots are required to navigate through
terrain with varying characteristics, some of which might significantly affect
the integrity of the platform. Ideally, the robot should be able to identify
areas that are safe for navigation based on its own percepts about the
environment while avoiding damage to itself. Bayesian optimisation (BO) has
been successfully applied to the task of learning a model of terrain
traversability while guiding the robot through more traversable areas. An
issue, however, is that localisation uncertainty can end up guiding the robot
to unsafe areas and distort the model being learnt. In this paper, we address
this problem and present a novel method that allows BO to consider localisation
uncertainty by applying a Gaussian process model for uncertain inputs as a
prior. We evaluate the proposed method in simulation and in experiments with a
real robot navigating over rough terrain and compare it against standard BO
methods.
Comment: To appear in the proceedings of the 18th International Symposium on Robotics Research (ISRR 2017).
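A minimal sketch of the problem and of one common fix, which is not the paper's exact formulation: propagate the localisation noise into an extra output-noise term via the estimated local slope of the function (an NIGP-style approximation), which widens the GP's predictive variance where inputs are untrustworthy:

```python
import numpy as np

def rbf(a, b, ell):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def gp_predict(x, y, xs, ell, noise):
    # Standard GP regression equations (zero mean, unit-variance RBF kernel).
    K = rbf(x, x, ell) + noise * np.eye(x.size)
    Ks = rbf(x, xs, ell)
    sol = np.linalg.solve(K, Ks)
    mean = sol.T @ y
    var = 1.0 - np.sum(Ks * sol, axis=0)
    return mean, var

rng = np.random.default_rng(1)
x_true = np.linspace(-2.0, 2.0, 15)
loc_sd = 0.1                                            # assumed localisation noise (std)
x_obs = x_true + rng.normal(0.0, loc_sd, x_true.shape)  # noisy position estimates
y = np.sin(3.0 * x_true) + 0.05 * rng.normal(size=x_true.size)

xs = np.linspace(-2.0, 2.0, 50)

# Naive GP: trusts the noisy input locations as if they were exact.
m_std, v_std = gp_predict(x_obs, y, xs, ell=0.5, noise=1e-2)

# Uncertain-input GP (sketch): inflate the output noise by the input noise
# passed through the estimated gradient of the latent function.
grad = np.gradient(y, x_true)
extra = float(np.mean(grad**2)) * loc_sd**2
m_unc, v_unc = gp_predict(x_obs, y, xs, ell=0.5, noise=1e-2 + extra)
```

Because the effective noise is strictly larger, the uncertain-input posterior variance dominates the naive one everywhere, so a BO acquisition built on it is more conservative near poorly localised samples.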
Adaptive Robotic Information Gathering via Non-Stationary Gaussian Processes
Robotic Information Gathering (RIG) is a foundational research topic that
answers how a robot (team) collects informative data to efficiently build an
accurate model of an unknown target function under robot embodiment
constraints. RIG has many applications, including but not limited to autonomous
exploration and mapping, 3D reconstruction or inspection, search and rescue,
and environmental monitoring. A RIG system relies on a probabilistic model's
prediction uncertainty to identify critical areas for informative data
collection. Gaussian Processes (GPs) with stationary kernels have been widely
adopted for spatial modeling. However, real-world spatial data is typically
non-stationary -- different locations do not have the same degree of
variability. As a result, the prediction uncertainty does not accurately reveal
prediction error, limiting the success of RIG algorithms. We propose a family
of non-stationary kernels named Attentive Kernel (AK), which is simple, robust,
and can extend any existing kernel to a non-stationary one. We evaluate the new
kernel in elevation mapping tasks, where AK provides better accuracy and
uncertainty quantification over the commonly used stationary kernels and the
leading non-stationary kernels. The improved uncertainty quantification guides
the downstream informative planner to collect more valuable data around the
high-error area, further increasing prediction accuracy. A field experiment
demonstrates that the proposed method can guide an Autonomous Surface Vehicle
(ASV) to prioritize data collection in locations with significant spatial
variations, enabling the model to characterize salient environmental features.
Comment: International Journal of Robotics Research (IJRR). arXiv admin note: text overlap with arXiv:2205.0642
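The Attentive Kernel itself is defined in the paper; as a generic illustration of what "non-stationary" means here, the classic Gibbs kernel lets the lengthscale vary with location. The lengthscale field below is an arbitrary assumption for the demo:

```python
import numpy as np

def gibbs_kernel(a, b, ell_fn):
    # Gibbs non-stationary RBF: a valid kernel whose lengthscale is a
    # function of the input. This is NOT the Attentive Kernel, only a
    # minimal non-stationary example.
    la = ell_fn(a)[:, None]
    lb = ell_fn(b)[None, :]
    s = la**2 + lb**2
    pref = np.sqrt(2.0 * la * lb / s)
    return pref * np.exp(-((a[:, None] - b[None, :]) ** 2) / s)

# Assumed lengthscale field: short lengthscales (high variability) near x = 0,
# long lengthscales (smooth terrain) further away.
ell_fn = lambda z: 0.2 + 0.8 * np.abs(z)

x = np.linspace(-1.0, 1.0, 25)
K = gibbs_kernel(x, x, ell_fn)
```

A stationary kernel would assign one lengthscale everywhere; here the prior variability, and hence the prediction uncertainty that drives informative planning, differs by region.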
Practical Bayesian optimization in the presence of outliers
Inference in the presence of outliers is an important field of research as
outliers are ubiquitous and may arise across a variety of problems and domains.
Bayesian optimization is a method that relies heavily on probabilistic inference.
This allows outstanding sample efficiency because the probabilistic machinery
provides a memory of the whole optimization process. However, that virtue
becomes a disadvantage when the memory is populated with outliers, inducing
bias in the estimation. In this paper, we present an empirical evaluation of
Bayesian optimization methods in the presence of outliers. The empirical
evidence shows that Bayesian optimization with robust regression often produces
suboptimal results. We then propose a new algorithm which combines robust
regression (a Gaussian process with Student-t likelihood) with outlier
diagnostics to classify data points as outliers or inliers. By using a
scheduler for the classification of outliers, our method is more efficient and
converges better than standard robust regression. Furthermore, we
show that even in controlled situations with no expected outliers, our method
is able to produce better results.
Comment: 10 pages (2 of references), 6 figures, 1 algorithm
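The paper's method pairs a Student-t likelihood GP with scheduled outlier diagnostics; the sketch below shows only the diagnostic idea in a much simpler setting (a polynomial fit with a MAD-based threshold, with all constants chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2.0 * np.pi * x) + 0.05 * rng.normal(size=x.size)
y[[5, 20]] += 3.0                 # inject two gross outliers

# Iterative outlier diagnostic: fit on the points currently deemed inliers,
# then reclassify every point via a robust (median absolute deviation) threshold.
mask = np.ones(y.size, dtype=bool)
for _ in range(3):
    coef = np.polyfit(x[mask], y[mask], 5)
    resid = np.abs(y - np.polyval(coef, x))
    mad = np.median(resid[mask])
    mask = resid < 5.0 * 1.4826 * mad

outliers = np.flatnonzero(~mask)  # indices classified as outliers
```

The refit-without-outliers loop is the key idea: once the gross points stop contaminating the fit, the remaining residuals shrink and the classification stabilises.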