Learning a Bias Correction for Lidar-only Motion Estimation
This paper presents a novel technique to correct for bias in a classical
estimator using a learning approach. We apply a learned bias correction to a
lidar-only motion estimation pipeline. Our technique trains a Gaussian process
(GP) regression model using data with ground truth. The inputs to the model are
high-level features derived from the geometry of the point-clouds, and the
outputs are the predicted biases between poses computed by the estimator and
the ground truth. The predicted biases are applied as a correction to the poses
computed by the estimator.
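A minimal sketch of this idea (my illustration, not the authors' code): a GP regression model is fit to map point-cloud-derived features to the estimator's pose bias, and the predicted bias is then subtracted from new estimates. The features, synthetic data, and single scalar pose component below are all assumptions for illustration.

```python
# Hedged sketch of learned bias correction with GP regression.
# All features/data here are synthetic and illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic training set: geometric features vs. observed estimator bias
# (bias = estimated pose component minus ground truth).
X_train = rng.uniform(0, 1, size=(200, 3))        # stand-in geometry features
true_bias = 0.05 * X_train[:, 0] - 0.02 * X_train[:, 1]
y_train = true_bias + rng.normal(0, 0.005, 200)   # noisy bias observations

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

# At run-time: subtract the predicted bias from a raw pose estimate.
features = np.array([[0.4, 0.2, 0.7]])
raw_estimate = 1.00                               # e.g. translation along x
corrected = raw_estimate - gp.predict(features)[0]
```

The run-time cost is a single GP prediction per pose, which is consistent with the paper's claim of a very small computational overhead.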
Our technique is evaluated on over 50km of lidar data, which includes the
KITTI odometry benchmark and lidar datasets collected around the University of
Toronto campus. After applying the learned bias correction, we obtained
significant improvements to lidar odometry in all datasets tested. We achieved
approximately a 10% reduction in errors on all datasets over an already
accurate lidar odometry algorithm, at the expense of less than a 1% increase
in computational cost at run-time.
Comment: 15th Conference on Computer and Robot Vision (CRV 2018)
Collaborative Control for a Robotic Wheelchair: Evaluation of Performance, Attention, and Workload
Powered wheelchair users often struggle to drive safely and effectively, and in more critical cases can only get around when accompanied by an assistant. To address these issues, we propose a collaborative control mechanism that assists the user as and when they require help. The system uses a multiple-hypotheses method to predict the driver's intentions and, if necessary, adjusts the control signals to achieve the desired goal safely. The main emphasis of this paper is on a comprehensive evaluation, where we not only look at the system performance but, perhaps more importantly, characterise the user performance, in an experiment that combines eye-tracking with a secondary task. Without assistance, participants experienced multiple collisions whilst driving around the predefined route. Conversely, when they were assisted by the collaborative controller, not only did they drive more safely, but they were able to pay less attention to their driving, resulting in a reduced cognitive workload. We discuss the importance of these results and their implications for other applications of shared control, such as brain-machine interfaces, where it could be used to compensate for both the low frequency and the low resolution of the user input.
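One simple way to picture "assists the user as and when they require help" is a risk-weighted blend of user and assistive commands. This is a hedged sketch of that general shared-control pattern, not the paper's multiple-hypotheses controller; the risk values and commands are invented.

```python
# Sketch of a shared-control blend: assistance weight grows with
# predicted collision risk. Numbers are illustrative only.
def blend(u_user, u_assist, risk):
    """Blend user and assistive commands; risk is clamped to [0, 1]."""
    alpha = min(max(risk, 0.0), 1.0)   # more risk -> more assistance
    return (1 - alpha) * u_user + alpha * u_assist

# Low risk: the user's command passes through nearly unchanged.
safe = blend(1.0, 0.2, risk=0.1)       # 0.92
# High risk near an obstacle: the assistive command dominates.
risky = blend(1.0, 0.2, risk=0.9)      # 0.28
```

In a brain-machine-interface setting, the same blending idea lets sparse, low-resolution user input steer the goal while the controller handles fine-grained safety.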
Quantifying the Evolutionary Self Structuring of Embodied Cognitive Networks
We outline a possible theoretical framework for the quantitative modeling of
networked embodied cognitive systems. We note that: 1) information
self-structuring through sensory-motor coordination does not occur
deterministically in R^n, a generic multivariable vector space, but in SE(3),
the group of rigid-body motions in space; 2) it happens in a stochastic,
open-ended environment. These observations may simplify, at the price of a
certain abstraction, the modeling and design of self-organization processes
based on the maximization of informational measures such as mutual
information. Furthermore, by providing closed-form or computationally lighter
algorithms, it may significantly reduce the computational burden of their
implementation. We propose a modeling framework that aims to provide new
tools for the design of networks of artificial self-organizing, embodied,
and intelligent agents, and for the reverse engineering of natural ones. At
this point, it remains largely a theoretical conjecture, and whether this
model will be useful in practice has still to be experimentally verified.
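As a toy illustration of the informational measures the framework proposes to maximize (my construction, not the paper's), mutual information between discretized sensor and motor streams can be estimated directly from paired samples:

```python
# Mutual information I(X;Y) in bits between two discrete variables,
# estimated from paired samples. Streams below are illustrative.
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Perfectly coordinated sensor/motor streams share one full bit;
# independent streams share none.
coupled = mutual_information([0, 1, 0, 1], [0, 1, 0, 1])        # 1.0 bit
independent = mutual_information([0, 0, 1, 1], [0, 1, 0, 1])    # 0.0 bits
```

The framework's point is that when the motor variable lives in SE(3) rather than a generic state space, such measures may admit closed-form or cheaper-to-optimize expressions.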
Bayesian Active Edge Evaluation on Expensive Graphs
Robots operate in environments with varying implicit structure. For instance,
a helicopter flying over terrain encounters a very different arrangement of
obstacles than a robotic arm manipulating objects on a cluttered table top.
State-of-the-art motion planning systems do not exploit this structure, thereby
expending valuable planning effort searching for implausible solutions. We are
interested in planning algorithms that actively infer the underlying structure
of the valid configuration space during planning in order to find solutions
with minimal effort. Consider the problem of evaluating edges on a graph to
quickly discover collision-free paths. Evaluating edges is expensive, both for
robots with complex geometries like robot arms, and for robots with limited
onboard computation like UAVs. Until now, this challenge has been addressed via
laziness, i.e., deferring edge evaluation until absolutely necessary, in the
hope that edges turn out to be valid. However, not all edges are alike in
value: some have many potentially good paths flowing through them, and others
encode the likelihood of neighbouring edges being valid. This leads to our
key insight: instead of passive laziness, we can actively choose edges
that reduce the uncertainty about the validity of paths. We show that this is
equivalent to the Bayesian active learning paradigm of decision region
determination (DRD). However, the DRD problem is not only combinatorially hard,
but also requires explicit enumeration of all possible worlds. We propose a
novel framework that combines two DRD algorithms, DIRECT and BISECT, to
overcome both issues. We show that our approach outperforms several
state-of-the-art algorithms on a spectrum of planning problems for mobile
robots, manipulators, and autonomous helicopters.
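The "actively choose edges that reduce uncertainty about path validity" insight can be sketched with a myopic information-gain heuristic. This is an illustration of the general idea only, not the DIRECT or BISECT algorithms; the candidate paths, edge ids, and validity priors are invented.

```python
# Sketch: pick which edge to collision-check next by how much, in
# expectation, checking it reduces uncertainty over candidate paths.
import math

# Candidate paths as sets of edge ids; independent prior probability
# that each edge is collision-free. All values are made up.
paths = [{0, 1, 2}, {0, 3}, {4, 5}]
p_valid = {0: 0.9, 1: 0.5, 2: 0.5, 3: 0.6, 4: 0.8, 5: 0.4}

def bernoulli_entropy(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def uncertainty(priors):
    """Total uncertainty about path validity: sum of per-path entropies."""
    return sum(bernoulli_entropy(math.prod(priors[e] for e in path))
               for path in paths)

def expected_uncertainty_after(edge):
    """Expected remaining uncertainty if this edge is evaluated now."""
    p = p_valid[edge]
    return (p * uncertainty({**p_valid, edge: 1.0})
            + (1 - p) * uncertainty({**p_valid, edge: 0.0}))

# Evaluate the edge whose outcome is expected to be most informative.
best_edge = min(p_valid, key=expected_uncertainty_after)
```

Note how this differs from laziness: the chosen edge need not lie on the current shortest path; it is the one whose validity most disambiguates which paths are worth pursuing.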