EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras
Event-based cameras have shown great promise in a variety of situations where
frame-based cameras suffer, such as high-speed motion and high-dynamic-range
scenes. However, developing algorithms for event measurements requires a new
class of hand-crafted algorithms. Deep learning has shown great success in
providing model-free solutions to many problems in the vision community, but
existing networks have been developed with frame-based images in mind, and
the wealth of labeled data that exists for images for supervised training does
not exist for events. To address these points, we present EV-FlowNet, a novel
self-supervised deep learning pipeline for optical flow estimation for
event-based cameras. In particular, we introduce an image-based representation
of a given event stream, which is fed into a self-supervised neural network as
the sole input. The corresponding grayscale images captured from the same camera at
the same time as the events are then used as a supervisory signal to provide a
loss function at training time, given the estimated flow from the network. We
show that the resulting network is able to accurately predict optical flow from
events alone in a variety of different scenes, with performance competitive
with image-based networks. This method not only allows for accurate estimation of
dense optical flow, but also provides a framework for the transfer of other
self-supervised methods to the event-based domain.
Comment: 9 pages, 5 figures, 1 table. Accompanying video:
https://youtu.be/eMHZBSoq0sE. Dataset:
https://daniilidis-group.github.io/mvsec/, Robotics: Science and Systems 201
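The image-based event representation described above can be sketched as follows. This is a minimal sketch of one common encoding (per-polarity event counts plus most-recent timestamps); the channel layout is an assumption, not necessarily the paper's exact formulation:

```python
import numpy as np

def events_to_image(events, height, width):
    """Accumulate an event stream into a 4-channel image.

    Channels 0-1: per-pixel event counts for positive/negative polarity.
    Channels 2-3: timestamp of the most recent event per polarity.
    `events` is an iterable of (x, y, t, p) tuples with p in {+1, -1}.
    """
    img = np.zeros((4, height, width), dtype=np.float32)
    for x, y, t, p in events:
        c = 0 if p > 0 else 1
        img[c, y, x] += 1.0   # event count for this polarity
        img[2 + c, y, x] = t  # events arrive in time order, so this
                              # keeps the most recent timestamp
    return img
```

The fixed-size output can then be fed to a conventional convolutional network as the sole input, as the abstract describes.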
Towards Visual Ego-motion Learning in Robots
Many model-based Visual Odometry (VO) algorithms have been proposed in the
past decade, often restricted to a particular type of camera optics or to the
underlying motion manifold observed. We envision robots being able to learn and
perform these tasks, in a minimally supervised setting, as they gain more experience.
To this end, we propose a fully trainable solution to visual ego-motion
estimation for varied camera optics. We propose a visual ego-motion learning
architecture that maps observed optical flow vectors to an ego-motion density
estimate via a Mixture Density Network (MDN). By modeling the architecture as a
Conditional Variational Autoencoder (C-VAE), our model is able to provide
introspective reasoning and prediction for ego-motion induced scene-flow.
Additionally, our proposed model is especially amenable to bootstrapped
ego-motion learning in robots where the supervision in ego-motion estimation
for a particular camera sensor can be obtained from standard navigation-based
sensor fusion strategies (GPS/INS and wheel-odometry fusion). Through
experiments, we show the utility of our proposed approach in enabling the
concept of self-supervised learning for visual ego-motion estimation in
autonomous robots.
Comment: Conference paper; submitted to IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS) 2017, Vancouver, CA; 8 pages, 8 figures,
2 tables
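A Mixture Density Network of the kind the abstract describes outputs the parameters of a Gaussian mixture over the ego-motion vector and is trained by minimizing the negative log-likelihood of observed motions. The sketch below shows that loss for diagonal-covariance components; the shapes and the choice of a diagonal covariance are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def mdn_nll(pi_logits, mu, log_sigma, target):
    """Negative log-likelihood of a target ego-motion vector under a
    Gaussian mixture emitted by an MDN head.

    pi_logits: (K,)   unnormalized mixture weights
    mu:        (K, D) component means
    log_sigma: (K, D) per-dimension log standard deviations
    target:    (D,)   observed ego-motion (e.g. translation + rotation)
    """
    # log-softmax over mixture weights
    log_pi = pi_logits - np.log(np.sum(np.exp(pi_logits)))
    # log N(target | mu_k, diag(sigma_k^2)) for each component k
    z = (target - mu) / np.exp(log_sigma)
    log_prob = -0.5 * np.sum(z**2 + 2 * log_sigma + np.log(2 * np.pi), axis=1)
    # stable log-sum-exp over components
    joint = log_pi + log_prob
    m = joint.max()
    return -(m + np.log(np.sum(np.exp(joint - m))))
```

With a single component this reduces to an ordinary Gaussian negative log-likelihood, which is a convenient sanity check when wiring such a head into a larger model.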
Single and multiple stereo view navigation for planetary rovers
© Cranfield University
This thesis deals with the challenge of autonomous navigation of the ExoMars rover.
The absence of global positioning systems (GPS) in space, added to the limitations
of wheel odometry, makes autonomous navigation based on these two techniques, as
done in the literature, an unviable solution and necessitates the use of other approaches.
That, among other reasons, motivates this work to use solely visual data to solve the
robot’s ego-motion problem.
The homogeneity of Mars’ terrain makes the robustness of the low-level image
processing techniques a critical requirement. In the first part of the thesis, novel
solutions are presented to tackle this specific problem. Detecting features that are
robust to illumination changes, and matching and associating them uniquely, is a
sought-after capability. A solution for robustness of features against illumination
variation is proposed, combining Harris corner detection with a moment-image
representation. Whereas the former provides efficient feature detection, the moment
images add the necessary brightness invariance. Moreover, a bucketing strategy is used
to guarantee that features are homogeneously distributed within the images. Then, the
addition of local feature descriptors guarantees the unique identification of image cues.
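The bucketing strategy mentioned above is commonly implemented by tiling the image into a grid and keeping only the strongest detections per cell, which prevents features from clustering on a few high-texture regions. A minimal sketch (grid size and per-cell quota are illustrative choices, not the thesis's parameters):

```python
def bucket_features(keypoints, scores, height, width, grid=4, per_cell=5):
    """Keep at most `per_cell` strongest keypoints in each cell of a
    grid x grid tiling of the image, so the surviving features are
    homogeneously distributed across the frame.

    keypoints: list of (x, y) pixel coordinates
    scores:    matching list of detector responses (higher = stronger)
    """
    cell_h, cell_w = height / grid, width / grid
    buckets = {}
    for (x, y), s in zip(keypoints, scores):
        key = (int(y // cell_h), int(x // cell_w))
        buckets.setdefault(key, []).append((s, (x, y)))
    kept = []
    for cell in buckets.values():
        cell.sort(key=lambda sp: -sp[0])          # strongest first
        kept.extend(pt for _, pt in cell[:per_cell])
    return kept
```

A well-spread feature set improves the conditioning of the subsequent motion-estimation step, since residuals then constrain the full field of view rather than a single corner of it.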
In the second part, reliable and precise motion estimation for the Mars rover is
studied. A number of successful approaches are thoroughly analysed. Visual
Simultaneous Localisation And Mapping (VSLAM) is investigated, proposing
enhancements and integrating it with the robust feature methodology. Then, linear
and nonlinear optimisation techniques are explored. Alternative photogrammetry
reprojection concepts are tested. Lastly, data fusion techniques are proposed to
deal with the integration of multiple stereo-view data.
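The nonlinear optimisation techniques referred to above typically minimise a reprojection residual: the gap between where a pinhole camera model predicts a 3-D point should land in the image and where it was observed. A generic textbook formulation, not the thesis's specific variant:

```python
import numpy as np

def reprojection_error(K, R, t, X, uv):
    """2-D pixel residual between the projection of 3-D point X and the
    observed pixel uv, under a pinhole camera with intrinsics K and
    pose (R, t) mapping world coordinates into the camera frame.
    Summed over many points, this is the cost that bundle-adjustment
    style optimisation drives toward zero.
    """
    Xc = R @ X + t          # world -> camera frame
    x = K @ Xc              # homogeneous image coordinates
    proj = x[:2] / x[2]     # perspective division
    return proj - uv        # residual in pixels
```

Stacking these residuals for all tracked features and solving for the pose that minimises their squared norm (e.g. with Gauss-Newton) yields the frame-to-frame motion estimate.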
Our robust visual scheme allows good feature repeatability. Because of this,
dimensionality reduction of the feature data can be used without compromising the
overall performance of the proposed solutions for motion estimation. Also, the
developed ego-motion techniques have been extensively validated using both simulated
and real data collected at ESA-ESTEC facilities. Multiple stereo-view solutions for
robot motion estimation are introduced, presenting interesting benefits. The obtained
results prove the innovative methods presented here to be accurate and reliable
approaches capable of solving the ego-motion problem in a Mars environment.
CELLO: A fast algorithm for Covariance Estimation
We present CELLO (Covariance Estimation and Learning through Likelihood Optimization), an algorithm for predicting the covariances of measurements based on any available informative features. This algorithm is intended to improve the accuracy and reliability of on-line state estimation by providing a principled way to extend the conventional fixed-covariance Gaussian measurement model. We show in experiments that CELLO learns to predict measurement covariances that agree with empirical covariances obtained by manually annotating sensor regimes. We also show that using the learned covariances during filtering provides substantial quantitative improvement to the overall state estimate. © 2013 IEEE.
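The core idea, learning a feature-dependent measurement covariance by maximising the likelihood of observed residuals, can be illustrated with a deliberately simplified scalar model where the log-variance is linear in the features. This is a sketch of the general likelihood-optimisation idea only, not CELLO's actual estimator:

```python
import numpy as np

def fit_covariance_model(features, residuals, steps=500, lr=0.1):
    """Fit log sigma^2_i = w . phi_i by gradient descent on the Gaussian
    negative log-likelihood of the observed residuals e_i.

    features:  (N, F) feature vectors phi_i
    residuals: (N,)   measurement errors e_i observed during training
    Returns the learned weight vector w.
    """
    w = np.zeros(features.shape[1])
    for _ in range(steps):
        log_var = features @ w
        # gradient of mean NLL = 0.5 * mean[(1 - e^2 / var) * phi]
        grad = 0.5 * ((1.0 - residuals**2 / np.exp(log_var))[:, None]
                      * features).sum(axis=0)
        w -= lr * grad / len(residuals)
    return w

def predict_variance(w, phi):
    """Predicted measurement variance for a new feature vector."""
    return np.exp(phi @ w)
```

At filtering time the predicted variance replaces the usual fixed measurement noise, so confident regimes are weighted more heavily than degraded ones.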