Neighbourhood Consensus Networks
We address the problem of finding reliable dense correspondences between a
pair of images. This is a challenging task due to strong appearance differences
between the corresponding scene elements and ambiguities generated by
repetitive patterns. The contributions of this work are threefold. First,
inspired by the classic idea of disambiguating feature matches using semi-local
constraints, we develop an end-to-end trainable convolutional neural network
architecture that identifies sets of spatially consistent matches by analyzing
neighbourhood consensus patterns in the 4D space of all possible
correspondences between a pair of images without the need for a global
geometric model. Second, we demonstrate that the model can be trained
effectively from weak supervision in the form of matching and non-matching
image pairs without the need for costly manual annotation of point-to-point
correspondences. Third, we show the proposed neighbourhood consensus network
can be applied to a range of matching tasks including both category- and
instance-level matching, obtaining the state-of-the-art results on the PF
Pascal dataset and the InLoc indoor visual localization benchmark.Comment: In Proceedings of the 32nd Conference on Neural Information
Processing Systems (NeurIPS 2018
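The "4D space of all possible correspondences" the abstract refers to can be pictured as an all-pairs correlation tensor between two dense feature maps. A minimal sketch (shapes, names, and the use of cosine similarity via L2-normalized features are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def correlation_4d(f_a, f_b):
    """All-pairs similarities between two dense feature maps.

    f_a, f_b: feature maps of shape (C, H, W), assumed L2-normalized
    along the channel axis. Returns a 4D tensor c of shape (H, W, H, W)
    where c[i, j, k, l] scores the match between position (i, j) in
    image A and position (k, l) in image B; neighbourhood consensus
    filtering then operates on this 4D volume.
    """
    return np.einsum('cij,ckl->ijkl', f_a, f_b)

# toy example: random unit-norm features
rng = np.random.default_rng(0)
f_a = rng.standard_normal((8, 4, 4))
f_b = rng.standard_normal((8, 4, 4))
f_a /= np.linalg.norm(f_a, axis=0, keepdims=True)
f_b /= np.linalg.norm(f_b, axis=0, keepdims=True)
c = correlation_4d(f_a, f_b)
print(c.shape)  # (4, 4, 4, 4)
```

With unit-norm features every entry is a cosine similarity in [-1, 1], which is what makes spatially consistent peaks in this volume meaningful to analyze.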
PROBE-GK: Predictive Robust Estimation using Generalized Kernels
Many algorithms in computer vision and robotics make strong assumptions about
uncertainty, and rely on the validity of these assumptions to produce accurate
and consistent state estimates. In practice, dynamic environments may degrade
sensor performance in predictable ways that cannot be captured with static
uncertainty parameters. In this paper, we employ fast nonparametric Bayesian
inference techniques to more accurately model sensor uncertainty. By setting a
prior on observation uncertainty, we derive a predictive robust estimator, and
show how our model can be learned from sample images, both with and without
knowledge of the motion used to generate the data. We validate our approach
through Monte Carlo simulations, and report significant improvements in
localization accuracy relative to a fixed noise model in several settings,
including on synthetic data, the KITTI dataset, and our own experimental
platform.
Comment: In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA'16), Stockholm, Sweden, May 16-21, 2016.
Disparity and Optical Flow Partitioning Using Extended Potts Priors
This paper addresses the problems of disparity and optical flow partitioning
based on the brightness invariance assumption. We investigate new variational
approaches to these problems with Potts priors and possibly box constraints.
For the optical flow partitioning, our model includes vector-valued data and an
adapted Potts regularizer. Using the notion of asymptotically level stable
functions, we prove the existence of global minimizers of our functionals. We
propose a modified alternating direction method of multipliers. This iterative
algorithm requires the computation of global minimizers of classical univariate
Potts problems which can be done efficiently by dynamic programming. We prove
that the algorithm converges both for the constrained and unconstrained
problems. Numerical examples demonstrate the very good performance of our
partitioning method.
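The univariate Potts subproblem the algorithm relies on (piecewise-constant L2 fitting with a per-jump penalty γ) can indeed be solved exactly by dynamic programming. A minimal O(n²) sketch, with the function name and the L2 data term as assumptions rather than the paper's exact formulation:

```python
import numpy as np

def potts_1d(f, gamma):
    """Exact minimizer of sum_i (u_i - f_i)^2 + gamma * (#jumps in u)
    over piecewise-constant signals u, via the classical dynamic program.
    """
    f = np.asarray(f, dtype=float)
    n = len(f)
    # prefix sums give O(1) segment errors:
    # seg_err(l, r) = sum_{i=l}^{r} (f_i - mean(f_l..f_r))^2
    s1 = np.concatenate(([0.0], np.cumsum(f)))
    s2 = np.concatenate(([0.0], np.cumsum(f * f)))

    def seg_err(l, r):  # inclusive 0-based indices
        m = r - l + 1
        tot = s1[r + 1] - s1[l]
        return (s2[r + 1] - s2[l]) - tot * tot / m

    best = np.full(n + 1, np.inf)      # best[r] = optimal cost of f[0:r]
    best[0] = -gamma                   # cancels the first segment's penalty
    last = np.zeros(n + 1, dtype=int)  # start index of the last segment
    for r in range(1, n + 1):
        for l in range(r):
            cost = best[l] + gamma + seg_err(l, r - 1)
            if cost < best[r]:
                best[r], last[r] = cost, l
    # backtrack the boundaries; fill each segment with its mean
    u = np.empty(n)
    r = n
    while r > 0:
        l = last[r]
        u[l:r] = f[l:r].mean()
        r = l
    return u

noisy_step = [0.1, -0.1, 0.0, 5.1, 4.9, 5.0]
print(potts_1d(noisy_step, gamma=1.0))  # [0. 0. 0. 5. 5. 5.]
```

Each inner minimization considers every possible start of the final segment, which is what makes the solver exact despite the non-convex jump penalty.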
Robust Visual SLAM in Challenging Environments with Low-texture and Dynamic Illumination
Thesis defense date: 26 February 2020.
In recent years, visual Simultaneous Localization and Mapping (SLAM) has played a role of capital importance in rapid technological advances, e.g. mobile robotics and applications such as virtual, augmented, or mixed reality (VR/AR/MR), as a vital part of their processing pipelines. As its name indicates, it comprises the estimation of the state of a robot (typically the pose) while, simultaneously, incrementally building and refining a consistent representation of the environment, i.e. the so-called map, based on the equipped sensors.
Despite the maturity reached by state-of-the-art visual SLAM techniques in controlled environments, there are still many open challenges to address before reaching a SLAM system robust to long-term operation in uncontrolled scenarios, where classical assumptions, such as static environments, no longer hold. This thesis contributes to improving the robustness of visual SLAM in harsh or difficult environments, in particular:
- Low-textured Environments, where traditional approaches suffer from an accuracy impoverishment and, occasionally, the absolute failure of the system. Fortunately, many such low-textured environments contain planar elements that are rich in linear shapes, so an alternative feature choice such as line segments can exploit information from structured parts of the scene. This set of contributions exploits both types of features, i.e. points and line segments, to produce visual odometry and SLAM algorithms that are robust in a broader variety of environments, leveraging them at all stages of the related processes: monocular depth estimation, visual odometry, keyframe selection, bundle adjustment, loop closing, etc. Additionally, an open-source C++ implementation of the proposed algorithms has been released along with the published articles and some extra multimedia material for the benefit of the community.
- Robustness to Dynamic Illumination Conditions, also one of the main open challenges in visual odometry and SLAM, e.g. in high dynamic range (HDR) environments. The main difficulties in these situations come both from the limitations of the sensors, for instance, the automatic settings of a camera might not react fast enough to properly record dynamic illumination changes, and from limitations in the algorithms, e.g. the tracking of interest points is typically based on brightness constancy. The work of this thesis contributes to mitigating these phenomena from two different perspectives. The first addresses the problem from a deep learning perspective, enhancing images into invariant and richer representations for VO and SLAM, benefiting from the generalization properties of deep neural networks. This work also demonstrates how the insertion of long short-term memory (LSTM) units allows us to obtain temporally consistent sequences, since the estimation depends on previous states. Secondly, a more traditional perspective is exploited to contribute a purely geometric tracking of line segments in challenging stereo streams with complex or varying illumination, since line segments are intrinsically more informative.
On the Synergies between Machine Learning and Binocular Stereo for Depth Estimation from Images: a Survey
Stereo matching is one of the longest-standing problems in computer vision
with close to 40 years of studies and research. Throughout the years the
paradigm has shifted from local, pixel-level decision to various forms of
discrete and continuous optimization to data-driven, learning-based methods.
Recently, the rise of machine learning and the rapid proliferation of deep
learning enhanced stereo matching with new exciting trends and applications
unthinkable until a few years ago. Interestingly, the relationship between
these two worlds is two-way. While machine, and especially deep, learning
advanced the state-of-the-art in stereo matching, stereo itself enabled new
ground-breaking methodologies such as self-supervised monocular depth
estimation based on deep networks. In this paper, we review recent research in
the field of learning-based depth estimation from single and binocular images
highlighting the synergies, the successes achieved so far and the open
challenges the community is going to face in the immediate future.
Comment: Accepted to TPAMI. Paper version of our CVPR 2019 tutorial: "Learning-based depth estimation from stereo and monocular images: successes, limitations and future challenges" (https://sites.google.com/view/cvpr-2019-depth-from-image/home).
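One of the stereo-enabled methodologies the survey highlights, self-supervised monocular depth estimation, trains a network by warping one view toward the other with the predicted disparity and penalizing the photometric error. A minimal sketch of that loss (nearest-neighbour sampling for brevity where real systems use differentiable bilinear sampling; all names are illustrative):

```python
import numpy as np

def photometric_loss(left, right, disparity):
    """Self-supervision signal for stereo-trained monocular depth:
    warp the right image toward the left using the predicted per-pixel
    disparity and return the mean L1 reconstruction error.

    left, right: grayscale images of shape (H, W)
    disparity:   predicted horizontal shift per pixel, shape (H, W)
    """
    h, w = left.shape
    cols = np.arange(w)[None, :].repeat(h, axis=0)
    # sample the right image at x - d (nearest neighbour for brevity)
    src = np.clip(np.rint(cols - disparity).astype(int), 0, w - 1)
    warped = np.take_along_axis(right, src, axis=1)
    return np.abs(left - warped).mean()

# toy scene: the right view is the left view shifted by 2 pixels
left = np.zeros((4, 8)); left[:, 4] = 1.0
right = np.zeros((4, 8)); right[:, 2] = 1.0
print(photometric_loss(left, right, np.full((4, 8), 2.0)))  # 0.0
```

The loss vanishes only for the correct disparity, so minimizing it over a training set supervises depth without any ground-truth labels.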
Genetic Stereo Matching Algorithm with Fuzzy Fitness
This paper presents a genetic stereo matching algorithm with fuzzy evaluation
function. The proposed algorithm presents a new encoding scheme in which a
chromosome is represented by a disparity matrix. Evolution is controlled by a
fuzzy fitness function able to deal with noise and uncertain camera
measurements, and uses classical evolutionary operators. The result of the
algorithm is accurate dense disparity maps obtained in a reasonable
computational time suitable for real-time applications as shown in experimental
results.
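The encoding and evaluation described above (a chromosome is a full disparity matrix, scored by a fuzzy fitness function) can be sketched with a tiny elitist GA. The linear membership function, operators, and parameters below are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuzzy_fitness(disparity, cost_volume):
    """Fuzzy evaluation of a chromosome: map each pixel's matching cost
    (looked up in a precomputed cost volume) to a membership degree in
    the fuzzy set "good match", then average over the image."""
    h, w = disparity.shape
    costs = cost_volume[np.arange(h)[:, None], np.arange(w)[None, :], disparity]
    return np.clip(1.0 - costs, 0.0, 1.0).mean()

def evolve(cost_volume, pop=20, gens=30, mut=0.1):
    """Tiny elitist GA over disparity-matrix chromosomes using classical
    operators: uniform crossover and per-pixel random mutation."""
    h, w, d_max = cost_volume.shape
    population = [rng.integers(0, d_max, (h, w)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda d: -fuzzy_fitness(d, cost_volume))
        parents = population[: pop // 2]          # keep the fitter half
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.choice(len(parents), 2, replace=False)
            mask = rng.random((h, w)) < 0.5       # uniform crossover
            child = np.where(mask, parents[a], parents[b])
            flip = rng.random((h, w)) < mut       # per-pixel mutation
            child = np.where(flip, rng.integers(0, d_max, (h, w)), child)
            children.append(child)
        population = parents + children
    return max(population, key=lambda d: fuzzy_fitness(d, cost_volume))

# toy cost volume whose true disparity is 1 at every pixel
cv = np.ones((4, 4, 2))
cv[:, :, 1] = 0.0
best = evolve(cv)
print(fuzzy_fitness(best, cv))  # fraction of pixels at the true disparity
```

Because the disparity matrix itself is the chromosome, crossover and mutation act directly on the dense map, which is what distinguishes this encoding from per-pixel search.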
Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine
learning techniques are becoming increasingly important. In particular, as a
major breakthrough in the field, deep learning has proven to be an extremely
powerful tool in many areas. Shall we embrace deep learning as the key to all?
Or, should we resist a 'black-box' solution? There are controversial opinions
in the remote sensing community. In this article, we analyze the challenges of
using deep learning for remote sensing data analysis, review the recent
advances, and provide resources to make deep learning in remote sensing
ridiculously simple to start with. More importantly, we advocate remote sensing
scientists to bring their expertise into deep learning, and use it as an
implicit general model to tackle unprecedented large-scale influential
challenges, such as climate change and urbanization.
Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine.