PROBE-GK: Predictive Robust Estimation using Generalized Kernels
Many algorithms in computer vision and robotics make strong assumptions about
uncertainty, and rely on the validity of these assumptions to produce accurate
and consistent state estimates. In practice, dynamic environments may degrade
sensor performance in predictable ways that cannot be captured with static
uncertainty parameters. In this paper, we employ fast nonparametric Bayesian
inference techniques to more accurately model sensor uncertainty. By setting a
prior on observation uncertainty, we derive a predictive robust estimator, and
show how our model can be learned from sample images, both with and without
knowledge of the motion used to generate the data. We validate our approach
through Monte Carlo simulations, and report significant improvements in
localization accuracy relative to a fixed noise model in several settings,
including on synthetic data, the KITTI dataset, and our own experimental
platform.

Comment: In Proceedings of the IEEE International Conference on Robotics and
Automation (ICRA'16), Stockholm, Sweden, May 16-21, 2016
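The central idea — replacing a single static observation-noise parameter with a per-observation predicted variance — can be illustrated with a minimal sketch. This is not the paper's actual PROBE-GK model: the scalar state, the observable "degraded regime" feature, and all noise levels below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: estimate a scalar position from noisy observations.
# Half of the observations come from a degraded sensor regime with inflated
# noise; a predictive model is assumed to map the regime feature to a variance.
x_true = 2.0
n = 200
degraded = rng.random(n) < 0.5            # observable regime feature
sigma = np.where(degraded, 2.0, 0.2)      # per-observation noise std
z = x_true + sigma * rng.normal(size=n)   # observations

# Fixed-noise estimator: one static variance for all data (ordinary mean).
x_fixed = z.mean()

# Predictive estimator: weight each observation by its predicted inverse
# variance, down-weighting the degraded regime.
w = 1.0 / sigma**2
x_pred = np.sum(w * z) / np.sum(w)

print(abs(x_fixed - x_true), abs(x_pred - x_true))
```

Inverse-variance weighting is simply maximum-likelihood estimation under heteroscedastic Gaussian noise; the paper's contribution is learning such a variance predictor nonparametrically from sample images rather than being handed it.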
Distributed multi-agent Gaussian regression via finite-dimensional approximations
We consider the problem of distributedly estimating Gaussian processes in
multi-agent frameworks. Each agent collects few measurements and aims to
collaboratively reconstruct a common estimate based on all data. Agents are
assumed to have limited computational and communication capabilities and to
gather M noisy measurements in total, on input locations independently drawn
from a known common probability density. The optimal solution would require
agents to exchange all the M input locations and measurements and then invert
an M × M matrix, a non-scalable task. Instead, we propose two suboptimal
approaches using the first E orthonormal eigenfunctions obtained from the
Karhunen-Loève (KL) expansion of the chosen kernel, where typically E ≪ M. The
benefits are that the computation and communication complexities scale with E
and not with M, and that computing the required statistics can be performed via
standard
average consensus algorithms. We obtain probabilistic non-asymptotic bounds
that allow the desired level of estimation accuracy to be determined a priori,
as well as new distributed strategies, relying on Stein's unbiased risk
estimate (SURE) paradigms, for tuning the regularization parameters; these
strategies apply to generic basis functions (thus not necessarily kernel
eigenfunctions) and can again be implemented via average consensus. The
proposed estimators and bounds are finally tested on both synthetic and real
field data.
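The finite-dimensional scheme can be sketched as follows: each agent compresses its local data into E × E sufficient statistics, the network averages them, and a small regularized system is solved. In this sketch, plain Fourier sine features stand in for the kernel's KL eigenfunctions, exact averaging stands in for the consensus iterations, and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

E = 7                 # number of basis functions; the paper uses the kernel's
                      # KL eigenfunctions, plain sine features stand in here
agents = 5
m_per_agent = 40      # few measurements per agent

def phi(x):
    # Evaluate the E basis functions at input locations x in [0, 1].
    k = np.arange(1, E + 1)
    return np.sqrt(2.0) * np.sin(np.pi * k[None, :] * x[:, None])

def f(x):
    # Unknown function the agents collaboratively reconstruct.
    return np.sin(2 * np.pi * x)

# Each agent compresses its local data into small sufficient statistics:
# an E x E matrix and a length-E vector, cheap to communicate.
stats_A, stats_b = [], []
for _ in range(agents):
    x = rng.random(m_per_agent)
    y = f(x) + 0.1 * rng.normal(size=m_per_agent)
    P = phi(x)
    stats_A.append(P.T @ P)
    stats_b.append(P.T @ y)

# Average consensus would drive every agent's copy to these network-wide
# averages; exact averaging stands in for it in this centralized sketch.
A = np.mean(stats_A, axis=0)
b = np.mean(stats_b, axis=0)

lam = 1e-2            # regularization parameter (tuned via SURE in the paper)
c = np.linalg.solve(A + lam * np.eye(E), b)

xs = np.linspace(0.05, 0.95, 50)
err = np.max(np.abs(phi(xs) @ c - f(xs)))
print(err)
```

Note the cost structure this buys: each agent communicates O(E²) numbers and the final solve is E × E, independent of the total number of measurements M.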
Learning how to be robust: Deep polynomial regression
Polynomial regression is a recurrent problem with a large number of
applications. In computer vision it often appears in motion analysis. Whatever
the application, standard methods for regression of polynomial models tend to
deliver biased results when the input data is heavily contaminated by outliers.
Moreover, the problem is even harder when outliers have strong structure.
Departing from problem-tailored heuristics for robust estimation of parametric
models, we explore deep convolutional neural networks. Our work aims to find a
generic approach for training deep regression models without the explicit need
of supervised annotation. We bypass the need for a tailored loss function on
the regression parameters by attaching to our model a differentiable hard-wired
decoder corresponding to the polynomial operation at hand. We demonstrate the
value of our findings by comparing with standard robust regression methods.
Furthermore, we demonstrate how to use such models for a real computer vision
problem, i.e., video stabilization. The qualitative and quantitative
experiments show that neural networks are able to learn robustness for general
polynomial regression, with results that clearly surpass those of traditional
robust estimation methods.

Comment: 18 pages, conference
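The hard-wired decoder idea — putting the loss on the decoded polynomial values rather than on the coefficients themselves, combined with a robust penalty — can be sketched without a deep network at all. Below, a Huber loss and plain gradient descent stand in for the paper's CNN and training pipeline; the data, contamination pattern, and hyperparameters are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Quadratic data where 30% of points are structured (one-sided) outliers.
theta_true = np.array([1.0, -2.0, 0.5])            # coefficients [c0, c1, c2]
x = np.linspace(-1.0, 1.0, 100)
y = np.polyval(theta_true[::-1], x) + 0.05 * rng.normal(size=x.size)
outlier = rng.random(x.size) < 0.3
y[outlier] += 3.0

# Hard-wired decoder: coefficients -> polynomial values, here V @ theta.
V = np.vander(x, 3, increasing=True)

def huber_grad(r, delta=0.1):
    # Gradient of the Huber loss w.r.t. the residuals r = V @ theta - y:
    # quadratic near zero, linear (bounded influence) beyond delta.
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

# Gradient descent on the decoded outputs; no supervision on theta is needed.
theta = np.zeros(3)
lr = 0.05
for _ in range(5000):
    r = V @ theta - y
    theta -= lr * (V.T @ huber_grad(r)) / x.size

# Ordinary least-squares baseline, heavily biased by the outliers.
theta_ls = np.linalg.lstsq(V, y, rcond=None)[0]
print(theta, theta_ls)
```

The decoder is differentiable and parameter-free, so the gradient of the output-space loss propagates straight back to the coefficients — the same property that lets a network placed in front of it be trained without coefficient annotations.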
Longitudinal variable selection by cross-validation in the case of many covariates
Longitudinal models are commonly used for studying data collected on individuals repeatedly through time. While there are now a variety of such models available (Marginal Models, Mixed Effects Models, etc.), far fewer options appear to exist for the closely related issue of variable selection. In addition, longitudinal data typically derive from medical or other large-scale studies where often large numbers of potential explanatory variables and hence even larger numbers of candidate models must be considered. Cross-validation is a popular method for variable selection based on the predictive ability of the model. Here, we propose a cross-validation Markov Chain Monte Carlo procedure as a general variable selection tool which avoids the need to visit all candidate models. Inclusion of a “one-standard error” rule provides users with a collection of good models as is often desired. We demonstrate the effectiveness of our procedure both in a simulation setting and in a real application.
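A minimal sketch of such a procedure, assuming ordinary linear regression in place of a longitudinal model and omitting the one-standard-error rule: a Metropolis walk over inclusion indicators, scored by cross-validated prediction error, so that only the models actually visited are ever fit. The data, temperature, and chain length are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: 10 candidate covariates, only the first three matter.
n, p = 120, 10
X = rng.normal(size=(n, p))
y = X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2] + 0.3 * rng.normal(size=n)

def cv_score(mask, folds=5):
    # 5-fold cross-validated mean squared prediction error of the submodel.
    if not mask.any():
        return np.mean((y - y.mean()) ** 2)
    idx = np.arange(n) % folds
    err = 0.0
    for f in range(folds):
        tr, te = idx != f, idx == f
        beta = np.linalg.lstsq(X[tr][:, mask], y[tr], rcond=None)[0]
        err += np.sum((y[te] - X[te][:, mask] @ beta) ** 2)
    return err / n

# Metropolis walk over inclusion indicators: flip one covariate at a time,
# so only visited candidate models are scored, never all 2**p of them.
mask = np.zeros(p, dtype=bool)
score = cv_score(mask)
best_mask, best_score = mask.copy(), score
temp = 0.05                     # acceptance temperature (invented value)
for _ in range(500):
    j = rng.integers(p)
    cand = mask.copy()
    cand[j] = ~cand[j]
    s = cv_score(cand)
    if s < score or rng.random() < np.exp((score - s) / temp):
        mask, score = cand, s
        if s < best_score:
            best_mask, best_score = cand.copy(), s

print(np.where(best_mask)[0], best_score)
```

Keeping every visited mask and its score would also support the one-standard-error rule: report the collection of models whose CV score falls within one standard error of the best, rather than the single winner.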