Nonlinear Gaussian Filtering: Theory, Algorithms, and Applications
By restricting to Gaussian distributions, the optimal Bayesian filtering problem can be transformed into an algebraically simple form, which allows for computationally efficient algorithms. Three problem settings are discussed in this thesis: (1) filtering with Gaussians only, (2) Gaussian mixture filtering for strong nonlinearities, and (3) Gaussian process filtering for purely data-driven scenarios. For each setting, efficient algorithms are derived and applied to real-world problems.
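The "algebraically simple form" of the Gaussian-restricted case can be illustrated with a minimal sketch of a 1-D linear-Gaussian (Kalman) filter step; the model constants here are arbitrary placeholders, not taken from the thesis:

```python
def kalman_step(mean, var, z, a=1.0, q=0.1, h=1.0, r=0.5):
    """One predict/update cycle of a 1-D linear-Gaussian (Kalman) filter.

    State model:       x_t = a * x_{t-1} + w,  w ~ N(0, q)
    Observation model: z_t = h * x_t     + v,  v ~ N(0, r)
    The Gaussian belief N(mean, var) stays Gaussian in closed form.
    """
    # Predict: push the Gaussian belief through the linear dynamics.
    mean_p = a * mean
    var_p = a * a * var + q
    # Update: condition on the observation z (closed-form Bayes rule).
    k = var_p * h / (h * h * var_p + r)   # Kalman gain
    mean_u = mean_p + k * (z - h * mean_p)
    var_u = (1.0 - k * h) * var_p
    return mean_u, var_u
```

Because every step maps a Gaussian to a Gaussian, only a mean and a variance need to be stored and updated, which is what makes the Gaussian restriction computationally attractive.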
GP-SUM. Gaussian Processes Filtering of non-Gaussian Beliefs
This work studies the problem of stochastic dynamic filtering and state
propagation with complex beliefs. The main contribution is GP-SUM, a filtering
algorithm tailored to dynamic systems and observation models expressed as
Gaussian Processes (GP), and to states represented as a weighted sum of
Gaussians. The key attribute of GP-SUM is that it does not rely on
linearizations of the dynamic or observation models, or on unimodal Gaussian
approximations of the belief, and hence enables tracking complex state
distributions. The algorithm can be seen as a combination of a sampling-based
filter with a probabilistic Bayes filter. On the one hand, GP-SUM operates by
sampling the state distribution and propagating each sample through the dynamic
system and observation models. On the other hand, it achieves effective
sampling and accurate probabilistic propagation by relying on the GP form of
the system, and the sum-of-Gaussian form of the belief. We show that GP-SUM
outperforms several GP-Bayes and Particle Filters on a standard benchmark. We
also demonstrate its use in a pushing task, predicting with experimental
accuracy the naturally occurring non-Gaussian distributions. (Comment: WAFR 2018, 16 pages, 7 figures.)
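The propagation step described above can be sketched in plain NumPy: a toy 1-D GP models the dynamics, and the sum-of-Gaussians belief is advanced by sampling each component and letting the GP predictive distribution supply the new Gaussian. The kernel, data, and single-sample scheme are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential kernel between 1-D point sets a, b."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(X, y, x_star, noise=1e-2):
    """GP predictive mean and variance at x_star, given data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    k_s = rbf(X, x_star)
    mu = k_s.T @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, k_s)
    var = np.diag(rbf(x_star, x_star)) - np.sum(k_s * v, axis=0) + noise
    return mu, var

def gp_sum_step(weights, means, variances, X, y, rng):
    """One GP-SUM-style propagation: sample each mixture component,
    push the sample through the GP dynamics, and collect the resulting
    predictive Gaussians as the new sum-of-Gaussians belief."""
    xs = rng.normal(means, np.sqrt(variances))  # one sample per component
    mu, var = gp_predict(X, y, xs)              # GP returns a Gaussian per sample
    return weights.copy(), mu, var              # weights unchanged (no observation here)
```

An observation step would then reweight each component by its observation likelihood and renormalize, which is how the mixture stays a valid belief without any linearization or unimodal collapse.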
Unscented Bayesian Optimization for Safe Robot Grasping
We address the robot grasp optimization problem of unknown objects
considering uncertainty in the input space. Grasping unknown objects can be
achieved by using a trial and error exploration strategy. Bayesian optimization
is a sample-efficient optimization algorithm that is especially suitable for
this setup, as it actively reduces the number of trials needed to learn about
the function to optimize. In fact, this active object exploration is the same
strategy that infants use to learn optimal grasps. One problem that arises while
learning grasping policies is that some configurations of grasp parameters may
be very sensitive to error in the relative pose between the object and robot
end-effector. We call these configurations unsafe because small errors during
grasp execution may turn good grasps into bad grasps. Therefore, to reduce the
risk of grasp failure, grasps should be planned in safe areas. We propose a new
algorithm, Unscented Bayesian Optimization, which performs sample-efficient
optimization while taking input noise into account to find safe optima. The
contribution of Unscented Bayesian Optimization is twofold: it provides a new
decision process that drives exploration toward safe regions, and a new
selection procedure that chooses the optimum in terms of its safety, without
extra analysis or computational cost. Both contributions are rooted in the
strong theory behind the unscented transformation, a popular nonlinear
approximation method. We show its advantages with respect to the classical
Bayesian optimization both in synthetic problems and in realistic robot grasp
simulations. The results highlight that our method achieves optimal and robust
grasping policies after a few trials, while the selected grasps remain in safe
regions. (Comment: conference paper.)
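The unscented transformation underlying both contributions can be sketched as follows: instead of scoring a candidate grasp only at the queried point, the objective is evaluated at deterministically chosen sigma points around it and averaged, so narrow optima that are sensitive to input noise score worse than wide, robust ones. This 1-D sketch uses standard Julier-style sigma points; the objective functions are stand-ins, not the paper's surrogate:

```python
import numpy as np

def unscented_expectation(f, mean, var, kappa=2.0):
    """Approximate E[f(x)] for x ~ N(mean, var) via sigma points (1-D).

    The sigma points probe f a fixed distance around `mean`, so a
    narrow 'unsafe' optimum (good at the mean, bad nearby) is
    penalized relative to a wide, input-noise-robust one.
    """
    n = 1  # dimension of the 1-D sketch
    spread = np.sqrt((n + kappa) * var)
    points = np.array([mean, mean + spread, mean - spread])
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2.0 * (n + kappa))
    weights = np.array([w0, wi, wi])
    return float(np.dot(weights, [f(p) for p in points]))
```

For example, a narrow spike and a broad bump that are equally good at the query point are ranked differently once input noise is considered, which is exactly the behavior that keeps the selected grasps in safe regions.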