Unscented Bayesian Optimization for Safe Robot Grasping
We address the robot grasp optimization problem of unknown objects
considering uncertainty in the input space. Grasping unknown objects can be
achieved by using a trial and error exploration strategy. Bayesian optimization
is a sample-efficient optimization algorithm that is especially suitable for
these setups, as it actively reduces the number of trials needed to learn about
the function to optimize. In fact, this active object exploration is the same
strategy that infants use to learn optimal grasps. One problem that arises while
learning grasping policies is that some configurations of grasp parameters may
be very sensitive to error in the relative pose between the object and robot
end-effector. We call these configurations unsafe because small errors during
grasp execution may turn good grasps into bad grasps. Therefore, to reduce the
risk of grasp failure, grasps should be planned in safe areas. We propose a new
algorithm, Unscented Bayesian optimization, that is able to perform sample-efficient
optimization while taking input noise into consideration to find safe
optima. The contribution of Unscented Bayesian optimization is twofold, as it
provides a new decision process that drives exploration to safe regions and a
new selection procedure that chooses the optimum in terms of safety without
extra analysis or computational cost. Both contributions are rooted in the
strong theory behind the unscented transformation, a popular nonlinear
approximation method. We show its advantages with respect to the classical
Bayesian optimization both in synthetic problems and in realistic robot grasp
simulations. The results highlight that our method achieves optimal and robust
grasping policies after a few trials, while the selected grasps remain in safe
regions.

Comment: conference paper
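The core mechanism described above, evaluating a candidate through the unscented transformation, can be sketched in a few lines: deterministic sigma points around a query are propagated through the objective, and their weighted average approximates the expected outcome under input noise. This is an illustrative sketch, not the paper's implementation; the function names and toy objective below are made up.

```python
import numpy as np

def sigma_points(x, cov, kappa=1.0):
    """2n+1 deterministic sigma points for query x with input covariance cov."""
    n = len(x)
    L = np.linalg.cholesky((n + kappa) * cov)  # scaled matrix square root
    pts = [x] + [x + L[:, i] for i in range(n)] + [x - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def unscented_outcome(f, x, cov, kappa=1.0):
    """Approximate E[f(x + e)] with e ~ N(0, cov) via the unscented transform."""
    pts, w = sigma_points(np.asarray(x, dtype=float), cov, kappa)
    return float(w @ np.array([f(p) for p in pts]))
```

Scoring candidate grasps by `unscented_outcome` rather than by `f` itself penalizes narrow optima that sit next to failure regions, which is the intuition behind preferring safe, broad optima.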
Reliability analysis for data-driven noisy models using active learning
Reliability analysis aims at estimating the failure probability of an
engineering system. It often requires multiple runs of a limit-state function,
which usually relies on computationally intensive simulations. Traditionally,
these simulations have been considered deterministic, i.e., running them
multiple times for a given set of input parameters always produces the same
output. However, this assumption does not always hold, as many studies in the
literature report non-deterministic computational simulations (also known as
noisy models). In such cases, running the simulations multiple times with the
same input will result in different outputs. Similarly, data-driven models that
rely on real-world data may also be affected by noise. This characteristic
poses a challenge when performing reliability analysis, as many classical
methods, such as FORM and SORM, are tailored to deterministic models. To bridge
this gap, this paper provides a novel methodology to perform reliability
analysis on models contaminated by noise. In such cases, noise introduces
latent uncertainty into the reliability estimator, leading to an incorrect
estimation of the real underlying reliability index, even when using Monte
Carlo simulation. To overcome this challenge, we propose the use of denoising
regression-based surrogate models within an active learning reliability
analysis framework. Specifically, we combine Gaussian process regression with a
noise-aware learning function to efficiently estimate the probability of
failure of the underlying noise-free model. We showcase the effectiveness of
this methodology on standard benchmark functions and a finite element model of
a realistic structural frame.
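As a rough illustration of the denoising idea (not the paper's implementation), a Gaussian process with an explicit noise term can be fit to noisy limit-state evaluations; its posterior mean then acts as a denoised surrogate on which the failure probability is estimated by Monte Carlo. The sine limit state, noise level, and hyperparameters below are made-up toy values.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel matrix between two point sets."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_mean(X, y, Xs, ls=1.0, noise_var=0.09):
    """GP posterior mean; the noise term absorbs observation noise,
    so the mean estimates the underlying noise-free limit state."""
    K = rbf(X, X, ls) + noise_var * np.eye(len(X))
    return rbf(Xs, X, ls) @ np.linalg.solve(K, y)

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(80, 1))
g = np.sin(X).ravel()                       # hypothetical noise-free limit state
y = g + rng.normal(0.0, 0.3, size=80)       # noisy simulator outputs

mu = gp_mean(X, y, X)                       # denoised predictions at the data
Xmc = rng.normal(0.0, 1.5, size=(5000, 1))  # Monte Carlo input samples
pf = np.mean(gp_mean(X, y, Xmc) <= 0.0)     # failure defined as g(x) <= 0
```

Running Monte Carlo directly on the noisy outputs would bias the failure-probability estimate; running it on the denoised posterior mean targets the noise-free model, which is the point the abstract makes.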
Practical Bayesian optimization in the presence of outliers
Inference in the presence of outliers is an important field of research as
outliers are ubiquitous and may arise across a variety of problems and domains.
Bayesian optimization is a method that relies heavily on probabilistic inference.
This allows outstanding sample efficiency because the probabilistic machinery
provides a memory of the whole optimization process. However, that virtue
becomes a disadvantage when the memory is populated with outliers, inducing
bias in the estimation. In this paper, we present an empirical evaluation of
Bayesian optimization methods in the presence of outliers. The empirical
evidence shows that Bayesian optimization with robust regression often produces
suboptimal results. We then propose a new algorithm which combines robust
regression (a Gaussian process with Student-t likelihood) with outlier
diagnostics to classify data points as outliers or inliers. By using a
scheduler for the classification of outliers, our method is more efficient and
converges better than standard robust regression. Furthermore, we
show that even in controlled situations with no expected outliers, our method
is able to produce better results.

Comment: 10 pages (2 of references), 6 figures, 1 algorithm