Comparison of Gaussian process modeling software
Gaussian process fitting, or kriging, is often used to create a model from a
set of data. Many available software packages do this, but we show that very
different results can be obtained from different packages even when using the
same data and model. We describe the parameterization, features, and
optimization used by eight different fitting packages that run on four
different platforms. We then compare these eight packages using various data
functions and data sets, revealing that there are stark differences between the
packages. In addition to comparing prediction accuracy, we also evaluate the
predictive variance, which quantifies the precision of predictions and is
often used in stopping criteria.
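Both quantities the comparison evaluates, the predictive mean and the predictive variance, fall out of the same kernel solve. The following pure-Python sketch of zero-mean GP (kriging) prediction with a squared-exponential kernel is illustrative only; the length-scale, variance, and nugget values are arbitrary choices, not the parameterization of any package discussed above.

```python
import math

def rbf(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return variance * math.exp(-0.5 * ((x1 - x2) / lengthscale) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_star, noise=1e-6):
    """Predictive mean and variance of a zero-mean GP at x_star."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    k_star = [rbf(x, x_star) for x in xs]
    alpha = solve(K, ys)                      # K^-1 y
    mean = sum(k * a for k, a in zip(k_star, alpha))
    w = solve(K, k_star)                      # K^-1 k*
    var = rbf(x_star, x_star) - sum(k * v for k, v in zip(k_star, w))
    return mean, var
```

At a training input the variance collapses toward the nugget, while far from the data it reverts to the prior variance; that behaviour is exactly the quantity whose cross-package differences are evaluated above.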
Edge and Line Feature Extraction Based on Covariance Models
Image segmentation based on contour extraction usually involves three stages of image operations: feature extraction, edge detection, and edge linking. This paper is devoted to the first stage: a method to design feature extractors used to detect edges in noisy and/or blurred images. The method relies on a model that describes the existence of image discontinuities (e.g. edges) in terms of covariance functions. The feature extractor transforms the input image into a “log-likelihood ratio” image. Such an image is a good starting point for the edge detection stage, since it represents a balanced trade-off between signal-to-noise ratio and the ability to resolve detailed structures. For 1-D signals, the performance of the edge detector based on this feature extractor is quantitatively assessed by the so-called “average risk measure”. The results are compared with the performance of 1-D edge detectors known from the literature. Generalizations to 2-D operators are given. Applications to real-world images are presented, showing the capability of the covariance model to build edge and line feature extractors. Finally, it is shown that the covariance model can be coupled to an MRF model of edge configurations so as to arrive at a maximum a posteriori estimate of the edges or lines in the image.
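The covariance-model idea can be illustrated on a 1-D signal: describe a sliding window under two zero-mean Gaussian hypotheses, one whose covariance stays correlated across the whole window (no edge) and one that decorrelates the two halves (edge), and emit the log-likelihood ratio at each position. The AR(1)-style covariance matrices below are simple stand-ins invented for this sketch, not the designed models of the paper.

```python
import math

def cholesky(A):
    """Lower-triangular Cholesky factor of a small SPD matrix."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

def log_likelihood(x, C):
    """Log density of x under N(0, C), dropping the 2*pi constant."""
    L = cholesky(C)
    n = len(x)
    # Forward then backward substitution gives a = C^-1 x.
    y = [0.0] * n
    for i in range(n):
        y[i] = (x[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    a = [0.0] * n
    for i in range(n - 1, -1, -1):
        a[i] = (y[i] - sum(L[k][i] * a[k] for k in range(i + 1, n))) / L[i][i]
    logdet = 2.0 * sum(math.log(L[i][i]) for i in range(n))
    return -0.5 * (sum(x[i] * a[i] for i in range(n)) + logdet)

# Two illustrative covariance hypotheses over a 4-sample window:
# "flat" keeps neighbouring samples correlated across the whole window,
# "edge" decorrelates the two halves (a discontinuity in the middle).
RHO = 0.9
C_FLAT = [[RHO ** abs(i - j) for j in range(4)] for i in range(4)]
C_EDGE = [[RHO ** abs(i - j) if (i < 2) == (j < 2) else 0.0 for j in range(4)]
          for i in range(4)]

def llr_signal(signal):
    """Slide the 4-sample window; output log-likelihood ratio edge vs. flat."""
    return [log_likelihood(signal[i:i + 4], C_EDGE)
            - log_likelihood(signal[i:i + 4], C_FLAT)
            for i in range(len(signal) - 3)]
```

On a step signal the ratio peaks at the discontinuity and stays negative on constant stretches, giving the kind of detection statistic the edge-detection stage then thresholds.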
Bayesian Optimization Using Domain Knowledge on the ATRIAS Biped
Controllers in robotics often consist of expert-designed heuristics, which
can be hard to tune in higher dimensions. It is typical to use simulation to
learn these parameters, but controllers learned in simulation often don't
transfer to hardware. This necessitates optimization directly on hardware.
However, collecting data on hardware can be expensive. This has led to a recent
interest in adapting data-efficient learning techniques to robotics. One
popular method is Bayesian Optimization (BO), a sample-efficient black-box
optimization scheme, but its performance typically degrades in higher
dimensions. We aim to overcome this problem by incorporating domain knowledge
to reduce dimensionality in a meaningful way, with a focus on bipedal
locomotion. In previous work, we proposed a transformation based on knowledge
of human walking that projected a 16-dimensional controller to a 1-dimensional
space. In simulation, this showed enhanced sample efficiency when optimizing
human-inspired neuromuscular walking controllers on a humanoid model. In this
paper, we present a generalized feature transform applicable to non-humanoid
robot morphologies and evaluate it on the ATRIAS bipedal robot -- in simulation
and on hardware. We present three different walking controllers; two are
evaluated on the real robot. Our results show that this feature transform
captures important aspects of walking and accelerates learning on hardware and
simulation, as compared to traditional BO.
Comment: 8 pages, submitted to IEEE International Conference on Robotics and Automation 201
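The dimensionality-reducing idea can be caricatured in a few lines: a single coordinate blends two anchor controllers into a 16-D parameter vector, and a small UCB-style Bayesian optimization loop searches only that coordinate. Everything here, the anchors, the quadratic stand-in cost, and the kernel settings, is invented for illustration and is not the paper's transform or objective.

```python
import math

def solve(A, b):
    """Gaussian elimination for the small GP linear systems below."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf(a, b, ell=0.3):
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def gp_posterior(xs, ys, x, noise=1e-6):
    """Posterior mean and variance of a zero-mean, unit-variance GP at x."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    ks = [rbf(xi, x) for xi in xs]
    alpha = solve(K, ys)
    mean = sum(k * a for k, a in zip(ks, alpha))
    w = solve(K, ks)
    var = max(1e-12, 1.0 - sum(k * v for k, v in zip(ks, w)))
    return mean, var

def expand(z):
    """Hypothetical 1-D -> 16-D transform: blend two anchor controllers."""
    slow, fast = [0.2] * 16, [0.8] * 16
    return [(1 - z) * s + z * f for s, f in zip(slow, fast)]

def walking_cost(params):
    """Stand-in objective; a real cost would come from robot rollouts."""
    return sum((p - 0.5) ** 2 for p in params)

def bo_1d(n_iters=5):
    """UCB Bayesian optimization over the 1-D feature, not the 16-D space."""
    grid = [i / 100 for i in range(101)]
    xs = [0.0, 1.0]
    ys = [-walking_cost(expand(z)) for z in xs]   # maximize reward = -cost
    for _ in range(n_iters):
        def ucb(z):
            m, v = gp_posterior(xs, ys, z)
            return m + 2.0 * math.sqrt(v)
        z_next = max(grid, key=ucb)
        xs.append(z_next)
        ys.append(-walking_cost(expand(z_next)))
    return xs[max(range(len(ys)), key=lambda i: ys[i])]
```

The point of the projection is sample efficiency: each loop iteration stands in for one expensive hardware rollout, and the surrogate only ever has to model a 1-D function.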
Sequential Design with Mutual Information for Computer Experiments (MICE): Emulation of a Tsunami Model
Computer simulators can be computationally intensive to run over a large
number of input values, as required for optimization and various uncertainty
quantification tasks. The standard paradigm for the design and analysis of
computer experiments is to employ Gaussian random fields to model computer
simulators. Gaussian process models are trained on input-output data obtained
from simulation runs at various input values. Following this approach, we
propose a sequential design algorithm, MICE (Mutual Information for Computer
Experiments), that adaptively selects the input values at which to run the
computer simulator, in order to maximize the expected information gain (mutual
information) over the input space. The superior computational efficiency of the
MICE algorithm relative to other algorithms is demonstrated on test functions
and on a tsunami simulator, where it achieves overall gains of up to 20%.
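A minimal sketch of sequential design: each new simulator run is placed at the candidate input with the largest GP predictive variance. This maximum-variance rule is a crude proxy for, not an implementation of, the mutual-information criterion MICE uses, and the kernel settings are arbitrary.

```python
import math

def solve(A, b):
    """Gaussian elimination for the small kernel systems below."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf(a, b, ell=0.2):
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def predictive_var(design, x, noise=1e-6):
    """GP predictive variance at x given the current design (no outputs needed)."""
    n = len(design)
    if n == 0:
        return 1.0
    K = [[rbf(design[i], design[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    ks = [rbf(d, x) for d in design]
    w = solve(K, ks)
    return 1.0 - sum(k * v for k, v in zip(ks, w))

def sequential_design(candidates, n_runs):
    """Greedily place each simulator run where predictive variance is largest."""
    design = []
    for _ in range(n_runs):
        design.append(max(candidates, key=lambda x: predictive_var(design, x)))
    return design
```

Because the GP variance (with fixed hyperparameters) does not depend on the observed outputs, this proxy produces a space-filling schedule; the mutual-information criterion goes further by weighing how much each candidate run would tell us about the rest of the input space.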