Cautious NMPC with Gaussian Process Dynamics for Autonomous Miniature Race Cars
This paper presents an adaptive high-performance control method for
autonomous miniature race cars. Racing dynamics are notoriously hard to model
from first principles; we address this with a cautious nonlinear
model predictive control (NMPC) approach that learns to improve its dynamics
model from data and safely increases racing performance. The approach makes use
of a Gaussian Process (GP) and takes residual model uncertainty into account
through a chance constrained formulation. We present a sparse GP approximation
with dynamically adjusting inducing inputs, enabling a real-time implementable
controller. The formulation is demonstrated in simulations, which show
significant improvement with respect to both lap time and constraint
satisfaction compared to an NMPC without model learning.
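The chance-constrained mechanism the abstract alludes to can be made concrete: when the GP gives a Gaussian belief over a constraint value, the constraint is tightened by a quantile of the predictive standard deviation. A minimal sketch (the function and its signature are illustrative, not from the paper):

```python
from statistics import NormalDist

def tightened_constraint(h_mean, h_std, p=0.95):
    """Chance-constraint tightening under a Gaussian model belief.

    If the constrained quantity satisfies h ~ N(h_mean, h_std^2),
    then enforcing  h_mean + z_p * h_std <= 0  guarantees
    P(h <= 0) >= p, where z_p is the standard normal p-quantile.
    """
    z_p = NormalDist().inv_cdf(p)  # roughly 1.645 for p = 0.95
    return h_mean + z_p * h_std
```

Planned states whose tightened value stays non-positive satisfy the original constraint with probability at least p; larger GP uncertainty shrinks the feasible set, which is the "cautious" behaviour described above.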
Gaussian Process priors with uncertain inputs - Application to multiple-step ahead time series forecasting
We consider the problem of multi-step ahead prediction in time series analysis using the non-parametric Gaussian process model. k-step ahead forecasting of a discrete-time non-linear dynamic system can be performed by doing repeated one-step ahead predictions. For a state-space model of the form y_t = f(y_{t-1}, ..., y_{t-L}), the prediction of y at time t + k is based on the point estimates of the previous outputs. In this paper, we show how, using an analytical Gaussian approximation, we can formally incorporate the uncertainty about intermediate regressor values, thus updating the uncertainty on the current prediction.
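The naive baseline described in the abstract, repeated one-step-ahead prediction with point estimates fed back as regressors, can be sketched with a plain numpy GP regressor. The lag construction, kernel, and function names below are our own; the paper's contribution is precisely to propagate the uncertainty this sketch discards:

```python
import numpy as np

def rbf(A, B, ell=1.0, sf=1.0):
    # Squared-exponential kernel between row sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_kstep_forecast(series, L=2, k=3, noise=1e-4):
    """Naive k-step forecast: repeated one-step GP predictions,
    feeding point estimates back as regressors."""
    y = np.asarray(series, float)
    # Lagged training pairs: (y_{t-L}, ..., y_{t-1}) -> y_t
    X = np.stack([y[i:i + L] for i in range(len(y) - L)])
    t = y[L:]
    K = rbf(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, t)
    window = list(y[-L:])
    preds = []
    for _ in range(k):
        x_star = np.array(window[-L:])[None, :]
        mu = float(rbf(x_star, X) @ alpha)  # point estimate only
        preds.append(mu)
        window.append(mu)                   # uncertainty is ignored here
    return preds
```

Each iteration treats the previous prediction as if it were an observed output, which is exactly the approximation the analytical Gaussian treatment corrects.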
Propagation of Uncertainty in Bayesian Kernel Models - Application to Multiple-Step Ahead Forecasting
The object of Bayesian modelling is the predictive distribution, which in a forecasting scenario enables improved estimates of forecasted values and their uncertainties. In this paper we focus on reliably estimating the predictive mean and variance of forecasted values using Bayesian kernel-based models such as the Gaussian Process and the Relevance Vector Machine. We derive novel analytic expressions for the predictive mean and variance for Gaussian kernel shapes under the assumption of a Gaussian input distribution in the static case, and of a recursive Gaussian predictive density in iterative forecasting. The capability of the method is demonstrated for forecasting of time series and compared to approximate methods.
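The predictive moments under an uncertain input are mixture moments: mean = E[m(x)] and var = E[v(x) + m(x)^2] - mean^2, where m and v are the model's pointwise predictive mean and variance. The paper derives these analytically for Gaussian kernels; a Monte Carlo stand-in (our own illustrative code, not the paper's) makes the identity easy to check:

```python
import numpy as np

def propagate_input_uncertainty(m, v, u, s, n=100_000, seed=0):
    """Monte Carlo output moments for a predictor with pointwise
    mean m(x) and variance v(x), given a Gaussian input x ~ N(u, s^2):
        mean = E[m(x)],  var = E[v(x) + m(x)^2] - mean^2.
    """
    rng = np.random.default_rng(seed)
    xs = rng.normal(u, s, n)
    mx, vx = m(xs), v(xs)
    mean = mx.mean()
    var = (vx + mx**2).mean() - mean**2
    return mean, var
```

For a linear predictor m(x) = 2x with constant noise v(x) = 1 and x ~ N(1, 0.25), the exact answer is mean 2 and variance 1 + 4(0.25) = 2, which the sample estimates approach.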
Intrinsic Gaussian processes on complex constrained domains
We propose a class of intrinsic Gaussian processes (in-GPs) for
interpolation, regression and classification on manifolds with a primary focus
on complex constrained domains or irregular shaped spaces arising as subsets or
submanifolds of R, R^2, R^3 and beyond. For example, in-GPs can accommodate
spatial domains arising as complex subsets of Euclidean space. in-GPs respect
the potentially complex boundary or interior conditions as well as the
intrinsic geometry of the spaces. The key novelty of the proposed approach is
to utilise the relationship between heat kernels and the transition density of
Brownian motion on manifolds for constructing and approximating valid and
computationally feasible covariance kernels. This enables in-GPs to be
practically applied in great generality, while existing approaches for
smoothing on constrained domains are limited to simple special cases. The broad
utility of the in-GP approach is illustrated through simulation studies and
data examples.
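The heat-kernel/Brownian-motion link can be illustrated in the simplest constrained domain, an interval with reflecting boundaries: the transition density of reflected Brownian motion started at x, evaluated at y after time t, is the heat kernel that would serve as the covariance. A 1-D numpy sketch of ours, far simpler than the paper's manifold setting:

```python
import numpy as np

def heat_kernel_mc(x, y, t, n_paths=100_000, n_steps=100, bw=0.02, seed=0):
    """Monte Carlo estimate of the heat kernel p_t(x, y) on [0, 1]
    with reflecting boundaries, via reflected Brownian motion."""
    rng = np.random.default_rng(seed)
    pos = np.full(n_paths, float(x))
    for _ in range(n_steps):
        pos += rng.normal(0.0, np.sqrt(t / n_steps), n_paths)
        # Fold excursions back into [0, 1]: exact period-2 reflection map
        pos = pos % 2.0
        pos = np.where(pos > 1.0, 2.0 - pos, pos)
    # Gaussian kernel density estimate of the endpoint density at y
    w = np.exp(-0.5 * ((pos - y) / bw) ** 2) / (bw * np.sqrt(2.0 * np.pi))
    return w.mean()
```

Evaluating this on a grid of (x, y) pairs yields a symmetric positive semi-definite covariance that respects the boundary, the property the in-GP construction generalizes to irregular domains in R^2 and R^3.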
Nonparametric Bayesian Mixed-effect Model: a Sparse Gaussian Process Approach
Multi-task learning models using Gaussian processes (GP) have been developed
and successfully applied in various applications. The main difficulty with this
approach is the computational cost of inference using the union of examples
from all tasks. Therefore, sparse solutions that avoid using the entire data
directly and instead use a set of informative "representatives" are desirable.
The paper investigates this problem for the grouped mixed-effect GP model where
each individual response is given by a fixed-effect, taken from one of a set of
unknown groups, plus a random individual effect function that captures
variations among individuals. Such models have been widely used in previous
work but no sparse solutions have been developed. The paper presents the first
sparse solution for such problems, showing how the sparse approximation can be
obtained by maximizing a variational lower bound on the marginal likelihood,
generalizing ideas from single-task Gaussian processes to handle the
mixed-effect model as well as grouping. Experiments using artificial and real
data validate the approach showing that it can recover the performance of
inference with the full sample, that it outperforms baseline methods, and that
it outperforms state-of-the-art sparse solutions for other multi-task GP
formulations.
Comment: Preliminary version appeared in ECML201
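For the single-task building block, the variational lower bound in question is the familiar sparse-GP (Titsias-style) bound: the exact Gaussian log marginal likelihood with the Nystrom approximation Q_nn substituted for K_nn, minus a trace penalty. A numpy sketch under our own simplifying choices (1-D inputs, fixed RBF kernel, fixed noise):

```python
import numpy as np

def rbf(a, b, ell=0.5):
    # Squared-exponential kernel for 1-D input arrays a, b
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def variational_lower_bound(X, y, Z, sigma2=0.1):
    """Sparse-GP evidence lower bound with inducing inputs Z,
    the single-task quantity the paper generalizes to the grouped
    mixed-effect setting."""
    n = len(X)
    Knm = rbf(X, Z)
    Kmm = rbf(Z, Z) + 1e-8 * np.eye(len(Z))       # jitter for stability
    Qnn = Knm @ np.linalg.solve(Kmm, Knm.T)       # Nystrom approximation
    S = Qnn + sigma2 * np.eye(n)
    _, logdet = np.linalg.slogdet(S)
    quad = y @ np.linalg.solve(S, y)
    gauss = -0.5 * (logdet + quad + n * np.log(2.0 * np.pi))
    # Trace term penalizes variance the inducing points fail to explain
    return gauss - np.trace(rbf(X, X) - Qnn) / (2.0 * sigma2)
```

With Z = X the bound recovers the exact log marginal likelihood; maximizing it over Z (and hyperparameters) yields the sparse approximation, and the bound with fewer inducing points never exceeds the exact value.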
Probabilistic movement modeling for intention inference in human-robot interaction.
Intention inference can be an essential step toward efficient human-robot interaction. For this purpose, we propose the Intention-Driven Dynamics Model (IDDM) to probabilistically model the generative process of movements that are directed by the intention. The IDDM makes it possible to infer the intention from observed movements using Bayes' theorem. The IDDM simultaneously finds a latent state representation of noisy and high-dimensional observations, and models the intention-driven dynamics in the latent states. As most robotics applications are subject to real-time constraints, we develop an efficient online algorithm that allows for real-time intention inference. Two human-robot interaction scenarios, i.e., target prediction for robot table tennis and action recognition for interactive humanoid robots, are used to evaluate the performance of our inference algorithm. In both intention inference tasks, the proposed algorithm achieves substantial improvements over support vector machines and Gaussian processes.
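The Bayes'-theorem step at the core of this inference is simple once per-intention movement likelihoods are available: p(g | movement) is proportional to p(movement | g) p(g) over the discrete set of candidate intentions g. A sketch (the likelihood values are placeholders, not outputs of the paper's movement model):

```python
import numpy as np

def infer_intention(log_liks, prior):
    """Posterior over discrete intentions via Bayes' theorem,
    computed in log space for numerical stability."""
    log_post = np.asarray(log_liks) + np.log(np.asarray(prior))
    log_post -= log_post.max()   # shift before exponentiating
    post = np.exp(log_post)
    return post / post.sum()
```

In the online setting, the log-likelihoods would be updated as each new movement frame arrives, so the posterior sharpens over time.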
Disentangling and Operationalizing AI Fairness at LinkedIn
Operationalizing AI fairness at LinkedIn's scale is challenging not only
because there are multiple mutually incompatible definitions of fairness but
also because determining what is fair depends on the specifics and context of
the product where AI is deployed. Moreover, AI practitioners need clarity on
which fairness expectations need to be addressed at the AI level. In this paper,
we present the evolving AI fairness framework used at LinkedIn to address these
three challenges. The framework disentangles AI fairness by separating out
equal treatment and equitable product expectations. Rather than imposing a
trade-off between these two commonly opposing interpretations of fairness, the
framework provides clear guidelines for operationalizing equal AI treatment
complemented with a product equity strategy. This paper focuses on the equal AI
treatment component of LinkedIn's AI fairness framework, shares the principles
that support it, and illustrates their application through a case study. We
hope this paper will encourage other big tech companies to join us in sharing
their approach to operationalizing AI fairness at scale, so that together we
can keep advancing this constantly evolving field.