State-Space Inference and Learning with Gaussian Processes
State-space inference and learning with Gaussian processes (GPs) is an unsolved problem. We propose a new, general methodology for inference and learning in nonlinear state-space models that are described probabilistically by non-parametric GP models. We apply the expectation maximization algorithm to iterate between inference in the latent state-space and learning the parameters of the underlying GP dynamics model. Copyright 2010 by the authors
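The core building block of such a model is GP regression on observed state transitions. As an illustrative sketch (not the authors' implementation), the minimal NumPy example below fits a GP with a squared-exponential kernel to one-step transition data from a toy dynamics map; all function names and the toy dynamics are assumptions for this sketch:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X, y, Xs, noise=1e-2):
    """GP posterior mean and marginal variance at test inputs Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xs, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = rbf_kernel(Xs, Xs) - v.T @ v
    return mean, np.diag(var)

# Toy nonlinear dynamics x_{t+1} = sin(x_t): learn the transition from data.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(x[:, 0]) + 0.05 * rng.standard_normal(30)
xs = np.linspace(-3, 3, 50)[:, None]
mean, var = gp_predict(x, y, xs)
print(np.max(np.abs(mean - np.sin(xs[:, 0]))))  # small for this smooth map
```

In the full method, this regression step would sit inside an EM loop: the E-step infers the latent state trajectory, and the M-step refits the GP transition model to the inferred states.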
Stochastic MPC Design for a Two-Component Granulation Process
We address the control of a stochastic two-component granulation process in
pharmaceutical applications using Stochastic Model Predictive Control (SMPC)
and model reduction to obtain the desired particle distribution. We first use
the method of moments to reduce the governing integro-differential equation to
a nonlinear ordinary differential equation (ODE). This reduced-order model is
employed in the SMPC formulation.
The probabilistic constraints in this formulation keep the variance of
particles' drug concentration in an admissible range. To solve the resulting
stochastic optimization problem, we first employ polynomial chaos expansion to
obtain the Probability Distribution Function (PDF) of the future state
variables using the uncertain variables' distributions. As a result, the
original stochastic optimization problem for a particulate system is converted
to a deterministic dynamic optimization. This approximation lessens the
computational burden of the controller and makes its real-time application
possible.

Comment: American Control Conference, May, 201
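As a toy illustration of the polynomial chaos idea (a sketch under simplifying assumptions, not the paper's granulation model), the example below propagates a Gaussian uncertain parameter through a nonlinear map using a Hermite expansion, recovering the mean and variance by quadrature rather than by Monte Carlo sampling. The function name `pce_moments` and the scalar test map are assumptions for this sketch:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial

def pce_moments(f, order=4, n_quad=8):
    """Mean and variance of f(xi), xi ~ N(0,1), via a truncated Hermite PCE.

    Coefficients c_k = E[f(xi) He_k(xi)] / k! are computed with
    Gauss-Hermite quadrature (probabilists' convention)."""
    nodes, weights = hermegauss(n_quad)   # weight function exp(-x^2/2)
    weights = weights / weights.sum()     # normalize to a probability measure
    fvals = f(nodes)
    coeffs = []
    for k in range(order + 1):
        basis = hermeval(nodes, [0] * k + [1])  # He_k evaluated at the nodes
        coeffs.append(np.sum(weights * fvals * basis) / factorial(k))
    mean = coeffs[0]
    var = sum(factorial(k) * coeffs[k]**2 for k in range(1, order + 1))
    return mean, var

# Uncertain growth rate in a one-step model x1 = exp(a * dt), a ~ N(0, 1).
mean, var = pce_moments(lambda a: np.exp(0.1 * a))
print(mean, var)  # analytic: exp(0.005), exp(0.01) * (exp(0.01) - 1)
```

Once the state's PCE coefficients are available, probabilistic constraints on the variance can be evaluated deterministically from them, which is what turns the stochastic optimization into a deterministic one.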
Inverse Problems and Data Assimilation
These notes are designed with the aim of providing a clear and concise
introduction to the subjects of Inverse Problems and Data Assimilation, and
their inter-relations, together with citations to some relevant literature in
this area. The first half of the notes is dedicated to studying the Bayesian
framework for inverse problems. Techniques such as importance sampling and
Markov Chain Monte Carlo (MCMC) methods are introduced; these methods have the
desirable property that in the limit of an infinite number of samples they
reproduce the full posterior distribution. Since these methods are often
computationally intensive to implement, especially in high-dimensional
problems, approximate techniques, such as replacing the posterior by a Dirac
or a Gaussian distribution, are discussed. The second half of the notes covers data
assimilation. This refers to a particular class of inverse problems in which
the unknown parameter is the initial condition of a dynamical system, and in
the stochastic dynamics case the subsequent states of the system, and the data
comprises partial and noisy observations of that (possibly stochastic)
dynamical system. We will also demonstrate that methods developed in data
assimilation may be employed to study generic inverse problems, by introducing
an artificial time to generate a sequence of probability measures interpolating
from the prior to the posterior.
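The MCMC approach described above can be sketched on a scalar toy inverse problem. The random-walk Metropolis sampler below targets the posterior of a parameter u given one noisy observation of a forward map G(u) = u**3 under a standard Gaussian prior; the forward map, noise level, and all names are illustrative assumptions, not taken from the notes:

```python
import numpy as np

def metropolis(log_post, u0, n_steps=5000, step=0.2, seed=0):
    """Random-walk Metropolis sampler targeting exp(log_post)."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_steps)
    u, lp = u0, log_post(u0)
    for i in range(n_steps):
        prop = u + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject step
            u, lp = prop, lp_prop
        samples[i] = u
    return samples

# Toy inverse problem: observe y = G(u) + noise with G(u) = u**3.
true_u, sigma = 1.2, 0.1
rng = np.random.default_rng(1)
y = true_u**3 + sigma * rng.standard_normal()

def log_post(u):
    # Gaussian log-likelihood plus N(0, 1) log-prior (up to constants).
    return -0.5 * (y - u**3)**2 / sigma**2 - 0.5 * u**2

samples = metropolis(log_post, u0=0.0)
print(np.mean(samples[1000:]))  # posterior mean, near the true parameter
```

In the limit of many samples, the empirical distribution of the chain reproduces the full posterior, which is the property the notes highlight; the Gaussian and Dirac approximations mentioned above trade this exactness for lower cost.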