Inverse Problems in a Bayesian Setting
In a Bayesian setting, inverse problems and uncertainty quantification (UQ)
--- the propagation of uncertainty through a computational (forward) model ---
are strongly connected. In the form of conditional expectation the Bayesian
update becomes computationally attractive. We give a detailed account of this
approach via conditional expectation, its various approximations, and the
construction of filters. Together with a functional or spectral approach for
the forward UQ there is no need for time-consuming and slowly convergent Monte
Carlo sampling. The developed sampling-free non-linear Bayesian update in the
form of a filter is derived from the variational problem associated with conditional
expectation. This formulation in general calls for further discretisation to
make the computation possible, and we choose a polynomial approximation. After
giving details on the actual computation in the framework of functional or
spectral approximations, we demonstrate the workings of the algorithm on a
number of examples of increasing complexity. Finally, we compare the linear and
nonlinear Bayesian updates in the form of a filter on some examples.
Comment: arXiv admin note: substantial text overlap with arXiv:1312.504
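The linear special case of such a sampling-free update can be sketched on a toy scalar problem: the filter gain is the best affine map from measurement to parameter, estimated from ensemble statistics rather than Monte Carlo posterior sampling. The Gaussian prior, identity forward map, noise level, and observed value below are all illustrative assumptions, not taken from the paper:

```python
import random

random.seed(0)

# Prior ensemble for the uncertain scalar parameter q (Gaussian assumption).
N = 20000
q = [random.gauss(1.0, 1.0) for _ in range(N)]

# Forecast measurements: identity forward map plus observation noise.
# Map, noise level, and the observed value below are illustrative choices.
sigma = 0.5
z = [qi + random.gauss(0.0, sigma) for qi in q]

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

# Gain of the best *affine* map z -> q, i.e. the conditional-expectation
# variational problem restricted to affine functions, from ensemble statistics.
K = cov(q, z) / cov(z, z)

# Filter form of the update: shift every ensemble member with the observation.
z_obs = 2.0
q_post = [qi + K * (z_obs - zi) for qi, zi in zip(q, z)]
```

For this linear-Gaussian toy case the updated ensemble mean should agree with the analytic posterior mean, and the ensemble variance should shrink.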
Inverse problems and uncertainty quantification
In a Bayesian setting, inverse problems and uncertainty quantification (UQ) -
the propagation of uncertainty through a computational (forward) model - are
strongly connected. In the form of conditional expectation the Bayesian update
becomes computationally attractive. This is especially the case as together
with a functional or spectral approach for the forward UQ there is no need for
time-consuming and slowly convergent Monte Carlo sampling. The developed
sampling-free non-linear Bayesian update is derived from the variational
problem associated with conditional expectation. This formulation in general
calls for further discretisation to make the computation possible, and we
choose a polynomial approximation. After giving details on the actual
computation in the framework of functional or spectral approximations, we
demonstrate the workings of the algorithm on a number of examples of increasing
complexity. Finally, we compare the linear and quadratic Bayesian updates on the
small but taxing example of the chaotic Lorenz 84 model, where we experiment
with the influence of different observation or measurement operators on the
update.
Comment: 25 pages, 17 figures. arXiv admin note: text overlap with
arXiv:1201.404
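For orientation, the Lorenz 84 model mentioned above is a three-variable chaotic ODE. A minimal integration sketch follows, using the standard parameter values from Lorenz's 1984 paper; the abstract's own experimental setup may differ:

```python
# Lorenz 84 model (Lorenz, 1984): a three-variable chaotic system.
# Standard parameter values are assumed below.
a, b, F, G = 0.25, 4.0, 8.0, 1.0

def rhs(s):
    x, y, z = s
    return (-y * y - z * z - a * x + a * F,
            x * y - b * x * z - y + G,
            b * x * y + x * z - z)

def rk4_step(s, h):
    # classical fourth-order Runge-Kutta step
    k1 = rhs(s)
    k2 = rhs(tuple(si + 0.5 * h * ki for si, ki in zip(s, k1)))
    k3 = rhs(tuple(si + 0.5 * h * ki for si, ki in zip(s, k2)))
    k4 = rhs(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6.0 * (p + 2 * u + 2 * v + w)
                 for si, p, u, v, w in zip(s, k1, k2, k3, k4))

state = (1.0, 1.0, 1.0)
traj = [state]
for _ in range(2000):
    state = rk4_step(state, 0.02)
    traj.append(state)
```

The trajectory is chaotic but remains on a bounded attractor, which is what makes the model a small yet taxing filtering test case.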
Coarse Brownian Dynamics for Nematic Liquid Crystals: Bifurcation Diagrams via Stochastic Simulation
We demonstrate how time-integration of stochastic differential equations
(i.e. Brownian dynamics simulations) can be combined with continuum numerical
bifurcation analysis techniques to analyze the dynamics of liquid crystalline
polymers (LCPs). Sidestepping the necessity of obtaining explicit closures, the
approach analyzes the coarse macroscopic equations, which are unavailable in
closed form, estimating the necessary quantities through appropriately
initialized, short bursts of Brownian dynamics simulation. Through this
approach, both stable and unstable branches of the equilibrium bifurcation
diagram are obtained for the Doi model of LCPs and their coarse stability is
estimated. Additional macroscopic computational tasks enabled through this
approach, such as coarse projective integration and coarse stabilizing
controller design, are also demonstrated.
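The equation-free idea of wrapping continuum algorithms around short stochastic bursts can be caricatured on a deliberately simple surrogate: an Ornstein-Uhlenbeck particle ensemble in place of the Doi model, with a Newton iteration on the noisy coarse time-stepper locating a coarse steady state. Every model and numerical parameter below is an illustrative assumption:

```python
import random

random.seed(1)

# Fine-scale surrogate: Ornstein-Uhlenbeck particles relaxing towards mu.
# All parameters are illustrative, not from the Doi model.
theta, mu, sigma = 1.0, 2.0, 0.3
dt, nsteps, npart = 0.01, 20, 4000

def coarse_timestepper(m):
    # lift: build a particle ensemble consistent with the coarse mean m
    x = [m + random.gauss(0.0, 0.1) for _ in range(npart)]
    # short burst of stochastic simulation (Euler-Maruyama)
    for _ in range(nsteps):
        x = [xi - theta * (xi - mu) * dt
             + sigma * dt ** 0.5 * random.gauss(0.0, 1.0) for xi in x]
    # restrict: return the coarse variable (ensemble mean)
    return sum(x) / len(x)

# Newton iteration on F(m) = Phi(m) - m to find a coarse steady state,
# with the slope estimated by finite differences of the noisy time-stepper.
m, h = 0.0, 0.5
for _ in range(5):
    F = coarse_timestepper(m) - m
    dF = (coarse_timestepper(m + h) - (m + h) - F) / h
    m -= F / dF
```

Because the Newton map never needs the coarse equation in closed form, the same wrapper can in principle locate unstable coarse states that direct simulation would miss.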
Parameter Estimation via Conditional Expectation --- A Bayesian Inversion
When a mathematical or computational model is used to analyse some system, it
is common that some parameters, or more generally functions or fields, in the
model are not known, and hence uncertain. These parametric quantities are then identified by
actual observations of the response of the real system. In a probabilistic
setting, Bayes's theory is the proper mathematical background for this
identification process. The possibility of being able to compute a conditional
expectation turns out to be crucial for this purpose. We show how this
theoretical background can be used in an actual numerical procedure, and
briefly discuss various numerical approximations.
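One standard numerical route to the conditional expectation uses its variational characterisation: E[q|z] minimises E[(q - phi(z))^2], so restricting phi to polynomials and replacing the expectation by a sample average yields an ordinary least-squares problem. A self-contained sketch on a synthetic scalar example follows; the cubic measurement map and the noise level are assumptions for illustration only:

```python
import random

random.seed(2)

# Synthetic data: uncertain parameter q, nonlinear measurement z = q**3 + noise.
# Both the map and the noise level are illustrative assumptions.
N = 5000
q = [random.uniform(0.0, 1.0) for _ in range(N)]
z = [qi ** 3 + random.gauss(0.0, 0.01) for qi in q]

def fit_poly(deg):
    # Sample version of the variational problem: minimise the empirical mean
    # of (q - phi(z))**2 over polynomials phi of the given degree, via the
    # normal equations solved by Gaussian elimination with partial pivoting.
    n = deg + 1
    A = [[sum(zi ** (j + k) for zi in z) for k in range(n)] for j in range(n)]
    rhs = [sum(qi * zi ** j for qi, zi in zip(q, z)) for j in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            rhs[r] -= f * rhs[i]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (rhs[i] - sum(A[i][k] * coef[k] for k in range(i + 1, n))) / A[i][i]
    return coef

def mse(coef):
    # empirical mean-square error of the polynomial approximation of E[q|z]
    return sum((qi - sum(c * zi ** k for k, c in enumerate(coef))) ** 2
               for qi, zi in zip(q, z)) / N
```

Raising the polynomial degree gives the hierarchy of approximations of the conditional expectation: degree one recovers the linear update, while higher degrees capture the nonlinearity of the measurement map.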
Stable Nonlinear Identification From Noisy Repeated Experiments via Convex Optimization
This paper introduces new techniques for using convex optimization to fit
input-output data to a class of stable nonlinear dynamical models. We present
an algorithm that guarantees consistent estimates of models in this class when
a small set of repeated experiments with suitably independent measurement noise
is available. Stability of the estimated models is guaranteed without any
assumptions on the input-output data. We first present a convex optimization
scheme for identifying stable state-space models from empirical moments. Next,
we provide a method for using repeated experiments to remove the effect of
noise on these moment and model estimates. The technique is demonstrated on a
simple simulated example.
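The role of the repeated experiments can be illustrated without the convex-optimization machinery: with noise on the regressors, a plain least-squares fit is biased (attenuated), whereas averaging the outputs of repeated experiments with the same input and independent measurement noise drives that bias away. Everything below (the scalar model, noise level, experiment counts) is a hypothetical toy, not the paper's algorithm:

```python
import random

random.seed(3)

# True system (hypothetical): stable first-order model y[t+1] = a*y[t] + b*u[t],
# observed through heavy measurement noise of standard deviation s.
a_true, b_true, T, R, s = 0.8, 1.0, 400, 200, 1.0
u = [random.gauss(0.0, 1.0) for _ in range(T)]  # one input, reused every experiment

def experiment():
    y, meas = 0.0, []
    for t in range(T):
        meas.append(y + random.gauss(0.0, s))  # independent noise per experiment
        y = a_true * y + b_true * u[t]
    return meas

runs = [experiment() for _ in range(R)]
avg = [sum(r[t] for r in runs) / R for t in range(T)]

def fit(y):
    # least-squares fit of y[t+1] = a*y[t] + b*u[t] via 2x2 normal equations
    Syy = sum(y[t] * y[t] for t in range(T - 1))
    Syu = sum(y[t] * u[t] for t in range(T - 1))
    Suu = sum(u[t] * u[t] for t in range(T - 1))
    r1 = sum(y[t + 1] * y[t] for t in range(T - 1))
    r2 = sum(y[t + 1] * u[t] for t in range(T - 1))
    det = Syy * Suu - Syu * Syu
    return ((r1 * Suu - r2 * Syu) / det, (Syy * r2 - Syu * r1) / det)

a_single, _ = fit(runs[0])  # biased: noise enters the regressor
a_avg, _ = fit(avg)         # averaging over experiments suppresses the noise
```

The averaged estimate recovers the true pole while the single-experiment fit is visibly attenuated; the paper's contribution, not reproduced here, is to combine this moment idea with convex constraints that additionally guarantee stability.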
Maximum Entropy Vector Kernels for MIMO system identification
Recent contributions have framed linear system identification as a
nonparametric regularized inverse problem. Relying on kernel-based
regularization which accounts for the stability and smoothness of the impulse
response to be estimated, these approaches have been shown to be competitive
with respect to classical parametric methods. In this paper, adopting Maximum
Entropy arguments, we derive a new penalty based on a vector-valued kernel; to
do so we exploit the structure of the Hankel matrix, thus
simultaneously controlling the complexity (measured by the McMillan degree),
stability, and smoothness of the identified models. As a special case we recover
the nuclear norm penalty on the squared block Hankel matrix. In contrast with
previous literature on reweighted nuclear norm penalties, our kernel is
described by a small number of hyper-parameters, which are iteratively updated
through marginal likelihood maximization; constraining the structure of the
kernel acts as a (hyper)regularizer which helps control the effective
degrees of freedom of our estimator. To optimize the marginal likelihood we
adapt a Scaled Gradient Projection (SGP) algorithm which is proved to be
significantly computationally cheaper than other first and second order
off-the-shelf optimization methods. The paper also contains an extensive
comparison with many state-of-the-art methods on several Monte Carlo studies,
which confirms the effectiveness of our procedure.
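The link between McMillan degree and the Hankel matrix that motivates the penalty can be checked directly: the Hankel matrix built from the impulse response of a degree-n linear system has rank n, and the nuclear norm (sum of singular values) is the convex surrogate for that rank. A small pure-Python rank check, using a made-up two-mode impulse response:

```python
def g(k):
    # impulse response of a hypothetical system with two stable real modes
    # (poles 0.5 and 0.3), hence McMillan degree 2
    return 0.5 ** k + 0.3 ** k

n = 6
H = [[g(i + j + 1) for j in range(n)] for i in range(n)]  # Hankel matrix (SISO case)

def numerical_rank(M, tol=1e-8):
    # rank via Gaussian elimination with partial pivoting and a relative tolerance
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    scale = max(abs(x) for row in M for x in row)
    rank = 0
    for col in range(cols):
        if rank == rows:
            break
        p = max(range(rank, rows), key=lambda r: abs(M[r][col]))
        if abs(M[p][col]) <= tol * scale:
            continue
        M[rank], M[p] = M[p], M[rank]
        for r in range(rank + 1, rows):
            f = M[r][col] / M[rank][col]
            for c in range(col, cols):
                M[r][c] -= f * M[rank][c]
        rank += 1
    return rank
```

Penalising the nuclear norm of such a (block) Hankel matrix therefore favours low-order models, which is the structural property the kernel in the abstract encodes.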