Regularization independent of the noise level: an analysis of quasi-optimality
The quasi-optimality criterion chooses the regularization parameter in
inverse problems without taking into account the noise level. This rule works
remarkably well in practice, although Bakushinskii has shown that there are
always counterexamples with very poor performance. We propose an average case
analysis of quasi-optimality for spectral cut-off estimators and we prove that
the quasi-optimality criterion determines estimators which are rate-optimal
{\em on average}. Its practical performance is illustrated with a calibration
problem from mathematical finance.
Comment: 18 pages, 3 figures
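The rule can be sketched in a few lines for spectral cut-off: pick the truncation level that minimizes the step between consecutive estimators, with no reference to the noise level. The diagonal toy problem, decay rates, and noise magnitude below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed problem, diagonalized: y_j = sigma_j * x_j + noise.
n = 100
sigma = 1.0 / np.arange(1, n + 1) ** 2          # decaying singular values
x_true = 1.0 / np.arange(1, n + 1)              # true coefficients
y = sigma * x_true + 1e-4 * rng.standard_normal(n)

def cutoff_estimate(k):
    # Spectral cut-off estimator x_k: keep the first k coefficients.
    x = np.zeros(n)
    x[:k] = y[:k] / sigma[:k]
    return x

# Quasi-optimality: minimize ||x_{k+1} - x_k|| over k; the noise
# level never enters the criterion.
steps = [np.linalg.norm(cutoff_estimate(k + 1) - cutoff_estimate(k))
         for k in range(1, n - 1)]
k_qo = 1 + int(np.argmin(steps))
err = np.linalg.norm(cutoff_estimate(k_qo) - x_true)
```

For spectral cut-off the step norm reduces to the magnitude of the single newly added coefficient, which is why the criterion is so cheap to evaluate.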
Predicting functional properties of milk powder based on manufacturing data in an industrial-scale powder plant
The fundamental science relating key physical and functional properties of milk powder to plant operating conditions is complex and largely unknown. Consequently, this paper takes a data-driven approach to relate routinely measured plant conditions to one vital functional property, known as sediment, in an industrial-scale powder plant. Data from four consecutive production seasons were examined, and linear regression models based on a chosen set of processing variables were used to predict the sediment values. The average prediction error was well within the range of the uncertainty of the laboratory test. The models could be used to predict the effect of each individual plant variable on the sediment values, which could be beneficial for quality optimisation. In addition, the choice of the training data set used to compute the regression coefficients was studied, and the resulting regression models were compared to alternative PLS models built on the same data.
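A minimal sketch of this kind of regression pipeline on synthetic data; the variable count, coefficients, noise level, and train/test split below are all illustrative stand-ins, since the paper's actual plant variables are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for routinely measured plant conditions
# (e.g. temperatures, flow rates); data are synthetic.
n_obs, n_vars = 200, 5
X = rng.standard_normal((n_obs, n_vars))
true_coef = np.array([0.8, -0.5, 0.3, 0.0, 0.1])
sediment = X @ true_coef + 0.05 * rng.standard_normal(n_obs)

# Ordinary least squares with an intercept; train on the first
# "seasons" of data, test on the held-out remainder.
split = 150
A = np.column_stack([np.ones(split), X[:split]])
coef, *_ = np.linalg.lstsq(A, sediment[:split], rcond=None)

A_test = np.column_stack([np.ones(n_obs - split), X[split:]])
pred = A_test @ coef
mae = np.mean(np.abs(pred - sediment[split:]))
```

The fitted coefficients play the role the paper describes: each one quantifies the predicted effect of a single plant variable on the sediment value.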
Physics‐constrained non‐Gaussian probabilistic learning on manifolds
An extension of probabilistic learning on manifolds (PLoM), recently introduced by the authors, is presented: in addition to the initial data set given for performing the probabilistic learning, constraints are given, which correspond to statistics of experiments or of physical models. We consider a non-Gaussian random vector whose unknown probability distribution has to satisfy constraints. The method consists in constructing a generator using the PLoM and the classical Kullback-Leibler minimum cross-entropy principle. The resulting optimization problem is reformulated using Lagrange multipliers associated with the constraints. The optimal values of the Lagrange multipliers are computed using an efficient iterative algorithm. At each iteration, the Markov chain Monte Carlo algorithm developed for the PLoM is used, which consists in solving an Itô stochastic differential equation that is projected on a diffusion-maps basis. The method and the algorithm are efficient and allow the construction of probabilistic models for high-dimensional problems from small initial data sets and for which an arbitrary number of constraints are specified. The first application is sufficiently simple to be easily reproduced. The second concerns a stochastic elliptic boundary value problem in high dimension.
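In one dimension with a single mean constraint, the minimum cross-entropy principle with a Lagrange multiplier reduces to exponential tilting of the sample, and the multiplier can be found by Newton iteration. This is an entirely illustrative reduction, not the PLoM algorithm itself; the sample, the constraint value, and the iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Samples standing in for a learned (reference) distribution.
x = rng.standard_normal(5000)

# Constraint: the re-weighted distribution must have mean m_target.
m_target = 0.5

# Minimum cross-entropy under a mean constraint gives weights
# w_i proportional to exp(lam * x_i); Newton iteration on lam uses
# the fact that d(mean)/d(lam) equals the weighted variance.
lam = 0.0
for _ in range(50):
    w = np.exp(lam * x)
    w /= w.sum()
    mean = np.sum(w * x)
    var = np.sum(w * x ** 2) - mean ** 2
    lam += (m_target - mean) / var

w = np.exp(lam * x)
w /= w.sum()
tilted_mean = np.sum(w * x)
```

With several constraints the scalar `lam` becomes a vector of multipliers and the scalar Newton step becomes a linear solve, which is where an iterative algorithm like the paper's earns its keep.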
A TV-Gaussian prior for infinite-dimensional Bayesian inverse problems and its numerical implementations
Many scientific and engineering problems require performing Bayesian
inference in function spaces, in which the unknowns are infinite-dimensional.
In such problems, choosing an appropriate prior distribution is an important
task. In particular we consider problems where the function to infer is subject
to sharp jumps which render the commonly used Gaussian measures unsuitable. On
the other hand, the so-called total variation (TV) prior can only be defined in
a finite dimensional setting, and does not lead to a well-defined posterior
measure in function spaces. In this work we present a TV-Gaussian (TG) prior to
address such problems, where the TV term is used to detect sharp jumps of the
function, and the Gaussian distribution is used as a reference measure so that
it results in a well-defined posterior measure in the function space. We also
present an efficient Markov Chain Monte Carlo (MCMC) algorithm to draw samples
from the posterior distribution of the TG prior. With numerical examples we
demonstrate the performance of the TG prior and the efficiency of the proposed
MCMC algorithm.
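A toy sketch of sampling with a prior of this shape: a preconditioned Crank-Nicolson (pCN) chain whose proposals are reversible with respect to the Gaussian reference measure, so the acceptance ratio involves only the data misfit and the TV term. The grid size, covariance kernel, TV weight, and step size below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

# Discretized 1D function on m grid points; the Gaussian reference
# measure N(0, C) uses a squared-exponential covariance (our choice).
m = 50
t = np.linspace(0.0, 1.0, m)
C = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.1) ** 2) + 1e-6 * np.eye(m)
L = np.linalg.cholesky(C)

# Toy denoising problem: piecewise-constant truth observed with noise.
u_true = np.where(t < 0.5, 0.0, 1.0)
y = u_true + 0.1 * rng.standard_normal(m)

def tv(u):
    # discrete total variation, which detects sharp jumps
    return np.sum(np.abs(np.diff(u)))

def phi(u):
    # data misfit plus the TV term of the TG prior
    return 0.5 * np.sum((u - y) ** 2) / 0.1 ** 2 + 5.0 * tv(u)

# pCN proposals preserve the Gaussian reference, so the acceptance
# probability is min(1, exp(phi(u) - phi(v))), with no C-dependence.
beta = 0.1
u = np.zeros(m)
phi_u = phi(u)
samples = []
for it in range(5000):
    v = np.sqrt(1.0 - beta ** 2) * u + beta * (L @ rng.standard_normal(m))
    phi_v = phi(v)
    if np.log(rng.uniform()) < phi_u - phi_v:
        u, phi_u = v, phi_v
    if it >= 2000:
        samples.append(u.copy())

post_mean = np.mean(samples, axis=0)
```

The dimension-independence of the pCN acceptance rule is exactly what makes such function-space priors tractable; the Gaussian factor of the TG prior is what licenses it.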
Retrieval of process rate parameters in the general dynamic equation for aerosols using Bayesian state estimation: BAYROSOL1.0
The uncertainty in the radiative forcing caused by aerosols and its effect on climate change calls for research to improve knowledge of the aerosol
particle formation and growth processes. While experimental research has
provided a large amount of high-quality data on aerosols over the last 2 decades, the inference of the process rates is still inadequate, mainly due to
limitations in the analysis of data. This paper focuses on developing
computational methods to infer aerosol process rates from size distribution
measurements. In the proposed approach, the temporal evolution of aerosol
size distributions is modeled with the general dynamic equation (GDE) equipped with
stochastic terms that account for the uncertainties of the process rates. The
time-dependent particle size distribution and the rates of the underlying
formation and growth processes are reconstructed based on time series of
particle analyzer data using Bayesian state estimation – which not only
provides (point) estimates for the process rates but also enables quantification of
their uncertainties. The feasibility of the proposed computational framework
is demonstrated by a set of numerical simulation studies.
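The state-estimation idea can be illustrated far below the complexity of the GDE: augment the state with the unknown process rate, give the rate its own process noise so it may drift, and let a filter return both a point estimate and its uncertainty. The scalar growth model, noise levels, and filter tuning below are illustrative stand-ins for the paper's GDE-based model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy surrogate: mean particle size s grows at unknown rate g (the
# "process rate"); we observe s noisily and estimate [s, g] jointly
# with a linear Kalman filter.
dt, T = 0.1, 200
g_true = 0.5
s_true = np.cumsum(np.full(T, g_true * dt))
obs = s_true + 0.2 * rng.standard_normal(T)

F = np.array([[1.0, dt], [0.0, 1.0]])   # transition for state [s, g]
H = np.array([[1.0, 0.0]])              # only s is observed
Q = np.diag([1e-6, 1e-4])               # process noise: g may drift
R = np.array([[0.2 ** 2]])              # observation noise variance

x = np.zeros(2)                         # initial state mean
P = np.eye(2)                           # initial state covariance
for z in obs:
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

g_est, g_std = x[1], np.sqrt(P[1, 1])
```

As in the paper's framework, the output is not only a point estimate of the rate (`g_est`) but also a quantified uncertainty (`g_std`) from the filtering covariance.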
Fast Gibbs sampling for high-dimensional Bayesian inversion
Solving ill-posed inverse problems by Bayesian inference has recently
attracted considerable attention. Compared to deterministic approaches, the
probabilistic representation of the solution by the posterior distribution can
be exploited to explore and quantify its uncertainties. In applications where
the inverse solution is subject to further analysis procedures, this can be a
significant advantage. Alongside theoretical progress, various new
computational techniques allow sampling of very high-dimensional posterior
distributions: In [Lucka2012], a Markov chain Monte Carlo (MCMC) posterior
sampler was developed for linear inverse problems with $\ell_1$-type priors. In
this article, we extend this single component Gibbs-type sampler to a wide
range of priors used in Bayesian inversion, such as general $\ell_p^q$ priors
with additional hard constraints. Besides a fast computation of the
conditional, single component densities in an explicit, parameterized form, a
fast, robust and exact sampling from these one-dimensional densities is key to
obtain an efficient algorithm. We demonstrate that a generalization of slice
sampling can utilize their specific structure for this task and illustrate the
performance of the resulting slice-within-Gibbs samplers by different computed
examples. These new samplers allow us to perform sample-based Bayesian
inference in high-dimensional scenarios with certain priors for the first time,
including the inversion of computed tomography (CT) data with the popular
isotropic total variation (TV) prior.
Comment: submitted to "Inverse Problems"
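The one-dimensional workhorse inside such a Gibbs sweep can be sketched in isolation: a slice sampler with stepping-out and shrinkage, applied here to an unnormalized density of the Gaussian-times-Laplace form that an $\ell_1$-type prior produces. The density and its parameters are illustrative; the paper's conditionals are parameterized explicitly and exploited further.

```python
import numpy as np

rng = np.random.default_rng(5)

# Unnormalized log-density of a single-component conditional:
# a Gaussian likelihood factor times a Laplace (l1) prior factor.
lam = 2.0
def logp(x):
    return -0.5 * x ** 2 - lam * abs(x)

def slice_sample(x0, n, width=1.0):
    """Slice sampling (Neal) with stepping-out and shrinkage."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        log_y = logp(x) + np.log(rng.uniform())   # vertical slice level
        lo = x - width * rng.uniform()            # randomly placed bracket
        hi = lo + width
        while logp(lo) > log_y:                   # step out left
            lo -= width
        while logp(hi) > log_y:                   # step out right
            hi += width
        while True:                               # shrink until accepted
            x_new = rng.uniform(lo, hi)
            if logp(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                lo = x_new
            else:
                hi = x_new
        xs[i] = x
    return xs

xs = slice_sample(0.0, 20000)
```

Because every draw is exact (no rejection of the chain state, only shrinkage of the bracket), such one-dimensional updates are robust even when the conditional has a non-differentiable spike at zero.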
Sensitivity analysis and variance reduction in a stochastic NDT problem
In this paper, we present a framework for uncertainty quantification in cases where the ranges of variability of the random parameters are ill-known, namely the physical properties of the corrosion product (magnetite) that frequently clogs the tube support plates of steam generators, which are inaccessible in nuclear power plants. The methodology is based on Polynomial Chaos (PC) for the direct approach and on Bayesian inference for the inverse approach. The direct Non-Intrusive Spectral Projection (NISP) method is first employed, using prior probability densities to construct a PC surrogate model of the large-scale NDT finite element model. To face the prohibitive computational cost arising from the high-dimensional random space, an adaptive sparse grid technique is applied to NISP, resulting in a drastic time reduction. The PC surrogate model, with reduced dimensionality, is used as a forward model in the Bayesian procedure. The posterior probability densities are then identified by inference from a few noisy experimental data. We demonstrate the effectiveness of the approach by identifying the most influential parameter in the clogging detection as well as a reduction of the variability range.
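NISP in its simplest form, one random dimension, can be written down directly: project the model output onto probabilists' Hermite polynomials by Gauss-Hermite quadrature. The smooth toy model below is an illustrative stand-in for the large-scale NDT finite element model, and the order and quadrature size are arbitrary.

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Toy stand-in for the forward model; xi ~ N(0, 1) is the random input.
def model(xi):
    return np.exp(0.3 * xi)

order, nq = 6, 20
nodes, weights = hermegauss(nq)
weights = weights / np.sqrt(2.0 * np.pi)   # normalize to the N(0,1) density

# NISP: coefficients c_k = E[f(xi) He_k(xi)] / E[He_k^2], where
# E[He_k^2] = k! for probabilists' Hermite polynomials.
coef = np.empty(order + 1)
for k in range(order + 1):
    basis_k = np.eye(order + 1)[k]         # coefficient vector selecting He_k
    He_k = hermeval(nodes, basis_k)
    coef[k] = np.sum(weights * model(nodes) * He_k) / math.factorial(k)

# The surrogate is cheap to evaluate and closely matches the model.
xi_test = np.linspace(-2.0, 2.0, 9)
max_err = np.max(np.abs(hermeval(xi_test, coef) - model(xi_test)))
```

In the high-dimensional setting of the paper the tensorized quadrature above is exactly what explodes, which is why the adaptive sparse grid replaces it; the projection formula itself is unchanged.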
Sparse Deterministic Approximation of Bayesian Inverse Problems
We present a parametric deterministic formulation of Bayesian inverse
problems with input parameter from infinite dimensional, separable Banach
spaces. In this formulation, the forward problems are parametric, deterministic
elliptic partial differential equations, and the inverse problem is to
determine the unknown, parametric deterministic coefficients from noisy
observations comprising linear functionals of the solution.
We prove a generalized polynomial chaos representation of the posterior
density with respect to the prior measure, given noisy observational data. We
analyze the sparsity of the posterior density in terms of the summability of
the input data's coefficient sequence. To this end, we estimate the
fluctuations in the prior. We exhibit sufficient conditions on the prior model
in order for approximations of the posterior density to converge at a given
algebraic rate, in terms of the number of unknowns appearing in the
parametric representation of the prior measure. Similar sparsity and
approximation results are also exhibited for the solution and covariance of the
elliptic partial differential equation under the posterior. These results then
form the basis for efficient uncertainty quantification, in the presence of
data with noise.
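The object the paper expands in generalized polynomial chaos is the posterior density with respect to the prior, which in this Gaussian-noise setting takes the standard schematic form below. The symbols $G$, $\mathcal{O}$, and $\Gamma$ are our labels for the parametric solution operator, the linear observation functionals, and the noise covariance; the paper's exact notation may differ.

```latex
% Posterior density with respect to the prior measure \pi_0 (schematic):
\frac{d\pi^{y}}{d\pi_{0}}(u)
  \;=\; \frac{1}{Z(y)}
  \exp\!\Bigl(-\tfrac{1}{2}\,\bigl\lVert
    \Gamma^{-1/2}\bigl(y - \mathcal{O}(G(u))\bigr)\bigr\rVert^{2}\Bigr),
\qquad
Z(y) \;=\; \int
  \exp\!\Bigl(-\tfrac{1}{2}\,\bigl\lVert
    \Gamma^{-1/2}\bigl(y - \mathcal{O}(G(u))\bigr)\bigr\rVert^{2}\Bigr)
  \, d\pi_{0}(u) .
```

The sparsity results then concern how few terms of a polynomial expansion in the parameters of $u$ suffice to approximate this density at a prescribed algebraic rate.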
An approximate empirical Bayesian method for large-scale linear-Gaussian inverse problems
We study Bayesian inference methods for solving linear inverse problems,
focusing on hierarchical formulations where the prior or the likelihood
function depend on unspecified hyperparameters. In practice, these
hyperparameters are often determined via an empirical Bayesian method that
maximizes the marginal likelihood function, i.e., the probability density of
the data conditional on the hyperparameters. Evaluating the marginal
likelihood, however, is computationally challenging for large-scale problems.
In this work, we present a method to approximately evaluate marginal likelihood
functions, based on a low-rank approximation of the update from the prior
covariance to the posterior covariance. We show that this approximation is
optimal in a minimax sense. Moreover, we provide an efficient algorithm to
implement the proposed method, based on a combination of the randomized SVD and
a spectral approximation method to compute square roots of the prior covariance
matrix. Several numerical examples demonstrate good performance of the proposed
method.
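For a linear-Gaussian model the marginal likelihood depends on the data covariance only through the spectrum of the prior-preconditioned data-misfit Hessian, so a low-rank truncation of that spectrum yields a cheap approximation. The sketch below uses a dense SVD in place of the paper's randomized SVD, and every problem size, spectrum, and noise level is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(6)

# Linear-Gaussian model y = G x + e, x ~ N(0, P), e ~ N(0, s2 * I).
n, d, s2 = 80, 60, 0.01
G = rng.standard_normal((n, d)) / np.sqrt(d)
P_sqrt = np.diag(1.0 / np.arange(1, d + 1) ** 2)   # decaying prior spectrum
y = rng.standard_normal(n)

# Whitened forward map A = G P^{1/2} / sqrt(s2); its squared singular
# values are the eigenvalues of the prior-preconditioned Hessian.
A = G @ P_sqrt / np.sqrt(s2)
U, sv, _ = np.linalg.svd(A, full_matrices=False)

def neg2_loglik(r):
    """-2 log marginal likelihood via a rank-r update (Woodbury)."""
    lam = sv[:r] ** 2
    z = U[:, :r].T @ y
    quad = (y @ y - z @ (lam / (1.0 + lam) * z)) / s2
    logdet = n * np.log(s2) + np.sum(np.log1p(lam))
    return quad + logdet + n * np.log(2.0 * np.pi)

exact = neg2_loglik(len(sv))     # full-rank reference value
approx = neg2_loglik(10)         # low-rank approximation
rel_err = abs(approx - exact) / abs(exact)
```

When the Hessian spectrum decays quickly, a small rank already reproduces the marginal likelihood to high relative accuracy, which is what makes hyperparameter optimization over many likelihood evaluations affordable.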
Fisher Information for Inverse Problems and Trace Class Operators
This paper provides a mathematical framework for Fisher information analysis
for inverse problems based on Gaussian noise on infinite-dimensional Hilbert
space. The covariance operator for the Gaussian noise is assumed to be trace
class, and the Jacobian of the forward operator Hilbert-Schmidt. We show that
the appropriate space for defining the Fisher information is given by the
Cameron-Martin space. This is mainly because the range space of the covariance
operator is always strictly smaller than the Hilbert space. For the Fisher
information to be well-defined, it is furthermore required that the range space
of the Jacobian is contained in the Cameron-Martin space. In order for this
condition to hold and for the Fisher information to be trace class, a
sufficient condition is formulated based on the singular values of the Jacobian
as well as of the eigenvalues of the covariance operator, together with some
regularity assumptions regarding their relative rate of convergence. An
explicit example is given regarding an electromagnetic inverse source problem
with "external" spherically isotropic noise, as well as "internal" additive
uncorrelated noise.
Comment: submitted to Journal of Mathematical Physics
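Schematically, the setting described above can be summarized as follows; the symbol choices ($\mathcal{H}$, $\mathcal{C}$, $J$, $E$) are ours and the precise hypotheses are those of the paper.

```latex
% Gaussian noise N(0, C) on a Hilbert space H, with C trace class and
% the Jacobian J of the forward operator Hilbert-Schmidt.
% Cameron-Martin space of the noise measure:
E \;=\; \mathcal{C}^{1/2}\mathcal{H},
\qquad
\langle u, v \rangle_{E}
  \;=\; \bigl\langle \mathcal{C}^{-1/2} u,\;
        \mathcal{C}^{-1/2} v \bigr\rangle_{\mathcal{H}} .
% The Fisher information operator is well defined when
% \operatorname{ran} J \subset E:
\mathcal{I} \;=\; J^{*}\, \mathcal{C}^{-1} J .
```

The trace-class condition on $\mathcal{I}$ is then a joint decay condition: the singular values of $J$ must fall fast enough relative to the eigenvalues of $\mathcal{C}$.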