Scalable Bayesian uncertainty quantification in imaging inverse problems via convex optimization
We propose a Bayesian uncertainty quantification method for large-scale
imaging inverse problems. Our method applies to all Bayesian models that are
log-concave, where maximum-a-posteriori (MAP) estimation is a convex
optimization problem. The method provides a framework for analysing the
confidence in specific structures observed in MAP estimates (e.g., lesions in
medical imaging, celestial sources in astronomical imaging), enabling their
use as evidence to inform decisions and conclusions. Precisely, following
Bayesian decision theory, we seek to assess the structures under scrutiny by
performing
a Bayesian hypothesis test that proceeds as follows: firstly, it postulates
that the structures are not present in the true image, and then seeks to use
the data and prior knowledge to reject this null hypothesis with high
probability. Computing such tests for imaging problems is generally very
difficult because of the high dimensionality involved. A main feature of this
work is to leverage probability concentration phenomena and the underlying
convex geometry to formulate the Bayesian hypothesis test as a convex problem,
that we then efficiently solve by using scalable optimization algorithms. This
allows scaling to high-resolution and high-sensitivity imaging problems that
are computationally unaffordable for other Bayesian computation approaches. We
illustrate our methodology, dubbed BUQO (Bayesian Uncertainty Quantification by
Optimization), on a range of challenging Fourier imaging problems arising in
astronomy and medicine.
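The test described above reduces to a convex feasibility question: does the set S of structure-free images intersect the posterior credible set? A minimal sketch of this logic on a toy Gaussian denoising model (the model, mask, and threshold value are illustrative assumptions, not the paper's setup):

```python
# BUQO-style test (toy sketch): with log-posterior f(x) = 0.5*||x - y||^2,
# the credible region is the sublevel set {x : f(x) <= f(x_MAP) + tau}. The
# set S of images with the structure absent (masked pixels forced to zero)
# intersects the credible region iff min_{x in S} f(x) <= f(x_MAP) + tau.

def f(x, y):
    """Negative log-posterior of a simple Gaussian denoising model."""
    return 0.5 * sum((xi - yi) ** 2 for xi, yi in zip(x, y))

def project_structure_free(x, mask):
    """Project onto S: zero out the pixels carrying the structure."""
    return [0.0 if m else xi for xi, m in zip(x, mask)]

def buqo_test(y, mask, tau):
    """Reject the 'no structure' null if S misses the credible region."""
    x_map = list(y)                  # MAP estimate of this toy model is y
    # Minimiser of f over S: masked pixels at 0, free pixels unchanged.
    x_s = project_structure_free(x_map, mask)
    gap = f(x_s, y) - (f(x_map, y) + tau)
    return gap > 0                   # True => structure is confirmed

y = [0.1, 5.0, 4.8, 0.2]             # bright 'structure' at pixels 1-2
mask = [False, True, True, False]
print(buqo_test(y, mask, tau=2.0))                       # strong structure
print(buqo_test([0.1, 0.5, 0.4, 0.2], mask, tau=2.0))    # weak structure
```

In the real method the minimisation over S is a large-scale convex problem solved with proximal algorithms; the closed-form projection here is only possible because the toy forward operator is the identity.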
Quantifying Uncertainty in High Dimensional Inverse Problems by Convex Optimisation
Inverse problems play a key role in modern image/signal processing methods.
However, since they are generally ill-conditioned or ill-posed due to lack of
observations, their solutions may have significant intrinsic uncertainty.
Analysing and quantifying this uncertainty is very challenging, particularly in
high-dimensional problems and problems with non-smooth objective functionals
(e.g. sparsity-promoting priors). In this article, a series of strategies to
visualise this uncertainty is presented, e.g. highest posterior density
credible regions and local credible intervals (cf. error bars) for individual
pixels and superpixels. Our methods support non-smooth priors for inverse
problems and can be scaled to high-dimensional settings. Moreover, we present
strategies to automatically set regularisation parameters so that the proposed
uncertainty quantification (UQ) strategies become much easier to use. Also,
different kinds of dictionaries (complete and over-complete) are used to
represent the image/signal and their performance in the proposed UQ methodology
is investigated.
Comment: 5 pages, 5 figures
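Local credible intervals of the kind mentioned above can be computed by searching for the extreme values a pixel may take while the image remains inside a credible set; a sketch via bisection on a toy denoising model (the model and threshold are illustrative assumptions, not the article's experiments):

```python
def f(x, y):
    """Negative log-posterior of a toy Gaussian denoising model."""
    return 0.5 * sum((a - b) ** 2 for a, b in zip(x, y))

def local_credible_interval(x_map, y, i, gamma, lo=-100.0, hi=100.0, iters=60):
    """Bisection for the smallest/largest value pixel i can take while the
    image stays inside the credible set {x : f(x) <= gamma}. Assumes the
    bracket endpoints lo/hi lie outside the set."""
    def inside(t):
        x = list(x_map)
        x[i] = t
        return f(x, y) <= gamma
    # Upper endpoint: a stays inside, b stays outside.
    a, b = x_map[i], hi
    for _ in range(iters):
        m = 0.5 * (a + b)
        a, b = (m, b) if inside(m) else (a, m)
    upper = a
    # Lower endpoint: same bracket, descending.
    a, b = x_map[i], lo
    for _ in range(iters):
        m = 0.5 * (a + b)
        a, b = (m, b) if inside(m) else (a, m)
    lower = a
    return lower, upper

y = [1.0, 2.0, 3.0]
x_map = list(y)                 # MAP of the toy denoising model
gamma = f(x_map, y) + 2.0       # hypothetical credibility threshold
lo_, hi_ = local_credible_interval(x_map, y, 1, gamma)
print(round(hi_ - y[1], 3), round(y[1] - lo_, 3))   # both half-widths = 2.0
```

For superpixels the same search is run over a block-wise perturbation rather than a single pixel; only the `inside` test changes.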
Maximum-a-posteriori estimation with Bayesian confidence regions
Solutions to inverse problems that are ill-conditioned or ill-posed may have
significant intrinsic uncertainty. Unfortunately, analysing and quantifying
this uncertainty is very challenging, particularly in high-dimensional
problems. As a result, while most modern mathematical imaging methods produce
impressive point estimation results, they are generally unable to quantify the
uncertainty in the solutions delivered. This paper presents a new general
methodology for approximating Bayesian high-posterior-density credibility
regions in inverse problems that are convex and potentially very
high-dimensional. The approximations are derived by using recent concentration
of measure results related to information theory for log-concave random
vectors. A remarkable property of the approximations is that they can be
computed very efficiently, even in large-scale problems, by using standard
convex optimisation techniques. In particular, they are available as a
by-product in problems solved by maximum-a-posteriori estimation. The
approximations also have favourable theoretical properties, namely they
outer-bound the true high-posterior-density credibility regions, and they are
stable with respect to model dimension. The proposed methodology is illustrated
on two high-dimensional imaging inverse problems related to tomographic
reconstruction and sparse deconvolution, where the approximations are used to
perform Bayesian hypothesis tests and explore the uncertainty about the
solutions, and where proximal Markov chain Monte Carlo algorithms are used as
benchmark to compute exact credible regions and measure the approximation
error.
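The abstract does not reproduce the bound itself; the constants below follow the commonly cited form of the concentration result for log-concave posteriors and should be treated as an assumption to verify against the paper:

```python
import math

def hpd_threshold(f_map, N, alpha):
    """Approximate HPD credibility threshold (conservative outer bound)
    for an N-dimensional log-concave posterior with negative log-density f:
        gamma_alpha ~= f(x_MAP) + sqrt(N) * tau_alpha + N,
    with tau_alpha = sqrt(16 * log(3 / alpha)). Only the MAP objective
    value is needed, so this is a by-product of MAP estimation."""
    tau = math.sqrt(16.0 * math.log(3.0 / alpha))
    return f_map + math.sqrt(N) * tau + N

def in_credible_region(f_x, f_map, N, alpha):
    """Membership test: x is in the approximate region iff f(x) <= gamma."""
    return f_x <= hpd_threshold(f_map, N, alpha)

N = 256 * 256                      # hypothetical 65536-pixel image
gamma = hpd_threshold(100.0, N, alpha=0.01)
print(in_credible_region(100.0, 100.0, N, 0.01))   # the MAP is always inside
```

Because the region outer-bounds the true HPD set, a structure rejected by a test against this approximation would also be rejected against the exact region; the converse does not hold.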
Optimization Methods for Inverse Problems
Optimization plays an important role in solving many inverse problems.
Indeed, the task of inversion often either involves or is fully cast as a
solution of an optimization problem. In this light, the non-linear,
non-convex, and large-scale nature of many of these inversions alone gives rise to
some very challenging optimization problems. The inverse problem community has
long been developing various techniques for solving such optimization tasks.
However, other, seemingly disjoint communities, such as that of machine
learning, have developed, almost in parallel, interesting alternative methods
which might have stayed under the radar of the inverse problem community. In
this survey, we aim to change that. In doing so, we first discuss current
state-of-the-art optimization methods widely used in inverse problems. We then
survey recent related advances in addressing similar challenges in problems
faced by the machine learning community, and discuss their potential advantages
for solving inverse problems. By highlighting the similarities among the
optimization challenges faced by the inverse problem and the machine learning
communities, we hope that this survey can serve as a bridge in bringing
together these two communities and encourage cross-fertilization of ideas.
Comment: 13 pages
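Proximal splitting methods are among the workhorse optimization tools for inverse problems that such surveys cover; a self-contained ISTA-style sketch on a toy sparse denoising problem (the model and values are illustrative, not from the survey):

```python
def proximal_gradient(grad_f, prox_g, x0, step, iters=200):
    """Proximal gradient (ISTA): minimise f(x) + g(x) with f smooth
    (gradient grad_f) and g non-smooth, handled via its prox operator."""
    x = list(x0)
    for _ in range(iters):
        x = prox_g([xi - step * gi for xi, gi in zip(x, grad_f(x))], step)
    return x

# Toy problem: f(x) = 0.5*||x - y||^2 (data fit), g(x) = lam*||x||_1 (sparsity).
y, lam = [3.0, 0.05, -2.0], 0.5
grad_f = lambda x: [xi - yi for xi, yi in zip(x, y)]

def soft_threshold(v, step):
    """Prox of lam*||.||_1 at step size `step` (soft-thresholding)."""
    t = lam * step
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

x = proximal_gradient(grad_f, soft_threshold, [0.0, 0.0, 0.0], step=1.0)
print([round(v, 3) for v in x])   # -> [2.5, 0.0, -1.5]
```

The machine-learning community's stochastic and momentum variants of this iteration are exactly the kind of cross-over methods the survey advocates.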
Bayesian Variational Regularisation for Dark Matter Reconstruction with Uncertainty Quantification
Despite the great wealth of cosmological knowledge accumulated since the early 20th century, the nature of dark-matter, which accounts for ~85% of the matter content of the universe, remains elusive. Unfortunately, though dark-matter is scientifically interesting, with implications for our fundamental understanding of the Universe, it cannot be directly observed. Instead, dark-matter may be inferred from e.g. the optical distortion (lensing) of distant galaxies which, at linear order, manifests as a perturbation to the apparent magnitude (convergence) and ellipticity (shearing). Ensemble observations of the shear are collected and leveraged to construct estimates of the convergence, which can directly be related to the universal dark-matter distribution. Imminent stage IV surveys are forecast to accrue an unprecedented quantity of cosmological information, a discriminative partition of which is accessible through the convergence, and is disproportionately concentrated at high angular resolutions, where the echoes of cosmological evolution under gravity are most apparent. Capitalising on advances in probability concentration theory, this thesis merges the paradigms of Bayesian inference and optimisation to develop hybrid convergence inference techniques which are scalable, statistically principled, and operate over the Euclidean plane, celestial sphere, and 3-dimensional ball. Such techniques can quantify the plausibility of inferences at one-millionth the computational overhead of competing sampling methods. These Bayesian techniques are applied to the hotly debated Abell-520 merging cluster, concluding that observational catalogues contain insufficient information to determine the existence of dark-matter self-interactions. Further, these techniques were applied to all public lensing catalogues, recovering the then largest global dark-matter mass-map.
The primary methodological contributions of this thesis depend only on posterior log-concavity, paving the way towards a, potentially revolutionary, complete hybridisation with artificial intelligence techniques. These next-generation techniques are the first to operate over the full 3-dimensional ball, laying the foundations for statistically principled universal dark-matter cartography, and the cosmological insights such advances may provide
Tensor Computation: A New Framework for High-Dimensional Problems in EDA
Many critical EDA problems suffer from the curse of dimensionality, i.e. the
rapidly scaling computational burden produced by a large number of parameters
and/or unknown variables. This phenomenon may be caused by multiple spatial or
temporal factors (e.g. 3-D field solvers discretizations and multi-rate circuit
simulation), nonlinearity of devices and circuits, large number of design or
optimization parameters (e.g. full-chip routing/placement and circuit sizing),
or extensive process variations (e.g. variability/reliability analysis and
design for manufacturability). The computational challenges generated by such
high dimensional problems are generally hard to handle efficiently with
traditional EDA core algorithms that are based on matrix and vector
computation. This paper presents "tensor computation" as an alternative general
framework for the development of efficient EDA algorithms and tools. A tensor
is a high-dimensional generalization of a matrix and a vector, and is a natural
choice for both efficiently storing and solving high-dimensional EDA problems.
This paper gives a basic tutorial on tensors, demonstrates some recent examples
of EDA applications (e.g., nonlinear circuit modeling and high-dimensional
uncertainty quantification), and suggests further open EDA problems where the
use of tensor computation could be of advantage.
Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and Systems
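To make the storage argument concrete: a low-rank tensor format keeps factor vectors instead of the full dense array, and entries are recovered on demand. A toy rank-1 sketch (shapes and values hypothetical):

```python
# A rank-1 3-way tensor T = a (outer) b (outer) c stores n1 + n2 + n3
# numbers instead of the n1 * n2 * n3 entries of the dense array, while
# any entry remains available in O(1).

def rank1_entry(a, b, c, i, j, k):
    """Entry T[i, j, k] of the rank-1 tensor T = a ⊗ b ⊗ c."""
    return a[i] * b[j] * c[k]

a, b, c = [1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0]
dense_storage = len(a) * len(b) * len(c)       # 12 entries if stored densely
factored_storage = len(a) + len(b) + len(c)    # 7 entries in factored form
print(rank1_entry(a, b, c, 1, 2, 0))           # 2.0 * 5.0 * 6.0 = 60.0
print(dense_storage, factored_storage)
```

Practical formats (CP, Tucker, tensor-train) sum several such rank-1 terms, so the compression grows multiplicatively with the number of modes; that gap is what makes high-dimensional EDA problems tractable in tensor form.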
Accelerated Bayesian imaging by relaxed proximal-point Langevin sampling
This paper presents a new accelerated proximal Markov chain Monte Carlo
methodology to perform Bayesian inference in imaging inverse problems with an
underlying convex geometry. The proposed strategy takes the form of a
stochastic relaxed proximal-point iteration that admits two complementary
interpretations. For models that are smooth or regularised by Moreau-Yosida
smoothing, the algorithm is equivalent to an implicit midpoint discretisation
of an overdamped Langevin diffusion targeting the posterior distribution of
interest. This discretisation is asymptotically unbiased for Gaussian targets
and shown to converge in an accelerated manner for any target that is
κ-strongly log-concave (i.e., requiring in the order of √κ iterations to
converge, similarly to accelerated optimisation schemes), comparing favorably
to [M. Pereyra, L. Vargas Mieles, K.C. Zygalakis, SIAM J. Imaging Sciences,
13(2) (2020), pp. 905-935], which is only provably accelerated for Gaussian
targets and has bias.
algorithm is equivalent to a Leimkuhler-Matthews discretisation of a Langevin
diffusion targeting a Moreau-Yosida approximation of the posterior distribution
of interest, and hence achieves a significantly lower bias than conventional
unadjusted Langevin strategies based on the Euler-Maruyama discretisation. For
targets that are -strongly log-concave, the provided non-asymptotic
convergence analysis also identifies the optimal time step which maximizes the
convergence speed. The proposed methodology is demonstrated through a range of
experiments related to image deconvolution with Gaussian and Poisson noise,
with assumption-driven and data-driven convex priors. Source codes for the
numerical experiments of this paper are available from
https://github.com/MI2G/accelerated-langevin-imla
Comment: 34 pages, 13 figures
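The Leimkuhler-Matthews discretisation mentioned above averages consecutive noise draws; a toy 1-D sketch on a standard Gaussian target (step size and sample count are arbitrary choices, and the Moreau-Yosida smoothing step used in the paper for non-smooth models is omitted):

```python
import math
import random

def leimkuhler_matthews(grad_U, x0, step, n_samples, seed=0):
    """Leimkuhler-Matthews discretisation of the overdamped Langevin
    diffusion targeting exp(-U):
        x_{k+1} = x_k - step * grad_U(x_k)
                  + sqrt(2 * step) * (xi_k + xi_{k+1}) / 2
    Averaging consecutive standard-normal draws xi gives a lower bias
    than the Euler-Maruyama (unadjusted Langevin) scheme."""
    rng = random.Random(seed)
    x, xi = x0, rng.gauss(0.0, 1.0)
    samples = []
    for _ in range(n_samples):
        xi_next = rng.gauss(0.0, 1.0)
        x = x - step * grad_U(x) + math.sqrt(2.0 * step) * 0.5 * (xi + xi_next)
        xi = xi_next
        samples.append(x)
    return samples

# Toy target: standard Gaussian, U(x) = x^2 / 2, so grad_U(x) = x.
s = leimkuhler_matthews(lambda x: x, x0=0.0, step=0.1, n_samples=200_000)
mean = sum(s) / len(s)
var = sum(v * v for v in s) / len(s)
print(mean, var)   # should be close to 0 and 1 respectively
```

For this Gaussian target the scheme reproduces the stationary variance without step-size bias, which is a simple way to see the advantage over Euler-Maruyama claimed in the abstract.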