137,212 research outputs found
Maximum-a-posteriori estimation with Bayesian confidence regions
Solutions to inverse problems that are ill-conditioned or ill-posed may have
significant intrinsic uncertainty. Unfortunately, analysing and quantifying
this uncertainty is very challenging, particularly in high-dimensional
problems. As a result, while most modern mathematical imaging methods produce
impressive point estimation results, they are generally unable to quantify the
uncertainty in the solutions delivered. This paper presents a new general
methodology for approximating Bayesian high-posterior-density credibility
regions in inverse problems that are convex and potentially very
high-dimensional. The approximations are derived by using recent concentration
of measure results related to information theory for log-concave random
vectors. A remarkable property of the approximations is that they can be
computed very efficiently, even in large-scale problems, by using standard
convex optimisation techniques. In particular, they are available as a
by-product in problems solved by maximum-a-posteriori estimation. The
approximations also have favourable theoretical properties, namely they
outer-bound the true high-posterior-density credibility regions, and they are
stable with respect to model dimension. The proposed methodology is illustrated
on two high-dimensional imaging inverse problems related to tomographic
reconstruction and sparse deconvolution, where the approximations are used to
perform Bayesian hypothesis tests and explore the uncertainty about the
solutions, and where proximal Markov chain Monte Carlo algorithms are used as
a benchmark to compute exact credible regions and measure the approximation
error.
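To make the "by-product of MAP estimation" idea concrete, here is a minimal numerical sketch for a toy Gaussian linear inverse problem, where the negative log-posterior is quadratic and the MAP estimate has a closed form. The threshold `tau_alpha` below is a hypothetical stand-in for the paper's concentration bound, and all model parameters (`A`, `lam`, the noise level) are invented for illustration:

```python
import numpy as np

# Toy linear inverse problem y = A x + noise with a Gaussian prior, so the
# negative log-posterior f is convex (here quadratic, for a closed-form MAP).
rng = np.random.default_rng(0)
n, m = 50, 200
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
sigma = 0.1                      # noise standard deviation (assumption)
y = A @ x_true + sigma * rng.normal(size=m)
lam = 1.0                        # prior precision (assumption)

def f(x):
    """Negative log-posterior up to an additive constant (convex)."""
    return 0.5 * np.sum((A @ x - y) ** 2) / sigma**2 + 0.5 * lam * np.sum(x**2)

# MAP estimate: closed form for this quadratic objective; in general it would
# come from any convex optimisation solver.
x_map = np.linalg.solve(A.T @ A / sigma**2 + lam * np.eye(n),
                        A.T @ y / sigma**2)

# Approximate HPD credibility region: all x with f(x) <= f(x_map) + tau_alpha,
# where tau_alpha grows with the dimension n.  The specific formula below is a
# placeholder with the right qualitative behaviour, NOT the paper's bound.
alpha = 0.05
tau_alpha = n * (np.sqrt(16 * np.log(3 / alpha) / n) + 1)  # hypothetical

def in_credible_region(x):
    """Membership test: one objective evaluation, no sampling required."""
    return f(x) <= f(x_map) + tau_alpha
```

The point of the sketch is that once `x_map` is available, testing whether any candidate solution lies in the approximate region costs a single evaluation of the convex objective.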
Random projections for Bayesian regression
This article deals with random projections applied as a data reduction
technique for Bayesian regression analysis. We show sufficient conditions under
which the entire d-dimensional distribution is approximately preserved under
random projections by reducing the number of data points from n to
k = O(poly(d/ε)) in the case n ≫ d. Under mild
assumptions, we prove that evaluating a Gaussian likelihood function based on
the projected data instead of the original data yields a
(1 + O(ε))-approximation in terms of the Wasserstein
distance. Our main result shows that the posterior distribution of Bayesian
linear regression is approximated up to a small error depending on only an
ε-fraction of its defining parameters. This holds when using
arbitrary Gaussian priors or the degenerate case of uniform distributions over
R^d for β. Our empirical evaluations involve different
simulated settings of Bayesian linear regression. Our experiments underline
that the proposed method is able to recover the regression model up to small
error while considerably reducing the total running time.
Cluster Variation Method in Statistical Physics and Probabilistic Graphical Models
The cluster variation method (CVM) is a hierarchy of approximate variational
techniques for discrete (Ising-like) models in equilibrium statistical
mechanics, improving on the mean-field approximation and the Bethe-Peierls
approximation, which can be regarded as the lowest level of the CVM. In recent
years it has been applied both in statistical physics and to inference and
optimization problems formulated in terms of probabilistic graphical models.
The foundations of the CVM are briefly reviewed, and the relations with
similar techniques are discussed. The main properties of the method are
considered, with emphasis on its exactness for particular models and on its
asymptotic properties.
The problem of the minimization of the variational free energy, which arises
in the CVM, is also addressed, and recent results about both provably
convergent and message-passing algorithms are discussed. Comment: 36 pages, 17 figures
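As a concrete anchor for the hierarchy the abstract describes, here is a sketch of the naive mean-field approximation, the level the CVM improves upon. For a homogeneous Ising model it reduces to a one-dimensional self-consistency equation; the coordination number, coupling, and field values below are illustrative:

```python
import numpy as np

def mean_field_magnetisation(beta, J=1.0, z=4, h=0.01, iters=200):
    """Naive mean-field fixed point for a homogeneous Ising model.

    Each spin sees the average magnetisation m of its z neighbours, giving
    the self-consistency equation m = tanh(beta * (J * z * m + h)), solved
    here by plain fixed-point iteration.
    """
    m = 0.5  # arbitrary initial guess
    for _ in range(iters):
        m = np.tanh(beta * (J * z * m + h))
    return m

# Mean field predicts a transition at beta_c = 1 / (J * z) = 0.25 here:
# essentially zero magnetisation above that temperature, a nonzero value below.
print(mean_field_magnetisation(beta=0.1))   # high temperature: ~disordered
print(mean_field_magnetisation(beta=1.0))   # low temperature: ordered
```

Higher CVM levels replace this single-site closure with self-consistent distributions over larger clusters (pairs for Bethe-Peierls, plaquettes and beyond above that), which is where the improved accuracy comes from.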
A Birthday Repetition Theorem and Complexity of Approximating Dense CSPs
A (k × l)-birthday repetition G^{k × l} of a
two-prover game G is a game in which the two provers are sent
random sets of questions from G of sizes k and l respectively.
These two sets are sampled independently uniformly among all sets of questions
of those particular sizes. We prove the following birthday repetition theorem:
when G satisfies some mild conditions, val(G^{k × l}) decreases exponentially
in Ω(kl/n), where n is the total number of
questions. Our result positively resolves an open question posted by Aaronson,
Impagliazzo and Moshkovitz (CCC 2014).
As an application of our birthday repetition theorem, we obtain new
fine-grained hardness of approximation results for dense CSPs. Specifically, we
establish a tight trade-off between running time and approximation ratio for
dense CSPs by showing conditional lower bounds, integrality gaps and
approximation algorithms. In particular, for any sufficiently large i and for
every ε > 0, we show the following results:
- We exhibit an O(q^{1/i})-approximation algorithm for dense Max k-CSPs
with alphabet size q via O_k(i) levels of the Sherali-Adams relaxation.
- Through our birthday repetition theorem, we obtain an integrality gap of
q^{1/i} for the Ω_k(i)-level Lasserre relaxation for fully-dense Max
k-CSP.
- Assuming that there is a constant 0 < ε < 1 such that Max 3SAT cannot
be approximated to within a (1 − ε) factor of the optimal in sub-exponential
time, our birthday repetition theorem implies that any algorithm that
approximates fully-dense Max k-CSP to within a q^{1/i} factor takes
(nq)^{Ω_k(i)} time, almost tightly matching the algorithmic
result based on the Sherali-Adams relaxation. Comment: 45 pages
- …