Variational Bayesian Inference of Line Spectra
In this paper, we address the fundamental problem of line spectral estimation
in a Bayesian framework. We target model order and parameter estimation via
variational inference in a probabilistic model in which the frequencies are
continuous-valued, i.e., not restricted to a grid; and the coefficients are
governed by a Bernoulli-Gaussian prior model turning model order selection into
binary sequence detection. Unlike earlier works which retain only point
estimates of the frequencies, we undertake a more complete Bayesian treatment
by estimating the posterior probability density functions (pdfs) of the
frequencies and computing expectations over them. Thus, we additionally capture
and operate with the uncertainty of the frequency estimates. Aiming to maximize
the model evidence, variational optimization provides analytic approximations
of the posterior pdfs and also gives estimates of the additional parameters. We
propose an accurate representation of the pdfs of the frequencies by mixtures
of von Mises pdfs, which yields closed-form expectations. We define the
algorithm VALSE in which the estimates of the pdfs and parameters are
iteratively updated. VALSE is a gridless, convergent method that requires no
parameter tuning, easily incorporates prior knowledge about the frequencies, and
provides approximate posterior pdfs from which the uncertainty in line
spectral estimation can be quantified. Simulation results show that accounting
for the uncertainty of frequency estimates, rather than computing just point
estimates, significantly improves the performance. The performance of VALSE is
superior to that of state-of-the-art methods and closely approaches the
Cramér-Rao bound computed for the true model order.
Comment: 15 pages, 8 figures, accepted for publication in IEEE Transactions on
Signal Processing.
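The closed-form expectations mentioned in the abstract rest on the circular moments of the von Mises distribution: for theta ~ VM(mu, kappa), E[exp(j*theta)] = (I1(kappa)/I0(kappa)) * exp(j*mu), and a mixture's moment is the weighted sum of component moments. A minimal sketch of this identity (illustrative function names, not the VALSE implementation), checked against numerical quadrature:

```python
import numpy as np
from scipy.special import i0, i1  # modified Bessel functions of order 0 and 1

def vm_mixture_circular_mean(weights, mus, kappas):
    """Closed-form E[exp(j*theta)] when theta follows a mixture of von Mises
    pdfs: component VM(mu_k, kappa_k) contributes
    w_k * (I1(kappa_k)/I0(kappa_k)) * exp(j*mu_k)."""
    w = np.asarray(weights, float)
    mu = np.asarray(mus, float)
    kap = np.asarray(kappas, float)
    return np.sum(w * (i1(kap) / i0(kap)) * np.exp(1j * mu))

def vm_mixture_circular_mean_quad(weights, mus, kappas, n=100_000):
    """Same expectation by direct numerical integration, for verification."""
    th = np.linspace(-np.pi, np.pi, n)
    pdf = sum(w * np.exp(k * np.cos(th - m)) / (2 * np.pi * i0(k))
              for w, m, k in zip(weights, mus, kappas))
    return np.trapz(pdf * np.exp(1j * th), th)

weights, mus, kappas = [0.6, 0.4], [0.3, -1.2], [50.0, 80.0]
closed = vm_mixture_circular_mean(weights, mus, kappas)
numeric = vm_mixture_circular_mean_quad(weights, mus, kappas)
```

Because the mixture moment is available in closed form, the expectations needed in each variational update can be evaluated without numerical integration over the frequency pdfs.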
Variational Data Assimilation via Sparse Regularization
This paper studies the role of sparse regularization in a properly chosen
basis for variational data assimilation (VDA) problems. Specifically, it
focuses on data assimilation of noisy and down-sampled observations while the
state variable of interest exhibits sparsity in the real or transformed domain.
We show that in the presence of sparsity, the ℓ1-norm regularization
produces more accurate and stable solutions than the classic data assimilation
methods. To motivate further developments of the proposed methodology,
assimilation experiments are conducted in the wavelet and spectral domain using
the linear advection-diffusion equation.
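As a toy illustration of sparsity-promoting assimilation of noisy, down-sampled observations (synthetic operator and data, not the paper's wavelet/spectral experiments), a plain ISTA solver for the ℓ1-regularized least-squares problem:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (coordinatewise soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(H, y, lam, n_iter=800):
    """ISTA for  min_x 0.5*||y - H x||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(H, 2) ** 2  # 1/L, L = Lipschitz const of grad
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        # gradient step on the data-fit term, then the l1 proximal step
        x = soft_threshold(x - step * (H.T @ (H @ x - y)), step * lam)
    return x

rng = np.random.default_rng(0)
n, m = 64, 32                                   # state dim > observation dim
H = rng.standard_normal((m, n)) / np.sqrt(m)    # down-sampling-like operator
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [3.0, -2.0, 1.5]          # state sparse in this basis
y = H @ x_true + 0.01 * rng.standard_normal(m)  # noisy observations
x_hat = ista(H, y, lam=0.02)
```

Even with half as many observations as state components, the ℓ1 penalty recovers the sparse state accurately, which a plain least-squares (unregularized) analysis cannot do in this underdetermined setting.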
Maximum-a-posteriori estimation with Bayesian confidence regions
Solutions to inverse problems that are ill-conditioned or ill-posed may have
significant intrinsic uncertainty. Unfortunately, analysing and quantifying
this uncertainty is very challenging, particularly in high-dimensional
problems. As a result, while most modern mathematical imaging methods produce
impressive point estimation results, they are generally unable to quantify the
uncertainty in the solutions delivered. This paper presents a new general
methodology for approximating Bayesian high-posterior-density credibility
regions in inverse problems that are convex and potentially very
high-dimensional. The approximations are derived by using recent concentration
of measure results related to information theory for log-concave random
vectors. A remarkable property of the approximations is that they can be
computed very efficiently, even in large-scale problems, by using standard
convex optimisation techniques. In particular, they are available as a
by-product in problems solved by maximum-a-posteriori estimation. The
approximations also have favourable theoretical properties, namely they
outer-bound the true high-posterior-density credibility regions, and they are
stable with respect to model dimension. The proposed methodology is illustrated
on two high-dimensional imaging inverse problems related to tomographic
reconstruction and sparse deconvolution, where the approximations are used to
perform Bayesian hypothesis tests and explore the uncertainty about the
solutions, and where proximal Markov chain Monte Carlo algorithms are used as
a benchmark to compute exact credible regions and measure the approximation
error.
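The "by-product" nature of the approximation can be sketched on a toy convex problem: once the MAP point is computed, the approximate credibility region is a level set of the negative log-posterior, so membership is a single objective evaluation. The threshold below only mimics the sqrt(N)-scaling of such bounds; its constants are illustrative placeholders, not the paper's exact formula.

```python
import numpy as np

def objective(x, A, y, lam):
    """Negative log-posterior g(x) = 0.5*||y - A x||^2 + lam*||x||_1
    (convex, as required by the methodology)."""
    r = y - A @ x
    return 0.5 * (r @ r) + lam * np.abs(x).sum()

def map_estimate(A, y, lam, n_iter=1000):
    """MAP point via ISTA (any convex optimisation method works here)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

def in_approx_hpd(x, x_map, A, y, lam, alpha=0.05):
    """Membership test for a level-set approximation of the HPD region,
    C_alpha ~ {x : g(x) <= g(x_map) + tau_alpha}. tau_alpha below is an
    illustrative sqrt(N)-scaled threshold, not the paper's exact constant."""
    N = x.size
    tau = N + np.sqrt(16.0 * N * np.log(3.0 / alpha))
    return objective(x, A, y, lam) <= objective(x_map, A, y, lam) + tau

rng = np.random.default_rng(1)
N, m = 50, 40
A = rng.standard_normal((m, N)) / np.sqrt(m)
x0 = np.zeros(N); x0[[3, 17]] = [2.0, -1.0]
y = A @ x0 + 0.1 * rng.standard_normal(m)
x_map = map_estimate(A, y, lam=0.05)
```

The MAP estimate itself always lies inside the region, while points with a much larger objective value are rejected; because the approximation outer-bounds the true region, such a test is conservative.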