A fast Bayesian approach to discrete object detection in astronomical datasets - PowellSnakes I
A new fast Bayesian approach is introduced for the detection of discrete
objects immersed in a diffuse background. This new method, called PowellSnakes,
speeds up traditional Bayesian techniques by: i) replacing the standard form of
the likelihood for the parameters characterizing the discrete objects by an
alternative exact form that is much quicker to evaluate; ii) using a
simultaneous multiple minimization code based on Powell's direction set
algorithm to locate rapidly the local maxima in the posterior; and iii)
deciding whether each located posterior peak corresponds to a real object by
performing a Bayesian model selection using an approximate evidence value based
on a local Gaussian approximation to the peak. The construction of this
Gaussian approximation also provides the covariance matrix of the uncertainties
in the derived parameter values for the object in question. This new approach
provides a speed-up in performance by a factor of 'hundreds' compared to
existing Bayesian source extraction methods that use MCMC to explore the
parameter space, such as that presented by Hobson & McLachlan. We illustrate
the capabilities of the method by applying it to some simplified toy models.
Furthermore, PowellSnakes has the advantage of consistently defining the
threshold for acceptance/rejection in terms of priors, which cannot be said of
frequentist methods. We present here the first implementation of this technique
(Version-I). Further improvements to this implementation are currently under
investigation and will be published shortly. The application of the method to
realistic simulated Planck observations will be presented in a forthcoming
publication.

Comment: 30 pages, 15 figures, revised version with minor changes, accepted
for publication in MNRAS
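The peak-location and model-selection steps above can be sketched as follows. This is a minimal illustration, not the PowellSnakes code: the toy posterior, its two parameters, and all numerical settings are invented for the example. A posterior peak is located with Powell's direction-set method, then a local Gaussian (Laplace) approximation built from a numerical Hessian yields both the parameter covariance and an approximate log-evidence.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 2-parameter toy posterior with a single peak at (1.0, 5.0);
# the parameters stand in for an object's position and amplitude.
def neg_log_post(theta):
    x, a = theta
    return 0.5 * ((x - 1.0) ** 2 / 0.04 + (a - 5.0) ** 2 / 0.25)

def numerical_hessian(f, theta, eps=1e-4):
    """Central-difference Hessian of f at theta."""
    n = len(theta)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.eye(n)[i] * eps
            e_j = np.eye(n)[j] * eps
            H[i, j] = (f(theta + e_i + e_j) - f(theta + e_i - e_j)
                       - f(theta - e_i + e_j) + f(theta - e_i - e_j)) / (4 * eps ** 2)
    return H

# Step (ii): locate the posterior peak with Powell's direction-set method.
res = minimize(neg_log_post, x0=[0.0, 0.0], method="Powell")

# Step (iii): local Gaussian approximation to the peak gives the covariance
# of the parameter uncertainties and a Laplace estimate of the evidence
# (up to the prior normalisation, which the toy example omits).
H = numerical_hessian(neg_log_post, res.x)
cov = np.linalg.inv(H)
log_evidence = -neg_log_post(res.x) + 0.5 * np.log(np.linalg.det(2 * np.pi * cov))
```

In PowellSnakes proper, the evidence value of each located peak is then compared against a prior-based threshold to accept or reject the candidate object.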
Bayesian adaptation
Given the need for low-assumption inferential methods in infinite-dimensional
settings, Bayesian adaptive estimation via a prior distribution that depends
neither on the regularity of the function to be estimated nor on the sample
size is valuable. We elucidate relationships among the main approaches followed
to design priors for minimax-optimal rate-adaptive estimation, while shedding
light on the underlying ideas.

Comment: 20 pages, Propositions 3 and 5 added
Turbo EP-based Equalization: a Filter-Type Implementation
Comment: This manuscript was submitted to Transactions on Communications on
September 7, 2017; revised on January 10, 2018 and March 27, 2018; and accepted
on April 25, 2018
We propose a novel filter-type equalizer to improve the solution of the
linear minimum-mean squared-error (LMMSE) turbo equalizer, with computational
complexity constrained to be quadratic in the filter length. When high-order
modulations and/or large-memory channels are used, the optimal BCJR equalizer
is unavailable due to its computational complexity. In this scenario, the
filter-type LMMSE turbo equalization performs well compared to other
approximations. In this paper, we show that this solution can be
significantly improved by using expectation propagation (EP) in the estimation
of the a posteriori probabilities. First, it yields a more accurate estimation
of the extrinsic distribution to be sent to the channel decoder. Second,
compared to other EP-based solutions, the computational complexity of the
proposed solution is constrained to be quadratic in the length of the finite
impulse response (FIR). In addition, we review previous EP-based turbo
equalization implementations. Instead of considering default uniform priors, we
exploit the outputs of the decoder. Some simulation results are included to
show that this new EP-based filter remarkably outperforms the turbo approach of
previous versions of the EP algorithm and also improves the LMMSE solution,
with and without turbo equalization.
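For context, the LMMSE FIR baseline that the abstract improves on can be sketched as follows. This is a toy BPSK/ISI setup, not the paper's code: the channel taps, filter length, and all variable names are illustrative assumptions, and the EP refinement itself is not shown. The key cost property is visible, though: the filter comes from one linear solve, quadratic in the filter length.

```python
import numpy as np

def lmmse_filter(h, filter_len, noise_var, symbol_var=1.0):
    """LMMSE FIR filter estimating the centre symbol of an observation window.

    symbol_var is the prior symbol variance; in a turbo loop it shrinks as
    the decoder feeds back information about the transmitted bits.
    """
    L, N = len(h), filter_len
    # Convolution matrix: y_window = H @ s_window + noise, s_window has N+L-1 symbols.
    H = np.zeros((N, N + L - 1))
    for i in range(N):
        H[i, i:i + L] = h[::-1]
    d = (N + L - 1) // 2                       # symbol index we estimate
    cov_y = symbol_var * H @ H.T + noise_var * np.eye(N)
    f = np.linalg.solve(cov_y, symbol_var * H[:, d])
    return f, d

# Illustrative ISI channel and BPSK transmission.
rng = np.random.default_rng(0)
h = np.array([0.8, 0.5, 0.3])
s = rng.choice([-1.0, 1.0], size=200)
noise_var = 0.01
y = np.convolve(s, h)[: len(s)] + rng.normal(0, np.sqrt(noise_var), len(s))

f, d = lmmse_filter(h, filter_len=15, noise_var=noise_var)
# Slide the filter over the received sequence; est[k] estimates s[k + 6]
# (centre-symbol offset for filter_len=15 and a 3-tap channel).
est = np.array([f @ y[k:k + 15] for k in range(len(s) - 15)])
```

The EP approach in the paper replaces the fixed Gaussian prior on the symbols with iteratively refined moment-matched approximations while keeping this same quadratic-cost filter structure.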
Posterior Mean Super-Resolution with a Compound Gaussian Markov Random Field Prior
This manuscript proposes a posterior mean (PM) super-resolution (SR) method
with a compound Gaussian Markov random field (MRF) prior. SR is a technique to
estimate a spatially high-resolution image from observed multiple
low-resolution images. A compound Gaussian MRF model provides a preferable
prior for natural images that preserves edges. The PM is the optimal estimator
under the peak signal-to-noise ratio (PSNR) objective. This estimator is
numerically determined by using variational Bayes (VB). We then solve the
conjugate prior problem on VB and the exponential-order calculation cost
problem of a compound Gaussian MRF prior with simple Taylor approximations. In
experiments, the proposed method broadly outperforms existing methods.

Comment: 5 pages, 20 figures, 1 table, accepted to ICASSP 2012 (corrected
2012/3/23)
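To see why the compound prior calls for variational Bayes at all, note that with a plain (non-compound) Gaussian MRF prior the posterior is itself Gaussian, so the posterior mean is available from a single linear solve. Below is a minimal 1D sketch under that simplification; the sizes, the downsampling operator, and the hyperparameters are illustrative, not from the paper, and the edge-preserving compound prior would add latent line variables on top of this.

```python
import numpy as np

# 1D stand-in for SR: observe block averages of a high-resolution signal.
n_hi, factor = 32, 2
n_lo = n_hi // factor
A = np.zeros((n_lo, n_hi))
for i in range(n_lo):
    A[i, factor * i: factor * (i + 1)] = 1.0 / factor

# Gaussian MRF prior precision: lam times the graph Laplacian of the chain
# (penalises squared differences between neighbouring pixels).
Lap = 2 * np.eye(n_hi) - np.eye(n_hi, k=1) - np.eye(n_hi, k=-1)
lam, sigma2 = 5.0, 0.01

rng = np.random.default_rng(1)
x_true = np.cumsum(rng.normal(0, 0.3, n_hi))   # smooth ground-truth signal
y = A @ x_true + rng.normal(0, np.sqrt(sigma2), n_lo)

# Posterior mean = solution of (A^T A / sigma2 + lam * Lap) x = A^T y / sigma2.
x_pm = np.linalg.solve(A.T @ A / sigma2 + lam * Lap, A.T @ y / sigma2)
```

The compound prior makes the pixel posterior non-Gaussian, which is where the VB iterations and the Taylor approximations of the abstract come in.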
Fast Exact Bayesian Inference for Sparse Signals in the Normal Sequence Model
We consider exact algorithms for Bayesian inference with model selection
priors (including spike-and-slab priors) in the sparse normal sequence model.
Because the best existing exact algorithm becomes numerically unstable for
sample sizes over n=500, much attention has shifted to alternative approaches
such as approximate algorithms (Gibbs sampling, variational Bayes, etc.),
shrinkage priors (e.g. the horseshoe prior and the spike-and-slab LASSO), and
empirical Bayesian methods. However, by introducing algorithmic ideas from
online sequential prediction, we show that exact calculations are feasible for
much larger sample sizes: for general model selection priors we reach n=25000,
and for certain spike-and-slab priors we can easily reach n=100000. We further
prove a de Finetti-like result for finite sample sizes that characterizes
exactly which model selection priors can be expressed as spike-and-slab priors.
The computational speed and numerical accuracy of the proposed methods are
demonstrated in experiments on simulated data, on a differential gene
expression data set, and in a comparison of the effect of multiple
hyper-parameter settings in the beta-binomial prior. In our experimental
evaluation we compute guaranteed bounds on the numerical accuracy of all new
algorithms, which shows that the proposed methods are numerically reliable
whereas an alternative based on long division is not.
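As a warm-up to the general model-selection priors treated in the paper, the i.i.d. spike-and-slab case in the normal sequence model admits exact coordinate-wise posterior inclusion probabilities in closed form (the hyperparameter values below are illustrative). The paper's algorithms are needed precisely when the prior couples coordinates and this factorisation fails.

```python
import numpy as np
from scipy.stats import norm

def inclusion_probs(x, w=0.1, tau2=4.0):
    """Exact P(theta_i != 0 | x_i) under an i.i.d. spike-and-slab prior.

    Model: X_i ~ N(theta_i, 1); theta_i = 0 with prob 1-w, else N(0, tau2).
    Marginally, X_i is N(0, 1) under the spike and N(0, 1 + tau2) under
    the slab, so Bayes' rule applies coordinate-wise.
    """
    slab = w * norm.pdf(x, scale=np.sqrt(1.0 + tau2))
    spike = (1.0 - w) * norm.pdf(x, scale=1.0)
    return slab / (slab + spike)

# A sparse signal at the sample sizes the paper targets.
rng = np.random.default_rng(2)
n = 100000
theta = np.where(rng.random(n) < 0.05, rng.normal(0, 2.0, n), 0.0)
x = theta + rng.normal(0, 1.0, n)
p = inclusion_probs(x, w=0.05, tau2=4.0)
```

Numerical care matters even here: naive products of n densities underflow, which is one reason the paper works with carefully bounded computations rather than, e.g., long-division-based arithmetic.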
Exact Dimensionality Selection for Bayesian PCA
We present a Bayesian model selection approach to estimate the intrinsic
dimensionality of a high-dimensional dataset. To this end, we introduce a novel
formulation of the probabilistic principal component analysis model based on a
normal-gamma prior distribution. In this context, we exhibit a closed-form
expression of the marginal likelihood which allows us to infer an optimal number
of components. We also propose a heuristic based on the expected shape of the
marginal likelihood curve in order to choose the hyperparameters. In
non-asymptotic frameworks, we show on simulated data that this exact
dimensionality selection approach is competitive with both Bayesian and
frequentist state-of-the-art methods.
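The selection logic can be illustrated with a standard approximation in place of the paper's exact normal-gamma marginal likelihood: score each candidate dimensionality by the maximised probabilistic-PCA likelihood plus a BIC penalty. The parameter count and the synthetic data settings below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def ppca_log_lik(eigvals, k, n):
    """Maximised PPCA log-likelihood with k components (Tipping & Bishop form).

    eigvals: descending eigenvalues of the d-dim sample covariance; the ML
    noise variance is the mean of the discarded eigenvalues.
    """
    d = len(eigvals)
    sigma2 = eigvals[k:].mean()
    log_det = np.sum(np.log(eigvals[:k])) + (d - k) * np.log(sigma2)
    return -0.5 * n * (d * np.log(2 * np.pi) + log_det + d)

def select_dim(X):
    """Pick the dimensionality maximising a BIC-penalised PPCA likelihood."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    eigvals = np.sort(np.linalg.eigvalsh(Xc.T @ Xc / n))[::-1]
    scores = []
    for k in range(1, d):
        n_params = d * k - k * (k - 1) / 2 + 1   # approximate free-parameter count
        scores.append(ppca_log_lik(eigvals, k, n) - 0.5 * n_params * np.log(n))
    return 1 + int(np.argmax(scores))

# Synthetic check: 10-dim data with 3 strong signal directions plus noise.
rng = np.random.default_rng(3)
W = rng.normal(size=(10, 3)) * 3.0
X = rng.normal(size=(500, 3)) @ W.T + rng.normal(scale=0.5, size=(500, 10))
k_hat = select_dim(X)
```

The closed-form marginal likelihood in the paper removes the need for this asymptotic penalty, which is what makes it competitive in non-asymptotic regimes.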
Bayesian linear inverse problems in regularity scales
We obtain rates of contraction of posterior distributions in inverse problems
defined by scales of smoothness classes. We derive abstract results for general
priors, with contraction rates determined by Galerkin approximation. The rate
depends on the amount of prior concentration near the true function and the
prior mass of functions with inferior Galerkin approximation. We apply the
general result to non-conjugate series priors, showing that these give
near-optimal and adaptive recovery in some generality, and to Gaussian priors
and mixtures of Gaussian priors, where the latter are also shown to be
near-optimal and adaptive. The proofs are based on general testing and approximation
arguments, without explicit calculations on the posterior distribution. We are
thus not restricted to priors based on the singular value decomposition of the
operator. We illustrate the results with examples of inverse problems resulting
from differential equations.

Comment: 34 pages
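For intuition, consider the conjugate special case that the paper generalises beyond: in the sequence formulation of a mildly ill-posed linear inverse problem, a Gaussian series prior yields a coordinate-wise closed-form posterior. The decay exponents and data below are illustrative assumptions.

```python
import numpy as np

def posterior(y, n, p=1.0, alpha=1.0):
    """Posterior for Y_i = kappa_i * theta_i + n^{-1/2} Z_i, Z_i ~ N(0, 1).

    Singular values kappa_i = i^{-p} (mildly ill-posed); Gaussian series
    prior theta_i ~ N(0, i^{-1-2*alpha}) encoding smoothness alpha.
    Conjugacy gives the posterior mean and variance coordinate-wise.
    """
    i = np.arange(1, len(y) + 1)
    kappa = i ** (-p)
    prior_var = i ** (-1.0 - 2.0 * alpha)
    post_var = prior_var / (1.0 + n * kappa ** 2 * prior_var)
    post_mean = n * kappa * prior_var * y / (1.0 + n * kappa ** 2 * prior_var)
    return post_mean, post_var

# Illustrative truth with polynomial coefficient decay.
rng = np.random.default_rng(4)
n, d = 1000, 200
i = np.arange(1, d + 1)
theta = i ** (-1.5)
y = i ** (-1.0) * theta + rng.normal(0, 1.0 / np.sqrt(n), d)
mean, var = posterior(y, n, p=1.0, alpha=1.0)
```

The prior shrinks the high-frequency coordinates, where naive inversion by kappa_i blows up the noise; the paper's contribution is contraction-rate theory well beyond this SVD-aligned conjugate setting.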