Estimation of phase noise in oscillators with colored noise sources
In this letter we study the design of algorithms for estimation of phase
noise (PN) with colored noise sources. A soft-input maximum a posteriori PN
estimator and a modified soft-input extended Kalman smoother are proposed. The
performance of the proposed algorithms is compared against that of estimators
studied in the literature, in terms of the mean square error of PN estimation
and the symbol error rate of the considered communication system. The
comparisons show that considerable performance gains can be achieved by
designing estimators that employ correct knowledge of the PN statistics.
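The Kalman-style phase tracking referenced above can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes a simple random-walk phase model with white (not colored) innovations and direct noisy phase observations, and all variances and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed model (a common simplification; the paper treats colored noise,
# which this sketch does not):
#   theta[k] = theta[k-1] + w[k],   w ~ N(0, q)   (phase random walk)
#   y[k]     = theta[k] + v[k],     v ~ N(0, r)   (noisy phase observation)
q, r, K = 1e-3, 1e-1, 2000
theta = np.cumsum(rng.normal(0, np.sqrt(q), K))   # true phase trajectory
y = theta + rng.normal(0, np.sqrt(r), K)          # observations

# Scalar Kalman filter: predict, then correct with the innovation.
est, P = 0.0, 1.0
track = np.empty(K)
for k in range(K):
    P += q                        # predict: variance grows by process noise
    g = P / (P + r)               # Kalman gain
    est += g * (y[k] - est)       # correct toward the measurement
    P *= (1 - g)                  # posterior variance shrinks
    track[k] = est

mse_raw = np.mean((y - theta) ** 2)      # error of raw observations
mse_kf = np.mean((track - theta) ** 2)   # error of filtered estimate
print(mse_kf, mse_raw)
```

Exploiting the phase dynamics should drive the filtered error well below the raw measurement error; mismatched noise statistics (the paper's focus) would erode exactly this gain.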
Revisiting maximum-a-posteriori estimation in log-concave models
Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation
methodology in imaging sciences, where high dimensionality is often addressed
by using Bayesian models that are log-concave and whose posterior mode can be
computed efficiently by convex optimisation. Despite its success and wide
adoption, MAP estimation is not theoretically well understood yet. The
prevalent view in the community is that MAP estimation is not proper Bayesian
estimation in a decision-theoretic sense because it does not minimise a
meaningful expected loss function (unlike the minimum mean squared error (MMSE)
estimator that minimises the mean squared loss). This paper addresses this
theoretical gap by presenting a decision-theoretic derivation of MAP estimation
in Bayesian models that are log-concave. A main novelty is that our analysis is
based on differential geometry, and proceeds as follows. First, we use the
underlying convex geometry of the Bayesian model to induce a Riemannian
geometry on the parameter space. We then use differential geometry to identify
the so-called natural or canonical loss function to perform Bayesian point
estimation in that Riemannian manifold. For log-concave models, this canonical
loss is the Bregman divergence associated with the negative log posterior
density. We then show that the MAP estimator is the only Bayesian estimator
that minimises the expected canonical loss, and that the posterior mean or MMSE
estimator minimises the dual canonical loss. We also study the question of MAP
and MMSE estimation performance in high dimensions and establish a universal bound
on the expected canonical error as a function of dimension, offering new
insights into the good performance observed in convex problems. These results
provide a new understanding of MAP and MMSE estimation in log-concave settings,
and of the multiple roles that convex geometry plays in imaging problems.
Comment: Accepted for publication in SIAM Imaging Science
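The Bregman divergence that the abstract identifies as the canonical loss can be illustrated generically. In this sketch `phi` is a stand-in convex potential (in the paper it is the negative log posterior density); the quadratic case below recovers the squared Euclidean loss minimised by the MMSE estimator.

```python
import numpy as np

def bregman(phi, grad_phi, u, v):
    """Bregman divergence D_phi(u, v) = phi(u) - phi(v) - <grad_phi(v), u - v>.

    For a convex phi this is nonnegative and zero iff u == v, making it a
    natural (asymmetric) loss induced by the geometry of phi.
    """
    return phi(u) - phi(v) - grad_phi(v) @ (u - v)

# Quadratic potential phi(u) = 0.5 ||u||^2: the divergence reduces to
# squared Euclidean error, i.e. the loss of the MMSE estimator.
phi = lambda u: 0.5 * (u @ u)
grad_phi = lambda u: u

u = np.array([1.0, -2.0, 0.5])
v = np.array([0.0, 1.0, 2.0])
d = bregman(phi, grad_phi, u, v)
print(d)  # equals 0.5 * ||u - v||^2 for this quadratic phi
```

Substituting a non-quadratic convex potential (e.g. a negative entropy, which yields a generalized KL divergence) changes the geometry and hence the canonical loss, which is the mechanism the abstract exploits for log-concave posteriors.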
Compressed Sensing over $\ell_p$-balls: Minimax Mean Square Error
We consider the compressed sensing problem, where the object $x_0 \in \mathbb{R}^N$
is to be recovered from incomplete measurements $y = A x_0 + z$; here the
sensing matrix $A$ is an $n \times N$ random matrix with iid Gaussian entries
and $n < N$. A popular method of sparsity-promoting reconstruction is
$\ell_1$-penalized least-squares reconstruction (aka LASSO, Basis Pursuit).
It is currently popular to consider the strict sparsity model, where the
object $x_0$ is nonzero in only a small fraction of entries. In this paper, we
instead consider the much more broadly applicable $\ell_p$-sparsity model,
where $x_0$ is sparse in the sense of having $\ell_p$ norm bounded by $\xi \cdot N^{1/p}$
for some fixed $0 < p \le 1$ and $\xi > 0$.
We study an asymptotic regime in which $n$ and $N$ both tend to infinity with
limiting ratio $n/N = \delta \in (0,1)$, both in the noisy ($z \neq 0$) and
noiseless ($z = 0$) cases. Under weak assumptions on $x_0$, we are able to
precisely evaluate the worst-case asymptotic minimax mean-squared
reconstruction error (AMSE) for $\ell_1$-penalized least-squares: min over
penalization parameters, max over $\ell_p$-sparse objects $x_0$. We exhibit the
asymptotically least-favorable object (hardest sparse signal to recover) and
the maximin penalization.
Our explicit formulas unexpectedly involve quantities appearing classically
in statistical decision theory. Occurring in the present setting, they reflect
a deeper connection between penalized $\ell_1$ minimization and scalar soft
thresholding. This connection, which follows from earlier work of the authors
and collaborators on the AMP iterative thresholding algorithm, is carefully
explained.
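The scalar soft-thresholding rule behind that connection can be sketched directly. The following is a generic illustration, not the paper's AMP analysis: it defines the thresholding nonlinearity and uses it inside plain ISTA (a standard iterative soft-thresholding algorithm for the LASSO; AMP refines it with an Onsager correction term). The problem sizes and seed are arbitrary.

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft thresholding: eta(x; lam) = sign(x) * max(|x| - lam, 0).

    This is the proximal operator of the l1 penalty: entries with
    |x| <= lam collapse to 0, all others shrink toward 0 by lam.
    """
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(1)
n, N = 50, 100                                  # undersampled: n < N
A = rng.normal(size=(n, N)) / np.sqrt(n)        # iid Gaussian sensing matrix
x_true = np.zeros(N)
x_true[:5] = [3.0, -2.0, 4.0, 1.5, -3.0]        # sparse object
y = A @ x_true                                  # noiseless measurements (z = 0)

# ISTA for the LASSO objective 0.5 * ||y - A x||^2 + lam * ||x||_1:
# gradient step on the quadratic term, then soft thresholding.
lam = 0.05
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
x = np.zeros(N)
for _ in range(500):
    x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)

print(np.linalg.norm(x - x_true))               # small: support is recovered
```

Each iterate applies the same scalar shrinkage to every coordinate, which is why the asymptotic analysis of LASSO reconstruction reduces to the decision-theoretic behaviour of scalar soft thresholding.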
Our approach also gives precise results under weak-$\ell_p$ ball coefficient
constraints, as we show here.
Comment: 41 pages, 11 pdf figures
- …