
    Estimation of phase noise in oscillators with colored noise sources

    In this letter we study the design of algorithms for estimation of phase noise (PN) with colored noise sources. A soft-input maximum a posteriori PN estimator and a modified soft-input extended Kalman smoother are proposed. The performance of the proposed algorithms is compared against that of estimators studied in the literature, in terms of the mean square error of PN estimation and the symbol error rate of the considered communication system. The comparisons show that considerable performance gains can be achieved by designing estimators that employ correct knowledge of the PN statistics.
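    For orientation, the sketch below shows a first-order extended Kalman tracker for the standard Wiener (random-walk) phase-noise model, a minimal relative of the extended Kalman smoother the abstract mentions. The model, the pilot-aided setup, and every parameter value are illustrative assumptions; the paper's soft-input processing and colored-noise statistics are not reproduced here.

        import numpy as np

        def ekf_phase_track(y, s, q, r):
            """First-order EKF for the Wiener model theta_k = theta_{k-1} + w_k,
            observed as y_k = s_k * exp(1j*theta_k) + n_k with known pilots s_k."""
            theta, p = 0.0, 1.0                  # phase estimate and its variance
            est = np.empty(len(y))
            for k in range(len(y)):
                p += q                           # predict: random-walk phase model
                g = abs(s[k]) ** 2
                # Phase-error signal: Im{y_k conj(s_k) e^{-j theta}} ~ g * sin(dtheta).
                innov = np.imag(y[k] * np.conj(s[k]) * np.exp(-1j * theta)) / g
                gain = p / (p + r / g)           # Kalman gain of the linearized model
                theta += gain * innov            # measurement update
                p *= 1.0 - gain
                est[k] = theta
            return est

        # Usage on synthetic data: BPSK-like pilots, Wiener phase noise.
        rng = np.random.default_rng(0)
        n = 400
        true_phase = np.cumsum(rng.normal(0.0, 0.05, n))
        pilots = rng.choice([-1.0, 1.0], n) * np.exp(1j * np.pi / 4)
        obs = pilots * np.exp(1j * true_phase) \
              + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
        phase_hat = ekf_phase_track(obs, pilots, q=0.05**2, r=0.05**2)

    A colored (rather than white) phase innovation would enter this recursion through an augmented state, which is one way to read the abstract's emphasis on using the correct PN statistics.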

    Revisiting maximum-a-posteriori estimation in log-concave models

    Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in imaging sciences, where high dimensionality is often addressed by using Bayesian models that are log-concave and whose posterior mode can be computed efficiently by convex optimisation. Despite its success and wide adoption, MAP estimation is not yet theoretically well understood. The prevalent view in the community is that MAP estimation is not proper Bayesian estimation in a decision-theoretic sense because it does not minimise a meaningful expected loss function (unlike the minimum mean squared error (MMSE) estimator, which minimises the mean squared loss). This paper addresses this theoretical gap by presenting a decision-theoretic derivation of MAP estimation in Bayesian models that are log-concave. A main novelty is that our analysis is based on differential geometry, and proceeds as follows. First, we use the underlying convex geometry of the Bayesian model to induce a Riemannian geometry on the parameter space. We then use differential geometry to identify the so-called natural or canonical loss function for Bayesian point estimation in that Riemannian manifold. For log-concave models, this canonical loss is the Bregman divergence associated with the negative log posterior density. We then show that the MAP estimator is the only Bayesian estimator that minimises the expected canonical loss, and that the posterior mean or MMSE estimator minimises the dual canonical loss. We also study MAP and MMSE estimation performance in high dimensions and establish a universal bound on the expected canonical error as a function of dimension, offering new insights into the good performance observed in convex problems. These results provide a new understanding of MAP and MMSE estimation in log-concave settings, and of the multiple roles that convex geometry plays in imaging problems.
    Comment: Accepted for publication in SIAM Journal on Imaging Sciences
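    For reference, the canonical loss the abstract refers to is the Bregman divergence of the convex potential $\phi(x) = -\log p(x \mid y)$ (convex precisely when the model is log-concave); the notation below follows the standard Bregman convention and may differ from the paper's:

        D_\phi(u, x) \;=\; \phi(u) - \phi(x) - \langle \nabla\phi(x),\, u - x \rangle,
        \qquad u, x \in \mathbb{R}^n .

    A classical companion fact (Banerjee et al., 2005) is that for any Bregman divergence the posterior mean minimises $v \mapsto \mathbb{E}[D_\phi(X, v) \mid y]$; the abstract's contribution is the complementary identification, with the MAP estimator minimising the expected canonical loss and the MMSE estimator minimising its dual.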

    Compressed Sensing over $\ell_p$-balls: Minimax Mean Square Error

    We consider the compressed sensing problem, where the object $x_0 \in \mathbb{R}^N$ is to be recovered from incomplete measurements $y = A x_0 + z$; here the sensing matrix $A$ is an $n \times N$ random matrix with iid Gaussian entries and $n < N$. A popular method of sparsity-promoting reconstruction is $\ell^1$-penalized least-squares reconstruction (aka LASSO, Basis Pursuit). It is currently popular to consider the strict sparsity model, where the object $x_0$ is nonzero in only a small fraction of entries. In this paper, we instead consider the much more broadly applicable $\ell_p$-sparsity model, where $x_0$ is sparse in the sense of having $\ell_p$ norm bounded by $\xi \cdot N^{1/p}$ for some fixed $0 < p \le 1$ and $\xi > 0$. We study an asymptotic regime in which $n$ and $N$ both tend to infinity with limiting ratio $n/N = \delta \in (0,1)$, both in the noisy ($z \neq 0$) and noiseless ($z = 0$) cases. Under weak assumptions on $x_0$, we are able to precisely evaluate the worst-case asymptotic minimax mean-squared reconstruction error (AMSE) for $\ell^1$-penalized least-squares: min over penalization parameters, max over $\ell_p$-sparse objects $x_0$. We exhibit the asymptotically least-favorable object (hardest sparse signal to recover) and the maximin penalization. Our explicit formulas unexpectedly involve quantities appearing classically in statistical decision theory. Occurring in the present setting, they reflect a deeper connection between penalized $\ell^1$ minimization and scalar soft thresholding. This connection, which follows from earlier work of the authors and collaborators on the AMP iterative thresholding algorithm, is carefully explained. Our approach also gives precise results under weak-$\ell_p$ ball coefficient constraints, as we show here.
    Comment: 41 pages, 11 PDF figures
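    The scalar soft-thresholding connection invoked in the abstract can be made concrete with a small sketch of the AMP iteration for $\ell^1$-penalized least squares. The threshold policy (a multiple of an estimated noise level) and all parameter values are illustrative assumptions, not the paper's tuned schedule.

        import numpy as np

        def soft_threshold(x, lam):
            """Scalar soft thresholding: eta(x; lam) = sign(x) * max(|x| - lam, 0)."""
            return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

        def amp_lasso(A, y, lam, iters=30):
            """Approximate message passing for y = A x0 + z with iid Gaussian A."""
            n, N = A.shape
            x = np.zeros(N)
            r = y.copy()
            for _ in range(iters):
                tau = np.linalg.norm(r) / np.sqrt(n)      # effective noise level
                x_new = soft_threshold(x + A.T @ r, lam * tau)
                # Onsager correction: this term is what separates AMP from
                # plain iterative soft thresholding.
                r = y - A @ x_new + (np.count_nonzero(x_new) / n) * r
                x = x_new
            return x

        # Usage on a small synthetic sparse instance.
        rng = np.random.default_rng(1)
        n, N, k = 100, 250, 10
        A = rng.normal(0.0, 1.0 / np.sqrt(n), (n, N))
        x0 = np.zeros(N)
        x0[rng.choice(N, k, replace=False)] = rng.normal(0.0, 3.0, k)
        y = A @ x0 + 0.05 * rng.normal(size=n)
        x_hat = amp_lasso(A, y, lam=1.5)

    In the state-evolution analysis underlying the abstract's formulas, each AMP iterate behaves like scalar soft thresholding applied to the signal plus Gaussian noise, which is how quantities from classical statistical decision theory enter the minimax AMSE.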