Partially Linear Estimation with Application to Sparse Signal Recovery From Measurement Pairs
We address the problem of estimating a random vector X from two sets of
measurements Y and Z, such that the estimator is linear in Y. We show that the
partially linear minimum mean squared error (PLMMSE) estimator does not require
knowing the joint distribution of X and Y in full, but rather only its
second-order moments. This renders it of potential interest in various
applications. We further show that the PLMMSE method is minimax-optimal among
all estimators that solely depend on the second-order statistics of X and Y. We
demonstrate our approach in the context of recovering a signal, which is sparse
in a unitary dictionary, from noisy observations of it and of a filtered
version of it. We show that in this setting PLMMSE estimation has a clear
computational advantage, while its performance is comparable to
state-of-the-art algorithms. We apply our approach both in static and dynamic
estimation applications. In the former category, we treat the problem of image
enhancement from blurred/noisy image pairs, where we show that PLMMSE
estimation performs only slightly worse than state-of-the-art algorithms, while
running an order of magnitude faster. In the dynamic setting, we provide a
recursive implementation of the estimator and demonstrate its utility in the
context of tracking maneuvering targets from position and acceleration
measurements.

Comment: 13 pages, 5 figures
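As a toy illustration of the key property the abstract highlights, namely estimation that depends only on second-order moments, a plain linear MMSE estimator can be sketched as follows. This is our own minimal example, not the paper's PLMMSE setup (which additionally conditions nonlinearly on a second measurement Z):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model (our assumption, not the paper's): scalar X with a
# noisy linear measurement Y. The linear MMSE estimate below uses only
# means, variances, and the cross-covariance of (X, Y).
n = 100_000
x = rng.normal(size=n)
y = x + 0.5 * rng.normal(size=n)          # noisy measurement of x

# Second-order moments estimated from samples
mu_x, mu_y = x.mean(), y.mean()
c_xy = np.cov(x, y)[0, 1]
c_yy = y.var()

# Linear MMSE estimate: x_hat = mu_x + (C_xy / C_yy) * (y - mu_y)
x_hat = mu_x + (c_xy / c_yy) * (y - mu_y)

mse_lmmse = np.mean((x - x_hat) ** 2)     # theoretical value here is 0.2
mse_raw = np.mean((x - y) ** 2)           # theoretical value here is 0.25
print(mse_lmmse < mse_raw)
```

No distributional assumptions beyond these moments enter the estimator, which is the feature that makes moment-only methods attractive in the applications the abstract mentions.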
Revisiting maximum-a-posteriori estimation in log-concave models
Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation
methodology in imaging sciences, where high dimensionality is often addressed
by using Bayesian models that are log-concave and whose posterior mode can be
computed efficiently by convex optimisation. Despite its success and wide
adoption, MAP estimation is not theoretically well understood yet. The
prevalent view in the community is that MAP estimation is not proper Bayesian
estimation in a decision-theoretic sense because it does not minimise a
meaningful expected loss function (unlike the minimum mean squared error (MMSE)
estimator that minimises the mean squared loss). This paper addresses this
theoretical gap by presenting a decision-theoretic derivation of MAP estimation
in Bayesian models that are log-concave. A main novelty is that our analysis is
based on differential geometry, and proceeds as follows. First, we use the
underlying convex geometry of the Bayesian model to induce a Riemannian
geometry on the parameter space. We then use differential geometry to identify
the so-called natural or canonical loss function to perform Bayesian point
estimation in that Riemannian manifold. For log-concave models, this canonical
loss is the Bregman divergence associated with the negative log posterior
density. We then show that the MAP estimator is the only Bayesian estimator
that minimises the expected canonical loss, and that the posterior mean or MMSE
estimator minimises the dual canonical loss. We also study the question of MAP
and MMSE estimation performance in high dimensions and establish a universal
bound on the expected canonical error as a function of dimension, offering new
insights into the good performance observed in convex problems. These results
provide a new understanding of MAP and MMSE estimation in log-concave settings,
and of the multiple roles that convex geometry plays in imaging problems.

Comment: Accepted for publication in the SIAM Journal on Imaging Sciences
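The canonical-loss claim can be sanity-checked in the simplest log-concave case. For a quadratic potential (a Gaussian posterior, where MAP and MMSE coincide), the Bregman divergence associated with the negative log density reduces to a squared Mahalanobis distance. A minimal sketch, with our own illustrative names:

```python
import numpy as np

# Bregman divergence D_phi(u, v) = phi(u) - phi(v) - <grad phi(v), u - v>
# for a convex potential phi. With phi(x) = 0.5 * x' A x (A the precision
# matrix of a Gaussian), it equals 0.5 * (u - v)' A (u - v).

def bregman(phi, grad_phi, u, v):
    """Bregman divergence of phi between points u and v."""
    return phi(u) - phi(v) - grad_phi(v) @ (u - v)

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])                 # symmetric positive-definite
phi = lambda x: 0.5 * x @ A @ x
grad_phi = lambda x: A @ x

u = np.array([1.0, -1.0])
v = np.array([0.5, 0.25])

d = bregman(phi, grad_phi, u, v)
mahalanobis = 0.5 * (u - v) @ A @ (u - v)
print(np.isclose(d, mahalanobis))          # equal for quadratic potentials
```

For non-quadratic log-concave potentials the divergence is asymmetric, which is consistent with MAP and MMSE minimising the canonical loss and its dual, respectively.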
Regularized Block Toeplitz Covariance Matrix Estimation via Kronecker Product Expansions
In this work we consider the estimation of spatio-temporal covariance
matrices in the low sample non-Gaussian regime. We impose covariance structure
in the form of a sum of Kronecker products decomposition (Tsiligkaridis et al.
2013, Greenewald et al. 2013) with diagonal correction (Greenewald et al.),
which we refer to as DC-KronPCA, in the estimation of multiframe covariance
matrices. This paper extends the approaches of (Tsiligkaridis et al.) in two
directions. First, we modify the diagonally corrected method of (Greenewald et
al.) to include a block Toeplitz constraint imposing temporal stationarity
structure. Second, we improve the conditioning of the estimate in the very low
sample regime by using Ledoit-Wolf type shrinkage regularization similar to
(Chen, Hero et al. 2010). For improved robustness to heavy tailed
distributions, we modify the KronPCA to incorporate robust shrinkage estimation
(Chen, Hero et al. 2011). Results of numerical simulations establish benefits
in terms of estimation MSE when compared to previous methods. Finally, we apply
our methods to a real-world network spatio-temporal anomaly detection problem
and achieve superior results.

Comment: To appear at IEEE SSP 2014; 4 pages
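The sum-of-Kronecker-products idea underlying KronPCA can be sketched via the Pitsianis–Van Loan rearrangement: the best rank-r Kronecker-sum fit to a pq x pq matrix comes from the top-r SVD terms of a rearranged matrix. This is our own illustrative implementation, not the authors' DC-KronPCA code (no diagonal correction, Toeplitz constraint, or shrinkage):

```python
import numpy as np

def rearrange(S, p, q):
    """Map a (p*q, p*q) matrix to (p*p, q*q): row (i, j) holds block (i, j)."""
    R = np.empty((p * p, q * q))
    for i in range(p):
        for j in range(p):
            R[i * p + j] = S[i * q:(i + 1) * q, j * q:(j + 1) * q].ravel()
    return R

def kron_sum_approx(S, p, q, r):
    """Approximate S by sum_k kron(A_k, B_k), k < r, via truncated SVD."""
    U, s, Vt = np.linalg.svd(rearrange(S, p, q))
    approx = np.zeros_like(S)
    for k in range(r):
        A = np.sqrt(s[k]) * U[:, k].reshape(p, p)
        B = np.sqrt(s[k]) * Vt[k].reshape(q, q)
        approx += np.kron(A, B)
    return approx

# Sanity check: a single Kronecker product is recovered exactly with r = 1
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(4, 4))
S = np.kron(A, B)
err = np.linalg.norm(S - kron_sum_approx(S, 3, 4, 1))
print(err < 1e-10)
```

In the spatio-temporal setting of the abstract, A_k would play the temporal role and B_k the spatial one; the cited works add the regularization and constraints on top of this core decomposition.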
Multi-Step Knowledge-Aided Iterative ESPRIT for Direction Finding
In this work, we propose a subspace-based algorithm for DOA estimation which
iteratively reduces the disturbance factors of the estimated data covariance
matrix and incorporates prior knowledge that is gradually obtained online. An
analysis of the MSE of the reshaped data covariance matrix is carried out along
with comparisons between computational complexities of the proposed and
existing algorithms. Simulations with closely spaced sources, both uncorrelated
and correlated, illustrate the improvements achieved.

Comment: 7 figures. arXiv admin note: text overlap with arXiv:1703.1052
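For context on the subspace machinery the abstract builds on, plain (non-iterative, non-knowledge-aided) ESPRIT on a half-wavelength uniform linear array can be sketched as follows. The scenario and names are our own toy assumptions, not the proposed multi-step algorithm:

```python
import numpy as np

def esprit_doa(X, n_sources):
    """Estimate DOAs (radians) from snapshots X of shape (sensors, snapshots)."""
    R = X @ X.conj().T / X.shape[1]           # sample covariance
    w, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
    Us = V[:, -n_sources:]                    # signal subspace
    # Rotational invariance between the two overlapping subarrays
    Phi = np.linalg.pinv(Us[:-1]) @ Us[1:]
    eigs = np.linalg.eigvals(Phi)
    # For d = lambda/2 spacing: phase of each eigenvalue is pi * sin(theta)
    return np.sort(np.arcsin(np.angle(eigs) / np.pi))

# Toy scenario (our assumption): 8 sensors, two uncorrelated sources,
# noiseless snapshots, so the estimates should be essentially exact.
m, n_snap = 8, 200
thetas = np.deg2rad([-20.0, 15.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(thetas)))
rng = np.random.default_rng(2)
s = rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))
X = A @ s
est = np.rad2deg(esprit_doa(X, 2))            # close to [-20, 15] degrees
print(est)
```

The knowledge-aided variant in the paper replaces the raw sample covariance R with an iteratively reshaped estimate, which is where the claimed gains for closely spaced sources come from.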