Analysis of Fisher Information and the Cramér-Rao Bound for Nonlinear Parameter Estimation after Compressed Sensing
In this paper, we analyze the impact of compressed sensing with complex
random matrices on Fisher information and the Cramér-Rao Bound (CRB) for
estimating unknown parameters in the mean value function of a complex
multivariate normal distribution. We consider the class of random compression
matrices whose distribution is right-orthogonally invariant. The compression
matrix whose elements are i.i.d. standard normal random variables is one such
matrix. We show that for all such compression matrices, the Fisher information
matrix has a complex matrix beta distribution. We also derive the distribution
of the CRB. These distributions can be used to quantify the loss in the CRB as
a function of the Fisher information of the non-compressed data. In our numerical
examples, we consider a direction of arrival estimation problem and discuss the
use of these distributions as guidelines for choosing compression ratios based
on the resulting loss in the CRB.
Comment: 12 pages, 3 figures
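The scalar-parameter version of this result is easy to probe numerically. Below is a minimal Python sketch (an illustration, not the paper's derivation) that Monte Carlos the Fisher-information loss for a single direction-of-arrival parameter under i.i.d. complex Gaussian compression; the array size n, compressed dimension m, and angle are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 64, 16            # ambient dimension and compressed dimension (illustrative)
theta = 0.3              # true direction of arrival in radians (illustrative)

# Steering vector of a uniform linear array and its derivative w.r.t. theta.
k = np.arange(n)
a = np.exp(1j * np.pi * k * np.sin(theta))
da = 1j * np.pi * k * np.cos(theta) * a          # d a(theta) / d theta

# Fisher information for a scalar parameter in the mean of CN(mu(theta), I)
# is J = 2 Re(da^H da); the CRB is 1 / J.
J_full = 2.0 * np.real(np.vdot(da, da))

# Compressing with Phi gives Phi y ~ CN(Phi mu, Phi Phi^H), so
# J_c = 2 Re(da^H Phi^H (Phi Phi^H)^{-1} Phi da).
ratios = []
for _ in range(2000):
    Phi = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    proj = Phi.conj().T @ np.linalg.solve(Phi @ Phi.conj().T, Phi @ da)
    J_c = 2.0 * np.real(np.vdot(da, proj))
    ratios.append(J_c / J_full)

ratios = np.array(ratios)
print(f"mean retained Fisher information: {ratios.mean():.3f} (m/n = {m/n:.3f})")
print(f"median CRB inflation factor:      {np.median(1.0 / ratios):.2f}x")
```

For a single parameter the retained fraction J_c/J_full is beta distributed with mean m/n, so the printed mean should hover near the compression ratio, consistent with the matrix-beta picture described above.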
Statistical Mechanics of High-Dimensional Inference
To model modern large-scale datasets, we need efficient algorithms to infer a
set of P unknown model parameters from N noisy measurements. What are
fundamental limits on the accuracy of parameter inference, given finite
signal-to-noise ratios, limited measurements, prior information, and
computational tractability requirements? How can we combine prior information
with measurements to achieve these limits? Classical statistics gives incisive
answers to these questions as the measurement density α = N/P → ∞. However,
these classical results are not relevant to modern high-dimensional inference
problems, which instead occur at finite α. We formulate and analyze
high-dimensional inference as a
problem in the statistical physics of quenched disorder. Our analysis uncovers
fundamental limits on the accuracy of inference in high dimensions, and reveals
that widely cherished inference algorithms like maximum likelihood (ML) and
maximum-a posteriori (MAP) inference cannot achieve these limits. We further
find optimal, computationally tractable algorithms that can achieve these
limits. Intriguingly, in high dimensions, these optimal algorithms become
computationally simpler than MAP and ML, while still outperforming them. For
example, such optimal algorithms can lead to as much as a 20% reduction in the
amount of data to achieve the same performance relative to MAP. Moreover, our
analysis reveals simple relations between optimal high dimensional inference
and low dimensional scalar Bayesian inference, insights into the nature of
generalization and predictive power in high dimensions, information theoretic
limits on compressed sensing, phase transitions in quadratic inference, and
connections to central mathematical objects in convex optimization theory and
random matrix theory.
Comment: See http://ganguli-gang.stanford.edu/pdf/HighDimInf.Supp.pdf for
supplementary material
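The finite-α regime is simple to simulate in the most basic Gaussian special case. The Python sketch below (illustrative sizes) compares ML, i.e. ordinary least squares, with MAP under a Gaussian prior, i.e. ridge regression, at α = 2. Note that with a Gaussian prior and Gaussian noise, MAP coincides with the Bayes-optimal posterior mean, so this only illustrates the ML gap; the suboptimality of MAP itself, and the roughly 20% data savings quoted above, arise for the non-Gaussian priors and noise treated in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

P = 200                        # number of unknown parameters
alpha = 2.0                    # measurement density alpha = N / P
N = int(alpha * P)
sigma = 1.0                    # noise standard deviation

w_true = rng.standard_normal(P)                 # drawn from the assumed N(0, I) prior
X = rng.standard_normal((N, P)) / np.sqrt(N)    # measurement matrix
y = X @ w_true + sigma * rng.standard_normal(N)

# Maximum likelihood = ordinary least squares.
w_ml = np.linalg.lstsq(X, y, rcond=None)[0]

# MAP under the N(0, I) prior = ridge regression with weight sigma^2.
w_map = np.linalg.solve(X.T @ X + sigma**2 * np.eye(P), X.T @ y)

def mse(w):
    return np.mean((w - w_true) ** 2)

print(f"alpha = {alpha}: MSE(ML) = {mse(w_ml):.3f}, MSE(MAP) = {mse(w_map):.3f}")
```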
Compressed matched filter for non-Gaussian noise
We consider estimation of a deterministic unknown parameter vector in a
linear model with non-Gaussian noise. In the Gaussian case, dimensionality
reduction via a linear matched filter provides a simple low dimensional
sufficient statistic which can be easily communicated and/or stored for future
inference. Such a statistic is usually unknown in the general non-Gaussian
case. Instead, we propose a hybrid matched filter coupled with a randomized
compressed sensing procedure, which together create a low dimensional
statistic. We also derive a complementary algorithm for robust reconstruction
given this statistic. Our recovery method is based on the fast iterative
shrinkage-thresholding algorithm (FISTA), which is used for outlier rejection
given the compressed data. We demonstrate the advantages of the proposed
framework using synthetic simulations.
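As a rough sketch of the FISTA-based outlier-rejection idea (a generic illustration, not the authors' exact hybrid compressed statistic), the Python snippet below models impulsive noise as a sparse vector s, eliminates the linear parameters by projecting onto the orthogonal complement of the signal subspace, recovers s with FISTA, and then re-estimates the parameters by least squares; all sizes and the penalty weight lam are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

n, p = 256, 4                      # samples and unknown parameters (illustrative)
H = rng.standard_normal((n, p))
x_true = rng.standard_normal(p)

# Non-Gaussian noise: a Gaussian background plus a few large outliers.
noise = 0.05 * rng.standard_normal(n)
s_true = np.zeros(n)
idx = rng.choice(n, size=8, replace=False)
s_true[idx] = 5.0 * rng.standard_normal(8)
y = H @ x_true + noise + s_true

def soft(v, t):
    """Soft thresholding: proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Eliminate x by projecting onto the orthogonal complement of range(H), then
# run FISTA on min_s 0.5*||P(y - s)||^2 + lam*||s||_1 (gradient Lipschitz const 1).
Q, _ = np.linalg.qr(H)
P = np.eye(n) - Q @ Q.T
lam = 0.1                          # illustrative penalty weight
s = np.zeros(n)
z, t = s.copy(), 1.0
for _ in range(200):
    s_new = soft(z - P @ (z - y), lam)
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    z = s_new + ((t - 1.0) / t_new) * (s_new - s)
    s, t = s_new, t_new

# Outlier-corrected least-squares estimate of the parameters.
x_hat = np.linalg.lstsq(H, y - s, rcond=None)[0]
print("parameter estimation error:", np.linalg.norm(x_hat - x_true))
```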
Rank-based model selection for multiple ions quantum tomography
The statistical analysis of measurement data has become a key component of
many quantum engineering experiments. As standard full state tomography becomes
unfeasible for large dimensional quantum systems, one needs to exploit prior
information and the "sparsity" properties of the experimental state in order to
reduce the dimensionality of the estimation problem. In this paper we propose
model selection as a general principle for finding the simplest, or most
parsimonious, explanation of the data, by fitting different models and choosing
the estimator with the best trade-off between likelihood fit and model
complexity. We apply two well established model selection methods -- the Akaike
information criterion (AIC) and the Bayesian information criterion (BIC) -- to
models consisting of states of fixed rank and datasets of the kind currently
produced in multiple-ion experiments. We test the performance of AIC and BIC
on randomly chosen low-rank states of 4 ions, and study the dependence of the
selected rank on the number of measurement repetitions for one-ion states. We
then apply the methods to real data from a 4-ion experiment aimed at creating
a Smolin state of rank 4. The two methods indicate that the optimal model for
describing the data lies between ranks 6 and 9, and the Pearson χ² test
is applied to validate this conclusion. Additionally we find that the mean
square error of the maximum likelihood estimator for pure states is close to
that of the optimal over all possible measurements.
Comment: 24 pages, 6 figures, 3 tables
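The AIC/BIC bookkeeping for the fixed-rank models is straightforward: a rank-r density matrix on a d-dimensional Hilbert space has 2dr − r² − 1 real parameters (2d − 2 for a pure state, d² − 1 at full rank), and each criterion penalizes the maximized log-likelihood by that count. The Python sketch below shows the computation; the log-likelihood values are hypothetical placeholders standing in for rank-constrained maximum-likelihood fits to 4-ion data (d = 16).

```python
import numpy as np

def rank_r_params(d, r):
    """Real free parameters of a rank-r density matrix on a d-dim Hilbert space:
    2*d*r - r**2 - 1 (pure state: 2d - 2; full rank: d**2 - 1)."""
    return 2 * d * r - r * r - 1

def aic(loglik, k):
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n_data):
    return -2.0 * loglik + k * np.log(n_data)

# Hypothetical maximized log-likelihoods for rank-r fits to 4-ion data (d = 16);
# in practice these come from rank-constrained maximum-likelihood tomography.
d, n_data = 16, 100_000
logliks = {1: -152000.0, 2: -151200.0, 4: -150900.0, 8: -150850.0, 16: -150845.0}

for r, ll in logliks.items():
    k = rank_r_params(d, r)
    print(f"rank {r:2d}: k = {k:3d}  AIC = {aic(ll, k):10.1f}  BIC = {bic(ll, k, n_data):10.1f}")
```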
Phase Retrieval From Binary Measurements
We consider the problem of signal reconstruction from quadratic measurements
that are encoded as +1 or -1 depending on whether they exceed a predetermined
positive threshold or not. Binary measurements are fast to acquire and
inexpensive in terms of hardware. We formulate the problem of signal
reconstruction using a consistency criterion, wherein one seeks to find a
signal that is in agreement with the measurements. To enforce consistency, we
construct a convex cost using a one-sided quadratic penalty and minimize it
using an iterative accelerated projected gradient descent (APGD) technique.
The plain PGD scheme reduces the cost function in each iteration;
incorporating momentum into PGD forfeits this descent property but empirically
exhibits faster convergence. We refer to the resulting
algorithm as binary phase retrieval (BPR). Considering additive white noise
contamination prior to quantization, we also derive the Cramér-Rao Bound (CRB)
for the binary encoding model. Experimental results demonstrate that the BPR
algorithm yields a signal-to-reconstruction error ratio (SRER) of
approximately 25 dB in the absence of noise. In the presence of noise prior to
quantization, the SRER is within 2 to 3 dB of the CRB.
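A minimal Python sketch of the consistency formulation (real-valued, with an assumed unit-ball constraint and a heuristic step size, not the paper's exact BPR implementation): each measurement is encoded as b_i = sign((a_i·x)² − τ), violations are penalized with a one-sided quadratic, and the cost is minimized by projected gradient descent with momentum.

```python
import numpy as np

rng = np.random.default_rng(3)

n, m = 64, 1024                     # signal dimension, number of measurements
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((m, n))
tau = 1.0                           # binarization threshold (assumed known)

# Binary measurements: +1 if the quadratic measurement exceeds tau, else -1.
b = np.where((A @ x_true) ** 2 > tau, 1.0, -1.0)

def grad(x):
    """Gradient of the one-sided quadratic consistency cost
    sum_i max(0, b_i * (tau - (a_i.x)^2))^2."""
    z = A @ x
    viol = np.maximum(0.0, b * (tau - z ** 2))  # nonzero only for inconsistent i
    return A.T @ (-4.0 * viol * b * z)

# Projected gradient descent with momentum; the unit-ball constraint and the
# step-size heuristic are assumptions of this sketch.
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
v, t = x.copy(), 1.0
step = 1.0 / (4.0 * np.linalg.norm(A, 2) ** 2)
for _ in range(2000):
    x_new = v - step * grad(v)
    x_new /= max(1.0, np.linalg.norm(x_new))    # project onto the unit ball
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    v = x_new + ((t - 1.0) / t_new) * (x_new - x)
    x, t = x_new, t_new

# The global sign of x is unidentifiable from quadratic measurements.
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
print(f"relative reconstruction error: {err:.3f}")
```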