On the Phase Transition of Corrupted Sensing
In \cite{FOY2014}, a sharp phase transition has been numerically observed
when a constrained convex procedure is used to solve the corrupted sensing
problem. In this paper, we present a theoretical analysis of this phenomenon.
Specifically, we establish the threshold below which this convex procedure
fails to recover the signal and corruption with high probability. Together with the
work in \cite{FOY2014}, we prove that a sharp phase transition occurs around
the sum of the squares of spherical Gaussian widths of two tangent cones.
Numerical experiments are provided to demonstrate the correctness and sharpness
of our results.
Comment: To appear in Proceedings of IEEE International Symposium on Information Theory 201
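The threshold quantity in this abstract can be made concrete numerically: the squared spherical Gaussian width of a cone $C$ is tightly sandwiched by its statistical dimension $\delta(C)=\mathbb{E}\,\|\Pi_C(g)\|^2$ for $g\sim\mathcal{N}(0,I_n)$, which is easy to estimate by Monte Carlo. A minimal sketch, using the nonnegative orthant as a stand-in cone (my choice for illustration, not a cone from the paper; its statistical dimension is exactly $n/2$):

```python
import numpy as np

# Monte Carlo estimate of the statistical dimension delta(C) = E ||Pi_C(g)||^2,
# which tightly sandwiches the squared spherical Gaussian width of C.
# Example cone (illustrative assumption): the nonnegative orthant in R^n,
# whose Euclidean projection is coordinatewise clipping at zero and whose
# statistical dimension is exactly n / 2.
rng = np.random.default_rng(1)
n, trials = 50, 20_000
g = rng.standard_normal((trials, n))
proj = np.maximum(g, 0.0)                      # projection onto the orthant
delta_hat = (proj ** 2).sum(axis=1).mean()     # should concentrate near n / 2
```

For the descent cones of structured-recovery problems, only the projection step changes; the same estimator then approximates the widths whose squared sum locates the phase transition.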
A robust parallel algorithm for combinatorial compressed sensing
In previous work two of the authors have shown that a vector $x$ with at most
$k$ nonzeros can be recovered from an expander sketch $Ax$, via the
Parallel-$\ell_0$ decoding algorithm, in a number of operations governed by the
number of nonzero entries in $Ax$. In this paper we
present the Robust-$\ell_0$ decoding algorithm, which robustifies
Parallel-$\ell_0$ when the sketch is corrupted by additive noise. This
robustness is achieved by approximating the asymptotic posterior distribution
of values in the sketch given its corrupted measurements. We provide analytic
expressions that approximate these posteriors under the assumptions that the
nonzero entries in the signal and the noise are drawn from continuous
distributions. Numerical experiments show that Robust-$\ell_0$ is
superior to existing greedy and combinatorial compressed sensing algorithms at
small to moderate signal-to-noise ratios in the setting of Gaussian signals and
Gaussian additive noise.
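The posterior approximation this abstract describes can be illustrated in the simplest scalar case. The model below is my hypothetical stand-in, not the paper's exact expressions: a sketch entry is $s+n$, where $s$ is zero with probability $1-\rho$ and Gaussian otherwise, and $n$ is Gaussian noise; Bayes' rule then gives the posterior probability that the underlying entry is nonzero.

```python
import numpy as np

def posterior_nonzero(y, rho=0.1, sig_s=1.0, sig_n=0.1):
    """P(s != 0 | y) under a hypothetical Bernoulli-Gaussian model:
    s = 0 w.p. 1 - rho, s ~ N(0, sig_s^2) w.p. rho, and y = s + N(0, sig_n^2)."""
    var0 = sig_n ** 2                # variance of y given s == 0
    var1 = sig_s ** 2 + sig_n ** 2   # variance of y given s != 0
    like0 = np.exp(-y ** 2 / (2 * var0)) / np.sqrt(2 * np.pi * var0)
    like1 = np.exp(-y ** 2 / (2 * var1)) / np.sqrt(2 * np.pi * var1)
    return rho * like1 / (rho * like1 + (1 - rho) * like0)
```

Small observed values are explained by noise alone, so the posterior weight on "nonzero" is small; large values make the nonzero hypothesis overwhelmingly likely. A robust decoder can use such weights to decide which sketch entries to trust.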
Corrupted Sensing with Sub-Gaussian Measurements
This paper studies the problem of accurately recovering a structured signal
from a small number of corrupted sub-Gaussian measurements. We consider three
different procedures to reconstruct signal and corruption when different kinds
of prior knowledge are available. In each case, we provide conditions for
stable signal recovery from structured corruption with added unstructured
noise. The key ingredient in our analysis is an extended matrix deviation
inequality for isotropic sub-Gaussian matrices.
Comment: To appear in Proceedings of IEEE International Symposium on Information Theory 201
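A penalized convex program of the kind analyzed in these corrupted-sensing abstracts can be sketched with a plain proximal-gradient (ISTA) loop. All sizes, penalty weights, and the Gaussian measurement ensemble below are my illustrative choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 60, 120                                   # hypothetical dimensions
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix

x_true = np.zeros(n)
x_true[rng.choice(n, 3, replace=False)] = 3.0    # sparse signal
v_true = np.zeros(m)
v_true[rng.choice(m, 2, replace=False)] = 5.0    # sparse corruption
y = Phi @ x_true + v_true

# ISTA for: minimize 0.5 * ||y - Phi x - v||^2 + lam * (||x||_1 + ||v||_1)
lam = 0.02
L = np.linalg.norm(np.hstack([Phi, np.eye(m)]), 2) ** 2  # gradient Lipschitz const.

def soft(z, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x, v = np.zeros(n), np.zeros(m)
for _ in range(3000):
    r = Phi @ x + v - y                  # residual
    x = soft(x - Phi.T @ r / L, lam / L)
    v = soft(v - r / L, lam / L)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With enough measurements relative to the two sparsity levels, both the signal and the corruption are recovered to small relative error, which is the regime the stability conditions in the abstract characterize.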
Statistical Mechanics of High-Dimensional Inference
To model modern large-scale datasets, we need efficient algorithms to infer a
set of unknown model parameters from noisy measurements. What are the
fundamental limits on the accuracy of parameter inference, given finite
signal-to-noise ratios, limited measurements, prior information, and
computational tractability requirements? How can we combine prior information
with measurements to achieve these limits? Classical statistics gives incisive
answers to these questions in the limit where the measurement density (the
number of measurements per unknown parameter) grows without bound. However,
these classical results are not relevant to modern high-dimensional inference
problems, which instead occur at finite measurement density. We formulate and
analyze high-dimensional inference as a
problem in the statistical physics of quenched disorder. Our analysis uncovers
fundamental limits on the accuracy of inference in high dimensions, and reveals
that widely cherished inference algorithms like maximum likelihood (ML) and
maximum a posteriori (MAP) inference cannot achieve these limits. We further
find optimal, computationally tractable algorithms that can achieve these
limits. Intriguingly, in high dimensions, these optimal algorithms become
computationally simpler than MAP and ML, while still outperforming them. For
example, such optimal algorithms can lead to as much as a 20% reduction in the
amount of data needed to achieve the same performance as MAP. Moreover, our
analysis reveals simple relations between optimal high dimensional inference
and low dimensional scalar Bayesian inference, insights into the nature of
generalization and predictive power in high dimensions, information theoretic
limits on compressed sensing, phase transitions in quadratic inference, and
connections to central mathematical objects in convex optimization theory and
random matrix theory.
Comment: See http://ganguli-gang.stanford.edu/pdf/HighDimInf.Supp.pdf for supplementary material
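The abstract's reduction of optimal high-dimensional inference to scalar Bayesian inference is easy to demonstrate in the scalar setting itself. The Bernoulli-Gaussian prior, noise level, and hard-threshold baseline below are my assumptions for illustration, not the paper's model; the point is only that the posterior mean (the Bayes-optimal, MMSE estimator) beats both the raw measurement and a threshold rule:

```python
import numpy as np

rng = np.random.default_rng(2)
rho, sig_n, N = 0.2, 0.5, 200_000
# Bernoulli-Gaussian signal: zero w.p. 1 - rho, standard Gaussian otherwise.
s = (rng.random(N) < rho) * rng.standard_normal(N)
y = s + sig_n * rng.standard_normal(N)       # noisy scalar measurements

# Posterior mean under the (assumed known) prior; constants cancel in the ratio.
var0, var1 = sig_n ** 2, 1.0 + sig_n ** 2
like0 = np.exp(-y ** 2 / (2 * var0)) / np.sqrt(var0)
like1 = np.exp(-y ** 2 / (2 * var1)) / np.sqrt(var1)
w = rho * like1 / (rho * like1 + (1 - rho) * like0)   # P(s != 0 | y)
s_mmse = w * y / (1.0 + sig_n ** 2)                    # E[s | y]
s_hard = y * (np.abs(y) > 2 * sig_n)                   # naive hard threshold

def mse(est):
    return float(np.mean((est - s) ** 2))
```

Running this, `mse(s_mmse)` comes out below both `mse(y)` and `mse(s_hard)`, mirroring the abstract's claim that simple Bayes-optimal scalar rules outperform cruder point estimates.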