On the Phase Transition of Corrupted Sensing
In \cite{FOY2014}, a sharp phase transition has been numerically observed
when a constrained convex procedure is used to solve the corrupted sensing
problem. In this paper, we present a theoretical analysis of this phenomenon.
Specifically, we establish the threshold below which this convex procedure
fails to recover the signal and corruption with high probability. Together with the
work in \cite{FOY2014}, we prove that a sharp phase transition occurs around
the sum of the squares of spherical Gaussian widths of two tangent cones.
Numerical experiments are provided to demonstrate the correctness and sharpness
of our results.
Comment: To appear in Proceedings of IEEE International Symposium on Information Theory 201
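The procedure analyzed above jointly recovers a sparse signal and a sparse corruption from linear measurements. As a rough illustration of the setup (not the paper's exact constrained program), the sketch below runs proximal gradient descent (ISTA) on a closely related penalized formulation; the regularization weights and problem sizes are arbitrary illustrative choices:

```python
import numpy as np

def soft_threshold(z, t):
    """Entrywise soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def corrupted_sensing_ista(y, A, lam_x, lam_v, n_iter=1000):
    """Estimate a sparse signal x and sparse corruption v from y = A @ x + v
    by proximal gradient descent on
        0.5*||y - A x - v||^2 + lam_x*||x||_1 + lam_v*||v||_1.
    """
    m, n = A.shape
    x, v = np.zeros(n), np.zeros(m)
    # Step size from the Lipschitz constant of the smooth term,
    # i.e. the squared spectral norm of the stacked operator [A, I].
    L = np.linalg.norm(np.hstack([A, np.eye(m)]), 2) ** 2
    step = 1.0 / L
    for _ in range(n_iter):
        r = A @ x + v - y                                  # residual
        x = soft_threshold(x - step * (A.T @ r), step * lam_x)
        v = soft_threshold(v - step * r, step * lam_v)
    return x, v
```

Each iteration takes one gradient step on the quadratic fit term and then shrinks both blocks toward zero, which is what drives the recovered signal and corruption to be sparse.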
Corrupted Sensing with Sub-Gaussian Measurements
This paper studies the problem of accurately recovering a structured signal
from a small number of corrupted sub-Gaussian measurements. We consider three
different procedures to reconstruct the signal and corruption when different kinds
of prior knowledge are available. In each case, we provide conditions for
stable signal recovery from structured corruption with added unstructured
noise. The key ingredient in our analysis is an extended matrix deviation
inequality for isotropic sub-Gaussian matrices.
Comment: To appear in Proceedings of IEEE International Symposium on Information Theory 201
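As a rough numerical illustration of the concentration phenomenon behind such a deviation inequality, the sketch below draws an isotropic Rademacher matrix (a canonical sub-Gaussian ensemble) and checks that ||Ax||_2 stays close to sqrt(m) for a unit vector x; the ensemble, dimensions, and seed are arbitrary choices, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 400, 50
# Rademacher (+/-1) entries: a canonical isotropic sub-Gaussian ensemble.
A = rng.choice([-1.0, 1.0], size=(m, n))
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                      # unit vector
# E ||Ax||_2^2 = m, so ||Ax||_2 should concentrate around sqrt(m);
# matrix deviation inequalities bound the fluctuation | ||Ax||_2 - sqrt(m) |.
deviation = abs(np.linalg.norm(A @ x) - np.sqrt(m))
```

The extended inequality in the paper controls this deviation uniformly over a structured set of vectors x, not just a single point.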
Almost Lossless Analog Signal Separation
We propose an information-theoretic framework for analog signal separation.
Specifically, we consider the problem of recovering two analog signals from a
noiseless sum of linear measurements of the signals. Our framework is inspired
by the groundbreaking work of Wu and Verd\'u (2010) on almost lossless analog
compression. The main results of the present paper are a general achievability
bound for the compression rate in the analog signal separation problem, an
exact expression for the optimal compression rate in the case of signals that
have mixed discrete-continuous distributions, and a new technique for showing
that the intersection of generic subspaces with subsets of sufficiently small
Minkowski dimension is empty. This technique can also be applied to obtain a
simplified proof of a key result in Wu and Verd\'u (2010).
Comment: To be presented at IEEE Int. Symp. Inf. Theory 2013, Istanbul, Turkey
Guarantees on learning depth-2 neural networks under a data-poisoning attack
In recent times many state-of-the-art machine learning models have been shown
to be fragile to adversarial attacks. In this work we attempt to build our
theoretical understanding of adversarially robust learning with neural nets. We
exhibit a specific class of finite-size neural networks and a
non-gradient stochastic algorithm that recovers the weights of the net
generating the realizable true labels, in the presence of an oracle applying a
bounded amount of malicious additive distortion to the labels. We prove (nearly
optimal) trade-offs among the magnitude of the adversarial attack, the accuracy,
and the confidence achieved by the proposed algorithm.
Comment: 11 pages
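The threat model described above can be made concrete: labels come from a realizable depth-2 ReLU network, and an adversarial oracle perturbs each label by a bounded additive amount. A minimal sketch of generating such poisoned data (network sizes and the corruption budget theta are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
d, width, n = 5, 4, 200

# A fixed depth-2 ReLU network: y = a . relu(W x).
W = rng.standard_normal((width, d))
a = np.ones(width) / width

X = rng.standard_normal((n, d))
clean = np.maximum(X @ W.T, 0.0) @ a        # realizable true labels

# The oracle may distort each label by at most theta in absolute value.
theta = 0.1
y = clean + rng.uniform(-theta, theta, size=n)
```

A learner in this setting only sees (X, y) and the budget theta, and the question the paper addresses is how well the generating weights can still be recovered.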
Robust Lasso-Zero for sparse corruption and model selection with missing covariates
We propose Robust Lasso-Zero, an extension of the Lasso-Zero methodology
[Descloux and Sardy, 2018], initially introduced for sparse linear models, to
the sparse corruptions problem. We give theoretical guarantees on the sign
recovery of the parameters for a slightly simplified version of the estimator,
called Thresholded Justice Pursuit. The use of Robust Lasso-Zero is showcased
for variable selection with missing values in the covariates. In addition to
requiring neither a model for the covariates nor estimates of their covariance
matrix or the noise variance, the method has the notable advantage of handling
missing-not-at-random values without specifying a
parametric model. Numerical experiments and a medical application underline the
relevance of Robust Lasso-Zero in such a context with few available
competitors. The method is easy to use and is implemented in the R library lass0.
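Thresholded Justice Pursuit, like Lasso-Zero, finishes by thresholding an estimate and reading off its sign pattern for support and sign recovery. A minimal sketch of that final step only (the helper name and threshold value are illustrative, not from the paper or the lass0 package):

```python
import numpy as np

def thresholded_signs(beta_hat, tau):
    """Hard-threshold an estimate and return its sign pattern:
    entries with |beta_hat_j| < tau are declared zero, the rest
    keep their sign (+1 or -1)."""
    s = np.sign(beta_hat)
    s[np.abs(beta_hat) < tau] = 0
    return s.astype(int)
```

For example, `thresholded_signs(np.array([0.9, -0.05, 0.0, -1.2]), 0.1)` keeps the two large coefficients and zeroes out the small one, which is exactly the sign-recovery event the paper's guarantees concern.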