Analysis of A Nonsmooth Optimization Approach to Robust Estimation
In this paper, we consider the problem of identifying a linear map from
measurements which are subject to intermittent and arbitrarily large errors.
This is a fundamental problem in many estimation-related applications such as
fault detection, state estimation in lossy networks, hybrid system
identification, robust estimation, etc. The problem is hard because it exhibits
some intrinsic combinatorial features. Therefore, obtaining an effective
solution necessitates relaxations that are both solvable at a reasonable cost
and effective in the sense that they can return the true parameter vector. The
current paper discusses a nonsmooth convex optimization approach and provides a
new analysis of its behavior. In particular, it is shown that under appropriate
conditions on the data, an exact estimate can be recovered from data corrupted
by a large (even infinite) number of gross errors.
Comment: 17 pages, 9 figures
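The abstract does not reproduce the estimator itself; the sketch below is only a minimal illustration of a nonsmooth convex relaxation of this kind, namely a least-absolute-deviation fit written with cvxpy. The dimensions, the outlier fraction, and the error amplitude are illustrative assumptions, not values taken from the paper.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

n, N = 5, 200                       # parameter dimension, number of measurements
theta_true = rng.standard_normal(n)
X = rng.standard_normal((N, n))     # regressors
y = X @ theta_true                  # noise-free outputs

# intermittent, arbitrarily large errors on a subset of the samples
bad = rng.choice(N, size=30, replace=False)
y[bad] += 50.0 * rng.standard_normal(bad.size)

# nonsmooth convex estimator: least-absolute-deviation (l1) fit
theta = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm(y - X @ theta, 1))).solve()

print("parameter error:", np.linalg.norm(theta.value - theta_true))

With few enough corrupted samples, the l1 fit typically returns the true parameter vector despite the gross errors, which is the kind of exact-recovery behavior the paper analyzes.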
On a class of optimization-based robust estimators
We consider in this paper the problem of estimating a parameter matrix from
observations which are affected by two types of noise components: (i) a sparse
noise sequence which, whenever nonzero, can have arbitrarily large amplitude,
and (ii) a dense and bounded noise sequence of "moderate" amount. This is
termed a robust regression problem. To tackle it, a quite general
optimization-based framework is proposed and analyzed. When only the sparse
noise is present, a sufficient bound is derived on the number of nonzero
elements in the sparse noise sequence that can be accommodated by the estimator
while still returning the true parameter matrix. Whereas almost all
restricted-isometry-based bounds from the literature are not verifiable, our
bound can be easily computed by solving a convex optimization problem.
Moreover, empirical evidence tends to suggest that it is generally tight. If in
addition to the sparse noise sequence, the training data are affected by a
bounded dense noise, we derive an upper bound on the estimation error.
Comment: To appear in IEEE Transactions on Automatic Control
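As a rough companion to this abstract, here is one plausible instance of such an optimization-based robust estimator: a sum-of-column-norms fit of a parameter matrix under both sparse gross errors and dense bounded noise. The criterion, dimensions, and noise levels are assumptions for illustration; the paper's own framework is more general than this single choice.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)

ny, nx, N = 3, 4, 150                       # output dim, regressor dim, samples
A_true = rng.standard_normal((ny, nx))
X = rng.standard_normal((nx, N))            # regressor sequence x_t
Y = A_true @ X

# (i) sparse noise: a few samples carry arbitrarily large errors
bad = rng.choice(N, size=15, replace=False)
Y[:, bad] += 30.0 * rng.standard_normal((ny, bad.size))
# (ii) dense, bounded noise of moderate amplitude on every sample
Y += 0.05 * rng.standard_normal((ny, N))

# one instance of an optimization-based robust estimator: minimize the
# sum of column-wise l2 norms of the residual (a nonsmooth convex program)
A_hat = cp.Variable((ny, nx))
residual = Y - A_hat @ X
cp.Problem(cp.Minimize(cp.sum(cp.norm(residual, 2, axis=0)))).solve()

print("parameter matrix error:", np.linalg.norm(A_hat.value - A_true))

When only the sparse noise is present, an estimator of this type can return A_true exactly; with the dense noise added, the estimation error degrades gracefully, which is what the paper's upper bound quantifies.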
Stable low-rank matrix recovery via null space properties
The problem of recovering a matrix of low rank from an incomplete and
possibly noisy set of linear measurements arises in a number of areas. In order
to derive rigorous recovery results, the measurement map is usually modeled
probabilistically. We derive sufficient conditions on the minimal amount of
measurements ensuring recovery via convex optimization. We establish our
results via certain properties of the null space of the measurement map. In the
setting where the measurements are realized as Frobenius inner products with
independent standard Gaussian random matrices, we show that on the order of
$r(n_1+n_2)$ measurements (with an explicit constant) are enough to uniformly
and stably recover an $n_1 \times n_2$ matrix of rank at most $r$. We then
significantly generalize this result by only requiring independent mean-zero,
variance-one entries with four finite moments, at the cost of replacing the
explicit constant by some universal constant. We also study
the case of recovering Hermitian rank-$r$ matrices from measurement matrices
proportional to rank-one projectors. For $m \gtrsim r\,n$ rank-one projective
measurements onto independent standard Gaussian vectors, we show that nuclear
norm minimization uniformly and stably reconstructs Hermitian rank-$r$ matrices
with high probability. Next, we partially de-randomize this by establishing an
analogous statement for projectors onto independent elements of a complex
projective 4-design, at the cost of a slightly higher sampling rate of order
$r\,n \log n$. Moreover, if the Hermitian matrix to be recovered is known to be
positive semidefinite, then we show that the nuclear norm minimization approach
may be replaced by minimizing the $\ell_2$-norm of the residual subject to the
positive semidefinite constraint. Then no estimate of the noise level is
required a priori. We discuss applications in quantum physics and the phase
retrieval problem.
Comment: 26 pages
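The nuclear-norm program named in the abstract is straightforward to prototype; the sketch below assumes small illustrative dimensions, a known noise level eta, and the SCS solver in cvxpy, none of which come from the paper itself.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)

n1, n2, r, m = 8, 8, 2, 120                 # matrix size, rank, measurement count
X0 = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))   # rank-r target

# measurements: Frobenius inner products with iid standard Gaussian matrices
A = [rng.standard_normal((n1, n2)) for _ in range(m)]
noise = 0.01 * rng.standard_normal(m)
y = np.array([np.sum(Ai * X0) for Ai in A]) + noise
eta = np.linalg.norm(noise)                 # noise level, assumed known here

# nuclear norm minimization subject to a data-fidelity constraint
X = cp.Variable((n1, n2))
fit = cp.hstack([cp.sum(cp.multiply(Ai, X)) for Ai in A])
prob = cp.Problem(cp.Minimize(cp.norm(X, "nuc")),
                  [cp.norm(fit - y, 2) <= eta])
prob.solve(solver=cp.SCS)

print("relative recovery error:",
      np.linalg.norm(X.value - X0) / np.linalg.norm(X0))

For a positive semidefinite target, the abstract's alternative is to drop the nuclear norm objective and instead minimize the residual norm over the cone of positive semidefinite matrices, which removes the need to know eta in advance.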