Conditions for Existence of Dual Certificates in Rank-One Semidefinite Problems
Several signal recovery tasks can be relaxed into semidefinite programs with
rank-one minimizers. A common technique for proving these programs succeed is
to construct a dual certificate. Unfortunately, dual certificates may not exist
under some formulations of semidefinite programs. In order to put problems into
a form where dual certificate arguments are possible, it is important to
develop conditions under which the certificates exist. In this paper, we
provide an example where dual certificates do not exist. We then present a
completeness condition under which they are guaranteed to exist. For programs
that do not satisfy the completeness condition, we present a completion process
which produces an equivalent program that does satisfy the condition. The
important message of this paper is that dual certificates may not exist for
semidefinite programs that involve orthogonal measurements with respect to
positive-semidefinite matrices. Such measurements can interact with the
positive-semidefinite constraint in a way that implies additional linear
measurements. If these additional measurements are not included in the problem
formulation, then dual certificates may fail to exist. As an illustration, we
present a semidefinite relaxation for the task of finding the sparsest element
in a subspace. One formulation of this program does not admit dual
certificates. The completion process produces an equivalent formulation which
does admit dual certificates.
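For context, the dual-certificate mechanism can be sketched in the standard primal-dual SDP form (a generic sketch; the paper's specific formulations and measurement operators may differ):

```latex
% Primal SDP with intended rank-one minimizer $X^\star = x x^{\mathsf T}$:
%   minimize $\langle C, X \rangle$ subject to $\mathcal{A}(X) = b$, $X \succeq 0$,
% where $\mathcal{A}(X) = (\langle A_1, X \rangle, \dots, \langle A_m, X \rangle)$.
% A dual certificate is a multiplier $y \in \mathbb{R}^m$ satisfying
\[
  Z = C - \sum_{i=1}^{m} y_i A_i \succeq 0,
  \qquad Z x = 0,
  \qquad \operatorname{rank}(Z) = n - 1 ,
\]
% which certifies $x x^{\mathsf T}$ as the unique minimizer under standard
% nondegeneracy assumptions.
```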
Scaling Law for Recovering the Sparsest Element in a Subspace
We address the problem of recovering a sparse vector lying in a given
subspace. This problem is a subtask of some approaches to dictionary learning
and sparse principal component analysis. Hence, if we can prove scaling laws
for recovery of sparse vectors, it will be easier to derive and prove recovery
results in these applications. In this paper, we present a scaling law for
recovering the sparse vector from a subspace spanned by the sparse vector and
additional random vectors. We prove that the sparse vector will be the output
of one of a family of linear programs, with high probability, provided its
support size obeys a bound scaling with the ambient dimension and the number
of random spanning vectors. The scaling law still holds when
the desired vector is approximately sparse. To get a single estimate for the
sparse vector from the linear programs, we must select which output is the
sparsest. This selection process can be based on any proxy for sparsity, and
the specific proxy has the potential to improve or worsen the scaling law. If
sparsity is interpreted in a particular norm sense, then the scaling law
cannot be better than a certain threshold. Computer simulations show that
selecting the sparsest output by an exact or thresholded count of nonzero
entries can lead to a larger parameter range for successful recovery than
that norm sense gives.
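A minimal sketch of the approach described above, assuming (as an illustration, not the paper's exact construction) that each linear program minimizes the l1 norm over the subspace subject to pinning one coordinate to 1, with a thresholded nonzero count as the sparsity proxy:

```python
import numpy as np
from scipy.optimize import linprog

def recover_sparsest(W, eps=1e-6):
    """For each coordinate i, solve  min ||W c||_1  s.t. (W c)_i = 1  as an LP,
    then return the candidate with the fewest entries above eps in magnitude
    (a thresholded count of nonzeros as the sparsity proxy)."""
    n, d = W.shape
    best, best_nnz = None, np.inf
    for i in range(n):
        # Variables z = (c, t); t bounds |W c| entrywise, so sum(t) is ||W c||_1.
        obj = np.concatenate([np.zeros(d), np.ones(n)])
        A_ub = np.block([[W, -np.eye(n)], [-W, -np.eye(n)]])  # W c <= t, -W c <= t
        b_ub = np.zeros(2 * n)
        A_eq = np.concatenate([W[i], np.zeros(n)])[None, :]   # (W c)_i = 1
        res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(None, None)] * (d + n))
        if not res.success:
            continue
        v = W @ res.x[:d]
        nnz = int(np.sum(np.abs(v) > eps))
        if nnz < best_nnz:
            best, best_nnz = v, nnz
    return best

# Demo: subspace of R^30 spanned by a 3-sparse vector and two Gaussian vectors.
rng = np.random.default_rng(0)
x_true = np.zeros(30)
x_true[[2, 7, 19]] = [1.0, -2.0, 0.5]
W = np.column_stack([x_true, rng.standard_normal((30, 2))])
v = recover_sparsest(W)
```

In this easy regime, well inside the scaling law, the program pinned at a support coordinate typically returns a rescaling of the planted sparse vector.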
Stable optimizationless recovery from phaseless linear measurements
We address the problem of recovering an n-vector from m linear measurements
lacking sign or phase information. We show that lifting and semidefinite
relaxation suffice by themselves for stable recovery in the setting of m = O(n
log n) random sensing vectors, with high probability. The recovery method is
optimizationless in the sense that trace minimization in the PhaseLift
procedure is unnecessary. That is, PhaseLift reduces to a feasibility problem.
The optimizationless perspective allows for a Douglas-Rachford numerical
algorithm that is unavailable for PhaseLift. This method exhibits linear
convergence at a favorable rate and without any parameter tuning.
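As a hedged sketch of the optimizationless viewpoint (illustrative sizes and iteration counts; the paper's exact scheme may differ), Douglas-Rachford splitting alternates projections between the PSD cone and the affine set cut out by the lifted measurements:

```python
import numpy as np

def phaselift_feasibility_dr(A, b, iters=500):
    """Douglas-Rachford iteration for the feasibility problem:
    find X PSD with a_i^T X a_i = b_i for all i (no trace minimization).
    A: (m, n) sensing vectors as rows; b: (m,) measurements |<a_i, x>|^2."""
    m, n = A.shape
    # Gram matrix of the measurement map: G_ij = <a_i a_i^T, a_j a_j^T> = (a_i . a_j)^2.
    G_pinv = np.linalg.pinv((A @ A.T) ** 2)

    def meas(X):                          # A(X)_i = a_i^T X a_i
        return np.einsum('ij,jk,ik->i', A, X, A)

    def proj_affine(X):                   # projection onto {X : A(X) = b}
        lam = G_pinv @ (meas(X) - b)
        return X - A.T @ (lam[:, None] * A)

    def proj_psd(X):                      # projection onto the PSD cone
        w, V = np.linalg.eigh((X + X.T) / 2)
        return (V * np.clip(w, 0.0, None)) @ V.T

    Y = np.zeros((n, n))
    for _ in range(iters):
        X = proj_affine(Y)
        Y = Y + proj_psd(2 * X - Y) - X   # Douglas-Rachford update
    return proj_affine(Y)

# Demo: recover an 8-vector (up to sign) from m = 6n squared linear measurements.
rng = np.random.default_rng(0)
n, m = 8, 48
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
X_hat = phaselift_feasibility_dr(A, (A @ x) ** 2)
w, V = np.linalg.eigh(X_hat)
x_hat = np.sqrt(max(w[-1], 0.0)) * V[:, -1]   # leading eigenvector, rescaled
```

The recovered `x_hat` matches `x` only up to a global sign, reflecting the phase information lost in the measurements.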
ShapeFit: Exact location recovery from corrupted pairwise directions
Consider the location recovery problem: given a subset of pairwise direction
observations between a set of unknown locations, where a constant fraction of
these observations are arbitrarily corrupted, find the locations up to a
global translation and scale. We propose ShapeFit, a novel algorithm for the
location recovery problem, which consists of a simple convex program over the
location variables. We
prove that this program recovers a set of i.i.d. Gaussian locations exactly
and with high probability if the observation pattern is given by an
Erdős–Rényi graph with sufficiently large edge probability, and provided that at most a constant fraction of
observations involving any particular location are adversarially corrupted. We
also prove that the program exactly recovers Gaussian locations if the
fraction of corrupted observations at each location is, up to
poly-logarithmic factors, at most a constant. Both of these recovery theorems
are based on a set of deterministic conditions that we prove are sufficient
for exact recovery.
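To make the observation model concrete, here is a small numpy sketch (illustrative sizes; it checks the orthogonal-direction residual that a ShapeFit-style program penalizes, not the full convex program):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 3
T = rng.standard_normal((n, d))   # i.i.d. Gaussian locations t_1, ..., t_n

# Pairwise direction observations v_ij = (t_i - t_j) / ||t_i - t_j||.
obs = {}
for i in range(n):
    for j in range(i + 1, n):
        diff = T[i] - T[j]
        obs[(i, j)] = diff / np.linalg.norm(diff)

def direction_residual(T, obs):
    """Sum over observed pairs of the component of t_i - t_j orthogonal to
    the observed direction: zero exactly when locations and directions agree."""
    total = 0.0
    for (i, j), v in obs.items():
        diff = T[i] - T[j]
        total += np.linalg.norm(diff - (diff @ v) * v)
    return total

clean = direction_residual(T, obs)        # ~0 at the ground truth

# Corrupt a few observations arbitrarily, as in the adversarial model.
for key in [(0, 1), (2, 5), (3, 9)]:
    g = rng.standard_normal(d)
    obs[key] = g / np.linalg.norm(g)
corrupted = direction_residual(T, obs)    # strictly positive
```

At the true locations the residual vanishes on clean data, and corruption makes it strictly positive, which is the separation a recovery guarantee must exploit.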