
    Conditions for Existence of Dual Certificates in Rank-One Semidefinite Problems

    Several signal recovery tasks can be relaxed into semidefinite programs with rank-one minimizers. A common technique for proving that these programs succeed is to construct a dual certificate. Unfortunately, dual certificates may not exist under some formulations of semidefinite programs. To put problems into a form where dual-certificate arguments are possible, it is important to develop conditions under which the certificates exist. In this paper, we provide an example where dual certificates do not exist. We then present a completeness condition under which they are guaranteed to exist. For programs that do not satisfy the completeness condition, we present a completion process that produces an equivalent program that does satisfy the condition. The important message of this paper is that dual certificates may not exist for semidefinite programs that involve orthogonal measurements with respect to positive-semidefinite matrices. Such measurements can interact with the positive-semidefinite constraint in a way that implies additional linear measurements. If these additional measurements are not included in the problem formulation, then dual certificates may fail to exist. As an illustration, we present a semidefinite relaxation for the task of finding the sparsest element in a subspace. One formulation of this program does not admit dual certificates. The completion process produces an equivalent formulation which does admit dual certificates.
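
    The phenomenon the abstract describes can be seen in a tiny numeric example. A measurement $\langle A, X \rangle = 0$ with $A = \mathrm{diag}(1, 0)$ is "orthogonal" in the abstract's sense ($A$ is itself PSD), and it forces $X_{11} = 0$; for a PSD matrix, a zero diagonal entry in turn forces the corresponding off-diagonal entries to vanish, an implied linear measurement the formulation never stated. A minimal sketch in pure Python (the function name is ours, not the paper's):

    ```python
    def is_psd_2x2(m):
        """Check positive semidefiniteness of a symmetric 2x2 matrix:
        both eigenvalues are nonnegative iff trace >= 0 and det >= 0."""
        a, b = m[0][0], m[0][1]
        c, d = m[1][0], m[1][1]
        assert b == c, "matrix must be symmetric"
        return (a + d) >= 0 and (a * d - b * c) >= 0

    # The orthogonal measurement <diag(1,0), X> = 0 pins X[0][0] = 0.
    # Any nonzero off-diagonal entry then makes det(X) = -b^2 < 0,
    # so the PSD constraint implies the extra measurement X[0][1] = 0.
    print(is_psd_2x2([[0.0, 1.0], [1.0, 5.0]]))  # False: det = -1
    print(is_psd_2x2([[0.0, 0.0], [0.0, 5.0]]))  # True
    ```

    A dual-certificate argument that only "sees" the stated measurements cannot account for such implied constraints, which is why the completion process adds them explicitly.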

    Scaling Law for Recovering the Sparsest Element in a Subspace

    We address the problem of recovering a sparse $n$-vector within a given subspace. This problem is a subtask of some approaches to dictionary learning and sparse principal component analysis. Hence, if we can prove scaling laws for recovery of sparse vectors, it will be easier to derive and prove recovery results in these applications. In this paper, we present a scaling law for recovering the sparse vector from a subspace that is spanned by the sparse vector and $k$ random vectors. We prove that the sparse vector will be the output of one of $n$ linear programs with high probability if its support size $s$ satisfies $s \lesssim n/\sqrt{k \log n}$. The scaling law still holds when the desired vector is approximately sparse. To get a single estimate for the sparse vector from the $n$ linear programs, we must select which output is the sparsest. This selection process can be based on any proxy for sparsity, and the specific proxy has the potential to improve or worsen the scaling law. If sparsity is interpreted in an $\ell_1/\ell_\infty$ sense, then the scaling law cannot be better than $s \lesssim n/\sqrt{k}$. Computer simulations show that selecting the sparsest output in the $\ell_1/\ell_2$ or thresholded-$\ell_0$ senses can lead to a larger parameter range for successful recovery than that given by the $\ell_1/\ell_\infty$ sense.
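
    The selection step, picking the sparsest of the $n$ linear-program outputs by a proxy, can be sketched in a few lines of pure Python. The proxy functions and the threshold in the $\ell_0$ proxy are our illustrative choices, not the paper's exact definitions:

    ```python
    def l1_l2(v):
        """l1/l2 ratio: smaller means sparser (equals sqrt(s) for an
        s-sparse vector with equal-magnitude nonzeros)."""
        return sum(abs(x) for x in v) / sum(x * x for x in v) ** 0.5

    def l1_linf(v):
        """l1/l_inf ratio: smaller means sparser."""
        return sum(abs(x) for x in v) / max(abs(x) for x in v)

    def thresholded_l0(v, tau=1e-6):
        """Count entries above a small relative threshold tau
        (an assumed cutoff for 'approximately zero')."""
        peak = max(abs(x) for x in v)
        return sum(1 for x in v if abs(x) > tau * peak)

    def select_sparsest(candidates, proxy=l1_l2):
        """Return the candidate minimizing the chosen sparsity proxy."""
        return min(candidates, key=proxy)

    dense = [1.0, -0.9, 1.1, 0.8, -1.0]
    sparse = [0.0, 3.0, 0.0, 0.0, 0.0]
    print(select_sparsest([dense, sparse]))  # the 1-sparse candidate wins
    ```

    Swapping the `proxy` argument is exactly the degree of freedom the abstract discusses: the recovery guarantee for the family of linear programs is fixed, but the proxy used to pick among their outputs changes the effective scaling law.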

    Stable optimizationless recovery from phaseless linear measurements

    We address the problem of recovering an n-vector from m linear measurements lacking sign or phase information. We show that lifting and semidefinite relaxation suffice by themselves for stable recovery in the setting of m = O(n log n) random sensing vectors, with high probability. The recovery method is optimizationless in the sense that trace minimization in the PhaseLift procedure is unnecessary. That is, PhaseLift reduces to a feasibility problem. The optimizationless perspective allows for a Douglas-Rachford numerical algorithm that is unavailable for PhaseLift. This method exhibits linear convergence with a favorable convergence rate and without any parameter tuning.
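
    Douglas-Rachford splitting applies to any two-set feasibility problem with computable projections. The sketch below runs it on a toy instance in the plane (two lines, not the paper's PSD and affine measurement sets, whose projections need eigendecompositions) just to show the iteration's shape and its linear convergence; all names are ours:

    ```python
    def proj_axis(z):
        """Project onto B = {(x, y): y = 0}, the x-axis."""
        return (z[0], 0.0)

    def proj_diag(z):
        """Project onto A = {(x, y): x = y}, the diagonal line."""
        m = (z[0] + z[1]) / 2.0
        return (m, m)

    def douglas_rachford(z, iters=100):
        """DR iteration z <- z + P_A(2 P_B(z) - z) - P_B(z).
        The shadow sequence P_B(z) converges to a point in A ∩ B;
        for these two lines the error shrinks by 1/sqrt(2) per step."""
        for _ in range(iters):
            pb = proj_axis(z)
            reflected = (2 * pb[0] - z[0], 2 * pb[1] - z[1])
            pa = proj_diag(reflected)
            z = (z[0] + pa[0] - pb[0], z[1] + pa[1] - pb[1])
        return proj_axis(z)

    x, y = douglas_rachford((3.0, 4.0))
    print(x, y)  # both approach 0, the only point on both lines
    ```

    In the paper's setting one projection would be onto the PSD cone and the other onto the affine set matching the phaseless measurements; the parameter-free character of the iteration is the same.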

    ShapeFit: Exact location recovery from corrupted pairwise directions

    Let $t_1, \ldots, t_n \in \mathbb{R}^d$ and consider the location recovery problem: given a subset of pairwise direction observations $\{(t_i - t_j)/\|t_i - t_j\|_2\}_{i < j \in [n] \times [n]}$, where a constant fraction of these observations are arbitrarily corrupted, find $\{t_i\}_{i=1}^n$ up to a global translation and scale. We propose a novel algorithm for the location recovery problem, which consists of a simple convex program over $dn$ real variables. We prove that this program recovers a set of $n$ i.i.d. Gaussian locations exactly and with high probability if the observations are given by an Erdős–Rényi graph, $d$ is large enough, and provided that at most a constant fraction of observations involving any particular location are adversarially corrupted. We also prove that the program exactly recovers Gaussian locations for $d = 3$ if the fraction of corrupted observations at each location is, up to poly-logarithmic factors, at most a constant. Both of these recovery theorems are based on a set of deterministic conditions that we prove are sufficient for exact recovery.
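
    The observation model in the abstract, i.i.d. Gaussian locations, unit-norm pairwise directions, and a corrupted fraction, is easy to simulate. A minimal pure-Python generator (function names and the random-unit-vector stand-in for "arbitrary corruption" are our assumptions):

    ```python
    import random

    def unit(v):
        """Normalize a vector to unit Euclidean norm."""
        n = sum(x * x for x in v) ** 0.5
        return [x / n for x in v]

    def make_observations(n=8, d=3, corrupt_frac=0.2, seed=0):
        """Draw i.i.d. Gaussian locations t_1..t_n and the pairwise
        directions (t_i - t_j)/||t_i - t_j||_2, then replace roughly a
        corrupt_frac fraction with arbitrary unit vectors."""
        rng = random.Random(seed)
        t = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(n)]
        obs = {}
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < corrupt_frac:
                    # stand-in for an adversarial corruption
                    obs[(i, j)] = unit([rng.gauss(0, 1) for _ in range(d)])
                else:
                    obs[(i, j)] = unit([a - b for a, b in zip(t[i], t[j])])
        return t, obs

    t, obs = make_observations()
    # every observation is a unit vector; there are n*(n-1)/2 of them
    ```

    Since both clean and corrupted observations are unit vectors, nothing in a single observation reveals whether it is corrupted, which is why the recovery guarantee must hold against adversarial corruption patterns.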