
    RIPless compressed sensing from anisotropic measurements

    Compressed sensing is the art of reconstructing a sparse vector from its inner products with a small set of randomly chosen measurement vectors. It is usually assumed that the ensemble of measurement vectors is in isotropic position, in the sense that the associated covariance matrix is proportional to the identity matrix. In this paper, we establish bounds on the number of required measurements in the anisotropic case, where the ensemble of measurement vectors possesses a non-trivial covariance matrix. Essentially, we find that the required sampling rate grows proportionally to the condition number of the covariance matrix. In contrast to other recent contributions to this problem, our arguments do not rely on any restricted isometry properties (RIPs), but rather on ideas from convex geometry which have been systematically studied in the theory of low-rank matrix recovery. This allows for a simple argument and slightly improved bounds, but may lead to a worse dependency on noise (which we do not consider in the present paper).
    Comment: 19 pages. To appear in Linear Algebra and its Applications, Special Issue on Sparse Approximate Solution of Linear Systems.
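    As a concrete illustration of this setup (a minimal sketch, not the paper's method: the dimensions, the diagonal covariance, and the use of cvxpy for basis pursuit are assumptions of this example), one can draw measurement vectors from a non-isotropic Gaussian ensemble and recover the sparse vector by L1 minimization:

```python
# Minimal sketch: basis-pursuit recovery from anisotropic Gaussian measurements.
# All dimensions, the sparsity level, and the covariance are illustrative choices.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                       # ambient dim, measurements, sparsity

# Sparse ground-truth vector with k nonzero entries.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

# Anisotropic ensemble: rows distributed as N(0, Sigma) with a non-trivial
# (here diagonal) covariance, so the condition number of Sigma exceeds 1.
sigma_diag = np.linspace(1.0, 4.0, n)      # condition number ~ 4
A = rng.standard_normal((m, n)) * np.sqrt(sigma_diag)

y = A @ x_true                             # noiseless inner products

# Basis pursuit: minimize ||x||_1 subject to A x = y.
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y]).solve()

print("recovery error:", np.linalg.norm(x.value - x_true))
```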

    On a class of optimization-based robust estimators

    We consider in this paper the problem of estimating a parameter matrix from observations affected by two types of noise: (i) a sparse noise sequence whose entries, whenever nonzero, can have arbitrarily large amplitude, and (ii) a dense and bounded noise sequence of "moderate" amount. This is termed a robust regression problem. To tackle it, a quite general optimization-based framework is proposed and analyzed. When only the sparse noise is present, a sufficient bound is derived on the number of nonzero elements in the sparse noise sequence that the estimator can accommodate while still returning the true parameter matrix. While almost all restricted-isometry-based bounds from the literature are not verifiable, our bound can be easily computed by solving a convex optimization problem. Moreover, empirical evidence suggests that it is generally tight. If, in addition to the sparse noise sequence, the training data are affected by a bounded dense noise, we derive an upper bound on the estimation error.
    Comment: To appear in IEEE Transactions on Automatic Control.
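    The following is a minimal sketch of one such optimization-based estimator (an illustrative instance only: the abstract does not specify the framework, and the L1 residual loss, dimensions, and noise levels here are assumptions), showing how a handful of arbitrarily large sparse errors can be tolerated on top of a bounded dense noise:

```python
# Minimal sketch: robust regression via an L1 residual loss.
# Not necessarily the estimator analyzed in the paper; parameters are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
N, p = 150, 4                              # observations, parameter dimension

theta_true = rng.standard_normal(p)
Phi = rng.standard_normal((N, p))

# Dense, bounded noise of "moderate" amplitude.
dense_noise = 0.05 * rng.uniform(-1.0, 1.0, size=N)

# Sparse noise: a few entries with arbitrarily large amplitude.
sparse_noise = np.zeros(N)
outliers = rng.choice(N, size=10, replace=False)
sparse_noise[outliers] = 50.0 * rng.standard_normal(10)

y = Phi @ theta_true + sparse_noise + dense_noise

# Optimization-based robust estimator: minimize the L1 norm of the residuals.
theta = cp.Variable(p)
cp.Problem(cp.Minimize(cp.norm1(y - Phi @ theta))).solve()

print("estimation error:", np.linalg.norm(theta.value - theta_true))
```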