
    Simultaneously Structured Models with Application to Sparse and Low-rank Matrices

    The topic of recovery of a structured model from a small number of linear observations has been well studied in recent years. Examples include recovering sparse or group-sparse vectors, low-rank matrices, and sums of sparse and low-rank matrices, among others. In various applications in signal processing and machine learning, the model of interest is known to be structured in several ways at the same time, for example, a matrix that is simultaneously sparse and low-rank. Often, norms that promote each individual structure are known and allow for recovery using an order-wise optimal number of measurements (e.g., the $\ell_1$ norm for sparsity, the nuclear norm for matrix rank). Hence, it is reasonable to minimize a combination of such norms. We show that, surprisingly, if we use multi-objective optimization with these norms, then we can do no better, order-wise, than an algorithm that exploits only one of the present structures. This result suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation, i.e., not one that is a function of the convex relaxations used for each structure. We then specialize our results to the case of sparse and low-rank matrices. We show that a nonconvex formulation of the problem can recover the model from very few measurements, on the order of the degrees of freedom of the matrix, whereas the convex problem obtained from a combination of the $\ell_1$ and nuclear norms requires many more measurements. This proves an order-wise gap between the performance of the convex and nonconvex recovery problems in this case. Our framework applies to arbitrary structure-inducing norms as well as to a wide range of measurement ensembles. This allows us to give performance bounds for problems such as sparse phase retrieval and low-rank tensor completion. Comment: 38 pages, 9 figures
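    The combined relaxation discussed above is easy to write down concretely. The following is a minimal sketch, not the paper's method: it minimizes a weighted sum of the nuclear and entrywise $\ell_1$ norms subject to Gaussian measurements, using CVXPY; the weight lam, the problem sizes, and the measurement model are illustrative assumptions.

```python
# Sketch of the combined-norm convex relaxation: minimize
# ||X||_* + lam * ||X||_1 subject to linear Gaussian measurements.
# All parameter values here are illustrative, not from the paper.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, s = 20, 5                              # matrix size; side of the sparse block

# Ground truth: a rank-1 matrix supported on an s x s block
# (simultaneously sparse and low-rank).
u = np.zeros(n)
u[:s] = rng.standard_normal(s)
X_true = np.outer(u, u)

m = 120                                   # number of linear measurements
A = rng.standard_normal((m, n * n))       # Gaussian measurement ensemble
y = A @ X_true.flatten(order="F")         # y_i = <A_i, X_true>

X = cp.Variable((n, n))
lam = 0.5                                 # illustrative trade-off weight
objective = cp.Minimize(cp.normNuc(X) + lam * cp.sum(cp.abs(X)))
cp.Problem(objective, [A @ cp.vec(X) == y]).solve()

print("relative error:", np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```

    Per the result summarized above, no choice of the weight lam here can improve, order-wise, on the better of the two single-norm programs.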

    Convergence of fixed-point continuation algorithms for matrix rank minimization

    The matrix rank minimization problem has applications in many fields, such as system identification, optimal control, and low-dimensional embedding. As this problem is NP-hard in general, its convex relaxation, the nuclear norm minimization problem, is often solved instead. Recently, Ma, Goldfarb and Chen proposed a fixed-point continuation algorithm for solving the nuclear norm minimization problem. By incorporating an approximate singular value decomposition technique into this algorithm, the solution to the matrix rank minimization problem is usually obtained. In this paper, we study the convergence/recoverability properties of the fixed-point continuation algorithm and its variants for matrix rank minimization. Heuristics for determining the rank of the matrix when its true rank is not known are also proposed. Some of these algorithms are closely related to greedy algorithms in compressed sensing. Numerical results for these algorithms for solving affinely constrained matrix rank minimization problems are reported. Comment: Conditions on the RIP constant for an approximate recovery are improved
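    For intuition, the core fixed-point iteration behind such algorithms alternates a gradient step on the data-fit term with singular value shrinkage. The sketch below is a simplified stand-in, not the algorithm of Ma, Goldfarb and Chen: it omits the continuation schedule over a decreasing sequence of shrinkage parameters and the approximate SVD, and the parameter choices are assumptions.

```python
import numpy as np

def svt(Z, thresh):
    """Singular value soft-thresholding (the shrinkage operator)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - thresh, 0.0)) @ Vt

def fpc(A, y, shape, mu=1e-4, iters=500):
    """Fixed-point iteration for min mu*||X||_* + 0.5*||A vec(X) - y||^2."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2    # step size below 2/||A||^2
    X = np.zeros(shape)
    for _ in range(iters):
        residual = A @ X.flatten(order="F") - y
        grad = (A.T @ residual).reshape(shape, order="F")
        X = svt(X - tau * grad, tau * mu)    # gradient step, then shrinkage
    return X
```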

    Geometric Interpretations and Algorithmic Verification of Exact Solutions in Compressed Sensing

    In an era dominated by big data, in which everyone is confronted with surveillance scandals, personalized advertising, and data retention, it is not surprising that a topic such as compressed sensing attracts great interest. Compressed sensing is also highly relevant to problems in signal and image processing, where the question arises of how many measurements are actually required to capture and represent a high-resolution signal or object. The thesis at hand studies the applicability of three of the most widely used optimization problems with linear constraints in compressed sensing: basis pursuit, analysis l1-minimization, and isotropic total variation minimization. Unique solutions of basis pursuit and analysis l1-minimization are considered and, on the basis of their characterizations, methods are designed that verify whether a given vector can be reconstructed exactly by basis pursuit or analysis l1-minimization. Further, a method is developed that guarantees that a given vector is the unique solution of isotropic total variation minimization. In addition, experimental results for all three methods are presented, where the linear constraints are given by a random matrix and by a matrix that models the measurement process in computed tomography.
    Furthermore, the thesis presents geometrical interpretations of basis pursuit. Drawing on the theory of convex polytopes, three geometrical objects are examined and placed within the context of compressed sensing. The result is a comprehensive study of the geometry of basis pursuit, containing many new insights into necessary geometrical conditions for unique solutions and an explicit count of the equivalence classes of unique solutions; this count is closely related to the number of unique solutions of basis pursuit for an arbitrary matrix. Finally, the question is addressed of which linear constraints admit the most unique solutions of basis pursuit. To this end, upper bounds are developed and explicit constraints are given under which the most vectors can be reconstructed via basis pursuit.
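    As a rough numerical stand-in for the verification task (the thesis certifies uniqueness from the characterization of solutions, which a single solver run cannot do, since the minimizer may be non-unique), one can solve basis pursuit and compare with the candidate vector. The sizes, tolerance, and use of CVXPY below are assumptions.

```python
import numpy as np
import cvxpy as cp

def recovered_by_basis_pursuit(A, x0, tol=1e-6):
    """Solve min ||x||_1 s.t. Ax = A x0 and test whether x0 is returned."""
    x = cp.Variable(A.shape[1])
    cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == A @ x0]).solve()
    return np.linalg.norm(x.value - x0) <= tol * max(1.0, np.linalg.norm(x0))

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))                    # random linear constraints
x0 = np.zeros(100)
x0[:5] = rng.standard_normal(5)                       # 5-sparse candidate vector
print(recovered_by_basis_pursuit(A, x0))
```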

    Sparse image reconstruction for molecular imaging

    The application that motivates this paper is molecular imaging at the atomic level. When discretized at sub-atomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology in which imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimators have focused on the case where the system matrix H has low coherence; in our application, however, H is the convolution matrix for the system psf, and a typical convolution matrix has high coherence. The paper therefore does not assume a low-coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Unbiased estimates of the hyperparameters for the lasso and hybrid estimators are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso. Comment: 12 pages, 8 figures
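    The iterative thresholding framework mentioned above can be sketched as follows, using the soft-thresholding rule, i.e. the lasso/ISTA special case; the paper's hybrid rule, whose exact form is not reproduced here, would take the place of `soft`. The matrix H (the psf convolution matrix), lam, and the iteration count are illustrative assumptions.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(H, y, lam, iters=200):
    """Iterative thresholding for min 0.5*||Hx - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(H, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(H.shape[1])
    for _ in range(iters):
        x = soft(x - (H.T @ (H @ x - y)) / L, lam / L)
    return x
```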