
    Self-Calibration and Biconvex Compressive Sensing

    The design of high-precision sensing devices becomes ever more difficult and expensive. At the same time, the need for precise calibration of these devices (ranging from tiny sensors to space telescopes) manifests itself as a major roadblock in many scientific and technological endeavors. To achieve optimal performance, advanced sensors must be carefully calibrated, which is often difficult or even impossible to do in practice. In this work we bring together three seemingly unrelated concepts: Self-Calibration, Compressive Sensing, and Biconvex Optimization. The idea behind self-calibration is to equip a hardware device with a smart algorithm that compensates automatically for the lack of calibration. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where both x and the diagonal matrix D (which models the calibration error) are unknown. By "lifting" this biconvex inverse problem we arrive at a convex optimization problem. Exploiting sparsity in the signal model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently via linear programming. Applications in array calibration and wireless communications are discussed, and numerical simulations are presented, confirming and complementing our theoretical analysis.
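
    For concreteness, the lifting step can be sketched numerically. The toy example below is illustrative only (it uses numpy and cvxpy, not any code from the paper) and assumes, as the paper's recovery guarantees do, that the calibration vector d lies in a known low-dimensional subspace, d = Bh with B known; all dimensions and variable names here are hypothetical.

```python
import numpy as np
import cvxpy as cp

# Toy instance of y = D A x.  The unknown calibration vector d is assumed
# to lie in a known k-dimensional subspace: d = B h with B known and h
# unknown, which is what makes joint recovery of d and x well posed.
L, N, k, s = 128, 64, 4, 3          # measurements, signal length, subspace dim, sparsity
rng = np.random.default_rng(0)
A = rng.standard_normal((L, N)) / np.sqrt(L)
B = rng.standard_normal((L, k)) / np.sqrt(L)

h = rng.standard_normal(k)          # unknown calibration parameters
x = np.zeros(N)                     # unknown s-sparse signal
x[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
y = (B @ h) * (A @ x)               # y = diag(B h) A x

# Lifting: with X0 = h x^T (rank one and sparse), every measurement is
# linear in the lifted variable: y_i = b_i^T X0 a_i.  Minimizing the
# elementwise l1 norm subject to these constraints is a linear program.
X = cp.Variable((k, N))
constraints = [B[i] @ X @ A[i] == y[i] for i in range(L)]
prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(X))), constraints)
prob.solve()

# h and x are identifiable only up to a common scalar; read them off the
# leading singular pair of the recovered lifted matrix.
U, S, Vt = np.linalg.svd(X.value)
h_hat, x_hat = np.sqrt(S[0]) * U[:, 0], np.sqrt(S[0]) * Vt[0]
```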

    Recovering Structured Low-rank Operators Using Nuclear Norms

    This work considers the problem of recovering matrices and operators from limited and/or noisy observations. Whereas matrices result from summing tensor products of vectors, operators result from summing tensor products of matrices. These constructions lead to viewing both matrices and operators as the sum of "simple" rank-1 factors. A popular line of work in this direction is low-rank matrix recovery, i.e., using linear measurements of a matrix to reconstruct it as the sum of few rank-1 factors. Rank minimization problems are hard in general, and a popular approach to avoid them is convex relaxation. Using the trace norm as a surrogate for rank, the low-rank matrix recovery problem becomes convex. While the trace norm has received much attention in the literature, other convexifications are possible. This thesis focuses on the class of nuclear norms, a class that includes the trace norm itself. Much as the trace norm is a convex surrogate for the matrix rank, other nuclear norms provide convex complexity measures for additional matrix structure. Namely, nuclear norms measure the structure of the factors used to construct the matrix. Transitioning to the operator framework allows for novel uses of nuclear norms in recovering these structured matrices. In particular, this thesis shows how to lift structured matrix factorization problems to rank-1 operator recovery problems. This new viewpoint allows nuclear norms to measure richer types of structure present in matrix factorizations. This work also includes a Python software package to model and solve structured operator recovery problems. Systematic numerical experiments in operator denoising demonstrate the effectiveness of nuclear norms in recovering structured operators. In particular, choosing a specific nuclear norm that corresponds to the underlying factor structure of the operator improves the performance of the recovery procedures when compared, for instance, to the trace norm. Applications in hyperspectral imaging and self-calibration demonstrate the additional flexibility gained by utilizing operator (as opposed to matrix) factorization models.
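
    As a generic illustration of the trace norm as a convex surrogate for rank (a short cvxpy sketch, not the thesis's Python package; the sizes and the weight lam are arbitrary), consider denoising a noisy low-rank matrix:

```python
import numpy as np
import cvxpy as cp

# Trace-norm denoising: the trace (nuclear) norm ||X||_* is a convex
# surrogate for rank, so a noisy low-rank matrix Y can be denoised by
#   minimize  0.5 * ||X - Y||_F^2  +  lam * ||X||_*
m, n, r = 30, 40, 2
rng = np.random.default_rng(1)
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r ground truth
Y = X_true + 0.1 * rng.standard_normal((m, n))                      # noisy observation

X = cp.Variable((m, n))
lam = 1.0                            # regularization weight, chosen arbitrarily
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(X - Y) + lam * cp.normNuc(X)))
prob.solve()

print("estimated rank:", np.linalg.matrix_rank(X.value, tol=1e-3))
```

    This particular problem also admits a closed-form solution by soft-thresholding the singular values of Y; the solver formulation is shown because it extends, at least conceptually, to the other nuclear norms the thesis considers.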

    Multiresolution models in image restoration and reconstruction with medical and other applications
