    Sparsity-Cognizant Total Least-Squares for Perturbed Compressive Sampling

    Solving linear regression problems based on the total least-squares (TLS) criterion has well-documented merits in various applications where perturbations appear both in the data vector and in the regression matrix. However, existing TLS approaches do not account for sparsity possibly present in the unknown vector of regression coefficients. On the other hand, sparsity is the key attribute exploited by modern compressive sampling and variable selection approaches to linear regression, which include noise in the data but do not account for perturbations in the regression matrix. The present paper fills this gap by formulating and solving TLS optimization problems under sparsity constraints. Near-optimum and reduced-complexity suboptimum sparse (S-) TLS algorithms are developed to address the perturbed compressive sampling (and the related dictionary learning) challenge, when there is a mismatch between the true and adopted bases over which the unknown vector is sparse. The novel S-TLS schemes also allow for perturbations in the regression matrix of the least absolute shrinkage and selection operator (Lasso), and endow TLS approaches with the ability to cope with sparse, under-determined "errors-in-variables" models. Interesting generalizations can further exploit prior knowledge of the perturbations to obtain novel weighted and structured S-TLS solvers. Analysis and simulations demonstrate the practical impact of S-TLS in calibrating the mismatch effects of contemporary grid-based approaches to cognitive radio sensing, and in robust direction-of-arrival estimation using antenna arrays.

    Comment: 30 pages, 10 figures, submitted to IEEE Transactions on Signal Processing
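
    A minimal numpy sketch of the sparse-TLS idea (not the paper's near-optimum algorithm): alternate a Lasso step in the coefficients x with a closed-form update of the matrix perturbation E, which for fixed x minimizes ||E||_F^2 + ||r - Ex||_2^2 with r = y - Ax. The function name, solver choices, and iteration counts are illustrative assumptions.

    import numpy as np

    def soft(v, t):
        """Soft-thresholding operator (prox of the l1 norm)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def s_tls(A, y, lam, outer=30, inner=100):
        """Alternating sketch: Lasso step in x (via ISTA), closed-form step in E."""
        x = np.zeros(A.shape[1])
        E = np.zeros_like(A)
        for _ in range(outer):
            B = A + E
            L = np.linalg.norm(B, 2) ** 2        # Lipschitz constant for the gradient step
            for _ in range(inner):               # ISTA for the Lasso subproblem in x
                x = soft(x - B.T @ (B @ x - y) / L, lam / L)
            r = y - A @ x
            E = np.outer(r, x) / (1.0 + x @ x)   # minimizer of ||E||_F^2 + ||r - Ex||^2
        return x, E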

    Compressed Sensing and Parallel Acquisition

    Parallel acquisition systems arise in various applications as a way to alleviate the problems caused by insufficient measurements in single-sensor systems. These systems allow simultaneous data acquisition in multiple sensors, providing more measurements overall. In this work we consider the combination of compressed sensing with parallel acquisition. We establish the theoretical improvements offered by such systems by providing recovery guarantees in which, subject to appropriate conditions, the number of measurements required per sensor decreases linearly with the total number of sensors. Throughout, we consider two different sampling scenarios -- distinct (corresponding to independent sampling in each sensor) and identical (corresponding to dependent sampling between sensors) -- and a general mathematical framework that allows for a wide range of sensing matrices (e.g., subgaussian random matrices, subsampled isometries, random convolutions, and random Toeplitz matrices). We also consider not just the standard sparse signal model, but also the so-called sparse-in-levels signal model, which includes both sparse and distributed signals and clustered sparse signals. As our results show, optimal recovery guarantees for both distinct and identical sampling are possible under much broader conditions on the so-called sensor profile matrices (which characterize environmental conditions between a source and the sensors) for the sparse-in-levels model than for the sparse model. To verify our recovery guarantees we provide numerical results showing phase transitions for a number of different multi-sensor environments.

    Comment: 43 pages, 4 figures
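
    As a toy illustration of the multi-sensor setup (the diagonal sensor profiles, dimensions, and recovery solver below are assumptions, not the paper's framework), one can stack the per-sensor systems y_c = A_c H_c x into one tall system and recover the sparse vector greedily:

    import numpy as np

    def omp(A, y, s):
        """Orthogonal matching pursuit: greedily pick s columns, refit by least squares."""
        r, support = y.copy(), []
        for _ in range(s):
            support.append(int(np.argmax(np.abs(A.T @ r))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            r = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(0)
    n, s, C, m = 256, 8, 4, 24                   # signal length, sparsity, sensors, meas./sensor
    x = np.zeros(n)
    x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

    # Stack the per-sensor systems y_c = A_c @ H_c @ x, with diagonal H_c standing
    # in for a sensor profile matrix; each of the C sensors contributes m rows.
    A = np.vstack([rng.standard_normal((m, n)) / np.sqrt(C * m)
                   @ np.diag(rng.uniform(0.5, 1.5, n)) for _ in range(C)])
    y = A @ x
    x_hat = omp(A, y, s)
    print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))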

    Finding a low-rank basis in a matrix subspace

    For a given matrix subspace, how can we find a basis that consists of low-rank matrices? This is a generalization of the sparse vector problem. It turns out that when the subspace is spanned by rank-1 matrices, such a basis can be obtained via the tensor CP decomposition. For the higher-rank case, the situation is not as straightforward. In this work we present an algorithm based on a greedy process that is applicable to higher-rank problems. Our algorithm first estimates the minimum rank by applying soft singular value thresholding to a nuclear norm relaxation, and then computes a matrix of that rank using the method of alternating projections. We provide local convergence results, and compare our algorithm with several alternative approaches. Applications include data compression beyond the classical truncated SVD, computing accurate eigenvectors of a near-multiple eigenvalue, image separation, and graph Laplacian eigenproblems.
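
    A simplified numpy sketch of the two ingredients named above, soft singular value thresholding and projection back onto the subspace (the paper's greedy method handles rank estimation and deflation more carefully; the normalization heuristic here is an assumption):

    import numpy as np

    def svt(X, tau):
        """Soft singular value thresholding: prox of tau times the nuclear norm."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def low_rank_in_subspace(mats, tau=0.1, iters=200, seed=0):
        """Search span{mats} for a low-rank matrix by alternating SVT and projection."""
        # Orthonormal basis of the subspace, acting on vectorized matrices
        B = np.linalg.qr(np.stack([M.ravel() for M in mats], axis=1))[0]
        x = B @ np.random.default_rng(seed).standard_normal(B.shape[1])
        shape = mats[0].shape
        for _ in range(iters):
            Y = svt(x.reshape(shape), tau)   # shrink toward low rank
            x = B @ (B.T @ Y.ravel())        # project back onto the subspace
            x /= np.linalg.norm(x)           # keep the iterate away from zero
        return x.reshape(shape)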

    Expansions and factorizations of matrices and their applications

    Linear algebra is a foundation for decompositions and algorithms that extract simple structures from complex data. In this thesis, we investigate and apply modern techniques from linear algebra to solve problems arising in signal processing and computer science. In particular, we focus on data that takes the shape of a matrix, and we explore how to represent it as products of circulant and diagonal matrices. To this end, we study matrix decompositions, approximations, and structured matrix expansions whose elements are products of circulant and diagonal matrices. Computationally, we develop a matrix expansion with DCD matrices for approximating a given matrix. Remarkably, DCD matrices, i.e., products of a diagonal matrix, a circulant matrix, and another diagonal matrix, give a natural extension of rank-one matrices. Inspired by the singular value decomposition, we introduce a notion of matrix rank closely related to the expansion and compute this rank for some specific structured matrices; in particular, a Toeplitz matrix is a sum of two DCD matrices. We present a greedy algorithmic framework to compute the expansion numerically. Finally, we show that the practical uses of the DCD expansion can be complemented by the proposed framework, and we perform two experiments with a view towards applications.
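
    Because the circulant factor is diagonalized by the FFT, applying a DCD matrix diag(d1) C diag(d2) to a vector costs O(n log n) rather than O(n^2). A minimal sketch (the function name is ours):

    import numpy as np
    from scipy.linalg import circulant

    def dcd_apply(d1, c, d2, x):
        """Apply diag(d1) @ circulant(c) @ diag(d2) to x via the FFT.

        circulant(c) @ v is the circular convolution ifft(fft(c) * fft(v))."""
        return d1 * np.fft.ifft(np.fft.fft(c) * np.fft.fft(d2 * x)).real

    # Check against the dense product on a small random instance
    rng = np.random.default_rng(1)
    d1, c, d2, x = (rng.standard_normal(8) for _ in range(4))
    dense = np.diag(d1) @ circulant(c) @ np.diag(d2)
    assert np.allclose(dense @ x, dcd_apply(d1, c, d2, x))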