
    Improved Recovery of Analysis Sparse Vectors in Presence of Prior Information

    In this work, we consider the problem of recovering analysis-sparse signals from under-sampled measurements when some prior information about the support is available. We incorporate such information in the recovery stage by suitably tuning the weights in a weighted $\ell_1$ analysis optimization problem. Specifically, we set the weights so that the method succeeds with the minimum number of measurements. For this purpose, we exploit an upper bound on the statistical dimension of a certain cone to determine the weights. Our numerical simulations confirm that the introduced method with tuned weights outperforms the standard $\ell_1$ analysis technique.
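    A minimal sketch of a weighted $\ell_1$ analysis program of this kind, written with CVXPY and a finite-difference analysis operator; the data, the weight values, and the assumed support prior are illustrative and do not reproduce the paper's statistical-dimension tuning rule.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 64, 32
Omega = np.diff(np.eye(n), axis=0)       # finite-difference analysis operator, (n-1) x n
A = rng.standard_normal((m, n))          # measurement matrix

# Piecewise-constant signal: Omega @ x_true is sparse (jumps at indices 19 and 44).
x_true = np.concatenate([np.ones(20), -2 * np.ones(25), 0.5 * np.ones(19)])
y = A @ x_true

# Prior information: down-weight analysis coefficients believed to be active.
w = np.ones(n - 1)
w[[19, 44]] = 0.2

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm(cp.multiply(w, Omega @ x), 1)),
                  [A @ x == y])
prob.solve()
print("max recovery error:", np.max(np.abs(x.value - x_true)))
```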

    Robust Recovery of Signals From a Structured Union of Subspaces

    Traditional sampling theories consider the problem of reconstructing an unknown signal $x$ from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that $x$ lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which $x$ lies in a union of subspaces. In this paper we develop a general framework for robust and efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which $x$ lies in a sum of $k$ subspaces, chosen from a larger set of $m$ possibilities. The samples are modelled as inner products with an arbitrary set of sampling functions. To derive an efficient and robust recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose non-zero elements appear in fixed blocks. We then propose a mixed $\ell_2/\ell_1$ program for block-sparse recovery. Our main result is an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal. This result relies on the notion of the block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on the block RIP, we also prove stability of our approach in the presence of noise and modelling errors. A special case of our framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Adapting our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently. Comment: 5 figures, 30 pages. This work has been submitted to the IEEE for possible publication.
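    A minimal CVXPY sketch of the mixed $\ell_2/\ell_1$ block-sparse program; the block size, the number of active blocks, and the random data are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
d, nblocks, m = 4, 16, 40                # block length, number of blocks, measurements
n = d * nblocks
A = rng.standard_normal((m, n))

# Ground truth with k = 2 active blocks out of the 16 possibilities.
x_true = np.zeros(n)
x_true[0:d] = rng.standard_normal(d)
x_true[8*d:9*d] = rng.standard_normal(d)
y = A @ x_true

x = cp.Variable(n)
block_l2 = sum(cp.norm(x[i*d:(i+1)*d], 2) for i in range(nblocks))
prob = cp.Problem(cp.Minimize(block_l2), [A @ x == y])
prob.solve()
print("max recovery error:", np.max(np.abs(x.value - x_true)))
```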

    Action Assembly: Sparse Imitation Learning for Text Based Games with Combinatorial Action Spaces

    We propose a computationally efficient algorithm that combines compressed sensing with imitation learning to solve text-based games with combinatorial action spaces. Specifically, we introduce a new compressed sensing algorithm, named IK-OMP, which can be seen as an extension of Orthogonal Matching Pursuit (OMP). We incorporate IK-OMP into a supervised imitation learning setting and show that the combined approach (Sparse Imitation Learning, Sparse-IL) solves the entire text-based game of Zork1, with an action space of approximately 10 million actions, given both perfect and noisy demonstrations. Comment: Under review at IJCAI 202
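    For context, a minimal numpy sketch of the standard Orthogonal Matching Pursuit that IK-OMP extends; the selection step that distinguishes IK-OMP itself is not reproduced here.

```python
import numpy as np

def omp(A, y, k):
    """Greedy k-sparse recovery of x from y ~ A @ x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = y - A @ x
    return x
```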

    Corrupted Sensing: Novel Guarantees for Separating Structured Signals

    We study the problem of corrupted sensing, a generalization of compressed sensing in which one aims to recover a signal from a collection of corrupted or unreliable measurements. While an arbitrary signal cannot be recovered in the face of arbitrary corruption, tractable recovery is possible when both signal and corruption are suitably structured. We quantify the relationship between signal recovery and two geometric measures of structure: the Gaussian complexity of a tangent cone and the Gaussian distance to a subdifferential. We take a convex programming approach to disentangling signal and corruption, analyzing both penalized programs that trade off between signal and corruption complexity, and constrained programs that bound the complexity of the signal or corruption when prior information is available. In each case, we provide conditions for exact signal recovery from structured corruption and stable signal recovery from structured corruption with added unstructured noise. Our simulations demonstrate close agreement between our theoretical recovery bounds and the sharp phase transitions observed in practice. In addition, we provide new interpretable bounds for the Gaussian complexity of sparse vectors, block-sparse vectors, and low-rank matrices, which lead to sharper guarantees of recovery when combined with our results and those in the literature. Comment: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=671204
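    A minimal CVXPY sketch of a penalized program of the kind analyzed here, for a sparse signal contaminated by sparse corruption; the trade-off parameter and the data are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, m = 100, 80
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)   # sparse signal
v_true = np.zeros(m)
v_true[rng.choice(m, 4, replace=False)] = 5 * rng.standard_normal(4)  # sparse corruption
y = Phi @ x_true + v_true            # corrupted measurements

x, v = cp.Variable(n), cp.Variable(m)
lam = 1.0                            # trade-off between signal and corruption complexity
prob = cp.Problem(cp.Minimize(cp.norm(x, 1) + lam * cp.norm(v, 1)),
                  [Phi @ x + v == y])
prob.solve()
print("signal error:", np.max(np.abs(x.value - x_true)))
```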

    Harmless interpolation of noisy data in regression

    A continuing mystery in understanding the empirical success of deep neural networks is their ability to achieve zero training error and generalize well, even when the training data is noisy and there are more parameters than data points. We investigate this overparameterized regime in linear regression, where all solutions that minimize training error interpolate the data, including the noise. We characterize the fundamental generalization (mean-squared) error of any interpolating solution in the presence of noise, and show that this error decays to zero with the number of features. Thus, overparameterization can be explicitly beneficial in ensuring harmless interpolation of noise. We discuss two root causes for poor generalization that are complementary in nature: signal "bleeding" into a large number of alias features, and overfitting of noise by parsimonious feature selectors. For the sparse linear model with noise, we provide a hybrid interpolating scheme that mitigates both of these issues and achieves order-optimal MSE over all possible interpolating solutions. Comment: 52 pages; expanded version of the paper presented at ITA in San Diego in Feb 2019, ISIT in Paris in July 2019, at Simons in July, and as a plenary at ITW in Visby in August 2019.
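    A minimal numpy sketch of the minimum-$\ell_2$-norm interpolator in the overparameterized regime the paper studies; the synthetic data and problem sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 50, 500                       # n samples, d >> n features
X = rng.standard_normal((n, d))
w_star = np.zeros(d)
w_star[0] = 1.0                      # true signal lives on one feature
y = X @ w_star + 0.1 * rng.standard_normal(n)    # noisy labels

# Among all interpolating solutions X w = y, the pseudoinverse
# gives the one of minimum ell_2 norm.
w_hat = np.linalg.pinv(X) @ y
print("training residual:", np.linalg.norm(X @ w_hat - y))   # ~0: interpolates the noise
print("parameter error:", np.linalg.norm(w_hat - w_star))
```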

    Signal Reconstruction from Modulo Observations

    We consider the problem of reconstructing a signal from under-determined modulo observations (or measurements). This observation model is inspired by a (relatively) less well-known imaging mechanism called modulo imaging, which can be used to extend the dynamic range of imaging systems; variations of this model have also been studied under the category of phase unwrapping. Signal reconstruction in the under-determined regime with modulo observations is a challenging ill-posed problem, and existing reconstruction methods cannot be used directly. In this paper, we propose a novel approach to solving the inverse problem, limited to two modulo periods, inspired by recent advances in algorithms for phase retrieval under sparsity constraints. We show that, given a sufficient number of measurements, our algorithm perfectly recovers the underlying signal and provides improved performance over other existing algorithms. We also provide experiments validating our approach on both synthetic and real data to depict its superior performance.
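    A minimal numpy sketch of the modulo forward model; the paper's reconstruction algorithm is not reproduced, and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, R = 100, 60, 1.0               # signal length, measurements, modulo period
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, 5, replace=False)] = rng.random(5)   # sparse signal

y = np.mod(A @ x, R)                 # each measurement is folded into [0, R)
# Restricting to two modulo periods means A @ x lies in [-R, R), so each
# measurement hides only one unknown bin bit: A @ x = y - R * 1{A @ x < 0}.
```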

    Demixing Structured Superposition Signals from Periodic and Aperiodic Nonlinear Observations

    We consider the demixing problem of two (or more) structured high-dimensional vectors from a limited number of nonlinear observations, where this nonlinearity is due to either a periodic or an aperiodic function. We study certain families of structured superposition models, and propose a method which provably recovers the components given (nearly) $m = \mathcal{O}(s)$ samples, where $s$ denotes the sparsity level of the underlying components. This strictly improves upon previous nonlinear demixing techniques and asymptotically matches the best possible sample complexity. We also provide a range of simulations to illustrate the performance of the proposed algorithms. Comment: arXiv admin note: substantial text overlap with arXiv:1701.0659
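    A minimal CVXPY sketch of convex demixing of a two-component superposition under linear observations; the paper's periodic/aperiodic link functions are omitted, and the bases and data are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
n, m = 80, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)
B, _ = np.linalg.qr(rng.standard_normal((n, n)))   # incoherent second basis (illustrative)

w_true = np.zeros(n); w_true[:3] = 1.0             # component 1, sparse in the identity basis
z_true = np.zeros(n); z_true[5:8] = 1.0            # component 2, sparse in the basis B
y = A @ (w_true + B @ z_true)                      # observations of the superposition

w, z = cp.Variable(n), cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm(w, 1) + cp.norm(z, 1)),
                  [A @ (w + B @ z) == y])
prob.solve()
print("errors:", np.max(np.abs(w.value - w_true)), np.max(np.abs(z.value - z_true)))
```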

    Exact Joint Sparse Frequency Recovery via Optimization Methods

    Frequency recovery/estimation from discrete samples of superimposed sinusoidal signals is a classic yet important problem in statistical signal processing. Its research has recently been advanced by atomic norm techniques, which exploit signal sparsity, work directly on continuous frequencies, and completely resolve the grid mismatch problem of previous compressed sensing methods. In this work we investigate the frequency recovery problem in the presence of multiple measurement vectors (MMVs) which share the same frequency components, termed joint sparse frequency recovery and arising naturally in array processing applications. To study the advantage of MMVs, we first propose an $\ell_{2,0}$-norm-like approach exploiting joint sparsity and show that the number of recoverable frequencies can be increased except in a trivial case. While the resulting optimization problem is shown to be a rank minimization problem that cannot be practically solved, we then propose an MMV atomic norm approach that is a convex relaxation and can be viewed as a continuous counterpart of the $\ell_{2,1}$ norm method. We show that this MMV atomic norm approach can be solved by semidefinite programming. We also provide theoretical results showing that the frequencies can be exactly recovered under appropriate conditions. The above results either extend the MMV compressed sensing results from the discrete to the continuous setting or extend the recent super-resolution and continuous compressed sensing framework from the single to the multiple measurement vector case. Extensive simulation results are provided to validate our theoretical findings; they also imply that the proposed MMV atomic norm approach can improve performance in terms of a reduced number of required measurements and/or a relaxed frequency separation condition. Comment: 13 pages, double column, 4 figures, IEEE Transactions on Signal Processing, 201
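    A minimal CVXPY sketch of the discrete $\ell_{2,1}$ MMV program whose continuous counterpart the MMV atomic norm provides; the on-grid atoms and sizes are illustrative, and the semidefinite formulation of the atomic norm itself is not reproduced.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(6)
n, grid, L = 32, 64, 4               # samples, frequency grid size, number of MMVs
freqs = np.arange(grid) / grid
A = np.exp(2j * np.pi * np.outer(np.arange(n), freqs))   # on-grid sinusoidal atoms

X_true = np.zeros((grid, L), dtype=complex)
X_true[[5, 20], :] = rng.standard_normal((2, L))         # two frequencies shared across MMVs
Y = A @ X_true

X = cp.Variable((grid, L), complex=True)
row_norms = cp.norm(X, 2, axis=1)                        # ell_2 norm of each row
prob = cp.Problem(cp.Minimize(cp.sum(row_norms)), [A @ X == Y])
prob.solve()
print("row-support energy:", np.linalg.norm(X.value, axis=1)[[5, 20]])
```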

    Application of Compressive Sensing Techniques in Distributed Sensor Networks: A Survey

    In this survey paper, our goal is to discuss recent advances in compressive sensing (CS) based solutions for wireless sensor networks (WSNs), including the main ongoing/recent research efforts, challenges, and research trends in this area. In WSNs, CS based techniques are well motivated not only by the sparsity prior observed in different forms but also by the requirement of efficient in-network processing in terms of transmit power and communication bandwidth, even with nonsparse signals. In order to apply CS efficiently in a variety of WSN applications, there are several factors to be considered beyond the standard CS framework. We start the discussion with a brief introduction to the theory of CS and then describe the motivational factors behind the potential use of CS in WSN applications. Then, we identify three main areas along which the standard CS framework is extended so that CS can be efficiently applied to solve a variety of problems specific to WSNs. In particular, we emphasize the significance of extending the CS framework to (i) take communication constraints into account while designing projection matrices and reconstruction algorithms for signal reconstruction in centralized as well as decentralized settings, (ii) solve a variety of inference problems, such as detection, classification, and parameter estimation, with compressed data without signal reconstruction, and (iii) take practical communication aspects, such as measurement quantization, physical-layer secrecy constraints, and imperfect channel conditions, into account. Finally, open research issues and challenges are discussed in order to provide perspectives for future research directions.
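    A minimal numpy sketch of the basic CS compression step that motivates these WSN extensions: a shared random projection reduces n sensor readings to m << n values before transmission; the sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 1000, 100                     # raw readings vs. values actually transmitted
x = np.zeros(n)
x[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)   # sparse field

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # projection matrix shared with the fusion center
y = Phi @ x                          # compressed summary sent over the network
# The fusion center can then run a standard CS decoder on (Phi, y),
# or, per the survey, perform detection/classification on y directly.
```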

    Rank Awareness in Joint Sparse Recovery

    In this paper we revisit the sparse multiple measurement vector (MMV) problem, where the aim is to recover a set of jointly sparse multichannel vectors from incomplete measurements. This problem has received increasing interest as an extension of the single-channel sparse recovery problem, which lies at the heart of the emerging field of compressed sensing. However, the sparse approximation problem has origins that include links to the field of array signal processing, where we find the inspiration for a new family of MMV algorithms based on the MUSIC algorithm. We highlight the role of the rank of the coefficient matrix $X$ in determining the difficulty of the recovery problem. We derive necessary and sufficient conditions for the uniqueness of the sparse MMV solution, which indicate that the larger the rank of $X$, the less sparse $X$ needs to be to ensure uniqueness. We also show that the larger the rank of $X$, the less computational effort is required to solve the MMV problem through a combinatorial search. In the second part of the paper we consider practical suboptimal algorithms for solving the sparse MMV problem. We examine the rank awareness of popular algorithms, such as SOMP and mixed norm minimization techniques, and show them to be rank blind in terms of worst-case analysis. We then consider a family of greedy algorithms that are rank aware. The simplest such algorithm is a discrete version of MUSIC and is guaranteed to recover the sparse vectors in the full-rank MMV case under mild conditions. We extend this idea to develop a rank-aware pursuit algorithm that naturally reduces to Order Recursive Matching Pursuit (ORMP) in the single measurement case and also provides guaranteed recovery in the full-rank multi-measurement case. Numerical simulations demonstrate that the rank-aware algorithms are significantly better than existing algorithms in dealing with multiple measurements. Comment: 23 pages, 2 figures
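    A minimal numpy sketch of the discrete MUSIC criterion underlying the rank-aware algorithms: estimate the signal subspace of Y via an SVD and select the dictionary columns closest to it; the data, and the assumption of a full-rank noiseless Y, are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
m, n, L, k = 30, 100, 8, 5           # measurements, dictionary size, channels, sparsity
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)       # unit-norm dictionary columns

support = rng.choice(n, k, replace=False)
X = np.zeros((n, L))
X[support] = rng.standard_normal((k, L))   # jointly sparse, rank-k coefficient matrix
Y = A @ X

U, s, _ = np.linalg.svd(Y, full_matrices=False)
Us = U[:, :k]                        # signal subspace (full-rank case: dimension k)
scores = np.linalg.norm(Us.T @ A, axis=0)    # subspace correlation of each column
est_support = np.argsort(scores)[-k:]        # columns lying closest to the subspace
print(sorted(est_support) == sorted(support))
```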