
    One-Bit Quantization Design and Adaptive Methods for Compressed Sensing

    There have been a number of studies on sparse signal recovery from one-bit quantized measurements. Nevertheless, little attention has been paid to the choice of the quantization thresholds and its impact on signal recovery performance. This paper examines the problem of one-bit quantizer design for sparse signal recovery. Our analysis shows that the magnitude ambiguity that plagues conventional one-bit compressed sensing methods can be resolved, and an arbitrarily small reconstruction error achieved, by setting the quantization thresholds close enough to the original unquantized data samples. Since unquantized data samples are inaccessible in practice, we propose an adaptive quantization method that adjusts the thresholds so that they converge to the optimal ones. Numerical results are presented to corroborate our theoretical results and the effectiveness of the proposed algorithm.
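
    The flavor of the adaptive-threshold loop can be sketched as follows. This is not the paper's algorithm: it uses a BIHT-style (binary iterative hard thresholding) recovery step as a stand-in, assumes the sensor can re-measure with updated thresholds each round, and all names and parameter choices are illustrative.

```python
import numpy as np

# Hedged sketch: one-bit measurements b = sign(Ax - tau), with the
# thresholds tau pulled toward the current estimate of the unquantized
# samples A @ x_hat between rounds. The recovery step is a BIHT-style
# stand-in, not the paper's method.
rng = np.random.default_rng(0)
n, m, k = 100, 400, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)

tau = np.zeros(m)                            # initial thresholds
x_hat = np.zeros(n)
for _ in range(50):
    bits = np.sign(A @ x - tau)              # fresh one-bit measurements
    # gradient-style step toward sign consistency
    x_hat = x_hat + 0.5 * A.T @ (bits - np.sign(A @ x_hat - tau))
    small = np.argsort(np.abs(x_hat))[:-k]
    x_hat[small] = 0.0                       # keep the k largest entries
    tau = A @ x_hat                          # adapt thresholds toward samples
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```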

    A biconvex analysis for Lasso l1 reweighting

    l1 reweighting algorithms are very popular in sparse signal recovery and compressed sensing, since in practice they have been observed to outperform classical l1 methods. Nevertheless, the theoretical analysis of their convergence is a critical point, and is generally limited to convergence of the functional to a local minimum or to subsequence convergence. In this letter, we propose a new convergence analysis of a Lasso l1 reweighting method, based on the observation that the algorithm is an alternated convex search for a biconvex problem. On this basis, we are able to prove numerical convergence of the sequence of iterates generated by the algorithm, i.e., that the difference between successive iterates vanishes. This is not yet convergence of the sequence itself, but it is close enough for practical and numerical purposes. Furthermore, we propose an alternative iterative soft thresholding procedure, which is faster than the main algorithm.
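
    A minimal sketch of the alternated convex search structure, using the standard 1/(|x_i| + eps) reweighting rule (the letter's specific functional may differ). For fixed weights the inner problem is a weighted Lasso, solved here by weighted iterative soft thresholding; all parameter values are illustrative.

```python
import numpy as np

def weighted_ista(A, y, w, lam, x0, n_iter=200):
    """Solve min_x 0.5*||Ax - y||^2 + lam * sum_i w_i |x_i| by ISTA."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = x0.copy()
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x

rng = np.random.default_rng(1)
n, m, k = 200, 80, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

x, eps, lam = np.zeros(n), 0.1, 0.01
for _ in range(10):                          # alternated convex search:
    w = 1.0 / (np.abs(x) + eps)              #   weights in closed form...
    x = weighted_ista(A, y, w, lam, x)       #   ...then x with weights fixed
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```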

    Application of Compressive Sensing Techniques in Distributed Sensor Networks: A Survey

    In this survey paper, our goal is to discuss recent advances of compressive sensing (CS) based solutions in wireless sensor networks (WSNs), including the main ongoing and recent research efforts, challenges, and research trends in this area. In WSNs, CS based techniques are well motivated not only by the sparsity prior observed in different forms but also by the requirement of efficient in-network processing in terms of transmit power and communication bandwidth, even with nonsparse signals. In order to apply CS in a variety of WSN applications efficiently, several factors beyond the standard CS framework have to be considered. We start the discussion with a brief introduction to the theory of CS and then describe the motivational factors behind the potential use of CS in WSN applications. Then, we identify three main areas along which the standard CS framework is extended so that CS can be efficiently applied to solve a variety of problems specific to WSNs. In particular, we emphasize the significance of extending the CS framework to (i) take communication constraints into account while designing projection matrices and reconstruction algorithms for signal reconstruction in centralized as well as decentralized settings, (ii) solve a variety of inference problems, such as detection, classification, and parameter estimation, with compressed data without signal reconstruction, and (iii) take practical communication aspects such as measurement quantization, physical layer secrecy constraints, and imperfect channel conditions into account. Finally, open research issues and challenges are discussed in order to provide perspectives for future research directions.

    Super-Resolution From Binary Measurements With Unknown Threshold

    We address the problem of super-resolution of point sources from binary measurements, where random projections of the blurred measurement of the actual signal are encoded using only the sign information. The threshold used for binary quantization is not known to the decoder. We develop an algorithm that solves convex programs iteratively and achieves signal recovery. The proposed algorithm, which we refer to as the binary super-resolution (BSR) algorithm, recovers point sources with reasonable accuracy, albeit up to a scale factor. We show through simulations that the BSR algorithm is successful in recovering the locations and the amplitudes of the point sources, even in the presence of a significant amount of blurring. We also propose a framework for handling noisy measurements and demonstrate that BSR gives a reliable reconstruction (a reconstruction signal-to-noise ratio (SNR) of about 22 dB) for a measurement SNR of 15 dB.
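
    One folk way to fold an unknown threshold into one-bit recovery is to absorb it as an extra unknown, since sign(Ax - t) = sign([A, -1][x; t]). The sketch below uses that trick with a BIHT-style iteration; it is not the paper's BSR algorithm, and as the abstract notes, sign measurements fix the solution only up to scale.

```python
import numpy as np

# Hedged sketch (not BSR): absorb the unknown threshold t into the
# unknowns via sign(Ax - t) = sign(B z), B = [A, -1], z = [x; t],
# then run a BIHT-style iteration with sparsity on the x-part only.
rng = np.random.default_rng(2)
n, m, k, t = 100, 500, 4, 0.3
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 1.0 + np.abs(rng.standard_normal(k))
A = rng.standard_normal((m, n)) / np.sqrt(m)
bits = np.sign(A @ x - t)                    # binary data, threshold unknown

B = np.hstack([A, -np.ones((m, 1))])         # augmented sensing matrix
z = np.zeros(n + 1)
for _ in range(100):
    z = z + 0.5 * B.T @ (bits - np.sign(B @ z))
    small = np.argsort(np.abs(z[:n]))[:-k]
    z[small] = 0.0                           # hard threshold the x-part
x_hat, t_hat = z[:n], z[n]                   # recovered up to a scale factor
```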

    Efficient iterative thresholding algorithms with functional feedbacks and convergence analysis

    An accelerated class of adaptive iterative thresholding algorithms is studied analytically and empirically. They are based on the feedback mechanism of the null space tuning technique (NST+HT+FB). The main contribution of this article is the accelerated convergence analysis, with proofs, for a variable/adaptive index selection and different feedback principles at each iteration. This convergence analysis no longer requires a priori knowledge of the sparsity level s of the signal. A key idea is that the number of indices selected at each iteration should be chosen so as to speed up convergence. It is shown that uniform recovery of all s-sparse signals from given linear measurements can be achieved under reasonable (preconditioned) restricted isometry conditions. An accelerated convergence rate and improved convergence conditions are obtained by selecting an appropriate size of the index support per iteration. The theoretical findings are amply demonstrated and confirmed by extensive numerical experiments. It is also observed that the proposed algorithms have a clearly advantageous balance of efficiency, adaptivity, and accuracy compared with other state-of-the-art greedy iterative algorithms.
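
    A stripped-down sketch of the NST+HT+FB skeleton, with the paper's adaptive index-size rule simplified to a fixed support size k and the feedback realized as a least-squares refit on the selected support (illustrative only).

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 256, 100, 10
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

P = A.T @ np.linalg.inv(A @ A.T)             # ingredient for the NST projection
x_hat = np.zeros(n)
for _ in range(30):
    u = x_hat + P @ (y - A @ x_hat)          # NST: land on {z : Az = y}
    idx = np.argsort(np.abs(u))[-k:]         # HT: select the k largest entries
    x_hat = np.zeros(n)
    # FB (simplified): refit on the support instead of plain truncation
    x_hat[idx] = np.linalg.lstsq(A[:, idx], y, rcond=None)[0]
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```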

    Compressed Sensing for Wireless Communications: Useful Tips and Tricks

    As a paradigm to recover sparse signals from a small set of linear measurements, compressed sensing (CS) has stimulated a great deal of interest in recent years. In order to apply CS techniques to wireless communication systems, there are a number of things to know and several issues to be considered. However, it is not easy to come up with simple and easy answers to the issues raised while carrying out research on CS. The main purpose of this paper is to provide essential knowledge and useful tips that wireless communication researchers need to know when designing CS-based wireless systems. First, we present an overview of the CS technique, including the basic setup, sparse recovery algorithms, and performance guarantees. Then, we describe three distinct subproblems of CS, viz., sparse estimation, support identification, and sparse detection, with various wireless communication applications. We also address the main issues encountered in the design of CS-based wireless communication systems. These include the potential and limitations of CS techniques, useful tips one should be aware of, subtle points one should pay attention to, and prior knowledge needed to achieve better performance. Our hope is that this article will be a useful guide for wireless communication researchers, and even non-experts, to grasp the gist of CS techniques.
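
    As one concrete instance of the sparse recovery algorithms such tutorials walk through, here is a minimal orthogonal matching pursuit (OMP). This is a generic textbook version, not code from the paper.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily grow the support, then
    refit by least squares on the selected columns."""
    r, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))  # column most correlated with r
        support.append(j)
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ xs           # update the residual
    x = np.zeros(A.shape[1])
    x[support] = xs
    return x
```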

    Nonlinear Residual Minimization by Iteratively Reweighted Least Squares

    We address the numerical solution of minimal norm residuals of {\it nonlinear} equations in finite dimensions. We take inspiration from the problem of finding a sparse vector solution by using greedy algorithms based on iterative residual minimizations in the $\ell_p$-norm, for $1 \leq p \leq 2$. Due to the mild smoothness of the problem, especially for $p \to 1$, we develop and analyze a generalized version of Iteratively Reweighted Least Squares (IRLS). This simple and efficient algorithm solves optimization problems involving non-quadratic, possibly non-convex and non-smooth cost functions by transforming them into a sequence of common least squares problems, which can be tackled more efficiently. While its analysis has been developed in many contexts where the model equation is {\it linear}, no results were available for the {\it nonlinear} case. We address the convergence and the rate of error decay of IRLS for nonlinear problems. The convergence analysis is based on a reformulation as an alternating minimization of an energy functional, whose variables are the competitors to solutions of the intermediate reweighted least squares problems. Under specific conditions of coercivity and local convexity, we are able to show convergence of IRLS to minimizers of the nonlinear residual problem. For the case where local convexity is lacking, we propose an appropriate convexification. To illustrate the theoretical results, we conclude the paper with several numerical experiments. We compare IRLS with standard Matlab functions for an easily presentable example and numerically validate our theoretical results in the more complicated framework of phase retrieval problems. Finally, we examine the recovery capability of the algorithm in the context of data corrupted by impulsive noise, where sparsification of the residual is desired.
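
    For the linear special case min_x ||Ax - y||_p, the IRLS recipe is short enough to sketch. The paper's contribution is the nonlinear analysis; the smoothing parameter eps below is an illustrative fixed value rather than the adaptive rule a careful implementation would use.

```python
import numpy as np

def irls_lp(A, y, p=1.0, n_iter=50, eps=1e-6):
    """IRLS for min_x ||Ax - y||_p, 1 <= p <= 2 (linear residual case):
    each step solves a weighted least squares problem."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]     # start from the p = 2 solution
    for _ in range(n_iter):
        r = A @ x - y
        w = (r ** 2 + eps ** 2) ** (p / 2 - 1)   # smoothed |r_i|^(p-2) weights
        AW = A * w[:, None]                      # rows of A scaled by the weights
        x = np.linalg.solve(A.T @ AW, AW.T @ y)  # normal equations A'WA x = A'Wy
    return x
```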

    Solving OSCAR regularization problems by proximal splitting algorithms

    The OSCAR (octagonal shrinkage and clustering algorithm for regression) regularizer consists of an L_1 norm plus a pairwise L_inf norm (responsible for its grouping behavior) and was proposed to encourage group sparsity in scenarios where the groups are a priori unknown. The OSCAR regularizer has a non-trivial proximity operator, which limits its applicability. We reformulate this regularizer as a weighted sorted L_1 norm and propose its grouping proximity operator (GPO) and approximate proximity operator (APO), thus making state-of-the-art proximal splitting algorithms (PSAs) available for solving inverse problems with OSCAR regularization. The GPO is in fact the APO followed by additional grouping and averaging operations, which are costly in time and storage; this explains why algorithms with the APO are much faster than those with the GPO. Convergence of PSAs with the GPO is guaranteed, since the GPO is an exact proximity operator. Although convergence of PSAs with the APO may not be guaranteed, we have found experimentally that the APO behaves similarly to the GPO when the regularization parameter of the pairwise L_inf norm is set to an appropriately small value. Experiments on recovery of group-sparse signals (with unknown groups) show that PSAs with the APO are very fast and accurate.
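
    The weighted sorted L_1 proximity operator can be evaluated exactly by sorting, thresholding, and an isotonic projection whose block averaging is precisely the grouping/averaging the abstract mentions. Below is a sketch of this known construction, with illustrative parameter values; it is not the paper's code.

```python
import numpy as np

def prox_owl(v, w):
    """Proximity operator of the weighted sorted-L_1 norm
    sum_i w_i |v|_[i], for nonincreasing nonnegative weights w."""
    sgn = np.sign(v)
    u = np.abs(v)
    order = np.argsort(u)[::-1]            # indices of |v| in descending order
    z = u[order] - w                       # threshold the sorted magnitudes
    # isotonic (nonincreasing) projection via pool-adjacent-violators;
    # the block averaging is the grouping/averaging step of the GPO
    vals, sizes = [], []
    for zi in z:
        vals.append(float(zi)); sizes.append(1)
        while len(vals) > 1 and vals[-2] <= vals[-1]:
            v2, s2 = vals.pop(), sizes.pop()
            vals[-1] = (vals[-1] * sizes[-1] + v2 * s2) / (sizes[-1] + s2)
            sizes[-1] += s2
    xs = np.maximum(np.repeat(vals, sizes), 0.0)  # clip negatives to zero
    out = np.empty_like(u)
    out[order] = xs                        # undo the sort
    return sgn * out

# OSCAR corresponds to weights w_i = lam1 + lam2 * (n - i), nonincreasing
# in i (illustrative parameter values)
n, lam1, lam2 = 6, 0.2, 0.1
w = lam1 + lam2 * np.arange(n - 1, -1, -1)
v = np.array([0.5, -1.3, 1.25, 0.1, -0.05, 2.0])
print(prox_owl(v, w))
```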

    From Bayesian Sparsity to Gated Recurrent Nets

    The iterations of many first-order algorithms, when applied to minimizing common regularized regression functions, often resemble neural network layers with pre-specified weights. This observation has prompted the development of learning-based approaches that purport to replace these iterations with enhanced surrogates forged as DNN models from available training data. For example, important NP-hard sparse estimation problems have recently benefited from this genre of upgrade, with simple feedforward or recurrent networks ousting proximal gradient-based iterations. Analogously, this paper demonstrates that more powerful Bayesian algorithms for promoting sparsity, which rely on complex multi-loop majorization-minimization techniques, mirror the structure of more sophisticated long short-term memory (LSTM) networks, or alternative gated feedback networks previously designed for sequence prediction. As part of this development, we examine the parallels between latent variable trajectories operating across multiple time-scales during optimization and the activations within deep network structures designed to adaptively model such characteristic sequences. The resulting insights lead to a novel sparse estimation system that, when granted training data, can estimate optimal solutions efficiently in regimes where other algorithms fail, including practical direction-of-arrival (DOA) and 3D geometry recovery problems. The underlying principles we expose are also suggestive of a learning process for a richer class of multi-loop algorithms in other domains.
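
    For context, one representative of the multi-loop Bayesian family is sparse Bayesian learning with EM updates, sketched below. The hyperparameters gamma evolve on a slower time-scale than the posterior mean mu, which is the kind of two-time-scale trajectory the paper relates to gated (LSTM-like) activations. This is a generic SBL iteration, not the paper's learned network.

```python
import numpy as np

def sbl_em(Phi, y, sigma2=1e-3, n_iter=100):
    """EM updates for sparse Bayesian learning (generic, illustrative)."""
    m, n = Phi.shape
    gamma = np.ones(n)                            # slow latent state
    for _ in range(n_iter):
        G = Phi * gamma[None, :]                  # Phi @ diag(gamma)
        Sigma_y = sigma2 * np.eye(m) + G @ Phi.T  # marginal covariance of y
        K = np.linalg.solve(Sigma_y, G).T         # posterior gain, n x m
        mu = K @ y                                # fast state: posterior mean
        sig = gamma - np.sum(K * G.T, axis=1)     # posterior variances
        gamma = mu ** 2 + sig                     # EM hyperparameter update
    return mu, gamma
```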

    Solving Almost all Systems of Random Quadratic Equations

    This paper deals with finding an $n$-dimensional solution $x$ to a system of quadratic equations of the form $y_i = |\langle a_i, x \rangle|^2$ for $1 \le i \le m$, which is also known as phase retrieval and is NP-hard in general. We put forth a novel procedure for minimizing the amplitude-based least-squares empirical loss that starts with a weighted maximal correlation initialization, obtainable with a few power or Lanczos iterations, followed by successive refinements based on a sequence of iteratively reweighted (generalized) gradient iterations. The two stages (both the initialization and the gradient flow) distinguish themselves from prior contributions by the inclusion of a fresh (re)weighting regularization technique. The overall algorithm is conceptually simple, numerically scalable, and easy to implement. For certain random measurement models, the novel procedure is shown to be capable of finding the true solution $x$ in time proportional to reading the data $\{(a_i; y_i)\}_{1 \le i \le m}$. This holds with high probability and without extra assumptions on the signal $x$ to be recovered, provided that the number $m$ of equations is some constant $c > 0$ times the number $n$ of unknowns in the signal vector, namely, $m > cn$. Empirically, the upshots of this contribution are: i) (almost) $100\%$ perfect signal recovery in the high-dimensional regime (say, e.g., $n \ge 2{,}000$) given only an information-theoretically limiting number of noiseless equations, namely $m = 2n - 1$ in the real-valued Gaussian case; and ii) (nearly) optimal statistical accuracy in the presence of additive noise of bounded support. Finally, substantial numerical tests using both synthetic data and real images corroborate the markedly improved signal recovery performance and computational efficiency of our novel procedure relative to state-of-the-art approaches.
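
    The two-stage recipe can be sketched as follows. The paper's specific (re)weighting rules are omitted, leaving a plain truncated spectral initialization followed by amplitude-based gradient refinement; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 100, 600
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = np.abs(A @ x)                            # amplitude measurements

# initialization: leading eigenvector of a correlation matrix built from
# the largest measurements (maximal-correlation flavor, unweighted here)
top = np.argsort(y)[-m // 6:]
M = (A[top].T * y[top] ** 2) @ A[top] / len(top)
z = np.linalg.eigh(M)[1][:, -1] * np.sqrt(np.mean(y ** 2))  # scale to data energy

# refinement: gradient iterations on 0.5 * sum_i (|a_i'z| - y_i)^2
for _ in range(300):
    Az = A @ z
    z = z - 0.6 * A.T @ (Az - y * np.sign(Az)) / m
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print("relative error (up to global sign):", err)
```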