
    Correcting Errors in Linear Measurements and Compressed Sensing of Multiple Sources

    We present an algorithm for finding sparse solutions of the system of linear equations Φx = y with rectangular matrices Φ of size n×N, where n < N, when the measurement vector y is corrupted by a sparse vector of errors e. We call our algorithm the ℓ1-greedy-generous algorithm (LGGA), since it combines both greedy and generous strategies in decoding. The main advantage of LGGA over traditional error-correcting methods is its ability to work efficiently directly on linear data measurements: it uses the natural residual redundancy of the measurements and does not require any additional redundant channel encoding. We show how to use this algorithm for encoding and decoding multichannel sources. It has a significant advantage over existing straightforward decoders when the encoded sources have different density/sparsity of information content. This property can be exploited for very efficient blockwise encoding of data sets with a non-uniform distribution of information; images are the most typical example of such sources. An important feature of LGGA is its separation from the encoder: the decoder needs no side information from the encoder beyond the linear measurements and the knowledge that those measurements were created as a linear combination of different sources.
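
    The LGGA itself is not spelled out in the abstract; for reference, the classical ℓ1-decoding formulation it builds on treats the corrupted system as one augmented sparse-recovery problem, min ||x||1 + ||e||1 subject to Φx + e = y. A minimal sketch in Python (the function name and the linear-programming encoding are illustrative assumptions, not the paper's method):

        import numpy as np
        from scipy.optimize import linprog

        def l1_decode(Phi, y):
            # Recover a sparse signal x and a sparse error e from y = Phi @ x + e
            # by solving min ||x||_1 + ||e||_1  s.t.  Phi x + e = y as a linear
            # program, splitting each variable into nonnegative parts.
            n, N = Phi.shape
            c = np.ones(2 * N + 2 * n)  # objective: sum of all split parts
            A_eq = np.hstack([Phi, -Phi, np.eye(n), -np.eye(n)])
            res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
            v = res.x
            x = v[:N] - v[N:2 * N]
            e = v[2 * N:2 * N + n] - v[2 * N + n:]
            return x, e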

    Sparse Signal Recovery from Nonadaptive Linear Measurements

    The theory of Compressed Sensing, the emerging sampling paradigm 'that goes against the common wisdom', asserts that 'one can recover signals in R^n from far fewer samples or measurements' if the signal has a sparse representation in some orthonormal basis, from m = O(k log n), k ≪ n, nonadaptive measurements. The accuracy of the recovered signal is 'as good as that attainable with direct knowledge of the k most important coefficients and their locations'. Moreover, a good approximation to those important coefficients is extracted from the measurements by solving an L1 minimization problem, viz. Basis Pursuit. 'The nonadaptive measurements have the character of random linear combinations of the basis/frame elements'. The theory has far-reaching implications and immediately leads to a number of applications in data compression, channel coding and data acquisition. The last of these suggests that CS could have an enormous impact in areas where conventional hardware design has significant limitations, leading to 'efficient and revolutionary methods of data acquisition and storage in future'. The paper reviews the fundamental mathematical ideas of compressed sensing, viz. sparsity, incoherence, the restricted isometry property and basis pursuit, exemplified by the sparse recovery of a speech signal and the convergence of the L1 minimization algorithm.
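
    As a concrete instance of the L1 minimization the review discusses, a minimal iterative soft-thresholding (ISTA) solver for the basis-pursuit-denoising form min ½||Φx − y||² + λ||x||1 looks as follows; this is a generic textbook solver offered only as an illustration, not the specific algorithm whose convergence the paper studies:

        import numpy as np

        def ista(Phi, y, lam=0.01, n_iter=500):
            # Iterative soft-thresholding for min 0.5*||Phi x - y||^2 + lam*||x||_1.
            L = np.linalg.norm(Phi, 2) ** 2  # Lipschitz constant of the gradient
            x = np.zeros(Phi.shape[1])
            for _ in range(n_iter):
                g = x - Phi.T @ (Phi @ x - y) / L  # gradient step on the data term
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
            return x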

    Edge-adaptive l2 regularization image reconstruction from non-uniform Fourier data

    Total variation regularization based on the l1 norm is ubiquitous in image reconstruction. However, the resulting reconstructions are not always as sparse in the edge domain as desired. Iteratively reweighted methods provide some improvement in accuracy, but at the cost of extended runtime. In this paper we examine these methods for the case of data acquired as non-uniform Fourier samples. We then develop a non-iterative weighted regularization method that uses a pre-processing edge detection step to determine exactly where the sparsity should be in the edge domain. We show that it has the potential to outperform reweighted TV regularization methods in both accuracy and speed.
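
    A plausible one-dimensional reading of such an edge-adaptive scheme: detect edges first, then solve a quadratic problem whose difference-operator penalty is switched off at the detected edges, so recovery is a single linear solve rather than an iteratively reweighted loop. The operator construction and weights below are illustrative assumptions, and a generic real sensing matrix stands in for the paper's non-uniform Fourier samples:

        import numpy as np

        def edge_adaptive_l2(A, y, edge_mask, lam=1.0):
            # Solve min ||A x - y||^2 + lam * ||W D x||^2, where D takes first
            # differences and W zeroes the penalty at locations flagged as
            # edges, so jumps there are not smoothed away.
            n = A.shape[1]
            D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]    # first-difference operator
            w = np.where(edge_mask[:-1] > 0, 0.0, 1.0)  # zero weight on edges
            WD = w[:, None] * D
            # Normal equations of the quadratic objective: one direct solve.
            return np.linalg.solve(A.T @ A + lam * WD.T @ WD, A.T @ y)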

    Application of Compressive Sensing Techniques in Distributed Sensor Networks: A Survey

    In this survey paper, our goal is to discuss recent advances of compressive sensing (CS) based solutions in wireless sensor networks (WSNs), including the main ongoing/recent research efforts, challenges and research trends in this area. In WSNs, CS based techniques are well motivated not only by the sparsity prior observed in different forms but also by the requirement of efficient in-network processing in terms of transmit power and communication bandwidth, even with nonsparse signals. In order to apply CS in a variety of WSN applications efficiently, there are several factors to be considered beyond the standard CS framework. We start the discussion with a brief introduction to the theory of CS and then describe the motivational factors behind the potential use of CS in WSN applications. Then, we identify three main areas along which the standard CS framework is extended so that CS can be efficiently applied to solve a variety of problems specific to WSNs. In particular, we emphasize the significance of extending the CS framework to (i) take communication constraints into account while designing projection matrices and reconstruction algorithms for signal reconstruction in centralized as well as decentralized settings, (ii) solve a variety of inference problems, such as detection, classification and parameter estimation, with compressed data without signal reconstruction, and (iii) take practical communication aspects, such as measurement quantization, physical layer secrecy constraints, and imperfect channel conditions, into account. Finally, open research issues and challenges are discussed in order to provide perspectives for future research directions.
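
    Of the three extensions, inference on compressed data without reconstruction admits a very short illustration: a matched-filter detector applied directly in the measurement domain. This is a toy sketch under an assumed additive-noise model, not a method from the survey:

        import numpy as np

        def compressed_detector(Phi, y, s, threshold):
            # Decide whether a known signal s is present in compressed
            # measurements y = Phi @ (s or 0) + noise, without reconstructing
            # anything: correlate y against the compressed template Phi @ s.
            template = Phi @ s
            stat = y @ template / np.linalg.norm(template)
            return stat > threshold, stat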

    From Group Sparse Coding to Rank Minimization: A Novel Denoising Model for Low-level Image Restoration

    Recently, low-rank matrix recovery theory has been emerging as a significant advance for various image processing problems. Meanwhile, group sparse coding (GSC) theory has led to great successes in image restoration (IR), with each group exhibiting a low-rank property. In this paper, we propose a novel low-rank-minimization-based denoising model for IR tasks from the perspective of GSC, and we establish an important connection between our denoising model and the rank minimization problem. To overcome the bias problem caused by convex nuclear norm minimization (NNM) for rank approximation, a more generalized and flexible rank relaxation function is employed, namely a weighted nonconvex relaxation. Accordingly, an efficient iteratively reweighted algorithm is proposed to handle the resulting minimization problem, combined with the popular L1/2 and L2/3 thresholding operators. Finally, our proposed denoising model is applied to IR problems via an alternating direction method of multipliers (ADMM) strategy. Typical IR experiments on image compressive sensing (CS), inpainting, deblurring and impulsive noise removal demonstrate that our proposed method can achieve significantly higher PSNR/FSIM values than many relevant state-of-the-art methods.
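
    The core step behind such weighted nonconvex rank relaxations is a weighted shrinkage of singular values. A minimal sketch of that step, with plain weighted soft-thresholding standing in for the paper's L1/2 and L2/3 thresholding operators:

        import numpy as np

        def weighted_svt(M, weights):
            # Shrink each singular value by its own threshold. In reweighted
            # schemes the weights are refreshed each iteration, e.g.
            # w_i = c / (s_i + eps), so small singular values are shrunk harder
            # and large ones are left nearly intact, reducing the bias of plain
            # nuclear-norm minimization.
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            s_shrunk = np.maximum(s - weights, 0.0)
            return (U * s_shrunk) @ Vt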

    Online Reweighted Least Squares Algorithm for Sparse Recovery and Application to Short-Wave Infrared Imaging

    We address the problem of sparse recovery in an online setting, where random linear measurements of a sparse signal are revealed sequentially and the objective is to recover the underlying signal. We propose a reweighted least squares (RLS) algorithm to solve the problem of online sparse reconstruction, wherein a system of linear equations is solved using conjugate gradient with the arrival of every new measurement. The proposed online algorithm is useful in a setting where one seeks to design a progressive decoding strategy to reconstruct a sparse signal from linear measurements, so that one does not have to wait until all measurements are acquired. Moreover, the proposed algorithm is also useful in applications where it is infeasible to process all the measurements using a batch algorithm, owing to computational and storage constraints. One need not fix the number of measurements a priori; rather, one can keep collecting measurements until the quality of reconstruction is satisfactory and stop once the reconstruction is sufficiently accurate. We provide a proof of concept by comparing the performance of our algorithm with the RLS-based batch reconstruction strategy, known as iteratively reweighted least squares (IRLS), on natural images. Experiments on a recently proposed focal plane array-based imaging setup show up to 1 dB improvement in output peak signal-to-noise ratio as compared with total variation-based reconstruction.
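
    A hedged sketch of what such an online reweighted least squares loop could look like: each arriving measurement updates running sufficient statistics, after which a few reweighting passes re-solve the sparsity-promoting weighted system. A direct solve stands in here for the conjugate gradient solver the paper uses, and the update form is an assumption, not the authors' exact recursion:

        import numpy as np

        def online_irls(measurement_stream, N, eps=1e-3, n_pass=5):
            # measurement_stream yields (phi, y) pairs with phi a length-N
            # sensing vector and y = phi @ x_true + noise.
            G = np.zeros((N, N))  # running Phi^T Phi
            b = np.zeros(N)       # running Phi^T y
            x = np.zeros(N)
            for phi, y in measurement_stream:
                G += np.outer(phi, phi)
                b += y * phi
                for _ in range(n_pass):
                    # IRLS weights approximate the l1 penalty: coordinates near
                    # zero are penalized strongly, large ones barely at all.
                    w = eps / (np.abs(x) + eps)
                    x = np.linalg.solve(G + np.diag(w), b)
            return x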

    Non-Convex Weighted Lp Nuclear Norm based ADMM Framework for Image Restoration

    Since the matrix formed by nonlocal similar patches in a natural image is of low rank, nuclear norm minimization (NNM) has been widely used in various image processing studies. Nonetheless, the nuclear-norm-based convex surrogate of the rank function usually over-shrinks the rank components and treats the different components equally, and thus may produce a result far from the optimum. To alleviate these limitations of the nuclear norm, in this paper we propose a new method for image restoration via non-convex weighted Lp nuclear norm minimization (NCW-NNM), which is able to more accurately enforce the image's structural sparsity and self-similarity simultaneously. To make the proposed model tractable and robust, the alternating direction method of multipliers (ADMM) is adopted to solve the associated non-convex minimization problem. Experimental results on various types of image restoration problems, including image deblurring, image inpainting and image compressive sensing (CS) recovery, demonstrate that the proposed method outperforms many current state-of-the-art methods in both objective and perceptual quality.
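
    The ADMM splitting used for models of this shape can be sketched generically: alternate a quadratic data-fit solve, a proximal (denoising) step, and a dual update. The vector form below is an illustrative skeleton, with prox_g standing in for the paper's patch-wise weighted Lp singular-value shrinkage:

        import numpy as np

        def admm_restore(A, y, prox_g, rho=1.0, n_iter=100):
            # min 0.5*||A x - y||^2 + g(z)  s.t.  x = z  (scaled-dual ADMM).
            n = A.shape[1]
            x = np.zeros(n)
            z = np.zeros(n)
            u = np.zeros(n)
            AtA, Aty = A.T @ A, A.T @ y
            I = np.eye(n)
            for _ in range(n_iter):
                x = np.linalg.solve(AtA + rho * I, Aty + rho * (z - u))  # data fit
                z = prox_g(x + u, 1.0 / rho)  # regularizer / denoising step
                u = u + x - z                 # dual ascent on the constraint x = z
            return x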

    Compressive Sensing via Low-Rank Gaussian Mixture Models

    We develop a new compressive sensing (CS) inversion algorithm by utilizing the Gaussian mixture model (GMM). While the compressive sensing is performed globally on the entire image, as implemented in our lensless camera, a low-rank GMM is imposed on the local image patches. This low-rank GMM is derived via eigenvalue thresholding of the GMM trained on the projection of the measurement data, and is thus learned in situ. The GMM and the projection of the measurement data are updated iteratively during the reconstruction. Our GMM algorithm degrades to the piecewise linear estimator (PLE) if each patch is represented by a single Gaussian model. Inspired by this, a low-rank PLE algorithm is also developed for CS inversion, constituting an additional contribution of this paper. Extensive results on both simulation data and real data captured by the lensless camera demonstrate the efficacy of the proposed algorithm. Furthermore, we compare the CS reconstruction results using our algorithm with JPEG compression. Simulation results demonstrate that when limited bandwidth is available (a small number of measurements), our algorithm can achieve results comparable to JPEG.
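
    The PLE step that the GMM algorithm reduces to can be sketched with standard Gaussian computations: for each mixture component, form the linear MMSE estimate of the patch from its compressed measurements and keep the component with the highest evidence. Variable names and the white-noise model are assumptions for illustration:

        import numpy as np

        def gmm_patch_estimate(A, y, mus, Sigmas, sigma2=1e-3):
            # Patch model p ~ N(mu, Sigma); measurements y = A @ p + noise.
            best, best_ll = None, -np.inf
            for mu, Sigma in zip(mus, Sigmas):
                S = A @ Sigma @ A.T + sigma2 * np.eye(A.shape[0])
                r = y - A @ mu
                # Log-evidence of this component (up to a constant).
                ll = -0.5 * (r @ np.linalg.solve(S, r) + np.linalg.slogdet(S)[1])
                if ll > best_ll:
                    # Posterior (linear MMSE) mean of the patch under this model.
                    best_ll = ll
                    best = mu + Sigma @ A.T @ np.linalg.solve(S, r)
            return best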

    Training Sparse Neural Networks using Compressed Sensing

    Pruning the weights of neural networks is an effective and widely used technique for reducing model size and inference complexity. We develop and test a novel method based on compressed sensing which combines pruning and training into a single step. Specifically, we utilize an adaptively weighted ℓ1 penalty on the weights during training, which we combine with a generalization of the regularized dual averaging (RDA) algorithm in order to train sparse neural networks. The adaptive weighting we introduce corresponds to a novel regularizer based on the logarithm of the absolute value of the weights. Numerical experiments on the CIFAR-10 and CIFAR-100 datasets demonstrate that our method 1) trains sparser, more accurate networks than existing state-of-the-art methods; 2) can also be used effectively to obtain structured sparsity; 3) can be used to train sparse networks from scratch, i.e. from a random initialization, as opposed to initializing with a well-trained base model; and 4) acts as an effective regularizer, improving generalization accuracy.
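
    One reweighted-ℓ1 step of this kind is easy to sketch: linearizing a log penalty λ·Σ log(|w| + ε) at the current weights gives a per-coordinate threshold λ/(|w| + ε), so small weights are shrunk harder. The proximal-SGD form below is a stand-in for the paper's RDA generalization, and the constants are illustrative:

        import numpy as np

        def prox_log_l1_step(w, grad, lr=0.01, lam=1e-4, eps=1e-2):
            # One gradient step followed by adaptively weighted soft-thresholding.
            w = w - lr * grad
            thresh = lr * lam / (np.abs(w) + eps)  # adaptive per-weight threshold
            return np.sign(w) * np.maximum(np.abs(w) - thresh, 0.0)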

    A Weighted ℓ1-Minimization Approach for Wavelet Reconstruction of Signals and Images

    In this effort, we propose a convex optimization approach based on weighted ℓ1-regularization for reconstructing objects of interest, such as signals or images, that are sparse or compressible in a wavelet basis. We recover the wavelet coefficients associated with the functional representation of the object of interest by solving our proposed optimization problem. We give a specific choice of weights and show numerically that the chosen weights admit efficient recovery of objects of interest from either a set of sub-samples or a noisy version. Our method not only exploits sparsity but also helps promote a particular kind of structured sparsity often exhibited by many signals and images. Furthermore, we illustrate the effectiveness of the proposed convex optimization problem by providing numerical examples using both orthonormal wavelets and a frame of wavelets. We also provide an adaptive choice of weights, which is a modification of the iteratively reweighted ℓ1-minimization method.
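
    A weighted ℓ1 recovery of this shape can be solved with a per-coefficient variant of soft-thresholding; the sketch below assumes generic weights (the paper's specific structured choice is not reproduced) and a matrix A mapping wavelet coefficients to measurements:

        import numpy as np

        def weighted_ista(A, y, weights, lam=0.05, n_iter=300):
            # min 0.5*||A c - y||^2 + lam * sum_i weights[i] * |c[i]|
            L = np.linalg.norm(A, 2) ** 2
            c = np.zeros(A.shape[1])
            for _ in range(n_iter):
                g = c - A.T @ (A @ c - y) / L
                t = lam * weights / L  # per-coefficient thresholds
                c = np.sign(g) * np.maximum(np.abs(g) - t, 0.0)
            return c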