30,450 research outputs found

    Discrete and Continuous Sparse Recovery Methods and Their Applications

    Get PDF
    Low-dimensional signal processing has drawn increasing attention in the past decade, because prior knowledge of a low-dimensional structure can be exploited to aid recovery of the signal of interest. Among the different forms of low dimensionality, this dissertation focuses on the synthesis and analysis models of sparse recovery, and it comprises two major topics. In the first topic, we discuss the synthesis model of sparse recovery and consider dictionary mismatches in the model; we further introduce a continuous sparse recovery to eliminate off-grid mismatches in direction-of-arrival (DOA) estimation. In the second topic, we focus on the analysis model, with an emphasis on efficient algorithms and performance analysis. For the synthesis model with structured dictionary mismatches, we exploit the joint sparsity between the mismatch parameters and the original sparse signal. We demonstrate that exploiting this structure yields a robust reconstruction under mild conditions on the sensing matrix; this model is very useful for radar and passive-array applications. We propose several efficient algorithms to solve the joint sparse recovery problem and show, using numerical examples, that they outperform several methods in the literature. We further extend the mismatch model to a continuous sparse model using the mathematical theory of super-resolution; statistical analysis shows the robustness of the proposed algorithm. A number-detection algorithm is also proposed for co-prime arrays. Numerical examples show that continuous sparse recovery further improves DOA estimation accuracy over both the joint sparse method and MUSIC with spatial smoothing.
    In the second topic, we visit the corresponding analysis model of sparse recovery. Instead of assuming a sparse decomposition of the original signal, the analysis model relies on the existence of a linear transformation that makes the original signal sparse. In this work we use a monotone version of the fast iterative shrinkage-thresholding algorithm (MFISTA) to derive efficient algorithms for the sparse recovery problem. We examine two widely used relaxation techniques, namely smoothing and decomposition. We show that although these two techniques are equivalent in their objective functions, the smoothing technique converges faster than the decomposition technique. We also derive a performance guarantee for the analysis model when a LASSO-type reconstruction is performed. Numerical examples show that the proposed algorithm is more efficient than other state-of-the-art algorithms.
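    The FISTA machinery mentioned in this abstract can be illustrated on the simplest synthesis-model problem, the LASSO. The following is a minimal, hypothetical sketch: the dissertation's actual algorithms address the analysis model, mismatch parameters, and a monotone (MFISTA) variant, all of which this toy example omits.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding: the proximal operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via FISTA.

    A monotone variant (MFISTA) would additionally keep the iterate with
    the lowest objective seen so far; this sketch shows only the plain
    accelerated scheme for brevity.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of the smooth term at y
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x
```

    With a sufficiently small penalty and enough iterations, the iterate closely recovers a sparse vector from compressive measurements.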

    Fast Dictionary Learning for Sparse Representations of Speech Signals

    Get PDF
    © 2011 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Published version: IEEE Journal of Selected Topics in Signal Processing 5(5): 1025-1031, Sep 2011. DOI: 10.1109/JSTSP.2011.2157892

    Exploiting Prior Knowledge in Compressed Sensing Wireless ECG Systems

    Full text link
    Recent results in telecardiology show that compressed sensing (CS) is a promising tool to lower energy consumption in wireless body area networks for electrocardiogram (ECG) monitoring. However, the performance of current CS-based algorithms, in terms of compression rate and reconstruction quality of the ECG, still falls short of the performance attained by state-of-the-art wavelet-based algorithms. In this paper, we propose to exploit the structure of the wavelet representation of the ECG signal to boost the performance of CS-based methods for compression and reconstruction of ECG signals. More precisely, we incorporate prior information about the wavelet dependencies across scales into the reconstruction algorithms and exploit the high fraction of common support of the wavelet coefficients of consecutive ECG segments. Experimental results utilizing the MIT-BIH Arrhythmia Database show that significant performance gains, in terms of compression rate and reconstruction quality, can be obtained by the proposed algorithms compared to current CS-based methods. Comment: Accepted for publication at IEEE Journal of Biomedical and Health Informatics
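    One simple way to inject the kind of prior support knowledge described in this abstract is a weighted l1 penalty, solved here with plain ISTA. This is only an illustrative sketch, not the paper's algorithm (which additionally models wavelet dependencies across scales); the weighting scheme and all parameters below are assumptions.

```python
import numpy as np

def weighted_ista(Phi, y, weights, lam=0.05, n_iter=1000):
    """ISTA for min 0.5*||Phi x - y||^2 + lam * sum_i w_i * |x_i|.

    Coefficients believed to be in the support (e.g. shared with the
    previous ECG segment's wavelet support) get a smaller weight w_i,
    so they are penalized less -- a simple way to use prior support.
    """
    L = np.linalg.norm(Phi, 2) ** 2        # step size 1/L keeps ISTA stable
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = x - Phi.T @ (Phi @ x - y) / L  # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - lam * weights / L, 0.0)
    return x
```

    Setting `weights` to 1 everywhere and, say, 0.1 on the coefficients of the previously recovered support lets the reconstruction succeed from fewer measurements than an unweighted penalty.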

    A Tensor-Based Dictionary Learning Approach to Tomographic Image Reconstruction

    Full text link
    We consider tomographic reconstruction using priors in the form of a dictionary learned from training images. The reconstruction has two stages: first we construct a tensor dictionary prior from our training data, and then we pose the reconstruction problem in terms of recovering the expansion coefficients in that dictionary. Our approach differs from past approaches in that a) we use a third-order tensor representation for our images and b) we recast the reconstruction problem using the tensor formulation. The dictionary learning problem is presented as a non-negative tensor factorization problem with sparsity constraints. The reconstruction problem is formulated in a convex optimization framework by looking for a solution with a sparse representation in the tensor dictionary. Numerical results show that our tensor formulation leads to very sparse representations of both the training images and the reconstructions, due to the ability to represent repeated features compactly in the dictionary. Comment: 29 pages
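    The sparsity-constrained non-negative factorization at the core of the dictionary-learning stage can be sketched with classical multiplicative updates on an ordinary matrix. Note the simplifications: the paper factorizes a third-order tensor of image patches, whereas this toy works on a matrix, and the exact placement of the sparsity penalty is an assumption.

```python
import numpy as np

def sparse_nmf(V, r, lam=0.01, n_iter=500, seed=0):
    """Non-negative factorization V ~= W @ H with an l1 sparsity penalty
    on the coefficients H, via Lee-Seung-style multiplicative updates
    (the lam term in the H-update denominator shrinks H toward zero).
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1           # positive init keeps updates valid
    H = rng.random((r, n)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H
```

    Because the updates are multiplicative, non-negativity of W and H is preserved automatically, and the l1 term drives many entries of H toward zero, i.e. a sparse representation in the learned dictionary W.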

    Multiscale Adaptive Representation of Signals: I. The Basic Framework

    Full text link
    We introduce a framework for designing multi-scale, adaptive, shift-invariant frames and bi-frames for representing signals. The new framework, called AdaFrame, improves over dictionary learning-based techniques in terms of computational efficiency at inference time. It improves classical multi-scale bases such as wavelet frames in terms of coding efficiency. It provides an attractive alternative to dictionary learning-based techniques for low-level signal processing tasks, such as compression and denoising, as well as high-level tasks, such as feature extraction for object recognition. Connections with deep convolutional networks are also discussed. In particular, the proposed framework reveals a drawback in the commonly used approach for visualizing the activations of the intermediate layers in convolutional networks, and suggests a natural alternative.
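    A fixed, one-level undecimated Haar transform is perhaps the simplest instance of the shift-invariant frames this abstract discusses. AdaFrame learns its filters from data; the Haar filters here are only a stand-in, used to show the frame property that matters (perfect reconstruction at every shift).

```python
import numpy as np

def haar_frame_analysis(x):
    """One level of an undecimated (shift-invariant) Haar frame:
    lowpass and highpass coefficient sequences, each the same length
    as the signal. Circular shifts keep the transform exactly invertible."""
    xs = np.roll(x, -1)                    # x[n+1], wrapped circularly
    low = (x + xs) / 2.0                   # local average (lowpass)
    high = (x - xs) / 2.0                  # local difference (highpass)
    return low, high

def haar_frame_synthesis(low, high):
    """Left inverse of the analysis above: low + high recovers x[n]
    exactly, since (x[n]+x[n+1])/2 + (x[n]-x[n+1])/2 = x[n]."""
    return low + high
```

    The redundancy (two coefficient sequences per signal sample) is exactly what makes the representation shift-invariant, at the cost of being overcomplete, which is the trade-off adaptive frame design tries to exploit.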