1,063 research outputs found

    A Generalized Framework for Learning and Recovery of Structured Sparse Signals

    Engineering: 1st Place (The Ohio State University Edward F. Hayes Graduate Research Forum). We report on a framework for recovering single- or multi-timestep sparse signals that can learn and exploit a variety of probabilistic forms of structure. Message passing-based inference and empirical Bayesian parameter learning form the backbone of the recovery procedure. We further describe an object-oriented software paradigm for implementing our framework, which consists of assembling modular software components that collectively define a desired statistical signal model. Lastly, numerical results for an example structured sparse signal model are provided. A one-year embargo was granted for this item.
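
    The abstract describes message passing-based inference with empirical Bayesian learning but does not include code. As a rough illustration of that family of methods, the sketch below runs a generic approximate message passing (AMP) loop with a simple soft-thresholding denoiser on a single-timestep sparse signal; the dimensions, sparsity level, and threshold rule are illustrative assumptions, not the authors' framework.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding denoiser."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp_sparse_recovery(A, y, n_iter=30):
    """Generic AMP iteration for y = A x + noise with sparse x.

    Textbook AMP loop, not the structured-signal framework of the abstract;
    the threshold heuristic below is an assumption.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        tau = np.linalg.norm(z) / np.sqrt(m)          # residual-based threshold
        r = x + A.T @ z                               # pseudo-data
        x_new = soft_threshold(r, tau)                # denoising step
        onsager = (z / m) * np.count_nonzero(x_new)   # Onsager correction term
        z = y - A @ x_new + onsager
        x = x_new
    return x

# Toy usage with an i.i.d. Gaussian sensing matrix (illustrative only).
rng = np.random.default_rng(0)
n, m, k = 400, 200, 20
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = amp_sparse_recovery(A, y)
```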

    Simultaneous use of Individual and Joint Regularization Terms in Compressive Sensing: Joint Reconstruction of Multi-Channel Multi-Contrast MRI Acquisitions

    Purpose: A time-efficient strategy to acquire high-quality multi-contrast images is to reconstruct undersampled data with joint regularization terms that leverage common information across contrasts. However, these terms can cause leakage of uncommon features among contrasts, compromising diagnostic utility. The goal of this study is to develop a compressive sensing method for multi-channel multi-contrast magnetic resonance imaging (MRI) that optimally utilizes shared information while preventing feature leakage. Theory: Joint regularization terms, namely group sparsity and colour total variation, are used to exploit common features across images, while individual sparsity and total variation terms are also used to prevent leakage of distinct features across contrasts. The multi-channel multi-contrast reconstruction problem is solved via a fast algorithm based on the Alternating Direction Method of Multipliers. Methods: The proposed method is compared against reconstructions that use only individual or only joint regularization terms. Comparisons were performed on single-channel simulated and multi-channel in-vivo datasets in terms of reconstruction quality and neuroradiologist reader scores. Results: The proposed method demonstrates rapid convergence and improved image quality for both simulated and in-vivo datasets. Furthermore, while reconstructions that solely use joint regularization terms are prone to leakage of features, the proposed method reliably avoids leakage via simultaneous use of joint and individual terms. Conclusion: The proposed compressive sensing method performs fast reconstruction of multi-channel multi-contrast MRI data with improved image quality. It offers reliability against feature leakage in joint reconstructions, thereby holding great promise for clinical use. Comment: 13 pages, 13 figures. Submitted for possible publication.
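
    As a rough illustration of how the two kinds of regularizers act, the sketch below implements the proximal operators of an individual L1 (sparsity) term and a joint L2,1 (group-sparsity) term across contrasts. It is not the authors' reconstruction code: a full ADMM solver would couple such updates with a data-consistency step through separate splitting variables, and the array shapes and weights here are assumptions.

```python
import numpy as np

def prox_individual_l1(X, lam):
    """Per-contrast soft-thresholding (individual sparsity term)."""
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def prox_group_l21(X, lam):
    """Joint (group) soft-thresholding across contrasts.

    X has shape (n_contrasts, n_coeffs); each group is the vector of
    coefficients at one position taken across all contrasts.
    """
    norms = np.linalg.norm(X, axis=0, keepdims=True)          # group norms
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return X * scale

# Illustrative application to random transform coefficients (assumed shapes/weights).
rng = np.random.default_rng(1)
coeffs = rng.standard_normal((3, 1000))            # 3 contrasts
shared = prox_group_l21(coeffs, lam=0.5)           # joint, shared-support term
distinct = prox_individual_l1(coeffs, lam=0.1)     # individual, contrast-specific term
```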

    Vector Approximate Message Passing for the Generalized Linear Model

    The generalized linear model (GLM), where a random vector $\boldsymbol{x}$ is observed through a noisy, possibly nonlinear, function of a linear transform output $\boldsymbol{z}=\boldsymbol{Ax}$, arises in a range of applications such as robust regression, binary classification, quantized compressed sensing, phase retrieval, photon-limited imaging, and inference from neural spike trains. When $\boldsymbol{A}$ is large and i.i.d. Gaussian, the generalized approximate message passing (GAMP) algorithm is an efficient means of MAP or marginal inference, and its performance can be rigorously characterized by a scalar state evolution. For general $\boldsymbol{A}$, though, GAMP can misbehave. Damping and sequential updating help to robustify GAMP, but their effects are limited. Recently, a "vector AMP" (VAMP) algorithm was proposed for additive white Gaussian noise channels. VAMP extends AMP's guarantees from i.i.d. Gaussian $\boldsymbol{A}$ to the larger class of rotationally invariant $\boldsymbol{A}$. In this paper, we show how VAMP can be extended to the GLM. Numerical experiments show that the proposed GLM-VAMP is much more robust to ill-conditioning in $\boldsymbol{A}$ than damped GAMP.
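
    To make the GLM observation model concrete, the snippet below generates data from a few of the output channels named in the abstract (binary classification, quantized compressed sensing, photon-limited imaging). The matrix size, noise models, and quantizer step are illustrative assumptions, not choices from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 300, 500
A = rng.standard_normal((m, n)) / np.sqrt(m)   # assumed i.i.d. Gaussian sensing matrix
x = rng.standard_normal(n)
z = A @ x                                      # linear transform output

# Three example GLM output channels y = f(z) mentioned in the abstract
# (all parameters below are illustrative assumptions):
y_logistic = rng.binomial(1, 1.0 / (1.0 + np.exp(-z)))        # binary classification
y_quantized = np.clip(np.round(z / 0.5), -4, 3) * 0.5         # quantized compressed sensing
y_poisson = rng.poisson(np.exp(0.1 * z))                      # photon-limited imaging
```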

    Properties of spatial coupling in compressed sensing

    In this paper we address a series of open questions about the construction of spatially coupled measurement matrices in compressed sensing. For hardware implementations, one is forced to depart from the limiting regime of parameters in which the proofs of the so-called threshold saturation work. We investigate quantitatively the behavior under finite coupling range, the dependence on the shape of the coupling interaction, and optimization of the so-called seed to minimize the distance from optimality. Our analysis explains some of the properties observed empirically in previous works and provides new insight into spatially coupled compressed sensing. Comment: 5 pages, 6 figures.
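
    As a purely illustrative picture of a spatially coupled measurement matrix, the sketch below assembles a block matrix whose Gaussian blocks are nonzero only within a finite coupling range of the diagonal and whose first block-row is oversampled to act as a seed. The block sizes, flat coupling window, and seeding rule are assumptions, not the specific constructions analyzed in the paper.

```python
import numpy as np

def spatially_coupled_matrix(Lr, Lc, block_m, block_n, w, seed_boost=2.0, rng=None):
    """Assemble an Lr x Lc grid of blocks into a spatially coupled sensing matrix.

    Block (r, c) is i.i.d. Gaussian only when |r - c| <= w (finite coupling
    range); the first block-row is oversampled by `seed_boost` to serve as a
    seed. All of these choices are illustrative assumptions.
    """
    rng = rng or np.random.default_rng(0)
    rows = []
    for r in range(Lr):
        m_r = int(block_m * (seed_boost if r == 0 else 1.0))   # larger seed block-row
        blocks = []
        for c in range(Lc):
            if abs(r - c) <= w:
                var = 1.0 / (block_m * (2 * w + 1))            # flat coupling window
                blocks.append(np.sqrt(var) * rng.standard_normal((m_r, block_n)))
            else:
                blocks.append(np.zeros((m_r, block_n)))
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

A = spatially_coupled_matrix(Lr=8, Lc=8, block_m=25, block_n=50, w=2)
```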

    Turbo Bayesian Compressed Sensing

    Compressed sensing (CS) theory specifies a new signal acquisition approach, potentially allowing the acquisition of signals at a much lower data rate than the Nyquist sampling rate. In CS, the signal is not directly acquired but reconstructed from a few measurements. One of the key problems in CS is how to recover the original signal from measurements in the presence of noise. This dissertation addresses signal reconstruction problems in CS. First, a feedback structure and signal recovery algorithm, orthogonal pruning pursuit (OPP), is proposed to exploit prior knowledge to reconstruct the signal in the noise-free situation. To handle noise, a noise-aware signal reconstruction algorithm based on Bayesian Compressed Sensing (BCS) is developed. Moreover, a novel Turbo Bayesian Compressed Sensing (TBCS) algorithm is developed for joint signal reconstruction by exploiting both spatial and temporal redundancy. The TBCS algorithm is then applied to a UWB positioning system to achieve mm-accuracy with low sampling rate ADCs. Finally, hardware implementation of BCS signal reconstruction on FPGAs and GPUs is investigated, focusing on parallel Cholesky decomposition, a key component of BCS. Simulation results on software and hardware have demonstrated that OPP and TBCS outperform previous approaches, with UWB positioning accuracy improved by 12.8x. The accelerated computation helps enable real-time application of this work.
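
    The Cholesky decomposition mentioned above enters BCS through its linear-Gaussian posterior update. The reference sketch below computes that update with an off-the-shelf Cholesky solve; it is a standard sparse-Bayesian-learning formula, not the dissertation's FPGA/GPU implementation, and the sizes and hyperparameters are assumed.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def bcs_posterior(Phi, y, alpha, sigma2):
    """Posterior mean/covariance of the weights in Bayesian compressed sensing.

    Standard update: Sigma = (Phi^T Phi / sigma2 + diag(alpha))^{-1} and
    mu = Sigma Phi^T y / sigma2. The Cholesky factorization below is the
    step parallelized on FPGAs/GPUs in the dissertation; this NumPy/SciPy
    version is only a reference sketch with assumed hyperparameters.
    """
    B = Phi.T @ Phi / sigma2 + np.diag(alpha)
    c, low = cho_factor(B)                            # Cholesky factor of B
    Sigma = cho_solve((c, low), np.eye(B.shape[0]))   # posterior covariance
    mu = cho_solve((c, low), Phi.T @ y / sigma2)      # posterior mean
    return mu, Sigma

# Toy usage (illustrative sizes and hyperparameters).
rng = np.random.default_rng(3)
Phi = rng.standard_normal((100, 60))
w_true = rng.standard_normal(60) * (rng.random(60) < 0.1)
y = Phi @ w_true + 0.01 * rng.standard_normal(100)
mu, Sigma = bcs_posterior(Phi, y, alpha=np.ones(60), sigma2=1e-4)
```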