    Sparse Power Factorization: Balancing peakiness and sample complexity

    In many applications, one is faced with an inverse problem where the known signal depends in a bilinear way on two unknown input vectors. Often at least one of the input vectors is assumed to be sparse, i.e., to have only a few non-zero entries. Sparse Power Factorization (SPF), proposed by Lee, Wu, and Bresler, aims to tackle this problem. They established recovery guarantees for a somewhat restrictive class of signals under the assumption that the measurements are random. We generalize these recovery guarantees to a significantly enlarged and more realistic signal class at the expense of a moderately increased number of measurements.
    Comment: 18 pages
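
    The bilinear setup can be made concrete with a small sketch: a simplified alternating scheme in the spirit of SPF, applied to toy measurements y_j = u^T A_j v with an s-sparse factor u. This is not the exact algorithm of Lee, Wu, and Bresler (which uses hard-thresholding-pursuit inner steps and specific initializations); the function names, problem sizes, and the plain thresholded least-squares updates below are illustrative assumptions.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    out = x.copy()
    out[np.argsort(np.abs(x))[:-s]] = 0.0
    return out

def spf_sketch(y, A, s, n_iter=50):
    """Alternating thresholded least squares for y_j = u^T A_j v,
    where A has shape (m, n1, n2) and u is assumed s-sparse."""
    # crude spectral-style initialization from the back-projection sum_j y_j A_j
    _, _, Vt = np.linalg.svd(np.tensordot(y, A, axes=(0, 0)))
    v = Vt[0]
    for _ in range(n_iter):
        # with v fixed, y is linear in u: y ~ (A v) u, since y_j = (A_j v)^T u
        u, *_ = np.linalg.lstsq(A @ v, y, rcond=None)
        u = hard_threshold(u, s)
        # with u fixed, y is linear in v: y ~ (u^T A) v
        v, *_ = np.linalg.lstsq(np.tensordot(u, A, axes=(0, 1)), y, rcond=None)
    return u, v

# toy run on synthetic random measurements (all sizes hypothetical)
rng = np.random.default_rng(0)
n1, n2, m, s = 40, 20, 300, 4
A = rng.standard_normal((m, n1, n2))
u0 = np.zeros(n1)
u0[rng.choice(n1, s, replace=False)] = rng.standard_normal(s)
v0 = rng.standard_normal(n2)
u, v = spf_sketch(np.einsum('i,mij,j->m', u0, A, v0), A, s)
print(np.linalg.norm(np.outer(u, v) - np.outer(u0, v0)))  # small on success
```

    Note that the factors are only identifiable up to the scalar exchange (a*u, v/a), which is why success is checked on the outer product u v^T rather than on the factors individually.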

    Auto-Calibration and Biconvex Compressive Sensing with Applications to Parallel MRI

    We study an auto-calibration problem in which a transform-sparse signal is compressive-sensed by multiple sensors in parallel, each with unknown sensing parameters. The problem has an important application in parallel MRI (pMRI) reconstruction, where explicit coil calibrations are often difficult and costly to achieve in practice, yet remain a fundamental requirement for high-precision reconstructions. Most auto-calibration strategies yield reconstructions that require solving a challenging biconvex optimization problem. We transform the auto-calibrated parallel sensing problem into a convex optimization problem using the idea of `lifting'. By exploiting sparsity structures in the signal and the redundancy introduced by multiple sensors, we solve a mixed-norm minimization problem to recover the underlying signal and the sensing parameters simultaneously. Robust and stable recovery guarantees are derived in the presence of noise and sparsity deficiencies in the signals. For the pMRI application, our method provides a theoretically guaranteed approach to self-calibrated parallel imaging that accelerates MRI acquisitions under appropriate assumptions. Developments in MRI are discussed, and numerical simulations using an analytical phantom and simulated coil sensitivities are presented to support our theoretical results.
    Comment: Keywords: Self-calibration, Compressive sensing, Convex optimization, Random matrices, Parallel MRI
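
    The lifting step can be illustrated on a stripped-down self-calibration model with a single sensor (the paper's setting has several): unknown gains d = T h, assumed to lie in a known low-dimensional subspace T, multiply compressive measurements of a sparse x. Substituting Z = h x^T makes every measurement linear in Z, and a mixed norm promotes the column sparsity Z inherits from x. The sketch below uses CVXPY; the sizes, variable names, and the specific l_{2,1} objective are assumptions for illustration, not the paper's exact formulation.

```python
import cvxpy as cp
import numpy as np

# toy self-calibration model: y = diag(T h) A x with h and x unknown
rng = np.random.default_rng(1)
n, p, k, s = 128, 32, 4, 3         # measurements, signal length, subspace dim, sparsity
A = rng.standard_normal((n, p))    # known sensing matrix
T = rng.standard_normal((n, k))    # known gain subspace
h = rng.standard_normal(k)         # unknown calibration parameters
x = np.zeros(p)
x[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
y = (T @ h) * (A @ x)              # bilinear (biconvex) measurements

# lifting: with Z = h x^T, each y_j = t_j^T Z a_j is LINEAR in Z;
# the l_{2,1} norm promotes the column sparsity Z inherits from x
Z = cp.Variable((k, p))
lhs = cp.hstack([T[j, :] @ Z @ A[j, :] for j in range(n)])
prob = cp.Problem(cp.Minimize(cp.sum(cp.norm(Z, 2, axis=0))), [lhs == y])
prob.solve()

# read h and x back off the best rank-1 approximation of Z (up to scale)
U, S, Vt = np.linalg.svd(Z.value)
h_hat, x_hat = np.sqrt(S[0]) * U[:, 0], np.sqrt(S[0]) * Vt[0]
```

    Recovery is only defined up to the inherent scaling ambiguity (a*h)(x/a)^T, and whether the convex program succeeds depends on the randomness model and on n relative to k + s, mirroring the kind of guarantees the abstract describes.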

    Regularized Gradient Descent: A Nonconvex Recipe for Fast Joint Blind Deconvolution and Demixing

    We study the question of extracting a sequence of functions $\{\boldsymbol{f}_i, \boldsymbol{g}_i\}_{i=1}^s$ from observing only the sum of their convolutions, i.e., from $\boldsymbol{y} = \sum_{i=1}^s \boldsymbol{f}_i \ast \boldsymbol{g}_i$. While convex optimization techniques can solve this joint blind deconvolution-demixing problem provably and robustly under certain conditions, medium- and large-size problems call for computationally faster methods that do not sacrifice the benefits of mathematical rigor that come with convex methods. In this paper, we present a non-convex algorithm that guarantees exact recovery under conditions competitive with those of convex optimization methods, with the additional advantage of being computationally much more efficient. Our two-step algorithm converges to the global minimum linearly and is also robust in the presence of additive noise. While the derived performance bounds are suboptimal in terms of the information-theoretic limit, numerical simulations show remarkable performance even when the number of measurements is close to the number of degrees of freedom. We discuss an application of the proposed framework to wireless communications in connection with the Internet-of-Things.
    Comment: Accepted to Information and Inference: a Journal of the IMA
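
    As a minimal illustration of the nonconvex approach (not the paper's algorithm, which adds subspace models, a careful spectral initialization, and an explicit regularizer to keep iterates in a benign region), the sketch below runs plain gradient descent on the least-squares loss for the sum-of-circular-convolutions model; all sizes, initial scales, and step-size choices are assumptions and may need tuning.

```python
import numpy as np

def joint_bd_demix(y, s, n, step=1e-3, n_iter=5000, seed=0):
    """Gradient descent on L(F, G) = 0.5 * ||sum_i f_i (*) g_i - y||^2,
    where (*) is circular convolution, evaluated via the FFT."""
    rng = np.random.default_rng(seed)
    F = 0.1 * rng.standard_normal((s, n))   # estimates of f_1..f_s
    G = 0.1 * rng.standard_normal((s, n))   # estimates of g_1..g_s
    for _ in range(n_iter):
        Fh, Gh = np.fft.fft(F, axis=1), np.fft.fft(G, axis=1)
        r = np.fft.ifft((Fh * Gh).sum(axis=0)).real - y   # residual
        rh = np.fft.fft(r)
        # the gradient w.r.t. f_i is the circular correlation of r with g_i
        # (and symmetrically for g_i); both use the pre-update Fh, Gh
        F -= step * np.fft.ifft(rh[None, :] * np.conj(Gh), axis=1).real
        G -= step * np.fft.ifft(rh[None, :] * np.conj(Fh), axis=1).real
    return F, G

# toy demo: s = 2 convolution pairs of length n = 64, noiseless data
rng = np.random.default_rng(1)
s, n = 2, 64
f0, g0 = rng.standard_normal((s, n)), rng.standard_normal((s, n))
y = np.fft.ifft((np.fft.fft(f0, axis=1) * np.fft.fft(g0, axis=1)).sum(axis=0)).real
F, G = joint_bd_demix(y, s, n)
print(np.linalg.norm(np.fft.ifft((np.fft.fft(F, axis=1) *
      np.fft.fft(G, axis=1)).sum(axis=0)).real - y))      # measurement fit
```

    Because the factor pairs are identifiable only up to scalings and mixtures that preserve the sum, the natural success metric here is the measurement residual rather than per-factor error; the subspace assumptions in the paper are what restore identifiability of the individual pairs.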