
    Parallel Magnetic Resonance Imaging as Approximation in a Reproducing Kernel Hilbert Space

    In Magnetic Resonance Imaging (MRI), data samples are collected in the spatial frequency domain (k-space), typically by time-consuming line-by-line scanning on a Cartesian grid. Scans can be accelerated by simultaneous acquisition of data using multiple receivers (parallel imaging), and by using more efficient non-Cartesian sampling schemes. As shown here, reconstruction from samples at arbitrary locations can be understood as approximation of vector-valued functions from the acquired samples and formulated using a Reproducing Kernel Hilbert Space (RKHS) with a matrix-valued kernel defined by the spatial sensitivities of the receive coils. This establishes a formal connection between approximation theory and parallel imaging. Theoretical tools from approximation theory can then be used to understand reconstruction in k-space and to extend the analysis of the effects of sample selection beyond the traditional g-factor noise analysis, covering both noise amplification and approximation errors. This is demonstrated with numerical examples. Comment: 28 pages, 7 figures
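    The following is a minimal sketch, in Python, of the kernel-interpolation mechanism the abstract describes: values sampled at arbitrary locations are fit in an RKHS and the approximant is evaluated elsewhere. The one-dimensional toy signal and the Gaussian kernel are hypothetical stand-ins; the paper's kernel is matrix-valued and derived from the receive-coil sensitivities.

```python
import numpy as np

# Minimal sketch of scalar kernel (RKHS) interpolation from scattered
# samples. The Gaussian kernel is a hypothetical stand-in for the
# paper's matrix-valued, coil-sensitivity-derived kernel.

def gaussian_kernel(x, y, sigma=0.1):
    # k(x, y) = exp(-(x - y)^2 / (2 sigma^2)), evaluated pairwise
    d = x[:, None] - y[None, :]
    return np.exp(-d**2 / (2 * sigma**2))

rng = np.random.default_rng(0)
x_samp = rng.uniform(-1, 1, 40)       # arbitrary (non-Cartesian) sample locations
f = lambda t: np.sinc(5 * t)          # toy signal standing in for k-space data
y_samp = f(x_samp) + 0.01 * rng.standard_normal(40)

# Solve (K + eps*I) alpha = y for the interpolation weights.
K = gaussian_kernel(x_samp, x_samp)
alpha = np.linalg.solve(K + 1e-6 * np.eye(40), y_samp)

# Evaluate the kernel approximant on a dense grid.
x_grid = np.linspace(-1, 1, 200)
f_hat = gaussian_kernel(x_grid, x_samp) @ alpha
print("max approximation error:", np.abs(f_hat - f(x_grid)).max())
```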

    A mixed $\ell_1$ regularization approach for sparse simultaneous approximation of parameterized PDEs

    We present and analyze a novel sparse polynomial technique for the simultaneous approximation of parameterized partial differential equations (PDEs) with deterministic and stochastic inputs. Our approach treats the numerical solution as a jointly sparse reconstruction problem, through a reformulation of standard basis pursuit denoising in which the set of jointly sparse vectors is infinite. To achieve global reconstruction of sparse solutions to parameterized elliptic PDEs over both physical and parametric domains, we combine the standard measurement scheme developed for compressed sensing in the context of bounded orthonormal systems with a novel mixed-norm based $\ell_1$ regularization method that exploits both energy and sparsity. In addition, we prove that, with minimal sample complexity, error estimates comparable to the best $s$-term and quasi-optimal approximations are achievable, while requiring only a priori bounds on the polynomial truncation error with respect to the energy norm. Finally, we perform extensive numerical experiments on several high-dimensional parameterized elliptic PDE models to demonstrate the superior recovery properties of the proposed approach. Comment: 23 pages, 4 figures
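    As a hedged illustration of the joint-sparsity idea, the sketch below recovers a row-sparse coefficient matrix with a mixed $\ell_{2,1}$ penalty via proximal gradient descent (ISTA). All dimensions, the penalty weight, and the random Gaussian measurements are illustrative assumptions; the paper works with an infinite set of jointly sparse vectors and a bounded orthonormal system.

```python
import numpy as np

# Joint-sparse recovery with a mixed l2,1 penalty via proximal gradient
# (ISTA). Dimensions, sparsity level, and lam are illustrative.

rng = np.random.default_rng(1)
m, n, k = 40, 100, 8                      # measurements, ambient dim, joint signals
A = rng.standard_normal((m, n)) / np.sqrt(m)
X_true = np.zeros((n, k))
rows = rng.choice(n, 5, replace=False)    # 5 shared (jointly sparse) rows
X_true[rows] = rng.standard_normal((5, k))
Y = A @ X_true

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
X = np.zeros((n, k))
for _ in range(500):
    G = X - step * A.T @ (A @ X - Y)                         # gradient step
    norms = np.linalg.norm(G, axis=1, keepdims=True)
    X = np.maximum(0, 1 - lam * step / (norms + 1e-12)) * G  # row-wise soft threshold

print("recovered rows:", np.sort(np.where(np.linalg.norm(X, axis=1) > 1e-3)[0]))
print("true rows:     ", np.sort(rows))
```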

    Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations

    We analyze the convergence of compressive sensing based sampling techniques for the efficient evaluation of functionals of solutions for a class of high-dimensional, affine-parametric, linear operator equations which depend on possibly infinitely many parameters. The proposed algorithms are based on so-called "non-intrusive" sampling of the high-dimensional parameter space, reminiscent of Monte-Carlo sampling. In contrast to Monte-Carlo, however, a functional of the parametric solution is then computed via compressive sensing methods from samples of functionals of the solution. A key ingredient in our analysis, of independent interest, is a generalization of recent results on the approximate sparsity of generalized polynomial chaos (gpc) representations of the parametric solution families, in terms of the gpc series with respect to tensorized Chebyshev polynomials. In particular, we establish sufficient conditions on the parametric inputs to the parametric operator equation such that the Chebyshev coefficients of the gpc expansion are contained in certain weighted $\ell_p$-spaces for $0 < p \leq 1$. Based on this, we show that reconstructions of the parametric solutions computed from the sampled problems converge, with high probability, at the $L_2$, resp. $L_\infty$, convergence rates afforded by best $s$-term approximations of the parametric solution, up to logarithmic factors. Comment: revised version, 27 pages
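    A minimal one-parameter sketch of the sampling scheme: random parameter draws, a Chebyshev measurement matrix, and $\ell_1$-regularized least squares (here plain ISTA) to recover compressible gpc coefficients. The quantity of interest $f(y) = 1/(2+y)$, the truncation level, and the regularization weight are illustrative stand-ins for the paper's high-dimensional tensorized setting.

```python
import numpy as np

# Recovering Chebyshev gpc coefficients of a parametric quantity of
# interest from random parameter samples by l1-regularized least
# squares (plain ISTA). One parameter and f(y) = 1/(2+y) are illustrative.

rng = np.random.default_rng(2)
n_coef, n_samp = 64, 30
y = rng.uniform(-1, 1, n_samp)                          # random parameter draws
f = 1.0 / (2.0 + y)                                     # toy solution functional
T = np.cos(np.outer(np.arccos(y), np.arange(n_coef)))   # T_j(y_i) = cos(j arccos y_i)

lam = 1e-3
step = 1.0 / np.linalg.norm(T, 2) ** 2
c = np.zeros(n_coef)
for _ in range(2000):
    g = c - step * T.T @ (T @ c - f)                          # gradient step
    c = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)  # soft threshold

y_test = np.linspace(-1, 1, 200)
T_test = np.cos(np.outer(np.arccos(y_test), np.arange(n_coef)))
print("max approximation error:", np.abs(T_test @ c - 1.0 / (2.0 + y_test)).max())
```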

    Limits on Sparse Data Acquisition: RIC Analysis of Finite Gaussian Matrices

    One of the key issues in the acquisition of sparse data by means of compressed sensing (CS) is the design of the measurement matrix. Gaussian matrices have been proven to be information-theoretically optimal in terms of minimizing the required number of measurements for sparse recovery. In this paper we provide a new approach for the analysis of the restricted isometry constant (RIC) of finite-dimensional Gaussian measurement matrices. The proposed method relies on the exact distributions of the extreme eigenvalues of Wishart matrices. First, we derive the probability that the restricted isometry property is satisfied for a given sufficient recovery condition on the RIC, and propose a probabilistic framework to study both the symmetric and asymmetric RICs. Then, we analyze the recovery of compressible signals in noise through the statistical characterization of stability and robustness. The presented framework determines limits on various sparse recovery algorithms for finite-size problems. In particular, it provides a tight lower bound on the maximum sparsity order of the acquired data that allows signal recovery with a given target probability. Also, we derive simple approximations for the RICs based on the Tracy-Widom distribution. Comment: 11 pages, 6 figures, accepted for publication in IEEE Transactions on Information Theory
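    The extreme-eigenvalue viewpoint is easy to probe numerically. The sketch below Monte-Carlo samples $s$-column submatrices of a finite Gaussian matrix and tracks the largest deviation of the Gram-matrix eigenvalues from 1, which lower-bounds the RIC $\delta_s$ (an exhaustive computation over all supports is infeasible). Sizes and trial counts are illustrative.

```python
import numpy as np

# Monte-Carlo lower bound on the restricted isometry constant delta_s of
# a finite Gaussian matrix, from the extreme eigenvalues of s-column Gram
# submatrices (Wishart-type statistics). Sizes and trials are illustrative.

rng = np.random.default_rng(3)
m, n, s = 64, 128, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)       # iid N(0, 1/m) entries

delta = 0.0
for _ in range(2000):
    S = rng.choice(n, s, replace=False)            # random s-column support
    eig = np.linalg.eigvalsh(A[:, S].T @ A[:, S])  # ascending eigenvalues
    delta = max(delta, 1.0 - eig[0], eig[-1] - 1.0)

print(f"Monte-Carlo lower bound on delta_{s}: {delta:.3f}")
```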

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday, August 27th, to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Support Recovery with Sparsely Sampled Free Random Matrices

    Consider a Bernoulli-Gaussian complex $n$-vector whose components are $V_i = X_i B_i$, with $X_i \sim \mathcal{CN}(0, \mathcal{P}_x)$ and binary $B_i$, mutually independent and i.i.d. across $i$. This random $q$-sparse vector is multiplied by a square random matrix $\mathbf{U}$, and a randomly chosen subset, of average size $np$, $p \in [0,1]$, of the resulting vector components is then observed in additive Gaussian noise. We extend the scope of conventional noisy compressive sampling models, where $\mathbf{U}$ is typically the identity or a matrix with i.i.d. components, to allow $\mathbf{U}$ satisfying a certain freeness condition. This class of matrices encompasses Haar matrices and other unitarily invariant matrices. We use the replica method and the decoupling principle of Guo and Verdú, as well as a number of information-theoretic bounds, to study the input-output mutual information and the support recovery error rate in the limit $n \to \infty$. We also extend the scope of the large deviation approach of Rangan, Fletcher and Goyal, and characterize the performance of a class of estimators encompassing thresholded linear MMSE and $\ell_1$ relaxation.
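    A small real-valued sketch of the measurement model and one of the estimators mentioned: a Bernoulli-Gaussian vector is rotated by a Haar (unitarily invariant) matrix, a random subset of outputs is observed in noise, and the support is estimated by thresholding a linear MMSE estimate. The parameters and the threshold are illustrative; the paper treats the complex case and the $n \to \infty$ limit.

```python
import numpy as np
from scipy.stats import ortho_group

# Bernoulli-Gaussian vector observed through a Haar (unitarily invariant)
# matrix with a random subset of outputs, then thresholded linear MMSE
# support recovery. Real-valued; parameters and threshold are illustrative.

rng = np.random.default_rng(4)
n, q, p, sigma = 200, 0.1, 0.5, 0.05
b = rng.random(n) < q                       # Bernoulli support pattern
x = b * rng.standard_normal(n)              # Bernoulli-Gaussian signal
U = ortho_group.rvs(n, random_state=4)      # Haar-distributed orthogonal matrix
obs = rng.random(n) < p                     # observe ~ n*p output components
H = U[obs]
y = H @ x + sigma * rng.standard_normal(obs.sum())

# Linear MMSE with prior per-entry variance q:
#   x_hat = q H^T (q H H^T + sigma^2 I)^{-1} y
G = q * H @ H.T + sigma**2 * np.eye(obs.sum())
x_hat = q * H.T @ np.linalg.solve(G, y)

support_hat = np.abs(x_hat) > 0.1           # hard threshold (illustrative)
print("support error rate:", np.mean(support_hat != b))
```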

    Explicit measurements with almost optimal thresholds for compressed sensing

    We consider the deterministic construction of a measurement matrix and a recovery method for signals that are block sparse. A signal of dimension $N = nd$, consisting of $n$ blocks of size $d$, is called $(s, d)$-block sparse if only $s$ blocks out of $n$ are nonzero. We construct an explicit linear mapping $\Phi$ that maps an $(s, d)$-block sparse signal to a measurement vector of dimension $M$, where $s \cdot d < N\big(1 - (1 - M/N)^{d/(d+1)} - o(1)\big)$. We show that if the $(s, d)$-block sparse signal is chosen uniformly at random, then the signal can almost surely be reconstructed from the measurement vector in $O(N^3)$ computations.
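    Reading the threshold above (and ignoring the $o(1)$ term), one can solve for the number of measurements $M$ implied by given $N$, $d$, and $s$; the short sketch below does exactly that. The numerical values are illustrative.

```python
# Solving the abstract's threshold s*d < N*(1 - (1 - M/N)**(d/(d+1)))
# for M, ignoring the o(1) term. N, d, and s below are illustrative.

def measurements_needed(N, d, s):
    # Rearranged: M > N * (1 - (1 - s*d/N) ** ((d + 1) / d))
    return N * (1 - (1 - s * d / N) ** ((d + 1) / d))

N, d = 10_000, 4
for s in (50, 100, 200):
    print(f"s={s}: need M > {measurements_needed(N, d, s):.0f}")
```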