
    CPGD: Cadzow Plug-and-Play Gradient Descent for Generalised FRI

    Finite rate of innovation (FRI) is a powerful reconstruction framework enabling the recovery of sparse Dirac streams from uniform low-pass filtered samples. An extension of this framework, called generalised FRI (genFRI), has recently been proposed to handle arbitrary linear measurement models. In this context, signal reconstruction amounts to solving a joint constrained optimisation problem, yielding estimates of both the Fourier series coefficients of the Dirac stream and its so-called annihilating filter, which is involved in the regularisation term. This optimisation problem is, however, highly non-convex and non-linear in the data. Moreover, the proposed numerical solver is computationally intensive and comes without convergence guarantees. In this work, we propose an implicit formulation of the genFRI problem. To this end, we leverage a novel regularisation term which does not depend explicitly on the unknown annihilating filter, yet enforces sufficient structure in the solution for stable recovery. The resulting optimisation problem is still non-convex, but simpler, since it is linear in the data and has fewer unknowns. We solve it by means of a provably convergent proximal gradient descent (PGD) method. Since the proximal step does not admit a simple closed-form expression, we propose an inexact PGD method, coined Cadzow plug-and-play gradient descent (CPGD). The latter approximates the proximal step by means of Cadzow denoising, a well-known denoising algorithm in FRI. We provide local fixed-point convergence guarantees for CPGD. Through extensive numerical simulations, we demonstrate the superiority of CPGD over the state-of-the-art in the case of non-uniform time samples.
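    As an illustration of the two ingredients named in the abstract, the sketch below combines a plain gradient step with Cadzow denoising used as a plug-and-play proximal step. It is a minimal reading of the idea, not the paper's algorithm: the Hankel dimensions, step size and iteration counts are arbitrary choices.

```python
import numpy as np

def cadzow_denoise(x, K, n_iter=20):
    """Cadzow denoising: alternately project the Hankel matrix of x onto
    rank-K matrices (truncated SVD) and back onto Hankel structure
    (anti-diagonal averaging)."""
    N = len(x)
    L = N // 2 + 1                      # Hankel height; one common choice
    M = N - L + 1
    y = np.asarray(x, dtype=complex).copy()
    for _ in range(n_iter):
        # Hankel matrix H[i, j] = y[i + j]
        H = np.array([[y[i + j] for j in range(M)] for i in range(L)])
        # rank-K projection via truncated SVD
        U, s, Vh = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :K] * s[:K]) @ Vh[:K, :]
        # Hankel projection: average each anti-diagonal (entries with i + j = n)
        y = np.array([
            np.mean([H[i, n - i] for i in range(max(0, n - M + 1), min(L, n + 1))])
            for n in range(N)
        ])
    return y

def cpgd(y_meas, G, K, step, n_iter=50):
    """Inexact proximal gradient descent: gradient step on 0.5*||G x - y||^2,
    with the proximal step replaced by Cadzow denoising (plug-and-play)."""
    x = np.zeros(G.shape[1], dtype=complex)
    for _ in range(n_iter):
        x = x - step * (G.conj().T @ (G @ x - y_meas))
        x = cadzow_denoise(x, K, n_iter=5)
    return x
```

    A sequence that is a sum of K complex exponentials has a Hankel matrix of rank K, which is why the rank-K projection acts as a structural denoiser for FRI-type signals.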

    A compressed-sensing approach for ultrasound imaging

    Ultrasonography uses multi-element piezoelectric probes to image tissues. Current time-domain beamforming techniques require the signal at each transducer element to be sampled at a rate above the Nyquist criterion, resulting in an extensive amount of data to be received, stored and processed. In this work, we propose to exploit the sparsity of the signal received at each transducer element. The proposed approach uses multiple compressive multiplexers for signal encoding and solves an l1-minimization problem in the decoding step, resulting in a 75% reduction in the amount of data, the number of cables and the number of analog-to-digital converters required to perform high-quality reconstruction.
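    The decoding step can be illustrated with a generic l1 solver. The sketch below uses ISTA on a random sensing matrix; the paper's compressive-multiplexer encoding and exact solver are not specified here, so both the matrix and the parameters are placeholders.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=1000):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding (a standard l1 decoder)."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x
```

    With far fewer measurements than unknowns, the l1 penalty selects the sparse signal consistent with the multiplexed data.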

    « Je meesmes me sui mis en queste, et assez ai traveillé »: Questes, fifteen years of research and friendship

    In 2001, in the wake of a student seminar at the ENS, a group of doctoral students came together at the initiative of Estelle Doudet. It took the name « Questes »: we could have done without a name, or adopted a more academically conventional title such as « working group on the Middle Ages ». But we felt that an original name, one that could be turned into a noun to designate ourselves, would give us a form of identity. Hence the idea of a medieval word, either in Latin or in…

    Beamforming-deconvolution: A novel concept of deconvolution for ultrasound imaging

    In ultrasound (US) imaging, beamforming is usually separated from deconvolution and other post-processing techniques. The former processes raw data to build radio-frequency (RF) images, while the latter restores high-resolution images, known as the tissue reflectivity function (TRF), from RF images. This work is the first attempt to perform deconvolution directly on raw data, bridging the gap between beamforming and deconvolution and thus reducing the estimation errors introduced by two separate steps. The proposed approach retrieves both high-quality RF and TRF images and exhibits better RF image quality than a classical beamforming approach.
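    For context, the "former" stage the abstract refers to, classical delay-and-sum beamforming, can be sketched as follows for a 0-degree plane-wave transmit; the element layout, sampling rate and sound speed below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def das_beamform(raw, elem_x, grid_x, grid_z, fs, c=1540.0):
    """Classical delay-and-sum receive beamforming.
    raw: (n_elements, n_samples) channel data after a 0-degree plane-wave
    transmit, so the transmit delay to depth z is simply z / c."""
    n_el, n_samp = raw.shape
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # two-way time of flight: plane-wave transmit + per-element receive
            t = (z + np.sqrt(z**2 + (elem_x - x)**2)) / c
            idx = np.round(t * fs).astype(int)
            valid = idx < n_samp
            img[iz, ix] = raw[np.arange(n_el)[valid], idx[valid]].sum()
    return img
```

    Each pixel sums the channel samples at its computed round-trip delays, so echoes from a true scatterer add coherently while everything else averages out.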

    Pulse-Stream Models In Time-Of-Flight Imaging

    This paper considers the problem of reconstructing raw signals from random projections in the context of time-of-flight imaging with an array of sensors. It presents a new signal model, coined the multi-channel pulse-stream model, which builds on pulse-stream models and accounts for the additional structure induced by inter-sensor dependencies. We propose a sampling theorem and a reconstruction algorithm, based on l1-minimization, for signals belonging to this model. We demonstrate the benefits of the proposed approach by means of numerical simulations and on a real nondestructive-evaluation application, where the peak signal-to-noise ratio is increased by 3 dB compared to standard compressed-sensing strategies.
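    The inter-sensor structure mentioned in the abstract is reminiscent of joint (row) sparsity across channels; one standard way to encode such structure in an l1-type recovery is the l2,1 norm, whose proximal operator is block soft-thresholding. The sketch below shows that generic tool, not necessarily the paper's exact model.

```python
import numpy as np

def block_soft(X, t):
    """Proximal operator of t * ||X||_{2,1}: each row of X (one support
    location seen across all channels) is shrunk jointly, so the channels
    end up sharing a common support."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return X * scale
```

    A row whose joint energy across channels falls below the threshold is zeroed entirely, which is how the dependency between sensors is enforced.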

    A sparse reconstruction framework for Fourier-based plane wave imaging

    Ultrafast imaging based on plane-wave (PW) insonification is an active area of research due to its capability of reaching high frame rates. Among PW imaging methods, Fourier-based approaches have proven competitive with traditional delay-and-sum methods. Motivated by the success of compressed-sensing techniques in other Fourier imaging modalities, such as magnetic resonance imaging, we propose a new sparse regularization framework to reconstruct high-quality ultrasound (US) images. The framework takes advantage of both the ability to formulate the imaging inverse problem in the Fourier domain and the sparsity of US images in a sparsifying domain. We show, by means of simulations and in vitro and in vivo data, that the proposed framework significantly reduces image artifacts, namely measurement noise and sidelobes, compared with classical methods, leading to an increase in image quality.
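    The interplay between a Fourier-domain forward model and a sparsity prior can be sketched with a toy example: undersampled 2-D Fourier measurements recovered by proximal gradient descent. For simplicity the sketch assumes sparsity directly in the image domain (isolated scatterers), whereas the paper uses a sparsifying transform; the mask, regularization weight and iteration count are illustrative.

```python
import numpy as np

def fourier_cs_recover(y, mask, lam, n_iter=300):
    """Proximal gradient descent for 0.5*||mask * FFT(x) - y||^2 + lam*||x||_1,
    with sparsity assumed directly in the (real) image domain.
    y: measured Fourier coefficients, zero where mask is False."""
    x = np.zeros(mask.shape)
    for _ in range(n_iter):
        # gradient step; the masked orthonormal FFT has Lipschitz constant 1,
        # so a unit step size is safe
        r = mask * np.fft.fft2(x, norm="ortho") - y
        x = x - np.real(np.fft.ifft2(r, norm="ortho"))
        # l1 proximal step (soft-thresholding)
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
    return x
```

    This mirrors the compressed-sensing MRI analogy mentioned in the abstract: random Fourier samples are incoherent with a sparse image, so a few hundred coefficients suffice to locate the scatterers.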

    Learning the weight matrix for sparsity averaging in compressive imaging

    We propose to map the fast iterative shrinkage-thresholding algorithm (FISTA) onto a deep neural network (DNN), with a sparsity prior in a concatenation of wavelet bases, in the context of compressive imaging. We exploit the DNN architecture to learn the optimal weight matrix of the corresponding reweighted l1-minimization problem. We then use the learned weight matrix in the image reconstruction process, which is recast as a simple l1-minimization problem. The approach, denoted learned extended FISTA, shows promising results in terms of image quality compared to state-of-the-art algorithms, and significantly reduces the reconstruction time required to solve the reweighted l1-minimization problem.
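    Once the weights have been learned, the reconstruction described in the abstract reduces to a weighted l1-minimization. The sketch below shows FISTA with a weighted soft-thresholding proximal step; here the weights are simply passed in as a vector, whereas in the paper they come from the trained network, and the sensing setup is a placeholder.

```python
import numpy as np

def weighted_soft(x, w, t):
    """Proximal operator of t * sum_i w_i |x_i|: coefficients with larger
    weights are shrunk more."""
    return np.sign(x) * np.maximum(np.abs(x) - t * w, 0.0)

def weighted_fista(A, y, w, lam, n_iter=300):
    """FISTA for min_x 0.5*||A x - y||^2 + lam * sum_i w_i |x_i|,
    with a given per-coefficient weight vector w."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        x_new = weighted_soft(z - (A.T @ (A @ z - y)) / L, w, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x
```

    With uniform weights this is plain FISTA; a learned, non-uniform weight vector implements the reweighted l1 problem in a single pass.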