Compressed sensing and finite rate of innovation for efficient data acquisition of quantitative acoustic microscopy images
Quantitative acoustic microscopy (QAM) is a well-established modality that produces 2D parameter maps representative of the mechanical properties of soft tissues at microscopic scales. In state-of-the-art QAM studies, the sample is raster-scanned (spatial step size of 2 µm) using a 250 MHz transducer, yielding a 3D RF data cube with two spatial dimensions and one temporal dimension; the RF signal at each spatial location is processed to estimate acoustic parameters, e.g., speed of sound or acoustic impedance. The scanning time is directly proportional to the sample size and can range from a few minutes to tens of minutes. Because the thin-sectioned samples are sensitive to their environment and experimental conditions must be kept constant, reducing the scanning time is an important practical challenge in QAM. To address this challenge, this thesis proposes several solutions inspired by compressed sensing (CS) and the theory of signals with finite rate of innovation (FRI). The success of CS rests on the sparsity of the data, incoherent measurements, and numerical optimization; the idea behind FRI, on the other hand, is a signal model fully characterized by a small number of parameters. Given the physics underlying QAM data acquisition, QAM data are a natural candidate for both frameworks. However, the mechanical structure of the QAM system does not support canonical CS measurement schemes, and the composition of the RF signal model does not fit existing FRI schemes, so neither framework can be applied off the shelf.
In this thesis, to overcome these limitations, a novel CS sensing framework is presented in the spatial domain: a recently proposed approximate message passing (AMP) algorithm is adapted to account for the statistics of the wavelet coefficients of the parametric images, using samples sparsely collected along the proposed scanning patterns. In the time domain, to achieve accurate recovery from a small set of samples of the QAM RF signals, we employ a sum-of-sincs (SoS) sampling kernel and an autoregressive (AR) model estimator. The spiral scanning pattern, introduced as a sensing technique compatible with the QAM hardware, significantly reduces the number of spatial samples needed to reconstruct speed-of-sound images of a human lymph node; the mechanics of the spiral motion also substantially shorten the scanning time. Together with this spatial-domain gain, the SoS kernel and the AR estimator, responsible for innovation-rate sampling and parameter estimation respectively, dramatically reduce the required number of samples per RF signal compared to a conventional approach. Finally, we show that the CS- and FRI-based acquisition frameworks can be combined into a single spatio-temporal solution that maximizes the benefits stated above.
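The CS recovery principle invoked above (sparsity, incoherent measurements, and numerical optimization) can be illustrated with a minimal sketch. The Gaussian sensing matrix and the ISTA solver below are generic stand-ins, not the spiral-scanning and AMP machinery of the thesis; all sizes and parameters are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 64, 5                     # signal size, measurements, sparsity

# Synthetic sparse signal standing in for one line of a QAM parameter map.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Incoherent measurements via a random Gaussian matrix (the thesis instead
# uses hardware-compatible spiral scanning; this is a toy stand-in).
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA: proximal gradient descent on  min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / spectral-norm^2 of A
x = np.zeros(n)
for _ in range(3000):
    x = x - step * (A.T @ (A @ x - y))                        # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With half as many measurements as unknowns, the l1 solver recovers the 5-sparse signal to within a small relative error, which is the behavior CS-based acquisition exploits.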
A Deterministic and Generalized Framework for Unsupervised Learning with Restricted Boltzmann Machines
Restricted Boltzmann machines (RBMs) are energy-based neural networks which
are commonly used as building blocks for deep neural architectures. In this
work, we derive a deterministic framework for the
training, evaluation, and use of RBMs based upon the Thouless-Anderson-Palmer
(TAP) mean-field approximation of widely-connected systems with weak
interactions coming from spin-glass theory. While the TAP approach has been
extensively studied for fully-visible binary spin systems, our construction is
generalized to latent-variable models, as well as to arbitrarily distributed
real-valued spin systems with bounded support. In our numerical experiments, we
demonstrate the effective deterministic training of our proposed models and are
able to show interesting features of unsupervised learning which could not be
directly observed with sampling. Additionally, we demonstrate how to utilize
our TAP-based framework for leveraging trained RBMs as joint priors in
denoising problems.
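The TAP self-consistency iteration at the core of such a deterministic framework can be sketched for binary (0/1) units as follows. The RBM below is random and untrained, the 0.1 weight scale and the layer sizes are arbitrary choices, and this is only the magnetization fixed-point step, not the full training procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
nv, nh = 20, 10

# A small random (untrained) binary RBM; the 0.1 scale keeps couplings weak.
W = 0.1 * rng.standard_normal((nv, nh))
b = 0.1 * rng.standard_normal(nv)        # visible biases
c = 0.1 * rng.standard_normal(nh)        # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Second-order TAP self-consistency equations for the magnetizations of
# binary (0/1) units: naive mean field plus the Onsager correction term,
# iterated alternately over the visible and hidden layers.
mv = np.full(nv, 0.5)
mh = np.full(nh, 0.5)
for _ in range(200):
    mv = sigmoid(b + W @ mh - (mv - 0.5) * (W**2 @ (mh - mh**2)))
    mh = sigmoid(c + W.T @ mv - (mh - 0.5) * ((W**2).T @ (mv - mv**2)))

# At a fixed point, one more sweep leaves the magnetizations unchanged.
res = np.max(np.abs(sigmoid(b + W @ mh - (mv - 0.5) * (W**2 @ (mh - mh**2))) - mv))
```

With weak couplings the iteration contracts quickly, so the magnetizations converge to a TAP fixed point that can be evaluated deterministically, with no Gibbs sampling.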
Learning-based Optimization for Signal and Image Processing
Incorporating machine learning techniques into optimization problems and solvers has attracted increasing attention. Given a particular type of optimization problem that must be solved repeatedly, machine learning can exploit features common to that problem class and produce algorithms with excellent performance. This thesis develops algorithms and convergence analysis for learning-based optimization in three areas: learning dictionaries, learning optimization solvers, and learning regularizers.
Learning dictionaries for sparse coding is important for signal processing. Convolutional sparse coding is a form of sparse coding with a structured, translation-invariant dictionary. Most convolutional dictionary learning algorithms to date operate in batch mode, requiring simultaneous access to all training images during learning, which results in very high memory usage and severely limits the usable training set size. I proposed two online convolutional dictionary learning algorithms that offer far better scaling of memory and computational cost than batch methods, and provided a rigorous theoretical analysis of them.
Learning fast solvers for optimization is a rising research topic. In recent years, unfolding iterative algorithms as neural networks has been an empirical success for sparse recovery problems, but its theoretical understanding remains immature, which prevents us from fully exploiting the power of neural networks. I studied unfolded ISTA (Iterative Shrinkage-Thresholding Algorithm) for sparse signal recovery and established its convergence. Based on the parameter properties required for convergence, the model can be significantly simplified and, consequently, has much lower training cost and better recovery performance.
Learning regularizers or priors improves the performance of optimization solvers, especially for signal and image processing tasks.
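The unfolding idea mentioned above can be sketched as follows: the T iterations of ISTA become a T-layer network whose per-layer step sizes and thresholds are promoted to trainable parameters. In this hedged sketch they are merely initialized to the classical ISTA values and no training loop is shown; the problem sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, T = 32, 64, 8              # measurements, signal size, unrolled layers

A = rng.standard_normal((m, n)) / np.sqrt(m)
L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the data-fit gradient

# Unfolding: per-layer step sizes gamma[t] and thresholds theta[t] become
# the network's parameters; a learned solver would fit them to data, while
# here they are simply set to the classical ISTA values.
gamma = np.full(T, 1.0 / L)
theta = np.full(T, 0.01 / L)

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def unfolded_ista(y):
    """Forward pass: one 'layer' per classical ISTA iteration."""
    x = np.zeros(n)
    for t in range(T):
        x = soft(x - gamma[t] * (A.T @ (A @ x - y)), theta[t])
    return x

x_true = np.zeros(n)
x_true[[3, 17, 40]] = [1.0, -2.0, 0.5]   # a 3-sparse test signal
x_hat = unfolded_ista(A @ x_true)
```

With untrained (classical) parameters, 8 layers only partially converge; the point of learned solvers such as LISTA is that trained per-layer parameters reach comparable accuracy in far fewer layers.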
Plug-and-play (PnP) is a non-convex framework that integrates modern priors, such as BM3D or deep-learning-based denoisers, into ADMM or other proximal algorithms. Although PnP has recently been studied extensively with great empirical success, theoretical analysis addressing even the most basic question of convergence has been insufficient. In this thesis, the convergence of PnP-FBS and PnP-ADMM was established, without using diminishing stepsizes, under a certain Lipschitz condition on the denoisers. Furthermore, real spectral normalization was proposed for training deep-learning-based denoisers to satisfy this Lipschitz condition.
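A minimal PnP-ADMM loop looks like the sketch below. For the denoiser we substitute plain soft thresholding, the proximal map of the l1 norm, so this particular instance is exactly ADMM for the lasso; a PnP method would plug BM3D or a trained CNN into the same slot. All sizes and parameters are arbitrary choices for the illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 48, 96
x_true = np.zeros(n)
x_true[rng.choice(n, 6, replace=False)] = rng.standard_normal(6)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy measurements

# Stand-in denoiser: soft thresholding (prox of the l1 norm). PnP replaces
# this function with BM3D or a trained deep denoiser.
def denoise(z, t=0.05):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rho = 1.0
data_prox = np.linalg.inv(A.T @ A + rho * np.eye(n))  # for the x-update
Aty = A.T @ y
x, v, u = np.zeros(n), np.zeros(n), np.zeros(n)
for _ in range(200):
    x = data_prox @ (Aty + rho * (v - u))   # proximal step on the data term
    v = denoise(x + u)                      # denoiser replaces the prior's prox
    u = u + x - v                           # dual (running residual) update

rel_err = np.linalg.norm(v - x_true) / np.linalg.norm(x_true)
```

The convergence results described in the abstract concern exactly this alternation: when the denoiser satisfies a suitable Lipschitz condition, the x/v/u sequence converges even though `denoise` is no longer the proximal map of any explicit regularizer.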
Recovery Analysis for Plug-and-Play Priors using the Restricted Eigenvalue Condition
The plug-and-play priors (PnP) and regularization by denoising (RED) methods
have become widely used for solving inverse problems by leveraging pre-trained
deep denoisers as image priors. While the empirical imaging performance and the
theoretical convergence properties of these algorithms have been widely
investigated, their recovery properties have not previously been theoretically
analyzed. We address this gap by showing how to establish theoretical recovery
guarantees for PnP/RED by assuming that the solution of these methods lies near
the fixed-points of a deep neural network. We also present numerical results
comparing the recovery performance of PnP/RED in compressive sensing against
that of recent compressive sensing algorithms based on generative models. Our
numerical results suggest that PnP with a pre-trained artifact removal network
provides significantly better results compared to the existing state-of-the-art
methods.
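The fixed-point assumption described above can be made concrete with a toy RED-style iteration. Here the "denoiser" is a linear projection onto a low-frequency subspace, so signals in that subspace are exact fixed points; this is only a caricature of a deep denoiser, and all sizes, step sizes, and the cosine basis are arbitrary choices for the illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 64, 24, 6

# Toy "denoiser": orthogonal projection onto the span of the first k cosine
# modes, so any signal in that subspace is an exact fixed point of D.
B = np.cos(np.outer(np.arange(n), np.arange(k)) * np.pi / n)
B, _ = np.linalg.qr(B)                   # orthonormal basis, shape (n, k)

def denoise(z):
    return B @ (B.T @ z)

x_true = B @ rng.standard_normal(k)      # in the subspace: D(x_true) == x_true
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# RED-style gradient iteration: data-fit gradient plus tau * (x - D(x)),
# which vanishes exactly at fixed points of the denoiser.
gamma, tau = 0.2, 1.0
x = np.zeros(n)
for _ in range(5000):
    x = x - gamma * (A.T @ (A @ x - y) + tau * (x - denoise(x)))

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Because the true signal is a fixed point of the denoiser and the measurement operator is injective on the subspace, the iteration recovers it from far fewer measurements than unknowns, which is the regime the recovery analysis addresses.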
Deep learning methods for solving linear inverse problems: Research directions and paradigms
The linear inverse problem is fundamental to the development of various scientific areas, and innumerable attempts have been made to solve its different variants across applications. Nowadays, the rapid development of deep learning provides a fresh perspective on the linear inverse problem: various well-designed network architectures have achieved state-of-the-art performance in many applications. In this paper, we present a comprehensive survey of recent progress in deep learning for solving various linear inverse problems. We review how deep learning methods are used to solve different linear inverse problems, and explore structured neural network architectures that incorporate knowledge used in traditional methods. Furthermore, we identify open challenges and potential future directions along this research line.