
    Compressed Sensing for Elastography in Portable Ultrasound

    Bonghun Shin, Soo Jeon, Jeongwon Ryu and Hyock Ju Kwon, “Compressed Sensing for Elastography in Portable Ultrasound,” Ultrasonic Imaging, 39(6), pp. 393-413, Copyright © The Author(s) 2017. Reprinted by permission of SAGE Publications. https://doi.org/10.1177/0161734617716938

    Portable wireless ultrasound has many advantages, such as high portability, easy connectivity, strong individuality, and on-site diagnostic capability in real time. Some modern portable ultrasound devices offer high image quality and multiple ultrasound modes comparable to console-style ultrasound; however, none of them provides an ultrasound elastography function, which enables the diagnosis of malignant lesions based on their elastic properties. This is mainly due to limitations in hardware performance and in wireless data transfer speed when processing the large amount of data required for elastography. Reducing the size of the transferred data is therefore one feasible way to overcome these limitations. Recently, compressive sensing (CS) theory has been studied rigorously as a means to go below the conventional Nyquist sampling rate and thus significantly decrease the amount of measured data without sacrificing signal quality. In this research, we implemented various CS reconstruction frameworks and comparatively evaluated their reconstruction performance for realizing an ultrasound elastography function on portable ultrasound. Combinations of the three most common model bases (FT, DCT, and WA) and two reconstruction algorithms (l1 minimization and BSBL) were considered as CS frameworks. Two kinds of numerical phantoms, an echoic phantom and an elastography phantom, were developed to evaluate CS performance on B-mode images and elastograms, respectively. To assess reconstruction quality, the mean absolute error (MAE), signal-to-noise ratio (SNRe), and contrast-to-noise ratio (CNRe) were measured on the B-mode images and elastograms obtained from CS reconstructions. The results suggest that CS reconstruction adopting the BSBL algorithm with the DCT model basis yields the best results for all measures tested, and that the maximum data reduction rate for producing readily discernible elastograms is around 60%.

    Natural Sciences and Engineering Research Council || RGPIN-2015-05273, RGPIN-2015-04118, RGPAS-354703-201
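    To make the tested CS frameworks concrete, the following is a minimal sketch of one combination from the study: recovering a randomly subsampled RF line with an l1-minimization solver (plain ISTA here) and a DCT model basis. The signal sizes, parameter values, and function names are illustrative assumptions rather than the authors' implementation; the BSBL algorithm and the FT/WA bases are omitted.

```python
import numpy as np
from scipy.fft import dct, idct

def ista_l1_dct(y, mask, lam=0.05, n_iter=200):
    """Recover a subsampled RF line by l1 minimization in a DCT model basis.

    Measurement model: y = M * Psi * c, where M is a 0/1 subsampling mask,
    Psi is the inverse orthonormal DCT (synthesis operator), and c are the
    sparse DCT coefficients.  Solved with plain ISTA (proximal gradient).
    """
    c = np.zeros(mask.size)
    step = 1.0                      # valid since ||M * Psi|| <= 1 for orthonormal Psi
    for _ in range(n_iter):
        residual = mask * idct(c, norm="ortho") - y
        grad = dct(mask * residual, norm="ortho")                  # adjoint of M * Psi
        c = c - step * grad
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)   # soft threshold
    return idct(c, norm="ortho")    # reconstructed RF line

# toy usage: keep roughly 40% of the samples of a synthetic "RF line"
rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
rf = np.sin(2 * np.pi * 0.05 * t) * np.exp(-((t - 512.0) ** 2) / 2e4)
mask = (rng.random(n) < 0.4).astype(float)
y = mask * rf
rf_hat = ista_l1_dct(y, mask)
```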

    Reconstruction par acquisition compressée en imagerie ultrasonore médicale 3D et Doppler

    This thesis is dedicated to the application of the novel compressed sensing theory to the acquisition and reconstruction of 3D US images and Doppler signals. In 3D US imaging, one of the major difficulties is the number of RF lines that has to be acquired to cover the complete volume. The acquisition of each line takes an irreducible amount of time because of the finite velocity of the ultrasound wave. One possible way to increase the frame rate is therefore to reduce the acquisition time by skipping some RF lines; reconstructing the missing information in post-processing is then a typical application of compressed sensing. Another excellent candidate for this theory is duplex Doppler imaging, which requires alternating between two emission modes, one for B-mode imaging and the other for flow estimation. For 3D imaging, we propose a compressed sensing framework using learned overcomplete dictionaries. Such dictionaries allow much sparser representations of the signals, since they are optimized for a particular class of images such as US images. We also focus on the measurement setup and propose a line-wise sampling scheme that acquires a random subset of entire RF lines, which reduces the amount of data and is feasible with a relatively simple modification of the 3D US equipment. The algorithm was validated on simulated and experimental 3D data. For the Doppler application, we propose a CS-based framework for randomly interleaving Doppler and B-mode imaging emissions. The proposed method reconstructs the Doppler signal using a block sparse Bayesian learning algorithm that exploits the correlation structure within the signal and is able to recover signals that are only partially sparse, as long as they are correlated. This method is validated on simulated and experimental Doppler data.
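    As a concrete illustration of the line-wise sampling setup described above, the sketch below draws a random mask over the lateral/elevational grid and keeps only entire RF lines of a 3D volume; the reconstruction side (learned overcomplete dictionaries, BSBL) is omitted, and the shapes and names are illustrative assumptions rather than the thesis' actual acquisition code.

```python
import numpy as np

def linewise_sample(volume, keep_ratio=0.5, seed=None):
    """Randomly keep a subset of entire RF lines from a 3D US volume.

    volume: array of shape (n_depth, n_lateral, n_elevational); each RF line
    runs along the depth axis.  Skipping whole lines mimics a shorter
    acquisition, since every line costs one pulse-echo round trip.
    """
    rng = np.random.default_rng(seed)
    _, n_lat, n_elev = volume.shape
    mask = rng.random((n_lat, n_elev)) < keep_ratio     # True = line acquired
    measured = volume * mask[None, :, :]                # un-acquired lines set to zero
    return measured, mask

# toy usage on a synthetic volume: keep about half of the RF lines
vol = np.random.default_rng(1).standard_normal((256, 64, 32))
measured, mask = linewise_sample(vol, keep_ratio=0.5, seed=0)
print(mask.mean())    # fraction of lines actually acquired
```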

    Development of a Feasible Elastography Framework for Portable Ultrasound

    Portable wireless ultrasound is emerging as a new class of ultrasound device thanks to advantages such as small size, light weight, and affordable price. Its high portability allows practitioners to make diagnostic and therapeutic decisions in real time without having to take patients out of their environment. Recent portable ultrasound devices are equipped with sophisticated processors and image processing algorithms that provide high image quality, and some are able to deliver multiple ultrasound modes including color Doppler, echocardiography, and endovaginal examination. Nevertheless, they still lack elastography functions due to limitations in computational performance and in data transfer speed over wireless communication. In order to implement an elastography function in wireless portable ultrasound devices, this thesis proposes a new strain estimation method that significantly reduces computation time and a compressive sensing framework that minimizes the data transfer size.

    Firstly, a robust phase-based strain estimator (RPSE) is developed to cope with the limited hardware performance of portable ultrasound. The RPSE is not only computationally efficient but also robust to variations in the speed of sound, sampling frequency, and pulse repetition. The RPSE has been compared with other representative strain estimators, including the time-delay, displacement-gradient, and conventional phase-based strain estimators (TSE, DSE, and PSE, respectively). It is shown that the RPSE is superior in several elastographic image quality measures, including the signal-to-noise ratio (SNRe) and contrast-to-noise ratio (CNRe), as well as in computational efficiency. The study indicates that the RPSE method can deliver an acceptable level of elastographic quality and fast computation for ultrasound echo data sets from both numerical and experimental phantoms. According to the results from the numerical phantom experiment, the RPSE achieves the highest SNRe and CNRe values (around 5.22 and 47.62 dB) among all strain estimators tested, and almost 100 times higher computational efficiency than the TSE and DSE (around 0.06 vs. 5.76 seconds per frame for the RPSE and TSE, respectively).

    Secondly, as a means to reduce the large amount of ultrasound measurement data that has to be transmitted over wireless communication, a compressive sensing (CS) framework is applied to elastography. The performance of CS depends strongly on the choice of model basis used to obtain a sparse representation, as well as on the reconstruction algorithm used to recover the original data from the compressed signal. It is therefore essential to identify the combination of model basis and reconstruction algorithm that achieves the best CS performance in terms of image quality and maximum data reduction. In this thesis, three model bases, the discrete Fourier transform (FT), the discrete cosine transform (DCT), and wave atoms (WA), are tested along with two reconstruction algorithms, L1 minimization (L1) and block sparse Bayesian learning (BSBL). Using B-mode and elastogram images of simulated numerical phantoms, the quality of CS reconstruction is assessed in terms of three image quality measures, the mean absolute error (MAE), SNRe, and CNRe, at varying data reduction (subsampling) rates. The results show that BSBL-based CS frameworks generally deliver much higher image quality and subsampling rates than L1-based ones; in particular, the frameworks adopting DCT and BSBL offer the best CS performance. The results also suggest that the maximum subsampling rates that do not cause image degradation are 40% for the L1-based frameworks and 60% for the BSBL-based frameworks.

    The contributions of this thesis help realize elastography functionality in portable ultrasound, thereby significantly expanding its utility; for example, malignant lesions can be diagnosed with portable ultrasound even when a patient cannot be moved to a hospital immediately. Furthermore, the RPSE method and the CS framework can each be employed in conventional ultrasound devices as well as in other telemedicine applications to enhance computational efficiency and image quality.
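    To make the phase-based strain estimation idea concrete, here is a minimal sketch of a generic phase-based axial strain estimator operating on pre- and post-compression RF frames: the zero-lag phase of a windowed complex correlation is converted to displacement and differentiated to obtain strain. It is not the RPSE itself (whose robustness modifications are not detailed in this abstract); the window sizes, parameter names, and constants are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def phase_strain(rf_pre, rf_post, fc, fs, c=1540.0, win=64, hop=32):
    """Axial strain from pre/post-compression RF frames via phase differences.

    rf_pre, rf_post: arrays of shape (n_samples, n_lines).  The zero-lag phase
    of the windowed complex correlation between the two analytic signals is
    converted to axial displacement, and strain is its axial gradient.
    """
    a_pre = hilbert(rf_pre, axis=0)
    a_post = hilbert(rf_post, axis=0)
    corr = a_post * np.conj(a_pre)                   # element-wise complex correlation
    n_samples, n_lines = rf_pre.shape
    starts = np.arange(0, n_samples - win, hop)
    disp = np.empty((starts.size, n_lines))
    for i, s in enumerate(starts):
        phi = np.angle(corr[s:s + win].sum(axis=0))  # mean phase shift in the window
        disp[i] = phi * c / (4.0 * np.pi * fc)       # phase -> axial displacement (m)
    dz = hop * c / (2.0 * fs)                        # axial spacing between windows (m)
    return np.gradient(disp, dz, axis=0)             # axial derivative = strain

# toy smoke test with white-noise frames (real data would be beamformed RF)
rng = np.random.default_rng(0)
pre = rng.standard_normal((2048, 128))
strain = phase_strain(pre, np.roll(pre, 1, axis=0), fc=5e6, fs=40e6)
```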

    Reconstruction of enhanced ultrasound images from compressed measurements

    The potential of compressive sampling in ultrasound imaging has recently been evaluated extensively by several research teams. Depending on the application setup, it has been shown that the RF data may be reconstructed from a small number of measurements and/or using a reduced number of ultrasound pulse emissions. Under the compressive sampling model, the resolution of ultrasound images reconstructed from compressed measurements depends mainly on three aspects: the acquisition setup, i.e., the incoherence of the sampling matrix; the image regularization, i.e., the sparsity prior; and the optimization technique. We focus mainly on the last two aspects in this thesis. Nevertheless, the spatial resolution, contrast, and signal-to-noise ratio of RF images are also affected by the limited bandwidth of the imaging transducer and by the physics of ultrasound wave propagation. To overcome these limitations, several deconvolution-based image processing techniques have been proposed to enhance ultrasound images.

    In this thesis, we first propose a novel framework for ultrasound imaging, named compressive deconvolution, that combines compressive sampling and deconvolution. Exploiting a unified formulation of the direct acquisition model, which combines random projections with a 2D convolution by a spatially invariant point spread function, the benefit of this framework is joint data volume reduction and image quality improvement. An optimization method based on the Alternating Direction Method of Multipliers is then proposed to invert the linear model, with two regularization terms expressing the sparsity of the RF images in a given basis and a generalized Gaussian statistical assumption on the tissue reflectivity functions. It is subsequently improved by a method based on the Simultaneous Direction Method of Multipliers. Both algorithms are evaluated on simulated and in vivo data. Building on these regularization techniques, a novel approach based on alternating minimization is finally developed to jointly estimate the tissue reflectivity function and the point spread function; a preliminary investigation is carried out on simulated data.
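    The sketch below illustrates the kind of ADMM inversion described above for a compressive deconvolution model y = Φ(Hx) + n, with a plain l1 prior on the image standing in for the thesis' sparsity and generalized Gaussian priors. The operator choices (circular convolution, pixel subsampling for Φ), parameter values, and function names are illustrative assumptions, not the thesis' implementation.

```python
import numpy as np
from scipy.fft import fft2, ifft2
from scipy.sparse.linalg import LinearOperator, cg

def compressive_deconv_admm(y, psf, shape, sample_idx, lam=0.01, rho=1.0, n_iter=50):
    """Minimal ADMM sketch for compressive deconvolution.

    Assumed forward model: y = Phi(H x) + n, where H is circular 2D convolution
    with `psf` (given at full image size, origin at index (0, 0), wrap-around
    layout) and Phi keeps the blurred pixels listed in the flat index array
    `sample_idx`.  The prior is a plain l1 penalty on x.
    """
    n = shape[0] * shape[1]
    H = fft2(psf)                                        # transfer function of the blur

    def A(x_flat):                                       # x -> compressed measurements
        blurred = np.real(ifft2(fft2(x_flat.reshape(shape)) * H))
        return blurred.ravel()[sample_idx]

    def AT(y_flat):                                      # adjoint: measurements -> image
        z = np.zeros(n)
        z[sample_idx] = y_flat
        return np.real(ifft2(fft2(z.reshape(shape)) * np.conj(H))).ravel()

    normal_op = LinearOperator((n, n), matvec=lambda v: AT(A(v)) + rho * v, dtype=float)
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    Aty = AT(y)
    for _ in range(n_iter):
        # x-update: solve (A^T A + rho I) x = A^T y + rho (z - u) with a few CG steps
        x, _ = cg(normal_op, Aty + rho * (z - u), x0=x, maxiter=20)
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)    # z-update: soft threshold
        u = u + x - z                                              # dual update
    return z.reshape(shape)

# toy usage: 3x3 box blur, keep half of the blurred pixels of a 64x64 piecewise-constant image
rng = np.random.default_rng(0)
shape = (64, 64)
psf = np.zeros(shape)
psf[np.ix_([-1, 0, 1], [-1, 0, 1])] = 1.0 / 9.0          # centered box blur, wrap-around layout
x_true = np.zeros(shape); x_true[20:40, 20:40] = 1.0
blurred = np.real(ifft2(fft2(x_true) * fft2(psf)))
sample_idx = rng.choice(shape[0] * shape[1], size=2048, replace=False)
y = blurred.ravel()[sample_idx]
x_hat = compressive_deconv_admm(y, psf, shape, sample_idx)
```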

    Semi-blind ultrasound image deconvolution from compressed measurements

    The recently proposed framework of ultrasound compressive deconvolution offers the possibility of decreasing the amount of acquired data while improving the image spatial resolution. By combining compressive sampling and image deconvolution, the direct model of compressive deconvolution combines random projections with a 2D convolution by a spatially invariant point spread function. Assuming the point spread function is known, existing algorithms have shown the ability of this framework to reconstruct enhanced ultrasound images from compressed measurements by inverting the forward linear model. In this paper, we propose an extension of the previous approach to compressive blind deconvolution, whose aim is to jointly estimate the ultrasound image and the system point spread function. The performance of the method is evaluated on both simulated and in vivo ultrasound data.
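    As a rough illustration of the joint estimation idea only, the following sketch alternates two closed-form Wiener-like updates for the image and the PSF in the frequency domain. It drops the compressive projection and the paper's priors, uses plain l2 regularization on both unknowns, and is known to be sensitive to initialization, which is precisely what the paper's more careful formulation addresses; names and regularization weights are illustrative assumptions.

```python
import numpy as np
from scipy.fft import fft2, ifft2

def blind_deconv_alternating(y, lam=1e-2, mu=1e-2, n_iter=30):
    """Naive alternating estimation of image x and PSF h from y ~ h (*) x.

    Each sub-problem of min ||h (*) x - y||^2 + lam*||x||^2 + mu*||h||^2 is
    solved in closed form in the frequency domain while the other variable is
    held fixed (a Wiener-like filter).  Crude, but shows the alternating idea.
    """
    Y = fft2(y)
    H = np.ones_like(Y)                              # start from a delta PSF
    X = Y.copy()
    for _ in range(n_iter):
        X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)  # image update, PSF fixed
        H = np.conj(X) * Y / (np.abs(X) ** 2 + mu)   # PSF update, image fixed
    return np.real(ifft2(X)), np.real(ifft2(H))

# toy usage: a circularly blurred random image
rng = np.random.default_rng(0)
x_true = rng.standard_normal((64, 64))
psf = np.zeros((64, 64)); psf[np.ix_([-1, 0, 1], [-1, 0, 1])] = 1.0 / 9.0
y = np.real(ifft2(fft2(x_true) * fft2(psf)))
x_hat, h_hat = blind_deconv_alternating(y)
```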

    Structured Compressed Sensing: From Theory to Applications

    Compressed sensing (CS) is an emerging field that has attracted considerable research interest over the past few years. Previous review articles in CS limit their scope to standard discrete-to-discrete measurement architectures using matrices of randomized nature and signal models based on standard sparsity. In recent years, CS has worked its way into several new application areas. This, in turn, necessitates a fresh look at many of the basics of CS. The random matrix measurement operator must be replaced by more structured sensing architectures that correspond to the characteristics of feasible acquisition hardware. The standard sparsity prior has to be extended to include a much richer class of signals and to encode broader data models, including continuous-time signals. In our overview, the theme is exploiting signal and measurement structure in compressive sensing. The prime focus is bridging theory and practice; that is, to pinpoint the potential of structured CS strategies to emerge from the math to the hardware. Our summary highlights new directions as well as relations to more traditional CS, with the hope of serving both as a review for practitioners wanting to join this emerging field, and as a reference for researchers, attempting to put some of the existing ideas in the perspective of practical applications. Comment: to appear as an overview paper in the IEEE Transactions on Signal Processing.

    Ultrasound compressive deconvolution with lp-norm prior

    It has recently been shown that compressive sampling is a promising approach for fast ultrasound imaging. This paper addresses the problem of compressive deconvolution for ultrasound imaging systems under the assumption of a generalized Gaussian distributed tissue reflectivity function. The benefit of compressive deconvolution is the joint reduction of the acquired data volume and improvement of the image resolution. The main contribution of this work is to apply the compressive deconvolution framework to ultrasound imaging and to propose a novel ℓp-norm (1 ≤ p ≤ 2) algorithm based on the Alternating Direction Method of Multipliers. The performance of the proposed algorithm is tested on simulated data and compared with that of a more intuitive sequential compressive deconvolution method.
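    Relative to an l1-regularized ADMM, the element that changes with an ℓp prior (1 < p ≤ 2) is the shrinkage (proximal) step. Below is a small sketch of that element-wise ℓp proximal operator, solved here by bisection; it is a generic construction under the stated assumption, not the authors' algorithm, and the parameter names are illustrative.

```python
import numpy as np

def prox_lp(w, tau, p, n_bisect=60):
    """Element-wise proximal operator of tau * |v|**p for 1 < p <= 2.

    Solves v - w + tau * p * sign(v) * |v|**(p - 1) = 0 by bisection on
    [0, |w|]; this is the shrinkage step that replaces soft-thresholding
    inside an lp-regularized ADMM (use the ordinary soft threshold for p = 1).
    """
    a = np.abs(np.asarray(w, dtype=float))
    lo = np.zeros_like(a)
    hi = a.copy()
    for _ in range(n_bisect):
        mid = 0.5 * (lo + hi)
        f = mid + tau * p * mid ** (p - 1) - a   # monotonically increasing in mid
        lo = np.where(f > 0, lo, mid)
        hi = np.where(f > 0, mid, hi)
    return np.sign(w) * 0.5 * (lo + hi)

# sanity check against the closed form at p = 2: prox = w / (1 + 2 * tau)
w = np.linspace(-3.0, 3.0, 7)
print(prox_lp(w, tau=0.5, p=2.0))
print(w / (1.0 + 2.0 * 0.5))
```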

    Efficient algorithms and data structures for compressive sensing

    Along with the ever increasing number of sensors, which also generate rapidly growing amounts of data, the traditional sampling paradigm adhering to the Nyquist criterion faces an equally increasing number of obstacles. The rather recent theory of Compressive Sensing (CS) promises to alleviate some of these drawbacks by generalizing the sampling and reconstruction schemes, such that the acquired samples can contain more complex information about the signal than Nyquist samples do. The measurement process becomes more involved, the reconstruction algorithms necessarily need to be nonlinear, and the hardware design process needs to be revisited to account for the new acquisition scheme. Hence, one can identify a trade-off between the information contained in individual samples of a signal and the effort spent during development and operation of the sensing system. This thesis addresses the steps necessary to shift this trade-off further in favor of CS. We do so by providing new results that make CS easier to deploy in practice while maintaining the performance indicated by theoretical results.

    The sparsity order of a signal plays a central role in any CS system, so we present a method to estimate this crucial quantity prior to recovery from a single snapshot. As we show, the proposed sparsity order estimation method reduces the reconstruction error compared to an unguided reconstruction. During the development of the theory we observe that a matrix-free view of the involved linear mappings offers many opportunities to make the modeling and reconstruction stages much more efficient. Hence, we present an open-source software architecture for constructing such matrix-free representations and showcase its ease of use and performance when used for sparse recovery, both for detecting defects from ultrasound data and for estimating scatterers in a radio channel from ultra-wideband impulse responses. For the former application, we present a complete reconstruction pipeline for ultrasound data that is compressed by sub-sampling in the frequency domain. We describe the algorithms for the forward model and the reconstruction stage, and we give asymptotic bounds for the number of measurements and the expected reconstruction error. We show that the proposed system allows significant compression levels without substantially deteriorating the imaging quality.

    For the second application, we develop a sampling scheme for acquiring the channel impulse response (IR) based on a Random Demodulator, which captures enough information in the recorded samples to reliably estimate the IR by exploiting sparsity. Compared to the state of the art, this improves the robustness to the effects of time-variant radar channels while also outperforming state-of-the-art methods based on Nyquist sampling in terms of reconstruction error. In order to circumvent the inherent model mismatch of early grid-based compressive sensing theory, we make use of the atomic norm minimization framework and show how it can be used to estimate a signal covariance with R-dimensional parameters from multiple compressive snapshots, without restriction to a finite and discrete grid. To this end, we derive a variant of the ADMM that can estimate this covariance in a very general setting, and we show how to use this approach for direction finding with realistic antenna geometries. In this context we also present a method based on stochastic gradient descent to find compression schemes that are well suited for parameter estimation, in the sense that the resulting sub-sampling has a uniform effect on estimation accuracy over the whole parameter space. Finally, we show numerically that the combination of these two approaches yields a well-performing grid-free CS pipeline.
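    To illustrate the matrix-free operator idea behind the frequency-domain sub-sampling pipeline mentioned above, here is a small sketch using scipy's LinearOperator as a stand-in for the thesis' own (unnamed here) software framework; the sizes and identifiers are illustrative assumptions.

```python
import numpy as np
from scipy.fft import fft, ifft
from scipy.sparse.linalg import LinearOperator

def subsampled_fft_operator(n, freq_idx):
    """Matrix-free measurement operator that keeps a subset of Fourier coefficients.

    Illustrates the matrix-free view of a CS acquisition (sub-sampling in the
    frequency domain): the m-by-n matrix is never formed, and matvec/rmatvec
    cost O(n log n) instead of O(m n).
    """
    m = len(freq_idx)

    def matvec(x):                            # A @ x : signal -> selected frequencies
        return fft(x, norm="ortho")[freq_idx]

    def rmatvec(y):                           # A.H @ y : zero-fill, then inverse FFT
        full = np.zeros(n, dtype=complex)
        full[freq_idx] = y
        return ifft(full, norm="ortho")

    return LinearOperator((m, n), matvec=matvec, rmatvec=rmatvec, dtype=complex)

# toy usage: keep 25% of the spectrum of a length-1024 signal
rng = np.random.default_rng(0)
n = 1024
A = subsampled_fft_operator(n, rng.choice(n, size=n // 4, replace=False))
x = rng.standard_normal(n)
y = A @ x          # compressed measurements
x_back = A.H @ y   # adjoint applied without ever building a matrix
```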

    Fast Single Image Super-Resolution Using a New Analytical Solution for l2–l2 Problems

    This paper addresses the problem of single image super-resolution (SR), which consists of recovering a high-resolution image from its blurred, decimated, and noisy version. The existing algorithms for single image SR use different strategies to handle the decimation and blurring operators. In addition to the traditional first-order gradient methods, recent techniques investigate splitting-based methods dividing the SR problem into up-sampling and deconvolution steps that can be easily solved. Instead of following this splitting strategy, we propose to deal with the decimation and blurring operators simultaneously by taking advantage of their particular properties in the frequency domain, leading to a new fast SR approach. Specifically, an analytical solution is derived and implemented efficiently for the Gaussian prior or any other regularization that can be formulated as an l2-regularized quadratic model, i.e., an l2–l2 optimization problem. The flexibility of the proposed SR scheme is shown through the use of various priors/regularizations, ranging from generic image priors to learning-based approaches. In the case of non-Gaussian priors, we show how the analytical solution derived for the Gaussian case can be embedded into traditional splitting frameworks, allowing the computational cost of existing algorithms to be decreased significantly. Simulation results conducted on several images with different priors illustrate the effectiveness of our fast SR approach compared with existing techniques.
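    As a simplified illustration of the frequency-domain analytical solution, the sketch below solves the l2–l2 problem for pure deblurring, i.e., with the decimation operator dropped so that each frequency decouples; the paper's actual contribution, the joint analytical treatment of decimation and blur, is not reproduced here. Names and values are illustrative assumptions.

```python
import numpy as np
from scipy.fft import fft2, ifft2

def l2_l2_deblur(y, otf, x_bar, lam):
    """Closed-form minimizer of ||H x - y||^2 + lam * ||x - x_bar||^2.

    H is a circular convolution with transfer function `otf`, so the normal
    equations (H^H H + lam I) x = H^H y + lam x_bar decouple per frequency and
    are solved exactly with two FFTs.  (The decimation operator of the SR
    problem is deliberately omitted in this simplified sketch.)
    """
    Y, Xb = fft2(y), fft2(x_bar)
    X = (np.conj(otf) * Y + lam * Xb) / (np.abs(otf) ** 2 + lam)
    return np.real(ifft2(X))

# toy usage: 3x3 box blur in wrap-around layout, x_bar taken as the observation itself
shape = (32, 32)
psf = np.zeros(shape); psf[np.ix_([-1, 0, 1], [-1, 0, 1])] = 1.0 / 9.0
y = np.random.default_rng(0).standard_normal(shape)
x_hat = l2_l2_deblur(y, fft2(psf), x_bar=y, lam=0.1)
```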