56 research outputs found

    Speckle Noise Reduction in Medical Ultrasound Images Using Modelling of Shearlet Coefficients as a Nakagami Prior

    The diagnostic quality of UltraSound (US) medical images is degraded by the presence of speckle noise, which obscures small details and edges in the image. This paper presents a novel method based on modeling the shearlet coefficients of log-transformed US images. Noise-free log-transformed coefficients are modeled with a Nakagami distribution and speckle noise coefficients with a Gaussian distribution. The Method of Log Cumulants (MoLC) and the Method of Moments (MoM) are used to estimate the parameters of these distributions. Noise-free shearlet coefficients are then obtained by Maximum a Posteriori (MAP) estimation from the noisy coefficients. Experiments were performed on synthetic and real US images, and subjective and objective quality assessments of the proposed method are reported and compared against six existing methods. The obtained results demonstrate the effectiveness of the proposed method over the other methods.
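    To make the MAP step concrete, here is a minimal worked example, assuming (as the abstract describes) an additive Gaussian noise model y = x + n in the log-transformed shearlet domain and a Nakagami prior on the noise-free coefficient x; the paper's exact estimator and parameterization may differ.

```latex
% Nakagami prior on the noise-free coefficient x > 0 and Gaussian likelihood:
%   p(x) \propto x^{2m-1} \exp\!\big(-\tfrac{m}{\Omega} x^2\big),
%   p(y \mid x) \propto \exp\!\big(-\tfrac{(y-x)^2}{2\sigma^2}\big).
% Setting d/dx [\log p(y \mid x) + \log p(x)] = 0 gives the quadratic
%   x^2\Big(\tfrac{1}{\sigma^2} + \tfrac{2m}{\Omega}\Big) - \tfrac{y}{\sigma^2}\,x - (2m-1) = 0,
% whose positive root is the MAP shrinkage rule (valid for m \ge 1/2):
\hat{x}_{\mathrm{MAP}}(y) =
  \frac{\dfrac{y}{\sigma^2}
        + \sqrt{\dfrac{y^2}{\sigma^4} + 4\,(2m-1)\Big(\dfrac{1}{\sigma^2} + \dfrac{2m}{\Omega}\Big)}}
       {2\Big(\dfrac{1}{\sigma^2} + \dfrac{2m}{\Omega}\Big)}
```

    In a pipeline of this kind, the prior parameters (m, Ω) and the noise variance σ² would be supplied by the MoLC/MoM estimation steps mentioned above, and the shrinkage rule applied coefficient-wise in each shearlet subband.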

    Statistical modeling and processing of high frequency ultrasound images: application to dermatologic oncology

    This thesis studies statistical processing of high frequency ultrasound images, with application to in-vivo exploration of human skin and noninvasive lesion assessment. More precisely, Bayesian methods are considered in order to perform tissue segmentation in ultrasound images of skin. It is established that ultrasound signals backscattered from skin tissues converge to a complex Lévy flight random process with non-Gaussian α-stable statistics, and that the envelope signal follows a generalized (heavy-tailed) Rayleigh distribution. Based on these results, it is proposed to model the distribution of multiple-tissue ultrasound images as a spatially coherent finite mixture of heavy-tailed Rayleigh distributions, where the spatial coherence inherent to biological tissues is modeled by a Potts-Markov random field. An original Bayesian algorithm combined with a Markov chain Monte Carlo (MCMC) method is then proposed to jointly estimate the mixture parameters and a label vector associating each voxel to a tissue. The proposed method is successfully applied to the segmentation of in-vivo skin tumors in high frequency 2D and 3D ultrasound images. This method is subsequently extended by including the estimation of the Potts regularization parameter β within the MCMC algorithm. Standard MCMC methods cannot be applied to this problem because the likelihood of β is intractable. This difficulty is addressed by using a likelihood-free Metropolis-Hastings algorithm based on the sufficient statistic of the Potts model. The resulting unsupervised segmentation method is successfully applied to three-dimensional ultrasound images. Finally, the problem of computing the Cramér-Rao bound (CRB) of β is studied. The CRB depends on the derivatives of the intractable normalizing constant of the Potts model; this is resolved by proposing an original Monte Carlo algorithm, which is successfully applied to compute the CRB of the Ising and Potts models.
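    To illustrate why the intractable Potts normalizing constant is not a roadblock, below is a minimal sketch of one likelihood-free (exchange-type) Metropolis-Hastings update of the regularization parameter β, assuming a 2D label field with 4-connectivity, a symmetric random-walk proposal, and a flat prior on β ≥ 0; the thesis's actual ABC scheme, proposal, and auxiliary-field sampler may differ.

```python
import numpy as np

def potts_stat(z):
    """Sufficient statistic: number of equal-label neighbour pairs (4-connectivity)."""
    return np.sum(z[1:, :] == z[:-1, :]) + np.sum(z[:, 1:] == z[:, :-1])

def gibbs_sweep(z, beta, K, rng):
    """One raster-order Gibbs sweep over a K-label Potts field at inverse temperature beta."""
    H, W = z.shape
    for i in range(H):
        for j in range(W):
            counts = np.zeros(K)                      # neighbours of each label
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    counts[z[ni, nj]] += 1
            p = np.exp(beta * counts)
            z[i, j] = rng.choice(K, p=p / p.sum())
    return z

def exchange_update(beta, z_obs, K, rng, step=0.05, n_sweeps=5):
    """One likelihood-free (exchange-type) MH update of beta.

    The intractable Potts normalising constant cancels because an auxiliary
    field w is simulated at the proposed value beta_prop.  A few Gibbs sweeps
    only give an approximate auxiliary draw; better samplers are preferable.
    """
    beta_prop = beta + step * rng.standard_normal()
    if beta_prop < 0:                                  # flat prior on beta >= 0
        return beta
    w = rng.integers(0, K, size=z_obs.shape)           # initialise auxiliary field
    for _ in range(n_sweeps):
        w = gibbs_sweep(w, beta_prop, K, rng)
    log_alpha = (beta_prop - beta) * (potts_stat(z_obs) - potts_stat(w))
    return beta_prop if np.log(rng.uniform()) < log_alpha else beta
```

    In a full sampler of the kind described above, such a β update would alternate with Gibbs updates of the voxel labels and of the heavy-tailed Rayleigh mixture parameters.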

    Point process simulation of generalised inverse Gaussian processes and estimation of the Jaeger integral

    In this paper, novel simulation methods are provided for the generalised inverse Gaussian (GIG) Lévy process. Such processes are intractable to simulate except in certain special edge cases, since the Lévy density associated with the GIG process is expressed as an integral involving certain Bessel functions, known as the Jaeger integral in diffusive transport applications. We show for the first time how to solve the problem indirectly, using generalised shot-noise methods to simulate the underlying point processes and constructing an auxiliary-variables approach that avoids any direct calculation of the integrals involved. The resulting augmented bivariate process is still intractable, so we propose a novel thinning method based on upper bounds on the intractable integrand. Moreover, our approach leads to lower and upper bounds on the Jaeger integral itself, which may be compared with other approximation methods. The shot-noise method involves a truncated infinite series of decreasing random variables and is therefore approximate, although the series is found to be rapidly convergent in most cases. We note that the GIG process is the required Brownian motion subordinator for the generalised hyperbolic (GH) Lévy process, so our simulation approach extends straightforwardly to the simulation of these intractable processes. Our new methods will find application in forward simulation of processes of GIG and GH type, for example in financial and engineering data, as well as in inference for the states and parameters of stochastic processes driven by GIG and GH Lévy processes.
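    For orientation, a generic generalised shot-noise representation of a subordinator X on [0, T] with Lévy density ν, together with the thinning idea, is sketched below; the paper's GIG-specific construction introduces auxiliary variables and explicit bounds in place of the intractable ratio, so this is only the skeleton it builds on.

```latex
% Series representation, with jumps listed in decreasing size:
X(t) \;=\; \sum_{i=1}^{\infty} h(\Gamma_i)\,\mathbf{1}\{V_i \le t\},
\qquad
h(\gamma) \;=\; \inf\Big\{x>0 \,:\, \nu^{+}(x) \le \tfrac{\gamma}{T}\Big\},
\qquad
\nu^{+}(x) \;=\; \int_{x}^{\infty} \nu(u)\,\mathrm{d}u,
% where \Gamma_1 < \Gamma_2 < \dots are the epochs of a unit-rate Poisson process
% and V_i \sim \mathcal{U}(0,T) are i.i.d. jump times.
%
% Thinning: if \nu is intractable but a dominating density \nu_0 \ge \nu has a
% tractable tail, simulate candidate jumps from \nu_0 and keep each candidate of
% size x independently with probability \nu(x)/\nu_0(x); by Poisson thinning the
% retained jumps form a point process with the target density \nu.
```

    In practice the series is truncated after finitely many terms, which is the source of the approximation error (and the rapid convergence) discussed in the abstract.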

    Electromagnetic Tracking in High Dose Rate Brachytherapy - A Composite Analysis Model

    Electromagnetic tracking (EMT) in high dose rate brachytherapy faces a number of signal processing challenges, which we summarize in this study. We propose a coherent signal processing chain built around a particle filter that tracks the state-space trajectory of sensors inside catheters implanted surgically into the breast of female patients. Singular spectrum analysis is employed to remove high-amplitude artifact signals from the recordings and to decompose simultaneously recorded signals from additional fiducial sensors used to monitor breathing motion. Ensemble empirical mode decomposition is applied to both the fiducial and solenoid sensor signals to decompose them into their intrinsic modes. Information-theoretic similarity measures serve to identify the intrinsic modes that carry information about the breathing-motion contamination of the observed solenoid sensor signals. Finally, multi-dimensional scaling establishes a common principal coordinate system in which the various EMT signals and the related data deduced from an initial X-ray CT scan can be compared quantitatively, to identify any deviations from the treatment plan established with the CT data. We also consider the distributions of such deviations and demonstrate their heavy-tailed character. A Hartigan dip test is employed to establish the uni- or bi-modal character of these distributions, which we approximate by alpha-stable distributions.
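    As one concrete piece of such a chain, the multi-dimensional scaling step can be sketched as classical MDS (principal coordinate analysis); this is a minimal textbook version assuming a matrix of pairwise distances between sensor/CT positions, not the study's exact implementation.

```python
import numpy as np

def classical_mds(D, k=3):
    """Classical multi-dimensional scaling (principal coordinate analysis).

    D : (n, n) matrix of pairwise distances between positions.
    Returns an (n, k) embedding in a common principal coordinate system.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                   # double-centred Gram matrix
    eigval, eigvec = np.linalg.eigh(B)
    order = np.argsort(eigval)[::-1][:k]          # largest eigenvalues first
    L = np.sqrt(np.clip(eigval[order], 0, None))  # guard against tiny negative values
    return eigvec[:, order] * L

# Example usage on synthetic points (hypothetical data):
# rng = np.random.default_rng(0); X = rng.normal(size=(10, 3))
# D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
# Y = classical_mds(D, k=3)
```

    Once EMT-derived and CT-derived positions are embedded in such a common coordinate system, they can be compared point by point, and the resulting deviation distributions tested for multimodality and fitted with alpha-stable laws as described above.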

    Review: Deep learning in electron microscopy

    Deep learning is transforming most areas of science and technology, including electron microscopy. This review offers a practical perspective aimed at developers with limited familiarity with deep learning. For context, we review popular applications of deep learning in electron microscopy. We then discuss the hardware and software needed to get started with deep learning and to interface with electron microscopes. Next, we review neural network components, popular architectures, and their optimization. Finally, we discuss future directions of deep learning in electron microscopy.

    Unbiased risk estimate algorithms for image deconvolution.

    The subject of this thesis is image deconvolution. In many real applications, e.g. biomedical imaging, seismology, astronomy, remote sensing and optical imaging, undesirable degradation by blurring (e.g. optical diffraction-limited conditions) and noise corruption (e.g. photon-counting noise and readout noise) is inherent to any physical acquisition device. Image deconvolution, as a standard linear inverse problem, is often applied to recover images from their blurred and noisy observations. Our interest lies in novel deconvolution algorithms based on unbiased risk estimates. This thesis is organized in two main parts, briefly summarized below. We first consider non-blind image deconvolution under additive white Gaussian noise (AWGN), where the point spread function (PSF) is exactly known. Our driving principle is the minimization of an unbiased estimate of the mean squared error (MSE) between observed and clean data, known as Stein's unbiased risk estimate (SURE). The SURE-LET approach, originally developed for denoising, is extended to the deconvolution problem: a new SURE-LET deconvolution algorithm for fast and efficient implementation is proposed. More specifically, we parametrize the deconvolution process as a linear combination of a small number of known basic processings, which we call the linear expansion of thresholds (LET), and then minimize the SURE over the unknown linear coefficients. Due to the quadratic nature of SURE and the linear parametrization, the optimal linear weights of the combination are obtained by solving a linear system of equations. Experiments show that the proposed approach outperforms other state-of-the-art methods in terms of PSNR, SSIM, visual quality, and computation time. The second part of this thesis is concerned with PSF estimation for blind deconvolution. We propose the blur-SURE, an unbiased estimate of a filtered version of the MSE, as a novel criterion for estimating the PSF from the observed image only: the PSF is identified by minimizing this new objective functional, whose validity has been theoretically verified. The blur-SURE framework is exemplified with a number of parametric forms of the PSF, most typically the Gaussian kernel. Experiments show that blur-SURE minimization yields highly accurate estimates of the PSF parameters. We then perform non-blind deconvolution using the SURE-LET algorithm proposed in Part I, with the estimated PSF.
    Experiments show that the estimated PSF yields deconvolution results with negligible quality loss compared to deconvolution with the exact PSF. The algorithms based on unbiased risk estimates may be extended to other noise models; since the SURE-based approach is not restricted to the convolution operation, it can also be extended to other distortion scenarios. Xue, Feng. Thesis (Ph.D.), The Chinese University of Hong Kong, 2013.
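    The key computational point, that the LET parametrization turns SURE minimization into a linear solve, can be made explicit with the denoising form of SURE; this is a standard identity, assuming y = x + n with n ~ N(0, σ²I), and the thesis derives a regularized variant of it for the deconvolution setting.

```latex
% LET parametrization: F(y) = \sum_k a_k F_k(y), with known basic processings F_k.
% Stein's unbiased risk estimate of \tfrac{1}{N}\|F(y) - x\|^2:
\mathrm{SURE}(\mathbf{a}) \;=\; \frac{1}{N}\Big\|\sum_k a_k F_k(y) - y\Big\|^2
  \;+\; \frac{2\sigma^2}{N} \sum_k a_k\, \mathrm{div}\,F_k(y) \;-\; \sigma^2 .
% This is quadratic in a, so \partial\,\mathrm{SURE}/\partial a_k = 0 yields the linear system
\sum_{l} \underbrace{F_k(y)^{\!\top} F_l(y)}_{M_{kl}}\, a_l
  \;=\; F_k(y)^{\!\top} y \;-\; \sigma^2\, \mathrm{div}\,F_k(y), \qquad k = 1,\dots,K .
```

    Because the divergence terms can themselves be estimated (e.g. by a Monte Carlo perturbation), computing the optimal weights reduces to assembling and solving a small K × K system.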

    Contributions à la segmentation d'image : phase locale et modèles statistiques

    This document presents a synthesis of my work since my PhD thesis, mainly on the problem of image segmentation.

    The Convergence of Human and Artificial Intelligence on Clinical Care - Part I

    This edited book contains twelve studies, both large-scale and pilot, in five main categories: (i) adaptive imputation to increase the density of clinical data for improved downstream modeling; (ii) machine-learning-empowered diagnosis models; (iii) machine learning models for outcome prediction; (iv) innovative use of AI to improve our understanding of the public view; and (v) understanding of providers' attitudes toward trusting AI-derived insights for complex cases. This collection is an excellent example of how technology can add value in healthcare settings and hints at some of the pressing challenges in the field. Artificial intelligence is gradually becoming a go-to technology in clinical care; it is therefore important to work collaboratively and to shift from performance-driven outcomes to risk-sensitive model optimization, improved transparency, and better patient representation, to ensure more equitable healthcare for all.