
    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Ultrasound acquisition is widespread in the biomedical field, due to its low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as images, 2D videos, and volumetric images, allow the physician to monitor the evolution of the patient's disease and support diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that change quickly over time. Denoising and super-resolution of US signals are relevant to improve the visual evaluation of the physician and the performance and accuracy of processing methods, such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of US 2D images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features in real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and we incorporate denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition. While previous denoising work trades off computational cost against effectiveness, the proposed framework matches the best denoising algorithms in terms of noise removal, anatomical feature preservation, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves the spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, anatomical district, and noise intensity. Then, we introduce a novel framework for the real-time reconstruction of non-acquired scan lines through an interpolating method; a deep learning model improves the results of the interpolation to match the target (i.e., high-resolution) image. We improve the accuracy of the prediction of the reconstructed lines through the design of the network architecture and the loss function. In the context of signal approximation, we introduce a kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to US 2D and 3D images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, at a slightly higher computational cost. For both denoising and super-resolution, we evaluate compliance with the real-time requirement of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
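    As a rough illustration of the low-rank building block described above, the sketch below denoises a 2D image by thresholding its singular values. It is a minimal NumPy sketch assuming a fixed, user-chosen threshold tau; the framework in the abstract instead learns and predicts the optimal threshold per signal.

```python
import numpy as np

def svd_threshold_denoise(img: np.ndarray, tau: float) -> np.ndarray:
    """Low-rank denoising: discard singular values below tau.

    The framework in the abstract *learns* the optimal threshold;
    here tau is a fixed value chosen by hand, for illustration only.
    """
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    s[s < tau] = 0.0                 # hard-threshold the spectrum
    return (U * s) @ Vt              # reconstruct the low-rank image

# Toy usage: a noisy 256x256 frame standing in for a US image.
noisy = np.random.rand(256, 256)
denoised = svd_threshold_denoise(noisy, tau=5.0)
```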

    International Union of Angiology (IUA) consensus paper on imaging strategies in atherosclerotic carotid artery imaging: From basic strategies to advanced approaches

    Cardiovascular disease (CVD) is the leading cause of mortality and disability in developed countries. According to the WHO, an estimated 17.9 million people died from CVDs in 2019, representing 32% of all global deaths. Of these deaths, 85% were due to major adverse cardiac and cerebral events. Early detection and care for individuals at high risk could save lives, alleviate suffering, and diminish the economic burden associated with these diseases. Carotid artery disease is not only a well-established risk factor for ischemic stroke, contributing to 10%–20% of strokes or transient ischemic attacks (TIAs), but it is also a surrogate marker of generalized atherosclerosis and a predictor of cardiovascular events. In addition to a diligent history, physical examination, and laboratory detection of metabolic abnormalities leading to vascular changes, imaging of the carotid arteries adds very important information in assessing stroke and overall cardiovascular risk. Spanning from carotid intima-media thickness (IMT) measurements in arteriopathy to plaque burden, morphology, and biology in more advanced disease, imaging of the carotid arteries can help not only in stroke prevention but also in reducing cardiovascular events in other territories (e.g., in the coronary arteries). While ultrasound is the most widely available and affordable imaging method, computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), their combinations, and other more sophisticated methods have introduced novel concepts in the detection of carotid plaque characteristics and the risk assessment of stroke and other cardiovascular events. However, despite robust progress in the use of these methods, all of them have limitations that should be taken into account. The main purpose of this consensus document is to discuss the pros and cons of the clinical, epidemiological, and research use of all these techniques.

    Methods for Photoacoustic Image Reconstruction Exploiting Properties of Curvelet Frame

    The Curvelet frame is of special significance for photoacoustic tomography (PAT) due to its sparsifying and microlocalisation properties. In this PhD project, we explore methods for image reconstruction in PAT with flat sensor geometry that exploit Curvelet properties. This thesis makes five distinct contributions: (i) we investigate the formulation of the forward, adjoint, and inverse operators for PAT in the Fourier domain. We derive a one-to-one map between wavefront directions in the image and data spaces in PAT. Combining the Fourier operators with the wavefront map allows us to create the appropriate PAT operators for solving limited-view problems due to limited angular sensor sensitivity. (ii) We devise the concept of a wedge restricted Curvelet transform, a modification of the standard Curvelet transform, which allows us to formulate a tight frame of wedge restricted Curvelets on the range of the PAT forward operator for PAT data representation. We consider details specific to PAT data, such as symmetries and time oversampling, and their consequences. We further adapt the wedge restricted Curvelet to decompose the wavefronts into visible and invisible parts in the data domain as well as in the image domain. (iii) We formulate a two-step approach based on the recovery of the complete volume of the photoacoustic data from the sub-sampled data followed by the acoustic inversion, and a one-step approach where the photoacoustic image is directly recovered from the sub-sampled data. The wedge restricted Curvelet is used as the sparse representation of the photoacoustic data in the two-step approach. (iv) We discuss a joint variational approach that incorporates Curvelet sparsity in the photoacoustic image domain and spatio-temporal regularisation via an optical flow constraint to achieve improved results for dynamic PAT reconstruction. (v) We consider the limited-view problem due to limited angular sensitivity of the sensor (see (i) for the formulation of the corresponding fast operators in the Fourier domain). We propose a complementary information learning approach based on splitting the problem into visible and invisible singularities. We perform a sparse reconstruction of the visible Curvelet coefficients using compressed sensing techniques and propose a tailored deep neural network architecture to recover the invisible coefficients.
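    As a hedged illustration of the sparse recovery in contribution (v), the sketch below shows a generic iterative soft-thresholding (ISTA) loop for recovering sparse frame coefficients. The operators A, At, Psi, and Psit are placeholders for the Fourier-domain PAT operators and the wedge restricted Curvelet analysis/synthesis pair, which are not reproduced here.

```python
import numpy as np

def ista_sparse_recovery(y, A, At, Psi, Psit, n_iter=100, lam=0.1, step=1.0):
    """ISTA for min_x ||A x - y||^2 + lam * ||Psi x||_1.

    A / At     : forward and adjoint imaging operators (placeholders for
                 the thesis' Fourier-domain PAT operators).
    Psi / Psit : sparsifying analysis/synthesis transforms (placeholders
                 for the wedge restricted Curvelet tight frame).
    """
    x = At(y)                                   # back-projected initial guess
    for _ in range(n_iter):
        grad = At(A(x) - y)                     # gradient of the data term
        c = Psi(x - step * grad)                # move to coefficient space
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # soft threshold
        x = Psit(c)                             # back to image space
    return x

# Toy usage with identity operators (only to show the call pattern):
y = np.random.rand(64, 64)
x = ista_sparse_recovery(y, A=lambda v: v, At=lambda v: v,
                         Psi=lambda v: v, Psit=lambda v: v)
```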

    Advanced Computational Methods for Oncological Image Analysis

    Cancer is the second most common cause of death worldwide and encompasses highly variable clinical and biological scenarios. Some of the current clinical challenges are (i) early diagnosis of the disease and (ii) precision medicine, which allows for treatments targeted to specific clinical cases. The ultimate goal is to optimize the clinical workflow by combining accurate diagnosis with the most suitable therapies. Toward this, large-scale machine learning research can define associations among clinical, imaging, and multi-omics studies, making it possible to provide reliable diagnostic and prognostic biomarkers for precision oncology. Such reliable computer-assisted methods (i.e., artificial intelligence) together with clinicians’ unique knowledge can be used to properly handle typical issues in evaluation/quantification procedures (i.e., operator dependence and time-consuming tasks). These technical advances can significantly improve result repeatability in disease diagnosis and guide toward appropriate cancer care. Indeed, the need to apply machine learning and computational intelligence techniques has steadily increased to effectively perform image processing operations—such as segmentation, co-registration, classification, and dimensionality reduction—and multi-omics data integration.

    Quantifying atherosclerosis in vasculature using ultrasound imaging

    Cerebrovascular disease accounts for approximately 30% of the global burden associated with cardiovascular diseases [1]. According to the World Stroke Organisation, there are approximately 13.7 million new stroke cases annually, and just under six million people die from stroke each year [2]. The underlying cause of this disease is atherosclerosis – a vascular pathology characterised by thickening and hardening of the blood vessel walls. When fatty substances such as cholesterol accumulate on the inner lining of an artery, they cause a progressive narrowing of the lumen, referred to as a stenosis. Localisation and grading of the severity of a stenosis are important for practitioners to assess the risk of rupture, which leads to stroke. Ultrasound imaging is popular for this purpose: it is low cost, non-invasive, and permits a quick assessment of vessel geometry and stenosis by measuring the intima-media thickness. Research shows that 3D monitoring of plaque progression may provide a better indication of sites at risk of rupture. Various metrics have been proposed. Among these, the quantification of plaques by measuring vessel wall volume (VWV) using the segmented media-adventitia boundaries (MAB) and lumen-intima boundaries (LIB) has been shown to be sensitive to temporal changes in carotid plaque burden. Thus, methods to segment these boundaries are required to help generate VWV measurements with high accuracy, less user interaction, and increased robustness to variability in different user acquisition protocols. This work proposes three novel methods to address these requirements and ultimately produce a highly accurate, fully automated segmentation algorithm which works on intensity-invariant data. The first method generates a novel, intensity-invariant representation of ultrasound data by creating phase-congruency maps from raw, unprocessed radio-frequency ultrasound information. Experiments showed that this representation retains the necessary anatomical structural information to facilitate segmentation, while being invariant to changes in amplitude introduced by the user. The second method is the novel application of Deep Convolutional Networks (DCN) to carotid ultrasound images to achieve fully automatic delineation of the MAB, in addition to the use of a novel fusion of amplitude and phase-congruency data as the image source. Experiments showed that the DCN produces highly accurate and automated results, and that the fusion of amplitude and phase yields superior results to either one alone. The third method is a new geometrically constrained objective function for the network's Stochastic Gradient Descent optimisation, tuning it to the segmentation problem at hand, while also developing the network further to concurrently delineate both the MAB and LIB to produce vessel wall contours. Experiments here also show that the novel geometric constraints improve the segmentation results on both MAB and LIB contours. In conclusion, the presented work provides significant novel contributions to the field of carotid ultrasound segmentation and, with future work, could lead to implementations which facilitate plaque progression analysis for the end-user.
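    The abstract does not spell out the geometric constraint, but its anatomical motivation (the lumen lies inside the vessel wall) suggests a containment penalty. The PyTorch sketch below is a hypothetical version of such a geometrically constrained objective, not the thesis' exact loss.

```python
import torch
import torch.nn.functional as F

def constrained_wall_loss(mab_logits, lib_logits, mab_gt, lib_gt, weight=0.1):
    """Joint MAB/LIB segmentation loss with a containment penalty.

    Hypothetical constraint: pixels predicted as lumen (LIB region)
    but not as vessel (MAB region) are penalised, encoding the fact
    that the lumen-intima boundary lies inside the media-adventitia
    boundary. The exact constraint in the thesis may differ.
    """
    bce = (F.binary_cross_entropy_with_logits(mab_logits, mab_gt)
           + F.binary_cross_entropy_with_logits(lib_logits, lib_gt))
    mab_p, lib_p = torch.sigmoid(mab_logits), torch.sigmoid(lib_logits)
    containment = torch.relu(lib_p - mab_p).mean()  # LIB outside MAB
    return bce + weight * containment
```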

    Generative Adversarial Network (GAN) for Medical Image Synthesis and Augmentation

    Medical image processing aided by artificial intelligence (AI) and machine learning (ML) significantly improves medical diagnosis and decision making. However, the difficulty of accessing well-annotated medical images has become one of the main constraints on further improving this technology. The generative adversarial network (GAN) is a deep neural network framework for data synthesis, which provides a practical solution for medical image augmentation and translation. In this study, we first perform a quantitative survey of the published studies on GANs for medical image processing since 2017. Then a novel adaptive cycle-consistent adversarial network (Ad CycleGAN) is proposed. We use a malaria blood cell dataset (19,578 images) and a COVID-19 chest X-ray dataset (2,347 images) to test the new Ad CycleGAN. The quantitative metrics include mean squared error (MSE), root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), universal image quality index (UIQI), spatial correlation coefficient (SCC), spectral angle mapper (SAM), visual information fidelity (VIF), Fréchet inception distance (FID), and the classification accuracy of the synthetic images. The CycleGAN and a variational autoencoder (VAE) are also implemented and evaluated for comparison. The experimental results on malaria blood cell images indicate that the Ad CycleGAN generates more valid images than CycleGAN or the VAE, and that the synthetic images by Ad CycleGAN or CycleGAN have better quality than those by the VAE. The synthetic images by Ad CycleGAN have the highest classification accuracy, at 99.61%. In the experiment on COVID-19 chest X-rays, the synthetic images by Ad CycleGAN or CycleGAN again have higher quality than those generated by the VAE. However, the synthetic images generated through the homogeneous image augmentation process have better quality than those synthesized through the image translation process. The synthetic images by Ad CycleGAN have a higher classification accuracy (95.31%) than those by CycleGAN (93.75%). In conclusion, the proposed Ad CycleGAN provides a new path to synthesize medical images with desired diagnostic or pathological patterns. It can be considered a new form of conditional GAN with effective control over the synthetic image domain. The findings offer a new path to improve deep neural network performance in medical image processing.
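    The adaptive component of Ad CycleGAN is not specified in the abstract, but the cycle-consistency term that both CycleGAN and Ad CycleGAN build on is standard. Below is a minimal PyTorch sketch of that term alone, with the adversarial and adaptive losses omitted.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G_xy, G_yx, x, y, lam=10.0):
    """Cycle-consistency term of the CycleGAN objective.

    G_xy, G_yx : the two generators (domain X -> Y and Y -> X).
    Translating to the other domain and back should reproduce the
    input; adversarial and identity terms are omitted in this sketch.
    """
    x_rec = G_yx(G_xy(x))                        # X -> Y -> X
    y_rec = G_xy(G_yx(y))                        # Y -> X -> Y
    return lam * (F.l1_loss(x_rec, x) + F.l1_loss(y_rec, y))
```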

    Generative Models for Preprocessing of Hospital Brain Scans

    In this thesis, I present novel computational methods for processing routine clinical brain scans. Such scans were originally acquired for qualitative assessment by trained radiologists, and present a number of difficulties for computational models, such as those within common neuroimaging analysis software. The overarching objective of this work is to enable efficient and fully automated analysis of large neuroimaging datasets, of the type currently present in many hospitals worldwide. The methods presented are based on probabilistic, generative models of the observed imaging data, and therefore rely on informative priors and realistic forward models. The first part of the thesis presents a model for image quality improvement, whose key component is a novel prior for multimodal datasets. I demonstrate its effectiveness for super-resolving thick-sliced clinical MR scans and for denoising CT images and MR-based, multi-parametric mapping acquisitions. I then show how the same prior can be used for within-subject, intermodal image registration, for more robustly registering large numbers of clinical scans. The second part of the thesis focusses on improved, automatic segmentation and spatial normalisation of routine clinical brain scans. I propose two extensions to a widely used segmentation technique. First, a method for this model to handle missing data, which allows me to predict entirely missing modalities from one, or a few, MR contrasts. Second, a principled way of combining the strengths of probabilistic, generative models with the unprecedented discriminative capability of deep learning. By introducing a convolutional neural network as a Markov random field prior, I can model nonlinear class interactions and learn these using backpropagation. I show that this model is robust to sequence and scanner variability. Finally, I show examples of fitting a population-level, generative model to various neuroimaging data, which can model, e.g., CT scans with haemorrhagic lesions.
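    As a loose illustration of how a spatial prior enters such a generative segmentation model, the NumPy sketch below computes the E-step responsibilities of a Gaussian mixture over voxel intensities, with an arbitrary per-voxel log-prior standing in for the CNN-based Markov random field prior described above.

```python
import numpy as np

def gmm_responsibilities(intensities, means, variances, log_prior):
    """E-step of a Gaussian mixture segmentation with a spatial prior.

    intensities : (N,) voxel intensities.
    means, variances : (K,) per-class Gaussian parameters.
    log_prior   : (N, K) log spatial prior; in the thesis this role is
                  played by a CNN acting as an MRF prior, here it is
                  just an input array for illustration.
    """
    x = intensities[:, None]                               # (N, 1)
    log_lik = -0.5 * (np.log(2 * np.pi * variances)
                      + (x - means) ** 2 / variances)      # (N, K)
    log_post = log_lik + log_prior
    log_post -= log_post.max(axis=1, keepdims=True)        # stabilise exp
    post = np.exp(log_post)
    return post / post.sum(axis=1, keepdims=True)          # responsibilities
```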

    Medical Image Enhancement using Deep Learning and Tensor Factorization Techniques

    The resolution of dental cone beam computed tomography (CBCT) images is limited by detector geometry, sensitivity, patient movement, the reconstruction technique, and the need to minimize radiation dose. The corresponding image degradation model assumes that the CBCT image is a blurred (with a point spread function, PSF), downsampled, noisy version of a high-resolution image. The quality of the image is crucial for precise diagnosis and treatment planning. The methods proposed in this thesis aim to solve the single image super-resolution (SISR) problem. The algorithms were evaluated on dental CBCT images and corresponding high-resolution (and high radiation-dose) µCT image pairs of extracted teeth. First, I designed a deep learning framework for the SISR problem, applied to CBCT slices. I tested the U-net and subpixel neural networks, which both improved the PSNR by 21-22 dB and the Dice coefficient of the canal segmentation by 1-2.2%, most significantly in the medically critical apical region. Second, I designed an algorithm for the 3D SISR problem using the canonical polyadic decomposition (CPD) of tensors. This implementation preserves the 3D structure of the volume, integrating factorization-based denoising, deblurring with a known PSF, and upsampling of the image in a lightweight algorithm with a low number of parameters. It outperforms the state-of-the-art 3D reconstruction-based algorithms with a run-time two orders of magnitude faster, and provides similar PSNR (improvement of 1.2-1.5 dB) and segmentation metrics (Dice coefficient increased on average to 0.89 and 0.90). Third, I implemented a joint alternating recovery of the unknown PSF parameters and of the high-resolution 3D image using CPD-SISR. The algorithm was compared to a state-of-the-art 3D reconstruction-based algorithm combined with the proposed alternating PSF optimization. The two algorithms showed similar improvement in PSNR, but CPD-SISR-blind converged roughly 40 times faster, in under 6 minutes both in simulation and on experimental dental computed tomography data. Finally, I proposed a solution for the 3D SISR problem using the Tucker decomposition (TD-SISR). The denoising step is realized first by TD in order to mitigate the ill-posedness of the subsequent deconvolution. Compared to CPD-SISR, the algorithm runs ten times faster. Depending on the amount of noise, higher PSNR (0.3-3.5 dB), SSI (0.58-2.43%), and segmentation values (Dice coefficient, 2% improvement) were measured. The parameters in TD-SISR are familiar from 2D SVD-based algorithms, so their tuning is easier compared to CPD-SISR.
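    A minimal sketch of the Tucker-truncation denoising step (the first stage of TD-SISR), written with the tensorly library and illustrative mode ranks; the thesis tunes these ranks much as one tunes the thresholds of a 2D SVD.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

def tucker_denoise(volume: np.ndarray, ranks=(40, 40, 40)) -> np.ndarray:
    """Denoise a 3D volume by truncating its Tucker decomposition.

    Keeping only `ranks` components per mode discards the weak
    multilinear components dominated by noise, mitigating the
    ill-posedness of the subsequent deconvolution. The ranks here
    are illustrative, not the values used in the thesis.
    """
    core, factors = tucker(tl.tensor(volume), rank=list(ranks))
    return tl.to_numpy(tl.tucker_to_tensor((core, factors)))

# Toy usage: a noisy 64^3 volume standing in for a CBCT scan.
noisy = np.random.rand(64, 64, 64)
denoised = tucker_denoise(noisy, ranks=(20, 20, 20))
```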