16 research outputs found

    Multiresolution image models and estimation techniques


    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Ultrasound (US) acquisition is widespread in the biomedical field, owing to its low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as images, 2D videos, and volumetric images, allow the physician to monitor the evolution of a patient's disease and support diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that change quickly over time. Denoising and super-resolution of US signals are relevant to improving both the visual evaluation of the physician and the performance and accuracy of processing methods such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of 2D US images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features in real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and we incorporate denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition.
    While previous denoising work trades off computational cost against effectiveness, the proposed framework matches the results of the best denoising algorithms in terms of noise removal, anatomical feature preservation, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, anatomical district, and noise intensity. We then introduce a novel framework for the real-time reconstruction of non-acquired scan lines through an interpolating method; a deep learning model refines the interpolation to match the target (i.e., high-resolution) image. We improve the accuracy of the predicted reconstructed lines through the design of the network architecture and the loss function. In the context of signal approximation, we introduce a kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to 2D and 3D US images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, at a slightly higher computational cost. For both denoising and super-resolution, we evaluate compliance with the real-time requirement of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
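    The low-rank building block described above can be illustrated with a minimal sketch. In the thesis the SVD thresholds are learned and predicted by a network; here a fixed, hand-picked threshold stands in for the learned one:

```python
import numpy as np

def svd_denoise(image, threshold):
    """Low-rank denoising: zero out singular values below a threshold.
    A sketch of the idea only -- the framework above *learns* the optimal
    threshold instead of fixing it."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    s_thresh = np.where(s >= threshold, s, 0.0)
    return U @ np.diag(s_thresh) @ Vt

# A rank-1 "image" plus small noise: only the dominant singular value
# (about 8 here) exceeds the threshold, so the noise components vanish.
rng = np.random.default_rng(0)
clean = np.outer(np.ones(8), np.ones(8))            # rank 1
noisy = clean + 0.01 * rng.standard_normal((8, 8))
denoised = svd_denoise(noisy, threshold=1.0)
print(np.linalg.matrix_rank(denoised))              # -> 1
```

    In practice the threshold must adapt to the (unknown) noise level of each image, which is what motivates predicting it with a learned model.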

    Automating the Reconstruction of Neuron Morphological Models: the Rivulet Algorithm Suite

    The automatic reconstruction of single neuron cells is essential to enable large-scale data-driven investigations in computational neuroscience. The problem remains an open challenge due to various imaging artefacts caused by the fundamental limits of light microscopic imaging. Few previous methods were able to generate satisfactory neuron reconstruction models automatically, without human intervention, and the manual tracing of neuron models is labour-heavy and time-consuming, making the collection of large-scale neuron morphology databases one of the major bottlenecks in morphological neuroscience. This thesis presents a suite of algorithms developed to target the challenge of automatically reconstructing neuron morphological models with minimal human intervention. We first propose the Rivulet algorithm, which iteratively backtracks the neuron fibres from the termini back to the soma centre. By refining many details of the Rivulet algorithm, we later propose the Rivulet2 algorithm, which not only eliminates a few hyper-parameters but also improves the robustness against noisy images. We also propose a soma surface reconstruction method to make the neuron models biologically plausible around the soma body. The tracing algorithms, including Rivulet and Rivulet2, normally need one or more hyper-parameters for segmenting the neuron body out of the noisy background. To make this pipeline fully automatic, we propose to use a 2.5D neural network to enhance the curvilinear structures of the neuron fibres. The trained neural networks can quickly highlight the fibres of interest and suppress the noise points in the background for the neuron tracing algorithms. We evaluated the proposed methods on the data released by both the DIADEM and the BigNeuron challenges. The experimental results show that our proposed tracing algorithms achieve state-of-the-art results.
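    The backtracking idea can be sketched on a toy 2D grid: starting from a terminus, greedily descend a distance-from-soma map until the soma is reached. This is a simplified stand-in only; the actual Rivulet algorithms perform sub-voxel gradient tracing on 3D image stacks and erase already-traced regions between iterations:

```python
import numpy as np

def backtrack(dist_map, start):
    """Greedy backtracking on a distance-from-soma map: from a terminus,
    repeatedly step to the neighbour with the smallest distance until the
    soma (distance 0) is reached. A toy 2D stand-in for Rivulet's tracing."""
    path = [start]
    y, x = start
    while dist_map[y, x] > 0:
        neighbours = [(y + dy, x + dx)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0)
                      and 0 <= y + dy < dist_map.shape[0]
                      and 0 <= x + dx < dist_map.shape[1]]
        y, x = min(neighbours, key=lambda p: dist_map[p])
        path.append((y, x))
    return path

# Distance map = Chebyshev distance from a soma placed at (0, 0).
dist = np.fromfunction(lambda i, j: np.maximum(i, j), (5, 5))
trace = backtrack(dist, (4, 4))
print(trace[0], trace[-1])  # (4, 4) (0, 0)
```

    Repeating this from every detected terminus, and marking traced voxels so they are not revisited, yields the iterative fibre-by-fibre reconstruction the abstract describes.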

    Correction of Partial Volume Effects in Emission Tomography

    Partial volume effects (PVE) designate the blur commonly found in nuclear medicine images, and this PhD work is dedicated to their correction, with the objective of qualitative and quantitative improvement of such images. PVE arise from the limited spatial resolution of functional imaging with either Positron Emission Tomography (PET) or Single Photon Emission Computed Tomography (SPECT). They can be defined as a signal loss in tissues of a size similar to the Full Width at Half Maximum (FWHM) of the point spread function (PSF) of the imaging device. In addition, PVE induce activity cross-contamination between adjacent structures with different tracer uptakes, which can lead to under- or over-estimation of the real activity of the analysed regions. Various methodologies currently exist to compensate or even correct for PVE; they may be classified by their place in the processing chain (before, during, or after the image reconstruction process) and by whether they depend on co-registered anatomical images of higher spatial resolution, for instance Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). The voxel-based, post-reconstruction approach was chosen for this work, to avoid both the definition of regions of interest and the dependency on proprietary reconstructions developed by each manufacturer; it was exploited and improved to best correct for PVE. Two distinct contributions were carried out. The first is based on a multi-resolution methodology in the wavelet domain, exploiting the higher-resolution details of a co-registered anatomical image associated with the functional dataset to correct. The second is the improvement of iterative deconvolution methodologies using tools such as wavelets and their directional extension, curvelets, which add the notion of direction to the analysis. These approaches were applied and validated on synthetic, simulated, and clinical images, in both neurology and oncology applications. Finally, as currently available commercial scanners incorporate more and more spatial resolution corrections in their reconstruction algorithms, we compared such approaches in SPECT and PET to the iterative deconvolution methodology developed in this work.
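    Iterative deconvolution of the kind improved in this work can be sketched with the classic Richardson-Lucy scheme (shown in 1D for brevity; the wavelet/curvelet regularisation the thesis adds between iterations is omitted here):

```python
import numpy as np

def richardson_lucy(blurred, psf, iterations=50):
    """Classic Richardson-Lucy iterative deconvolution (1D sketch).
    Each iteration re-blurs the current estimate, compares it to the
    measured data, and multiplicatively corrects the estimate."""
    estimate = np.full_like(blurred, blurred.mean())
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# A two-peak "activity" profile blurred by a Gaussian PSF (the partial
# volume effect); deconvolution restores most of the lost peak amplitude.
x = np.arange(64, dtype=float)
signal = (np.exp(-0.5 * ((x - 20) / 1.5) ** 2)
          + np.exp(-0.5 * ((x - 44) / 1.5) ** 2))
psf = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
psf /= psf.sum()
blurred = np.convolve(signal, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print(blurred.max(), restored.max())  # restored peak is higher
```

    Without regularisation such schemes amplify noise as iterations proceed, which is precisely the weakness the wavelet- and curvelet-based denoising steps described above are meant to address.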

    Fast imaging in non-standard X-ray computed tomography geometries


    Large Scale Inverse Problems

    This book is the second volume of a three-volume series recording the "Radon Special Semester 2011 on Multiscale Simulation & Analysis in Energy and the Environment" that took place in Linz, Austria, October 3-7, 2011. This volume addresses the common ground in the mathematical and computational procedures required for large-scale inverse problems and data assimilation in forefront applications. The solution of inverse problems is fundamental to a wide variety of applications such as weather forecasting, medical tomography, and oil exploration. Regularisation techniques are needed to ensure solutions of sufficient quality to be useful and soundly theoretically based. This book addresses the common techniques required for all the applications, and is thus truly interdisciplinary. This collection of survey articles focusses on the large inverse problems commonly arising in simulation and forecasting in the earth sciences.
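    The need for regularisation mentioned above can be shown with the simplest example, Tikhonov (ridge) regularisation, which stabilises an inverse problem by penalising the solution norm. A generic sketch, not tied to any chapter of the book:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularised least squares:
    argmin_x ||Ax - b||^2 + lam * ||x||^2,
    solved here via the normal equations (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# With A = I the effect is transparent: the solution is b shrunk
# by a factor 1 / (1 + lam) toward zero.
A = np.eye(3)
b = np.array([1.0, 2.0, 3.0])
print(tikhonov_solve(A, b, lam=0.0))  # [1. 2. 3.]  (no regularisation)
print(tikhonov_solve(A, b, lam=1.0))  # [0.5 1. 1.5] (shrunk by 1/2)
```

    For the truly large-scale problems the book targets, the normal equations are never formed explicitly; iterative solvers with built-in regularisation (early-stopped conjugate gradients, for instance) play the same role.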

    Methods for Photoacoustic Image Reconstruction Exploiting Properties of Curvelet Frame

    The Curvelet frame is of special significance for photoacoustic tomography (PAT) due to its sparsifying and microlocalisation properties. In this PhD project, we explore methods for image reconstruction in PAT with a flat sensor geometry using Curvelet properties. This thesis makes five distinct contributions: (i) We investigate the formulation of the forward, adjoint, and inverse operators for PAT in the Fourier domain. We derive a one-to-one map between wavefront directions in the image and data spaces in PAT. Combining the Fourier operators with the wavefront map allows us to create the appropriate PAT operators for solving limited-view problems due to limited angular sensor sensitivity. (ii) We devise the concept of a wedge-restricted Curvelet transform, a modification of the standard Curvelet transform, which allows us to formulate a tight frame of wedge-restricted Curvelets on the range of the PAT forward operator for PAT data representation. We consider details specific to PAT data, such as symmetries and time oversampling, and their consequences. We further adapt the wedge-restricted Curvelet to decompose the wavefronts into visible and invisible parts in the data domain as well as in the image domain. (iii) We formulate a two-step approach based on recovery of the complete volume of the photoacoustic data from the sub-sampled data followed by acoustic inversion, and a one-step approach where the photoacoustic image is directly recovered from the sub-sampled data. The wedge-restricted Curvelet is used as the sparse representation of the photoacoustic data in the two-step approach. (iv) We discuss a joint variational approach that incorporates Curvelet sparsity in the photoacoustic image domain and spatio-temporal regularisation via an optical flow constraint to achieve improved results for dynamic PAT reconstruction.
    (v) We consider the limited-view problem due to the limited angular sensitivity of the sensor (see (i) for the formulation of the corresponding fast operators in the Fourier domain). We propose a complementary-information learning approach based on splitting the problem into visible and invisible singularities. We perform a sparse reconstruction of the visible Curvelet coefficients using compressed sensing techniques and propose a tailored deep neural network architecture to recover the invisible coefficients.
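    The compressed-sensing recovery of sparse coefficients mentioned in (iii) and (v) can be illustrated with iterative soft-thresholding (ISTA) on a generic random sensing matrix; this is a stand-in for the actual PAT operators and Curvelet coefficients, not the thesis's reconstruction:

```python
import numpy as np

def ista(A, y, lam, steps=500):
    """Iterative soft-thresholding (ISTA) for the lasso problem
    min_x 0.5 * ||Ax - y||^2 + lam * ||x||_1.
    Each step: gradient descent on the quadratic term, then soft-threshold."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

# Recover a 3-sparse coefficient vector from only 40 linear measurements.
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # sensing matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]             # sparse "Curvelet" coefficients
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
print(np.linalg.norm(x_hat - x_true))              # small recovery error
```

    Sparsity in a well-chosen frame is what makes recovery from sub-sampled data possible, which is why the wedge-restricted Curvelet construction above matters: it concentrates the visible part of the PAT data into few coefficients.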

    Robust density modelling using the student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities and show, through experiments over two well-known datasets (Weizmann, MuHAVi), a remarkable improvement in classification accuracy. © 2011 IEEE
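    The robustness argument can be made concrete by comparing log-densities at an outlying observation. A standalone sketch (standard parameters, not the paper's fitted mixtures):

```python
import math

def gauss_logpdf(x, mu=0.0, sigma=1.0):
    """Log-density of a Gaussian N(mu, sigma^2)."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def student_t_logpdf(x, nu=3.0, mu=0.0, sigma=1.0):
    """Log-density of a location-scale Student's t with nu degrees of freedom."""
    z = (x - mu) / sigma
    return (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(nu * math.pi * sigma ** 2)
            - (nu + 1) / 2 * math.log(1 + z * z / nu))

# An outlier at 6 standard deviations: the Gaussian log-likelihood collapses
# quadratically, while the heavy-tailed t decays only logarithmically, so a
# single corrupted feature vector cannot dominate the model fit.
print(gauss_logpdf(6.0))      # ~ -18.92
print(student_t_logpdf(6.0))  # ~ -6.13 (far less penalised)
```

    In an HMM, observation log-likelihoods are summed along the state path, so this difference of roughly 13 nats per outlier directly explains why t-distributed observation models degrade more gracefully on noisy feature tracks.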

    Deep learning-based diagnostic system for malignant liver detection

    Cancer is the second most common cause of death in human beings, and liver cancer is the fifth most common cause of mortality. The prevention of deadly diseases requires timely, independent, accurate, and robust detection of the ailment by a computer-aided diagnostic (CAD) system. Executing such an intelligent CAD requires some preliminary steps, including preprocessing, attribute analysis, and identification. In recent studies, conventional techniques have been used to develop computer-aided diagnosis algorithms. However, such traditional methods can adversely affect the structural properties of processed images and perform inconsistently due to the variable shape and size of the region of interest. Moreover, the unavailability of sufficient datasets makes the performance of the proposed methods doubtful for commercial use. To address these limitations, I propose novel methodologies in this dissertation. First, I modified a generative adversarial network (GAN) to perform deblurring and contrast adjustment on computed tomography (CT) scans. Second, I designed a deep neural network with a novel loss function for fully automatic, precise segmentation of the liver and lesions from CT scans. Third, I developed a multi-modal deep neural network that integrates pathological data with imaging data to perform computer-aided diagnosis for malignant liver detection. The dissertation starts with background information that discusses the study objectives and the workflow. Chapter 2 then reviews a general schematic for developing a computer-aided algorithm, including image acquisition techniques, preprocessing steps, feature extraction approaches, and machine learning-based prediction methods. The first study, proposed in Chapter 3, discusses blurred images and their possible effects on classification; a novel multi-scale GAN with residual image learning is proposed to deblur images.
    The second method, in Chapter 4, addresses the issue of low-contrast CT scan images: a multi-level GAN is utilised to enhance images with well-contrasted regions, and the enhanced images improve cancer diagnosis performance. Chapter 5 proposes a deep neural network for the segmentation of the liver and lesions from abdominal CT scan images; a modified U-Net with a novel loss function can precisely segment minute lesions. Similarly, Chapter 6 introduces a multi-modal approach for the diagnosis of liver cancer variants, integrating pathological data with CT scan images. In summary, this dissertation presents novel algorithms for preprocessing and disease detection, and the comparative analysis validates the effectiveness of the proposed methods in computer-aided diagnosis.
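    A common starting point for lesion segmentation losses of the kind Chapter 5 builds on is the soft Dice loss, shown here as a generic NumPy sketch (not the dissertation's exact loss function):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P.T| / (|P| + |T|).
    Because it measures overlap rather than per-voxel accuracy, it is not
    swamped by the huge number of background voxels, which makes it a
    natural baseline for segmenting minute lesions."""
    pred = pred.ravel()
    target = target.ravel()
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

perfect = np.array([[0.0, 1.0], [1.0, 0.0]])
print(dice_loss(perfect, perfect))      # ~ 0.0 (perfect overlap)
print(dice_loss(perfect, 1 - perfect))  # ~ 1.0 (no overlap)
```

    In training, `pred` would be the network's sigmoid/softmax output and `target` the binary lesion mask; the loss is differentiable in `pred`, so it can be minimised directly by gradient descent.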