12 research outputs found

    Modélisation locale en imagerie par résonance magnétique de diffusion : de l'acquisition comprimée au connectome

    Diffusion-weighted magnetic resonance imaging is a non-invasive medical imaging modality that measures the microscopic displacements of water molecules in biological tissues. This information can be used to infer the structure of the brain. Local diffusion modeling techniques make it possible to compute the orientation and geometry of white matter tissue. This thesis focuses on optimizing the metaparameters used by local models. We derive optimal parameters that improve the quality of local diffusion metrics, white matter tractography, and global connectivity. Q-space sampling is one of the main parameters limiting the types of models and inference applicable to clinically acquired data. In this thesis, we develop a q-space sampling technique that leverages compressed sensing to reduce the required acquisition time
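
    As a hedged illustration of the compressed-sensing idea mentioned in this abstract (not the sampling scheme developed in the thesis), the sketch below recovers a sparse signal from undersampled linear measurements with the ISTA algorithm; the matrix sizes, sparsity level, and regularization weight are arbitrary assumptions.

```python
# Minimal compressed-sensing sketch: recover a sparse vector from
# undersampled measurements y = A x using ISTA (illustrative only;
# sizes, sparsity, and regularization weight are assumptions).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 60, 8                     # signal length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true

lam = 0.05
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-fit gradient
x = np.zeros(n)
for _ in range(500):                     # ISTA iterations
    grad = A.T @ (A @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```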

    Reconstruction et description des fonctions de distribution d'orientation en imagerie de diffusion à haute résolution angulaire

    This thesis concerns the reconstruction and description of orientation distribution functions (ODFs) in high angular resolution diffusion imaging (HARDI), such as q-ball imaging (QBI). QBI is used to analyze fiber structures (crossing, bending, fanning, etc.) within a voxel more accurately. In this field, the ODF reconstructed from QBI is widely used to resolve complex intravoxel fiber configurations. However, until now, the assessment of the characteristics or quality of ODFs has remained mainly visual and qualitative, although the use of a few objective quality metrics, borrowed directly from classical signal and image processing theory, has also been reported. At the same time, although metrics such as generalized anisotropy (GA) and generalized fractional anisotropy (GFA) have been proposed for classifying intravoxel fiber configurations, this classification remains a problem. On the other hand, QBI often needs a large number of acquisitions (usually more than 60 directions) to compute ODFs accurately, so reducing the quantity of QBI data (i.e., shortening acquisition time) while maintaining ODF quality is a real challenge. In this context, we have addressed the problems of how to reconstruct high-quality ODFs and assess their characteristics. We have proposed a new paradigm for describing the characteristics of ODFs more quantitatively. It consists of regarding an ODF as a general three-dimensional (3D) point cloud, projecting the 3D point cloud onto an angle-distance map (ADM), constructing an angle-distance matrix (ADMAT), and calculating morphological characteristics of the ODF such as length ratio, separability, and uncertainty. In particular, a new metric called PEAM (PEAnut Metric), based on computing the deviation of ODFs from a single-fiber ODF represented by a peanut shape, was proposed and used to classify intravoxel fiber configurations. Several ODF reconstruction methods were also compared using the proposed metrics. The results showed that the characteristics of 3D point clouds can be assessed in a relatively complete and quantitative manner. Concerning the reconstruction of high-quality ODFs from reduced data, we have proposed two methods. The first is based on interpolation by Delaunay triangulation, with constraints imposed in both q-space and spatial space. The second combines random diffusion gradient direction sampling, compressed sensing, resampling density increase, and missing diffusion signal recovery. The results showed that the proposed missing diffusion signal recovery approaches enable us to obtain accurate ODFs from a relatively small number of diffusion signals.
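
    Since this abstract refers to generalized fractional anisotropy (GFA) as one of the standard ODF metrics, the following sketch computes Tuch's GFA from ODF values sampled on the sphere; the ODF samples below are placeholder inputs, not data from the thesis.

```python
# Generalized fractional anisotropy (GFA) of an ODF sampled on the sphere,
# following Tuch's definition: GFA = std(psi) / rms(psi).
# The ODF samples below are placeholders for illustration.
import numpy as np

def gfa(odf_samples: np.ndarray) -> float:
    """odf_samples: ODF values evaluated on n sphere directions."""
    psi = np.asarray(odf_samples, dtype=float)
    n = psi.size
    num = n * np.sum((psi - psi.mean()) ** 2)
    den = (n - 1) * np.sum(psi ** 2)
    return float(np.sqrt(num / den))

# Nearly isotropic ODF -> GFA close to 0; a sharply peaked ODF -> GFA close to 1.
print(gfa(np.full(100, 1.0) + 1e-3 * np.random.default_rng(0).standard_normal(100)))
print(gfa(np.concatenate([np.ones(5), 1e-3 * np.ones(95)])))
```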

    Improved 3D MR Image Acquisition and Processing in Congenital Heart Disease

    Congenital heart disease (CHD) is the most common type of birth defect, affecting about 1% of the population. MRI is an essential tool in the assessment of CHD, including diagnosis, intervention planning and follow-up. Three-dimensional MRI can provide particularly rich visualization and information. However, it is often complicated by long scan times, cardiorespiratory motion, injection of contrast agents, and complex and time-consuming postprocessing. This thesis comprises four pieces of work that attempt to respond to some of these challenges. The first piece of work aims to enable fast acquisition of 3D time-resolved cardiac imaging during free breathing. Rapid imaging was achieved using an efficient spiral sequence and a sparse parallel imaging reconstruction. The feasibility of this approach was demonstrated on a population of 10 patients with CHD, and areas of improvement were identified. The second piece of work is an integrated software tool designed to simplify and accelerate the development of machine learning (ML) applications in MRI research. It also exploits the strengths of recently developed ML libraries for efficient MR image reconstruction and processing. The third piece of work aims to reduce contrast dose in contrast-enhanced MR angiography (MRA). This would reduce risks and costs associated with contrast agents. A deep learning-based contrast enhancement technique was developed and shown to improve image quality in real low-dose MRA in a population of 40 children and adults with CHD. The fourth and final piece of work aims to simplify the creation of computational models for hemodynamic assessment of the great arteries. A deep learning technique for 3D segmentation of the aorta and the pulmonary arteries was developed and shown to enable accurate calculation of clinically relevant biomarkers in a population of 10 patients with CHD
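
    As a small, hedged illustration related to the 3D segmentation work described above (not the thesis code, and the evaluation metrics actually used are not stated in the abstract), the Dice coefficient below is one common way a predicted vessel mask is scored against a reference; the volumes are placeholders.

```python
# Dice similarity coefficient between two binary 3D segmentation masks,
# a standard overlap metric for vessel segmentations (illustrative sketch).
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Placeholder volumes: two overlapping boxes in a 32^3 grid.
a = np.zeros((32, 32, 32), dtype=bool); a[8:20, 8:20, 8:20] = True
b = np.zeros((32, 32, 32), dtype=bool); b[10:22, 8:20, 8:20] = True
print(f"Dice = {dice(a, b):.3f}")
```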

    Improved Modeling and Image Generation for Fluorescence Molecular Tomography (FMT) and Positron Emission Tomography (PET)

    In this thesis, we aim to improve quantitative medical imaging with advanced image generation algorithms. We focus on two specific imaging modalities: fluorescence molecular tomography (FMT) and positron emission tomography (PET). For FMT, we present a novel photon propagation model for its forward model, and in addition, we propose and investigate a reconstruction algorithm for its inverse problem. In the first part, we develop a novel Neumann-series-based radiative transfer equation (RTE) that incorporates reflection boundary conditions in the model. In addition, we propose a novel reconstruction technique for diffuse optical imaging that incorporates this Neumann-series-based RTE as the forward model. The proposed model is assessed using a simulated 3D diffuse optical imaging setup, and the results demonstrate the importance of considering photon reflection at boundaries when modeling photon propagation. In the second part, we propose a statistical reconstruction algorithm for FMT. The algorithm is based on sparsity-initialized maximum-likelihood expectation maximization (MLEM), taking into account the Poisson nature of the data in FMT and the sparse nature of the images. The proposed method is compared with a pure sparse reconstruction method as well as a uniform-initialized MLEM reconstruction method. Results indicate that the proposed method is more robust to noise and shows improved qualitative and quantitative performance. For PET, we present an MRI-guided partial volume correction algorithm for brain imaging, aiming to recover the qualitative and quantitative losses due to the limited resolution of the PET system while keeping image noise at a low level. The proposed method is based on an iterative deconvolution model with regularization using parallel level sets. A non-smooth optimization algorithm is developed so that the proposed method can be feasibly applied to 3D images and avoids the additional blurring caused by a conventional smooth optimization process. We evaluate the proposed method using both simulated data and in vivo human data collected from the Baltimore Longitudinal Study of Aging (BLSA). Our proposed method is shown to generate images with reduced noise and improved structural detail, as well as an increased number of statistically significant voxels in the study of aging. These results demonstrate that our method has promise to provide superior performance in clinical imaging scenarios
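
    Because the FMT reconstruction described above builds on MLEM for Poisson data, here is a minimal, hedged MLEM sketch for a generic linear model y ~ Poisson(A x); the system matrix, data, and uniform initialization are random placeholders, and the sparsity-based initialization proposed in the thesis is not reproduced.

```python
# Minimal MLEM sketch for a Poisson inverse problem y ~ Poisson(A x).
# A, y, and the uniform initialization are placeholders; the thesis'
# sparsity-initialized variant is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_det = 64, 128
A = rng.random((n_det, n_pix))          # placeholder system (forward) matrix
x_true = rng.random(n_pix) * 5.0
y = rng.poisson(A @ x_true)             # simulated Poisson measurements

sens = A.sum(axis=0)                    # sensitivity image A^T 1
x = np.ones(n_pix)                      # uniform initialization
for _ in range(100):
    proj = A @ x
    ratio = y / np.maximum(proj, 1e-12) # guard against division by zero
    x *= (A.T @ ratio) / sens           # multiplicative MLEM update

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```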

    Visual attention models and sparse representations for morphometrical image analysis

    Medical diagnosis, treatment, follow-up and research activities are nowadays strongly supported by different types of diagnostic images, whose main goal is to provide a useful exchange of medical knowledge. This multimodal information needs to be processed in order to extract information exploitable within the context of a particular medical task. Despite the relevance of these complementary sources of medical knowledge, medical images are rarely further processed in actual clinical practice, so specialists take decisions based only on the raw data. A new trend in the development of medical image processing and analysis tools follows the idea of biologically inspired methods, which resemble the performance of the human visual system. Visual attention models and sparse representations are examples of this tendency. Based on this, the aim of this thesis was the development of a set of computational methods for automatic morphometrical analysis, combining the relevant-region extraction power of visual attention models with the ability of sparse representations to incorporate a priori information. The combination of these biologically inspired tools with common machine learning techniques allowed the identification of visual patterns relevant for pathology discrimination, improving the accuracy and interpretability of morphometric measures and comparisons. After extensive validation with different image data sets, the computational methods proposed in this thesis appear to be promising tools for the definition of anatomical biomarkers, based on visual pattern analysis, and suitable for patient diagnosis, prognosis and follow-up.
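
    To make the "sparse representation" ingredient of this abstract concrete, the sketch below encodes a signal as a sparse combination of dictionary atoms using orthogonal matching pursuit from scikit-learn; the dictionary and signal are synthetic placeholders, not the thesis pipeline.

```python
# Sparse coding sketch: represent a signal as a sparse combination of
# dictionary atoms using Orthogonal Matching Pursuit (scikit-learn).
# Dictionary and signal are synthetic placeholders.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_features, n_atoms, n_active = 100, 300, 5
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms

code_true = np.zeros(n_atoms)
code_true[rng.choice(n_atoms, n_active, replace=False)] = rng.standard_normal(n_active)
signal = D @ code_true

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_active, fit_intercept=False)
omp.fit(D, signal)
print("recovered nonzeros:", np.flatnonzero(omp.coef_))
print("true nonzeros:     ", np.flatnonzero(code_true))
```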

    Compendio de métodos para caracterizar la geometría de los tejidos cerebrales a partir de imágenes de resonancia magnética por difusión del agua.

    221 p. FIDMAG Hermanas Hospitalarias Research Foundation; CIBERSAM: Centro de Investigación Biomédica en Re

    Learning, Inference, and Unmixing of Weak, Structured Signals in Noise

    In this thesis, we study two methods that can be used to learn, infer, and unmix weak, structured signals in noise: the Dynamic Mode Decomposition algorithm and the sparse Principal Component Analysis problem. Both problems take as input samples of a multivariate signal that is corrupted by noise, and produce a set of structured signals. We present performance guarantees for each algorithm and validate our findings with numerical simulations. First, we study the Dynamic Mode Decomposition (DMD) algorithm. We demonstrate that DMD can be used to solve the source separation problem. That is, we apply DMD to a data matrix whose rows are linearly independent, additive mixtures of latent time series. We show that when the latent time series are uncorrelated at a lag of one time-step, the recovered dynamic modes approximate the columns of the mixing matrix. That is, DMD unmixes linearly mixed sources that have a particular correlation structure. We next broaden our analysis beyond the noise-free, fully observed data setting. We study the DMD algorithm with a truncated-SVD denoising step, and present recovery guarantees for both the noisy-data and missing-data settings. We also present some preliminary characterizations of DMD performed directly on noisy data. We end with some complementary perspectives on DMD, including an optimization-based formulation. Second, we study the sparse Principal Component Analysis (PCA) problem. We demonstrate that the sparse inference problem can be viewed in a variable selection framework and analyze the performance of various decision statistics. A major contribution of this work is the introduction of False Discovery Rate (FDR) control for the principal component estimation problem, made possible by the sparse structure. We derive lower bounds on the size of detectable coordinates of the principal component vectors, and utilize these lower bounds to derive lower bounds on the worst-case risk.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies.
    https://deepblue.lib.umich.edu/bitstream/2027.42/155061/1/prasadan_1.pd
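
    For reference, a minimal exact-DMD sketch with a truncated-SVD step, in the spirit of the denoising variant discussed above; the toy data (two mixed oscillations), the mixing matrix, and the truncation rank are assumptions and do not reproduce the thesis analysis.

```python
# Minimal exact DMD with a truncated-SVD step (illustrative sketch).
# Toy data: two mixed oscillatory modes plus noise; rank r is an assumption.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
sources = np.vstack([np.sin(2 * np.pi * 0.5 * t), np.cos(2 * np.pi * 1.3 * t)])
M = rng.standard_normal((10, 2))               # mixing matrix
X = M @ sources + 0.01 * rng.standard_normal((10, t.size))

X1, X2 = X[:, :-1], X[:, 1:]                   # time-shifted snapshot matrices
r = 4                                          # truncation rank (assumed)
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
U, s, Vh = U[:, :r], s[:r], Vh[:r, :]

A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)   # reduced operator
eigvals, W = np.linalg.eig(A_tilde)
modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W              # exact DMD modes

print("continuous-time frequencies (Hz):",
      np.sort(np.abs(np.imag(np.log(eigvals) / (t[1] - t[0])) / (2 * np.pi))))
```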

    Unsupervised deep learning of human brain diffusion magnetic resonance imaging tractography data

    Diffusion magnetic resonance imaging is a non-invasive technique providing insights into the organizational microstructure of biological tissues. The computational methods that exploit the orientational preference of diffusion in restricted structures to reveal the brain's white matter axonal pathways are called tractography. In recent years, a variety of tractography methods have been successfully used to uncover the brain's white matter architecture. Yet, these reconstruction techniques suffer from a number of shortcomings derived from fundamental ambiguities inherent to the orientation information. This has dramatic consequences, since current tractography-based white matter connectivity maps are dominated by false positive connections. Thus, the large proportion of invalid pathways recovered remains one of the main challenges tractography must solve to obtain a reliable anatomical description of the white matter. Innovative methodological approaches are required to help solve these questions. Recent advances in computational power and data availability have made it possible to successfully apply modern machine learning approaches to a variety of problems, including computer vision and image analysis tasks. These methods model and learn the underlying patterns in the data, and allow accurate predictions to be made on new data. Similarly, they may yield compact representations of the intrinsic features of the data of interest. Modern data-driven approaches, grouped under the family of deep learning methods, are being adopted to solve medical imaging data analysis tasks, including tractography. In this context, the proposed methods are less dependent on the constraints imposed by current tractography approaches. Hence, deep learning-inspired methods are suited to the required paradigm shift, may open new modeling possibilities, and thus improve the state of the art in tractography. In this thesis, a new paradigm based on representation learning techniques is proposed to generate and analyze tractography data. By harnessing autoencoder architectures, this work explores their ability to find an optimal code to represent the features of white matter fiber pathways. The contributions exploit such representations for a variety of tractography-related tasks, including efficient (i) filtering and (ii) clustering of results generated by other methods, as well as (iii) the white matter pathway reconstruction itself using a generative method. The methods issued from this thesis have been named (i) FINTA (Filtering in Tractography using Autoencoders), (ii) CINTA (Clustering in Tractography using Autoencoders), and (iii) GESTA (Generative Sampling in Bundle Tractography using Autoencoders), respectively. The proposed methods' performance is assessed against current state-of-the-art methods on synthetic data and on healthy adult human brain in vivo data. Results show that (i) the introduced filtering method offers superior sensitivity and specificity compared with other state-of-the-art methods; (ii) the clustering method groups streamlines into anatomically coherent bundles with a high degree of consistency; and (iii) the generative streamline sampling technique successfully improves white matter coverage in hard-to-track bundles. In summary, this thesis unlocks the potential of deep autoencoder-based models for white matter data analysis, and paves the way towards delivering more reliable tractography data
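
    As a hedged, minimal sketch of the representation-learning idea behind this work (not the architectures of FINTA, CINTA, or GESTA), the PyTorch autoencoder below embeds streamlines resampled to a fixed number of points into a low-dimensional latent code; all shapes, hyperparameters, and the placeholder batch are assumptions.

```python
# Minimal autoencoder sketch for streamline data (illustrative only; not the
# FINTA/CINTA/GESTA architecture). Streamlines are assumed resampled to 32
# 3D points and flattened; latent size and hyperparameters are assumptions.
import torch
from torch import nn

n_points, latent_dim = 32, 16
in_dim = n_points * 3

class StreamlineAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = StreamlineAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
streamlines = torch.randn(256, in_dim)          # placeholder batch of streamlines

for _ in range(200):                            # toy training loop
    recon, _ = model(streamlines)
    loss = nn.functional.mse_loss(recon, streamlines)
    opt.zero_grad(); loss.backward(); opt.step()

# The latent codes can then be used for filtering or clustering of streamlines.
codes = model.encoder(streamlines).detach()
print(codes.shape)   # (256, 16)
```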

    Data driven regularization models of non-linear ill-posed inverse problems in imaging

    Imaging technologies are widely used in application fields such as the natural sciences, engineering, medicine, and the life sciences. A broad class of imaging problems reduces to solving ill-posed inverse problems (IPs). Traditional strategies for solving these ill-posed IPs rely on variational regularization methods, which are based on the minimization of suitable energies and make use of knowledge about the image formation model (forward operator) and prior knowledge of the solution, but lack a way to incorporate knowledge directly from data. On the other hand, more recent learned approaches can easily learn the intricate statistics of images from a large set of data, but do not have a systematic method for incorporating prior knowledge about the image formation model. The main purpose of this thesis is to discuss data-driven image reconstruction methods that combine the benefits of these two different reconstruction strategies for the solution of highly nonlinear ill-posed inverse problems. Mathematical formulations and numerical approaches for imaging IPs, including linear as well as strongly nonlinear problems, are described. More specifically, we address the Electrical Impedance Tomography (EIT) reconstruction problem by unrolling the regularized Gauss-Newton method and integrating the regularization learned by a data-adaptive neural network. Furthermore, we investigate the solution of nonlinear ill-posed IPs by introducing a deep-PnP framework that integrates a graph convolutional denoiser into the proximal Gauss-Newton method, with a practical application to EIT, a recently introduced and promising imaging technique. Efficient algorithms are then applied to the solution of the limited-electrodes problem in EIT, combining compressive sensing techniques and deep learning strategies. Finally, a transformer-based neural network architecture is adapted to restore the noisy solution of the computed tomography problem recovered using the filtered back-projection method
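
    To make the Gauss-Newton backbone referenced above concrete, here is a hedged sketch of a Tikhonov-regularized Gauss-Newton iteration for a generic nonlinear forward operator; the toy operator, its Jacobian, and the regularization weight are assumptions, and the learned regularizers developed in the thesis are not reproduced.

```python
# Tikhonov-regularized Gauss-Newton iteration for a generic nonlinear
# inverse problem y = F(x) + noise (illustrative sketch; the learned
# regularizers discussed in the thesis are not reproduced here).
import numpy as np

rng = np.random.default_rng(0)
n = 20

def forward(x):                      # toy nonlinear forward operator (assumption)
    return np.exp(0.5 * x) + 0.1 * x ** 2

def jacobian(x):                     # its analytic Jacobian (diagonal here)
    return np.diag(0.5 * np.exp(0.5 * x) + 0.2 * x)

x_true = rng.uniform(0.0, 1.0, n)
y = forward(x_true) + 0.001 * rng.standard_normal(n)

lam = 1e-3                           # Tikhonov weight (assumption)
x = np.zeros(n)
for _ in range(20):
    J = jacobian(x)
    residual = y - forward(x)
    # Gauss-Newton step for 0.5*||F(x) - y||^2 + 0.5*lam*||x||^2:
    # solve (J^T J + lam I) dx = J^T r - lam x
    dx = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ residual - lam * x)
    x = x + dx

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```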

    Applications

    Volume 3 describes how resource-aware machine learning methods and techniques are used to successfully solve real-world problems. The book provides numerous specific application examples: in health and medicine for risk modelling, diagnosis, and treatment selection for diseases; in electronics, steel production and milling for quality control during manufacturing processes; in traffic and logistics for smart cities; and for mobile communications