    Model-based reconstruction of accelerated quantitative magnetic resonance imaging (MRI)

    Quantitative MRI refers to the determination of quantitative parameters (T1, T2, diffusion, perfusion, etc.) in magnetic resonance imaging (MRI). The 'parameter maps' are estimated from a set of acquired MR images using a parameter model, i.e. a set of mathematical equations that describes the MR images as a function of the parameter(s). A precise and accurate high-resolution estimation of the parameters is needed in order to detect small changes and/or to visualize small structures. Particularly in clinical diagnostics, the method provides important information about tissue structures and respective pathologic alterations. Unfortunately, it also requires comparatively long measurement times, which preclude widespread practical application. To overcome such limitations, approaches like Parallel Imaging (PI) and Compressed Sensing (CS), along with the model-based reconstruction concept, have been proposed. These methods allow for the estimation of quantitative maps from only a fraction of the usually required data. The present work deals with model-based reconstruction methods that are applicable to the most widely available Cartesian (rectilinear) acquisition scheme.

    The initial implementation accelerated T2* mapping using maximum likelihood estimation (MLE) and parallel imaging. The method was tested on a multi-echo gradient-echo (MEGE) T2* mapping experiment in a phantom and a human brain with retrospective undersampling. Since T2* is very sensitive to phase perturbations caused by magnetic field inhomogeneity, further work was done to address this: the importance of coherent phase information in improving the accuracy of the accelerated T2* fitting was investigated. Using alternating minimization, the method extends the MLE approach to complex exponential model fitting, which avoids the loss of phase information when recovering T2* relaxation times. This implementation was tested on prospective (real-time) undersampling in addition to retrospective undersampling. Compared with fully sampled reference scans, the use of phase information reduced the error of the accelerated T2* maps by up to 20% relative to the baseline magnitude-only method. The total scan time for the four-fold accelerated 3D T2* mapping was 7 minutes, which is clinically acceptable.

    The second main part of this thesis focuses on the development of a model-based super-resolution framework for T2 mapping. 2D multi-echo spin-echo (MESE) acquisitions suffer from low spatial resolution in the slice dimension. To overcome this limitation while keeping acceptable scan times, we combined a classical super-resolution method with an iterative model-based reconstruction to reconstruct T2 maps from highly undersampled MESE data. Based on an optimal protocol determined from simulations, we were able to reconstruct 1 mm³ isotropic T2 maps of both phantom and healthy volunteer data. T2 values obtained with the proposed method showed good agreement with fully sampled reference MESE results.

    In summary, this thesis has introduced new approaches to employing signal models in different applications, with the aim of either accelerating an acquisition or improving the accuracy of an existing method. These approaches may help to take the next step away from qualitative towards a fully quantitative MR imaging modality, facilitating precision medicine and personalized treatment.
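
    As a concrete illustration of the complex-exponential fitting idea described above, the following is a minimal single-voxel sketch, not the thesis implementation: the model form, solver choice, and all echo times and parameter values are illustrative assumptions. It fits amplitude, phase, off-resonance frequency, and T2* to complex multi-echo data by nonlinear least squares, which coincides with the maximum likelihood estimate under Gaussian noise.

    import numpy as np
    from scipy.optimize import least_squares

    def model(params, te):
        """Complex mono-exponential: A * exp(i*phi0 + i*2*pi*df*te - te/t2s)."""
        amp, phi0, df, t2s = params
        return amp * np.exp(1j * (phi0 + 2 * np.pi * df * te) - te / t2s)

    def residuals(params, te, signal):
        # Stack real and imaginary parts so the real-valued solver fits the
        # complex data directly, preserving the phase information that a
        # magnitude-only fit would discard.
        r = model(params, te) - signal
        return np.concatenate([r.real, r.imag])

    # Simulated single-voxel multi-echo gradient-echo data (echo times in s).
    te = np.linspace(0.002, 0.040, 12)
    truth = (1.0, 0.3, 25.0, 0.030)        # amp, phi0 [rad], df [Hz], T2* [s]
    rng = np.random.default_rng(0)
    signal = model(truth, te) + 0.01 * (rng.standard_normal(12) + 1j * rng.standard_normal(12))

    fit = least_squares(residuals, x0=(1.0, 0.0, 0.0, 0.020), args=(te, signal),
                        bounds=([0.0, -np.pi, -100.0, 1e-3], [10.0, np.pi, 100.0, 0.5]))
    print("estimated T2* = %.1f ms" % (fit.x[3] * 1e3))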

    Proceedings Virtual Imaging Trials in Medicine 2024

    This submission comprises the proceedings of the 1st Virtual Imaging Trials in Medicine (VITM) conference, organized by Duke University on April 22-24, 2024. The listed authors serve as the program directors for this conference. The VITM conference is a pioneering summit uniting experts from academia, industry, and government in the fields of medical imaging and therapy to explore the transformative potential of in silico virtual trials and digital twins in revolutionizing healthcare. The proceedings are organized by conference day: Monday, Tuesday, and Wednesday presentations, followed by the abstracts for the posters presented on Monday and Tuesday.

    Generative Models for Preprocessing of Hospital Brain Scans

    In this thesis, I present novel computational methods for processing routine clinical brain scans. Such scans were originally acquired for qualitative assessment by trained radiologists, and present a number of difficulties for computational models, such as those within common neuroimaging analysis software. The overarching objective of this work is to enable efficient and fully automated analysis of large neuroimaging datasets, of the type currently present in many hospitals worldwide. The methods presented are based on probabilistic, generative models of the observed imaging data, and therefore rely on informative priors and realistic forward models. The first part of the thesis presents a model for image quality improvement, whose key component is a novel prior for multimodal datasets. I demonstrate its effectiveness for super-resolving thick-sliced clinical MR scans and for denoising CT images and MR-based, multi-parametric mapping acquisitions. I then show how the same prior can be used for within-subject, intermodal image registration, for more robustly registering large numbers of clinical scans. The second part of the thesis focusses on improved, automatic segmentation and spatial normalisation of routine clinical brain scans. I propose two extensions to a widely used segmentation technique. First, a method for this model to handle missing data, which allows me to predict entirely missing modalities from one, or a few, MR contrasts. Second, a principled way of combining the strengths of probabilistic, generative models with the unprecedented discriminative capability of deep learning. By introducing a convolutional neural network as a Markov random field prior, I can model nonlinear class interactions and learn these using backpropagation. I show that this model is robust to sequence and scanner variability. Finally, I show examples of fitting a population-level, generative model to various neuroimaging data, which can model, e.g., CT scans with haemorrhagic lesions.
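
    To make the CNN-as-MRF-prior idea concrete, here is a minimal, hypothetical sketch; the layer sizes, class count, and the exact coupling to the likelihood term are illustrative assumptions, not the thesis code. A small convolutional network scores local class configurations, and its output is combined with voxel-wise log-likelihoods before renormalisation, so the prior's weights can be learned by backpropagation.

    import torch
    import torch.nn as nn

    class CNNPrior(nn.Module):
        """Small 3D CNN scoring local class configurations, playing the role
        of a learnable MRF-style prior over soft segmentations."""
        def __init__(self, n_classes, width=16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(n_classes, width, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv3d(width, n_classes, kernel_size=3, padding=1),
            )

        def forward(self, q):
            return self.net(q)  # unnormalised per-class prior scores

    n_classes = 4
    prior = CNNPrior(n_classes)
    log_lik = torch.randn(1, n_classes, 8, 8, 8)  # stand-in voxel-wise log-likelihoods
    q = torch.softmax(log_lik, dim=1)             # current soft segmentation

    # Combine likelihood and learned prior, then renormalise; gradients flow
    # through this step, so the prior can be trained by backpropagation.
    posterior = torch.softmax(log_lik + prior(q), dim=1)
    print(posterior.shape)  # torch.Size([1, 4, 8, 8, 8])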

    Estimation of Cerebral Physiology and Hemodynamics via Near-Infrared Spectroscopy

    Near-infrared spectroscopy (NIRS) is a non-invasive optical imaging technique that has rapidly been gaining popularity for the study of the brain. Near-infrared spectroscopy measures absorption of light, primarily due to hemoglobin, through an array of light sources and detectors that are coupled to the scalp. Measurements can generally be divided into measurements of baseline physiology (related to total absorption) and measurements of hemodynamic time-series data (related to relative absorption changes). Because light intensity drops off rapidly with depth, NIRS measurements are highly sensitive to extracerebral tissues. Attempts to recover baseline physiology measurements of the brain can be confounded by high sensitivity to the scalp and skull. Time-series measurements contain large contributions from systemic physiological signals, including cardiac, respiratory, and blood pressure waves. Furthermore, measurements over time inevitably introduce artifacts due to subject motion. The aim of this thesis was to develop improved analysis methods in the context of these NIRS-specific confounding factors. The thesis consists of four articles that address specific issues in NIRS data analysis: (i) assessment of common data analysis procedures used to estimate oxygen saturation and hemoglobin content that assume a semi-infinite, homogeneous medium, (ii) testing the feasibility of improving oxygen saturation and hemoglobin measurements using multi-layered models, (iii) development of methods to estimate the general linear model for functional brain imaging that are robust to systemic physiology signals and motion artifacts, and (iv) the extension of (iii) to an adaptive method that is suitable for real-time analysis. Overall, this thesis helps to validate and advance analysis methods for NIRS.
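
    As a toy illustration of item (iii), the sketch below fits a general linear model to a synthetic NIRS time series with nuisance regressors for systemic physiology. All frequencies, names, and values are assumptions for illustration; the thesis develops estimators that are robust to physiology and motion, which the plain ordinary least squares used here for simplicity is not.

    import numpy as np

    fs, n = 4.0, 1200                          # sampling rate [Hz], samples
    t = np.arange(n) / fs

    # Design matrix: task boxcar, linear drift, and cardiac/respiratory
    # nuisance regressors (assumed frequencies, for illustration only).
    task = ((t % 60.0) < 30.0).astype(float)   # 30 s on / 30 s off paradigm
    drift = t / t[-1]
    cardiac = np.sin(2 * np.pi * 1.1 * t)      # ~66 beats per minute
    resp = np.sin(2 * np.pi * 0.25 * t)        # ~15 breaths per minute
    X = np.column_stack([np.ones(n), task, drift, cardiac, resp])

    # Synthetic haemoglobin signal: true response plus physiology plus noise.
    rng = np.random.default_rng(0)
    y = 0.8 * task + 0.5 * cardiac + 0.3 * resp + 0.1 * rng.standard_normal(n)

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("estimated task response: %.2f (true value 0.8)" % beta[1])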

    Preclinical MRI of the Kidney: Methods and Protocols

    This Open Access volume provides readers with a protocol collection and wide-ranging recommendations for preclinical renal MRI used in translational research. The chapters in this book are interdisciplinary in nature and bridge the gaps between physics, physiology, and medicine. They are designed to enhance training in renal MRI sciences and improve the reproducibility of renal imaging research. Chapters provide guidance for exploring, using and developing small animal renal MRI in your laboratory as a unique tool for advanced in vivo phenotyping, diagnostic imaging, and research into potential new therapies. Written in the highly successful Methods in Molecular Biology series format, chapters include introductions to their respective topics, lists of the necessary materials and reagents, step-by-step, readily reproducible laboratory protocols, and tips on troubleshooting and avoiding known pitfalls. Cutting-edge and thorough, Preclinical MRI of the Kidney: Methods and Protocols is a valuable resource and will be of importance to anyone interested in the preclinical aspect of renal and cardiorenal diseases in the fields of physiology, nephrology, radiology, and cardiology. This publication is based upon work from COST Action PARENCHIMA, supported by European Cooperation in Science and Technology (COST). COST (www.cost.eu) is a funding agency for research and innovation networks. COST Actions help connect research initiatives across Europe and enable scientists to grow their ideas by sharing them with their peers. This boosts their research, career and innovation. PARENCHIMA (renalmri.org) is a community-driven Action in the COST program of the European Union, which unites more than 200 experts in renal MRI from 30 countries with the aim of improving the reproducibility and standardization of renal MRI biomarkers.

    Use of Multicomponent Non-Rigid Registration to Improve Alignment of Serial Oncological PET/CT Studies

    Non-rigid registration of serial head and neck FDG PET/CT images from a combined scanner can be problematic. Registration techniques typically rely on similarity measures calculated from voxel intensity values; CT-CT registration is superior to PET-PET registration due to the higher quality of anatomical information present in this modality. However, when metal artefacts from dental fillings are present in a pair of CT images, a non-rigid registration will incorrectly attempt to register the two artefacts together, since they are strong features compared to the features that represent the actual anatomy. This leads to localised registration errors in the deformation field in the vicinity of the artefacts. Our objective was to develop a registration technique which overcomes these limitations by using combined information from both modalities. To study the effect of artefacts on registration, metal artefacts were simulated, with one CT image rotated by a small angle in the sagittal plane. Image pairs containing these simulated artefacts were then registered to evaluate the resulting errors. To improve the registration in the vicinity of the artefacts, intensity information from the PET images was incorporated using several techniques. A well-established B-splines based non-rigid registration code was reworked to allow multicomponent registration. A similarity measure with four possible weighted components, relating to the ways in which the CT and PET information can be combined to drive the registration of a pair of these dual-valued images, was employed. Several registration methods based on this multicomponent similarity measure were implemented with the goal of effectively registering the images containing the simulated artefacts. A method was also developed to swap control point displacements from the PET-derived transformation in the vicinity of the artefact. This method yielded the best result on the simulated images and was evaluated on images where actual dental artefacts were present.
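
    The four-component similarity idea can be sketched as follows. This is a toy illustration with assumed weights and with normalised cross-correlation used for every component; the component metrics and weights actually used in the study may differ.

    import numpy as np

    def ncc(a, b):
        """Normalised cross-correlation of two images."""
        a = a.ravel() - a.mean()
        b = b.ravel() - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def multicomponent_similarity(ct_fix, pet_fix, ct_mov, pet_mov,
                                  weights=(0.4, 0.4, 0.1, 0.1)):
        # Four weighted components: CT-CT, PET-PET, and the two cross terms.
        components = [ncc(ct_fix, ct_mov), ncc(pet_fix, pet_mov),
                      ncc(ct_fix, pet_mov), ncc(pet_fix, ct_mov)]
        return sum(w * c for w, c in zip(weights, components))

    # Usage on random stand-in images:
    rng = np.random.default_rng(0)
    ct_fix, pet_fix = rng.random((64, 64)), rng.random((64, 64))
    print(multicomponent_similarity(ct_fix, pet_fix, ct_fix, pet_fix))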

    Accurate 3D reconstruction and navigation for high-precision minimally invasive interventions

    Current lateral skull base surgery is largely invasive, since it requires wide exposure and direct visualization of anatomical landmarks to avoid damaging critical structures. A multi-port approach aiming to reduce such invasiveness has recently been investigated. In this approach, three canals are drilled from the skull surface to the surgical region of interest: the first canal for the instrument, the second for the endoscope, and the third for material removal or an additional instrument. The transition to minimally invasive approaches in lateral skull base surgery requires sub-millimeter accuracy and high outcome predictability, which results in high requirements for the image acquisition as well as for the navigation. Computed tomography (CT) is a non-invasive imaging technique allowing the visualization of internal patient organs. Planning optimal drill channels based on patient-specific models requires highly accurate three-dimensional (3D) CT images. This thesis focuses on the reconstruction of high-quality CT volumes. To this end, two conventional imaging systems are investigated: spiral CT scanners and C-arm cone-beam CT (CBCT) systems. Spiral CT scanners acquire volumes with typically anisotropic resolution, i.e. the voxel spacing in the slice-selection direction is larger than the in-plane spacing. A new super-resolution reconstruction approach is proposed to recover images with high isotropic resolution from two orthogonal low-resolution CT volumes. C-arm CBCT systems offer CT-like 3D imaging capabilities while being appropriate for interventional suites. A main drawback of these systems is the commonly encountered artifacts due to several limitations of the imaging system, such as mechanical inaccuracies. This thesis contributes new methods to enhance the CBCT reconstruction quality by addressing two main reconstruction artifacts: the misalignment artifacts caused by mechanical inaccuracies, and the metal artifacts caused by the presence of metal objects in the scanned region. CBCT scanners are appropriate for intra-operative image-guided navigation. For instance, they can be used to control the drill process based on intra-operatively acquired 2D fluoroscopic images. For successful navigation, an accurate estimate of the C-arm pose relative to the patient anatomy and the associated surgical plan is required. A new algorithm has been developed to fulfill this task with high precision. The performance of the introduced methods is demonstrated on simulated and real data.
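
    To illustrate the flavour of super-resolution from two orthogonal anisotropic volumes, here is a generic sketch under an assumed box-blur acquisition model; the degradation model and the Landweber-style solver are illustrative assumptions, not the thesis algorithm. Each acquisition is modelled as a blur along one axis, and gradient descent enforces consistency with both observations; the box blur is symmetric, so it serves as its own (approximate) adjoint.

    import numpy as np
    from scipy.ndimage import uniform_filter1d

    def blur(vol, axis, size=4):
        """Assumed acquisition model: box blur along one (slice) axis."""
        return uniform_filter1d(vol, size=size, axis=axis)

    def super_resolve(obs_z, obs_x, n_iter=50, step=0.25):
        # obs_z is blurred along axis 0, obs_x along axis 2. Gradient descent
        # on the summed data-fidelity terms ||blur(est) - obs||^2.
        est = 0.5 * (obs_z + obs_x)
        for _ in range(n_iter):
            grad = blur(blur(est, 0) - obs_z, 0) + blur(blur(est, 2) - obs_x, 2)
            est = est - step * grad
        return est

    # Usage on a synthetic high-resolution phantom:
    truth = np.random.default_rng(0).random((32, 32, 32))
    recon = super_resolve(blur(truth, 0), blur(truth, 2))
    print("mean abs error:", float(np.abs(recon - truth).mean()))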

    Graph-based deformable registration: slice-to-volume matching and contextual methods

    Image registration methods, which aim at aligning two or more images into one coordinate system, are among the oldest and most widely used algorithms in computer vision. Registration methods serve to establish correspondence relationships among images (captured at different times, from different sensors or from different viewpoints) that are not obvious to the human eye. A particular type of registration algorithm, known as graph-based deformable registration, has become popular during the last decade given its robustness, scalability, efficiency and theoretical simplicity. The range of problems to which it can be adapted is particularly broad. In this thesis, we propose several extensions to the graph-based deformable registration theory, by exploring new application scenarios and developing novel methodological contributions.

    Our first contribution is an extension of the graph-based deformable registration framework, dealing with the challenging slice-to-volume registration problem. Slice-to-volume registration aims at registering a 2D image within a 3D volume, i.e. we seek a mapping function which optimally maps a tomographic slice to the 3D coordinate space of a given volume. We introduce a scalable, modular and flexible formulation accommodating low-rank and high-order terms, which simultaneously selects the plane and estimates the in-plane deformation through a single-shot optimization approach. The proposed framework is instantiated into different variants based on different graph topologies, label space definitions and energy constructions. Experiments on simulated and real data in the context of ultrasound and magnetic resonance registration (where both framework instantiations as well as different optimization strategies are considered) demonstrate the potential of our method.

    The other two contributions included in this thesis are related to how semantic information can be encompassed within the registration process (independently of the dimensionality of the images). Currently, most methods rely on a single metric function explaining the similarity between the source and target images. We argue that incorporating semantic information to guide the registration process will further improve the accuracy of the results, particularly in the presence of semantic labels making the registration a domain-specific problem.

    We consider a first scenario where we are given a classifier inferring probability maps for different anatomical structures in the input images. Our method seeks to simultaneously register and segment a set of input images, incorporating this information within the energy formulation. The main idea is to use these estimated maps of semantic labels (provided by an arbitrary classifier) as a surrogate for unlabeled data, and combine them with population deformable registration to improve both alignment and segmentation.

    Our last contribution also aims at incorporating semantic information into the registration process, but in a different scenario. In this case, instead of supposing that we have pre-trained arbitrary classifiers at our disposal, we are given a set of accurate ground truth annotations for a variety of anatomical structures. We present a methodological contribution that aims at learning context-specific matching criteria as an aggregation of standard similarity measures from the aforementioned annotated data, using an adapted version of the latent structured support vector machine (LSSVM) framework.
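
    As background, the discrete graph-based formulation that these contributions build on can be written in the generic MRF form below. This is an illustrative form only; the thesis instantiates several variants of the graph topology, label space, and energy terms.

    % Discrete labeling energy over a graph (V, E): each node p carries a
    % label l_p encoding a candidate local displacement (and, for
    % slice-to-volume registration, the plane selection); g_p scores image
    % similarity under that label, and f_pq regularizes neighboring labels.
    E(\mathbf{l}) = \sum_{p \in V} g_p(l_p) + \lambda \sum_{(p,q) \in \mathcal{E}} f_{pq}(l_p, l_q)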

    Quantification of human spinal cord microstructure with MRI and application to patients with multiple sclerosis

    Degenerative pathologies of the spinal cord are still difficult to diagnose today, leaving patients in a state of constant suffering and constant doubt about their future. Magnetic resonance imaging (MRI) can gather quantitative information about the white matter microstructure by exploiting the phase and relaxation of the spins. Using a unique MRI system capable of magnetic field gradients of r@@mT/m, the "Connectom scanner", we showed that neuronal fiber (axon) density and diameter can be measured in the human spinal cord in vivo using diffusion MRI. Although very informative, this method provides only a partial description of the tissue, and no direct information about the myelin sheath that surrounds the axons is extracted. The myelin sheath improves the speed and frequency of the action potentials transmitted through the axons, and an alteration of myelin integrity leads to paralysis in diseases such as multiple sclerosis. Our collaborators at McGill University proposed to combine the diffusion technique with a quantitative myelin imaging technique in order to measure the g-ratio, the ratio of the inner to the outer diameter of the myelin sheath. In this thesis, I developed quantitative MRI techniques for the spinal cord microstructure, validated these methods using large field-of-view histology, and applied them to patients with multiple sclerosis in a clinical setting.
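
    For reference, the g-ratio mentioned above is defined per fiber as the inner (axon) diameter over the outer (axon plus myelin) diameter; in quantitative MRI it is commonly estimated in aggregate from the axon and myelin volume fractions. The aggregate expression below is the usual literature formulation, not necessarily the exact one used in this thesis.

    % Per-fiber definition: inner (axon) over outer (fiber) diameter.
    g = \frac{d_{\text{axon}}}{d_{\text{fiber}}},
    \qquad
    % Aggregate MRI estimate from axon (AVF) and myelin (MVF) volume fractions.
    g = \sqrt{\frac{\mathrm{AVF}}{\mathrm{AVF} + \mathrm{MVF}}}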