Development of Deep Learning Methods for Magnetic Resonance Phase Imaging of Neurological Disease
Magnetic resonance imaging (MRI) is a high-resolution, non-invasive medical imaging modality that is widely used for imaging the human brain. In recent years, susceptibility weighted imaging (SWI) and quantitative susceptibility mapping (QSM) have been proposed to exploit the MR phase signal, generating contrast from tissue magnetic susceptibility and even quantifying that property. Meanwhile, deep learning, especially deep convolutional neural networks (DCNNs), has achieved state-of-the-art performance in numerous computer vision tasks and has gained significant attention in medical imaging in recent years. This dissertation combines deep learning with these two MR phase imaging methods. To combine deep learning with SWI, we designed and trained a 3D deep residual network that distinguishes false-positive candidates from true cerebral microbleeds (CMBs) and built a high-performance automatic CMB detection pipeline. We further confirmed the generalizability of this deep learning-based pipeline on multiple datasets with different scan parameters and pathologies, and distilled lessons for the application and generalization of deep learning-based medical imaging methods. To combine deep learning with QSM, we developed a 3D U-Net-based network that learns to perform dipole inversion from gold-standard QSM acquired from multi-orientation data. The model was further improved with an adversarial training strategy and achieved significantly lower reconstruction error than traditional QSM algorithms. In addition, we evaluated various background field removal and dipole inversion algorithms on both brain tumor patients and healthy volunteers to study and compare their performance. The results can provide guidance for future applications of QSM in different scenarios.
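The dipole inversion that the QSM network learns to invert has a well-known k-space forward model: the field perturbation is the susceptibility map convolved with the unit dipole kernel D(k) = 1/3 - (k·B0)²/|k|². A minimal sketch of that forward model (an illustration of the physics, not the dissertation's code; function names and the unit-free field convention are assumptions here):

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0), b0_dir=(0, 0, 1)):
    """k-space dipole kernel D(k) = 1/3 - (k . B0)^2 / |k|^2."""
    ks = [np.fft.fftfreq(n, d=d) for n, d in zip(shape, voxel_size)]
    kx, ky, kz = np.meshgrid(*ks, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0  # avoid division by zero at the k-space origin
    kb = kx * b0_dir[0] + ky * b0_dir[1] + kz * b0_dir[2]
    D = 1.0 / 3.0 - kb**2 / k2
    D[0, 0, 0] = 0.0   # the mean field offset is unobservable
    return D

def forward_field(chi, voxel_size=(1.0, 1.0, 1.0)):
    """Simulate the (unitless) field perturbation from a susceptibility map."""
    D = dipole_kernel(chi.shape, voxel_size)
    return np.real(np.fft.ifftn(D * np.fft.fftn(chi)))
```

Because D(k) vanishes on the conical surface where the kernel is zero, inverting this model directly is ill-posed, which is exactly why learned or regularized dipole inversion is needed.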
Diffusion Kurtosis Imaging of the Neonatal Spinal Cord in Clinical Routine
Diffusion kurtosis imaging (DKI) has undisputed advantages over more classical diffusion magnetic resonance imaging (dMRI), as witnessed by the fast-increasing number of clinical applications and software packages widely adopted in brain imaging. However, in the neonatal setting, DKI is still largely underutilized, in particular in spinal cord (SC) imaging, because of its inherently demanding technological requirements. Owing to its sensitivity to non-Gaussian diffusion, DKI is particularly suitable for detecting the complex, subtle, fast microstructural changes occurring in this area at this early and critical stage of development, which are not identifiable with DTI alone. Given the multiplicity of congenital anomalies of the spinal canal, their crucial effect on later developmental outcomes, and the close interconnection between the SC region and the brain above, applying such a method to the neonatal cohort is of utmost importance. This study (i) discusses current methodological challenges associated with the application of advanced dMRI methods, like DKI, in early infancy, (ii) illustrates the first semi-automated pipeline, built on the Spinal Cord Toolbox, for handling DKI data of the neonatal SC, from acquisition settings to the estimation of diffusion measures, through careful adjustment of processing algorithms originally customized for the adult SC, and (iii) presents the results of its application in a pilot clinical case study. With the proposed pipeline, we preliminarily show that DKI is more sensitive than DTI-derived measures to alterations caused by brain white matter injuries in the underlying cervical SC.
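The kurtosis estimation at the heart of DKI can be sketched, for a single gradient direction and noiseless data, as a linear least-squares fit of the standard DKI signal model ln(S/S0) = -b·D + (1/6)·b²·D²·K. This is an illustrative sketch (the function name and b-value units in s/mm² are assumptions, not the pipeline's actual fitting code):

```python
import numpy as np

def fit_dki_1d(bvals, signals, s0):
    """Least-squares fit of the single-direction DKI model
    ln(S/S0) = -b*D + (1/6)*b^2*D^2*K."""
    y = np.log(signals / s0)
    # design matrix for c1*b + c2*b^2, with c1 = -D and c2 = (1/6)*D^2*K
    A = np.stack([bvals, bvals**2], axis=1)
    (c1, c2), *_ = np.linalg.lstsq(A, y, rcond=None)
    D = -c1                 # apparent diffusion coefficient
    K = 6.0 * c2 / D**2     # apparent kurtosis
    return D, K
```

Note that at least three distinct nonzero b-values are needed in practice to separate the quadratic kurtosis term from the linear diffusion term; this is one reason DKI acquisitions are more demanding than DTI.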
Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates
The study of cerebral anatomy in developing neonates is of great importance for
the understanding of brain development during the early period of life. This
dissertation therefore focuses on three challenges in the modelling of cerebral
anatomy in neonates during brain development. The methods that have been
developed all use Magnetic Resonance Images (MRI) as source data.
To facilitate the study of vascular development in the neonatal period, a set of image analysis algorithms has been developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets.
To facilitate the study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not handled effectively by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed, which causes voxels containing tissue mixtures to be labelled incorrectly by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the neonatal cortex. The performance of the method is investigated through a detailed landmark study.
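The EM idea with a partial-volume correction can be illustrated with a toy 1-D intensity mixture in which a third class, tied to the midpoint of the two pure-tissue classes, absorbs mixed voxels. This is a deliberately simplified sketch under assumed Gaussian classes, not the dissertation's algorithm:

```python
import numpy as np

def em_gmm_pv(x, n_iter=50):
    """Toy 1-D EM segmentation with two pure-tissue classes and a
    partial-volume (PV) class tied to the midpoint of their means."""
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])  # pure-class init
    sigma = np.array([x.std(), x.std()])
    w = np.array([0.45, 0.45, 0.10])  # class priors: tissue 1, tissue 2, PV
    for _ in range(n_iter):
        means = np.array([mu[0], mu[1], mu.mean()])      # PV mean is tied
        sds = np.array([sigma[0], sigma[1], sigma.mean()])
        # E-step: posterior responsibility of each class for each voxel
        p = w * np.exp(-0.5 * ((x[:, None] - means) / sds) ** 2) / sds
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: update only the pure-tissue parameters; PV stays tied
        for k in range(2):
            nk = r[:, k].sum()
            mu[k] = (r[:, k] * x).sum() / nk
            sigma[k] = np.sqrt((r[:, k] * (x - mu[k]) ** 2).sum() / nk)
        w = r.mean(axis=0)
    return r.argmax(axis=1), mu
```

Tying the PV class to the pure-class parameters keeps mixed voxels from dragging the grey- and white-matter estimates toward each other, which is the failure mode the explicit correction addresses.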
To facilitate the study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces has been developed. The method first inflates the extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
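A free-form deformation models a smooth displacement field as a sum of cubic B-spline basis functions weighted by control-point displacements. A minimal 1-D sketch of that interpolation (the real method operates on 3-D surfaces; the function names here are illustrative):

```python
import numpy as np

def bspline_basis(u):
    """Cubic B-spline basis functions B0..B3 evaluated at u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6.0,
        (3 * u**3 - 6 * u**2 + 4) / 6.0,
        (-3 * u**3 + 3 * u**2 + 3 * u + 1) / 6.0,
        u**3 / 6.0,
    ])

def ffd_1d(x, phi, spacing):
    """Displacement at point x under a 1-D cubic B-spline FFD with
    control-point displacements phi on a grid with the given spacing."""
    i = int(np.floor(x / spacing))
    u = x / spacing - i
    B = bspline_basis(u)
    # each point is influenced by its 4 nearest control points
    idx = np.clip(np.arange(i - 1, i + 3), 0, len(phi) - 1)
    return float(B @ phi[idx])
```

Because the basis functions sum to one (partition of unity), a uniform control-point displacement translates every point by exactly that amount, and local control-point changes deform only a neighbourhood, which is what makes FFDs suitable for removing residual local misalignment.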
Unsupervised Cross-Modality Domain Adaptation for Vestibular Schwannoma Segmentation and Koos Grade Prediction based on Semi-Supervised Contrastive Learning
Domain adaptation has been widely adopted to transfer styles across multiple vendors and centers, as well as to complete missing modalities. In this challenge, we propose an unsupervised domain adaptation framework for cross-modality vestibular schwannoma (VS) and cochlea segmentation and Koos grade prediction. We learn a shared representation from both ceT1 and hrT2 images and recover the other modality from the latent representation, and we also use the proxy tasks of VS segmentation and brain parcellation to enforce the consistency of image structures during domain adaptation. After generating the missing modalities, an nnU-Net model is used for VS and cochlea segmentation, while a semi-supervised contrastive learning pre-training approach is employed to improve model performance for Koos grade prediction. On the CrossMoDA validation-phase leaderboard, our method ranked 4th in Task 1 with a mean Dice score of 0.8394 and 2nd in Task 2 with a macro-averaged mean squared error of 0.3941. Our code is available at https://github.com/fiy2W/cmda2022.superpolymerization.
Icex: Advances in the automatic extraction and volume calculation of cranial cavities
The use of non-destructive approaches for digital acquisition (e.g., computerised tomography, CT) allows detailed qualitative and quantitative study of the internal structures of skeletal material. Here, we present a new R-based software tool, Icex, applicable to the study of the sizes and shapes of skeletal cavities and fossae in 3D digital images. Traditional methods of volume extraction involve the manual labelling (i.e., segmentation) of the areas of interest on each section of the image stack, which is time-consuming, error-prone and challenging to apply to complex cavities. Icex facilitates rapid quantification of such structures. We describe the tool and detail its application to the isolation and calculation of volumes of various cranial cavities. It is used here to automatically extract the orbital volumes, the paranasal sinuses, the nasal cavity and the upper oral volumes from 3D cranial surface meshes obtained by segmenting CT scans, based on the coordinates of 18 cranial anatomical points that define their limits. Icex includes an algorithm (Icv) for calculating volumes by defining the 3D convex hull of the extracted cavity. We demonstrate the use of Icex on an ontogenetic sample (0-19 years) of modern humans and on the fossil hominin crania Kabwe (Broken Hill) 1, Gibraltar (Forbes' Quarry) and Guattari 1. We also test the tool on three species of non-human primates. In the modern human subsample, Icex allowed us to perform a preliminary analysis of the absolute and relative expansion of cranial sinuses and pneumatisations during growth. The performance of Icex, applied to diverse crania, shows its potential for an extensive evaluation of the developmental and/or evolutionary significance of hollow cranial structures. Furthermore, being open source, Icex is a fully customisable tool, easily applicable to other taxa and skeletal regions.
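The core of the Icv step, computing a cavity volume as the volume of the 3D convex hull of its boundary points, can be sketched in a few lines. Icex itself is written in R; the following is a Python analogue of the same idea using SciPy's Qhull binding, shown only to illustrate the technique:

```python
import numpy as np
from scipy.spatial import ConvexHull

def cavity_volume(points):
    """Volume of the 3D convex hull enclosing a set of landmark/mesh points."""
    return ConvexHull(points).volume

# sanity check on the 8 corners of a 2 x 3 x 4 box: hull volume is exactly 24
corners = np.array(
    [[x, y, z] for x in (0, 2) for y in (0, 3) for z in (0, 4)], dtype=float
)
```

A convex hull is only an approximation for cavities with concave walls, which is why the boundary landmarks must be chosen to bracket the structure of interest.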
Brain Tumor Detection and Segmentation in Multisequence MRI
This work deals with brain tumor detection and segmentation in multisequence MR images with a particular focus on high- and low-grade gliomas.
Three methods are proposed for this purpose. The first method detects the presence of brain tumor structures in axial and coronal slices. It is based on multi-resolution symmetry analysis and was tested on T1, T2, T1C and FLAIR images. The second method extracts the whole brain tumor region, including the tumor core and edema, from FLAIR and T2 images, and is suitable for both 2D and 3D data. It also uses the symmetry analysis approach, followed by automatic determination of an intensity threshold from the most asymmetric parts. The third method is based on local structure prediction and is able to segment the whole tumor region as well as the tumor core and the active tumor. It takes advantage of the fact that most medical images feature a high similarity in the intensities of nearby pixels and a strong correlation of intensity profiles across different image modalities. One way of dealing with, and even exploiting, this correlation is the use of local image patches. In the same way, there is a high correlation between nearby labels in image annotations, a feature used in the "local structure prediction" of local label patches. A convolutional neural network is chosen as the learning algorithm, as it is known to be well suited to handling correlations between features. All three methods were evaluated on a public data set of 254 multisequence MR volumes, reaching results comparable to state-of-the-art methods in much shorter computing time (on the order of seconds on a CPU), providing the means, for example, to do online updates in interactive segmentation.
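The symmetry-then-threshold idea behind the second method can be sketched on a single 2D slice: compare the image with its left-right mirror, take the most asymmetric voxels as seeds, and derive an intensity threshold from them. This is a heavily simplified toy (it assumes the midsagittal plane coincides with the image midline, and the quantile-based seed selection is an invented stand-in for the paper's procedure):

```python
import numpy as np

def asymmetry_map(slice2d):
    """Left-right asymmetry, assuming the midsagittal plane is the
    image's vertical midline (a simplification)."""
    return np.abs(slice2d - slice2d[:, ::-1])

def tumor_mask(slice2d, asym_frac=0.02):
    """Toy symmetry-based extraction: take the most asymmetric voxels
    and derive an intensity threshold from them."""
    asym = asymmetry_map(slice2d)
    cutoff = np.quantile(asym, 1.0 - asym_frac)
    seeds = slice2d[asym >= cutoff]          # most asymmetric voxels
    thr = seeds.mean()                       # intensity threshold from seeds
    return slice2d >= thr
```

On a FLAIR-like slice where the tumor is hyperintense, thresholding at the mean seed intensity separates the bright lesion from its darker mirror counterpart, which is the intuition the real method refines.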
Cross-Modality Feature Learning for Three-Dimensional Brain Image Synthesis
Multi-modality medical imaging is increasingly used for the comprehensive assessment of complex diseases, both in diagnostic examinations and in medical research trials. Different imaging modalities provide complementary information about living tissue. However, multi-modal examinations are not always possible due to adverse factors such as patient discomfort, increased cost, prolonged scanning time and scanner unavailability. In addition, in large imaging studies, incomplete records are not uncommon owing to image artifacts, data corruption or data loss, which compromise the potential of multi-modal acquisitions. Moreover, no matter how good an imaging system is, its performance is ultimately limited by its physical components. Additional constraints, particularly for medical imaging systems, such as limited acquisition times, sophisticated and costly equipment, and patients with severe medical conditions, also cause image degradation. The acquisitions can therefore be considered degraded versions of the original high-quality images.
In this dissertation, we explore the problems of image super-resolution and cross-modality synthesis, synthesizing one Magnetic Resonance Imaging (MRI) modality from an image of another MRI modality of the same subject, using an image synthesis framework to reconstruct the missing modality data. We develop models and techniques that connect the domain of source modality data and the domain of target modality data, enabling transformation between elements of the two domains. In particular, we first introduce models that project both source and target modality data into a common multi-modality feature space in a supervised setting. This common space allows us to connect cross-modality features that are related to each other, and we can impose the learned association function to synthesize any target modality image. Moreover, we develop a weakly-supervised method that takes a few registered multi-modality image pairs as training data and generates the desired modality data without requiring a large collection of well-processed (e.g., skull-stripped and strictly registered) multi-modality brain images. Finally, we propose an approach that provides a generic way of learning a dual mapping between source and target domains while considering both visually high-fidelity synthesis and task practicability. We demonstrate that this model can take an arbitrary modality and efficiently synthesize the desired modality data in an unsupervised manner.
We show that these models advance the state of the art on image super-resolution and cross-modality synthesis tasks that require joint processing of multi-modality images, and that the algorithms can be designed to generate data of practical benefit to medical image analysis.
Towards Fast and High-quality Biomedical Image Reconstruction
Reconstruction is an important module in the image analysis pipeline, whose purpose is to isolate the meaningful information hidden inside the acquired data. The term "reconstruction" can be understood as, and subdivided into, several specific tasks in different modalities. For example, in biomedical imaging, such as computed tomography (CT) and magnetic resonance imaging (MRI), the term refers to the transformation from the possibly fully or under-sampled spectral domains (the sinogram for CT and k-space for MRI) to the visible image domain. In connectomics, it usually refers to segmentation (reconstructing the semantic contacts between neuronal connections) or denoising (reconstructing the clean image). In this dissertation research, I describe a set of algorithms ranging from conventional to state-of-the-art deep learning methods, with a transition through data-driven dictionary learning approaches, that tackle reconstruction problems in various image analysis tasks.
DUAL-GLOW: Conditional Flow-Based Generative Model for Modality Transfer
Positron emission tomography (PET) is an imaging modality for diagnosing a number of neurological diseases. In contrast to Magnetic Resonance Imaging (MRI), PET is costly and involves injecting a radioactive substance into the patient. Motivated by developments in modality transfer in vision, we study the generation of certain types of PET images from MRI data. We derive new flow-based generative models which we show perform well in this small-sample-size regime (much smaller than the dataset sizes available in standard vision tasks). Our formulation, DUAL-GLOW, is based on two invertible networks and a relation network that maps the latent spaces to each other. We discuss how, given the prior distribution, learning the conditional distribution of PET given the MRI image reduces to obtaining the conditional distribution between the two latent codes with respect to the two image types. We also extend our framework to leverage 'side' information (or attributes) when available. By controlling the PET generation through 'conditioning' on age, our model is also able to capture brain FDG-PET (hypometabolism) changes as a function of age. We present experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset with 826 subjects, and obtain good performance in PET image synthesis, qualitatively and quantitatively better than recent works.
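The invertible networks in Glow-style models are built from coupling layers: one part of the input passes through unchanged and parameterizes an affine transform of the rest, giving an exact inverse and a cheap log-determinant for the likelihood. A minimal sketch of one such layer (this illustrates the generic building block, not DUAL-GLOW's architecture; s and t would be neural networks in practice):

```python
import numpy as np

def coupling_forward(x, s, t):
    """One affine coupling step: the first coordinate passes through
    unchanged and parameterizes the scale s and shift t of the second."""
    x1, x2 = x[..., :1], x[..., 1:]
    z2 = x2 * np.exp(s(x1)) + t(x1)
    log_det = s(x1).sum(axis=-1)  # log |det Jacobian|, used in the likelihood
    return np.concatenate([x1, z2], axis=-1), log_det

def coupling_inverse(z, s, t):
    """Exact inverse, which is what makes flows invertible by construction."""
    z1, z2 = z[..., :1], z[..., 1:]
    x2 = (z2 - t(z1)) * np.exp(-s(z1))
    return np.concatenate([z1, x2], axis=-1)
```

Stacking many such layers (with permutations between them) yields an expressive yet exactly invertible map, so the model's latent codes can be related across modalities as the abstract describes.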