40 research outputs found

    Synthesizing pseudo-T2w images to recapture missing data in neonatal neuroimaging with applications in rs-fMRI

    T1- and T2-weighted (T1w and T2w) images are essential for tissue classification and anatomical localization in Magnetic Resonance Imaging (MRI) analyses. However, these anatomical data can be challenging to acquire in non-sedated neonatal cohorts, which are prone to high-amplitude movement and display lower tissue contrast than adults. As a result, one of these modalities may be missing, or of such poor quality that it cannot be used for accurate image processing, resulting in subject loss. While recent literature attempts to overcome these issues in adult populations using synthetic imaging approaches, the efficacy of these methods in pediatric populations and their impact on conventional MR analyses have not been evaluated. In this work, we present two novel methods to generate pseudo-T2w images: the first is based on deep learning and extends previous models to 3D imaging without requiring paired data; the second is based on nonlinear multi-atlas registration, providing a computationally lightweight alternative. We demonstrate the anatomical accuracy of pseudo-T2w images and their efficacy in existing MR processing pipelines in two independent neonatal cohorts. Critically, we show that implementing these pseudo-T2w methods in resting-state functional MRI analyses produces virtually identical functional connectivity results when compared to those resulting from T2w images, confirming their utility in infant MRI studies for salvaging otherwise lost subject data.
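The multi-atlas alternative is not spelled out in the abstract, but the usual idea is: warp a set of atlas T1w/T2w pairs onto the subject's T1w, then fuse the warped atlas T2w intensities with weights reflecting local T1w similarity. A minimal per-voxel sketch of such a fusion step; the function name and the Gaussian similarity weighting are illustrative assumptions, not the paper's method:

```python
import math

def fuse_pseudo_t2w(subject_t1, atlas_t1_list, atlas_t2_list, beta=1.0):
    """Similarity-weighted fusion: each atlas's T2w intensity is weighted
    by how closely its (already registered) T1w matches the subject T1w.
    Images are flattened to 1-D intensity lists for this sketch."""
    pseudo_t2 = []
    for v, s in enumerate(subject_t1):
        # Gaussian similarity between subject and each atlas at this voxel
        weights = [math.exp(-beta * (a1[v] - s) ** 2) for a1 in atlas_t1_list]
        total = sum(weights)
        pseudo_t2.append(sum(w * a2[v] for w, a2 in zip(weights, atlas_t2_list)) / total)
    return pseudo_t2
```

An atlas whose registered T1w closely matches the subject at a voxel dominates the synthesized T2w value there, which is the intuition behind locally weighted label/intensity fusion.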

    Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives

    Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold true in practice. To address these issues, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey.
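Among the feature-alignment methods such surveys cover, a common ingredient is a distribution-distance penalty such as maximum mean discrepancy (MMD) between source and target feature batches. A minimal sketch of a biased squared-MMD estimate with an RBF kernel (pure Python, for illustration only):

```python
import math

def rbf(x, y, sigma=1.0):
    """RBF kernel between two feature vectors."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    """Biased squared-MMD estimate: k(S,S) + k(T,T) - 2*k(S,T)."""
    def mean_k(A, B):
        return sum(rbf(a, b, sigma) for a in A for b in B) / (len(A) * len(B))
    return mean_k(source, source) + mean_k(target, target) - 2 * mean_k(source, target)
```

In a UDA setting this quantity is added to the task loss, so the network is pushed to produce source and target features whose distributions are hard to tell apart.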

    Deep Learning Methods for Classification of Gliomas and Their Molecular Subtypes, From Central Learning to Federated Learning

    Gliomas are the most common type of brain cancer in adults. Under the updated 2016 World Health Organization (WHO) classification of central nervous system (CNS) tumors, identification of the molecular subtypes of gliomas is important. For low-grade gliomas (LGGs), predicting molecular subtypes from magnetic resonance imaging (MRI) scans alone can be difficult without taking a biopsy. With the development of machine learning (ML) methods such as deep learning (DL), molecular-based classification methods have shown promising results from MRI scans that may assist clinicians in prognosis and in deciding on a treatment strategy. However, DL requires large training datasets with tumor class labels and tumor boundary annotations, and manual annotation of tumor boundaries is a time-consuming and expensive process.
    The thesis is based on the work developed in five papers on gliomas and their molecular subtypes. We propose novel methods that provide improved performance. The proposed methods consist of a multi-stream convolutional autoencoder (CAE)-based classifier, a deep convolutional generative adversarial network (DCGAN) to enlarge the training dataset, a CycleGAN to handle domain shift, a novel federated learning (FL) scheme that allows local client-based training with dataset protection, and the use of bounding boxes on MRIs when tumor boundary annotations are not available.
    Experimental results showed that DCGAN-generated MRIs enlarged the original training dataset and improved classification performance on test sets. CycleGAN showed good domain adaptation on multiple source datasets and improved classification performance. The proposed FL scheme showed slightly degraded performance compared to the central learning (CL) approach while protecting dataset privacy. Using tumor bounding boxes proved to be an alternative to tumor boundary annotation for tumor classification and segmentation, trading a slight decrease in performance for time saved in manual marking by clinicians. The proposed methods may benefit future research in bringing DL tools into clinical practice for assisting tumor diagnosis and supporting the decision-making process.
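The abstract does not detail the federated scheme, but the standard aggregation step such schemes build on (FedAvg-style, a simplifying assumption here) averages client model parameters weighted by local dataset size, so no imaging data ever leaves a site:

```python
def fedavg(client_weights, client_sizes):
    """Aggregate per-client model parameters (flattened to lists) by
    dataset-size-weighted averaging, as in FedAvg."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]
```

Each round, clients train locally, send only their updated parameters, and the server redistributes this weighted average; the reported gap versus central learning is the price of never pooling the raw MRI data.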

    Motion robust acquisition and reconstruction of quantitative T2* maps in the developing brain

    The goal of the research presented in this thesis was to develop methods for quantitative T2* mapping of the developing brain. Brain maturation in the early period of life involves complex structural and physiological changes caused by synaptogenesis, myelination and growth of cells. Molecular structures and biological processes give rise to varying levels of T2* relaxation time, which is an inherent contrast mechanism in magnetic resonance imaging. Knowledge of T2* relaxation times in the brain can thus help with evaluation of pathology by establishing normative values in key areas of the brain. T2* relaxation values are a valuable biomarker for myelin microstructure and iron concentration, as well as an important guide towards achieving optimal fMRI contrast. However, fetal MR imaging is a significant step up from neonatal or adult MR imaging due to the complexity of the acquisition and reconstruction techniques required to provide high-quality, artifact-free images in the presence of maternal respiration and unpredictable fetal motion. The first contribution of this thesis, described in Chapter 4, presents a novel acquisition method for measurement of fetal brain T2* values. At the time of publication, this was the first study of fetal brain T2* values. Single-shot multi-echo gradient-echo EPI was proposed as a rapid method for measuring fetal T2* values by effectively freezing intra-slice motion. The study concluded that fetal T2* values are higher than those previously reported for pre-term neonates and decline with a consistent trend across gestational age. The data also suggested that longer-than-usual echo times, or direct T2* measurement, should be considered when performing fetal fMRI in order to reach optimal BOLD sensitivity.
    For the second contribution, described in Chapter 5, measurements were extended to a higher field strength of 3T and reported, for the first time at this field strength, for both fetal and neonatal subjects. The technical contribution of this work is a fully automatic segmentation framework that propagates brain tissue labels onto the acquired T2* maps without the need for manual intervention. The third contribution, described in Chapter 6, proposed a new method for performing 3D fetal brain reconstruction where the available data is sparse and therefore limits the use of current state-of-the-art techniques for 3D brain reconstruction in the presence of motion. To enable a high-resolution reconstruction, a generative adversarial network was trained to perform image-to-image translation between T2-weighted and T2*-weighted data. The translated images could then serve as a prior for slice alignment and super-resolution reconstruction of the 3D brain image.
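Independent of the acquisition details, per-voxel T2* estimation from multi-echo data typically assumes monoexponential decay, S(TE) = S0 · exp(−TE/T2*), so taking the logarithm makes the signal linear in TE and a least-squares fit recovers T2* from the slope. A minimal sketch (echo times in ms; noise-floor handling and weighting omitted):

```python
import math

def fit_t2star(echo_times, signals):
    """Monoexponential T2* fit via log-linear least squares:
    ln S(TE) = ln S0 - TE / T2*, so the slope is -1/T2*."""
    logs = [math.log(s) for s in signals]
    n = len(echo_times)
    mx = sum(echo_times) / n
    my = sum(logs) / n
    slope = (sum((t - mx) * (y - my) for t, y in zip(echo_times, logs))
             / sum((t - mx) ** 2 for t in echo_times))
    s0 = math.exp(my - slope * mx)
    return s0, -1.0 / slope  # (S0, T2* in the units of echo_times)
```

With several echoes acquired in a single shot, as proposed in Chapter 4, each voxel's decay curve is sampled within one excitation, which is what makes the fit robust to intra-slice fetal motion.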

    EEG To FMRI Synthesis: Is Deep Learning a Candidate?

    Advances in signal, image and video generation underlie major breakthroughs in generative medical imaging tasks, including brain image synthesis. Still, the extent to which functional Magnetic Resonance Imaging (fMRI) can be mapped from brain electrophysiology remains largely unexplored. This work provides the first comprehensive view on how to use state-of-the-art principles from neural processing to synthesize fMRI data from electroencephalographic (EEG) data. Given the distinct spatiotemporal nature of haemodynamic and electrophysiological signals, this problem is formulated as the task of learning a mapping function between multivariate time series with highly dissimilar structures. A comparison of state-of-the-art synthesis approaches, including Autoencoders, Generative Adversarial Networks and Pairwise Learning, is undertaken. Results highlight the feasibility of EEG-to-fMRI brain image mappings, pinpointing the role of current advances in machine learning and showing the relevance of upcoming contributions to further improve performance. EEG-to-fMRI synthesis offers a way to enhance and augment brain image data, and to guarantee access to more affordable, portable and long-lasting protocols of brain activity monitoring. The code used in this manuscript is available on GitHub and the datasets are open source.
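As a trivial point of reference for the learned mappings compared in the paper, one can fit a lagged linear map from an EEG signal to an fMRI time series, mimicking the haemodynamic delay between electrophysiology and the BOLD response. A deliberately simplistic single-channel sketch; the real problem is multivariate and nonlinear, and all names here are illustrative:

```python
def lstsq_1d(x, y):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

def map_eeg_to_fmri(eeg, fmri, lag):
    """Toy baseline: fit fmri[t] ~ a * eeg[t - lag] + b, where `lag`
    (in samples) stands in for the haemodynamic delay."""
    a, b = lstsq_1d(eeg[:-lag], fmri[lag:])
    return lambda samples: [a * v + b for v in samples]
```

Any autoencoder, GAN, or pairwise-learning model for this task should at minimum outperform such a lagged linear regression, which is why baselines of this kind are useful when judging the reported feasibility results.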