
    Role of deep learning in infant brain MRI analysis

    Deep learning algorithms, and in particular convolutional networks, have shown tremendous success in medical image analysis applications, though relatively few methods have been applied to infant MRI data due to numerous inherent challenges, such as inhomogeneous tissue appearance across the image, considerable image intensity variability across the first year of life, and a low signal-to-noise setting. This paper presents methods addressing these challenges in two selected applications, specifically infant brain tissue segmentation at the isointense stage and presymptomatic disease prediction in neurodevelopmental disorders. Corresponding methods are reviewed and compared, and open issues are identified, namely low data size restrictions, class imbalance problems, and lack of interpretability of the resulting deep learning solutions. We discuss how existing solutions can be adapted to approach these issues, as well as how generative models appear to be particularly strong contenders for addressing them.
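    As a concrete illustration of one of the open issues named above, the sketch below shows how class imbalance in tissue segmentation can be countered with inverse-frequency class weights in the training loss. This is a generic, minimal example (PyTorch, with dummy tensors), not a method from the reviewed papers; the class count and tensor shapes are assumptions.

```python
# Minimal sketch (not from the paper): counter class imbalance in tissue
# segmentation with inverse-frequency class weights in the loss.
import torch
import torch.nn as nn

def inverse_frequency_weights(labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Weight each tissue class by the inverse of its voxel frequency."""
    counts = torch.bincount(labels.flatten(), minlength=num_classes).float()
    return counts.sum() / (num_classes * counts.clamp(min=1))

# Example: CSF / gray matter / white matter segmentation (3 classes), dummy data.
labels = torch.randint(0, 3, (2, 64, 64, 64))                 # ground-truth volumes
logits = torch.randn(2, 3, 64, 64, 64, requires_grad=True)    # network output
criterion = nn.CrossEntropyLoss(weight=inverse_frequency_weights(labels, 3))
loss = criterion(logits, labels)
loss.backward()
```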

    Blurry Boundary Delineation and Adversarial Confidence Learning for Medical Image Analysis

    Low tissue contrast and fuzzy boundaries are major challenges in medical image segmentation, which is a key step in many medical image analysis tasks. In particular, blurry boundary delineation is one of the most challenging problems due to low-contrast and even vanishing boundaries. Encoder-decoder networks are currently widely adopted for medical image segmentation; with lateral skip connections, these models can obtain and fuse both semantic and resolution information in deep layers to achieve more accurate segmentation. However, in many applications (e.g., images with blurry boundaries), these models often cannot precisely locate complex boundaries or segment tiny isolated parts. To address this problem, we empirically analyze why simple lateral connections in encoder-decoder architectures are not able to accurately locate indistinct boundaries. Based on this analysis, we argue that learning high-resolution semantic information in the lateral connection can better delineate blurry boundaries, and we propose two methods to achieve this goal. a) A high-resolution pathway composed of dilated residual blocks replaces the simple lateral connection to learn high-resolution semantic features. b) A semantic-guided encoder feature learning strategy is further proposed to learn high-resolution semantic encoder features, so that blurry boundaries can be located more accurately and efficiently. In addition, we explore a contour constraint mechanism to model blurry boundary detection. Experimental results on real clinical datasets (infant brain MRI and pelvic organ datasets) show that the proposed methods achieve state-of-the-art segmentation accuracy, especially in blurry regions. Further analysis indicates that the proposed network components indeed contribute to the performance gain, and experiments on an additional dataset validate the generalization ability of the proposed methods.

    Generative adversarial networks (GANs) are widely used in medical image analysis tasks such as medical image segmentation and synthesis. In these works, adversarial learning is usually applied directly to the original supervised segmentation (synthesis) network. Adversarial learning is effective for improving visual perception, since it acts as a realism regularizer for the supervised generator. However, the quantitative performance often does not improve as much as the qualitative performance, and in some cases it even degrades. In this dissertation, I explore how adversarial learning can be made more useful in supervised segmentation (synthesis) models, i.e., how to improve visual and quantitative performance simultaneously. I first analyze the roles of the discriminator in classic GANs and compare them with those in supervised adversarial systems. Based on this analysis, an adversarial confidence learning framework is proposed to take better advantage of adversarial learning: besides using adversarial learning to emphasize visual perception, the confidence information provided by the adversarial network is used to improve the design of the supervised segmentation (synthesis) network. In particular, I propose using a fully convolutional adversarial network for confidence learning, providing voxel-wise and region-wise confidence information to the segmentation (synthesis) network.

    Furthermore, various GAN loss functions are investigated, and the binary cross-entropy loss is finally chosen to train the proposed adversarial confidence learning system so that the modeling capacity of the discriminator is retained for confidence learning. With these settings, two machine learning algorithms are proposed to solve specific medical image analysis problems. a) A difficulty-aware attention mechanism is proposed to properly handle hard samples or regions by taking structural information into consideration, so that the irregular distribution of medical data can be dealt with appropriately. Experimental results on clinical and challenge datasets show that the proposed algorithm achieves state-of-the-art segmentation (synthesis) accuracy, and further analysis indicates that adversarial confidence learning improves visual perception and quantitative performance simultaneously. b) A semi-supervised segmentation model is proposed to alleviate a long-standing challenge in medical image segmentation: the lack of annotated data. The proposed method automatically recognizes well-segmented regions (instead of entire samples) and dynamically includes them to enlarge the label set during training. Specifically, based on the confidence map, a region-attention based semi-supervised learning strategy is designed to further train the segmentation network. Experimental results on real clinical datasets show that the proposed approach achieves better segmentation performance with extra unannotated data. Doctor of Philosophy
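    The following sketch illustrates the core idea of voxel-wise adversarial confidence learning under stated assumptions: a fully convolutional discriminator scores each voxel of the predicted segmentation, and low-confidence ("hard") voxels receive a larger weight in the supervised loss, in the spirit of the difficulty-aware attention described above. The layer sizes, the FCDiscriminator module, and the weighting scheme are illustrative placeholders rather than the dissertation's exact architecture, and the discriminator's own binary cross-entropy training step is omitted.

```python
# Hedged sketch: a fully convolutional discriminator produces a per-voxel
# confidence map, which re-weights the supervised segmentation loss so that
# hard (low-confidence) voxels contribute more.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FCDiscriminator(nn.Module):
    """Toy fully convolutional discriminator; outputs one realness logit per voxel."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(num_classes, 16, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(16, 1, 3, padding=1),
        )
    def forward(self, seg_probs):
        return self.net(seg_probs)

def confidence_weighted_seg_loss(logits, labels, disc):
    """Cross-entropy re-weighted per voxel by (1 - discriminator confidence)."""
    probs = F.softmax(logits, dim=1)
    with torch.no_grad():
        confidence = torch.sigmoid(disc(probs))               # (N, 1, D, H, W) in [0, 1]
    ce = F.cross_entropy(logits, labels, reduction="none")    # (N, D, H, W)
    weights = 1.0 + (1.0 - confidence.squeeze(1))             # emphasize hard voxels
    return (weights * ce).mean()

# Dummy usage with a 3-class toy volume.
disc = FCDiscriminator(num_classes=3)
logits = torch.randn(1, 3, 32, 32, 32, requires_grad=True)
labels = torch.randint(0, 3, (1, 32, 32, 32))
loss = confidence_weighted_seg_loss(logits, labels, disc)
loss.backward()
```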

    Learning from Complex Neuroimaging Datasets

    Advancements in Magnetic Resonance Imaging (MRI) have enabled the early diagnosis of neurodevelopmental disorders and neurodegenerative diseases. Neuroanatomical abnormalities in the cerebral cortex are often investigated by examining group-level differences in brain morphometric measures extracted from highly sampled cortical surfaces. However, group-level differences do not allow for individual-level outcome prediction, which is critical for application to clinical practice. Despite the success of MRI-based deep learning frameworks, several critical issues have been identified: (1) extracting accurate and reliable local features from the cortical surface, (2) determining a parsimonious subset of cortical features for correct disease diagnosis, (3) learning directly from a non-Euclidean, high-dimensional feature space, (4) improving the robustness of multi-task multi-modal models, and (5) identifying anomalies in imbalanced and heterogeneous settings. This dissertation describes novel methodological contributions to tackle these challenges. First, I introduce a Laplacian-based method for quantifying local Extra-Axial Cerebrospinal Fluid (EA-CSF) from structural MRI. Next, I describe a deep learning approach for combining local EA-CSF with other morphometric cortical measures for early disease detection. Then, I propose a data-driven approach for extending convolutional learning to non-Euclidean manifolds such as cortical surfaces. I also present a unified framework for robust multi-task learning from imaging and non-imaging information. Finally, I propose a semi-supervised generative approach for detecting samples from untrained classes in imbalanced and heterogeneous developmental datasets. The proposed methodological contributions are evaluated by applying them to the early detection of Autism Spectrum Disorder (ASD) in the first year of the infant’s life. The aging human brain is also examined in the context of studying different stages of Alzheimer’s Disease (AD). Doctor of Philosophy
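    To make the idea of convolutional learning on a non-Euclidean cortical surface concrete, the sketch below shows a generic GCN-style layer that aggregates per-vertex morphometric features over a mesh adjacency matrix. This is a standard graph convolution used purely for illustration; it is not the dissertation's specific surface operator, and the toy adjacency and feature sizes are assumptions.

```python
# Generic illustration: a graph-convolution layer over a cortical surface mesh,
# computing x' = D^-1 (A + I) x W for vertex features x and mesh adjacency A.
import torch
import torch.nn as nn

class SurfaceGraphConv(nn.Module):
    """Mean-aggregate each vertex's neighborhood, then apply a shared linear map."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x, adj):
        # x: (num_vertices, in_features); adj: dense (num_vertices, num_vertices)
        adj_hat = adj + torch.eye(adj.size(0))   # add self-loops
        deg = adj_hat.sum(dim=1, keepdim=True)   # vertex degrees
        x = adj_hat @ x / deg                    # mean over the 1-ring neighborhood
        return self.linear(x)

# Toy example: 5 mesh vertices, each with 4 morphometric measures (e.g., thickness).
adj = torch.tensor([[0, 1, 1, 0, 0],
                    [1, 0, 1, 0, 0],
                    [1, 1, 0, 1, 0],
                    [0, 0, 1, 0, 1],
                    [0, 0, 0, 1, 0]], dtype=torch.float32)
features = torch.randn(5, 4)
layer = SurfaceGraphConv(4, 8)
out = layer(features, adj)   # (5, 8) vertex embeddings
```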

    Automatic Craniomaxillofacial Landmark Digitization via Segmentation-Guided Partially-Joint Regression Forest Model and Multiscale Statistical Features

    The goal of this paper is to automatically digitize craniomaxillofacial (CMF) landmarks efficiently and accurately from cone-beam computed tomography (CBCT) images, by addressing the challenges caused by large morphological variations across patients and by CBCT image artifacts.
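    For readers unfamiliar with regression-forest landmark detection, the sketch below illustrates the general offset-voting idea: per-voxel features predict the 3-D offset from each voxel to a landmark, and the voxels' votes are averaged into a landmark estimate. This is a deliberately simplified, generic example (scikit-learn, synthetic data); it does not implement the paper's segmentation-guided partially-joint model or its multiscale statistical features.

```python
# Generic offset-voting sketch for forest-based landmark detection
# (illustrative only; not the paper's method).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Dummy training data: per-voxel feature vectors and their offsets to a landmark.
voxel_positions = rng.uniform(0, 128, size=(500, 3))
landmark = np.array([64.0, 80.0, 40.0])
features = np.hstack([voxel_positions, rng.normal(size=(500, 5))])  # position + fake appearance features
offsets = landmark - voxel_positions                                # regression targets

forest = RandomForestRegressor(n_estimators=50, random_state=0)
forest.fit(features, offsets)

# At test time, every voxel votes for the landmark; average the votes.
votes = voxel_positions + forest.predict(features)
estimate = votes.mean(axis=0)
print("estimated landmark:", estimate)
```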