
    Pathology Synthesis of 3D-Consistent Cardiac MR Images using 2D VAEs and GANs

    We propose a method for synthesizing cardiac magnetic resonance (MR) images with plausible heart pathologies and realistic appearances, in order to generate labeled data for supervised deep-learning (DL) training. The image synthesis consists of label deformation and label-to-image translation tasks. The former is achieved via latent space interpolation in a VAE model, while the latter is accomplished via a label-conditional GAN model. We devise three approaches for label manipulation in the latent space of the trained VAE model: i) intra-subject synthesis, which interpolates the intermediate slices of a subject to increase the through-plane resolution; ii) inter-subject synthesis, which interpolates the geometry and appearance of intermediate images between two dissimilar subjects acquired with different scanner vendors; and iii) pathology synthesis, which synthesizes a series of pseudo-pathological subjects with the characteristics of a desired heart disease. Furthermore, we propose to model the relationship between 2D slices in the latent space of the VAE prior to reconstruction, so that stacking slice-by-slice generations yields 3D-consistent subjects. We demonstrate that such an approach can diversify and enrich an available database of cardiac MR images and pave the way for the development of generalizable DL-based image analysis algorithms. We quantitatively evaluate the quality of the synthesized data in an augmentation scenario, achieving generalization and robustness to multi-vendor and multi-disease data for image segmentation. Our code is available at https://github.com/sinaamirrajab/CardiacPathologySynthesis. Comment: Accepted for publication at the Journal of Machine Learning for Biomedical Imaging (MELBA), https://www.melba-journal.org/2023:01
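
    As a rough illustration of the latent-space interpolation underlying all three synthesis approaches, the sketch below linearly blends two latent codes. Here `encoder`, `decoder`, and `label_gan` are hypothetical stand-ins for the paper's trained VAE and label-conditional GAN, not its actual implementation.

```python
import torch

def interpolate_latents(z_a: torch.Tensor, z_b: torch.Tensor, steps: int):
    """Linearly interpolate between two latent codes z_a and z_b."""
    alphas = torch.linspace(0.0, 1.0, steps)
    return [(1 - a) * z_a + a * z_b for a in alphas]

# Hypothetical usage, with `encoder`, `decoder`, and `label_gan` standing in
# for the trained VAE and label-conditional GAN:
# z_a = encoder(labels_subject_a)            # latent code of subject A's labels
# z_b = encoder(labels_subject_b)            # latent code of subject B's labels
# for z in interpolate_latents(z_a, z_b, steps=8):
#     deformed_labels = decoder(z)           # label deformation via the VAE
#     synthetic_image = label_gan(deformed_labels)  # label-to-image translation
```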

    Longitudinal Brain Tumor Tracking, Tumor Grading, and Patient Survival Prediction Using MRI

    This work aims to develop novel methods for brain tumor classification, longitudinal brain tumor tracking, and patient survival prediction. To this end, this dissertation addresses three tasks. First, we develop a framework for brain tumor segmentation prediction in longitudinal multimodal magnetic resonance imaging (mMRI) scans, comprising two methods: feature fusion and joint label fusion (JLF). The first method fuses stochastic multi-resolution texture features with tumor cell density features in order to obtain tumor segmentation predictions in follow-up scans from a baseline pre-operative timepoint. The second method utilizes JLF to combine segmentation labels obtained from (i) the stochastic texture feature-based and Random Forest (RF)-based tumor segmentation method; and (ii) another state-of-the-art tumor growth and segmentation method known as boosted Glioma Image Segmentation and Registration (GLISTRboost, or GB). With the advantages of feature fusion and label fusion, we achieve state-of-the-art brain tumor segmentation prediction. Second, we propose a deep neural network (DNN)-based method for brain tumor type and subtype grading using phenotypic and genotypic data, following the World Health Organization (WHO) criteria. In addition, the classification method integrates a cellularity feature, derived from the morphology of a pathology image, to improve classification performance. The proposed method achieves state-of-the-art performance for tumor grading under the new CNS tumor grading criteria. Finally, we investigate brain tumor volume segmentation, tumor subtype classification, and overall patient survival prediction, and propose a new context-aware deep learning method, the Context Aware Convolutional Neural Network (CANet). Using the proposed method, we participated in the Multimodal Brain Tumor Segmentation Challenge 2019 (BraTS 2019) for the brain tumor volume segmentation and overall survival prediction tasks. We also participated in the Radiology-Pathology Challenge 2019 (CPM-RadPath 2019) for brain tumor subtype classification, organized by the Medical Image Computing & Computer Assisted Intervention (MICCAI) Society. The online evaluation results show that the proposed methods offer competitive performance in tumor volume segmentation, promising performance in overall survival prediction, and state-of-the-art performance in tumor subtype classification. Moreover, our result ranked second in the testing phase of CPM-RadPath 2019.
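
    The JLF step can be illustrated with a simplified per-voxel weighted vote. True joint label fusion also weights candidates by local intensity similarity, so the snippet below is only a sketch; `rf_mask` and `gb_mask` are toy stand-ins for the two candidate segmentations.

```python
import numpy as np

def fuse_labels(label_maps, weights=None):
    """Fuse candidate segmentations by a (weighted) per-voxel majority vote.

    A simplified stand-in for joint label fusion: each candidate label map
    votes at every voxel, and the label with the largest total weight wins.
    """
    label_maps = [np.asarray(m) for m in label_maps]
    if weights is None:
        weights = [1.0] * len(label_maps)
    labels = np.unique(np.concatenate([m.ravel() for m in label_maps]))
    votes = np.zeros((len(labels),) + label_maps[0].shape)
    for m, w in zip(label_maps, weights):
        for i, lab in enumerate(labels):
            votes[i] += w * (m == lab)      # accumulate this candidate's vote
    return labels[np.argmax(votes, axis=0)]  # winning label per voxel

# Toy example: fuse an RF/texture-based mask with a GLISTRboost-style mask.
rf_mask = np.array([[0, 1], [1, 2]])
gb_mask = np.array([[0, 1], [2, 2]])
fused = fuse_labels([rf_mask, gb_mask], weights=[0.4, 0.6])
```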

    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    This two-volume set, LNCS 12962 and 12963, constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These were held jointly at the 24th Medical Image Computing and Computer Assisted Intervention Conference, MICCAI 2021, in September 2021. The 91 revised papers presented in these volumes were selected from 151 submissions. Due to the COVID-19 pandemic, the conference was held virtually. This is an open access book.

    Deep Learning for Volumetric Medical Image Segmentation

    Over the past few decades, medical imaging techniques, e.g., computed tomography (CT) and positron emission tomography (PET), have been widely used to improve the diagnosis, prognosis, and treatment of diseases. However, reading medical images and making diagnoses or treatment plans requires well-trained medical specialists, which is labor-intensive, time-consuming, costly, and error-prone. With the emergence of deep learning, doctors and researchers have started to benefit from automated medical image analysis in various applications, e.g., medical image registration, classification, detection, and segmentation. Among these tasks, segmentation is the most common area in which deep learning is applied to medical imaging. How to improve medical diagnosis by advancing segmentation in computer-aided diagnosis systems has become an active research topic. In this dissertation, we address this topic in the following aspects. (i) We propose a 3D coarse-to-fine framework to effectively and efficiently tackle the challenges of limited annotated 3D data and limited computational resources in volumetric medical image segmentation. (ii) We extend the 3D coarse-to-fine framework to be multi-scale, in order to detect small but clinically important pancreatic ductal adenocarcinoma (PDAC) tumors early and to provide radiologists with interpretable abnormality locations via segmentation-for-classification. (iii) We extend segmentation-for-classification to screen for pancreatic neuroendocrine tumors (PNETs) by incorporating dual-phase information and the dilated pancreatic duct, which is regarded as a sign of high risk for pancreatic cancer. (iv) Going further, we investigate the mainstream methodology in the segmentation area and explore AutoML in the medical imaging field to automatically search for neural network architectures tailored to the segmentation task, further advancing the medical image segmentation field. (v) Moving beyond pancreatic tumors, we are the first to address the clinically critical task of detecting, identifying, and characterizing suspicious metastasized lymph nodes (LNs), proposing a 3D distance stratification strategy that simulates and simplifies, in a divide-and-conquer manner, the high-level reasoning protocols conducted by radiation oncologists. (vi) The 3D distance stratification strategy is upgraded by our proposed multi-branch detection-by-segmentation, which further advances the finding, identification, and segmentation of metastasis-suspicious LNs.
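
    The coarse-to-fine idea in (i) can be sketched as a two-pass pipeline: a coarse pass localizes the organ, and a fine pass segments only the cropped region of interest. In the sketch below, `coarse_model` and `fine_model` are hypothetical callables mapping a 3D volume to a binary mask of the same shape, not the dissertation's actual networks.

```python
import numpy as np

def coarse_to_fine_segment(volume, coarse_model, fine_model, margin=8):
    """Two-stage volumetric segmentation: localize coarsely, refine on a crop."""
    coarse_mask = coarse_model(volume)
    zs, ys, xs = np.nonzero(coarse_mask)
    if zs.size == 0:                          # nothing found: return empty mask
        return np.zeros_like(coarse_mask)
    # Bounding box of the coarse prediction, padded by a safety margin.
    lo = [max(int(v.min()) - margin, 0) for v in (zs, ys, xs)]
    hi = [min(int(v.max()) + margin + 1, s)
          for v, s in zip((zs, ys, xs), volume.shape)]
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    # Run the fine model only on the cropped region of interest.
    fine_mask = np.zeros_like(coarse_mask)
    fine_mask[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine_model(crop)
    return fine_mask
```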

    Simulation and Synthesis for Cardiac Magnetic Resonance Image Analysis


    Applications of Deep Learning Techniques for Automated Multiple Sclerosis Detection Using Magnetic Resonance Imaging: A Review

    Multiple Sclerosis (MS) is a brain disease that causes visual, sensory, and motor problems and has a detrimental effect on the functioning of the nervous system. Multiple screening methods have been proposed to diagnose MS; among them, magnetic resonance imaging (MRI) has received considerable attention from physicians. MRI modalities provide physicians with fundamental information about the structure and function of the brain, which is crucial for the rapid diagnosis of MS lesions. However, diagnosing MS from MRI is time-consuming, tedious, and prone to manual errors. Research on computer-aided diagnosis systems (CADS) based on artificial intelligence (AI) for MS diagnosis involves both conventional machine learning and deep learning (DL) methods. In conventional machine learning, the feature extraction, feature selection, and classification steps are carried out by trial and error; in DL, by contrast, these steps are handled by deep layers whose parameters are learned automatically. This paper provides a complete review of automated MS diagnosis methods that use DL techniques with MRI neuroimaging modalities. First, the steps involved in the various CADS proposed for MS diagnosis using MRI modalities and DL techniques are investigated. The important preprocessing techniques employed in these works are analyzed, and most of the published papers on MS diagnosis using MRI modalities and DL are presented. The most significant challenges facing, and future directions of, automated MS diagnosis using MRI modalities and DL techniques are also discussed.
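
    To make the contrast with hand-crafted pipelines concrete, the sketch below shows a deliberately small CNN slice classifier in which every feature is learned from data rather than selected by trial and error; it is illustrative only and does not correspond to any specific model covered in the review.

```python
import torch
import torch.nn as nn

class SliceClassifier(nn.Module):
    """Minimal CNN for classifying 2D MRI slices (illustrative only)."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # Feature extraction is learned end-to-end, not hand-engineered.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):
        return self.head(self.features(x))

# One normalized slice (batch, channel, height, width) -> class logits.
logits = SliceClassifier()(torch.randn(1, 1, 224, 224))
```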

    A review of technical factors to consider when designing neural networks for semantic segmentation of Earth Observation imagery

    Semantic segmentation (classification) of Earth Observation imagery is a crucial task in remote sensing. This paper presents a comprehensive review of technical factors to consider when designing neural networks for this purpose. The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and transformer models, discussing prominent design patterns for these artificial neural network (ANN) families and their implications for semantic segmentation. Common pre-processing techniques for ensuring optimal data preparation are also covered, including methods for image normalization and chipping, strategies for addressing data imbalance in training samples, and techniques for overcoming limited data, such as augmentation, transfer learning, and domain adaptation. By encompassing both the technical aspects of neural network design and the data-related considerations, this review provides researchers and practitioners with a comprehensive and up-to-date understanding of the factors involved in designing effective neural networks for semantic segmentation of Earth Observation imagery. Comment: 145 pages with 32 figures.
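
    As a minimal example of the normalization and chipping steps mentioned above, the sketch below cuts a multi-band scene and its label mask into fixed-size, per-band z-scored chips; the chip size, stride, and band count are arbitrary illustrative choices.

```python
import numpy as np

def chip_image(image, mask, chip=256, stride=256):
    """Cut a large scene and its label mask into fixed-size training chips,
    z-score-normalizing each image chip per band."""
    chips = []
    h, w = image.shape[:2]
    for y in range(0, h - chip + 1, stride):
        for x in range(0, w - chip + 1, stride):
            img = image[y:y + chip, x:x + chip].astype(np.float32)
            img = (img - img.mean(axis=(0, 1))) / (img.std(axis=(0, 1)) + 1e-6)
            chips.append((img, mask[y:y + chip, x:x + chip]))
    return chips

# Example: a 1024x1024 4-band scene -> sixteen 256x256 chip/label pairs.
scene = np.random.rand(1024, 1024, 4)
labels = np.random.randint(0, 5, (1024, 1024))
pairs = chip_image(scene, labels)
```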