
    Learning strategies for improving neural networks for image segmentation under class imbalance

    This thesis aims to improve convolutional neural networks (CNNs) for image segmentation under class imbalance, i.e. when the class distributions in the training dataset are unequal. We focus in particular on medical image segmentation because of its imbalanced nature and clinical importance. Based on our observations of model behaviour, we argue that CNNs cannot generalize well on imbalanced segmentation tasks, mainly for two counterintuitive reasons. CNNs are prone to overfit the under-represented foreground classes, as they memorize the regions of interest (ROIs) in the training data precisely because they are so rare. In addition, CNNs can underfit the heterogeneous background classes, as it is difficult to learn from samples with diverse and complex characteristics. These behaviours are not limited to specific loss functions. To address these limitations, we first propose novel asymmetric variants of popular loss functions and regularization techniques, explicitly designed to increase the variance of foreground samples and thereby counter overfitting under class imbalance. Secondly, we propose context label learning (CoLab) to tackle background underfitting by automatically decomposing the background class into several subclasses. This is achieved by optimizing an auxiliary task generator to produce context labels such that the main network achieves good ROI segmentation performance. We then propose a meta-learning based automatic data augmentation framework which balances foreground and background samples to alleviate class imbalance. Specifically, we learn class-specific training-time data augmentation (TRA) and jointly optimize TRA with test-time data augmentation (TEA), effectively aligning the training and test data distributions for better generalization. Finally, we explore how to estimate model performance under domain shifts when training with imbalanced datasets. We propose class-specific variants of existing confidence-based model evaluation methods which adapt separate parameters per class, enabling class-wise calibration that reduces model bias towards the minority classes.
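    The idea of an asymmetric loss that stops down-weighting the rare foreground can be sketched as follows. This is a minimal illustrative example in numpy, not the thesis's actual formulation: it applies a focal-style modulating term `(1 - p_t)^gamma` only to background pixels, so gradients from scarce foreground pixels are never suppressed; the function name and parameter ranges are placeholders.

    ```python
    import numpy as np

    def asymmetric_focal_loss(probs, labels, gamma=2.0, eps=1e-7):
        """Illustrative asymmetric focal loss for binary segmentation.

        Standard focal loss down-weights easy pixels of every class via
        (1 - p_t)^gamma. This asymmetric sketch applies the focusing term
        only to background pixels (label 0), keeping plain cross-entropy
        for the under-represented foreground class.
        """
        probs = np.clip(probs, eps, 1 - eps)
        p_t = np.where(labels == 1, probs, 1 - probs)   # prob. of the true class
        ce = -np.log(p_t)                               # per-pixel cross-entropy
        focus = np.where(labels == 1, 1.0, (1 - p_t) ** gamma)
        return float(np.mean(focus * ce))
    ```

    On a foreground pixel the loss reduces to plain cross-entropy, while an easy background pixel contributes almost nothing, which is the asymmetry the abstract describes.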

    Bridging generative models and Convolutional Neural Networks for domain-agnostic segmentation of brain MRI

    Segmentation of brain MRI scans is paramount in neuroimaging, as it is a prerequisite for many subsequent analyses. Although manual segmentation is considered the gold standard, it suffers from severe reproducibility issues and is extremely tedious, which limits its application to large datasets. Therefore, there is a clear need for automated tools that enable fast and accurate segmentation of brain MRI scans. Recent methods rely on convolutional neural networks (CNNs). While CNNs obtain accurate results on their training domain, they are highly sensitive to changes in resolution and MRI contrast. Although data augmentation and domain adaptation techniques can increase the generalisability of CNNs, these methods still need to be retrained for every new domain, which requires costly labelling of images. Here, we present a learning strategy to make CNNs agnostic to MRI contrast, resolution, and numerous artefacts. Specifically, we train a network with synthetic data sampled from a generative model conditioned on segmentations. Crucially, we adopt a domain randomisation approach where all generation parameters are drawn for each example from uniform priors. As a result, the network is forced to learn domain-agnostic features, and can segment real test scans without retraining. The proposed method almost achieves the accuracy of supervised CNNs on their training domain, and substantially outperforms state-of-the-art domain adaptation methods. Finally, based on this learning strategy, we present a segmentation suite for robust analysis of heterogeneous clinical scans. Overall, our approach unlocks the development of morphometry on millions of clinical scans, which ultimately has the potential to improve the diagnosis and characterisation of neurological disorders.
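    The domain randomisation step can be sketched as drawing every generation parameter from a broad uniform prior, per example, before synthesising an image from a label map. The sketch below is a toy 2D version with arbitrary placeholder parameter ranges; it is not the generative model used in the work above (which operates on 3D label maps with additional resolution and artefact simulation).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def synth_image(labels, n_classes=4):
        """Toy domain-randomised image synthesis from an integer label map.

        Every generation parameter (per-class intensity statistics, contrast,
        noise level) is resampled from a uniform prior for each call, so no
        single contrast dominates the resulting training distribution.
        """
        means = rng.uniform(0, 255, size=n_classes)      # random per-class intensity
        stds = rng.uniform(1, 25, size=n_classes)        # random within-class spread
        image = rng.normal(means[labels], stds[labels])  # sample pixel intensities
        image = np.clip(image, 0, 255) / 255.0
        image = image ** rng.uniform(0.5, 2.0)           # random gamma contrast
        noise = rng.normal(0, rng.uniform(0.0, 0.05), size=image.shape)
        return np.clip(image + noise, 0.0, 1.0)
    ```

    Training on pairs `(synth_image(labels), labels)` then exposes the segmentation network only to randomised appearances while the supervision target stays fixed.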

    Large-scale inference in the focally damaged human brain

    Clinical outcomes in focal brain injury reflect the interactions between two distinct anatomically distributed patterns: the functional organisation of the brain and the structural distribution of injury. The challenge of understanding the functional architecture of the brain is familiar; that of understanding the lesion architecture is barely acknowledged. Yet, models of the functional consequences of focal injury are critically dependent on our knowledge of both. The studies described in this thesis seek to show how machine learning-enabled high-dimensional multivariate analysis powered by large-scale data can enhance our ability to model the relation between focal brain injury and clinical outcomes across an array of modelling applications. All studies are conducted on the largest internationally available set of MR imaging data of focal brain injury in the context of acute stroke (N=1333) and employ kernel machines as the principal modelling architecture. First, I examine lesion-deficit prediction, quantifying the ceiling on achievable predictive fidelity for high-dimensional and low-dimensional models, demonstrating the former to be substantially higher than the latter. Second, I determine the marginal value of adding unlabelled imaging data to predictive models within a semi-supervised framework, quantifying the benefit of assembling unlabelled collections of clinical imaging. Third, I compare high- and low-dimensional approaches to modelling response to therapy in two contexts: quantifying the effect of treatment at the population level (therapeutic inference) and predicting the optimal treatment in an individual patient (prescriptive inference). I demonstrate the superiority of the high-dimensional approach in both settings.
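    Kernel machines make such high-dimensional lesion-map models tractable because they work with the n x n Gram matrix between patients rather than the voxel dimension (p >> n). A minimal sketch of this mechanism, using kernel ridge regression with a linear kernel, is shown below; the thesis's actual kernels, models, and hyperparameters are not reproduced here.

    ```python
    import numpy as np

    def kernel_ridge_fit_predict(X_train, y_train, X_test, alpha=1.0):
        """Minimal kernel ridge regression with a linear kernel.

        X_train: (n, p) flattened lesion maps; y_train: (n,) outcomes.
        All computation after the Gram matrix involves only n x n systems,
        regardless of how large the voxel dimension p is.
        """
        K = X_train @ X_train.T                                 # n x n Gram matrix
        n = K.shape[0]
        dual = np.linalg.solve(K + alpha * np.eye(n), y_train)  # dual coefficients
        return (X_test @ X_train.T) @ dual                      # test predictions
    ```

    The same dual trick applies to classification variants (e.g. support vector machines), which is why the approach scales to whole-brain voxel features with only ~10^3 patients.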

    Multi-branch Convolutional Neural Network for Multiple Sclerosis Lesion Segmentation

    In this paper, we present an automated approach for segmenting multiple sclerosis (MS) lesions from multi-modal brain magnetic resonance images. Our method is based on a deep end-to-end 2D convolutional neural network (CNN) for slice-based segmentation of 3D volumetric data. The proposed CNN includes a multi-branch downsampling path, which enables the network to encode information from multiple modalities separately. Multi-scale feature fusion blocks are proposed to combine feature maps from different modalities at different stages of the network. Then, multi-scale feature upsampling blocks are introduced to upsize combined feature maps to leverage information from lesion shape and location. We trained and tested the proposed model using the orthogonal plane orientations of each 3D modality to exploit the contextual information in all directions. The proposed pipeline is evaluated on two different datasets: a private dataset including 37 MS patients, and a publicly available dataset known as the ISBI 2015 longitudinal MS lesion segmentation challenge dataset, consisting of 14 MS patients. On the ISBI challenge, at the time of submission, our method was amongst the top performing solutions. On the private dataset, using the same array of performance metrics as in the ISBI challenge, the proposed approach shows substantial improvement in MS lesion segmentation compared with other publicly available tools. Comment: This paper has been accepted for publication in NeuroImage.
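    The orthogonal-orientation scheme above can be sketched as slicing the 3D volume along each of the three axes, predicting slice by slice with a 2D model, reassembling, and fusing the three resulting 3D probability maps. The sketch below uses simple voxel-wise averaging as the fusion rule and a caller-supplied `model2d` stand-in for the trained CNN; both are illustrative assumptions, not the paper's exact aggregation.

    ```python
    import numpy as np

    def predict_3d(volume, model2d):
        """Slice-based 3D segmentation using all three orthogonal planes.

        `model2d` is any function mapping a 2D slice to a 2D probability
        map. The volume is sliced along each axis, predicted slice by
        slice, reassembled in the original orientation, and the three 3D
        probability maps are averaged voxel-wise.
        """
        maps = []
        for axis in range(3):
            moved = np.moveaxis(volume, axis, 0)              # slices along `axis`
            pred = np.stack([model2d(s) for s in moved], axis=0)
            maps.append(np.moveaxis(pred, 0, axis))           # restore orientation
        return np.mean(maps, axis=0)
    ```

    Averaging across orientations gives each voxel context from all three planes, compensating for the limited out-of-plane context of any single 2D pass.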

    Opportunities for Understanding MS Mechanisms and Progression With MRI Using Large-Scale Data Sharing and Artificial Intelligence

    Multiple sclerosis (MS) patients have heterogeneous clinical presentations, symptoms and progression over time, making MS difficult to assess and comprehend in vivo. The combination of large-scale data-sharing and artificial intelligence creates new opportunities for monitoring and understanding MS using magnetic resonance imaging (MRI). First, development of validated MS-specific image analysis methods can be boosted by verified reference, test and benchmark imaging data. Using detailed expert annotations, artificial intelligence algorithms can be trained on such MS-specific data. Second, understanding disease processes could be greatly advanced through shared data of large MS cohorts with clinical, demographic and treatment information. Relevant patterns in such data that may be imperceptible to a human observer could be detected through artificial intelligence techniques. This applies from image analysis (lesions, atrophy or functional network changes) to large multi-domain datasets (imaging, cognition, clinical disability, genetics, etc.). After reviewing data-sharing and artificial intelligence, this paper highlights three areas that offer strong opportunities for making advances in the next few years: crowdsourcing, personal data protection, and organized analysis challenges. Difficulties, as well as specific recommendations to overcome them, are discussed in order to best leverage data sharing and artificial intelligence to improve image analysis, imaging and the understanding of MS.