
    A deep learning algorithm for white matter hyperintensity lesion detection and segmentation

    Purpose: White matter hyperintensity (WMHI) lesions on MR images are an important indicator of various brain diseases that involve inflammation and blood vessel abnormalities. Automated quantification of WMHI can be valuable for the clinical management of patients, but existing automated software is often developed for a single disease type and may not be applicable to clinical scans with thick slices and different scanning protocols. The purpose of this study is to develop and validate an algorithm for automatic quantification of white matter hyperintensity suitable for heterogeneous MRI data spanning different disease types.
    Methods: We developed and evaluated “DeepWML”, a deep learning method for fully automated white matter lesion (WML) segmentation of multicentre FLAIR images. We used MRI from 507 patients, covering three distinct white matter diseases, obtained in 9 centres with a wide range of scanners and acquisition protocols. The automated delineation tool was evaluated quantitatively by Dice similarity, sensitivity, and precision against manual delineation (gold standard).
    Results: The overall median Dice similarity coefficient was 0.78 (range 0.64–0.86) across the three disease types and multiple centres. The median sensitivity and precision were 0.84 (range 0.67–0.94) and 0.81 (range 0.64–0.92), respectively. The tool’s performance increased with larger lesion volumes.
    Conclusion: DeepWML was successfully applied to a wide spectrum of MRI data across the three white matter disease types, and has the potential to improve the practical workflow of white matter lesion delineation.
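The metrics quoted above are standard overlap measures between a predicted and a manual (gold standard) binary mask. A minimal stdlib-Python sketch of how they are computed; the masks here are hypothetical toy data, not DeepWML's actual evaluation code:

```python
def overlap_metrics(pred, truth):
    """Dice, sensitivity, and precision for two binary masks (flat 0/1 lists)."""
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))      # false positives
    fn = sum(t and not p for p, t in zip(pred, truth))      # false negatives
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 1.0      # recall over true lesion voxels
    precision = tp / (tp + fp) if (tp + fp) else 1.0        # reliability of predicted voxels
    return dice, sensitivity, precision

pred  = [1, 1, 0, 1, 0, 0]   # toy predicted lesion mask
truth = [1, 0, 0, 1, 1, 0]   # toy manual delineation
dice, sens, prec = overlap_metrics(pred, truth)
# tp=2, fp=1, fn=1, so dice = 4/6 ≈ 0.667, sensitivity = 2/3, precision = 2/3
```

In practice the masks are 3D voxel volumes flattened in the same way; the formulas are unchanged.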

    White matter hyperintensity and stroke lesion segmentation and differentiation using convolutional neural networks

    The accurate assessment of white matter hyperintensity (WMH) burden is of crucial importance for epidemiological studies that determine associations between WMHs and cognitive and clinical data. The manual delineation of WMHs is tedious, costly, and time-consuming. This is further complicated by the fact that other pathological features (i.e. stroke lesions) often also appear hyperintense. Several automated methods aiming to tackle the challenges of WMH segmentation have been proposed; however, they cannot differentiate between WMHs and strokes. Other methods, capable of distinguishing between different pathologies in brain MRI, are not designed with simultaneous WMH and stroke segmentation in mind. In this work we propose a convolutional neural network (CNN) that is able to segment hyperintensities and differentiate between WMHs and stroke lesions. Specifically, we aim to distinguish WMHs from hyperintensities caused by stroke lesions due to cortical, large subcortical, or small subcortical infarcts. To the best of our knowledge, this is the first time such a differentiation task has been explicitly proposed. The proposed fully convolutional CNN architecture comprises an analysis path, which gradually learns low- and high-level features, followed by a synthesis path, which gradually combines and up-samples those features into a class-likelihood semantic segmentation. Quantitatively, the proposed CNN architecture is shown to outperform other well-established and state-of-the-art algorithms in terms of overlap with manual expert annotations. Clinically, the extracted WMH volumes were found to correlate better with the Fazekas visual rating score. Additionally, the associations found between clinical risk factors and the WMH volumes generated by the proposed method were in line with those found with the expert-annotated volumes.
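The synthesis path ends in a per-voxel class likelihood: raw network scores become probabilities via a softmax, and the final label (background, WMH, or stroke) is the most likely class. A stdlib-Python illustration of that last step only, using toy logits rather than the paper's actual network outputs:

```python
import math

def softmax(logits):
    """Turn raw per-class scores into probabilities that sum to 1."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

CLASSES = ["background", "WMH", "stroke"]

# Toy per-voxel logits for two voxels (three classes each)
voxels = [[0.1, 2.3, 0.5],    # scores favour WMH
          [0.2, 0.4, 1.9]]    # scores favour stroke

labels = [CLASSES[max(range(len(p)), key=p.__getitem__)]
          for p in map(softmax, voxels)]
# labels == ["WMH", "stroke"]
```

A real network produces one such logit vector per voxel of the 3D volume; the argmax over class likelihoods yields the semantic segmentation that separates WMH from stroke lesions.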

    A Review on Computer Aided Diagnosis of Acute Brain Stroke

    Stroke is among the top three most common causes of death globally, affecting over 100 million people worldwide annually. There are two classes of stroke, namely ischemic stroke (due to impairment of blood supply, accounting for ~70% of all strokes) and hemorrhagic stroke (due to bleeding), both of which can result, if untreated, in permanently damaged brain tissue. The discovery that the affected brain tissue (i.e., the 'ischemic penumbra') can be salvaged from permanent damage, together with the burgeoning growth of computer-aided diagnosis, has led to major advances in stroke management. Abiding by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we have surveyed a total of 177 research papers published between 2010 and 2021 to highlight the current status and challenges faced by computer-aided diagnosis (CAD), machine learning (ML), and deep learning (DL) based techniques for CT and MRI, the prime modalities for stroke detection and lesion segmentation. This work concludes by discussing the current requirements of this domain, the preferred modalities, and prospective research areas.

    Bridging generative models and Convolutional Neural Networks for domain-agnostic segmentation of brain MRI

    Segmentation of brain MRI scans is paramount in neuroimaging, as it is a prerequisite for many subsequent analyses. Although manual segmentation is considered the gold standard, it suffers from severe reproducibility issues and is extremely tedious, which limits its application to large datasets. Therefore, there is a clear need for automated tools that enable fast and accurate segmentation of brain MRI scans. Recent methods rely on convolutional neural networks (CNNs). While CNNs obtain accurate results on their training domain, they are highly sensitive to changes in resolution and MRI contrast. Although data augmentation and domain adaptation techniques can increase the generalisability of CNNs, these methods still need to be retrained for every new domain, which requires costly labelling of images. Here, we present a learning strategy that makes CNNs agnostic to MRI contrast, resolution, and numerous artefacts. Specifically, we train a network with synthetic data sampled from a generative model conditioned on segmentations. Crucially, we adopt a domain randomisation approach in which all generation parameters are drawn for each example from uniform priors. As a result, the network is forced to learn domain-agnostic features and can segment real test scans without retraining. The proposed method almost achieves the accuracy of supervised CNNs on their training domain, and substantially outperforms state-of-the-art domain adaptation methods. Finally, based on this learning strategy, we present a segmentation suite for robust analysis of heterogeneous clinical scans. Overall, our approach unlocks the development of morphometry on millions of clinical scans, which ultimately has the potential to improve the diagnosis and characterisation of neurological disorders.
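The domain randomisation step (drawing every generation parameter from a uniform prior, per training example) can be sketched in a few lines. The parameter names and ranges below are hypothetical illustrations; the actual generative model conditions a full image synthesis pipeline on label maps:

```python
import random

# Hypothetical generation parameters with uniform prior ranges (low, high).
# The real generator randomises many more properties of the synthetic scans.
PRIORS = {
    "contrast_mean":  (0.0, 1.0),   # per-tissue intensity mean (normalised)
    "contrast_std":   (0.0, 0.3),   # per-tissue intensity spread
    "resolution_mm":  (1.0, 9.0),   # simulated slice thickness
    "bias_field_amp": (0.0, 0.6),   # smooth intensity inhomogeneity strength
    "noise_std":      (0.0, 0.1),   # additive scanner noise level
}

def sample_generation_params(rng=random):
    """Draw a fully randomised parameter set for one synthetic training example."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PRIORS.items()}

params = sample_generation_params()   # fresh draw for every example
```

Because no single contrast or resolution dominates training, the network cannot latch onto domain-specific appearance and is pushed toward domain-agnostic features.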

    Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation

    Deep learning approaches such as convolutional neural networks have consistently outperformed previous methods on challenging tasks such as dense semantic segmentation. However, the various proposed networks perform differently, with behaviour largely influenced by architectural choices and training settings. This paper explores Ensembles of Multiple Models and Architectures (EMMA) for robust performance through aggregation of predictions from a wide range of methods. The approach reduces the influence of the meta-parameters of individual models and the risk of overfitting the configuration to a particular database. EMMA can be seen as an unbiased, generic deep learning model which is shown to yield excellent performance, winning first place in the segmentation task of the Brain Tumour Segmentation (BRATS) 2017 competition among more than 50 participating teams.
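The core aggregation idea is simple: average the per-voxel class probabilities of several independently trained models, then threshold or argmax the averaged map. A stdlib-Python sketch with hypothetical model outputs; real EMMA combines heterogeneous architectures and training configurations, not toy vectors:

```python
def ensemble_mean(prob_maps):
    """Average per-voxel tumour probabilities from several models (one list per model)."""
    n = len(prob_maps)
    return [sum(m[i] for m in prob_maps) / n for i in range(len(prob_maps[0]))]

# Three hypothetical models' tumour probabilities for four voxels
model_a = [0.9, 0.2, 0.6, 0.1]
model_b = [0.8, 0.4, 0.4, 0.2]
model_c = [0.7, 0.3, 0.5, 0.3]

mean = ensemble_mean([model_a, model_b, model_c])
segmentation = [int(p > 0.5) for p in mean]   # threshold the averaged map
# mean ≈ [0.8, 0.3, 0.5, 0.2] → segmentation == [1, 0, 0, 0]
```

Averaging washes out each model's idiosyncratic errors, which is why the ensemble is less sensitive to any single model's meta-parameters than its members are.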

    Development of machine learning schemes for segmentation, characterisation, and evolution prediction of white matter hyperintensities in structural brain MRI

    White matter hyperintensities (WMH) are neuroradiological features seen in T2 Fluid-Attenuated Inversion Recovery (T2-FLAIR) brain magnetic resonance imaging (MRI) and have been commonly associated with stroke, ageing, dementia, and Alzheimer’s disease (AD) progression. As a marker of neurodegenerative disease, WMH may change over time and follow the clinical condition of the patient. In contrast to early longitudinal studies of WMH, recent studies have suggested that the progression of WMH may be a dynamic, non-linear process in which different clusters of WMH may shrink, stay unchanged, or grow. In this thesis, these changes are referred to as the “evolution of WMH”. The main objective of this thesis is to develop machine learning methods for predicting WMH evolution in structural brain MRI from a one-time (baseline) assessment. Predicting the evolution of WMH is challenging because the rate and direction of WMH evolution vary greatly across previous studies. Furthermore, the evolution of WMH is a non-deterministic problem because some clinical factors that possibly influence it are still not known. In this thesis, different deep learning schemes and data modalities are proposed to produce the best estimation of WMH evolution. Furthermore, a scheme to simulate the non-deterministic nature of WMH evolution, named “auxiliary input”, is also proposed. In addition to the prediction model for WMH evolution, machine learning methods for segmentation of early WMH, characterisation of WMH, and simulation of WMH progression and regression are also developed as parts of this thesis.
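The “auxiliary input” idea (feeding an extra random input alongside the baseline scan, so that repeated predictions sample different plausible evolutions) can be sketched as follows. All names here are hypothetical, and the toy predictor stands in for the thesis's trained deep network:

```python
import random

def predict_evolution(baseline_wmh, model, rng=random):
    """Simulate non-deterministic WMH evolution: pair the baseline lesion map
    with a random auxiliary channel and let the model map both to a prediction.
    Re-running with a fresh auxiliary sample gives a different, equally
    plausible outcome for the same baseline scan."""
    aux = [rng.gauss(0.0, 1.0) for _ in baseline_wmh]   # auxiliary input channel
    return model(baseline_wmh, aux)

# Toy stand-in for the trained network: auxiliary noise nudges each voxel
toy_model = lambda base, aux: [max(0.0, b + 0.1 * a) for b, a in zip(base, aux)]

baseline = [0.0, 0.4, 0.8]
outcome_1 = predict_evolution(baseline, toy_model)
outcome_2 = predict_evolution(baseline, toy_model)   # generally differs from outcome_1
```

The spread of outcomes across repeated draws is what lets a deterministic network express the uncertainty that unknown clinical factors introduce.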

    Advanced machine learning methods for oncological image analysis

    Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in image acquisition and hardware development over the past three decades have resulted in modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. The applications of medical imaging have therefore become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow.
    This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, it consists of six studies: the first two introduce novel methods for tumor segmentation, and the last four develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and to integrate this pipeline into segmentation networks to improve segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II addresses a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach based on the concept of image inpainting is proposed to segment lung and head-neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy.
    Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account local intra-nodule heterogeneities and global contextual information. Study IV compares the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end Deep Learning (DL) models, and deep features-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing the learned deep features into radiomic features for boosting the classification power. Study V focuses on the early assessment of lung tumor response to the applied treatments by proposing a novel, physiologically interpretable feature set. This feature set was employed to quantify changes in tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of patients two years after the last treatment session. The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical-based features show more predictive power in inter-dataset analyses. In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine learning techniques in methods developed for the quantitative assessment of tumor characteristics, contributing to the essential procedures of cancer diagnosis and prognosis.
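The feature-fusion finding from Study IV rests on a simple mechanism: standardise each feature set so scales are comparable, then concatenate before an ordinary classifier. A hedged stdlib-Python sketch with hypothetical feature values, not the thesis's actual pipeline:

```python
def zscore(xs):
    """Standardise one feature vector (zero mean, unit variance) before fusion."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    std = var ** 0.5 or 1.0        # guard against all-equal features
    return [(x - mean) / std for x in xs]

def fuse(radiomic_features, deep_features):
    """Concatenate standardised radiomic and deep feature vectors; the fused
    vector then feeds a downstream classifier."""
    return zscore(radiomic_features) + zscore(deep_features)

radiomic = [12.0, 0.37, 451.0]     # e.g. hand-crafted shape/texture descriptors
deep = [0.11, -0.52, 0.08, 0.93]   # e.g. penultimate-layer network activations
fused = fuse(radiomic, deep)       # 7-dimensional fused feature vector
```

Standardising first matters because radiomic features (volumes, intensities) and learned activations live on very different numeric scales, and an unscaled concatenation would let one set dominate the classifier.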