88 research outputs found

    Towards personalized diagnosis of Glioblastoma in Fluid-attenuated inversion recovery (FLAIR) by topological interpretable machine learning

    Glioblastoma multiforme (GBM) is a fast-growing and highly invasive brain tumour; it tends to occur in adults between the ages of 45 and 70 and accounts for 52 percent of all primary brain tumours. GBMs are usually detected by magnetic resonance imaging (MRI), among whose sequences fluid-attenuated inversion recovery (FLAIR) produces a high-quality digital representation of the tumour. Fast detection and segmentation techniques are needed to overcome subjective judgement by medical doctors (MDs). In the present investigation, we demonstrate by means of numerical experiments that topological features combined with textural features can be employed for GBM analysis and morphological characterization on FLAIR. To this end, we performed three numerical experiments. In the first experiment, topological data analysis (TDA) of a simplified 2D tumour-growth mathematical model allowed us to understand the bio-chemical conditions that facilitate tumour growth: the higher the concentration of chemical nutrients, the more virulent the process. In the second experiment, TDA was used to evaluate GBM temporal progression on FLAIR recorded within 90 days following completion of treatment (e.g., chemo-radiation therapy, CRT) and at progression. The experiment confirmed that persistent entropy is a viable statistic for monitoring GBM evolution during the follow-up period. In the third experiment, we developed a novel methodology based on topological and textural features and interpretable machine learning for automatic GBM classification on FLAIR. The algorithm reached a classification accuracy of up to 97%. Comment: 22 pages; 16 figures

    Automated brain tumour identification using magnetic resonance imaging: a systematic review and meta-analysis

    BACKGROUND: Automated brain tumor identification facilitates diagnosis and treatment planning. We evaluate the performance of traditional machine learning (TML) and deep learning (DL) in brain tumor detection and segmentation, using MRI. METHODS: A systematic literature search from January 2000 to May 8, 2021 was conducted. Study quality was assessed using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Detection meta-analysis was performed using a unified hierarchical model. Segmentation studies were evaluated using a random effects model. Sensitivity analysis was performed for externally validated studies. RESULTS: Of 224 studies included in the systematic review, 46 segmentation and 38 detection studies were eligible for meta-analysis. In detection, DL achieved a lower false positive rate compared to TML; 0.018 (95% CI, 0.011 to 0.028) and 0.048 (0.032 to 0.072) (P < .001), respectively. In segmentation, DL had a higher dice similarity coefficient (DSC), particularly for tumor core (TC); 0.80 (0.77 to 0.83) and 0.63 (0.56 to 0.71) (P < .001), persisting on sensitivity analysis. Both manual and automated whole tumor (WT) segmentation had “good” (DSC ≥ 0.70) performance. Manual TC segmentation was superior to automated; 0.78 (0.69 to 0.86) and 0.64 (0.53 to 0.74) (P = .014), respectively. Only 30% of studies reported external validation. CONCLUSIONS: The comparable performance of automated to manual WT segmentation supports its integration into clinical practice. However, manual outperformance for sub-compartmental segmentation highlights the need for further development of automated methods in this area. Compared to TML, DL provided superior performance for detection and sub-compartmental segmentation. Improvements in the quality and design of studies, including external validation, are required for the interpretability and generalizability of automated models
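The dice similarity coefficient (DSC) used throughout the segmentation meta-analysis is twice the overlap of two masks divided by their combined size, so the "good" threshold (DSC ≥ 0.70) can be checked directly. A minimal sketch with hypothetical toy masks:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks agree perfectly, by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

# A 4-pixel "automated" mask and a 3-pixel "manual" mask sharing 2 pixels:
auto = np.zeros((4, 4), dtype=bool); auto[1:3, 1:3] = True
manual = np.zeros((4, 4), dtype=bool); manual[1:4, 1] = True
print(dice(auto, manual))  # 2*2 / (4+3) = 4/7 ≈ 0.571
```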

    A hybrid method for traumatic brain injury lesion segmentation

    Traumatic brain injuries are a significant cause of disability and loss of life. Physicians employ computed tomography (CT) images to observe the trauma and measure its severity for diagnosis and treatment. Because hemorrhage overlaps normal brain tissue in intensity, segmentation methods sometimes produce false results, and applying AI methods to patient datasets of brain-hemorrhage CT scans remains challenging. We propose a novel free-form object model for brain injury CT image segmentation based on superpixel processing with the simple linear iterative clustering (SLIC) method, which keeps the segmentation aligned with reduced-intensity boundaries. The segmented image contains hemorrhage regions marked in red, which are used to modify the free-form object model; the contour labelled by the red mark is the output of our free-form object model. We further propose a hybrid image segmentation approach that combines edge detection and dilation. The approach reduces computational cost and achieved 96.68% accuracy. Brain hemorrhage images are segmented within the clustered region to construct the free-form object model. The study also presents directions for future research in this domain
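The hybrid edge-detection-and-dilation step can be illustrated in plain NumPy; the Sobel kernels, threshold and toy image below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sobel_edges(img, thresh=0.5):
    """Binary edge map: Sobel gradient magnitude above a fraction of its maximum."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(3):            # correlate with both 3x3 kernels
        for j in range(3):
            win = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()

def dilate(mask, iterations=1):
    """Binary dilation with a 3x3 structuring element."""
    out = mask.copy()
    for _ in range(iterations):
        pad = np.pad(out, 1)
        out = np.zeros_like(mask)
        for di in range(3):       # OR together the 8-neighbourhood shifts
            for dj in range(3):
                out |= pad[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    return out

# A bright square on a dark background: edges trace its border, and
# dilation thickens them toward a closed region boundary.
img = np.zeros((12, 12)); img[4:8, 4:8] = 1.0
edges = sobel_edges(img)
print(dilate(edges).sum() > edges.sum())  # True: dilation grows the mask
```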

    Rich probabilistic models for semantic labeling

    The goal of this monograph is to explore the methods and applications of semantic labeling. Our contributions to this rapidly developing topic concern specific aspects of modeling and inference in probabilistic models, and their applications in the interdisciplinary fields of computer vision, medical image processing, and remote sensing

    Supervised learning-based multimodal MRI brain image analysis

    Medical imaging plays an important role in clinical procedures related to cancer, such as diagnosis, treatment selection, and therapy response evaluation. Magnetic resonance imaging (MRI) is one of the most popular acquisition modalities, widely used in brain tumour analysis, and can be acquired with different acquisition protocols, e.g. conventional and advanced. Automated segmentation of brain tumours in MR images is a difficult task due to their high variation in size, shape and appearance. Although many studies have been conducted, it remains a challenging task, and improving the accuracy of tumour segmentation is an ongoing field of research. The aim of this thesis is to develop a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumour (tumour core and oedema) from multimodal MRI images. In this thesis, firstly, the whole brain tumour is segmented from fluid-attenuated inversion recovery (FLAIR) MRI, which is commonly acquired in clinics. The segmentation is achieved using region-wise classification, in which regions are derived from superpixels. Several image features, including intensity-based features, Gabor textons, fractal analysis and curvatures, are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. Extremely randomised trees (ERT) classify each superpixel as tumour or non-tumour. Secondly, the method is extended to 3D supervoxel-based learning for segmentation and classification of tumour tissue subtypes in multimodal MRI brain images. Supervoxels are generated using the information across the multimodal MRI data set. This is followed by a random forests (RF) classifier that classifies each supervoxel as tumour core, oedema or healthy brain tissue. The information from the advanced protocols of diffusion tensor imaging (DTI), i.e. 
    isotropic (p) and anisotropic (q) components is also incorporated into the conventional MRI to improve segmentation accuracy. Thirdly, to further improve the segmentation of tumour tissue subtypes, machine-learned features from a fully convolutional neural network (FCN) are investigated and combined with hand-designed texton features to encode global information and local dependencies in the feature representation. The score map of pixel-wise predictions, learned from the multimodal MRI training dataset using the FCN, is used as a feature map. The machine-learned features, along with the hand-designed texton features, are then fed to a random forest to classify each MRI image voxel as normal brain tissue or one of the different parts of the tumour. The methods are evaluated on two datasets: 1) a clinical dataset, and 2) the publicly available Multimodal Brain Tumour Image Segmentation Benchmark (BRATS) 2013 and 2017 datasets. The experimental results demonstrate the high detection and segmentation performance of the single-modal (FLAIR) method. The average detection sensitivity, balanced error rate (BER) and Dice overlap measure for the segmented tumour against the ground truth are 89.48%, 6% and 0.91, respectively, for the clinical data; for the BRATS dataset, the corresponding results are 88.09%, 6% and 0.88. The corresponding results for the tumour (including tumour core and oedema) with the multimodal MRI method are 86%, 7% and 0.84 for the clinical dataset, and 96%, 2% and 0.89 for the BRATS 2013 dataset. The results of the FCN-based method show that applying the RF classifier to multimodal MRI images, using machine-learned features based on the FCN and hand-designed features based on textons, provides promising segmentations. The Dice overlap measure for automatic brain tumour segmentation against ground truth on the BRATS 2013 dataset is 0.88, 0.80 and 0.73 for complete tumour, core and enhancing tumour, respectively, which is competitive with state-of-the-art methods. The corresponding results for the BRATS 2017 dataset are 0.86, 0.78 and 0.66, respectively. The methods demonstrate promising results in the segmentation of brain tumours, providing a close match to expert delineation across all grades of glioma and leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management. In the experiments, textons demonstrated their advantage in providing significant information to distinguish various patterns in both 2D and 3D spaces. The segmentation accuracy was also greatly increased by fusing information from multimodal MRI images. Moreover, a unified framework is presented that complementarily integrates hand-designed features with machine-learned features to produce more accurate segmentation. The hand-designed features from a shallow network (with designable filters) encode prior knowledge and context, while the machine-learned features from a deep network (with trainable filters) learn intrinsic features. Both global and local information are combined using these two types of networks, improving the segmentation accuracy
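The region-wise classification described above starts by summarising each superpixel with a feature vector. A minimal sketch using only intensity statistics (the thesis additionally uses Gabor textons, fractal analysis and curvature features before the ERT/RF classifiers; the toy image and label map below are illustrative):

```python
import numpy as np

def superpixel_features(image, labels):
    """Per-superpixel intensity features: (mean, std, min, max) for each region.

    `labels` assigns every pixel a superpixel id; one feature row is
    produced per superpixel, ready for a region-wise classifier.
    """
    feats = []
    for sp in np.unique(labels):
        vals = image[labels == sp]
        feats.append([vals.mean(), vals.std(), vals.min(), vals.max()])
    return np.array(feats)

# Toy FLAIR-like slice: a bright "tumour" superpixel on a dark background one.
image = np.zeros((8, 8)); image[2:6, 2:6] = 1.0
labels = np.zeros((8, 8), dtype=int); labels[2:6, 2:6] = 1
X = superpixel_features(image, labels)
print(X.shape)            # (2, 4): two superpixels, four features each
print(X[1, 0] > X[0, 0])  # True: the tumour superpixel has the higher mean
```

Each row of `X` would then be classified as tumour or non-tumour, labelling all pixels of that superpixel at once.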

    Pseudo-label refinement using superpixels for semi-supervised brain tumour segmentation

    Training neural networks using limited annotations is an important problem in the medical domain. Deep Neural Networks (DNNs) typically require large annotated datasets to achieve acceptable performance, which, in the medical domain, are especially difficult to obtain as they require significant time from expert radiologists. Semi-supervised learning aims to overcome this problem by learning segmentations from very little annotated data whilst exploiting large amounts of unlabelled data. However, the best-known technique, which utilises inferred pseudo-labels, is vulnerable to inaccurate pseudo-labels degrading performance. We propose a framework based on superpixels - meaningful clusters of adjacent pixels - to improve the accuracy of the pseudo-labels and address this issue. Our framework combines superpixels with semi-supervised learning, refining the pseudo-labels during training using the features and edges of the superpixel maps. The method is evaluated on a multimodal magnetic resonance imaging (MRI) dataset for the task of brain tumour region segmentation, and demonstrates improved performance over the standard semi-supervised pseudo-labelling baseline under a reduced annotator burden, when only 5 annotated patients are available. We report DSC=0.824 and DSC=0.707 on the test set for the whole-tumour and tumour-core regions, respectively
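One simplified reading of the refinement step is a per-superpixel vote over the inferred pseudo-labels, so that isolated noisy predictions are smoothed away by their superpixel. This sketch is an assumption for illustration, not the paper's exact update rule:

```python
import numpy as np

def refine_pseudo_labels(pseudo, superpixels):
    """Reassign every pixel the majority pseudo-label of its superpixel."""
    refined = pseudo.copy()
    for sp in np.unique(superpixels):
        region = superpixels == sp
        votes = np.bincount(pseudo[region])  # label histogram within the superpixel
        refined[region] = votes.argmax()
    return refined

# Two superpixels (left/right halves) and one stray "tumour" pixel
# inside the background superpixel.
superpixels = np.zeros((4, 4), dtype=int); superpixels[:, 2:] = 1
pseudo = np.zeros((4, 4), dtype=int); pseudo[:, 2:] = 1
pseudo[0, 0] = 1  # noisy prediction
refined = refine_pseudo_labels(pseudo, superpixels)
print(refined[0, 0])  # 0: the stray label is voted away
```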

    Brain Tumor Segmentation and Identification Using Particle Imperialist Deep Convolutional Neural Network in MRI Images

    In the past few years, segmentation for medical applications using magnetic resonance (MR) images has received considerable attention. Segmentation of brain tumours using MRI provides an effective platform for planning tumour treatment and diagnosis; hence, a novel framework for improved segmentation is needed. The proposed Particle Imperialist Deep Convolutional Neural Network (PI-Deep CNN) framework is intended to address the problems of segmenting and categorizing brain tumours. The input MRI brain image is segmented using the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, and features are then extracted using the Scatter Local Neighborhood Structure (SLNS) descriptor, which is obtained by combining the scattering transform with the Local Neighborhood Structure (LNS) descriptor. A Deep CNN trained with the proposed Particle Imperialist algorithm then performs tumour-level classification. The classifier distinguishes different tumour levels: normal (tumour-free), abnormal, malignant tumour, and non-malignant tumour. Except for the tumour-free normal cells, each cell is identified as a tumour cell and subjected to additional diagnostics. In experiments on the BRATS database, the proposed method obtained a maximum accuracy of 0.965
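DBSCAN, used here for the initial segmentation, grows clusters from core points that have at least `min_pts` neighbours within radius `eps`, and labels unreachable points as noise. A compact sketch on 2D points (the paper applies it to MRI data; the point set and parameters below are illustrative):

```python
import numpy as np

def dbscan(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN: grow clusters from core points; label -1 marks noise."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    # Pairwise distances; a point's neighbourhood includes itself.
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbours = [np.where(dist[i] <= eps)[0] for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbours[i]) < min_pts:
            continue  # already assigned, or not a core point
        labels[i] = cluster
        stack = list(neighbours[i])
        while stack:  # expand over density-reachable points
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbours[j]) >= min_pts:  # j is also a core point
                    stack.extend(neighbours[j])
        cluster += 1
    return labels

# Two tight clusters of three points each, plus one far-away noise point.
pts = [(0, 0), (0.5, 0), (0, 0.5), (10, 10), (10.5, 10), (10, 10.5), (50, 50)]
labels = dbscan(pts, eps=1.0, min_pts=3)
print(labels)  # clusters 0 and 1; the last point is noise (-1)
```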
