71 research outputs found

    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    This two-volume set LNCS 12962 and 12963 constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These were held jointly at the 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021, in September 2021. The 91 revised papers presented in these volumes were selected from 151 submissions. Due to the COVID-19 pandemic, the conference was held virtually. This is an open access book.

    Advanced machine learning methods for oncological image analysis

    Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non- or minimally invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into the segmentation networks to improve segmentation accuracy.
    The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II aims to address a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach is proposed based on the concept of image inpainting to segment lung and head and neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy. Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account the local intra-nodule heterogeneities and the global contextual information. Study IV seeks to compare the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end Deep Learning (DL) models, and deep features-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing the learned deep features into the radiomic features for boosting the classification power. Study V focuses on the early assessment of lung tumor response to the applied treatments by proposing a novel feature set that can be interpreted physiologically. This feature set was employed to quantify the changes in the tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of the patients two years after the last treatment session.
    The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head and neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical-based features show more predictive power in inter-dataset analyses. In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine-learning techniques in the methods developed for the quantitative assessment of tumor characteristics, and contribute to the essential procedures of cancer diagnosis and prognosis.
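Study V's idea of quantifying changes in tumor characteristics between longitudinal scans can be sketched as simple relative-change ("delta") features. The feature names and values below are invented for illustration; the thesis's actual, physiologically interpretable feature set is not reproduced here.

```python
# Hypothetical sketch: longitudinal "delta" features between two time
# points, one simple way to quantify treatment response from serial scans.
# Feature names and values are illustrative, not from the thesis.

def delta_features(baseline: dict, follow_up: dict) -> dict:
    """Relative change of each feature between two scans."""
    return {
        name: (follow_up[name] - baseline[name]) / baseline[name]
        for name in baseline
    }

baseline = {"tumor_volume_ml": 42.0, "mean_uptake": 6.5}
follow_up = {"tumor_volume_ml": 21.0, "mean_uptake": 5.2}

changes = delta_features(baseline, follow_up)
# A large negative volume change would be read as response to treatment.
print(changes["tumor_volume_ml"])  # -0.5
```

Such per-feature changes could then feed a survival classifier, in the spirit of the two-year survival prediction described above.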

    Radiomic Features to Predict Overall Survival Time for Patients with Glioblastoma Brain Tumors Based on Machine Learning and Deep Learning Methods

    Machine Learning (ML) methods, including Deep Learning (DL) methods, have been employed in the medical field to improve the diagnosis process and patients' prognosis outcomes. Glioblastoma multiforme is an extremely aggressive glioma brain tumor with a poor survival rate. The behavior of Glioblastoma brain tumors is still not fully understood, and some contributing factors remain unrecognized. In fact, understanding tumor behavior is important for deciding on a proper treatment plan and improving a patient's health. The aim of this dissertation is to develop a Computer-Aided-Diagnosis system (CADiag) based on ML/DL methods to automatically estimate the Overall Survival Time (OST) for patients with Glioblastoma brain tumors from medical imaging and non-imaging data. This system is developed to enhance and speed up the diagnosis process, as well as to increase understanding of the behavior of Glioblastoma brain tumors. The proposed OST prediction system is developed based on a classification process that categorizes a GBM patient into one of the following three survival time groups: short-term (<10 months), mid-term (10-15 months), and long-term (>15 months). The Brain Tumor Segmentation challenge (BraTS) dataset is used to develop the automatic OST prediction system. This dataset consists of multimodal preoperative Magnetic Resonance Imaging (mpMRI) data and clinical data. The training data is too small to train an accurate OST prediction model based on DL methods. Therefore, traditional ML methods such as Support Vector Machine (SVM), Neural Network (NN), K-Nearest Neighbor (KNN), and Decision Tree (DT) were used to develop the OST prediction model for GBM patients. The main contributions from the perspective of the ML field include developing and evaluating several novel radiomic feature extraction methods to produce an automatic and reliable OST prediction system based on a classification task. These methods cover volumetric, shape, location, texture, histogram-based, and DL features.
    Some of these radiomic features can be extracted directly from MRI images, such as statistical texture features and histogram-based features. However, preprocessing methods are required to automatically extract other radiomic features from MRI images, such as the volume, shape, and location information of the GBM brain tumors. Therefore, a three-dimensional (3D) segmentation DL model based on a modified U-Net architecture is developed to identify and localize the three glioma brain tumor subregions, peritumoral edematous/invaded tissue (ED), GD-enhancing tumor (ET), and the necrotic tumor core (NCR), in multimodal MRI scans. The segmentation results are used to calculate the volume, location and shape information of a GBM tumor. Two novel approaches based on volumetric, shape, and location information are proposed and evaluated in this dissertation. To improve the performance of the OST prediction system, information fusion strategies based on data fusion, feature fusion and decision fusion are employed. The best prediction model was developed based on feature fusion and ensemble models using NN classifiers. The proposed OST prediction system achieved competitive results in BraTS 2020, with accuracies of 55.2% and 55.1% on the BraTS 2020 validation and test datasets, respectively. In sum, developing automatic CADiag systems based on robust features and ML methods, such as our developed OST prediction system, enhances the diagnosis process in terms of cost, accuracy, and time. Our OST prediction system was evaluated from the perspective of the ML field. In addition, preprocessing steps are essential not only to improve the quality of the features but also to boost the performance of the prediction system. To test the effectiveness of our developed OST system in medical decisions, we suggest further evaluations from the perspective of biology and medical decision making, so that the system can then be involved in the diagnosis process as a fast, inexpensive and automatic diagnosis method.
    To improve the performance of our developed OST prediction system, we believe it is necessary to increase the size of the training data, involve multi-modal data, and/or supply any uncertain or missing information in the data (such as patients' resection statuses, gender, etc.). The DL structure is able to extract numerous meaningful low-level and high-level radiomic features during the training process without any feature-type nomination by researchers. We thus believe that DL methods could achieve better predictions than traditional ML methods if sufficiently large and proper data were available.
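The three-bin survival classification above, paired with one of the traditional ML methods mentioned (KNN), can be sketched as follows. The radiomic feature vectors (here a volume and a shape score) and the exact short-term cutoff are illustrative assumptions, not the dissertation's actual inputs.

```python
# Minimal sketch of the OST classification framing: assign a patient to a
# short-, mid-, or long-term survival group with a k-nearest-neighbour vote
# over radiomic feature vectors. Features and values are hypothetical.
import math
from collections import Counter

def survival_group(months: float) -> str:
    # Mirrors the three bins used in the abstract.
    if months < 10:
        return "short-term"
    if months <= 15:
        return "mid-term"
    return "long-term"

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); query: feature_vector."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train = [
    ([30.0, 0.8], survival_group(6)),   # large, irregular tumor
    ([28.0, 0.7], survival_group(8)),
    ([12.0, 0.4], survival_group(12)),
    ([10.0, 0.3], survival_group(14)),
    ([4.0, 0.1], survival_group(20)),   # small, compact tumor
]
print(knn_predict(train, [29.0, 0.75]))  # short-term
```

The same framing extends to SVM, NN, or DT classifiers, and to the fused feature sets described in the abstract.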

    Brain Tumor Diagnosis Support System: A Decision Fusion Framework

    An important factor in providing effective and efficient therapy for brain tumors is early and accurate detection, which can increase survival rates. Current image-based tumor detection and diagnosis techniques are heavily dependent on interpretation by neuro-specialists and/or radiologists, making the evaluation process time-consuming and prone to human error and subjectivity. Besides, widespread use of MR spectroscopy requires specialized processing and assessment of the data, and a clear, fast presentation of the results as images or maps for routine clinical interpretation of an exam. Automatic brain tumor detection and classification have the potential to offer greater efficiency and more accurate predictions. However, the performance accuracy of automatic detection and classification techniques tends to be dependent on the specific image modality and is well known to vary from technique to technique. For this reason, it is prudent to examine the variation in performance across these methods in order to obtain consistently high accuracy. The goal of the proposed framework is to design, implement, and evaluate classification software that discriminates among brain tumor types on magnetic resonance imaging (MRI) using textural features. This thesis introduces a brain tumor detection support system that involves the use of a variety of tumor classifiers. The system is designed as a decision fusion framework that enables these multiple classifiers to analyze medical images, such as those obtained from magnetic resonance imaging (MRI). The fusion procedure is grounded in Dempster-Shafer evidence theory. Numerous experimental scenarios have been implemented to validate the efficiency of the proposed framework. Compared with alternative approaches, the outcomes show that the methodology developed in this thesis achieves higher accuracy and greater computational efficiency.
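The Dempster-Shafer combination step at the heart of such a fusion framework can be sketched for two hypothetical classifiers; the tumour-type hypotheses and mass values below are invented for illustration and are not the thesis's actual classifiers.

```python
# Hedged sketch of Dempster's rule of combination: two classifiers each
# assign belief mass to sets of tumour-type hypotheses, and conflicting
# mass is normalised away. Masses here are made-up illustrations.
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions keyed by frozenset hypotheses."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    # Normalise by the non-conflicting mass (1 - K).
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

glioma = frozenset({"glioma"})
meningioma = frozenset({"meningioma"})
either = glioma | meningioma               # ignorance: "one of the two"

m1 = {glioma: 0.6, either: 0.4}                    # e.g. texture classifier
m2 = {glioma: 0.5, meningioma: 0.3, either: 0.2}   # e.g. shape classifier
fused = dempster_combine(m1, m2)
print(round(fused[glioma], 3))  # 0.756
```

Agreement between the two sources strengthens the belief in "glioma" beyond either individual classifier, which is the behaviour the decision-fusion framework relies on.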

    Deep learning in medical imaging and radiation therapy

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd

    Biomedical image analysis of brain tumours through the use of artificial intelligence

    Thesis (MCom)--Stellenbosch University, 2022. ENGLISH SUMMARY: Cancer is one of the leading causes of morbidity and mortality on a global scale; cancer of the brain, more specifically, is one of the rarest forms. One of the major challenges is that of timely diagnosis. In the ongoing fight against cancer, early and accurate detection in combination with effective treatment strategy planning remains one of the best tools for improved patient outcomes and success. Emphasis has been placed on the identification and classification of brain lesions in patients - that is, either the absence or presence of brain tumours. In the case of malignant brain tumours, it is critical to classify patients into either high-grade or low-grade brain lesion groups: different gradings of brain tumours have different prognoses, and thus different survival rates. The growth in the availability and accessibility of big data due to digitisation has led individuals in the area of bioinformatics, in both academia and industry, to apply and evaluate artificial intelligence techniques. However, one of the most important challenges, not only in the field of bioinformatics but also in other realms, is transforming raw data into valuable insights and knowledge. In this research thesis, artificial intelligence techniques that can detect vital and fundamental underlying patterns in the data are reviewed. The models may provide significant predictive performance to assist with decision making. Many artificial intelligence techniques have been applied to brain tumour classification and segmentation in the research literature. In this study, however, the theoretical background of two more traditional machine learning methods, namely k-nearest neighbours and support vector machines, is discussed. In recent years, deep learning (artificial neural networks) has gained prominence due to its ability to handle copious amounts of data.
    The specialised version of the artificial neural network that is reviewed is the convolutional neural network, the rationale being that this technique is designed for visual imagery. In addition to making use of the convolutional neural network architecture, the study reviews the training of neural networks, which involves the use of optimisation techniques and is considered one of the most difficult parts. Utilising only one learning algorithm (optimisation technique) in the architecture of convolutional neural network models for classification tasks may be regarded as insufficient unless there is strong support in the design of the analysis for using a particular technique. Nine state-of-the-art optimisation techniques formed part of a comparative study to determine if there was any improvement in the classification and segmentation of high-grade or low-grade brain tumours. These machine learning and deep learning techniques have proved successful in image classification and - more relevant to this research - brain tumour classification. To supplement the theoretical knowledge, these artificial intelligence methodologies (models) are applied through the exploration of magnetic resonance imaging scans of brain lesions.
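The comparative-study design above (the same task run under different optimisation techniques) can be illustrated at toy scale with two common optimisers, plain gradient descent and momentum, minimising the same loss. The quadratic loss stands in for CNN training and is not from the thesis; the nine techniques compared there are not reproduced here.

```python
# Toy version of an optimiser comparison: run two optimisation techniques
# on the same loss L(w) = (w - 3)^2 and check that both reach the minimum.
# At thesis scale the "loss" is a CNN's training objective.

def grad(w):
    # d/dw of the toy loss L(w) = (w - 3)^2
    return 2.0 * (w - 3.0)

def sgd(w=0.0, lr=0.1, steps=200):
    """Plain gradient descent."""
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def momentum(w=0.0, lr=0.1, beta=0.9, steps=200):
    """Gradient descent with (heavy-ball) momentum."""
    v = 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(w)
        w += v
    return w

# Both optimisers should converge to the minimum at w = 3; a real
# comparative study would record speed and final loss for each technique.
print(sgd(), momentum())  # both close to 3.0
```

Swapping in further update rules (Adam, RMSprop, etc.) under the same harness is exactly the comparison pattern the study describes, scaled down.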

    Supervised learning-based multimodal MRI brain image analysis

    Medical imaging plays an important role in clinical procedures related to cancer, such as diagnosis, treatment selection, and therapy response evaluation. Magnetic resonance imaging (MRI) is one of the most popular acquisition modalities; it is widely used in brain tumour analysis and can be acquired with different acquisition protocols, e.g. conventional and advanced. Automated segmentation of brain tumours in MR images is a difficult task due to their high variation in size, shape and appearance. Although many studies have been conducted, it remains a challenging task, and improving the accuracy of tumour segmentation is an ongoing field of research. The aim of this thesis is to develop a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumour (tumour core and oedema) from multimodal MRI images. In this thesis, firstly, the whole brain tumour is segmented from fluid attenuated inversion recovery (FLAIR) MRI, which is commonly acquired in clinics. The segmentation is achieved using region-wise classification, in which regions are derived from superpixels. Several image features, including intensity-based, Gabor textons, fractal analysis and curvatures, are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. An extremely randomised trees (ERT) classifier then labels each superpixel as tumour or non-tumour. Secondly, the method is extended to 3D supervoxel-based learning for segmentation and classification of tumour tissue subtypes in multimodal MRI brain images. Supervoxels are generated using the information across the multimodal MRI data set. This is then followed by a random forests (RF) classifier to classify each supervoxel into tumour core, oedema or healthy brain tissue. The information from the advanced protocols of diffusion tensor imaging (DTI), i.e.
    isotropic (p) and anisotropic (q) components, is also incorporated into the conventional MRI to improve segmentation accuracy. Thirdly, to further improve the segmentation of tumour tissue subtypes, the machine-learned features from a fully convolutional neural network (FCN) are investigated and combined with hand-designed texton features to encode global information and local dependencies into the feature representation. The score map with pixel-wise predictions is used as a feature map, which is learned from the multimodal MRI training dataset using the FCN. The machine-learned features, along with the hand-designed texton features, are then applied to random forests to classify each MRI image voxel into normal brain tissues and different parts of tumour. The methods are evaluated on two datasets: 1) a clinical dataset, and 2) the publicly available Multimodal Brain Tumour Image Segmentation Benchmark (BRATS) 2013 and 2017 datasets. The experimental results demonstrate the high detection and segmentation performance of the single-modality (FLAIR) method. The average detection sensitivity, balanced error rate (BER) and the Dice overlap measure for the segmented tumour against the ground truth for the clinical data are 89.48%, 6% and 0.91, respectively; whilst for the BRATS dataset, the corresponding evaluation results are 88.09%, 6% and 0.88, respectively. The corresponding results for the tumour (including tumour core and oedema) in the case of the multimodal MRI method are 86%, 7% and 0.84 for the clinical dataset, and 96%, 2% and 0.89 for the BRATS 2013 dataset. The results of the FCN-based method show that the application of the RF classifier to multimodal MRI images using machine-learned features based on the FCN and hand-designed features based on textons provides promising segmentations.
    The Dice overlap measure for automatic brain tumor segmentation against ground truth for the BRATS 2013 dataset is 0.88, 0.80 and 0.73 for complete tumor, core and enhancing tumor, respectively, which is competitive with the state-of-the-art methods. The corresponding results for the BRATS 2017 dataset are 0.86, 0.78 and 0.66, respectively. The methods demonstrate promising results in the segmentation of brain tumours, providing a close match to expert delineation across all grades of glioma and leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management. In the experiments, the texton features have demonstrated their advantage in providing significant information to distinguish various patterns in both 2D and 3D spaces. The segmentation accuracy has also been greatly increased by fusing information from multimodal MRI images. Moreover, a unified framework is presented that complementarily integrates hand-designed features with machine-learned features to produce more accurate segmentation. The hand-designed features from the shallow network (with designable filters) encode prior knowledge and context, while the machine-learned features from the deep network (with trainable filters) learn the intrinsic features. Both global and local information are combined using these two types of networks, which improves the segmentation accuracy.
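The Dice overlap measure quoted throughout the results above is defined as 2|A∩B|/(|A|+|B|) for a predicted and a ground-truth tumour mask. A minimal sketch for flat binary masks (the mask values are invented toy data):

```python
# Dice similarity coefficient between a predicted and a ground-truth
# binary mask (1 = tumour voxel), flattened to 1D sequences.

def dice(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary sequences."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks are a perfect match.
    return 2.0 * inter / total if total else 1.0

pred  = [0, 1, 1, 1, 0, 0, 1, 0]   # toy predicted mask
truth = [0, 1, 1, 0, 0, 1, 1, 0]   # toy ground-truth mask
print(dice(pred, truth))  # 0.75
```

A score of 1.0 means perfect overlap, so the 0.88-0.91 figures reported above indicate close agreement with expert delineation.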

    Advanced Imaging Analysis for Predicting Tumor Response and Improving Contour Delineation Uncertainty

    A dissertation submitted by Rebecca Nichole Mahon, MS, in partial fulfillment of the requirements for the degree of Doctor of Philosophy at Virginia Commonwealth University, 2018. Major Director: Dr. Elisabeth Weiss, Professor, Department of Radiation Oncology. Radiomics, an advanced form of imaging analysis, is a growing field of interest in medicine. Radiomics seeks to extract quantitative information from images through the use of computer vision techniques to assist in improving treatment. Early prediction of treatment response is one way of improving overall patient care. This work explores the feasibility of building predictive models from radiomic texture features extracted from magnetic resonance (MR) and computed tomography (CT) images of lung cancer patients. First, repeatable primary tumor texture features from each imaging modality were identified to ensure that a sufficient number of repeatable features existed for model development. Then a workflow was developed to build models that predict overall survival and local control using single-modality and multi-modality radiomics features. The workflow was also applied to normal tissue contours as a control study. Multiple significant models were identified for the single-modality MR- and CT-based models, while the multi-modality models were promising, indicating that exploration with a larger cohort is warranted. Another way advances in imaging analysis can be leveraged is in improving the accuracy of contours. Unfortunately, the tumor can be close in appearance to normal tissue on medical images, creating high uncertainty in the tumor boundary.
    As the entire defined target is treated, providing physicians with additional information when delineating the target volume can improve the accuracy of the contour and potentially reduce the amount of normal tissue incorporated into the contour. Convolutional neural networks were developed and trained to identify the tumor interface with normal tissue, with one network also trained to identify the tumor location. A mock tool was presented that uses the networks' output to provide the physician with the uncertainty in the prediction of the interface type and the probability of the contour delineation uncertainty exceeding 5 mm for the top three predictions.
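The presentation step of such a mock tool, turning raw network scores into top-three interface predictions with probabilities, might be sketched as below. The interface class names and logit values are hypothetical, invented for illustration; the dissertation's actual classes and networks are not reproduced here.

```python
# Hedged sketch: convert a network's raw scores for interface types into
# calibrated-looking probabilities (softmax) and report the top three,
# mirroring the uncertainty presentation described above.
import math

def softmax(scores):
    m = max(scores)                         # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical interface classes and network logits.
interface_types = ["tumor/lung", "tumor/chest wall",
                   "tumor/mediastinum", "no interface"]
raw_scores = [2.1, 0.3, -0.5, -1.2]

probs = softmax(raw_scores)
top3 = sorted(zip(interface_types, probs), key=lambda t: -t[1])[:3]
for name, p in top3:
    print(f"{name}: {p:.2f}")
```

A low top-1 probability (or three closely ranked classes) is exactly the situation in which flagging the boundary region to the physician is most valuable.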