127 research outputs found

    Overall Survival Prediction of Glioma Patients With Multiregional Radiomics

    Radiomics-guided prediction of overall survival (OS) in brain gliomas is seen as a significant problem in neuro-oncology. The ultimate goal is to develop a robust MRI-based approach (i.e., a radiomics model) that can accurately classify a novel subject as a short-term, medium-term, or long-term survivor. The BraTS 2020 challenge provides radiological imaging and clinical data (178 subjects) to develop and validate radiomics-based methods for OS classification in brain gliomas. In this study, we empirically evaluated the efficacy of four multiregional radiomics models for OS classification and quantified the robustness of predictions to variations in the automatic segmentation of the brain tumor volume. More specifically, we evaluated four radiomics models, namely the Whole Tumor (WT) radiomics model, the 3-subregions radiomics model, the 6-subregions radiomics model, and the 21-subregions radiomics model. The 3-subregions radiomics model is based on a physiological segmentation of the whole tumor volume (WT) into three non-overlapping subregions. The 6-subregions and 21-subregions radiomics models are based on an anatomical segmentation of the brain tumor into 6 and 21 anatomical regions, respectively. Moreover, we employed six segmentation schemes – five CNNs and one STAPLE-fusion method – to quantify the robustness of the radiomics models. Our experiments revealed that the 3-subregions radiomics model had the best predictive performance (mean AUC = 0.73) but poor robustness (RSD = 1.99), whereas the 6-subregions and 21-subregions radiomics models were more robust (RSD ≤ 1.39) with lower predictive performance (mean AUC ≤ 0.71). The poor robustness of the 3-subregions radiomics model was associated with highly variable and inferior segmentation of the tumor core and active tumor subregions, as quantified by the Hausdorff distance metric (4.4–6.5 mm) across the six segmentation schemes. Failure analysis revealed that the WT radiomics model, the 6-subregions radiomics model, and the 21-subregions radiomics model failed for the same subjects, which is attributed to their common requirement of accurate segmentation of the WT volume. Moreover, short-term survivors were largely misclassified by the radiomics models and had large segmentation errors (average Hausdorff distance of 7.09 mm). Lastly, we concluded that while STAPLE fusion can reduce segmentation errors, it is not a solution for learning accurate and robust radiomics models.
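
    To make the robustness analysis concrete, the sketch below shows one way to fuse several candidate whole-tumor masks with STAPLE and to score each candidate against the consensus with the Hausdorff distance, using SimpleITK. The file names are hypothetical placeholders; this is an illustrative sketch, not the study's actual evaluation code.

import SimpleITK as sitk

# Hypothetical file names for binary whole-tumor masks produced by different CNNs.
mask_paths = ["cnn1_wt.nii.gz", "cnn2_wt.nii.gz", "cnn3_wt.nii.gz"]
masks = [sitk.ReadImage(p, sitk.sitkUInt8) for p in mask_paths]

# STAPLE estimates a per-voxel probability of the consensus foreground label;
# thresholding it gives a fused binary mask.
staple_prob = sitk.STAPLE(masks)
fused = sitk.BinaryThreshold(staple_prob, lowerThreshold=0.5, upperThreshold=1.0,
                             insideValue=1, outsideValue=0)

# Hausdorff distance (in physical units, i.e. mm for typical MRI spacing)
# between each candidate mask and the fused consensus.
hd = sitk.HausdorffDistanceImageFilter()
for path, mask in zip(mask_paths, masks):
    hd.Execute(fused, mask)
    print(f"{path}: Hausdorff distance = {hd.GetHausdorffDistance():.2f}")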

    Computer-aided analysis of complex neurological data for age-based classification of upper limbs motor performance and radiomics-based survival prediction of brain tumors

    Nowadays, the availability of an ever-increasing amount of digital medical data collected through heterogeneous sources such as healthcare systems, sensors, and mobile consumer technologies makes it possible to perform computer-aided analyses aimed at improving the knowledge, diagnosis, and treatment of medical conditions. In this thesis, we worked with two medical datasets that can be used to study two different types of neurological disorders: motor control disorders (e.g., Parkinson’s disease) and brain tumors. The first dataset comprises the results of digital motor tests of the upper limbs taken by more than 10,000 users of a free and publicly available mobile application called MotorBrain. Motor tests are used by neurologists to assess human motor performance and support the diagnosis of disorders affecting motor control. Our first goal was to analyse the MotorBrain data with statistical methods to investigate the age-related behavior patterns of healthy subjects for the different motor tests included in the application. Results show that the collected data reveal the typical degradation of motor performance that is common with aging, thus supporting the appropriateness of the considered approach to motor performance data collection and potentially helping neurologists identify neurological disorders at an early stage by comparing new data with the available normative data. At the same time, the results highlight problems that emerge when data collection is performed in an unsupervised, non-clinical setting. Based on the results of the statistical analysis, we used machine learning to automatically classify users according to their motor performance. The idea is to use such classification to automatically flag cases whose motor performance differs significantly from the typical performance of their age group and thus require manual inspection by a neurologist. In particular, we used random forest and logistic regression classification techniques with Minimum Redundancy Maximum Relevance (MRMR) and Recursive Feature Elimination with SVM (RFE-SVM) feature selection methods. For each motor test, we were able to achieve good average accuracy in discriminating the motor performance of young and old adults, with the random forest method leading to better results. Similar results were obtained for multi-class discrimination based on 5 age groups. The second dataset we worked with consists of a standard set of MRI images of brain tumors that is often used to develop and validate radiomics-based methods for overall survival (OS) classification of brain gliomas. We specifically focused on two important steps of the radiomics process: segmentation and feature selection. We first used the MRI dataset to empirically evaluate the impact on OS classification of six different segmentation algorithms – five Convolutional Neural Networks and the STAPLE-fusion method – and four multiregional radiomics models (Whole Tumor (WT), 3-subregions, 6-subregions, and 21-subregions). Results of the evaluation show that the 3-subregions radiomics model has high predictive power but poor robustness, while the 6-subregions and 21-subregions radiomics models are more robust but have low predictive power. The poor robustness of the 3-subregions radiomics model was associated with highly variable and inferior segmentation of the tumor core and active tumor subregions, as quantified by the Hausdorff metric.
Failure analysis revealed that the WT radiomics model, the 6-subregions radiomics model, and the 21-subregions radiomics model failed for some subjects, possibly because of inaccurate segmentation of the WT volume. Moreover, short-term survivors were largely misclassified by the radiomics models and were associated with large segmentation errors. The STAPLE fusion method was able to circumvent these segmentation errors but was not found to be the ultimate solution in terms of its predictive power.
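
    As a concrete illustration of the classification setup described above, the following sketch combines RFE with a linear SVM for feature selection and a random forest classifier, evaluated with cross-validation in scikit-learn. The feature matrix and age-group labels are random placeholders standing in for the MotorBrain motor-test features, and the hyperparameters are illustrative rather than the thesis' tuned values.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))        # placeholder motor-test feature matrix
y = rng.integers(0, 2, size=200)      # placeholder labels: young (0) vs old (1)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    # RFE-SVM: recursively drop the features with the smallest linear-SVM weights.
    ("select", RFE(SVC(kernel="linear"), n_features_to_select=10)),
    ("classify", RandomForestClassifier(n_estimators=200, random_state=0)),
])

scores = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")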

    Machine Learning and Radiomic Features to Predict Overall Survival Time for Glioblastoma Patients

    Glioblastoma is an aggressive brain tumor with a low survival rate. Understanding tumor behavior by predicting prognosis outcomes is a crucial factor in deciding a proper treatment plan. In this paper, an automatic overall survival time (OST) prediction system for glioblastoma patients is developed on the basis of radiomic features and machine learning (ML). This system is designed to predict prognosis outcomes by classifying a glioblastoma patient into one of three survival groups: short-term, mid-term, and long-term. To develop the prediction system, a medical dataset based on imaging information from magnetic resonance imaging (MRI) and non-imaging information is used. A novel radiomic feature extraction method is proposed and developed on the basis of volumetric and location information of brain tumor subregions extracted from MRI scans. This method is based on calculating volumetric features from two brain sub-volumes obtained from the whole brain volume in MRI images using brain sectional planes (sagittal, coronal, and horizontal). Many experiments are conducted with various ML methods and combinations of feature extraction methods to develop the best OST system. In addition, fusions of both radiomic and non-imaging features are examined to improve the accuracy of the prediction system. The best performance was achieved by the neural network with feature fusion.
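
    A minimal sketch of one plausible reading of these volumetric/location features is given below: split the brain along the mid-sagittal, mid-coronal, and mid-axial planes through the brain centroid and measure how the tumor volume distributes across each pair of sub-volumes. The function and feature names are hypothetical, and the exact feature definition in the paper may differ.

import numpy as np

def plane_split_features(brain_mask, tumor_mask):
    """brain_mask, tumor_mask: 3-D boolean arrays on the same voxel grid."""
    # Brain centroid in voxel coordinates; axis-to-plane naming assumes a
    # conventional array orientation and is only illustrative.
    centroid = np.array(np.nonzero(brain_mask)).mean(axis=1)
    tumor_coords = np.nonzero(tumor_mask)
    feats = {}
    for axis, name in enumerate(("sagittal", "coronal", "axial")):
        coords = tumor_coords[axis]
        lower = int(np.count_nonzero(coords < centroid[axis]))
        upper = int(np.count_nonzero(coords >= centroid[axis]))
        total = max(lower + upper, 1)
        # Fraction of tumor voxels on either side of the mid-plane along this axis.
        feats[f"{name}_lower_fraction"] = lower / total
        feats[f"{name}_upper_fraction"] = upper / total
    feats["tumor_to_brain_volume_ratio"] = float(tumor_mask.sum()) / max(int(brain_mask.sum()), 1)
    return feats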

    Model-Based Approach for Diffuse Glioma Classification, Grading, and Patient Survival Prediction

    The work in this dissertation proposes model-based approaches for molecular mutation classification of gliomas, grading based on radiomics features and genomics, and prediction of the clinical outcome of diffuse gliomas in terms of overall patient survival. Diffuse gliomas are types of Central Nervous System (CNS) brain tumors that account for 25.5% of primary brain and CNS tumors and originate from the supportive glial cells. In the 2016 World Health Organization’s (WHO) criteria for CNS brain tumors, a major reclassification of the diffuse gliomas is presented based on their molecular mutations and growth behavior. Currently, the status of molecular mutations is determined by obtaining viable regions of tumor tissue samples. However, an increasing need to non-invasively analyze the clinical outcome of tumors requires careful modeling and co-analysis of radiomics (i.e., imaging features) and genomics (molecular and proteomics features). The variability of diffuse lower-grade gliomas (LGG), demonstrated by their heterogeneity, can be exemplified by radiographic imaging features (i.e., radiomics). Therefore, radiomics may be suggested as a crucial non-invasive marker in tumor diagnosis and prognosis. Consequently, we examine radiomics extracted from multi-resolution fractal representations of the tumor in classifying the molecular mutations of diffuse LGG non-invasively. Using the proposed radiomics in a decision-tree-based ensemble machine learning prediction model confirms the efficacy of these fractal features in glioma mutation prediction. Furthermore, this dissertation proposes a novel non-invasive statistical model to classify and predict LGG molecular mutations based on radiomics and count-based genomics data. The performance results of the proposed statistical model indicate that fusing radiomics with count-based genomics improves mutation prediction performance. In addition, a radiomics-based glioblastoma survival prediction framework is proposed in this work. The survival prediction framework includes two survival prediction pipelines that combine different feature selection and regression approaches. The framework is evaluated using two recent, widely used benchmark datasets from the Brain Tumor Segmentation (BraTS) challenges in 2017 and 2018. The first survival prediction pipeline offered the best overall performance in the 2017 challenge, and the second survival prediction pipeline offered the best performance on the validation dataset. In summary, in this work, we develop non-invasive computational and statistical models based on radiomics and genomics to investigate overall survival, tumor progression, and molecular classification in diffuse gliomas. The methods discussed in our study are important steps towards a non-invasive approach to diffuse brain tumor classification, grading, and patient survival prediction that may be recommended prior to invasive tissue sampling in a clinical setting.
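
    As a simple stand-in for the multi-resolution fractal representations mentioned above, the sketch below estimates a box-counting (Minkowski–Bouligand) fractal dimension from a 3-D binary tumor mask with NumPy. The dissertation's actual fractal feature set is richer; this is only an assumed, illustrative formulation.

import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a non-empty 3-D binary mask."""
    mask = mask.astype(bool)
    counts = []
    for s in sizes:
        # Pad so every axis is divisible by the box size, then count boxes
        # that contain at least one foreground voxel.
        pad = [(0, (-dim) % s) for dim in mask.shape]
        m = np.pad(mask, pad)
        nz, ny, nx = (d // s for d in m.shape)
        blocks = m.reshape(nz, s, ny, s, nx, s)
        counts.append(blocks.any(axis=(1, 3, 5)).sum())
    # N(s) ~ s^(-D), so a line fit of log N against log s has slope -D.
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope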

    Prediction of treatment response and outcome in locally advanced rectal cancer using radiomics

    With the increasing number of medical images, deep learning is being used more and more in radiomics, but it suffers from small and heterogeneous datasets. To address this, a radiomics pipeline was developed for the prediction of the treatment outcome of neoadjuvant therapy in locally advanced rectal cancer (LARC), focusing on methods for dealing with small, heterogeneous multicenter datasets. Six different normalization methods (five statistical methods and one novel deep learning method) were investigated in multiple configurations: trained on all images, on images from all centers except one, and on images from a single center. The impact of normalization was evaluated on four tasks: tumor segmentation, prediction of treatment outcome, prediction of sex, and prediction of age. For segmentation, significant differences appeared only when training on one center, with the deep learning method performing best with a DSC of 0.50 ± 0.01. For the prediction of sex and treatment outcome, the percentile method combined with histogram matching worked best in all scenarios. The classification performance was evaluated using a published neural network consisting of two U-Nets sharing their weights, with segmentation as an additional task. The maximum AUC was 0.75 (95% CI: 0.52 to 0.92) on the validation set. This is better than chance, but not better than a classifier trained on clinical characteristics. In summary, normalization did help with the generalizability of the neural networks, but there is a limit to what can be corrected.
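
    The sketch below illustrates the kind of normalization that performed best for the classification tasks: clip and rescale intensities to a percentile range, then match the histogram to a reference scan with scikit-image. The percentile bounds and function names are assumptions for illustration, not the thesis' exact configuration.

import numpy as np
from skimage.exposure import match_histograms

def percentile_normalise(volume, low=1.0, high=99.0):
    """Clip a scan to the [low, high] intensity percentiles and rescale to [0, 1]."""
    p_low, p_high = np.percentile(volume, [low, high])
    clipped = np.clip(volume, p_low, p_high)
    return (clipped - p_low) / max(p_high - p_low, 1e-8)

def normalise_to_reference(volume, reference):
    """Percentile-normalise both scans, then match the histogram to the reference."""
    vol_n = percentile_normalise(volume)
    ref_n = percentile_normalise(reference)
    return match_histograms(vol_n, ref_n)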

    Advanced machine learning methods for oncological image analysis

    Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into segmentation networks to improve segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II aims to address a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach is proposed based on the concept of image inpainting to segment lung and head-neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy. Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account the local intra-nodule heterogeneities and the global contextual information. Study IV seeks to compare the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end Deep Learning (DL) models, and deep features-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing the learned deep features into radiomic features for boosting the classification power. Study V focuses on the early assessment of lung tumor response to the applied treatments by proposing a novel feature set that can be interpreted physiologically.
This feature set was employed to quantify the changes in tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of the patients two years after the last session of treatments. The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical-based features show more predictive power in inter-dataset analyses. In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine-learning techniques in the methods developed for the quantitative assessment of tumor characteristics and contribute to the essential procedures of cancer diagnosis and prognosis.
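
    To illustrate the general idea of quantifying longitudinal change, the sketch below builds simple delta features as the relative change of each extracted feature between baseline and follow-up scans. This is a generic delta-radiomics formulation with hypothetical feature names, not the physiologically interpretable feature set proposed in Study V.

def delta_features(baseline, followup, eps=1e-8):
    """baseline/followup: dicts mapping feature name -> value for the same tumor."""
    return {
        # Relative change between timepoints; eps guards against division by zero.
        f"delta_{name}": (followup[name] - value) / (abs(value) + eps)
        for name, value in baseline.items()
        if name in followup
    }

# Example with made-up values: a shrinking metabolic volume yields a negative delta.
print(delta_features({"MTV_ml": 42.0, "SUVmax": 8.1},
                     {"MTV_ml": 30.0, "SUVmax": 6.5}))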

    Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge

    Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012–2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset. This work was supported in part by 1) the National Institute of Neurological Disorders and Stroke (NINDS) of the NIH, R01 grant award number R01-NS042645, 2) the Informatics Technology for Cancer Research (ITCR) program of the NCI/NIH, U24 grant award number U24-CA189523, 3) the Swiss Cancer League, award number KFS-3979-08-2016, and 4) the Swiss National Science Foundation, award number 169607. Article signed by 427 authors, including Spyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, Markus Rempfler, Alessandro Crimi, Russell Takeshi Shinohara, Jayashree Kalpathy-Cramer, Keyvan Farahani, Christos Davatzikos, Koen van Leemput, and Bjoern Menze; the full author list appears in the preprint.
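
    The segmentation evaluation in BraTS centres on nested tumor sub-regions; the sketch below shows how such sub-regions can be composed from a BraTS-style label map (1 = necrotic/non-enhancing core, 2 = peritumoral edema, 4 = enhancing tumor) and scored with the Dice coefficient. It is a simplified illustration of the scoring idea, not the official challenge evaluation code.

import numpy as np

# Nested sub-regions composed from the individual BraTS labels.
REGIONS = {
    "whole_tumor": (1, 2, 4),
    "tumor_core": (1, 4),
    "enhancing_tumor": (4,),
}

def dice(a, b):
    """Dice overlap between two boolean masks (1.0 if both are empty)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def brats_dice(prediction, reference):
    """prediction/reference: 3-D integer label maps using BraTS label conventions."""
    return {
        name: dice(np.isin(prediction, labels), np.isin(reference, labels))
        for name, labels in REGIONS.items()
    }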

    Longitudinal Brain Tumor Tracking, Tumor Grading, and Patient Survival Prediction Using MRI

    This work aims to develop novel methods for brain tumor classification, longitudinal brain tumor tracking, and patient survival prediction. Accordingly, this dissertation addresses three tasks. First, we develop a framework for brain tumor segmentation prediction in longitudinal multimodal magnetic resonance imaging (mMRI) scans, comprising two methods: feature fusion and joint label fusion (JLF). The first method fuses stochastic multi-resolution texture features with tumor cell density features, in order to obtain tumor segmentation predictions in follow-up scans from a baseline pre-operative timepoint. The second method utilizes JLF to combine segmentation labels obtained from (i) the stochastic texture feature-based and Random Forest (RF)-based tumor segmentation method; and (ii) another state-of-the-art tumor growth and segmentation method known as boosted Glioma Image Segmentation and Registration (GLISTRboost, or GB). With the advantages of feature fusion and label fusion, we achieve state-of-the-art brain tumor segmentation prediction. Second, we propose a deep neural network (DNN) learning-based method for brain tumor type and subtype grading using phenotypic and genotypic data, following the World Health Organization (WHO) criteria. In addition, the classification method integrates a cellularity feature, derived from the morphology of a pathology image, to improve classification performance. The proposed method achieves state-of-the-art performance for tumor grading following the new CNS tumor grading criteria. Finally, we investigate brain tumor volume segmentation, tumor subtype classification, and overall patient survival prediction, and then propose a new context-aware deep learning method, known as the Context Aware Convolutional Neural Network (CANet). Using the proposed method, we participated in the Multimodal Brain Tumor Segmentation Challenge 2019 (BraTS 2019) for the brain tumor volume segmentation and overall survival prediction tasks. We also participated in the Radiology-Pathology Challenge 2019 (CPM-RadPath 2019) for Brain Tumor Subtype Classification, organized by the Medical Image Computing & Computer Assisted Intervention (MICCAI) Society. The online evaluation results show that the proposed methods offer competitive performance in tumor volume segmentation, promising performance on overall survival prediction, and state-of-the-art performance on tumor subtype classification. Moreover, our result ranked second in the testing phase of CPM-RadPath 2019.
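
    As a lightweight relative of the joint label fusion step described above, the sketch below fuses several candidate label maps by per-voxel majority vote with NumPy. Unlike JLF, it does not weight the candidates by local intensity similarity, so it only illustrates the general fusion idea under that simplifying assumption.

import numpy as np

def majority_vote(label_maps, undecided_label=0):
    """label_maps: list of integer label volumes defined on the same voxel grid."""
    stack = np.stack(label_maps)              # shape: (n_candidates, *volume_shape)
    max_label = int(stack.max())
    # For every voxel, count how many candidates voted for each label value.
    votes = np.stack([(stack == lab).sum(axis=0) for lab in range(max_label + 1)])
    fused = votes.argmax(axis=0)
    if votes.shape[0] > 1:
        # Mark voxels where the top two labels tie as undecided rather than
        # resolving the tie arbitrarily.
        sorted_votes = np.sort(votes, axis=0)
        tie = sorted_votes[-1] == sorted_votes[-2]
        fused[tie] = undecided_label
    return fused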