
    Deep Learning versus Classical Regression for Brain Tumor Patient Survival Prediction

    Deep learning for regression tasks on medical imaging data has shown promising results. However, compared to other approaches, its performance is strongly tied to dataset size. In this study, we evaluate 3D convolutional neural networks (CNNs) and classical regression methods with hand-crafted features for survival time regression of patients with high-grade brain tumors. The tested CNNs for regression showed promising but unstable results. The best performing deep learning approach reached an accuracy of 51.5% on held-out samples of the training set. All tested deep learning experiments were outperformed by a Support Vector Classifier (SVC) using 30 radiomic features. The investigated features included intensity, shape, location and deep features. The method submitted to the BraTS 2018 survival prediction challenge is an ensemble of SVCs, which reached a cross-validated accuracy of 72.2% on the BraTS 2018 training set, 57.1% on the validation set, and 42.9% on the testing set. The results suggest that more training data is necessary for stable performance of a CNN model for direct regression from magnetic resonance images, and that non-imaging clinical patient information is crucial along with imaging information. Comment: Contribution to The International Multimodal Brain Tumor Segmentation (BraTS) Challenge 2018, survival prediction task.
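The winning classical pipeline described above (an SVC over 30 radiomic features, scored by cross-validated accuracy) can be sketched with scikit-learn; the feature matrix and survival-class labels below are random stand-ins, not real radiomics:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for 100 patients x 30 radiomic features (intensity, shape, location, deep).
X = rng.normal(size=(100, 30))
# Stand-in survival classes: short / mid / long survivors.
y = rng.integers(0, 3, size=100)

# Scale features, then fit an RBF SVC; estimate accuracy by 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(scores.mean())
```

With random labels this hovers near chance; on real radiomic features the same pipeline is what the abstract reports at 72.2% cross-validated training accuracy.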

    Cancer risk prediction with whole exome sequencing and machine learning

    Accurate cancer risk and survival time prediction are important problems in personalized medicine, where disease diagnosis and prognosis are tuned to individuals based on their genetic material. Cancer risk prediction supports informed decisions about regular screening, which helps detect disease at an early stage and therefore increases the probability of successful treatment. Cancer risk prediction is a challenging problem: lifestyle, environment, family history, and genetic predisposition are some of the factors that influence disease onset. Cancer risk prediction based on predisposing genetic variants has been studied extensively. Most studies have examined the predictive ability of variants in known mutated genes for specific cancers. However, previous studies have not explored the predictive ability of collective genomic variants from whole-exome sequencing data. It is crucial to train a model on one study and predict on a related independent study to ensure that the predictive model generalizes to other datasets. Survival time prediction allows patients and physicians to evaluate treatment feasibility and helps chart health treatment plans. Many studies have concluded that clinicians are inaccurate and often optimistic in predicting patients' survival time; therefore, the need increases for automated survival time prediction from genomic and medical imaging data. For cancer risk prediction, this dissertation explores the effectiveness of ranking genomic variants in whole-exome sequencing data with univariate feature selection methods on the predictive capability of machine learning classifiers. Cross-study experiments in chronic lymphocytic leukemia, glioma, and kidney cancers show that the top-ranked variants achieve better accuracy than the whole set of genomic variants.
For survival time prediction, many studies have devised 3D convolutional neural networks (CNNs) over structural magnetic resonance imaging (MRI) volumes to classify glioma patients into survival categories. This dissertation proposes a new multi-path convolutional neural network with SNP and demographic features to predict glioblastoma survival groups with a one-year threshold that improves upon existing machine learning methods. The dissertation also proposes a multi-path neural network system to predict glioblastoma survival categories with a 14-year threshold from a heterogeneous combination of genomic variations, messenger ribonucleic acid (mRNA) expressions, 3D post-contrast T1 MRI volumes, and 2D post-contrast T1 MRI modality scans that show the malignancy. In 10-fold cross-validation, the mean accuracy of the proposed network with handpicked 2D MRI slices (that manifest the tumor), mRNA expressions, and SNPs slightly improves upon each data source used individually.
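The multi-path idea above, in which separate branches process imaging, mRNA, and SNP inputs before a late fusion, can be sketched as a toy forward pass in NumPy; all shapes, weights, and the ReLU branches here are illustrative stand-ins, not the dissertation's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def branch(x, w):
    """One path of the network: linear map + ReLU (a stand-in for a CNN/MLP branch)."""
    return np.maximum(x @ w, 0.0)

# Hypothetical per-patient inputs: flattened 2D MRI-slice features, mRNA expressions, SNPs.
mri  = rng.normal(size=(4, 64))   # 4 patients, 64 imaging features
mrna = rng.normal(size=(4, 32))
snp  = rng.normal(size=(4, 16))

# Independent branch weights, then a shared classifier over the fused representation.
w_mri  = rng.normal(size=(64, 8))
w_mrna = rng.normal(size=(32, 8))
w_snp  = rng.normal(size=(16, 8))
w_out  = rng.normal(size=(24, 2))  # 2 survival groups (e.g. the one-year threshold)

fused = np.concatenate([branch(mri, w_mri), branch(mrna, w_mrna), branch(snp, w_snp)], axis=1)
logits = fused @ w_out
pred = logits.argmax(axis=1)       # predicted survival group per patient
print(pred.shape)
```

The design point is that each modality keeps its own feature extractor and only the fused representation feeds the classifier, so heterogeneous data sources can be combined without forcing them into one input tensor.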

    Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge

    Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection.
Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that, apart from being diverse in each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset. This work was supported in part by 1) the National Institute of Neurological Disorders and Stroke (NINDS) of the NIH, R01 grant R01-NS042645; 2) the Informatics Technology for Cancer Research (ITCR) program of the NCI/NIH, U24 grant U24-CA189523; 3) the Swiss Cancer League, award KFS-3979-08-2016; and 4) the Swiss National Science Foundation, award 169607. Article signed by 427 authors: Spyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, et al., Koen van Leemput, and Bjoern Menze. Preprint.
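Segmentation quality in the BraTS evaluations above is scored primarily with the Dice similarity coefficient between predicted and reference masks, which a minimal NumPy implementation illustrates (the masks below are toy examples, not BraTS data):

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True   # toy "tumor core" prediction
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True   # toy ground-truth mask
print(dice(a, b))
```

In the challenge this score is computed per sub-region (enhancing tumor, tumor core, whole tumor) and averaged across cases to rank algorithms.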

    Machine Learning Methods for Image Analysis in Medical Applications, from Alzheimer's Disease, Brain Tumors, to Assisted Living

    Healthcare has progressed greatly in recent years owing to technological advances, where machine learning plays an important role in processing and analyzing large amounts of medical data. This thesis investigates four healthcare-related issues (Alzheimer's disease detection, glioma classification, human fall detection, and obstacle avoidance in prosthetic vision), where the underlying methodologies are associated with machine learning and computer vision. For Alzheimer's disease (AD) diagnosis, apart from patients' symptoms, Magnetic Resonance Images (MRIs) also play an important role. Inspired by the success of deep learning, a new multi-stream multi-scale Convolutional Neural Network (CNN) architecture is proposed for AD detection from MRIs, where AD features are characterized at both the tissue level and the scale level for improved feature learning. Good classification performance is obtained for AD/NC (normal control) classification, with a test accuracy of 94.74%. In glioma subtype classification, biopsies are usually needed to determine the different molecular-based glioma subtypes. We investigate non-invasive glioma subtype prediction from MRIs by using deep learning. A 2D multi-stream CNN architecture is used to learn the features of gliomas from multi-modal MRIs, where the training dataset is enlarged with synthetic brain MRIs generated by pairwise Generative Adversarial Networks (GANs). A test accuracy of 88.82% has been achieved for IDH mutation (a molecular-based subtype) prediction. A new deep semi-supervised learning method is also proposed to tackle the problem of missing molecular-related labels in training datasets for improving the performance of glioma classification. In the other two applications, we also address video-based human fall detection by using co-saliency-enhanced Recurrent Convolutional Networks (RCNs), as well as obstacle avoidance in prosthetic vision by characterizing obstacle-related video features using a Spiking Neural Network (SNN).
These investigations can benefit future research, where artificial intelligence/deep learning may open new avenues for real medical applications.
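The multi-scale part of the multi-stream multi-scale idea above, extracting features at several spatial scales and fusing them, can be illustrated with a toy NumPy sketch; the pooling-based "features" here are stand-ins for learned CNN features, and the scale factors are arbitrary:

```python
import numpy as np

def avg_pool2d(img, k):
    """Non-overlapping k x k average pooling (assumes k divides both dimensions)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

rng = np.random.default_rng(2)
slice_ = rng.normal(size=(64, 64))          # stand-in for one MRI slice

# Multi-scale: descriptors at a fine and a coarse scale, flattened and fused.
fine   = avg_pool2d(slice_, 4).ravel()      # 16 x 16 grid -> 256 features
coarse = avg_pool2d(slice_, 16).ravel()     # 4 x 4 grid   -> 16 features
features = np.concatenate([fine, coarse])
print(features.shape)
```

The point of combining scales is that coarse features summarize global tissue context while fine features retain local detail, which is what the proposed architecture exploits with learned filters instead of plain pooling.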

    Adaptive Fine-tuning based Transfer Learning for the Identification of MGMT Promoter Methylation Status

    Glioblastoma Multiforme (GBM) is an aggressive form of malignant brain tumor with a generally poor prognosis. Treatment usually includes a mix of surgical resection, radiation therapy, and alkylating chemotherapy but, even with these intensive treatments, the 2-year survival rate is still very low. O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation has been shown to be a predictive biomarker for resistance to chemotherapy, but it is invasive and time-consuming to determine the methylation status. Because of this, there have been efforts to predict MGMT methylation status by analyzing MRI scans using machine learning, which only requires pre-operative scans that are already part of standard-of-care for GBM patients. We developed a 3D SpotTune network with adaptive fine-tuning capability to improve the performance of conventional transfer learning in identifying MGMT promoter methylation status. Using the pretrained weights of MedicalNet coupled with the SpotTune network, we compared its performance with two equivalent networks: one initialized with MedicalNet weights but with no adaptive fine-tuning, and one initialized with random weights. These three networks were trained and evaluated using the UPENN-GBM dataset, a public GBM dataset provided by the University of Pennsylvania. The SpotTune network showed better performance than the network with randomly initialized weights and the pre-trained MedicalNet with no adaptive fine-tuning. SpotTune enables transfer learning to be adaptive to individual patients, resulting in improved performance in predicting MGMT promoter methylation status in GBM from MRIs as compared to conventional transfer learning without adaptive fine-tuning. Comment: 18 pages, 4 figures. Preprint.
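SpotTune's core mechanism routes each input either through frozen pre-trained blocks or through fine-tuned copies, chosen by a per-sample policy. A toy NumPy sketch of that gating follows; the linear blocks stand in for 3D CNN residual blocks, and the random gate stands in for the learned policy network:

```python
import numpy as np

rng = np.random.default_rng(3)

def block(x, w):
    """Stand-in for one network block: linear map + ReLU."""
    return np.maximum(x @ w, 0.0)

x = rng.normal(size=(5, 16))                 # 5 patients, 16 features each
w_frozen = rng.normal(size=(16, 16))         # pre-trained (MedicalNet-style) weights, kept fixed
w_tuned  = w_frozen + 0.1 * rng.normal(size=(16, 16))  # fine-tuned copy of the same block

# Per-sample binary gate: 1 -> route through the fine-tuned block, 0 -> reuse the frozen one.
# In SpotTune this decision is predicted per input by a small policy network; here it is random.
gate = rng.integers(0, 2, size=(5, 1)).astype(float)
out = gate * block(x, w_tuned) + (1.0 - gate) * block(x, w_frozen)
print(out.shape)
```

Because the gate is chosen per input, some patients reuse the generic pre-trained features while others get the specialized fine-tuned path, which is what makes the transfer learning "adaptive to individual patients."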

    Deep Learning Methods for Classification of Glioma and its Molecular Subtypes

    Diagnosis and timely treatment play an important role in preventing brain tumor growth. Clinicians are unable to reliably predict LGG molecular subtypes from magnetic resonance imaging (MRI) without taking a biopsy, so accurate diagnosis prior to surgery would be important. Recently, non-invasive classification methods such as deep learning have shown promising outcomes in predicting glioma subtypes from pre-operative brain scans. However, they require large amounts of annotated medical data on tumors. This thesis investigates methods for the problem of data scarcity, specifically for molecular LGG subtypes. The focus is on two challenges in improving the classification performance of gliomas and their molecular subtypes from MRIs: data augmentation and domain mapping to overcome the lack of data, and using data without ground-truth (GT) annotations to avoid the tedious task of manually marking tumor boundaries. Data augmentation includes generating synthetic MR images to enlarge the training data using Generative Adversarial Networks (GANs). Another type of GAN, CycleGAN, is used to enlarge the data size by mapping data from different domains to a target domain. A multi-stream Convolutional Autoencoder (CAE) classifier is proposed with a 2-stage training strategy. To enable MRI data to be used without tumor annotation, an ellipse bounding box is proposed that gives comparable classification performance. The thesis comprises papers addressing the challenging problems of data scarcity and lack of tumor annotation. The proposed methods can benefit future research in bringing machine learning tools into clinical practice for non-invasive diagnostics that would assist surgeons and patients in the shared decision-making process.
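One way to realize an ellipse bounding region without manually traced tumor boundaries is to fit an ellipse from a rough mask's first and second moments; the NumPy sketch below is an illustrative guess at such a construction (the scale factor and padding are arbitrary assumptions, not the thesis's exact method):

```python
import numpy as np

def ellipse_from_mask(mask, scale=2.0):
    """Fit an axis-aligned ellipse region from a binary mask via its centroid and per-axis spread."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    ry = scale * ys.std() + 1.0          # semi-axes from per-axis std (+1 so thin masks survive)
    rx = scale * xs.std() + 1.0
    yy, xx = np.mgrid[:mask.shape[0], :mask.shape[1]]
    return ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0

tumor = np.zeros((32, 32), dtype=bool)
tumor[10:20, 12:22] = True               # toy tumor blob
ellipse = ellipse_from_mask(tumor)
print(ellipse.sum())
```

The ellipse only needs to cover the tumor region loosely, which is far cheaper to produce than a voxel-accurate GT annotation while still localizing the input for the classifier.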

    Brain Tumor Growth Modelling

    Predicting the growth of glioblastoma tumors is a hard task due to the lack of medical data, which is mostly related to patients' privacy, the cost of collecting a large medical dataset, and the limited availability of expert annotations. In this thesis, we study and propose a Synthetic Medical Image Generator (SMIG) based on a Generative Adversarial Network (GAN) to generate synthetic, anonymized data. In addition, to predict glioblastoma multiforme (GBM) tumor growth, we developed a Tumor Growth Predictor (TGP) based on an end-to-end Convolutional Neural Network architecture that allows training on a public dataset from The Cancer Imaging Archive (TCIA), combined with the generated synthetic data. We also highlight the impact of using synthetic data generated by SMIG as a data augmentation tool. Despite the small size of the TCIA dataset, the obtained results demonstrate valuable tumor growth prediction accuracy.
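The augmentation step, combining real TCIA samples with SMIG-generated synthetic samples into one training set for the predictor, reduces to a simple concatenation and shuffle; the arrays below are random stand-ins for real and generated volumes:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-ins: a small set of real TCIA-like slices and SMIG-style synthetic slices (noise here).
real = rng.normal(size=(20, 8, 8))          # 20 real patient slices
synthetic = rng.normal(size=(40, 8, 8))     # generator output, used purely as augmentation

# Augmented training set: real data plus anonymized synthetic data, shuffled together
# before being fed to the growth-prediction CNN.
train = np.concatenate([real, synthetic], axis=0)
train = train[rng.permutation(len(train))]
print(train.shape)
```

The synthetic samples both enlarge the small dataset and carry no identifiable patient information, which is the anonymization argument made in the abstract.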

    Uncertainty-driven refinement of tumor-core segmentation using 3D-to-2D networks with label uncertainty

    The BraTS dataset contains a mixture of high-grade and low-grade gliomas, which have rather different appearances: previous studies have shown that performance can be improved by separate training on low-grade gliomas (LGGs) and high-grade gliomas (HGGs), but in practice this information is not available at test time to decide which model to use. By contrast with HGGs, LGGs often present no sharp boundary between the tumor core and the surrounding edema, but rather a gradual reduction of tumor-cell density. Utilizing our 3D-to-2D fully convolutional architecture, DeepSCAN, which ranked highly in the 2019 BraTS challenge and was trained using an uncertainty-aware loss, we separate cases into those with a confidently segmented core and those with a vaguely segmented or missing core. Since by assumption every tumor has a core, we reduce the threshold for classification of core tissue in those cases where the core, as segmented by the classifier, is vaguely defined or missing. We then predict survival of high-grade glioma patients using a fusion of linear regression and random forest classification, based on age, number of distinct tumor components, and number of distinct tumor cores. We present results on the validation dataset of the Multimodal Brain Tumor Segmentation Challenge 2020 (segmentation and uncertainty challenge), and on the testing set, where the method achieved 4th place in segmentation, 1st place in uncertainty estimation, and 1st place in survival prediction. Comment: Presented (virtually) at the MICCAI BrainLes workshop 2020. Accepted for publication in the BrainLes proceedings.
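The core-refinement rule described above (lower the segmentation threshold whenever the predicted core is vaguely defined or missing, since every tumor is assumed to have one) can be sketched as follows; the threshold values and the voxel-count "vagueness" criterion are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def segment_core(prob_map, threshold=0.5, min_core_voxels=10, fallback_threshold=0.25):
    """Threshold a tumor-core probability map; if the resulting core is vaguely
    defined or missing, retry with a lower threshold (every tumor has a core)."""
    core = prob_map >= threshold
    if core.sum() < min_core_voxels:
        core = prob_map >= fallback_threshold
    return core

rng = np.random.default_rng(4)
# Toy probability map for an LGG-like case: no confident core, only diffuse mid-range values.
prob = np.clip(rng.normal(0.3, 0.05, size=(16, 16)), 0.0, 1.0)
core = segment_core(prob)
print(core.sum())
```

On a confident HGG-like map the first threshold already yields a sizeable core and the fallback never triggers; on diffuse LGG-like maps the lowered threshold recovers a non-empty core region.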

    Predicting Survival in Patients with Brain Tumors: Current State-of-the-Art of AI Methods Applied to MRI

    Di Noia, C., Grist, J. T., Riemer, F., Lyasheva, M., Fabozzi, M., Castelli, M., Lodi, R., Tonon, C., Rundo, L., & Zaccagna, F. (2022). Predicting Survival in Patients with Brain Tumors: Current State-of-the-Art of AI Methods Applied to MRI. Diagnostics, 12(9), 1-16. [2125]. https://doi.org/10.3390/diagnostics12092125 Given growing clinical needs, in recent years Artificial Intelligence (AI) techniques have increasingly been used to define the best approaches for survival assessment and prediction in patients with brain tumors. Advances in computational resources, and the collection of (mainly) public databases, have promoted this rapid development. This narrative review of the current state-of-the-art aimed to survey current applications of AI in predicting survival in patients with brain tumors, with a focus on Magnetic Resonance Imaging (MRI). An extensive search was performed on PubMed and Google Scholar using a Boolean search query based on MeSH terms, restricting the search to the period between 2012 and 2022. Fifty studies were selected, mainly based on Machine Learning (ML), Deep Learning (DL), radiomics-based methods, and methods that exploit traditional imaging techniques for survival assessment. In addition, we focused on two distinct tasks related to survival assessment: the first is the classification of subjects into survival classes (short- and long-term, or eventually short-, mid- and long-term) to stratify patients into distinct groups; the second is the quantification, in days or months, of the individual survival interval. Our survey showed excellent state-of-the-art methods for the first task, with accuracy up to ∼98%. The latter task appears to be the most challenging, but state-of-the-art techniques showed promising results, albeit with limitations, with C-index up to ∼0.91.
In conclusion, according to the specific task, the available computational methods perform differently, and the choice of the best one to use is non-univocal and dependent on many aspects. Unequivocally, the use of features derived from quantitative imaging has been shown to be advantageous for AI applications, including survival prediction. This evidence from the literature motivates further research in the field of AI-powered methods for survival prediction in patients with brain tumors, in particular, using the wealth of information provided by quantitative MRI techniques.
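The C-index used above to score the interval-quantification task can be computed, for uncensored data, as the fraction of comparable patient pairs whose predicted risks are ordered consistently with their survival times; a minimal implementation:

```python
import numpy as np

def concordance_index(times, risks):
    """C-index for uncensored survival data: fraction of comparable patient pairs whose
    predicted risks agree with their survival ordering (higher risk -> shorter survival).
    Ties in risk count as half-concordant."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            if times[i] == times[j]:
                continue                      # tied times are not comparable
            comparable += 1
            shorter, longer = (i, j) if times[i] < times[j] else (j, i)
            if risks[shorter] > risks[longer]:
                concordant += 1.0
            elif risks[shorter] == risks[longer]:
                concordant += 0.5
    return concordant / comparable

# Toy cohort: risks perfectly anti-ordered with survival give a C-index of 1.0.
t = np.array([12.0, 24.0, 6.0, 36.0])   # survival in months
r = np.array([0.7, 0.4, 0.9, 0.1])      # predicted risk scores
print(concordance_index(t, r))
```

A C-index of 0.5 corresponds to random ranking, so the ∼0.91 cited in the review indicates strong but imperfect ordering of patients by predicted risk. Handling censored observations requires restricting comparable pairs further, which this sketch omits.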