12 research outputs found

    Attention-Guided Version of 2D UNet for Automatic Brain Tumor Segmentation

    Gliomas are the most common and aggressive brain tumors, and in their highest grade they cause a short life expectancy. Treatment assessment is therefore a key stage in improving patients' quality of life. Recently, deep convolutional neural networks (DCNNs) have achieved remarkable performance in brain tumor segmentation, but the task remains difficult owing to the highly varying intensity and appearance of gliomas. Most existing methods, especially UNet-based networks, integrate low-level and high-level features in a naive way, which may confuse the model. Moreover, most approaches employ 3D architectures to exploit the 3D contextual information of the input images; these architectures have more parameters and higher computational complexity than 2D architectures, while 2D models forgo that 3D context. To address these issues, we design a low-parameter network based on 2D UNet that employs two techniques. The first is an attention mechanism applied after the concatenation of low-level and high-level features; it prevents confusion in the model by weighting each channel adaptively. The second is Multi-View Fusion, which allows the model to benefit from the 3D contextual information of the input images despite being 2D. Experimental results demonstrate that our method performs favorably against 2017 and 2018 state-of-the-art methods. Comment: 7 pages, 5 figures, 4 tables, accepted by ICCKE 201
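The channel-weighting attention described above can be sketched as a squeeze-and-excitation-style block applied to the concatenated feature maps. This is an illustrative NumPy sketch under assumed details; the paper's exact attention block may differ, and the bottleneck matrices `w1` and `w2` are hypothetical stand-ins for learned weights.

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation-style channel weighting (illustrative sketch).

    features: (C, H, W) concatenated low- and high-level feature maps.
    w1, w2:   bottleneck MLP matrices of shapes (C//r, C) and (C, C//r).
    """
    c = features.shape[0]
    # Squeeze: global average pooling over each channel.
    squeezed = features.reshape(c, -1).mean(axis=1)        # (C,)
    # Excite: small bottleneck MLP, ReLU then sigmoid.
    hidden = np.maximum(0.0, w1 @ squeezed)                # (C//r,)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))         # (C,), each in (0, 1)
    # Reweight each channel adaptively.
    return features * weights[:, None, None]

# Toy example: 4 channels of 2x2 features, reduction ratio r = 2.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 2, 2))
w1 = rng.standard_normal((2, 4))
w2 = rng.standard_normal((4, 2))
out = channel_attention(feats, w1, w2)
```

Because each channel weight passes through a sigmoid, the block can only attenuate channels, letting the network suppress confusing low-level/high-level combinations.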

    Brain Tumor Area Segmentation of MRI Images

    Accurate and timely detection of the brain tumor area has a great impact on the choice of treatment, its success rate, and the monitoring of the disease during treatment. Existing algorithms for brain tumor diagnosis have difficulty performing well on brain images of varying quality, keeping the results insensitive to the algorithm's input parameters, and reliably detecting tumors in the early stages of formation. In this study, a two-stage segmentation method for accurate detection of the tumor area in magnetic resonance images of the brain is presented. In the first stage, after the necessary preprocessing of the image, the tumor is roughly localized using a threshold-based segmentation method; in the second stage, this localization is used as the marker in a marker-based watershed segmentation. Since the first stage does not demand a precise delineation of the tumor area, selecting the threshold from a large range of values does not affect the final results, while the marker-based watershed segmentation in the second stage yields an accurate delineation of the tumor area. The implementation results show that the proposed method produces consistent, accurate detection of the tumor area over a large range of input parameter values.
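The two-stage pipeline above can be sketched with SciPy's marker-controlled watershed. The toy image and the two thresholds are hypothetical, and `scipy.ndimage.watershed_ift` stands in for whatever watershed variant the paper actually uses.

```python
import numpy as np
from scipy import ndimage as ndi

def two_stage_segmentation(image, low_thresh, high_thresh):
    """Illustrative sketch: coarse thresholding locates the tumor,
    then a marker-controlled watershed refines the boundary."""
    # Stage 1: rough tumor localization by intensity thresholding.
    rough = image > high_thresh
    # Markers: 1 = confident background, 2 = confident tumor seed.
    markers = np.zeros(image.shape, dtype=np.int16)
    markers[image < low_thresh] = 1
    markers[rough] = 2
    # Stage 2: marker-controlled watershed on the gradient magnitude,
    # so the flood lines settle on the tumor edge.
    gradient = ndi.morphological_gradient(image, size=3)
    labels = ndi.watershed_ift(gradient.astype(np.uint8), markers)
    return labels == 2

# Toy image: a bright 4x4 "tumor" on a dark background.
img = np.zeros((12, 12), dtype=np.uint8)
img[4:8, 4:8] = 200
mask = two_stage_segmentation(img, low_thresh=20, high_thresh=150)
```

Note how `high_thresh` only needs to hit somewhere inside the tumor; the watershed step, not the threshold, determines the final boundary, which is why the method is insensitive to the threshold choice.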

    Feature-Guided Deep Radiomics for Glioblastoma Patient Survival Prediction

    Glioblastoma is recognized as a World Health Organization (WHO) grade IV glioma with an aggressive growth pattern. Current clinical practice in the diagnosis and prognosis of glioblastoma using MRI involves multiple steps, including manual tumor sizing. Accurate identification and segmentation of the multiple abnormal tissues within the tumor volume in MRI is essential for precise survival prediction, yet manual tumor and abnormal-tissue detection and sizing are tedious and subject to inter-observer variability. Consequently, this work proposes a fully automated MRI-based glioblastoma and abnormal-tissue segmentation and survival prediction framework. The framework includes radiomics feature-guided deep neural network methods for tumor tissue segmentation, followed by survival regression and classification using these abnormal tumor tissue segments and other relevant clinical features. The proposed multiple abnormal tumor tissue segmentation step effectively fuses feature-based and feature-guided deep radiomics information in structural MRI. The survival prediction step includes two representative survival prediction pipelines that combine different feature selection and regression approaches. The framework is evaluated using two recent widely used benchmark datasets from the Brain Tumor Segmentation (BraTS) global challenges of 2017 and 2018. The best overall survival pipeline in the proposed framework achieves a leave-one-out cross-validation (LOOCV) accuracy of 0.73 on the training datasets and 0.68 on the validation datasets. These training and validation accuracies for tumor patient survival prediction are among the highest reported in the literature. Finally, a critical analysis of the radiomics features and their efficacy in segmentation and survival prediction performance is presented as lessons learned.
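The leave-one-out cross-validation (LOOCV) protocol behind the reported 0.73/0.68 accuracies can be sketched as follows. The nearest-centroid classifier and the toy features are illustrative stand-ins, not the framework's actual feature-selection and regression models.

```python
import numpy as np

def loocv_accuracy(features, labels, classify):
    """Leave-one-out cross-validation: each sample is held out once,
    the model is fit on the rest, and the held-out prediction is scored."""
    n = len(labels)
    correct = 0
    for i in range(n):
        train = np.arange(n) != i
        pred = classify(features[train], labels[train], features[i])
        correct += int(pred == labels[i])
    return correct / n

def nearest_centroid(X, y, x):
    """Toy stand-in classifier: predict the class with the closest centroid."""
    classes = np.unique(y)
    dists = [np.linalg.norm(x - X[y == c].mean(axis=0)) for c in classes]
    return classes[int(np.argmin(dists))]

# Toy radiomics-like features for two well-separated survival groups.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9]])
y = np.array([0, 0, 1, 1])
acc = loocv_accuracy(X, y, nearest_centroid)
```

LOOCV is a common choice for small clinical cohorts because it uses every sample for both training and evaluation, at the cost of n model fits.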

    Automated Brain Tumor Segmentation on Multi-MR Sequences to Determine the Most Efficient Sequence using a Deep Learning Method

    Brain tumor segmentation is an important step in the diagnosis and treatment planning of cancer patients, but manual brain tumor segmentation suffers from long processing times, making automatic segmentation highly appealing in the clinical routine. This study sets out to segment tumors from brain MR images and to investigate the usefulness of the different MRI sequences for this purpose. MR images from the BraTS challenge were utilized: 310 patients with four different MRI sequences (T1, T1ce, T2, and FLAIR) were employed to train a ResNet deep CNN. Four separate models were trained, one per input MR sequence, to identify the best sequence for brain tumor segmentation. To assess the performance of these models, 60 patients (an external dataset) were quantitatively evaluated. The quantitative results indicated that the FLAIR sequence is the most reliable for automatic brain tumor segmentation, with a Dice index of 0.77±0.10, compared with Dice indices of 0.73±0.12, 0.73±0.15, and 0.62±0.17 obtained from the T1, T2, and T1ce sequences, respectively.
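The Dice index used above to compare the sequences measures overlap between a predicted mask and the ground truth; a minimal sketch:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND truth| / (|pred| + |truth|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

# Two 4x4 squares offset by one pixel: 9 of 16 pixels overlap.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(round(dice(a, b), 4))  # 0.5625
```

A Dice of 0.77 thus means that, on average, the predicted FLAIR-based tumor mask shares a little over three quarters of its combined area with the ground truth.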

    Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge

    Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection.
    Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that, apart from being diverse in each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset. This work was supported in part by 1) the National Institute of Neurological Disorders and Stroke (NINDS) of the NIH, R01 grant award number R01-NS042645; 2) the Informatics Technology for Cancer Research (ITCR) program of the NCI/NIH, U24 grant award number U24-CA189523; 3) the Swiss Cancer League, award number KFS-3979-08-2016; and 4) the Swiss National Science Foundation, award number 169607. Article signed by 427 authors.

    Radiomic Features to Predict Overall Survival Time for Patients with Glioblastoma Brain Tumors Based on Machine Learning and Deep Learning Methods

    Machine learning (ML) methods, including deep learning (DL) methods, have been employed in the medical field to improve the diagnostic process and patients' prognostic outcomes. Glioblastoma multiforme is an extremely aggressive glioma brain tumor with a poor survival rate. The behavior of glioblastoma brain tumors is still not fully understood and some factors remain unrecognized, yet tumor behavior is important for deciding on a proper treatment plan and improving a patient's health. The aim of this dissertation is to develop a computer-aided diagnosis system (CADiag) based on ML/DL methods to automatically estimate the overall survival time (OST) of patients with glioblastoma brain tumors from medical imaging and non-imaging data. This system is developed to enhance and speed up the diagnostic process, as well as to increase understanding of the behavior of glioblastoma brain tumors. The proposed OST prediction system is based on a classification process that categorizes a GBM patient into one of three survival-time groups: short-term (<10 months), mid-term (10-15 months), and long-term (>15 months). The Brain Tumor Segmentation challenge (BraTS) dataset, which consists of multimodal preoperative magnetic resonance imaging (mpMRI) data and clinical data, is used to develop the automatic OST prediction system. The training data is too small to train an accurate DL-based OST prediction model; therefore, traditional ML methods such as Support Vector Machine (SVM), Neural Network (NN), K-Nearest Neighbor (KNN), and Decision Tree (DT) were used to develop the OST prediction model for GBM patients. The main contributions from the ML perspective include developing and evaluating novel radiomic feature extraction methods (volumetric, shape, location, texture, histogram-based, and DL features) to produce an automatic and reliable classification-based OST prediction system.
    Some of these radiomic features, such as statistical texture features and histogram-based features, can be extracted directly from MRI images. However, preprocessing is required to automatically extract other radiomic features, such as the volume, shape, and location of the GBM brain tumor. Therefore, a three-dimensional (3D) segmentation DL model based on a modified U-Net architecture is developed to identify and localize the three glioma tumor subregions, peritumoral edematous/invaded tissue (ED), GD-enhancing tumor (ET), and the necrotic tumor core (NCR), in multimodal MRI scans. The segmentation results are used to calculate the volume, location, and shape information of a GBM tumor, and two novel approaches based on this volumetric, shape, and location information are proposed and evaluated in this dissertation. To improve the performance of the OST prediction system, information fusion strategies based on data fusion, feature fusion, and decision fusion are employed. The best prediction model was developed using feature fusion and ensemble models with NN classifiers. The proposed OST prediction system achieved competitive results in BraTS 2020, with accuracies of 55.2% and 55.1% on the BraTS 2020 validation and test datasets, respectively. In sum, developing automatic CADiag systems based on robust features and ML methods, such as our OST prediction system, improves the diagnostic process in terms of cost, accuracy, and time. Our OST prediction system was evaluated from the perspective of the ML field, and preprocessing steps proved essential both to improve the quality of the features and to boost the performance of the prediction system. To test the effectiveness of our OST system in medical decision-making, we suggest further evaluation from the perspectives of biology and clinical practice, so that the system can then be incorporated into the diagnostic process as a fast, inexpensive, and automatic diagnostic method.
    To improve the performance of our OST prediction system, we believe it is necessary to increase the size of the training data, involve multi-modal data, and/or supply any uncertain or missing information in the data (such as patients' resection statuses, gender, etc.). A DL architecture can extract numerous meaningful low-level and high-level radiomic features during training without any feature types being nominated by researchers; we thus believe that DL methods could achieve better predictions than traditional ML methods if sufficiently large and suitable data were available.
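The three survival-time groups used by the classifier can be expressed as a simple binning function. Note that the 10-month short-term cutoff is inferred from the mid-term range given above, not stated explicitly in the abstract.

```python
def survival_group(months):
    """Map overall survival time (in months) to the three BraTS-style
    groups described above. The <10-month short-term boundary is an
    inference from the stated mid-term (10-15) and long-term (>15) ranges."""
    if months < 10:
        return "short-term"
    if months <= 15:
        return "mid-term"
    return "long-term"

# Example: a patient surviving 12 months falls in the mid-term group.
group = survival_group(12)
```

Framing OST prediction as this 3-class problem (rather than regression on days) is what allows the reported 55.2%/55.1% accuracies to be computed directly.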