10 research outputs found

    Automated brain tumor segmentation on multi-modal MR image using SegNet

    The potential for improving disease detection and treatment planning comes with accurate and fully automatic algorithms for brain tumor segmentation. Glioma, a type of brain tumor, can appear at different locations with different shapes and sizes. Manual segmentation of brain tumor regions is not only time-consuming but also prone to human error, and its performance depends on pathologists' experience. In this paper, we tackle this problem by applying a fully convolutional neural network, SegNet, to 3D datasets of four MRI modalities (FLAIR, T1, T1ce, and T2) for automated segmentation of brain tumor and sub-tumor parts, including necrosis, edema, and enhancing tumor. To further improve tumor segmentation, the four separately trained SegNet models are integrated by post-processing to produce four maximum feature maps, fusing the machine-learned feature maps from the fully convolutional layers of each trained model. The maximum feature maps and the pixel intensity values of the original MRI modalities are combined to encode complementary information into a feature representation. Taking the combined features as input, a decision tree (DT) is used to classify the MRI voxels into different tumor parts and healthy brain tissue. Evaluating the proposed algorithm on the dataset provided by the Brain Tumor Segmentation 2017 (BraTS 2017) challenge, we achieved F-measure scores of 0.85, 0.81, and 0.79 for whole tumor, tumor core, and enhancing tumor, respectively. Experimental results demonstrate that using SegNet models with 3D MRI datasets and integrating the four maximum feature maps with the pixel intensity values of the original MRI modalities has the potential to perform well on brain tumor segmentation.
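    The fusion step described above lends itself to a compact sketch. The following is illustrative only: the function name, array shapes, and the untuned decision tree are assumptions inferred from the abstract, not the authors' code.

```python
# Illustrative sketch of the max-fusion + decision-tree step inferred from the
# abstract; function name, shapes and hyperparameters are assumptions, not the
# authors' code.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fuse_and_classify(feature_maps, intensities, labels=None, tree=None):
    """feature_maps: four (H, W, C) score maps, one per trained SegNet
    (FLAIR, T1, T1ce, T2); intensities: (H, W, 4) raw modality values."""
    fused = np.maximum.reduce(feature_maps)          # element-wise maximum map
    combined = np.concatenate([fused, intensities], axis=-1)
    X = combined.reshape(-1, combined.shape[-1])     # one feature row per voxel
    if labels is not None:                           # training: fit the DT
        return DecisionTreeClassifier().fit(X, labels.ravel())
    return tree.predict(X).reshape(intensities.shape[:2])  # per-voxel classes
```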

    Two-layer ensemble of deep learning models for medical image segmentation

    One of the most important areas in medical image analysis is segmentation, in which raw image data is partitioned into structured and meaningful regions to gain further insights. By using Deep Neural Networks (DNN), AI-based automated segmentation algorithms can potentially assist physicians with more effective imaging-based diagnoses. However, since it is difficult to acquire high-quality ground truths for medical images and DNN hyperparameters require significant manual tuning, the results of DNN-based medical models may be limited. A potential solution is to combine multiple DNN models using ensemble learning. We propose a two-layer ensemble of deep learning models in which the prediction made for each training-image pixel by each model in the first layer is used as augmented data for the training image in the second layer of the ensemble. The predictions of the second layer are then combined using a weight-based scheme found by solving linear regression problems. To the best of our knowledge, ours is the first work to propose a two-layer ensemble of deep learning models with an augmented-data technique for medical image segmentation. Experiments conducted on five different medical image datasets for diverse segmentation tasks show that the proposed method achieves better results, in terms of several performance metrics, than some well-known benchmark algorithms. The research can be extended in several directions, such as image classification.
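    A minimal sketch of the two-layer idea, under stated assumptions: first-layer predictions are appended to the image as extra channels, and second-layer outputs are blended with weights from an ordinary least-squares fit. The model interface (.predict) and helper names are hypothetical.

```python
# Minimal sketch of the two-layer ensemble described above; the model interface
# (.predict) and helper names are hypothetical.
import numpy as np

def augment_with_layer1(image, layer1_models):
    """Append each first-layer model's per-pixel prediction to the image as an
    extra channel, forming the augmented input for the second layer."""
    preds = [m.predict(image) for m in layer1_models]          # each (H, W)
    return np.concatenate([image] + [p[..., None] for p in preds], axis=-1)

def fit_combining_weights(layer2_probs, ground_truth):
    """Least-squares fit so a weighted sum of second-layer probability maps
    best matches the ground-truth mask (one linear regression per task)."""
    A = np.stack([p.ravel() for p in layer2_probs], axis=1)    # voxels x models
    w, *_ = np.linalg.lstsq(A, ground_truth.ravel().astype(float), rcond=None)
    return w

def combine(layer2_probs, w):
    return np.stack(layer2_probs, axis=-1) @ w   # weighted per-pixel score map
```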

    Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge

    Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is also a factor considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that, apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.

    This work was supported in part by: 1) the National Institute of Neurological Disorders and Stroke (NINDS) of the NIH, R01 grant award number R01-NS042645; 2) the Informatics Technology for Cancer Research (ITCR) program of the NCI/NIH, U24 grant award number U24-CA189523; 3) the Swiss Cancer League, award number KFS-3979-08-2016; and 4) the Swiss National Science Foundation, award number 169607.

    Article signed by 427 authors: Spyridon Bakas et al.
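    BraTS ranks segmentation entries primarily by overlap metrics such as the Dice coefficient; a minimal illustrative scorer (not the challenge's official evaluation code) might look like this:

```python
# Minimal Dice coefficient, the standard overlap metric used to rank BraTS
# segmentations; an illustrative helper, not the challenge's official scorer.
import numpy as np

def dice(pred, truth, eps=1e-7):
    """pred, truth: binary masks of the same shape for one tumor sub-region."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```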

    3D Convolution Neural Networks for Medical Imaging; Classification and Segmentation: A Doctor's Third Eye

    Master's thesis in Information and Communication Technology (IKT591)

    In this thesis, we studied and developed 3D classification and segmentation models for medical imaging: classification of Alzheimer's Disease and segmentation of brain tumor sub-regions. For the classification task, we worked towards a novel deep architecture that can classify Alzheimer's Disease volumetrically from MRI scans without the need for transfer learning. Experiments were performed both for binary classification of Alzheimer's Disease (AD) versus Normal Cognitive (NC) subjects, and for multi-class classification between the three stages of Alzheimer's: NC, Mild Cognitive Impairment (MCI), and AD. We tested our model on the ADNI dataset and achieved mean accuracies of 94.17% and 89.14% for binary and multi-class classification, respectively. In the second part of this thesis, the segmentation of tumor sub-regions in brain MRI images, we studied popular architectures for medical image segmentation and, inspired by them, proposed an end-to-end trainable fully convolutional neural network that uses attention blocks to learn the localization of the different features of the multiple tumor sub-regions. Experiments were also conducted on the effect of the weighted cross-entropy loss function and the Dice loss function on model performance and the quality of the output segmentation labels (both losses are sketched below). Our model was evaluated on the BraTS'19 challenge dataset, achieving a Dice score of 0.80 for the whole tumor and Dice scores of 0.639 and 0.536 for the other two sub-regions on validation data. In this thesis we successfully applied computer vision techniques to medical imaging analysis; the huge potential and numerous benefits of deep learning for detecting and combating disease open up further avenues for research into automating medical image analysis.
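    The two loss functions compared in the thesis can be sketched as follows; these are the common formulations of weighted cross-entropy and soft Dice over per-voxel class probabilities, which may differ in detail from the thesis's exact versions.

```python
# Common formulations of the two losses compared in the thesis; the exact
# variants used there may differ. probs/onehot: (N, C) per-voxel arrays.
import numpy as np

def weighted_cross_entropy(probs, onehot, class_weights, eps=1e-7):
    """class_weights: (C,) array up-weighting rare classes such as tumor core."""
    return -np.mean(np.sum(class_weights * onehot * np.log(probs + eps), axis=1))

def soft_dice_loss(probs, onehot, eps=1e-7):
    inter = np.sum(probs * onehot, axis=0)                     # per-class overlap
    denom = np.sum(probs, axis=0) + np.sum(onehot, axis=0)
    return 1.0 - np.mean((2.0 * inter + eps) / (denom + eps))  # 1 - mean Dice
```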

    Cascaded V-Net using ROI masks for brain tumor segmentation

    This book constitutes revised selected papers from the Third International MICCAI Brainlesion Workshop, BrainLes 2017, as well as the International Multimodal Brain Tumor Segmentation (BraTS) and White Matter Hyperintensities (WMH) segmentation challenges, which were held jointly at the Medical Image Computing and Computer Assisted Intervention Conference (MICCAI) in Quebec City, Canada, in September 2017. Peer Reviewed

    Multiclass Bone Segmentation of PET/CT Scans for Automatic SUV Extraction

    In this thesis I present an automated framework for segmentation of bone structures from dual-modality PET/CT scans and further extraction of SUV measurements. The first stage of this framework consists of a variant of the 3D U-Net architecture for segmentation of three bone structures: vertebral body, pelvis, and sternum. The dataset for this model consists of annotated slices from CT scans retrieved from a study of post-HSCT patients using the 18F-FLT radiotracer; these are undersampled volumes due to the low-dose radiation used during scanning. The mean Dice scores obtained by the proposed model are 0.9162, 0.9163, and 0.8721 for the vertebral body, pelvis, and sternum classes, respectively. The next step of the proposed framework consists of identifying the individual vertebrae, which is a particularly difficult task due to the low resolution of the CT scans in the axial dimension. To address this issue, I present an iterative algorithm for instance segmentation of vertebral bodies, based on anatomical priors of the spine, for detecting the starting point of each vertebra. The spatial information contained in the CT and PET scans is used to translate the resulting masks to the PET image space and extract SUV measurements. I then present a CNN model based on the DenseNet architecture that, for the first time, classifies the spatial distribution of SUV within the marrow cavities of the vertebral bodies as normal engraftment or possible relapse. With an AUC of 0.931 and an accuracy of 92% obtained on real patient data, this method shows good potential as a future automated tool to assist in monitoring the recovery process of HSCT patients.
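    The SUV extraction step follows the standard body-weight normalisation; this sketch assumes activity in Bq/mL, dose in Bq and weight in kg, and that the bone mask has already been resampled into PET space. The function name and output format are illustrative, not the thesis's code.

```python
# Standard body-weight SUV normalisation applied within a mask; units and the
# function name are assumptions, and the mask is assumed already resampled to
# PET space.
import numpy as np

def extract_suv(pet_activity, mask, injected_dose_bq, body_weight_kg):
    """pet_activity: PET volume in Bq/mL; mask: bone mask in PET image space."""
    suv = pet_activity * (body_weight_kg * 1000.0) / injected_dose_bq
    roi = suv[mask > 0]
    return {"SUVmean": float(roi.mean()), "SUVmax": float(roi.max())}
```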

    Deep learning-based brain tumour image segmentation and its extension to stroke lesion segmentation

    Get PDF
    Medical imaging plays a very important role in clinical methods of treating cancer, including treatment selection, diagnosis, and evaluating the response to therapy. One of the best-known acquisition modalities is magnetic resonance imaging (MRI), which is widely used in the analysis of brain tumours by means of conventional and advanced acquisition protocols. Due to the wide variation in the shape, location and appearance of tumours, automated segmentation in MRI is a difficult task. Although many studies have been conducted, automated segmentation remains difficult and work to improve the accuracy of tumour segmentation is still ongoing. This research aims to develop fully automated methods for segmenting the abnormal tissues associated with brain tumours (i.e. oedema, necrosis and enhancing tumour) from multimodal MRI images, helping radiologists to diagnose conditions and plan treatment. In this thesis, the machine-learned features from a deep learning convolutional neural network (CIFAR) are investigated and joined with hand-crafted histogram texture features to encode global information and local dependencies in the feature representation. The combined features are then applied in a decision tree (DT) classifier to group individual pixels into normal brain tissues and the various parts of a tumour. These features also give clinicians a good view for accurately visualising the texture of tumour and sub-tumour regions. To further improve the segmentation of tumour and sub-tumour tissues, 3D datasets of the four MRI modalities (i.e. FLAIR, T1, T1ce and T2) are used and fully convolutional neural networks, called SegNet, are constructed for each of these four modalities. The outputs of these four SegNet models are then fused by choosing the highest scores to construct feature maps, which, together with the pixel intensities, are input to a DT classifier to further classify each pixel as either normal brain tissue or a component part of a tumour. To achieve high segmentation accuracy overall, deep learning (the SegNet network) and hand-crafted features are combined, in particular the grey-level co-occurrence matrix (GLCM) computed in the region of interest (ROI) that is initially detected from FLAIR modality images using the SegNet network. The methods developed in this thesis (i.e. CIFAR_PI_HIS_DT, SegNet_Max_DT and SegNet_GLCM_DT) are evaluated on two datasets: the first is the publicly available Multimodal Brain Tumour Image Segmentation Benchmark (BRATS) 2017 dataset, and the second is a clinical dataset. In brain tumour segmentation, an F-measure of more than 0.83 is considered acceptable, or at least clinically useful, for segmenting the whole tumour structure, which represents the brain tumour boundaries. Our proposed methods show promising results in the segmentation of brain tumour structures and provide a close match to expert delineation across all grades of glioma. To further detect brain injury, these three methods were adopted for ischemic stroke lesion segmentation. For training and evaluation, the publicly available Ischemic Stroke Lesion Segmentation (ISLES 2015) dataset and a clinical dataset were used, and the performance of the three developed methods was assessed.
The third method (SegNet_GLCM_DT) was found to be more accurate than the other two (CIFAR_PI_HIS_DT and SegNet_Max_DT) because it exploits GLCM as a set of hand-crafted features alongside machine-learned features, which increases segmentation accuracy for ischemic stroke lesions.
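    In the spirit of SegNet_GLCM_DT, GLCM texture features over an ROI patch can be sketched with scikit-image; the quantisation level, distances, angles and property set below are common defaults, not the thesis's exact configuration.

```python
# GLCM texture features over an ROI patch, in the spirit of SegNet_GLCM_DT;
# quantisation level, distances, angles and properties are common defaults,
# not the thesis's exact configuration.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_patch, levels=32):
    """roi_patch: 2D image patch from the ROI detected on FLAIR by SegNet."""
    scale = max(float(roi_patch.max()), 1e-7)
    q = (roi_patch / scale * (levels - 1)).astype(np.uint8)   # quantise grays
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])
```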