Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge
International Brain Tumor Segmentation (BraTS) challenge. Gliomas are the most common primary brain malignancies, with varying degrees of aggressiveness, variable prognosis, and heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, and active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect and, in some cases, inoperable. The amount of resected tumor is also a factor considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting overall survival from pre-operative mpMRI scans of patients who underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that, apart from being diverse at each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset. This work was supported in part by:
1) National Institute of Neurological Disorders and Stroke (NINDS) of the NIH R01 grant with award number R01-NS042645,
2) Informatics Technology for Cancer Research (ITCR) program of the NCI/NIH U24 grant with award number U24-CA189523,
3) Swiss Cancer League, under award number KFS-3979-08-2016, and
4) Swiss National Science Foundation, under award number 169607.
Article signed by 427 authors: Spyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, Markus Rempfler, Alessandro Crimi, Russell Takeshi Shinohara, et al., Jayashree Kalpathy-Cramer, Keyvan Farahani, Christos Davatzikos, Koen van Leemput, and Bjoern Menze (the full, affiliation-annotated author list is not reproduced here). Preprint.
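The abstract's first task, evaluating segmentations of the glioma sub-regions, is typically scored per composite region rather than per raw label. Below is a minimal sketch of that kind of scoring, assuming the commonly used BraTS label convention (1 = necrotic/non-enhancing core, 2 = edema, 4 = enhancing tumor) and the standard composite regions (whole tumor, tumor core, enhancing tumor); these conventions are not spelled out in the abstract itself.

```python
# Sketch of BraTS-style per-sub-region Dice scoring; label values and
# composite definitions are assumptions based on common BraTS usage.
import numpy as np

SUBREGIONS = {
    "whole_tumor": (1, 2, 4),   # all tumor labels
    "tumor_core": (1, 4),       # necrosis + enhancing tumor
    "enhancing_tumor": (4,),    # enhancing tumor only
}

def dice(pred: np.ndarray, truth: np.ndarray, labels) -> float:
    """Dice overlap between binarized prediction and ground truth."""
    p = np.isin(pred, labels)
    t = np.isin(truth, labels)
    denom = p.sum() + t.sum()
    if denom == 0:              # both empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(p, t).sum() / denom

def score_case(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Per-sub-region Dice scores for one case (3D label volumes)."""
    return {name: dice(pred, truth, labs) for name, labs in SUBREGIONS.items()}

# Example on synthetic label volumes.
rng = np.random.default_rng(0)
truth = rng.choice([0, 1, 2, 4], size=(64, 64, 64))
pred = truth.copy()
pred[:8] = 0                    # corrupt part of the prediction
print(score_case(pred, truth))
```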
Detection of 3D bounding boxes from monocular images
Object detection is particularly important in robotic applications that require interaction with the environment. Although 2D object detection methods obtain accurate results, these are not enough to provide a complete description of the 3D scene. Consequently, many models have recently shown promising progress in this challenging field [5, 22, 25, 30]. In this work, the goal is to predict 3D bounding boxes from single images without using temporal data or any explicit depth estimation. We propose an approach for 3D monocular object detection based on Deep3DBox [20], aiming to replace the geometric constraints used to predict the 3D location of objects with a deep learning module. Moreover, we undertake a study of the different parameters of the modules used to predict the dimensions and orientation of objects. We conduct experiments to search for the best hyperparameters of our model on KITTI [7] cars, and we report and compare our results on the KITTI and the challenging NuScenes [2] benchmarks for cars and pedestrians against other state-of-the-art methods. We conclude that our approach performs on par with similar methods [22, 30] and improves on the results of Deep3DBox [20].
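To make the architectural idea concrete, here is a minimal sketch of the kind of per-detection head used in Deep3DBox-style models: a dimensions regressor, a MultiBin-style orientation branch, and a learned location branch standing in for the projective-geometry constraints that this work proposes to replace. All layer sizes, the bin count, and the location head itself are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch of a Deep3DBox-style head, assuming features are extracted
# per 2D detection crop by some backbone (not shown). Everything here is a
# plausible stand-in, not the method from the abstract.
import torch
import torch.nn as nn

class Box3DHead(nn.Module):
    def __init__(self, in_features: int = 512, num_bins: int = 2):
        super().__init__()
        self.num_bins = num_bins
        # Dimensions: residual offsets from a per-class mean (H, W, L).
        self.dims = nn.Sequential(nn.Linear(in_features, 256), nn.ReLU(),
                                  nn.Linear(256, 3))
        # MultiBin orientation: per-bin confidence plus (sin, cos) residual.
        self.bin_conf = nn.Linear(in_features, num_bins)
        self.bin_angle = nn.Linear(in_features, num_bins * 2)
        # Learned location head replacing the geometric constraints.
        self.loc = nn.Sequential(nn.Linear(in_features, 256), nn.ReLU(),
                                 nn.Linear(256, 3))  # (x, y, z), camera frame

    def forward(self, feats: torch.Tensor):
        sincos = self.bin_angle(feats).view(-1, self.num_bins, 2)
        sincos = nn.functional.normalize(sincos, dim=-1)  # unit (sin, cos)
        return {
            "dim_offsets": self.dims(feats),
            "bin_conf": self.bin_conf(feats),
            "bin_sincos": sincos,
            "location": self.loc(feats),
        }

# Usage on dummy crop features from a hypothetical backbone.
head = Box3DHead()
out = head(torch.randn(4, 512))
print({k: tuple(v.shape) for k, v in out.items()})
```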
Masked V-Net: an approach to brain tumor segmentation
This paper introduces the Masked V-Net architecture, a variant of the recently introduced V-Net [13] that reformulates the residual connections and uses an ROI mask to constrain the network to train only on relevant voxels. This architecture allows dense training on problems with highly skewed class distributions by performing data sampling on the output instead of on the input. We use Masked V-Net in the context of brain tumor segmentation and report results on the BraTS2017 Training and Validation sets.
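A minimal sketch of the output-side sampling idea described above: rather than cropping or sampling input patches, the loss is computed densely and zeroed outside an ROI mask, so only relevant voxels contribute gradients. The choice of a voxelwise cross-entropy loss and the mask source are assumptions for illustration; the abstract does not state the exact loss.

```python
# Sketch of masking the loss on the output instead of sampling the input.
# Loss choice and tensor shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def masked_cross_entropy(logits: torch.Tensor,
                         target: torch.Tensor,
                         roi_mask: torch.Tensor) -> torch.Tensor:
    """
    logits:   (B, C, D, H, W) raw network outputs
    target:   (B, D, H, W) integer class labels
    roi_mask: (B, D, H, W) 1.0 inside the ROI, 0.0 elsewhere
    """
    # Per-voxel cross-entropy without reduction, then mask irrelevant voxels.
    loss = F.cross_entropy(logits, target, reduction="none")
    loss = loss * roi_mask
    # Average over ROI voxels only, guarding against an empty mask.
    return loss.sum() / roi_mask.sum().clamp_min(1.0)

# Dummy example: 2-class segmentation on a small volume.
logits = torch.randn(1, 2, 16, 16, 16, requires_grad=True)
target = torch.randint(0, 2, (1, 16, 16, 16))
roi = (torch.rand(1, 16, 16, 16) > 0.5).float()
masked_cross_entropy(logits, target, roi).backward()
```

Because gradients flow only from voxels inside the mask, the highly skewed background class never dominates training, which is what makes dense (whole-volume) training feasible here.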
Cascaded V-Net using ROI masks for brain tumor segmentation
This book constitutes revised selected papers from the Third International MICCAI Brainlesion Workshop, BrainLes 2017, as well as the International Multimodal Brain Tumor Segmentation (BraTS) and White Matter Hyperintensities (WMH) segmentation challenges, which were held jointly at the Medical Image Computing and Computer Assisted Intervention (MICCAI) conference in Quebec City, Canada, in September 2017.