
    Automatic Brain Tumor Segmentation using Cascaded Anisotropic Convolutional Neural Networks

    A cascade of fully convolutional neural networks is proposed to segment multi-modal Magnetic Resonance (MR) images of brain tumors into background and three hierarchical regions: whole tumor, tumor core and enhancing tumor core. The cascade decomposes the multi-class segmentation problem into a sequence of three binary segmentation problems according to the subregion hierarchy. The whole tumor is segmented in the first step, and the bounding box of the result is used for the tumor core segmentation in the second step. The enhancing tumor core is then segmented based on the bounding box of the tumor core segmentation result. Our networks consist of multiple layers of anisotropic and dilated convolution filters, and they are combined with multi-view fusion to reduce false positives. Residual connections and multi-scale predictions are employed in these networks to boost the segmentation performance. Experiments on the BraTS 2017 validation set show that the proposed method achieved average Dice scores of 0.7859, 0.9050 and 0.8378 for enhancing tumor core, whole tumor and tumor core, respectively. The corresponding values for the BraTS 2017 testing set were 0.7831, 0.8739 and 0.7748. Comment: 12 pages, 5 figures. MICCAI BraTS Challenge 2017.
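    The hierarchical cascade can be summarized in a short inference sketch. This is a minimal sketch assuming NumPy and three already-trained binary segmentation models; the names `wnet`, `tnet`, `enet` and the margin value are illustrative, not the paper's code, and only the bounding-box cascade is shown, not the anisotropic networks or multi-view fusion.

```python
import numpy as np

def bounding_box(mask, margin=5):
    """Axis-aligned bounding box of the non-zero voxels, padded by a margin."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, mask.shape)
    return tuple(slice(l, h) for l, h in zip(lo, hi))

def cascade_segment(image, wnet, tnet, enet):
    """Hierarchical inference: whole tumor -> tumor core -> enhancing core.

    `image` is a multi-modal MR volume, e.g. shape (4, D, H, W); each model is
    assumed to map a volume to a binary mask with the same spatial shape.
    """
    whole = wnet(image)                                   # step 1: whole tumor on the full volume
    core = np.zeros_like(whole)
    enh = np.zeros_like(whole)
    if not whole.any():
        return whole, core, enh

    box_w = bounding_box(whole)
    core[box_w] = tnet(image[(slice(None),) + box_w])     # step 2: tumor core inside whole-tumor box

    if core.any():
        box_c = bounding_box(core)
        enh[box_c] = enet(image[(slice(None),) + box_c])  # step 3: enhancing core inside core box
    return whole, core, enh
```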

    Deep Learning versus Classical Regression for Brain Tumor Patient Survival Prediction

    Deep learning for regression tasks on medical imaging data has shown promising results. However, compared to other approaches, its performance is strongly linked to the dataset size. In this study, we evaluate 3D convolutional neural networks (CNNs) and classical regression methods with hand-crafted features for survival time regression of patients with high-grade brain tumors. The tested CNNs for regression showed promising but unstable results. The best-performing deep learning approach reached an accuracy of 51.5% on held-out samples of the training set. All tested deep learning experiments were outperformed by a Support Vector Classifier (SVC) using 30 radiomic features. The investigated features included intensity, shape, location and deep features. The method submitted to the BraTS 2018 survival prediction challenge is an ensemble of SVCs, which reached a cross-validated accuracy of 72.2% on the BraTS 2018 training set, 57.1% on the validation set, and 42.9% on the testing set. The results suggest that more training data are needed for stable performance of a CNN model for direct regression from magnetic resonance images, and that non-imaging clinical patient information is crucial along with imaging information. Comment: Contribution to The International Multimodal Brain Tumor Segmentation (BraTS) Challenge 2018, survival prediction task.
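    The classical pipeline the abstract compares against can be illustrated with a short sketch, assuming scikit-learn and a pre-computed radiomic feature table; the cohort size, feature values and the three survival classes below are synthetic placeholders, not the authors' data or exact setup.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per patient with hand-crafted features (intensity, shape, location, ...);
# y: survival group label. Random data stands in for real radiomic features here.
rng = np.random.default_rng(0)
X = rng.normal(size=(160, 30))        # 30 radiomic features, as in the abstract; cohort size illustrative
y = rng.integers(0, 3, size=160)      # 3 survival groups (illustrative)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```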

    An algorithm for segmentation of neoplastic tumors in brain MRI images using combinations of neural networks

    An algorithm for segmentation of brain tumors in MRI images is proposed, implemented on the basis of several ensembles of neural networks. At each iteration of the algorithm, the outputs of the base neural networks are used as input data for a newly trained neural network, which subsequently acts as a unifier in order to distinguish scar tissue and unaffected tissue from tumor cells. This approach generalizes in a complex way, but it improves the quality of tumor segmentation through the combination of neural networks. The components of the algorithm are base classifiers, which extract complex (often implicit) regularities from the data stream, and the unifier, a classifier that aggregates these outputs into a single result. The key idea of the algorithm is that an individual result is obtained from each previously trained base classifier, and a voxel is marked as part of the tumor if at least one of the classifiers labels it as tumor. The segmentation results of the base classifiers are then passed to the already trained meta-classifier, which makes the final decision about whether a voxel in the image belongs to tumor cells. In addition, a pixel-level step classifies pixels in adjacent regions based on their gray levels, using either local information (the gray levels of neighboring pixels) or global information (the overall distribution of gray levels of neighboring pixels); the gray levels reflect the light intensity at each pixel. At the level of input data, the data are fed to the neural network for training.
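    A minimal sketch of the two-stage fusion described above, assuming NumPy and scikit-learn; the base classifiers, the per-voxel gray-level features, the logical-OR candidate step and the meta-classifier are illustrative stand-ins for the trained networks in the paper, not its implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Per-voxel feature vectors (e.g. gray levels of a voxel and its neighbours) and labels.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 27))      # 3x3x3 gray-level neighbourhood, flattened (synthetic)
y_train = rng.integers(0, 2, size=5000)    # 1 = tumor voxel (synthetic stand-in)

# Stage 1: independently trained base classifiers.
base_models = [
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=i).fit(X_train, y_train)
    for i in range(3)
]

def base_outputs(X):
    """Stack the tumor probabilities of all base classifiers as meta-features."""
    return np.column_stack([m.predict_proba(X)[:, 1] for m in base_models])

# Stage 2: the "unifier" (meta-classifier) is trained on the base outputs.
meta = LogisticRegression().fit(base_outputs(X_train), y_train)

def segment_voxels(X):
    probs = base_outputs(X)
    candidate = (probs > 0.5).any(axis=1)      # candidate if ANY base classifier says tumor
    final = meta.predict(probs).astype(bool)   # meta-classifier makes the final decision
    return candidate & final
```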

    Semi-Supervised Variational Autoencoder for Survival Prediction

    In this paper we propose a semi-supervised variational autoencoder for classification of overall survival groups from tumor segmentation masks. The model can use the output of any tumor segmentation algorithm, removing all assumptions about the scanning platform and the specific type of pulse sequences used, thereby increasing its generalization properties. Due to its semi-supervised nature, the method can learn to classify survival time using a relatively small number of labeled subjects. We validate our model on the publicly available dataset from the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2019. Comment: Published in the pre-conference proceedings of the 2019 International MICCAI BraTS Challenge.
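    A compact sketch of the idea, assuming PyTorch: a VAE over downsampled, flattened segmentation masks with a small classification head on the latent code, where the reconstruction and KL terms use every subject and the cross-entropy term only the labeled ones. Layer sizes, input shape and loss weights are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSupervisedVAE(nn.Module):
    """VAE over flattened tumor masks with a survival-group classifier on the latent code."""

    def __init__(self, in_dim=32 * 32 * 32, latent=64, n_classes=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent)
        self.logvar = nn.Linear(512, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 512), nn.ReLU(), nn.Linear(512, in_dim))
        self.cls = nn.Linear(latent, n_classes)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.dec(z), mu, logvar, self.cls(mu)

def loss(model, x, y=None, beta=1.0, gamma=10.0):
    """ELBO on every subject; cross-entropy only where a survival label y is available."""
    recon, mu, logvar, logits = model(x)
    rec = F.binary_cross_entropy_with_logits(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    total = rec + beta * kl
    if y is not None:
        total = total + gamma * F.cross_entropy(logits, y)
    return total
```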