50 research outputs found
Automatic Brain Tumor Segmentation using Cascaded Anisotropic Convolutional Neural Networks
A cascade of fully convolutional neural networks is proposed to segment
multi-modal Magnetic Resonance (MR) images with brain tumor into background and
three hierarchical regions: whole tumor, tumor core and enhancing tumor core.
The cascade is designed to decompose the multi-class segmentation problem into
a sequence of three binary segmentation problems according to the subregion
hierarchy. The whole tumor is segmented in the first step and the bounding box
of the result is used for the tumor core segmentation in the second step. The
enhancing tumor core is then segmented based on the bounding box of the tumor
core segmentation result. Our networks consist of multiple layers of
anisotropic and dilated convolution filters, and they are combined with
multi-view fusion to reduce false positives. Residual connections and
multi-scale predictions are employed in these networks to boost the
segmentation performance. Experiments with BraTS 2017 validation set show that
the proposed method achieved average Dice scores of 0.7859, 0.9050, 0.8378 for
enhancing tumor core, whole tumor and tumor core, respectively. The
corresponding values for BraTS 2017 testing set were 0.7831, 0.8739, and
0.7748, respectively. Comment: 12 pages, 5 figures. MICCAI BraTS Challenge 2017.
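As a rough sketch of the cascade (not the authors' networks), the three binary stages can be chained through bounding boxes like this; `seg_whole`, `seg_core`, and `seg_enhancing` are hypothetical stand-ins for the anisotropic CNNs:

```python
import numpy as np

def bounding_box(mask):
    """Bounding box of a nonzero binary 3D mask, as a tuple of slices."""
    coords = np.argwhere(mask)
    lo, hi = coords.min(axis=0), coords.max(axis=0) + 1
    return tuple(slice(a, b) for a, b in zip(lo, hi))

def cascade_segment(volume, seg_whole, seg_core, seg_enhancing):
    """Three binary segmentations, each restricted to the bounding box
    of the previous step's result, following the proposed hierarchy."""
    whole = seg_whole(volume)                        # step 1: whole tumor
    core = np.zeros_like(whole)
    box_w = bounding_box(whole)
    core[box_w] = seg_core(volume[box_w])            # step 2: tumor core
    enhancing = np.zeros_like(whole)
    box_c = bounding_box(core)
    enhancing[box_c] = seg_enhancing(volume[box_c])  # step 3: enhancing core
    return whole, core, enhancing
```

With simple intensity thresholds standing in for the networks, the nesting of the three regions (enhancing core inside core inside whole tumor) is preserved by construction.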
Deep Learning versus Classical Regression for Brain Tumor Patient Survival Prediction
Deep learning for regression tasks on medical imaging data has shown
promising results. However, compared to other approaches, its performance is
strongly tied to the size of the dataset. In this study, we evaluate
3D-convolutional neural networks (CNNs) and classical regression methods with
hand-crafted features for survival time regression of patients with high grade
brain tumors. The tested CNNs for regression showed promising but unstable
results. The best performing deep learning approach reached an accuracy of
51.5% on held-out samples of the training set. All tested deep learning
experiments were outperformed by a Support Vector Classifier (SVC) using 30
radiomic features. The investigated features included intensity, shape,
location and deep features. The submitted method to the BraTS 2018 survival
prediction challenge is an ensemble of SVCs, which reached a cross-validated
accuracy of 72.2% on the BraTS 2018 training set, 57.1% on the validation set,
and 42.9% on the testing set. The results suggest that more training data is
necessary for a stable performance of a CNN model for direct regression from
magnetic resonance images, and that non-imaging clinical patient information is
crucial along with imaging information. Comment: Contribution to The International Multimodal Brain Tumor Segmentation
(BraTS) Challenge 2018, survival prediction task.
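The submitted pipeline pairs hand-crafted radiomic features with an ensemble of Support Vector Classifiers. As a loose illustration (not the authors' code), the sketch below computes a few intensity, shape, and location features with NumPy and combines survival-class votes from hypothetical classifiers by majority rule; `radiomic_features` and `ensemble_predict` are illustrative names:

```python
import numpy as np

def radiomic_features(volume, mask):
    """A few illustrative intensity, shape, and location features from a
    tumor mask (stand-ins for the 30 radiomic features used in the paper)."""
    intensities = volume[mask]
    centroid = np.argwhere(mask).mean(axis=0) / np.array(mask.shape)
    return np.array([
        intensities.mean(), intensities.std(),  # intensity features
        float(mask.sum()),                      # shape: tumor volume in voxels
        *centroid,                              # location: normalized centroid
    ])

def ensemble_predict(features, classifiers):
    """Majority vote over survival-class predictions
    (e.g. 0 = short, 1 = mid, 2 = long survivor)."""
    votes = np.array([clf(features) for clf in classifiers])
    return int(np.bincount(votes, minlength=3).argmax())
```

In the actual method the base models are SVCs trained on the full feature set together with clinical information; the majority vote here merely shows how an ensemble of per-model class predictions can be aggregated.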
An algorithm for segmenting brain tumor neoplasms in MRI images using combinations of neural networks
An algorithm is proposed for segmenting brain tumors in MRI images, implemented
on the basis of several ensembles of neural networks. At each iteration of the
algorithm, the outputs of the base neural networks are used as inputs to a newly
trained neural network that subsequently acts as a unifier, distinguishing scar
tissue and unaffected tissue from tumor cells. This approach has complex
generalization behavior, but it makes it possible to improve tumor segmentation
quality through the combination of neural networks. The components of the
algorithm are base classifiers, which extract complex and often implicit
regularities from the data stream, and a unifier, a classifier that aggregates
their outputs into a single decision. The key idea of the algorithm is that an
individual result is first obtained for each classifier from the previously
trained models, and a voxel is classified as part of the tumor if at least one
of the classifiers labels it as tumor. The segmentation results of the base
classifiers are then fed to the input of an already trained meta-classifier,
which makes the final decision on whether a voxel in the image belongs to tumor
cells. At the pixel level, the algorithm classifies pixels in adjacent regions
by their gray levels, using either local information (the gray-level values of
neighboring pixels) or global information (the overall distribution of gray
levels over neighboring pixels); the gray levels reflect the light intensity at
each pixel. At the input-data level, the prepared data are fed to the neural
network for training.
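The two-stage decision described in this abstract (an OR rule over base classifiers, followed by a trained meta-classifier over their stacked outputs) can be sketched as follows; `meta_clf` is a hypothetical stand-in for the trained unifier network:

```python
import numpy as np

def or_rule(base_masks):
    """A voxel is a tumor candidate if at least one base classifier
    marks it as tumor."""
    return np.any(np.stack(base_masks), axis=0)

def meta_decide(base_masks, meta_clf):
    """Stack the per-voxel base outputs into feature vectors and let a
    trained meta-classifier make the final per-voxel decision."""
    n = len(base_masks)
    features = np.stack(base_masks, axis=-1).reshape(-1, n)
    return meta_clf(features).reshape(base_masks[0].shape)
```

A simple stand-in for the meta-classifier is a majority rule over the base outputs, e.g. `lambda F: F.sum(axis=1) >= 2` for three base masks; in the algorithm itself this role is played by a separately trained neural network.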
Semi-Supervised Variational Autoencoder for Survival Prediction
In this paper we propose a semi-supervised variational autoencoder for
classification of overall survival groups from tumor segmentation masks. The
model can use the output of any tumor segmentation algorithm, removing all
assumptions on the scanning platform and the specific type of pulse sequences
used, thereby increasing its generalization properties. Due to its
semi-supervised nature, the method can learn to classify survival time by using
a relatively small number of labeled subjects. We validate our model on the
publicly available dataset from the Multimodal Brain Tumor Segmentation
Challenge (BraTS) 2019.Comment: Published in the pre-conference proceeding of "2019 International
MICCAI BraTS Challenge