Predicting survival from colorectal cancer histology slides using deep learning: A retrospective multicenter study
BACKGROUND: For virtually every patient with colorectal cancer (CRC), hematoxylin-eosin (HE)-stained tissue slides are available. These images contain quantitative information, which is not routinely used to objectively extract prognostic biomarkers. In the present study, we investigated whether deep convolutional neural networks (CNNs) can extract prognosticators directly from these widely available images.
METHODS AND FINDINGS: We hand-delineated single-tissue regions in 86 CRC tissue slides, yielding more than 100,000 HE image patches, and used these to train a CNN by transfer learning, reaching a nine-class accuracy of >94% in an independent data set of 7,180 images from 25 CRC patients. With this tool, we performed automated tissue decomposition of representative multitissue HE images from 862 HE slides in 500 stage I-IV CRC patients in The Cancer Genome Atlas (TCGA) cohort, a large international multicenter collection of CRC tissue. Based on the output neuron activations in the CNN, we calculated a "deep stroma score," which was an independent prognostic factor for overall survival (OS) in a multivariable Cox proportional hazard model (hazard ratio [HR] with 95% confidence interval [CI]: 1.99 [1.27-3.12], p = 0.0028), while in the same cohort, manual quantification of stromal areas and a gene expression signature of cancer-associated fibroblasts (CAFs) were only prognostic in specific tumor stages. We validated these findings in an independent cohort of 409 stage I-IV CRC patients from the "Darmkrebs: Chancen der Verhütung durch Screening" (DACHS) study, who were recruited between 2003 and 2007 at multiple institutions in Germany. Again, the score was an independent prognostic factor for OS (HR 1.63 [1.14-2.33], p = 0.008), CRC-specific OS (HR 2.29 [1.5-3.48], p = 0.0004), and relapse-free survival (RFS; HR 1.92 [1.34-2.76], p = 0.0004). A prospective validation is required before this biomarker can be implemented in clinical workflows.
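As a rough consistency check (ours, not part of the study), the reported hazard ratio and confidence interval can be converted back into a Wald z-statistic and p-value, assuming the CI was computed as exp(beta ± 1.96·SE) on the log-hazard scale, as is conventional for Cox models:

```python
from math import log
from statistics import NormalDist

# Reported deep stroma score statistics (TCGA cohort): HR 1.99, 95% CI 1.27-3.12
hr, ci_lo, ci_hi = 1.99, 1.27, 3.12

beta = log(hr)                               # log hazard ratio
se = (log(ci_hi) - log(ci_lo)) / (2 * 1.96)  # standard error, assuming a Wald-type CI
z = beta / se                                # Wald z-statistic
p = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided p-value

print(f"z = {z:.2f}, p = {p:.4f}")
```

Under this assumption the recovered p-value is about 0.0027, in close agreement with the reported p = 0.0028, suggesting the reported HR, CI, and p-value are mutually consistent.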
CONCLUSIONS: In our retrospective study, we show that a CNN can assess the human tumor microenvironment and predict prognosis directly from histopathological images.
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Automated 5-year Mortality Prediction using Deep Learning and Radiomics Features from Chest Computed Tomography
We propose new methods for the prediction of 5-year mortality in elderly
individuals using chest computed tomography (CT). The methods consist of a
classifier that performs this prediction using a set of features extracted from
the CT image and segmentation maps of multiple anatomic structures. We explore
two approaches: 1) a unified framework based on deep learning, where features
and classifier are automatically learned in a single optimisation process; and
2) a multi-stage framework based on the design and selection/extraction of
hand-crafted radiomics features, followed by the classifier learning process.
Experimental results, based on a dataset of 48 annotated chest CTs, show that
the deep learning model produces a mean 5-year mortality prediction accuracy of
68.5%, while radiomics produces a mean accuracy between 56% and 66%
(depending on the feature selection/extraction method and classifier). The
successful development of the proposed models has the potential to make a
profound impact in preventive and personalised healthcare.
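With only 48 annotated CTs, a quick binomial standard-error calculation (ours, not the authors') shows how much sampling uncertainty these accuracy figures carry:

```python
from math import sqrt

n = 48       # annotated chest CTs in the dataset
acc = 0.685  # reported mean accuracy of the deep learning model

# Binomial standard error of a proportion estimated from n cases
se = sqrt(acc * (1 - acc) / n)
print(f"standard error ≈ {se:.3f}")
```

The standard error is roughly ±6.7 percentage points, so the gap between the deep model's 68.5% and the best radiomics result of 66% lies well within one standard error; on a dataset this small the comparison should be read cautiously.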
Diagnostic performance of deep learning-based reconstruction algorithm in 3D MR neurography
OBJECTIVE
The study aims to evaluate the diagnostic performance of a deep learning-based reconstruction method (DLRecon) in 3D MR neurography for assessment of the brachial and lumbosacral plexus.
MATERIALS AND METHODS
Thirty-five exams (18 brachial and 17 lumbosacral plexus) of 34 patients undergoing routine clinical MR neurography at 1.5 T were retrospectively included (mean age: 49 ± 12 years, 15 female). Coronal 3D T2-weighted short tau inversion recovery fast spin echo with variable flip angle sequences covering the plexus nerves on both sides were obtained as part of the standard protocol. In addition to standard-of-care (SOC) reconstruction, k-space was reconstructed with a 3D DLRecon algorithm. Two blinded readers evaluated images for image quality and diagnostic confidence in assessing nerves, muscles, and pathology using a 4-point scale. Additionally, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) between nerve, muscle, and fat were measured. Visual scores were compared using non-parametric paired-sample Wilcoxon signed-rank tests, and quantitative measurements using paired-sample Student's t-tests.
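The SNR and CNR measurements can be illustrated with the usual ROI-based definitions (mean ROI signal over the standard deviation of a noise region; the study's exact definitions may differ), using hypothetical pixel values:

```python
from statistics import mean, stdev

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean ROI signal over noise standard deviation."""
    return mean(signal_roi) / stdev(noise_roi)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissues, e.g. nerve vs. muscle."""
    return abs(mean(roi_a) - mean(roi_b)) / stdev(noise_roi)

# Hypothetical ROI intensity samples (not from the study)
nerve  = [120, 118, 122, 121]
muscle = [80, 82, 79, 81]
noise  = [2, -1, 0, 3, -2, 1]

print(f"SNR(nerve) = {snr(nerve, noise):.1f}")
print(f"CNR(nerve, muscle) = {cnr(nerve, muscle, noise):.1f}")
```

A reconstruction that lowers the noise standard deviation, as DLRecon is reported to do, raises both ratios for the same underlying tissue signal.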
RESULTS
DLRecon scored significantly higher than SOC in all categories of image quality (p < 0.05) and diagnostic confidence (p < 0.05), including conspicuity of nerve branches and pathology. With regard to artifacts, there was no significant difference between the reconstruction methods. Quantitatively, DLRecon achieved significantly higher CNR and SNR than SOC (p < 0.05).
CONCLUSION
DLRecon enhanced overall image quality, leading to improved conspicuity of nerve branches and pathology, and allowing for increased diagnostic confidence in evaluation of the brachial and lumbosacral plexus.