Joint and individual analysis of breast cancer histologic images and genomic covariates
A key challenge in modern data analysis is understanding connections between
complex and differing modalities of data. For example, two of the main
approaches to the study of breast cancer are histopathology (analyzing visual
characteristics of tumors) and genetics. While histopathology is the gold
standard for diagnostics and there have been many recent breakthroughs in
genetics, there is little overlap between these two fields. We aim to bridge
this gap by developing methods based on Angle-based Joint and Individual
Variation Explained (AJIVE) to directly explore similarities and differences
between these two modalities. Our approach exploits Convolutional Neural
Networks (CNNs) as a powerful, automatic method for image feature extraction to
address some of the challenges presented by statistical analysis of
histopathology image data. CNNs raise issues of interpretability that we
address by developing novel methods to explore visual modes of variation
captured by statistical algorithms (e.g. PCA or AJIVE) applied to CNN features.
Our results provide many interpretable connections and contrasts between
histopathology and genetics.
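As a rough illustration of the kind of analysis described above, the following sketch applies PCA to a matrix of CNN image features and pulls out the patches at the extremes of one mode of variation. This is a minimal sketch under stated assumptions: synthetic features stand in for real CNN activations, and neither AJIVE nor the feature-extraction pipeline from the paper is reproduced.

```python
# Hedged sketch: PCA on CNN-derived image features to explore visual
# modes of variation. Feature values are synthetic placeholders for
# real CNN activations; shapes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))   # 200 image patches x 512 CNN features (synthetic)
Xc = X - X.mean(axis=0)           # center the feature matrix

# PCA via SVD of the centered features
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                    # patch scores along each mode of variation
var_explained = S**2 / np.sum(S**2)

# Patches at the extremes of a mode suggest what that mode captures visually
mode = 0
extreme_patches = np.argsort(scores[:, mode])[[0, -1]]
print(var_explained[:3], extreme_patches)
```

In the paper's setting, one would display the histology patches at these extremes to interpret the mode; here the indices merely identify which synthetic rows sit at the ends of the first principal direction.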
Combination of multiple neural networks using transfer learning and extensive geometric data augmentation for assessing cellularity scores in histopathology images
Classification of cancer cellularity within tissue samples is currently a
manual process performed by pathologists, and correctly determining cancer
cellularity can be time intensive. Deep Learning (DL) techniques have become
increasingly popular for this purpose because their accuracy and performance
can be comparable to those of pathologists. This work investigates the
capabilities of two DL approaches to
assess cancer cellularity in whole slide images (WSI) in the SPIE-AAPM-NCI
BreastPathQ challenge dataset. The effects of training on augmented data via
rotations, and combinations of multiple architectures into a single network
were analyzed using a modified Kendall Tau-b prediction probability metric
known as the average prediction probability PK. A deep, transfer learned,
Convolutional Neural Network (CNN) InceptionV3 was used as a baseline,
achieving an average PK value of 0.884, showing improvement from the average PK
value of 0.83 achieved by pathologists. The network was then trained on
additional datasets augmented by rotations of between 1 and 360 degrees,
yielding a peak PK increase of up to 4.2%. An additional architecture combined
the InceptionV3 network in parallel with VGG16, a shallow, transfer-learned
CNN. This parallel architecture
achieved a baseline average PK value of 0.907, a statistically significant
improvement over either architecture's performance alone (p<0.0001 by unpaired
t-test).
Comment: 7 pages (includes a cover page), 5 figures, 1 table; work addresses
the BreastPathQ challenge
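The PK metric above is described as a modified Kendall tau-b prediction probability. One plausible reading, sketched below, maps Kendall's tau-b from [-1, 1] onto a [0, 1] score; the challenge's exact modification and averaging scheme may differ, so treat this as illustrative rather than the authors' definition.

```python
# Hedged sketch: a Kendall tau-b based prediction probability score.
# This is one common construction (PK = (tau_b + 1) / 2), not necessarily
# the exact variant used in the BreastPathQ evaluation.
import math

def prediction_probability(y_true, y_pred):
    """Map Kendall's tau-b between truth and predictions onto [0, 1]."""
    n = len(y_true)
    concordant = discordant = ties_true = ties_pred = 0
    for i in range(n):
        for j in range(i + 1, n):
            dt = y_true[i] - y_true[j]
            dp = y_pred[i] - y_pred[j]
            if dt == 0:
                ties_true += 1      # pair tied in the reference scores
            if dp == 0:
                ties_pred += 1      # pair tied in the predictions
            if dt != 0 and dp != 0:
                if dt * dp > 0:
                    concordant += 1
                else:
                    discordant += 1
    n0 = n * (n - 1) / 2
    tau_b = (concordant - discordant) / math.sqrt(
        (n0 - ties_true) * (n0 - ties_pred))
    return (tau_b + 1.0) / 2.0

truth = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
preds = [0.1, 0.15, 0.5, 0.55, 0.9, 0.95]
print(prediction_probability(truth, preds))  # perfectly ordered -> 1.0
```

A perfectly rank-ordered prediction scores 1.0, random ordering about 0.5, and a fully reversed ordering 0.0, which matches the intuition of PK as the probability of correctly ranking a pair of cases.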
Transitioning between Convolutional and Fully Connected Layers in Neural Networks
Digital pathology has advanced substantially over the last decade; however,
tumor localization continues to be a challenging problem due to highly complex
patterns and textures in the underlying tissue bed. The use of convolutional
neural networks (CNNs) to analyze such complex images has been well adopted in
digital pathology. In recent years, however, the architecture of CNNs has
evolved with the introduction of inception modules, which have shown great
promise for classification tasks. In this paper, we propose a modified
"transition" module which learns global average pooling layers from filters of
varying sizes to encourage class-specific filters at multiple spatial
resolutions. We demonstrate the performance of the transition module in AlexNet
and ZFNet, for classifying breast tumors in two independent datasets of scanned
histology sections, in which the transition module proved superior.
Comment: This work is to appear at the 3rd workshop on Deep Learning in
Medical Image Analysis (DLMIA), MICCAI 201
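The core idea of the transition module, as described above, is to apply global average pooling to feature maps produced by filters of several sizes, so that each filter contributes a class-evidence value at its own spatial resolution. The sketch below illustrates that pooling-and-concatenation step in NumPy; the actual module sits inside AlexNet/ZFNet, and the filter sizes and shapes here are illustrative assumptions.

```python
# Hedged sketch of the transition idea: global average pooling over
# feature maps from multi-scale filter banks, concatenated into one
# vector for the classifier. Shapes and scales are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
# Simulated (channels, height, width) outputs of 3x3, 5x5 and 7x7 banks
multi_scale_maps = [rng.normal(size=(64, h, h)) for h in (28, 26, 24)]

def global_average_pool(fmap):
    """Reduce each channel's spatial map to a single scalar."""
    return fmap.mean(axis=(1, 2))

pooled = np.concatenate([global_average_pool(m) for m in multi_scale_maps])
print(pooled.shape)  # one scalar per filter across all scales
```

Because every filter is summarized by a single pooled value, training pressure falls directly on the filters to become class-specific, which is the behavior the module is designed to encourage.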
Assessment of algorithms for mitosis detection in breast cancer histopathology images
The proliferative activity of breast tumors, which is routinely estimated by counting mitotic figures in hematoxylin and eosin stained histology sections, is considered to be one of the most important prognostic markers. However, mitosis counting is laborious, subjective and may suffer from low inter-observer agreement. With the wider acceptance of whole slide images in pathology labs, automatic image analysis has been proposed as a potential solution to these issues.
In this paper, the results from the Assessment of Mitosis Detection Algorithms 2013 (AMIDA13) challenge are described. The challenge was based on a data set consisting of 12 training and 11 testing subjects, with more than one thousand mitotic figures annotated by multiple observers. Short descriptions and results from the evaluation of eleven methods are presented. The top performing method has an error rate that is comparable to the inter-observer agreement among pathologists.
Longitudinal Brain Tumor Tracking, Tumor Grading, and Patient Survival Prediction Using MRI
This work aims to develop novel methods for brain tumor classification, longitudinal brain tumor tracking, and patient survival prediction. Consequently, this dissertation proposes three tasks. First, we develop a framework for brain tumor segmentation prediction in longitudinal multimodal magnetic resonance imaging (mMRI) scans, comprising two methods: feature fusion and joint label fusion (JLF). The first method fuses stochastic multi-resolution texture features with tumor cell density features, in order to obtain tumor segmentation predictions in follow-up scans from a baseline pre-operative timepoint. The second method utilizes JLF to combine segmentation labels obtained from (i) the stochastic texture feature-based and Random Forest (RF)-based tumor segmentation method; and (ii) another state-of-the-art tumor growth and segmentation method known as boosted Glioma Image Segmentation and Registration (GLISTRboost, or GB). With the advantages of feature fusion and label fusion, we achieve state-of-the-art brain tumor segmentation prediction.
Second, we propose a deep neural network (DNN) learning-based method for brain tumor type and subtype grading using phenotypic and genotypic data, following the World Health Organization (WHO) criteria. In addition, the classification method integrates a cellularity feature which is derived from the morphology of a pathology image to improve classification performance. The proposed method achieves state-of-the-art performance for tumor grading following the new CNS tumor grading criteria.
Finally, we investigate brain tumor volume segmentation, tumor subtype classification, and overall patient survival prediction, and then we propose a new context-aware deep learning method, known as the Context Aware Convolutional Neural Network (CANet). Using the proposed method, we participated in the Multimodal Brain Tumor Segmentation Challenge 2019 (BraTS 2019) for the brain tumor volume segmentation and overall survival prediction tasks. In addition, we also participated in the Radiology-Pathology Challenge 2019 (CPM-RadPath 2019) for Brain Tumor Subtype Classification, organized by the Medical Image Computing & Computer Assisted Intervention (MICCAI) Society. The online evaluation results show that the proposed methods offer competitive performance in tumor volume segmentation through their use of state-of-the-art methods, promising performance in overall survival prediction, and state-of-the-art performance in tumor subtype classification. Moreover, our result ranked second in the testing phase of the CPM-RadPath 2019 challenge.
Deep Neural Network Analysis of Pathology Images With Integrated Molecular Data for Enhanced Glioma Classification and Grading
Gliomas are primary brain tumors that originate from glial cells. Classification and grading of these tumors is critical to prognosis and treatment planning. The current criteria for glioma classification in the central nervous system (CNS) were introduced by the World Health Organization (WHO) in 2016 and require the integration of histology with genomics. In 2017, the Consortium to Inform Molecular and Practical Approaches to CNS Tumor Taxonomy (cIMPACT-NOW) was established to provide up-to-date recommendations for CNS tumor classification, which the WHO is expected to adopt in its upcoming edition. In this work, we propose a novel glioma analytical method that, for the first time in the literature, integrates a cellularity feature derived from the digital analysis of brain histopathology images with molecular features following the latest WHO criteria. We first propose a novel over-segmentation strategy for region-of-interest (ROI) selection in large histopathology whole slide images (WSIs). A Deep Neural Network (DNN)-based classification method then fuses molecular features with cellularity features to improve tumor classification performance. We evaluate the proposed method with 549 patient cases from The Cancer Genome Atlas (TCGA) dataset. The cross-validated classification accuracies are 93.81% for lower-grade glioma (LGG) versus high-grade glioma (HGG) using a regular DNN, and 73.95% for LGG II versus LGG III using a residual neural network (ResNet) DNN, respectively. Our experiments suggest that the type of deep learning architecture has a significant impact on tumor subtype discrimination between LGG II and LGG III. These results outperform state-of-the-art methods in classifying LGG II vs. LGG III and offer competitive performance in distinguishing LGG vs. HGG in the literature. In addition, we also investigate molecular subtype classification using pathology images and cellularity information.
Finally, for the first time in the literature, this work shows promise for cellularity quantification to predict brain tumor grading for LGGs with IDH mutations.
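The fusion step described above can be pictured as concatenating molecular features with an image-derived cellularity feature before the classifier. The sketch below is a minimal stand-in under assumed dimensions: the feature names, sizes, and the logistic head are illustrative, not the paper's DNN.

```python
# Hedged sketch: fusing molecular features with a cellularity feature by
# concatenation, then scoring with a tiny logistic head that stands in
# for the paper's DNN classifier. All data and dimensions are synthetic.
import numpy as np

rng = np.random.default_rng(2)
molecular = rng.normal(size=(100, 20))    # e.g. mutation/expression markers
cellularity = rng.uniform(size=(100, 1))  # derived from histology images
fused = np.hstack([molecular, cellularity])

# Untrained logistic-regression head as a placeholder classifier
w = rng.normal(size=(fused.shape[1],))
logits = fused @ w
probs = 1.0 / (1.0 + np.exp(-logits))     # e.g. P(HGG) per case
print(fused.shape, probs.shape)
```

In practice the fused vector would feed a trained multi-layer network, but the concatenation itself, which lets a single classifier weigh histology-derived and genomic evidence jointly, is the essential move.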
Deep learning features encode interpretable morphologies within histological images.
Convolutional neural networks (CNNs) are revolutionizing digital pathology by enabling machine learning-based classification of a variety of phenotypes from hematoxylin and eosin (H&E) whole slide images (WSIs), but the interpretation of CNNs remains difficult. Most studies have considered interpretability in a post hoc fashion, e.g. by presenting example regions with strongly predicted class labels. However, such an approach does not explain the biological features that contribute to correct predictions. To address this problem, here we investigate the interpretability of H&E-derived CNN features (the feature weights in the final layer of a transfer-learning-based architecture). While many studies have incorporated CNN features into predictive models, there has been little empirical study of their properties. We show that such features can be construed as abstract morphological genes ("mones") with strong independent associations to biological phenotypes. Many mones are specific to individual cancer types, while others are found in multiple cancers, especially from related tissue types. We also observe that mone-mone correlations are strong and robustly preserved across related cancers. Importantly, linear mone-based classifiers can very accurately separate 38 distinct classes (19 tumor types and their adjacent normals, AUC = [Formula: see text] for each class prediction), and linear classifiers are also highly effective for universal tumor detection (AUC = [Formula: see text]). This linearity provides evidence that individual mones or correlated mone clusters may be associated with interpretable histopathological features or other patient characteristics. In particular, the statistical similarity of mones to gene expression values allows integrative mone analysis via expression-based bioinformatics approaches.
We observe strong correlations between individual mones and individual gene expression values, notably mones associated with collagen gene expression in ovarian cancer. Mone-expression comparisons also indicate that immunoglobulin expression can be identified using mones in colon adenocarcinoma and that immune activity can be identified across multiple cancer types, and we verify these findings by expert histopathological review. Our work demonstrates that mones provide a morphological H&E decomposition that can be effectively associated with diverse phenotypes, analogous to the interpretability of transcription via gene expression values. Our work also demonstrates that mones can be interpreted without using a classifier as a proxy.
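The mone-expression comparison described above amounts to correlating a CNN-derived feature with a gene expression value across samples. A minimal sketch of that computation follows; the data are simulated with a built-in association (real mones come from the final layer of a transfer-learning CNN, and "collagen gene" here is only a label).

```python
# Hedged sketch: correlating one "mone" (a CNN-derived feature) with one
# gene expression value across samples. Synthetic data with a planted
# association stand in for real mones and expression measurements.
import numpy as np

rng = np.random.default_rng(3)
expression = rng.normal(size=300)               # e.g. a collagen gene (simulated)
mone = 0.8 * expression + rng.normal(size=300)  # mone with planted correlation

r = np.corrcoef(mone, expression)[0, 1]         # Pearson correlation
print(round(float(r), 2))
```

Repeating this across all mone-gene pairs (with multiple-testing correction) is the expression-based style of analysis the abstract alludes to; the single-pair version shown here is just the building block.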