Context-aware stacked convolutional neural networks for classification of breast carcinomas in whole-slide histopathology images
Automated classification of histopathological whole-slide images (WSI) of
breast tissue requires analysis at very high resolutions with a large
contextual area. In this paper, we present context-aware stacked convolutional
neural networks (CNN) for classification of breast WSIs into normal/benign,
ductal carcinoma in situ (DCIS), and invasive ductal carcinoma (IDC). We first
train a CNN using high pixel resolution patches to capture cellular level
information. The feature responses generated by this model are then fed as
input to a second CNN, stacked on top of the first. Training of this stacked
architecture with large input patches enables learning of fine-grained
(cellular) details and global interdependence of tissue structures. Our system
is trained and evaluated on a dataset containing 221 WSIs of H&E stained breast
tissue specimens. The system achieves an AUC of 0.962 for the binary
classification of non-malignant and malignant slides and obtains a three class
accuracy of 81.3% for classification of WSIs into normal/benign, DCIS, and IDC,
demonstrating its potential for routine diagnostics.
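The two-stage idea above can be sketched with trivial stand-ins for both networks (simple summary statistics and a mean threshold, not the authors' actual CNNs): a first model produces a feature response per high-resolution patch, and a second model consumes the whole grid of responses, giving it a much larger contextual area.

```python
import numpy as np

def patch_features(patch):
    # Stand-in for the first, high-resolution CNN: it maps a pixel
    # patch to a small feature vector (here just summary statistics).
    return np.array([patch.mean(), patch.std()])

def context_classify(feature_grid):
    # Stand-in for the second, stacked network: it sees the whole grid
    # of feature responses at once, i.e. a large contextual area.
    return "malignant" if feature_grid[..., 0].mean() > 0.5 else "benign"

rng = np.random.default_rng(0)
slide_region = rng.random((128, 128))  # toy stand-in for a WSI region

# Tile into 32x32 patches and collect the per-patch feature responses.
P = 32
feature_grid = np.array([[patch_features(slide_region[i:i + P, j:j + P])
                          for j in range(0, 128, P)]
                         for i in range(0, 128, P)])  # shape (4, 4, 2)

label = context_classify(feature_grid)
```

The key point is that `context_classify` never touches raw pixels; it operates only on the stacked feature responses, which is what lets the second stage cover a wide field of view cheaply.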
Spatial Organization and Molecular Correlation of Tumor-Infiltrating Lymphocytes Using Deep Learning on Pathology Images
Beyond sample curation and basic pathologic characterization, the digitized H&E-stained images
of TCGA samples remain underutilized. To highlight this resource, we present mappings of tumor-infiltrating lymphocytes (TILs) based on H&E images from 13 TCGA tumor types. These TIL
maps are derived through computational staining using a convolutional neural network trained to
classify patches of images. Affinity propagation revealed local spatial structure in TIL patterns and
correlation with overall survival. TIL map structural patterns were grouped using standard
histopathological parameters. These patterns are enriched in particular T cell subpopulations
derived from molecular measures. TIL densities and spatial structure were differentially enriched
among tumor types, immune subtypes, and tumor molecular subtypes, implying that spatial
infiltrate state could reflect particular tumor cell aberration states. Obtaining spatial lymphocytic
patterns linked to the rich genomic characterization of TCGA samples demonstrates one use for
the TCGA image archives with insights into the tumor-immune microenvironment.
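A TIL map of the kind described is essentially a binary grid of per-patch classifier outputs, from which density and simple spatial statistics can be read off. The sketch below uses a random grid in place of real CNN predictions; the adjacency measure is an illustrative proxy for spatial clustering, not the affinity-propagation analysis used in the paper.

```python
import numpy as np

# Hypothetical patch-level classifier output: 1 where a patch is
# predicted TIL-positive, 0 otherwise (real maps come from a CNN).
rng = np.random.default_rng(1)
til_map = (rng.random((20, 20)) < 0.3).astype(int)

# Overall TIL density: the fraction of TIL-positive patches.
density = til_map.mean()

# A crude spatial-structure proxy: agreement between horizontally
# adjacent patches, which rises when positives cluster into regions.
adjacency = (til_map[:, :-1] == til_map[:, 1:]).mean()
```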
Learning where to see: a novel attention model for automated immunohistochemical scoring
Estimating over-amplification of human epidermal growth factor receptor 2 (HER2) in invasive breast cancer (BC) is regarded as a significant predictive and prognostic marker. We propose a novel deep reinforcement learning (DRL) based model that treats immunohistochemical (IHC) scoring of HER2 as a sequential learning task. For a given image tile sampled from a multi-resolution giga-pixel whole slide image (WSI), the model learns to sequentially identify some of the diagnostically relevant regions of interest (ROIs) by following a parameterized policy. The selected ROIs are processed by recurrent and residual convolution networks to learn the discriminative features for different HER2 scores and to predict the next location, without needing to process all the sub-image patches of a given tile, mimicking the histopathologist, who would not usually analyse every part of the slide at the highest magnification. The proposed model incorporates a task-specific regularization term and an inhibition-of-return mechanism to prevent the model from revisiting previously attended locations. We evaluated our model on two IHC datasets: a publicly available dataset from the HER2 scoring challenge contest and another dataset consisting of WSIs of gastroenteropancreatic neuroendocrine tumor sections stained with the Glo1 marker. We demonstrate that the proposed model outperforms other methods based on state-of-the-art deep convolutional networks. To the best of our knowledge, this is the first study using DRL for IHC scoring, and it could potentially lead to wider use of DRL in computational pathology, reducing the computational burden of analysing large multi-gigapixel histology images.
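The inhibition-of-return idea can be sketched independently of the full DRL model: keep a mask of visited locations and exclude them when the policy selects the next ROI. The per-location scores below are random placeholders for the learned policy's outputs.

```python
import numpy as np

def next_roi(scores, visited):
    # Inhibition of return: mask already-attended locations so the
    # policy cannot select them again.
    masked = np.where(visited, -np.inf, scores)
    return np.unravel_index(np.argmax(masked), scores.shape)

rng = np.random.default_rng(2)
scores = rng.random((5, 5))        # hypothetical per-location policy scores
visited = np.zeros((5, 5), dtype=bool)

trajectory = []
for _ in range(3):                 # attend a short sequence of ROIs
    loc = next_roi(scores, visited)
    visited[loc] = True
    trajectory.append(loc)
```

Because each chosen location is masked before the next step, the trajectory can never revisit a location, which is exactly the behaviour the regularization is meant to enforce.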
Using Feature Extraction From Deep Convolutional Neural Networks for Pathological Image Analysis and Its Visual Interpretability
This dissertation presents a computer-aided diagnosis (CAD) system using deep learning approaches for lesion detection and classification on whole-slide images (WSIs) of breast cancer. This study demonstrates that the deep features from the convolutional neural networks (CNNs) that are discriminative for classification provide comprehensive interpretability for the proposed CAD system, drawing on domain knowledge in pathology. In the experiment, a total of 186 WSIs were collected and classified into three categories: Non-Carcinoma, Ductal Carcinoma in Situ (DCIS), and Invasive Ductal Carcinoma (IDC). Instead of conducting pixel-wise classification (segmentation) into three classes directly, a hierarchical framework with a multi-view scheme was designed in the proposed system, which performs lesion detection for region proposal at higher magnification first and then conducts lesion classification at lower magnification for each detected lesion. A majority voting scheme was adopted to improve the error tolerance of the system in lesion-wise prediction. For all 186 collected slides, the slide-wise prediction accuracy reaches 95.16% (177/186) in binary classification of carcinoma (malignant) versus non-carcinoma (benign), and the sensitivity for cases with carcinoma reaches 96.36% (106/110). In multi-class classification, the accuracy is 92.47% (172/186) when predicting Non-Carcinoma, DCIS, and IDC for each slide. Most importantly, interpretability for the mechanism of the proposed CAD system is provided from the pathological perspective. The experimental results show that the morphological characteristics and co-occurrence properties learned by the deep learning models for lesion detection and classification meet the clinical rules of diagnosis.
Accordingly, the pathological interpretability of the deep features not only enhances the reliability of the proposed CAD system, helping it gain acceptance from medical specialists, but also facilitates the development of deep learning frameworks for various tasks in pathology.
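The majority voting scheme used for slide-wise prediction is straightforward to illustrate: the slide label is the most common class among the lesion-wise predictions, so a minority of misclassified lesions does not flip the slide-level result. This is a generic sketch, not the dissertation's exact implementation.

```python
from collections import Counter

def majority_vote(lesion_predictions):
    # Slide-level label from lesion-wise predictions: the most common
    # class wins, tolerating a minority of misclassified lesions.
    return Counter(lesion_predictions).most_common(1)[0][0]

slide_label = majority_vote(["IDC", "DCIS", "IDC", "IDC", "Non-Carcinoma"])
# slide_label == "IDC": three of five lesion votes agree
```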
Cancer diagnosis using deep learning: A bibliographic review
In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis followed by the typical classification methods used by doctors, to give readers a historical picture of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient for obtaining better performance. Moreover, for a broad audience, the basic evaluation criteria are also discussed. The criteria include the receiver operating characteristic curve (ROC curve), area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Previously used methods are considered inefficient, calling for better and smarter methods of cancer diagnosis. Artificial intelligence for cancer diagnosis is gaining attention as a way to build better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how such machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory networks (LSTMs), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python codes, to allow interested readers to experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles the successfully applied deep learning models for different types of cancers. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who intend to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch overview of state-of-the-art achievements.
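The evaluation criteria listed in the review all derive from the 2x2 confusion matrix, as the sketch below shows (standard definitions, not code from the paper; note that for binary sets the Dice coefficient coincides with the F1 score):

```python
def binary_metrics(tp, fp, fn, tn):
    # Standard evaluation criteria computed from confusion-matrix counts.
    sensitivity = tp / (tp + fn)            # a.k.a. recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    dice = 2 * tp / (2 * tp + fp + fn)      # equals F1 in the binary case
    jaccard = tp / (tp + fp + fn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy,
            "f1": f1, "dice": dice, "jaccard": jaccard}

m = binary_metrics(tp=8, fp=2, fn=2, tn=8)
# m["sensitivity"] == 0.8, m["jaccard"] == 8/12
```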
HookNet: multi-resolution convolutional neural networks for semantic segmentation in histopathology whole-slide images
We propose HookNet, a semantic segmentation model for histopathology
whole-slide images, which combines context and details via multiple branches of
encoder-decoder convolutional neural networks. Concentric patches at multiple
resolutions with different fields of view are used to feed different branches
of HookNet, and intermediate representations are combined via a hooking
mechanism. We describe a framework to design and train HookNet for achieving
high-resolution semantic segmentation and introduce constraints to guarantee
pixel-wise alignment in feature maps during hooking. We show the advantages of
using HookNet in two histopathology image segmentation tasks where tissue type
prediction accuracy strongly depends on contextual information, namely (1)
multi-class tissue segmentation in breast cancer and, (2) segmentation of
tertiary lymphoid structures and germinal centers in lung cancer. We show the
superiority of HookNet when compared with single-resolution U-Net models
working at different resolutions as well as with a recently published
multi-resolution model for histopathology image segmentation.
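The hooking mechanism relies on pixel-wise alignment between branches, which in practice amounts to center-cropping the wide-field context feature map before combining it with the target branch. This is a schematic sketch of that cropping and concatenation using plain arrays, not HookNet's actual network code; shapes and channel counts are illustrative.

```python
import numpy as np

def hook(context_feat, target_feat):
    # Schematic "hooking": center-crop the wide-field context feature
    # map so it is pixel-aligned with the target branch's map, then
    # concatenate along the channel axis.
    ch, cw = context_feat.shape[:2]
    th, tw = target_feat.shape[:2]
    top, left = (ch - th) // 2, (cw - tw) // 2
    crop = context_feat[top:top + th, left:left + tw]
    return np.concatenate([crop, target_feat], axis=-1)

context = np.zeros((16, 16, 8))  # coarse branch, wide field of view
target = np.zeros((8, 8, 8))     # fine branch, narrow field of view
fused = hook(context, target)    # shape (8, 8, 16)
```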
Towards Secure and Intelligent Diagnosis: Deep Learning and Blockchain Technology for Computer-Aided Diagnosis Systems
Cancer is the second leading cause of death across the world after cardiovascular disease. The survival rate of patients with cancerous tissue can significantly decrease due to late-stage diagnosis. Advances in whole-slide imaging scanners have resulted in a dramatic increase of patient data in the domain of digital pathology. Large-scale histopathology images need to be analyzed promptly for early cancer detection, which is critical for improving patients' survival rates and treatment planning. Advances in medical image processing and deep learning methods have facilitated the extraction and analysis of high-level features from histopathological data that could assist in life-critical diagnosis and reduce the considerable healthcare cost associated with cancer. In clinical settings, due to the complexity and large variance of collected image data, developing computer-aided diagnosis systems to support quantitative medical image analysis is an area of active research. The first goal of this research is to automate the classification and segmentation of cancerous regions in histopathology images of different cancer tissues by developing models using deep learning-based architectures. In this research, a framework with different modules is proposed, including (1) data pre-processing, (2) data augmentation, (3) feature extraction, and (4) deep learning architectures. Four validation studies were designed to conduct this research: (1) differentiating benign and malignant lesions in breast cancer, (2) differentiating between immature leukemic blasts and normal cells in leukemia, (3) differentiating benign and malignant regions in lung cancer, and (4) differentiating benign and malignant regions in colorectal cancer.
Training machine learning models and supporting disease diagnosis and treatment often require collecting patients' medical data. Concerns about privacy and trusted authenticity make data owners reluctant to share their personal and medical data. Motivated by the advantages of Blockchain technology in healthcare data-sharing frameworks, the focus of the second part of this research is to integrate Blockchain technology into computer-aided diagnosis systems to address the problems of managing access control, authentication, provenance, and confidentiality of sensitive medical data. To do so, a hierarchical identity- and attribute-based access control mechanism using smart contracts and the Ethereum Blockchain is proposed to securely process healthcare data without revealing sensitive information to unauthorized parties, leveraging the trustworthiness of transactions in a collaborative healthcare environment. The proposed access control mechanism provides a solution to the challenges associated with centralized access control systems and ensures data transparency, traceability, and data ownership for secure data sharing.
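The attribute-based part of such an access control mechanism can be sketched as a pure policy check: a requester is granted an action only if every required attribute is satisfied. The policy table, attribute names, and action name below are hypothetical placeholders; in the proposed system such rules would live in an Ethereum smart contract, not a Python dict.

```python
# Hypothetical attribute-based policy; a real deployment would encode
# such rules in an Ethereum smart contract, not a Python dict.
POLICY = {
    "read_records": {"role": {"physician", "oncologist"},
                     "department": {"oncology"}},
}

def authorized(attributes, action):
    # Grant access only if every required attribute of the action's
    # rule is satisfied by the requester's attributes.
    rule = POLICY.get(action)
    if rule is None:
        return False
    return all(attributes.get(key) in allowed
               for key, allowed in rule.items())

ok = authorized({"role": "physician", "department": "oncology"},
                "read_records")   # True: both attributes match
```

Hierarchy and identity checks would layer on top of this attribute test, with the blockchain providing the tamper-evident record of who was granted what.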