
    Development of deep learning methods for head and neck cancer detection in hyperspectral imaging and digital pathology for surgical guidance

    Surgeons performing routine cancer resections utilize palpation and visual inspection, along with time-consuming microscopic tissue analysis, to ensure removal of cancer. Despite this, inadequate surgical cancer margins are reported in up to 10-20% of head and neck squamous cell carcinoma (SCC) operations. There exists a need for surgical guidance with optical imaging to ensure complete cancer resection in the operating room. The objective of this dissertation is to evaluate hyperspectral imaging (HSI) as a non-contact, label-free optical imaging modality to provide intraoperative diagnostic information. For comparison of different optical methods, autofluorescence, RGB composite images synthesized from HSI, and two fluorescent dyes were also acquired and investigated for head and neck cancer detection. A novel and comprehensive dataset of 585 excised tissue specimens was obtained from 204 patients undergoing routine head and neck cancer surgeries. The first aim was to use SCC tissue specimens to determine the potential of HSI for surgical guidance in the challenging task of head and neck SCC detection. It was hypothesized that HSI could reduce tissue-analysis time and provide quantitative cancer predictions. State-of-the-art deep learning algorithms were developed for SCC detection in 102 patients and compared to other optical methods. HSI detected SCC with a median AUC score of 85%, and several anatomical locations demonstrated good SCC detection, such as the larynx, oropharynx, hypopharynx, and nasal cavity. To understand the ability of HSI for SCC detection, the most important spectral features were calculated and correlated with known cancer physiology signals, notably oxygenated and deoxygenated hemoglobin. The second aim was to evaluate HSI for tumor detection in thyroid and salivary glands, and RGB images were synthesized using the spectral response curves of the human eye for comparison. Using deep learning, HSI detected thyroid tumors with an 86% average AUC score, which outperformed fluorescent dyes and autofluorescence, while HSI-synthesized RGB imagery performed even better, with a 90% AUC score. The last aim was to develop deep learning algorithms for head and neck cancer detection in hundreds of digitized histology slides. Slides containing SCC or thyroid carcinoma can be distinguished from normal slides with 94% and 99% AUC scores, respectively, and SCC and thyroid carcinoma can be localized within whole-slide images with 92% and 95% AUC scores, respectively. In conclusion, the outcomes of this thesis work demonstrate that HSI and deep learning methods could aid surgeons and pathologists in detecting head and neck cancers.
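    The RGB-synthesis step described above can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: the function name, the Gaussian stand-ins for the human-eye response curves, and the array shapes are all assumptions.

```python
import numpy as np

def synthesize_rgb(cube, responses):
    """Collapse a hyperspectral cube (H, W, B) into an RGB composite by
    weighting each band with a per-channel spectral response curve.

    `responses` maps "r"/"g"/"b" to a (B,) response curve sampled at the
    cube's band wavelengths."""
    rgb = np.stack(
        [(cube * responses[c]).sum(axis=-1) / responses[c].sum()
         for c in ("r", "g", "b")],
        axis=-1,
    )
    # Rescale to [0, 1] for display.
    return (rgb - rgb.min()) / (rgb.max() - rgb.min() + 1e-8)

# Illustrative Gaussian stand-ins for the eye's spectral response curves.
wavelengths = np.linspace(450, 900, 91)  # nm, hypothetical band grid
responses = {
    c: np.exp(-((wavelengths - mu) ** 2) / (2 * 30.0 ** 2))
    for c, mu in (("r", 600.0), ("g", 550.0), ("b", 470.0))
}
```

    Each output channel is a response-weighted average over the spectral axis, so a band contributes to a channel in proportion to that channel's sensitivity at the band's wavelength.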

    A novel digital score for abundance of tumour infiltrating lymphocytes predicts disease free survival in oral squamous cell carcinoma

    Oral squamous cell carcinoma (OSCC) is the most common type of head and neck (H&N) cancer, with an increasing worldwide incidence and a worsening prognosis. The abundance of tumour infiltrating lymphocytes (TILs) has been shown to be a key prognostic indicator in a range of cancers, with emerging evidence of its role in OSCC progression and treatment response. However, the current methods of TIL analysis are subjective and open to variability in interpretation. An automated method for quantification of TIL abundance has the potential to facilitate better stratification and prognostication of oral cancer patients. We propose a novel method for objective quantification of TIL abundance in OSCC histology images. The proposed TIL abundance (TILAb) score is calculated by first segmenting the whole slide images (WSIs) into underlying tissue types (tumour, lymphocytes, etc.) and then quantifying the co-localization of lymphocytes and tumour areas in a novel fashion. We investigate the prognostic significance of the TILAb score on digitized WSIs of Hematoxylin and Eosin (H&E) stained slides of OSCC patients. Our deep learning based tissue segmentation achieves a high accuracy of 96.31%, which paves the way for reliable downstream analysis. We show that the TILAb score is a strong prognostic indicator (p = 0.0006) of disease free survival (DFS) on our OSCC test cohort. The automated TILAb score has a significantly higher prognostic value than the manual TIL score (p = 0.0024). In summary, the proposed TILAb score is a digital biomarker that is based on more accurate classification of tumour and lymphocytic regions, is motivated by the biological definition of TILs as tumour infiltrating lymphocytes, and offers the added advantages of objective and reproducible quantification.
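    The co-localization idea behind a score of this kind can be sketched on a patch-level class map. The definition below (lymphocyte patches 4-adjacent to tumour patches, normalized by all tumour and lymphocyte patches) is a deliberately simplified assumption for illustration; the published TILAb score is computed differently.

```python
import numpy as np

def til_abundance(labels, tumour=1, lymph=2):
    """Toy co-localization score on a 2D patch-level class map:
    fraction of tumour/lymphocyte patches that are lymphocyte patches
    directly bordering a tumour patch."""
    tum = labels == tumour
    lym = labels == lymph
    # 4-neighbour dilation of the tumour mask (no SciPy dependency).
    near = np.zeros_like(tum)
    near[1:, :] |= tum[:-1, :]
    near[:-1, :] |= tum[1:, :]
    near[:, 1:] |= tum[:, :-1]
    near[:, :-1] |= tum[:, 1:]
    infiltrating = (lym & near).sum()
    denom = tum.sum() + lym.sum()
    return infiltrating / denom if denom else 0.0
```

    The score rises when lymphocytes sit at the tumour boundary rather than scattered elsewhere in the slide, which is the intuition behind treating infiltration, not mere lymphocyte count, as the biomarker.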

    Human papilloma virus detection in oropharyngeal carcinomas with in situ hybridisation using hand crafted morphological features and deep central attention residual networks

    Human Papilloma Virus (HPV) is a major risk factor for the development of oropharyngeal cancer. Automatic detection of HPV in digitized pathology tissues using in situ hybridisation (ISH) is a difficult task due to the variability and complexity of staining patterns as well as the presence of imaging and staining artefacts. This paper proposes an intelligent image analysis framework to determine HPV status in digitized samples of oropharyngeal cancer tissue micro-arrays (TMA). The proposed pipeline combines handcrafted feature extraction with deep learning, using epithelial region segmentation as a preliminary step. We apply a deep central attention learning technique to segment epithelial regions and, within those, assess the presence of regions representing ISH products. We then extract relevant morphological measurements from those regions, which are input into a supervised learning model for the identification of HPV status. The performance of the proposed method has been evaluated on 2,009 TMA images of oropharyngeal carcinoma tissues captured with a ×20 objective. The experimental results show that our technique provides around 91% classification accuracy in detecting HPV status when compared with the histopathologist gold standard. We also tested the performance of end-to-end deep learning classification methods to assess HPV status by learning directly from the original ISH-processed images, rather than from the handcrafted features extracted from the segmented images. We examined the performance of sequential convolutional neural network (CNN) architectures, including three popular image recognition networks (VGG-16, ResNet and Inception V3), in their pre-trained and trained-from-scratch versions; however, their highest classification accuracy (78%) was inferior to that of the hybrid pipeline presented here.
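    The morphological-measurement step can be illustrated with a toy region analysis on a binary mask of segmented ISH products. The `region_stats` helper and the choice of 4-connectivity are assumptions for illustration; the paper's actual features and downstream classifier are richer.

```python
import numpy as np
from collections import deque

def region_stats(mask):
    """Label 4-connected foreground regions in a binary mask and return
    (region count, list of region areas) — simple morphological
    measurements of the kind fed to a supervised classifier."""
    seen = np.zeros_like(mask, dtype=bool)
    areas = []
    H, W = mask.shape
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                # Breadth-first flood fill of one region.
                q = deque([(i, j)])
                seen[i, j] = True
                area = 0
                while q:
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < H and 0 <= nx < W
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                areas.append(area)
    return len(areas), areas
```

    Per-image summaries such as region count and mean region area form a compact feature vector, which is the kind of handcrafted input the hybrid pipeline contrasts with end-to-end CNN classification.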

    Pan-tumor CAnine cuTaneous Cancer Histology (CATCH) dataset

    Due to morphological similarities, the differentiation of histologic sections of cutaneous tumors into individual subtypes can be challenging. Recently, deep learning-based approaches have proven their potential for supporting pathologists in this regard. However, many of these supervised algorithms require a large amount of annotated data for robust development. We present a publicly available dataset of 350 whole slide images of seven different canine cutaneous tumors, complemented by 12,424 polygon annotations for 13 histologic classes, including seven cutaneous tumor subtypes. In inter-rater experiments, we show a high consistency of the provided labels, especially for tumor annotations. We further validate the dataset by training a deep neural network for the task of tissue segmentation and tumor subtype classification. We achieve a class-averaged Jaccard coefficient of 0.7047, and 0.9044 for tumor in particular. For classification, we achieve a slide-level accuracy of 0.9857. Since canine cutaneous tumors possess various histologic homologies to human tumors, the added value of this dataset is not limited to veterinary pathology but extends to more general fields of application.
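    The class-averaged Jaccard coefficient reported above can be computed as follows; this is a generic sketch of the metric, not the authors' evaluation code.

```python
import numpy as np

def class_averaged_jaccard(pred, target, n_classes):
    """Mean over classes of |pred ∩ target| / |pred ∪ target| on
    integer-labeled segmentation maps."""
    scores = []
    for c in range(n_classes):
        p, t = pred == c, target == c
        union = (p | t).sum()
        if union:  # skip classes absent from both maps
            scores.append((p & t).sum() / union)
    return float(np.mean(scores))
```

    Averaging per-class scores keeps rare tissue classes from being drowned out by abundant ones, which is why the tumor-only Jaccard (0.9044) can sit well above the class average (0.7047).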

    Artificial Intelligence-based methods in head and neck cancer diagnosis: an overview

    Background: This paper reviews recent literature employing Artificial Intelligence/Machine Learning (AI/ML) methods for diagnostic evaluation of head and neck cancers (HNC) using automated image analysis. Methods: Electronic database searches using MEDLINE via OVID, EMBASE and Google Scholar were conducted to retrieve articles using AI/ML for diagnostic evaluation of HNC (2009–2020). No restrictions were placed on the AI/ML method or imaging modality used. Results: In total, 32 articles were identified. HNC sites included oral cavity (n = 16), nasopharynx (n = 3), oropharynx (n = 3), larynx (n = 2), salivary glands (n = 2), sinonasal (n = 1), and in five studies multiple sites were studied. Imaging modalities included histological (n = 9), radiological (n = 8), hyperspectral (n = 6), endoscopic/clinical (n = 5), infrared thermal (n = 1) and optical (n = 1). Clinicopathologic/genomic data were used in two studies. Traditional ML methods were employed in 22 studies (69%), deep learning (DL) in eight studies (25%), and a combination of these methods in two studies (6%). Conclusions: There is an increasing volume of studies exploring the role of AI/ML to aid HNC detection using a range of imaging modalities. These methods can achieve high degrees of accuracy that can exceed the abilities of human judgement in making data predictions. Large-scale multi-centric prospective studies are required to aid deployment into clinical practice.

    An Aggregation of Aggregation Methods in Computational Pathology

    Image analysis and machine learning algorithms operating on multi-gigapixel whole-slide images (WSIs) often process a large number of tiles (sub-images) and require aggregating predictions from the tiles in order to predict WSI-level labels. In this paper, we present a review of existing literature on various types of aggregation methods with a view to help guide future research in the area of computational pathology (CPath). We propose a general CPath workflow with three pathways that consider multiple levels and types of data and the nature of computation to analyse WSIs for predictive modelling. We categorize aggregation methods according to the context and representation of the data, features of computational modules and CPath use cases. We compare and contrast different methods based on the principle of multiple instance learning, perhaps the most commonly used aggregation method, covering a wide range of CPath literature. To provide a fair comparison, we consider a specific WSI-level prediction task and compare various aggregation methods for that task. Finally, we conclude with a list of objectives and desirable attributes of aggregation methods in general, pros and cons of the various approaches, some recommendations and possible future directions.
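    As one concrete instance of tile-to-slide aggregation, an attention-based multiple instance learning pooling step can be sketched as below; the weight vectors, shapes, and the plain (ungated) attention form are illustrative assumptions rather than any specific method from the review.

```python
import numpy as np

def attention_pool(tile_feats, w_att, w_cls):
    """Attention-MIL pooling: score each tile embedding, softmax the
    scores over tiles, weight-average into one slide-level embedding,
    then apply a linear slide-level classifier.

    tile_feats: (N, D) tile embeddings; w_att, w_cls: (D,) weights."""
    scores = tile_feats @ w_att                 # (N,) per-tile scores
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                        # attention weights sum to 1
    slide_feat = alpha @ tile_feats             # (D,) pooled embedding
    return slide_feat @ w_cls, alpha
```

    The attention weights double as an interpretability signal: tiles with large alpha are the ones driving the slide-level prediction, which mean-pooling cannot expose.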

    The State of Applying Artificial Intelligence to Tissue Imaging for Cancer Research and Early Detection

    Artificial intelligence represents a new frontier in human medicine that could save more lives and reduce the costs, thereby increasing accessibility. As a consequence, the rate of advancement of AI in cancer medical imaging and more particularly tissue pathology has exploded, opening it to ethical and technical questions that could impede its adoption into existing systems. In order to chart the path of AI in its application to cancer tissue imaging, we review current work and identify how it can improve cancer pathology diagnostics and research. In this review, we identify 5 core tasks that models are developed for, including regression, classification, segmentation, generation, and compression tasks. We address the benefits and challenges that such methods face, and how they can be adapted for use in cancer prevention and treatment. The studies looked at in this paper represent the beginning of this field and future experiments will build on the foundations that we highlight

    Pan-cancer image-based detection of clinically actionable genetic alterations

    Get PDF
    Molecular alterations in cancer can cause phenotypic changes in tumor cells and their microenvironment. Routine histopathology tissue slides, which are ubiquitously available, can reflect such morphological changes. Here, we show that deep learning can consistently infer a wide range of genetic mutations, molecular tumor subtypes, gene expression signatures and standard pathology biomarkers directly from routine histology. We developed, optimized, validated and publicly released a one-stop-shop workflow and applied it to tissue slides of more than 5,000 patients across multiple solid tumors. Our findings show that a single deep learning algorithm can be trained to predict a wide range of molecular alterations from routine, paraffin-embedded histology slides stained with hematoxylin and eosin. These predictions generalize to other populations and are spatially resolved. Our method can be implemented on mobile hardware, potentially enabling point-of-care diagnostics for personalized cancer treatment. More generally, this approach could elucidate and quantify genotype–phenotype links in cancer.

    Deep learning-based survival prediction for multiple cancer types using histopathology images

    Prognostic information at diagnosis has important implications for cancer treatment and monitoring. Although cancer staging, histopathological assessment, molecular features, and clinical variables can provide useful prognostic insights, improving risk stratification remains an active research area. We developed a deep learning system (DLS) to predict disease-specific survival across 10 cancer types from The Cancer Genome Atlas (TCGA). We used a weakly supervised approach without pixel-level annotations, and tested three different survival loss functions. The DLS was developed using 9,086 slides from 3,664 cases and evaluated using 3,009 slides from 1,216 cases. In multivariable Cox regression analysis of the combined cohort including all 10 cancers, the DLS was significantly associated with disease-specific survival (hazard ratio of 1.58, 95% CI 1.28-1.70, p<0.0001) after adjusting for cancer type, stage, age, and sex. In a per-cancer adjusted subanalysis, the DLS remained a significant predictor of survival in 5 of 10 cancer types. Compared to a baseline model including stage, age, and sex, the c-index of the model demonstrated an absolute 3.7% improvement (95% CI 1.0-6.5) in the combined cohort. Additionally, our models stratified patients within individual cancer stages, particularly stage II (p=0.025) and stage III (p<0.001). By developing and evaluating prognostic models across multiple cancer types, this work represents one of the most comprehensive studies exploring the direct prediction of clinical outcomes using deep learning and histopathology images. Our analysis demonstrates the potential for this approach to provide prognostic information in multiple cancer types, and even within specific pathologic stages. However, given the relatively small number of clinical events, we observed wide confidence intervals, suggesting that future work will benefit from larger datasets.
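    The c-index used above to compare models is Harrell's concordance index. A minimal sketch, simplified to ignore pairs with tied survival times, looks like this:

```python
from itertools import combinations

def c_index(risk, time, event):
    """Harrell's concordance index: among comparable pairs (the subject
    with the earlier time has an observed event and times differ), the
    fraction where the higher risk score matches the shorter survival
    time; ties in risk score count as 0.5."""
    num = den = 0.0
    for i, j in combinations(range(len(risk)), 2):
        if time[j] < time[i]:
            i, j = j, i          # order so i has the earlier time
        if not event[i] or time[i] == time[j]:
            continue             # pair not comparable (censored or tied)
        den += 1
        if risk[i] > risk[j]:
            num += 1
        elif risk[i] == risk[j]:
            num += 0.5
    return num / den if den else float("nan")
```

    A value of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the reported absolute 3.7% improvement is measured on this 0.5–1.0 scale.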