
    RECENT CNN-BASED TECHNIQUES FOR BREAST CANCER HISTOLOGY IMAGE CLASSIFICATION

    Histology images are extensively used by pathologists to assess abnormalities and detect malignancy in breast tissue. Convolutional Neural Networks (CNNs), meanwhile, are by far the dominant models for image classification and interpretation. Motivated by these two facts, we survey recent CNN-based methods for breast cancer histology image analysis. The survey focuses on two major issues commonly faced by CNN-based methods: the design of an appropriate CNN architecture and the lack of sufficiently large labelled datasets for training. Regarding architecture design, methods examining breast histology images adopt three main approaches: designing the CNN architecture manually from scratch, using pre-trained models, and adopting automatic architecture design. Methods addressing the lack of labelled datasets fall into four categories: those using pre-trained models, those using data augmentation, those adopting weakly supervised learning, and those adopting feedforward filter learning. Research works from each category and their reported performance are presented in this paper. We conclude by indicating some future research directions related to the analysis of histology images.
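    The pre-trained-model approach mentioned above can be sketched as follows. This is a minimal, hypothetical illustration, not the method of any surveyed paper: the "frozen CNN features" are simulated with random vectors, whereas in practice they would come from a network pre-trained on a large dataset such as ImageNet, with only a lightweight classifier head trained on the small labelled histology set.

```python
import numpy as np

# Hypothetical sketch of transfer learning with a frozen feature extractor:
# features here are random stand-ins for the output of a pre-trained CNN.
rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 200, 512, 2

X = rng.normal(size=(n_samples, n_features))       # "frozen CNN features"
y = rng.integers(0, n_classes, size=n_samples)     # benign / malignant labels
Y = np.eye(n_classes)[y]                           # one-hot targets

# Train only a softmax classifier head by gradient descent on cross-entropy.
W = np.zeros((n_features, n_classes))
lr = 0.1
for _ in range(100):
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    grad = X.T @ (probs - Y) / n_samples           # cross-entropy gradient
    W -= lr * grad

accuracy = (np.argmax(X @ W, axis=1) == y).mean()
```

    Because the expensive feature extractor stays fixed, only the small head needs labelled data, which is why this approach suits the small-dataset regime the survey discusses.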

    Magnification-independent Histopathological Image Classification with Similarity-based Multi-scale Embeddings

    The classification of histopathological images is of great value in both cancer diagnosis and pathological studies. However, multiple factors, such as variations caused by magnification and class imbalance, make it a challenging task on which conventional methods that learn from image-label datasets often perform unsatisfactorily. We observe that tumours of the same class often share common morphological patterns. To exploit this fact, we propose an approach that learns similarity-based multi-scale embeddings (SMSE) for magnification-independent histopathological image classification. In particular, a pair loss and a triplet loss are leveraged to learn similarity-based embeddings from image pairs or image triplets. The learned embeddings provide accurate measurements of the similarity between images, which we regard as a more effective representation of histopathological morphology than ordinary image features. Furthermore, to ensure the resulting models are magnification-independent, images acquired at different magnification factors are fed to the networks simultaneously during training to learn multi-scale embeddings. In addition, to eliminate the impact of class imbalance, instead of the hard-sample mining strategy that simply discards easy samples, we introduce a new reinforced focal loss that punishes hard misclassified samples while suppressing easy well-classified samples. Experimental results show that SMSE improves performance on histopathological image classification tasks for both breast and liver cancer by a large margin over previous methods. In particular, SMSE achieves the best performance on the BreakHis benchmark, with an improvement ranging from 5% to 18% over previous methods using traditional features.
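    The two loss ingredients in this abstract can be sketched in a few lines. The triplet loss shown is the standard hinge form; the paper's "reinforced" focal loss is not specified here, so the sketch shows the standard binary focal loss it extends, which likewise down-weights easy well-classified samples.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard hinge-style triplet loss on embedding vectors: pull the
    anchor towards a same-class (positive) example and push it away from
    a different-class (negative) example by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)

def focal_loss(p, y, gamma=2.0):
    """Standard binary focal loss (a baseline for the paper's
    "reinforced" variant, whose exact form is not given in the abstract):
    the (1 - p_t)**gamma factor suppresses easy well-classified samples."""
    p_t = np.where(y == 1, p, 1.0 - p)     # probability of the true class
    p_t = np.clip(p_t, 1e-12, 1.0)
    return -((1.0 - p_t) ** gamma) * np.log(p_t)

# A well-separated triplet incurs zero loss; a violated one does not.
a = np.array([0.0, 0.0])
pos = np.array([0.1, 0.0])    # close to the anchor (same class)
neg = np.array([3.0, 0.0])    # far from the anchor (different class)
loss_ok = triplet_loss(a, pos, neg)   # d_pos=0.01, d_neg=9.0 -> 0.0
loss_bad = triplet_loss(a, neg, pos)  # positive is far away -> > 0
```

    In the SMSE setting, such losses are applied to embeddings of image pairs and triplets drawn across magnification factors, so that same-class tumours land close together regardless of scale.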

    Medical Image Classification using Deep Learning Techniques and Uncertainty Quantification

    The emergence of medical image analysis using deep learning techniques has introduced multiple challenges for developing robust and trustworthy systems for automated grading and diagnosis. Several works have been presented to improve classification performance. However, these methods lack the diversity needed to capture different levels of contextual information among image regions, strategies to introduce diversity in learning through ensemble-based techniques, or uncertainty measures for the predictions generated by automated systems. Consequently, they provide sub-optimal results that are insufficient for clinical practice. To enhance classification performance and introduce trustworthiness, deep learning techniques and uncertainty quantification methods are required to provide diversity in contextual learning and an initial degree of explainability, respectively. This thesis aims to explore and develop novel deep learning techniques, coupled with uncertainty quantification, for building actionable automated grading and diagnosis systems. More specifically, the thesis makes the following three main contributions. First, it introduces a novel entropy-based elastic ensemble of Deep Convolutional Neural Networks (DCNNs) termed 3E-Net for classifying grades of invasive breast carcinoma microscopic images. 3E-Net is based on a patch-wise network for feature extraction and image-wise networks for final image classification, and uses an elastic ensemble based on Shannon entropy as an uncertainty quantification method for measuring the level of randomness in image predictions. As the second contribution, the thesis presents a novel multi-level context- and uncertainty-aware deep learning architecture, named MCUa, for the classification of breast cancer microscopic images.
MCUa consists of multiple feature extractors and multi-level context-aware models in a dynamic ensemble fashion to learn the spatial dependencies among image patches and enhance learning diversity. The architecture also uses Monte Carlo (MC) dropout to measure the uncertainty of image predictions and to decide, based on the generated uncertainty score, whether a prediction can be trusted. The third contribution introduces a novel model-agnostic method (AUQantO) that establishes an actionable strategy for optimising uncertainty quantification for deep learning architectures. AUQantO optimises a hyperparameter threshold, which is compared against uncertainty scores from Shannon entropy and MC dropout. The optimal threshold is obtained from single- and multi-objective functions, optimised using multiple optimisation methods. A comprehensive set of experiments has been conducted on multiple medical imaging datasets, with multiple novel evaluation metrics, to demonstrate the effectiveness of the three contributions for clinical practice. First, the 3E-Net versions achieved accuracies of 96.15% and 99.50% on the invasive breast carcinoma dataset. The second contribution, MCUa, achieved an accuracy of 98.11% on the breast cancer histology image dataset. Lastly, AUQantO showed significant improvements in the performance of state-of-the-art deep learning models, with average accuracy improvements of 1.76% and 2.02% on the breast cancer histology image dataset and of 5.67% and 4.24% on the skin cancer dataset, using two uncertainty quantification techniques. AUQantO demonstrated the ability to generate the optimal number of excluded images for a particular dataset.
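    The uncertainty machinery running through all three contributions can be illustrated with a small sketch. This is a hypothetical toy, not the thesis's implementation: the "model" is a softmax over noise-perturbed logits standing in for dropout-active forward passes, and the threshold value is invented for illustration, whereas AUQantO optimises it.

```python
import numpy as np

rng = np.random.default_rng(1)

def shannon_entropy(probs):
    """Shannon entropy of a predictive distribution (higher = more uncertain)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def mc_dropout_predict(forward, x, n_passes=20):
    """Average class probabilities over stochastic forward passes
    (dropout kept active at test time), as in MC dropout."""
    samples = np.stack([forward(x) for _ in range(n_passes)])
    return samples.mean(axis=0)

def noisy_forward(logits):
    """Toy stochastic model: softmax over noise-perturbed logits, standing
    in for a network with dropout active."""
    z = logits + rng.normal(scale=0.5, size=logits.shape)
    e = np.exp(z - z.max())
    return e / e.sum()

confident = np.array([4.0, 0.0, 0.0])    # peaked logits -> low entropy
ambiguous = np.array([0.1, 0.0, 0.05])   # flat logits  -> high entropy

h_conf = shannon_entropy(mc_dropout_predict(noisy_forward, confident))
h_amb = shannon_entropy(mc_dropout_predict(noisy_forward, ambiguous))

# AUQantO-style rule (threshold value hypothetical here, optimised in the
# thesis): refer images whose uncertainty exceeds the threshold to an expert.
threshold = 0.8
refer_to_expert = h_amb > threshold
```

    Excluding (referring) the high-uncertainty images is exactly what lets such a system trade a smaller automated workload for the accuracy gains the abstract reports.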