2,277 research outputs found

    Efficient breast cancer classification network with dual squeeze and excitation in histopathological images.

    Medical image analysis methods for mammograms, ultrasound, and magnetic resonance imaging (MRI) cannot provide the underlying cellular-level features needed to understand the cancer microenvironment, which makes them unsuitable for breast cancer subtype classification. In this paper, we propose a convolutional neural network (CNN)-based breast cancer classification method for hematoxylin and eosin (H&E) whole slide images (WSIs). The proposed method incorporates fused mobile inverted bottleneck convolutions (FMB-Conv) and mobile inverted bottleneck convolutions (MBConv) with a dual squeeze and excitation (DSE) network to accurately classify breast cancer tissue into binary (benign and malignant) and eight subtype classes from histopathology images. To that end, a pre-trained EfficientNetV2 network is used as a backbone with a modified DSE block that combines spatial and channel-wise squeeze and excitation layers to highlight important low-level and high-level abstract features. Our method outperformed the ResNet101, InceptionResNetV2, and EfficientNetV2 networks on the publicly available BreakHis dataset for binary and multi-class breast cancer classification in terms of precision, recall, and F1-score at multiple magnification levels.
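
    The dual squeeze-and-excitation idea described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the weight shapes, the ReLU bottleneck, and the max-fusion of the two branches are illustrative assumptions about how channel-wise and spatial SE are typically combined.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_squeeze_excitation(x, w1, w2, w_sp):
    """Dual squeeze-and-excitation on a feature map x of shape (C, H, W).

    w1: (C // r, C) and w2: (C, C // r) form the channel-SE bottleneck MLP;
    w_sp: (C,) plays the role of a 1x1 convolution for the spatial branch.
    """
    # channel SE: global average pool -> bottleneck MLP -> per-channel gates
    z = x.mean(axis=(1, 2))                              # squeeze: (C,)
    gates_c = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))      # excite:  (C,)
    cse = x * gates_c[:, None, None]
    # spatial SE: 1x1 "convolution" across channels -> per-pixel gates
    gates_s = sigmoid(np.tensordot(w_sp, x, axes=1))     # (H, W)
    sse = x * gates_s[None, :, :]
    # fuse the two recalibrated maps (element-wise maximum is one common choice)
    return np.maximum(cse, sse)
```

    Because both gate maps lie in (0, 1), the block can only attenuate features, letting the network emphasise informative channels and spatial locations.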

    Artificial intelligence for breast cancer precision pathology

    Breast cancer is the most common cancer type in women globally but is associated with a continuous decline in mortality rates. The improved prognosis can be partially attributed to effective treatments developed for subgroups of patients. However, it remains challenging to optimise the treatment plan for each individual. To improve disease outcomes and to decrease the burden associated with unnecessary treatment and adverse drug effects, this thesis aimed to develop artificial intelligence based tools to improve individualised medicine for breast cancer patients. In study I, we developed a deep learning based model (DeepGrade) to stratify patients associated with intermediate risk. The model was optimised with haematoxylin and eosin (HE) stained whole slide images (WSIs) of grade 1 and 3 tumours and applied to stratify grade 2 tumours into grade 1-like (DG2-low) and grade 3-like (DG2-high) subgroups. The efficacy of the DeepGrade model was validated using recurrence-free survival, where the dichotomised groups exhibited an adjusted hazard ratio (HR) of 2.94 (95% confidence interval [CI] 1.24-6.97, P = 0.015). The observation was further confirmed in the external test cohort with an adjusted HR of 1.91 (95% CI 1.11-3.29, P = 0.019). In study II, we investigated whether deep learning models are capable of predicting gene expression levels from the morphological patterns of tumours. We optimised convolutional neural networks (CNNs) to predict mRNA expression for 17,695 genes using HE stained WSIs from the training set. An initial evaluation on the validation set showed a significant correlation between the RNA-seq measurements and model predictions for 52.75% of the genes. The models were further tested in the internal and external test sets, and we also compared the models' efficacy in predicting RNA-seq based proliferation scores. Lastly, the ability of the optimised CNNs to capture spatial gene expression variation was evaluated and confirmed using spatial transcriptomics profiling. In study III, we investigated the relationship between intra-tumour gene expression heterogeneity and patient survival outcomes. The deep learning models optimised in study II were applied to generate spatial gene expression predictions for the PAM50 gene panel. A set of 11 texture based features and one slide-average gene expression feature per gene were extracted as input to train a Cox proportional hazards regression model with elastic net regularisation to predict patients' risk of recurrence. Through nested cross-validation, the model dichotomised the training cohort into low and high risk groups with an adjusted HR of 2.1 (95% CI 1.30-3.30, P = 0.002). The model was further validated on two external cohorts. In study IV, we investigated the agreement between Stratipath Breast, the modified, commercialised DeepGrade model developed in study I, and the Prosigna® test. Both tests seek to stratify patients into groups with distinct prognoses. The outputs of Stratipath Breast comprise a risk score and a two-level risk stratification, whereas the outputs of Prosigna® include a risk of recurrence score and a three-tier risk stratification. By comparing the number of patients assigned to 'low' or 'high' risk groups, we found an overall moderate agreement (76.09%) between the two tests. The risk scores from the two tests were also well correlated (Spearman's rho = 0.59, P = 1.16E-08), and a good correlation was observed between the risk score from each test and the Ki67 index. The comparison was also carried out in the subgroup of patients with grade 2 tumours, where similar but slightly weaker correlations were found.
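
    The Cox proportional hazards model with elastic net regularisation used in study III can be sketched in plain NumPy. The thesis presumably used an established survival package; the sketch below fits the model by plain (sub)gradient descent on the Breslow partial likelihood, which is a simplification (no tie-handling beyond sorting, no coordinate descent), with learning rate and penalty weights chosen only for illustration.

```python
import numpy as np

def cox_enet_loss_grad(beta, X, time, event, lam=0.1, alpha=0.5):
    """Negative Cox partial log-likelihood (Breslow) plus elastic-net penalty."""
    order = np.argsort(-time)            # descending time: risk sets are prefixes
    Xo, evo = X[order], event[order].astype(bool)
    eta = Xo @ beta
    w = np.exp(eta)
    cum_w = np.cumsum(w)                             # sum of exp(eta) over risk set
    cum_wx = np.cumsum(w[:, None] * Xo, axis=0)      # weighted covariate sums
    ll = np.sum(eta[evo] - np.log(cum_w[evo]))
    grad_ll = Xo[evo].sum(axis=0) - (cum_wx[evo] / cum_w[evo, None]).sum(axis=0)
    pen = lam * (alpha * np.abs(beta).sum() + 0.5 * (1 - alpha) * beta @ beta)
    grad_pen = lam * (alpha * np.sign(beta) + (1 - alpha) * beta)
    return -ll + pen, -grad_ll + grad_pen

def fit_cox_enet(X, time, event, lr=0.001, steps=300, **kw):
    """Minimise the penalised loss by fixed-step (sub)gradient descent."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        _, g = cox_enet_loss_grad(beta, X, time, event, **kw)
        beta -= lr * g
    return beta
```

    The fitted linear predictor X @ beta is the risk score, which can then be dichotomised (e.g. at the median) into the low and high risk groups the abstract describes.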

    Persistent Homology Tools for Image Analysis

    Topological Data Analysis (TDA) is a new field of mathematics that has emerged rapidly since the first decade of this century from various works in algebraic topology and geometry. The goal of TDA and its main tool, persistent homology (PH), is to provide topological insight into complex and high-dimensional datasets. We take this premise onboard to gain more topological insight from digital image analysis and to quantify tiny low-level distortions that are undetectable except possibly by highly trained persons. Such image distortion may be introduced intentionally (e.g. by morphing or steganography) or arise naturally in abnormal human tissue/organ scan images as a result of the onset of cancer or other diseases. The main objective of this thesis is to design new image analysis tools based on persistent homological invariants representing simplicial complexes on sets of pixel landmarks over a sequence of distance resolutions. We first propose innovative automatic techniques to select image pixel landmarks and build a variety of simplicial topologies from a single image. The effectiveness of each landmark selection scheme is demonstrated by testing on different image tampering problems such as morphed face detection, steganalysis, and breast tumour detection. Vietoris-Rips simplicial complexes are constructed from the image landmarks at increasing distance thresholds, and topological (homological) features are computed at each threshold and summarised in a form known as persistent barcodes. We vectorise the space of persistent barcodes using a technique known as persistent binning, whose strength we demonstrate for various image analysis purposes. Different machine learning approaches are adopted to develop automatic detection of tiny texture distortions in many image analysis applications. The homological invariants used in this thesis are the 0- and 1-dimensional Betti numbers. We developed an innovative approach to designing persistent homology (PH) based algorithms for automatic detection of the above types of image distortion. In particular, we developed the first PH-based detector of morphing attacks on passport face biometric images. We demonstrate the significant accuracy of two such morph detection algorithms with four types of automatically extracted image landmarks: local binary patterns (LBP), 8-neighbour super-pixels (8NSP), radial LBP (R-LBP) and centre-symmetric LBP (CS-LBP). Each of these techniques yields several persistent barcodes that summarise persistent topological features, helping to gain insight into complex hidden structures not accessible to other image analysis methods. We also demonstrate the success of a similarly developed PH-based universal steganalysis tool capable of detecting secret messages hidden inside digital images. We further argue, through a pilot study, that building PH records from digital mammographic images can differentiate malignant breast tumours from benign ones. The research presented in this thesis creates new opportunities to build real applications based on TDA and highlights research challenges in a variety of image processing/analysis tasks. For example, we describe a TDA-based exemplar image inpainting technique (TEBI), superior to the existing exemplar algorithm, for the reconstruction of missing image regions.
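
    The 0-dimensional part of the persistent barcode described above has a simple characterisation: every landmark point is born as its own component at distance 0, and a bar dies whenever two components merge, which happens exactly at the edge weights of a minimum spanning tree. This is only a sketch of that H0 case (the thesis also uses 1-dimensional Betti numbers, which need a genuine Rips complex and are not covered here), using Kruskal-style union-find.

```python
import numpy as np

def h0_barcode(points):
    """0-dimensional persistent barcode of a Vietoris-Rips filtration.

    points: array of shape (n, d) landmark coordinates.
    Returns the sorted finite death times (n - 1 of them); the remaining
    component's bar persists to infinity.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):                         # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    deaths = []
    for w, i, j in edges:                # process edges by increasing threshold
        ri, rj = find(i), find(j)
        if ri != rj:                     # two components merge: one bar dies
            parent[ri] = rj
            deaths.append(w)
    return deaths                        # bars are [0, death)
```

    Binning these death times into a fixed-length histogram is one way to realise the "persistent binning" vectorisation mentioned above before feeding the features to a classifier.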

    DEEP LEARNING BASED SEGMENTATION AND CLASSIFICATION FOR IMPROVED BREAST CANCER DETECTION

    Breast cancer is a leading killer of women globally. It is a serious health concern caused by calcifications or abnormal tissue growth in the breast. Screening and identifying the nature of the tumor as benign or malignant is important to facilitate early intervention, which drastically decreases the mortality rate. Screening usually relies on ultrasound images, since they are easily accessible to most people and, unlike mammograms, the other most common screening technique, rarely produce unclear scans. In this thesis, the approach to this problem is to build a stacked model that makes predictions on the basis of the shape, pattern, and spread of the tumor. To achieve this, the typical steps are pre-processing of the images followed by segmentation and classification. For pre-processing, the proposed approach uses histogram equalization, which improves the contrast of the image, makes the tumor stand out from its surroundings, and eases the segmentation step. For segmentation, the approach uses a UNet architecture, designed specifically for biomedical imaging, with a ResNet backbone. The aim of segmentation is to separate the tumor from the ultrasound image so that the classification model can make its predictions from this mask. The F1-score for the segmentation model was 97.30%. For classification, a base CNN model is used to extract features from the resulting masks, which are then fed into a network for the final predictions. The base CNN model is ResNet50, with weights initialized from training on ImageNet, and the output network is a simple 8-layer network with ReLU activation in the hidden layers and softmax in the final decision-making layer. ResNet50 returns 2048 features from each mask, which are fed into the network for decision-making. The hidden layers of the neural network have 1024, 512, 256, 128, 64, 32, and 10 neurons, respectively. The classification accuracy achieved by the proposed model was 98.61%, with an F1-score of 98.41%. Detailed experimental results are presented along with comparative data.
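
    The histogram equalization pre-processing step can be sketched in a few lines of NumPy. This assumes a uint8 grayscale input and does not handle a perfectly constant image (where the denominator would vanish); production code would typically call an image library routine instead.

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for a uint8 grayscale ultrasound image.

    Intensities are mapped through the normalised cumulative histogram so
    that the output spreads over the full 0-255 range, boosting contrast
    before segmentation (assumes img is not constant).
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf_min = cdf[np.nonzero(hist)[0][0]]    # cdf at the lowest used intensity
    lut = np.round((cdf - cdf_min) * 255.0 / (img.size - cdf_min)).astype(np.uint8)
    return lut[img]
```

    An image that already has a uniform histogram passes through unchanged, while a narrow two-level image is stretched to the extremes 0 and 255, which is exactly the contrast boost the pipeline relies on.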

    Interpretable Medical Imagery Diagnosis with Self-Attentive Transformers: A Review of Explainable AI for Health Care

    Recent advancements in artificial intelligence (AI) have facilitated its widespread adoption in primary medical services, addressing the demand-supply imbalance in healthcare. Vision Transformers (ViT) have emerged as state-of-the-art computer vision models, benefiting from self-attention modules. However, compared to traditional machine-learning approaches, deep-learning models are complex and often treated as a "black box", causing uncertainty about how they operate. Explainable Artificial Intelligence (XAI) refers to methods that explain and interpret machine learning models' inner workings and how they come to decisions, which is especially important in the medical domain for guiding the healthcare decision-making process. This review summarises recent ViT advancements and interpretative approaches to understanding the decision-making process of ViT, enabling transparency in medical diagnosis applications.
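
    The self-attention module at the heart of ViT is also what makes these interpretability approaches possible: the attention matrix itself can be inspected. A minimal single-head, scaled dot-product sketch in NumPy (projection matrices are illustrative; real ViTs use multiple heads, biases, and learned positional embeddings):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over token embeddings.

    X: (n_tokens, d_model); Wq, Wk, Wv: (d_model, d_head) projections.
    Returns the attended values and the attention matrix A, whose row i
    shows how much token i attends to every other token.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])           # (n, n) similarities
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)                # softmax: rows sum to 1
    return A @ V, A
```

    Because each row of A is a probability distribution over image patches, visualising those rows (e.g. attention rollout) is one of the interpretative approaches the review surveys.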

    Added benefits of computer-assisted analysis of Hematoxylin-Eosin stained breast histopathological digital slides

    This thesis aims to determine whether computer-assisted analysis can be used to better understand pathologists' perception of mitotic figures on Hematoxylin-Eosin (HE) stained breast histopathological digital slides. It also explores the feasibility of reproducible histologic nuclear atypia scoring by incorporating computer-assisted analysis into cytological scores given by a pathologist. In addition, this thesis investigates the possibility of computer-assisted diagnosis for categorizing HE breast images into different subtypes of cancer or benign masses. In the first study, a data set of 453 mitoses and 265 miscounted non-mitoses within breast cancer digital slides was considered. Different features were extracted from the objects in different channels of eight colour spaces. The findings of this first study suggested that computer-aided image analysis can provide a better understanding of the image-related features behind discrepancies among pathologists in the recognition of mitoses. Two tasks done routinely by pathologists are making a diagnosis and grading the breast cancer. In the second study, a new tool (COMPASS) for reproducible nuclear atypia scoring in breast cancer histological images was proposed. The third study proposed and tested MuDeRN (MUlti-category classification of breast histopathological images using DEep Residual Networks), a framework for classifying hematoxylin-eosin stained breast digital slides as either benign or cancerous, and then categorizing the cancerous and benign cases into four subtypes each. The studies indicated that computer-assisted analysis can aid in both nuclear grading (COMPASS) and breast cancer diagnosis (MuDeRN). The results could be used to improve breast cancer prognosis estimation by reducing inter-pathologist disagreement in counting mitotic figures and enabling reproducible nuclear grading. They could also improve the provision of a second opinion to the pathologist when making a diagnosis.
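
    Extracting features from object channels across colour spaces, as in the first study, can be sketched as follows. The abstract does not name the eight colour spaces or the exact features, so this sketch uses just RGB and a BT.601-style YUV conversion with per-channel mean and standard deviation as illustrative stand-ins.

```python
import numpy as np

# BT.601-style RGB -> YUV matrix (one of several colour spaces one might use)
RGB_TO_YUV = np.array([[0.299,     0.587,     0.114],
                       [-0.14713, -0.28886,   0.436],
                       [0.615,    -0.51499,  -0.10001]])

def channel_statistics(rgb):
    """Per-channel mean/std features for an image in [0, 1], shape (H, W, 3).

    Returns 12 features: (mean, std) for each of the 3 channels in RGB,
    then the same for YUV. A real pipeline would repeat this over more
    colour spaces and richer statistics for each detected object.
    """
    yuv = rgb @ RGB_TO_YUV.T
    feats = []
    for space in (rgb, yuv):
        for c in range(3):
            ch = space[..., c]
            feats += [ch.mean(), ch.std()]
    return np.array(feats)
```

    Feature vectors like this, computed per candidate object, are what allow a classifier to model which appearance properties drive disagreement between pathologists.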

    Computer-aided diagnosis of low grade endometrial stromal sarcoma (LGESS)

    Low grade endometrial stromal sarcoma (LGESS) accounts for about 0.2% of all uterine cancer cases. Approximately 75% of LGESS patients are initially misdiagnosed with leiomyoma, a type of benign tumor also known as fibroids. In this research, uterine tissue biopsy images of potential LGESS patients are preprocessed using segmentation and stain normalization algorithms. We then apply a variety of classic machine learning and advanced deep learning models to classify tissue images as either benign or cancerous. For the classic techniques considered, the highest classification accuracy we attain is about 0.85, while our best deep learning model achieves an accuracy of approximately 0.87. These results clearly indicate that properly trained learning algorithms can aid in the diagnosis of LGESS.
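
    As an illustration of the benign-vs-cancerous classification setting, here is a tiny nearest-centroid baseline over precomputed image feature vectors. The paper's actual classic models are not named in this abstract, so nearest-centroid is purely a stand-in for "classic machine learning" on such features.

```python
import numpy as np

def fit_nearest_centroid(X, y):
    """Classic baseline: one centroid per class in feature space."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_nearest_centroid(model, X):
    """Assign each feature vector to the class of its closest centroid."""
    classes = sorted(model)
    centroids = np.stack([model[c] for c in classes])
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return np.array([classes[i] for i in d2.argmin(axis=1)])
```

    With well-separated benign and cancerous feature clusters this baseline is already strong; the reported gap between classic (~0.85) and deep (~0.87) accuracy suggests the real features are far less separable.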