
    Supervised saliency map driven segmentation of lesions in dermoscopic images

    Lesion segmentation is the first step in most automatic melanoma recognition systems. Deficiencies and difficulties in dermoscopic images such as color inconstancy, hair occlusion, dark corners, and color charts make lesion segmentation an intricate task. In order to detect the lesion in the presence of these problems, we propose a supervised saliency detection method tailored for dermoscopic images based on discriminative regional feature integration (DRFI). The DRFI method combines multi-level segmentation, regional contrast, property, and background descriptors, and a random forest regressor to create saliency scores for each region in the image. In our improved saliency detection method, mDRFI, we add new features to the regional property descriptors. In addition, to achieve more robust regional background descriptors, a thresholding algorithm is proposed to obtain a new pseudo-background region. Findings reveal that mDRFI is superior to DRFI in detecting the lesion as the salient object in dermoscopic images. The proposed overall lesion segmentation framework uses the detected saliency map to construct an initial mask of the lesion through thresholding and postprocessing operations. The initial mask is then evolved in a level set framework to fit the lesion's boundaries more closely. The results of evaluation tests on three public datasets show that our proposed segmentation method outperforms conventional state-of-the-art segmentation algorithms, and its performance is comparable with recent approaches based on deep convolutional neural networks.
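
    The score-and-threshold pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the stand-in linear scorer replaces the trained random forest regressor, and the feature vectors and threshold are hypothetical.

```python
import numpy as np

def saliency_scores(region_features, score_fn):
    """Score each region; in mDRFI, score_fn would be a trained random forest regressor."""
    return np.array([score_fn(f) for f in region_features])

def initial_mask(scores, labels, thresh=0.5):
    """Threshold per-region saliency scores into a binary lesion mask over the label map."""
    salient = [i for i, s in enumerate(scores) if s >= thresh]
    return np.isin(labels, salient)

# Toy example: 4 regions on a 2x2 label map, scored by a stand-in linear scorer.
labels = np.array([[0, 1], [2, 3]])
feats = [np.array([0.1]), np.array([0.9]), np.array([0.2]), np.array([0.8])]
scores = saliency_scores(feats, lambda f: float(f[0]))
mask = initial_mask(scores, labels)
```

In the full framework this mask would then be refined by postprocessing and level set evolution rather than used directly.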

    Land change detection and effective factors on forest land use changes: application of land change modeler and multiple linear regression

    Forest cover is reduced every year through conversion to pasture, agricultural, urban, and rural land, causing extensive damage to natural resources. To identify the factors driving this reduction, multiple linear regression was applied to the Mazandaran forests over the period 1995 to 2015. Multiple regression can link the decline in forest cover (dependent variable) to its driving factors (independent variables). In this study, Landsat TM data from 1995 and Landsat ETM+ data from 2015 were analyzed and classified to investigate changes in forest area. The images were classified into forest and non-forest classes, and the resulting forest map was analyzed against physiographic and human spatial variables using a regression equation. Change detection on the satellite images showed a reduction in forest area of approximately 257,331 ha over the study period. The regression analysis indicated that a linear combination of per-capita income, rain, and temperature as independent variables, with a coefficient of determination of 0.4, was capable of estimating the reduction in forest area. The results of this study can serve as an efficient tool for managing and improving forests with regard to physiographic and human characteristics.
    Keywords: Land Change Modeler, multiple linear regression, remote sensing, Mazandaran forest
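
    The regression step above can be illustrated with ordinary least squares. This sketch uses synthetic stand-ins for the study's predictors (per-capita income, rain, temperature) and response (forest-area loss); the coefficients and data are invented for illustration only.

```python
import numpy as np

# Synthetic predictors: per-capita income, rain, temperature (standardized).
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
coef_true = np.array([1.5, -0.8, 0.3])
y = X @ coef_true + rng.normal(scale=0.5, size=30)  # stand-in for forest-area loss

# Fit y = X @ beta + b by least squares, with an explicit intercept column.
A = np.column_stack([X, np.ones(len(X))])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Coefficient of determination (the study reports R^2 = 0.4 on real data).
resid = y - A @ beta
r2 = 1 - resid.var() / y.var()
```

With real data, the fitted `beta` would quantify how much each factor contributes to the estimated loss of forest area.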

    Segmentation of Lesions in Dermoscopy Images Using Saliency Map and Contour Propagation

    Melanoma is one of the most dangerous types of skin cancer and causes thousands of deaths worldwide each year. Recently, dermoscopic imaging systems have been widely used as a diagnostic tool for melanoma detection. The first step in the automatic analysis of dermoscopy images is lesion segmentation. In this article, a novel method for skin lesion segmentation that can be applied to a variety of images with different properties and deficiencies is proposed. After a multi-step preprocessing phase (hair removal and illumination correction), a supervised saliency map construction method is used to obtain an initial guess of the lesion location. The construction of the saliency map is based on a random forest regressor that takes a vector of regional image features and returns a saliency score. This regressor is trained in a multi-level manner on the 2,000 training images provided in the ISIC 2017 melanoma recognition challenge. In addition to providing an initial contour of the lesion, the output saliency map can be used as a speed function, alongside the image gradient, to drive the initial contour toward the lesion boundary using a propagation model. The proposed algorithm has been tested on the ISIC 2017 training, validation, and test datasets and achieves high values for the evaluation metrics.
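
    One common way to combine a saliency map and image gradients into a propagation speed is sketched below: speed is high on salient, homogeneous pixels and drops near edges, so the evolving contour slows at the lesion boundary. The formula here is a generic edge-stopping term, not necessarily the paper's exact speed function.

```python
import numpy as np

def propagation_speed(saliency, image):
    """Generic contour speed: proportional to saliency, damped by gradient magnitude."""
    gy, gx = np.gradient(image.astype(float))   # per-axis intensity gradients
    edge = np.hypot(gx, gy)                     # gradient magnitude
    return saliency / (1.0 + edge)

# Toy image with a vertical edge; uniform saliency isolates the gradient term.
img = np.zeros((5, 5)); img[:, 3:] = 1.0
sal = np.ones((5, 5))
F = propagation_speed(sal, img)
```

In a level set scheme, `F` would multiply the normal motion of the contour, so the front advances quickly over the lesion interior and stalls at its boundary.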

    Social network analysis of cell networks improves deep learning for prediction of molecular pathways and key mutations in colorectal cancer

    Colorectal cancer (CRC) is a major global health concern, and identifying the molecular pathways, genetic subtypes, and mutations associated with CRC is crucial for precision medicine. However, traditional measurement techniques such as gene sequencing are costly and time-consuming, while most deep learning methods proposed for this task lack interpretability. This study offers a new approach to enhance state-of-the-art deep learning methods for molecular pathway and key mutation prediction by incorporating cell network information. We build cell graphs with nuclei as nodes and nuclei connections as edges, and leverage Social Network Analysis (SNA) measures to extract abstract, perceivable, and interpretable features that explicitly describe the cell network characteristics in an image. Our approach does not rely on precise nuclei segmentation or feature extraction, is computationally efficient, and is easily scalable. In this study, we utilize the TCGA-CRC-DX dataset, comprising 499 patients and 502 diagnostic slides from primary colorectal tumours, sourced from 36 distinct medical centres in the United States. By incorporating the SNA features alongside deep features in two multiple instance learning frameworks, we demonstrate improved performance for chromosomal instability (CIN), hypermutated tumour (HM), TP53 gene, BRAF gene, and microsatellite instability (MSI) status prediction tasks (2.4%–4% and 7%–8.8% improvement in AUROC and AUPRC on average). Additionally, our method achieves outstanding performance on MSI prediction in an external PAIP dataset (99% AUROC and 98% AUPRC), demonstrating its generalizability. Our findings highlight the discriminative power of SNA features, show how they can benefit deep learning models' performance, and provide insights into the correlation of cell network profiles with molecular pathways and key mutations.
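
    Typical SNA measures on a cell graph include node degree and the local clustering coefficient (the fraction of a nucleus's neighbour pairs that are themselves connected). The sketch below computes both in plain Python on a toy adjacency structure; a real pipeline would likely use a graph library and aggregate such per-node measures into per-image features.

```python
from itertools import combinations

def clustering(adj, v):
    """Local clustering coefficient: fraction of neighbour pairs that are linked."""
    nbrs = adj[v]
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return links / (len(nbrs) * (len(nbrs) - 1) / 2)

# Toy cell graph: nuclei 0-2 form a triangle, nucleus 3 hangs off nucleus 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
degrees = {v: len(n) for v, n in adj.items()}
cc = {v: clustering(adj, v) for v in adj}
```

Summary statistics of `degrees` and `cc` over all nuclei in a tile are the kind of interpretable features that can be concatenated with deep features in a multiple instance learning framework.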

    NuClick: a deep learning framework for interactive segmentation of microscopic images

    Object segmentation is an important step in the workflow of computational pathology. Deep learning based models generally require large amounts of labeled data for precise and reliable prediction. However, collecting labeled data is expensive because it often requires expert knowledge, particularly in the medical imaging domain, where labels are the result of a time-consuming analysis made by one or more human experts. As nuclei, cells, and glands are fundamental objects for downstream analysis in computational pathology/cytology, in this paper we propose NuClick, a CNN-based approach to speed up collecting annotations for these objects while requiring minimal interaction from the annotator. We show that for nuclei and cells in histology and cytology images, one click inside each object is enough for NuClick to yield a precise annotation. For multicellular structures such as glands, we propose a novel approach that provides NuClick with a squiggle as a guiding signal, enabling it to segment glandular boundaries. These supervisory signals are fed to the network as auxiliary inputs along with the RGB channels. With detailed experiments, we show that NuClick is applicable to a wide range of object scales, robust against variations in the user input, adaptable to new domains, and delivers reliable annotations. An instance segmentation model trained on masks generated by NuClick achieved first rank in the LYON19 challenge. As exemplar outputs of our framework, we are releasing two datasets: 1) a dataset of lymphocyte annotations in IHC images, and 2) a dataset of segmented WBCs in blood smear images.
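
    The idea of feeding the guiding signal as an auxiliary input channel can be sketched as below: the click (or squiggle) is rasterized into a map and stacked with the RGB channels before being passed to the network. The image size and the single-click encoding here are illustrative, not NuClick's exact input specification.

```python
import numpy as np

def make_input(rgb, guide):
    """Stack a guiding-signal map (click or squiggle mask) as an extra channel."""
    return np.concatenate([rgb, guide[..., None]], axis=-1)

# Toy patch: 64x64 RGB image plus a single click inside the target nucleus.
rgb = np.zeros((64, 64, 3), dtype=np.float32)
click = np.zeros((64, 64), dtype=np.float32)
click[32, 32] = 1.0
x = make_input(rgb, click)
```

A squiggle for a gland would simply be a thicker set of nonzero pixels in the same auxiliary channel, so the network architecture is unchanged across object types.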

    A Fully Automated and Explainable Algorithm for the Prediction of Malignant Transformation in Oral Epithelial Dysplasia

    Oral epithelial dysplasia (OED) is a premalignant histopathological diagnosis given to lesions of the oral cavity. Its grading suffers from significant inter-/intra-observer variability and does not reliably predict malignancy progression, potentially leading to suboptimal treatment decisions. To address this, we developed a novel artificial intelligence algorithm that can assign an Oral Malignant Transformation (OMT) risk score, based on histological patterns in Haematoxylin and Eosin-stained whole slide images, to quantify the risk of OED progression. The algorithm is based on the detection and segmentation of nuclei within (and around) the epithelium using an in-house segmentation model. We then employed a shallow neural network fed with interpretable morphological/spatial features emulating histological markers. We conducted internal cross-validation on our development cohort (Sheffield; n = 193 cases) followed by independent validation on two external cohorts (Birmingham and Belfast; n = 92 cases). The proposed OMTscore yields an AUROC of 0.74 in predicting whether an OED progresses to malignancy. Survival analyses showed the prognostic value of our OMTscore for predicting malignant transformation, when compared to the manually assigned WHO and binary grades. Analysis of the correctly predicted cases elucidated the presence of peri-epithelial and epithelium-infiltrating lymphocytes in the most predictive patches of cases that transformed (p < 0.0001). This is the first study to propose a completely automated algorithm for predicting OED transformation based on interpretable nuclear features, validated on external datasets. The algorithm shows better-than-human-level performance for prediction of OED malignant transformation and offers a promising solution to the challenges of grading OED in routine clinical practice.
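
    A shallow network over interpretable features ultimately reduces to a learned weighted combination squashed into a risk score. The sketch below shows a single logistic unit over hypothetical nuclear features (mean nuclear area, nearest-neighbour distance, density); the features, weights, and bias are invented for illustration and do not reflect the paper's trained model.

```python
import numpy as np

def omt_like_score(features, w, b):
    """Logistic risk score from interpretable nuclear features (illustrative weights)."""
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))

# Hypothetical standardized per-case features and weights.
feats = np.array([1.2, -0.5, 0.8])   # area, NN-distance, density (stand-ins)
w = np.array([0.9, -0.4, 0.6])
b = -0.2
risk = omt_like_score(feats, w, b)
```

Because each weight attaches to a named, measurable feature, the contribution of each histological marker to the final score remains directly inspectable, which is the interpretability argument made above.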

    Cross-linking breast tumor transcriptomic states and tissue histology

    Identification of the gene expression state of a cancer patient from routine pathology imaging and characterization of its phenotypic effects have significant clinical and therapeutic implications. However, prediction of the expression of individual genes from whole slide images (WSIs) is challenging due to co-dependent or correlated expression of multiple genes. Here, we use a purely data-driven approach to first identify groups of genes with co-dependent expression and then predict their status from WSIs using a bespoke graph neural network. These gene groups allow us to capture the gene expression state of a patient with a small number of binary variables that are biologically meaningful and carry histopathological insights for clinical and therapeutic use cases. Prediction of gene expression state based on these gene groups allows associating histological phenotypes (cellular composition, mitotic counts, grading, etc.) with underlying gene expression patterns and opens avenues for gaining biological insights directly from routine pathology imaging.
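
    A simple data-driven way to find co-dependent gene groups is to link genes whose pairwise correlation exceeds a threshold and take connected components, as sketched below with a union-find pass over the correlation matrix. The threshold and synthetic expression data are illustrative; the paper's grouping procedure may differ.

```python
import numpy as np

def correlated_groups(expr, thresh=0.8):
    """Group genes (rows of expr) whose pairwise |correlation| exceeds thresh."""
    corr = np.corrcoef(expr)
    n = corr.shape[0]
    parent = list(range(n))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]; i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) > thresh:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Synthetic expression: genes 0 and 1 are co-dependent, gene 2 is independent.
rng = np.random.default_rng(1)
base = rng.normal(size=100)
expr = np.stack([base,
                 base * 2 + 0.01 * rng.normal(size=100),
                 rng.normal(size=100)])
groups = correlated_groups(expr)
```

Each resulting group can then be summarized by one binary status variable, which is what makes the downstream WSI prediction problem tractable.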

    Mitosis detection, fast and slow: robust and efficient detection of mitotic figures

    Counting of mitotic figures is a fundamental step in grading and prognostication of several cancers. However, manual mitosis counting is tedious and time-consuming. In addition, variation in the appearance of mitotic figures causes a high degree of discordance among pathologists. With advances in deep learning models, several automatic mitosis detection algorithms have been proposed, but they are sensitive to the domain shift often seen in histology images. We propose a robust and efficient two-stage mitosis detection framework, which comprises mitosis candidate segmentation (Detecting Fast) and candidate refinement (Detecting Slow) stages. The proposed candidate segmentation model, termed EUNet, is fast and accurate due to its architectural design. EUNet can precisely segment candidates at a lower resolution to considerably speed up candidate detection. Candidates are then refined using a deeper classifier network, EfficientNet-B7, in the second stage. We make both stages robust against domain shift by incorporating domain generalization methods. We demonstrate state-of-the-art performance and generalizability of the proposed model on the three largest publicly available mitosis datasets, winning the two mitosis domain generalization challenge contests (MIDOG21 and MIDOG22). Finally, we showcase the utility of the proposed algorithm by processing the TCGA breast cancer cohort (1,124 whole-slide images) to generate and release a repository of more than 620K potential mitotic figures (not exhaustively validated). [Abstract copyright: Copyright © 2024. Published by Elsevier B.V.]
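
    The two-stage fast/slow structure can be sketched as below: a cheap pass over a downsampled probability map proposes candidate locations, and a second, more expensive classifier re-scores each candidate. The downsampling factor, thresholds, and stand-in classifier are illustrative; in the paper, stage 1 is the EUNet segmentation model and stage 2 is EfficientNet-B7.

```python
import numpy as np

def detect_fast(prob_map, thresh=0.6, stride=4):
    """Stage 1: propose candidates from a low-resolution probability map."""
    small = prob_map[::stride, ::stride]           # stand-in for low-res segmentation
    ys, xs = np.nonzero(small > thresh)
    return [(y * stride, x * stride) for y, x in zip(ys, xs)]

def detect_slow(prob_map, candidates, classify, thresh=0.5):
    """Stage 2: keep only candidates a deeper classifier scores above thresh."""
    return [c for c in candidates if classify(prob_map, c) > thresh]

# Toy probability map with two candidate peaks of different strength.
prob = np.zeros((32, 32)); prob[8, 8] = 0.9; prob[16, 20] = 0.7
cands = detect_fast(prob)
final = detect_slow(prob, cands, lambda img, c: img[c], thresh=0.8)
```

Running the cheap stage at low resolution is what makes whole-slide processing tractable; the expensive classifier only ever sees the handful of surviving candidates.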