
    Domain-adversarial neural networks to address the appearance variability of histopathology images

    Preparing and scanning histopathology slides consists of several steps, each with a multitude of parameters. The parameters can vary between pathology labs and within the same lab over time, resulting in significant variability of the tissue appearance that hampers the generalization of automatic image analysis methods. Typically, this is addressed with ad-hoc approaches such as staining normalization that aim to reduce the appearance variability. In this paper, we propose a systematic solution based on domain-adversarial neural networks. We hypothesize that removing the domain information from the model representation leads to better generalization. We tested our hypothesis for the problem of mitosis detection in breast cancer histopathology images and made a comparative analysis with two other approaches. We show that combining color augmentation with domain-adversarial training is a better alternative than standard approaches to improve the generalization of deep learning methods. Comment: MICCAI 2017 Workshop on Deep Learning in Medical Image Analysis
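
    As a concrete illustration of the domain-adversarial idea, below is a minimal sketch of the gradient reversal layer such training hinges on, written in PyTorch; the head architecture and the lambda weighting are illustrative assumptions, not the configuration used in the paper.

        import torch
        from torch import nn
        from torch.autograd import Function

        class GradReverse(Function):
            """Identity on the forward pass; flips (and scales) the gradient on
            the backward pass, so the feature extractor is pushed to remove
            domain cues while the domain classifier is trained normally."""

            @staticmethod
            def forward(ctx, x, lambd):
                ctx.lambd = lambd
                return x.view_as(x)

            @staticmethod
            def backward(ctx, grad_output):
                return grad_output.neg() * ctx.lambd, None

        class DomainHead(nn.Module):
            def __init__(self, n_features, n_domains, lambd=1.0):
                super().__init__()
                self.lambd = lambd
                self.net = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                         nn.Linear(64, n_domains))

            def forward(self, features):
                return self.net(GradReverse.apply(features, self.lambd))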

    Inferring a Third Spatial Dimension from 2D Histological Images

    Histological images are obtained by transmitting light through a tissue specimen that has been stained in order to produce contrast. This process results in 2D images of a specimen that has a three-dimensional structure. In this paper, we propose a method to infer how the stains are distributed in the direction perpendicular to the surface of the slide for a given 2D image, in order to obtain a 3D representation of the tissue. This inference is achieved by decomposition of the staining concentration maps under constraints that ensure realistic decomposition and reconstruction of the original 2D images. Our study shows that it is possible to generate realistic 3D images, making this method a potential tool for data augmentation when training deep learning models. Comment: IEEE International Symposium on Biomedical Imaging (ISBI), 201
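
    The 2D staining concentration maps that this decomposition operates on are conventionally obtained with Beer-Lambert color deconvolution; a minimal sketch follows, assuming the common Ruifrok-Johnston H&E stain vectors (the paper's 3D inference itself is not reproduced here).

        import numpy as np

        # Assumed H&E stain matrix (Ruifrok & Johnston); rows are RGB
        # absorption vectors for hematoxylin and eosin.
        HE_STAINS = np.array([[0.65, 0.70, 0.29],
                              [0.07, 0.99, 0.11]])

        def concentration_maps(rgb, stains=HE_STAINS, eps=1e-6):
            """rgb: float image in (0, 1], shape (H, W, 3) -> (H, W, n_stains)."""
            od = -np.log(np.clip(rgb, eps, 1.0))  # Beer-Lambert optical density
            conc = od.reshape(-1, 3) @ np.linalg.pinv(stains)
            return conc.reshape(rgb.shape[0], rgb.shape[1], -1)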

    Automatic nuclei segmentation in H&E stained breast cancer histopathology images

    The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8. © 2013 Veta et al.
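
    To make steps 1 and 2 concrete, a single-scale marker-controlled watershed can be sketched with scikit-image as below; the paper's pre- and post-processing and its multi-scale merging are considerably richer, and all parameters here are placeholders.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.color import rgb2hed
        from skimage.filters import gaussian, threshold_otsu
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def segment_nuclei(rgb, sigma=2.0, min_distance=5):
            hematoxylin = rgb2hed(rgb)[..., 0]             # color unmixing
            smoothed = gaussian(hematoxylin, sigma=sigma)
            mask = smoothed > threshold_otsu(smoothed)     # nuclei foreground
            distance = ndi.distance_transform_edt(mask)
            peaks = peak_local_max(distance, min_distance=min_distance,
                                   labels=ndi.label(mask)[0])  # marker seeds
            markers = np.zeros(mask.shape, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            return watershed(-distance, markers, mask=mask)  # one scale only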

    Comparing computer-generated and pathologist-generated tumour segmentations for immunohistochemical scoring of breast tissue microarrays

    BACKGROUND: Tissue microarrays (TMAs) have become a valuable resource for biomarker expression in translational research. Immunohistochemical (IHC) assessment of TMAs is the principal method for analysing large numbers of patient samples, but manual IHC assessment of TMAs remains a challenging and laborious task. With advances in image analysis, computer-generated analyses of TMAs have the potential to lessen the burden of expert pathologist review. METHODS: In current commercial software, computerised oestrogen receptor (ER) scoring relies on tumour localisation in the form of hand-drawn annotations. In this study, tumour localisation for ER scoring was evaluated by comparing computer-generated segmentation masks with those of two specialist breast pathologists. Automatically and manually obtained segmentation masks were used to obtain IHC scores for thirty-two ER-stained invasive breast cancer TMA samples using FDA-approved IHC scoring software. RESULTS: Although pixel-level comparisons showed lower agreement between automated and manual segmentation masks (κ=0.81) than between pathologists' masks (κ=0.91), this had little impact on computed IHC scores (Allred: κ=0.91, Quickscore: κ=0.92). CONCLUSIONS: The proposed automated system provides consistent measurements, thus ensuring standardisation, and shows promise for increasing IHC analysis of nuclear staining in TMAs from large clinical trials.
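
    For reference, the pixel-level agreement statistic quoted above is Cohen's κ, which for two binary tumour masks can be computed as in this minimal sketch (function and variable names are illustrative):

        import numpy as np

        def cohens_kappa(mask_a, mask_b):
            a = np.ravel(mask_a).astype(bool)
            b = np.ravel(mask_b).astype(bool)
            po = np.mean(a == b)                        # observed agreement
            pe = (a.mean() * b.mean()                   # agreement expected
                  + (1 - a.mean()) * (1 - b.mean()))    # by chance
            return (po - pe) / (1 - pe)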

    A comprehensive multi-domain dataset for mitotic figure detection

    The prognostic value of mitotic figures in tumor tissue is well-established for many tumor types, and automating this task is of high research interest. However, especially deep learning-based methods face performance deterioration in the presence of domain shifts, which may arise from different tumor types, slide preparation and digitization devices. We introduce the MIDOG++ dataset, an extension of the MIDOG 2021 and 2022 challenge datasets. We provide region of interest images from 503 histological specimens of seven different tumor types with variable morphology: breast carcinoma, lung carcinoma, lymphosarcoma, neuroendocrine tumor, cutaneous mast cell tumor, cutaneous melanoma, and (sub)cutaneous soft tissue sarcoma, with labels for a total of 11,937 mitotic figures. The specimens were processed in several laboratories utilizing diverse scanners. We evaluated the extent of the domain shift by using state-of-the-art approaches, observing notable differences in single-domain training. In a leave-one-domain-out setting, generalizability improved considerably. This mitotic figure dataset is the first to incorporate a wide domain shift based on different tumor types, laboratories, whole slide image scanners, and species.
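
    A minimal sketch of the leave-one-domain-out protocol referred to above, assuming each case carries a domain label such as its tumor type (names are illustrative):

        from collections import defaultdict

        def leave_one_domain_out(cases):
            """cases: iterable of (case_id, domain) pairs -> list of splits."""
            by_domain = defaultdict(list)
            for case_id, domain in cases:
                by_domain[domain].append(case_id)
            splits = []
            for held_out in by_domain:
                train = [c for d, ids in by_domain.items()
                         if d != held_out for c in ids]
                splits.append({"test_domain": held_out, "train": train,
                               "test": by_domain[held_out]})
            return splits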

    Corneal Pachymetry by AS-OCT after Descemet's Membrane Endothelial Keratoplasty

    Corneal thickness (pachymetry) maps can be used to monitor restoration of corneal endothelial function, for example after Descemet's membrane endothelial keratoplasty (DMEK). Automated delineation of the corneal interfaces in anterior segment optical coherence tomography (AS-OCT) can be challenging for corneas that are irregularly shaped due to pathology, or as a consequence of surgery, leading to incorrect thickness measurements. In this research, deep learning is used to automatically delineate the corneal interfaces and measure corneal thickness with high accuracy in post-DMEK AS-OCT B-scans. Three different deep learning strategies were developed based on 960 B-scans from 50 patients. On an independent test set of 320 B-scans, corneal thickness could be measured with an error of 13.98 to 15.50 micrometers for the central 9 mm range, which is less than 3% of the average corneal thickness. The accurate thickness measurements were used to construct detailed pachymetry maps. Moreover, follow-up scans could be registered based on anatomical landmarks to obtain differential pachymetry maps. These maps may enable a more comprehensive understanding of the restoration of the endothelial function after DMEK, where thickness often varies throughout different regions of the cornea, and subsequently contribute to a standardized postoperative regime. Comment: Fixed typo in abstract: The development set consists of 960 B-scans from 50 patients (instead of 68). The B-scans from the other 18 patients were used for testing only.
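
    Once the two corneal interfaces have been delineated per A-scan, a thickness profile follows directly; a simplified sketch, assuming a known axial pixel spacing (the 4.6 micrometer value is a placeholder) and ignoring the perpendicular-distance correction a full pachymetry map would require:

        import numpy as np

        def thickness_profile(anterior, posterior, axial_res_um=4.6):
            """anterior/posterior: per-A-scan interface depths in pixels."""
            anterior = np.asarray(anterior, dtype=float)
            posterior = np.asarray(posterior, dtype=float)
            return (posterior - anterior) * axial_res_um  # micrometers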

    Assessment of algorithms for mitosis detection in breast cancer histopathology images

    The proliferative activity of breast tumors, which is routinely estimated by counting mitotic figures in hematoxylin and eosin stained histology sections, is considered to be one of the most important prognostic markers. However, mitosis counting is laborious, subjective and may suffer from low inter-observer agreement. With the wider acceptance of whole slide images in pathology labs, automatic image analysis has been proposed as a potential solution for these issues. In this paper, the results from the Assessment of Mitosis Detection Algorithms 2013 (AMIDA13) challenge are described. The challenge was based on a data set consisting of 12 training and 11 testing subjects, with more than one thousand mitotic figures annotated by multiple observers. Short descriptions and results from the evaluation of eleven methods are presented. The top performing method has an error rate that is comparable to the inter-observer agreement among pathologists.
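
    Evaluations of this kind typically match detections to ground-truth mitoses within a fixed distance and report precision, sensitivity and F1; a greedy-matching sketch follows, where the 30-pixel tolerance is an assumption rather than the AMIDA13 criterion.

        import numpy as np

        def f1_score(detections, ground_truth, max_dist=30.0):
            gt = [np.asarray(g, dtype=float) for g in ground_truth]
            matched = [False] * len(gt)
            tp = 0
            for det in detections:
                det = np.asarray(det, dtype=float)
                dists = [np.linalg.norm(det - g) for g in gt]
                if dists:
                    j = int(np.argmin(dists))
                    if dists[j] <= max_dist and not matched[j]:
                        matched[j] = True  # each mitosis matched at most once
                        tp += 1
            fp = len(detections) - tp
            fn = len(gt) - tp
            return 2 * tp / max(2 * tp + fp + fn, 1)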

    Direct Classification of Type 2 Diabetes From Retinal Fundus Images in a Population-based Sample From The Maastricht Study

    Type 2 Diabetes (T2D) is a chronic metabolic disorder that can lead to blindness and cardiovascular disease. Information about early stage T2D might be present in retinal fundus images, but to what extent these images can be used for a screening setting is still unknown. In this study, deep neural networks were employed to differentiate between fundus images from individuals with and without T2D. We investigated three methods to achieve high classification performance, measured by the area under the receiver operating characteristic curve (ROC-AUC). A multi-target learning approach that simultaneously outputs retinal biomarkers as well as T2D works best (AUC = 0.746 [±0.001]). Furthermore, the classification performance can be improved when images with high prediction uncertainty are referred to a specialist. We also show that the combination of images of the left and right eye per individual can further improve the classification performance (AUC = 0.758 [±0.003]), using a simple averaging approach. The results are promising, suggesting the feasibility of screening for T2D from retinal fundus images. Comment: to be published in the proceedings of SPIE - Medical Imaging 2020, 6 pages, 1 figure
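
    Two of the reported ingredients are straightforward to make concrete: averaging the per-eye probabilities of an individual, and referring the most uncertain predictions to a specialist. A sketch under assumed names and an assumed referral fraction:

        import numpy as np

        def combine_eyes(p_left, p_right):
            return 0.5 * (np.asarray(p_left) + np.asarray(p_right))

        def split_by_uncertainty(probs, referral_fraction=0.2):
            """Flag the most uncertain fraction of cases for specialist review."""
            probs = np.asarray(probs, dtype=float)
            uncertainty = 1.0 - 2.0 * np.abs(probs - 0.5)  # 0 certain, 1 uncertain
            cutoff = np.quantile(uncertainty, 1.0 - referral_fraction)
            refer = uncertainty >= cutoff
            return ~refer, refer  # (decide automatically, refer to specialist)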

    The Impact of Meat Intake on Bladder Cancer Incidence: Is It Really a Relevant Risk?

    Bladder cancer (BC) represents the second most common genitourinary malignancy. The major risk factors for BC include age, gender, smoking, occupational exposure, and infections. The etiology and pathogenesis of BC have not been fully defined yet. Since catabolites are excreted through the urinary tract, diet may play a pivotal role in bladder carcinogenesis. Meat, conventionally classified as "red", "white" or "processed", represents a significant risk factor for chronic diseases like cardiovascular disease, obesity, type 2 diabetes, and cancer. In particular, red and processed meat consumption seems to increase the risk of BC onset. The most widely accepted mechanism proposed to explain the correlation between meat intake and BC involves the generation of carcinogens, such as heterocyclic amines and polycyclic aromatic hydrocarbons, during high-temperature cooking. This evidence argues for limiting meat consumption. We reviewed the current literature on potential biological mechanisms underlying the impact of meat (red, white, and processed) intake on the increased risk of BC development and progression. Toward this purpose, we performed an online search on PubMed using the term "bladder cancer" in combination with "meat", "red meat", "white meat" or "processed meat". Although some studies did not report any association between BC and meat intake, several reports highlighted a positive correlation between red or processed meat intake, especially salami, pastrami, corned beef and bacon, and BC risk. We speculate that reducing, or at least carefully weighing, the consumption of red and processed meat may lower the risk of developing BC. This calls for future guidance on food education (which type of meat to prefer, how much red meat to eat, and how to cook it) to reduce the risk of developing BC. Further well-designed prospective studies are needed to corroborate these findings.

    Deep Learning for Detection and Localization of B-Lines in Lung Ultrasound

    Lung ultrasound (LUS) is an important imaging modality used by emergency physicians to assess pulmonary congestion at the patient bedside. B-line artifacts in LUS videos are key findings associated with pulmonary congestion. Not only can the interpretation of LUS be challenging for novice operators, but visual quantification of B-lines remains subject to observer variability. In this work, we investigate the strengths and weaknesses of multiple deep learning approaches for automated B-line detection and localization in LUS videos. We curate and publish BEDLUS, a new ultrasound dataset comprising 1,419 videos from 113 patients with a total of 15,755 expert-annotated B-lines. Based on this dataset, we present a benchmark of established deep learning methods applied to the task of B-line detection. To pave the way for interpretable quantification of B-lines, we propose a novel "single-point" approach to B-line localization using only the point of origin. Our results show that (a) the area under the receiver operating characteristic curve ranges from 0.864 to 0.955 for the benchmarked detection methods, (b) within this range, the best performance is achieved by models that leverage multiple successive frames as input, and (c) the proposed single-point approach for B-line localization reaches an F1-score of 0.65, performing on par with the inter-observer agreement. The dataset and developed methods can facilitate further biomedical research on automated interpretation of lung ultrasound, with the potential to expand its clinical utility. Comment: 10 pages, 4 figures
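
    One plausible way to score the "single-point" localization is to match predicted B-line origin points one-to-one to expert points within a tolerance and compute F1; the Hungarian matching and the 15-pixel tolerance below are assumptions, not the paper's exact protocol.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def point_f1(pred, gt, tol=15.0):
            pred = np.asarray(pred, dtype=float).reshape(-1, 2)
            gt = np.asarray(gt, dtype=float).reshape(-1, 2)
            if len(pred) == 0 or len(gt) == 0:
                return 0.0
            cost = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
            rows, cols = linear_sum_assignment(cost)  # one-to-one matching
            tp = int(np.sum(cost[rows, cols] <= tol))
            fp, fn = len(pred) - tp, len(gt) - tp
            return 2 * tp / (2 * tp + fp + fn)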