
    Predicting breast tumor proliferation from whole-slide images: the TUPAC16 challenge

    Tumor proliferation is an important biomarker indicative of the prognosis of breast cancer patients. Assessment of tumor proliferation in a clinical setting is a highly subjective and labor-intensive task. Previous efforts to automate tumor proliferation assessment by image analysis focused only on mitosis detection in predefined tumor regions. However, in a real-world scenario, automatic mitosis detection should be performed in whole-slide images (WSIs), and an automatic method should be able to produce a tumor proliferation score given a WSI as input. To address this, we organized the TUmor Proliferation Assessment Challenge 2016 (TUPAC16) on predicting tumor proliferation scores from WSIs. The challenge dataset consisted of 500 training and 321 testing breast cancer histopathology WSIs. To ensure fair and independent evaluation, only the ground truth for the training dataset was provided to the challenge participants. The first task of the challenge was to predict mitotic scores, i.e., to reproduce the manual method of assessing tumor proliferation by a pathologist. The second task was to predict the gene-expression-based PAM50 proliferation scores from the WSI. The best-performing automatic method for the first task achieved a quadratically weighted Cohen's kappa of κ = 0.567, 95% CI [0.464, 0.671], between the predicted scores and the ground truth. For the second task, the predictions of the top method had a Spearman's correlation coefficient of r = 0.617, 95% CI [0.581, 0.651], with the ground truth. This was the first comparison study to investigate tumor proliferation assessment from WSIs. The achieved results are promising given the difficulty of the tasks and the weakly labeled nature of the ground truth. However, further research is needed to improve the practical utility of image analysis methods for this task.
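    Both challenge tasks were scored with standard rank-aware metrics. Below is a minimal sketch, assuming scikit-learn and SciPy, of how the two reported metrics can be computed; the score arrays are made-up placeholders, not challenge data.

```python
# Sketch of the two TUPAC16 evaluation metrics; placeholder inputs only.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import spearmanr

# Hypothetical per-slide scores: ordinal mitotic grades 1-3 (Task 1)
# and continuous PAM50 proliferation scores (Task 2).
y_true_grade = np.array([1, 2, 3, 2, 1, 3, 2])
y_pred_grade = np.array([1, 2, 2, 2, 1, 3, 3])

y_true_pam50 = np.array([0.12, 0.55, 0.91, 0.47, 0.08, 0.83, 0.60])
y_pred_pam50 = np.array([0.20, 0.51, 0.75, 0.40, 0.15, 0.88, 0.52])

# Task 1: quadratically weighted Cohen's kappa, which penalizes a
# prediction more the further it falls from the true ordinal grade.
kappa = cohen_kappa_score(y_true_grade, y_pred_grade, weights="quadratic")

# Task 2: Spearman's rank correlation between predicted and true scores.
rho, p_value = spearmanr(y_true_pam50, y_pred_pam50)

print(f"quadratically weighted kappa = {kappa:.3f}")
print(f"Spearman r = {rho:.3f} (p = {p_value:.3f})")
```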

    Deep Learning for Detection and Segmentation in High-Content Microscopy Images

    High-content microscopy has led to many advances in biology and medicine. This fast-emerging technology is transforming cell biology into a big-data-driven science, and computer vision methods are used to automate the analysis of microscopy image data. In recent years, deep learning has become popular and achieved major success in computer vision. However, most available methods were developed to process natural images, and compared to natural images, microscopy images pose domain-specific challenges such as small training datasets, clustered objects, and class imbalance. In this thesis, new deep learning methods for object detection and cell segmentation in microscopy images are introduced.

    For particle detection in fluorescence microscopy images, a deep learning method based on a domain-adapted Deconvolution Network is presented. In addition, a method for mitotic cell detection in heterogeneous histopathology images is proposed, which combines a deep residual network with Hough voting; this method is used for grading whole-slide histology images of breast carcinoma. Moreover, a method for both particle detection and cell detection based on object centroids is introduced, which is trainable end-to-end. It comprises a novel Centroid Proposal Network, a layer for ensembling detection hypotheses over image scales and anchors, an anchor regularization scheme which favours prior anchors over regressed locations, and an improved algorithm for Non-Maximum Suppression. Furthermore, a novel loss function based on Normalized Mutual Information is proposed, which can cope with strong class imbalance and is derived within a Bayesian framework.

    For cell segmentation, a deep neural network with an increased receptive field to capture rich semantic information is introduced. Moreover, a deep neural network is proposed which combines the multi-scale feature aggregation of Convolutional Neural Networks with the iterative refinement of Recurrent Neural Networks. To increase the robustness of training and improve segmentation, a novel focal loss function is presented.

    In addition, a framework for black-box hyperparameter optimization of biomedical image analysis pipelines is proposed. The framework has a modular architecture that separates hyperparameter sampling from hyperparameter optimization, and a visualization of the loss function based on infimum projections is suggested to gain further insight into the optimization problem. Also presented are a transfer learning approach which uses only one color channel for pre-training and performs fine-tuning on more color channels, and an approach for unsupervised domain adaptation of histopathological slides. Finally, Galaxy Image Analysis, a platform for web-based microscopy image analysis, is presented; Galaxy Image Analysis workflows have been developed for cell segmentation in cell cultures, particle detection in mouse brain tissue, and MALDI/H&E image registration.

    The proposed methods were applied to challenging synthetic as well as real microscopy image data from various microscopy modalities and yield state-of-the-art or improved results. The methods were benchmarked in international image analysis challenges and used in various cooperation projects with biomedical researchers.
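    Among the building blocks named above, standard greedy Non-Maximum Suppression is the baseline that the thesis's improved NMS algorithm departs from. A minimal NumPy sketch of that baseline (not the thesis's variant) follows; the boxes, scores, and IoU threshold are illustrative.

```python
# Standard greedy NMS baseline; not the thesis's improved variant.
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5):
    """boxes: (N, 4) as [x1, y1, x2, y2]; returns indices of kept boxes."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]       # highest-scoring detections first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the kept box with all remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop boxes that overlap the kept detection above the threshold.
        order = order[1:][iou <= iou_thresh]
    return keep

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # -> [0, 2]: the overlapping second box is suppressed
```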

    Mitosis Detection, Fast and Slow: Robust and Efficient Detection of Mitotic Figures

    Counting of mitotic figures is a fundamental step in the grading and prognostication of several cancers. However, manual mitosis counting is tedious and time-consuming. In addition, variation in the appearance of mitotic figures causes a high degree of discordance among pathologists. With advances in deep learning models, several automatic mitosis detection algorithms have been proposed, but they are sensitive to the domain shift often seen in histology images. We propose a robust and efficient two-stage mitosis detection framework, which comprises mitosis candidate segmentation ("Detecting Fast") and candidate refinement ("Detecting Slow") stages. The proposed candidate segmentation model, termed EUNet, is fast and accurate due to its architectural design. EUNet can precisely segment candidates at a lower resolution to considerably speed up candidate detection. Candidates are then refined using a deeper classifier network, EfficientNet-B7, in the second stage. We make both stages robust against domain shift by incorporating domain generalization methods. We demonstrate state-of-the-art performance and generalizability of the proposed model on the three largest publicly available mitosis datasets, winning the two mitosis domain generalization challenge contests (MIDOG21 and MIDOG22). Finally, we showcase the utility of the proposed algorithm by processing the TCGA breast cancer cohort (1,125 whole-slide images) to generate and release a repository of more than 620K mitotic figures.
    Comment: Extended version of the work done for the MIDOG challenge submission.
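    The two-stage pattern described here, fast low-resolution candidate segmentation followed by slower per-candidate classification, can be sketched as follows. This is an illustrative skeleton only: the probability map stands in for EUNet's output and `refine_score` for EfficientNet-B7, neither of which is reproduced here, and the map is assumed to be at image resolution (the real system would rescale coordinates).

```python
# Illustrative two-stage detection skeleton; stand-in models only.
import numpy as np
from scipy import ndimage

def detect_fast(prob_map: np.ndarray, thresh: float = 0.5):
    """Stage 1: threshold the segmentation map, return candidate centroids."""
    mask = prob_map > thresh
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))

def refine_score(patch: np.ndarray) -> float:
    """Stage 2 stub: a real system would run a deep classifier on the patch."""
    return float(patch.mean())

def detect(prob_map: np.ndarray, image: np.ndarray, patch: int = 32):
    detections = []
    for (r, c) in detect_fast(prob_map):
        r, c = int(r), int(c)
        crop = image[max(r - patch // 2, 0): r + patch // 2,
                     max(c - patch // 2, 0): c + patch // 2]
        if refine_score(crop) > 0.5:         # keep only refined candidates
            detections.append((r, c))
    return detections

img = np.full((128, 128), 0.8)               # dummy image intensities
prob = np.zeros((128, 128))
prob[30:34, 40:44] = 0.9                     # one synthetic candidate blob
print(detect(prob, img))                     # -> [(31, 41)]
```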

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become the methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.
    Comment: Revised survey includes an expanded discussion section and a reworked introductory section on common deep architectures. Added missed papers from before Feb 1st, 2017.

    Computer-aided Cytological Grading Systems for Fine Needle Aspiration Biopsies of Breast Cancer

    According to the American Cancer Society, breast cancer is the world's most commonly diagnosed and deadliest form of cancer in women. Major determinants of the survival rate in breast cancer patients are the accuracy and speed of malignancy grade determination. This thesis considers the classification problem of determining the grade of a malignant tumor accurately and efficiently. A Fine Needle Aspiration (FNA) biopsy is a key mechanism for breast cancer diagnosis as well as for assigning grades to malignant cases. Carrying out a manual examination of an FNA demands substantial work from the pathologist, which may result in delays, human errors, and consequently misclassified grades. In this context, the most common grading system for microscopic imaging of breast cancer is the Bloom and Richardson (BR) histological grading system, which is based on the evaluation of tissues and cells. BR is not directly applicable to FNA biopsy slides due to the distortion of tissue and even cell structures on cytological slides. Therefore, in this thesis, to grade FNA images of breast cancer, six known cytological grading schemes, three newly proposed cytological grading schemes, and five grading systems based on convolutional neural networks were used instead of the BR grading scheme to automatically determine the malignancy grade of breast cancer.

    First, considering traditional machine learning methods, six computer-aided cytological grading systems (CA-CGSs) based on six cytological schemes used by pathologists for FNA biopsies of breast cancer were proposed to grade tumors. Each system was built using the cytological criteria proposed in the original CGSs. The six cytological grading schemes considered in this thesis were Fisher's modification of Black's nuclear grading, Mouriquand's grading, Robinson's grading, Taniguchi et al.'s, Khan et al.'s, and Howell's modification of the mitosis count criteria. To fulfill this task, different sets of handcrafted features were extracted for classification using customized image processing algorithms. The proposed systems were able to efficiently classify FNA slides into G2 (moderately malignant) or G3 (highly malignant) cases using traditional machine learning algorithms. Additionally, three new cytological grading systems were proposed by augmenting three of the original CGSs with low-magnification features. However, the systems were not sensitive enough with regard to G3 cases due to the low number of available data samples, so data balancing was performed to improve the sensitivity for G3 cases.

    Consequently, in the second objective of this work, data sampling and RUSBoost methods were applied to the datasets to adjust the class distribution and boost the sensitivity of the proposed systems; a sketch of this rebalancing step is given below. This enabled a sensitivity improvement of up to 30%, which highlights the significance of class balancing in the task of malignancy grading of breast cancer. Additionally, because handcrafted-feature-based cytological grading systems require considerable time and effort to achieve efficient feature engineering, a deep learning (DL) approach was proposed to avoid these challenges without compromising grading accuracy.
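    A minimal sketch of the rebalancing step described above, assuming the imbalanced-learn library rather than the thesis's own code; `X` and `y` are placeholders for the handcrafted FNA feature vectors and G2/G3 labels.

```python
# Class rebalancing via oversampling and RUSBoost; placeholder data only.
import numpy as np
from imblearn.over_sampling import RandomOverSampler
from imblearn.ensemble import RUSBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))            # placeholder handcrafted features
y = np.array([2] * 85 + [3] * 15)        # imbalanced G2/G3 grade labels

# Option 1: oversample the minority G3 class before training any model.
X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X, y)
print(np.bincount(y_bal)[2:])            # classes now balanced

# Option 2: RUSBoost, which combines random undersampling with boosting.
clf = RUSBoostClassifier(random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```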
    Thus, in this thesis, five different pre-trained convolutional neural network (CNN) models, namely GoogleNet Inception-v3, AlexNet, ResNet18, ResNet50, and ResNet101, combined with different techniques for dealing with unbalanced data, were used to develop automated computer-aided cytological malignancy grading systems (CNN-CMGSs). According to the obtained results, the proposed CNN-CMGS based on GoogleNet Inception-v3 combined with the oversampling method provides the best accuracy for the problem at hand. The results demonstrated that the proposed CGSs are highly correlated, since they share some of the cytological criteria. Further, the CGSs achieve roughly the same overall accuracy, and the handcrafted-feature-based CGSs performed best even in the absence of class distribution rebalancing. Overall, for case classification, the best results were obtained for computer-aided CGSs based on the modified Khan et al.'s and Robinson's schemes, with accuracies of 97.77% and 97.28%, respectively. Meanwhile, for patient classification, the overall best results were obtained for computer-aided CGSs based on the modified Khan et al.'s and modified Fisher's schemes, with accuracies of 96.50% and 95.71%, respectively. These results surpass previously reported results in the literature for computer-aided CGSs based on BR histologic grading. Moreover, in clinical practice, Robinson's scheme typically has the best diagnostic accuracy, with the highest reported experimental accuracy rate of 90%. Thus, the obtained results demonstrate that computer-aided breast cancer cytological grading systems using FNA can potentially achieve accuracy rates comparable to the more invasive histopathological BR method.
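    The CNN grading systems follow the standard transfer-learning recipe: take an ImageNet-pretrained backbone, replace the classification head with a two-class (G2/G3) layer, and fine-tune on FNA patches. A minimal PyTorch/torchvision sketch of that recipe follows, shown with ResNet18 (one of the five backbones named above); the dummy batch and hyperparameters are placeholders, not the thesis's settings.

```python
# Fine-tuning an ImageNet-pretrained CNN for two-class grading; sketch only.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # two grades: G2 vs. G3

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB patches.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 0])             # 0 = G2, 1 = G3
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss = {loss.item():.3f}")
```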