
    Nuclei & Glands Instance Segmentation in Histology Images: A Narrative Review

    Instance segmentation of nuclei and glands in histology images is an important step in the computational pathology workflow for cancer diagnosis, treatment planning and survival analysis. With the advent of modern hardware, the recent availability of large-scale, high-quality public datasets and community-organized grand challenges have led to a surge in automated methods addressing domain-specific challenges, which is pivotal for technological advancement and clinical translation. In this survey, 126 papers on AI-based methods for nuclei and glands instance segmentation published in the last five years (2017-2022) are analyzed in depth, and the limitations of current approaches and the open challenges are discussed. Moreover, potential future research directions are presented and the contributions of state-of-the-art methods are summarized. Further, a generalized summary of publicly available datasets and detailed insights into the grand challenges, illustrating the top-performing methods specific to each challenge, are also provided. We intend to give the reader the current state of existing research and pointers to future directions for developing methods that can be used in clinical practice, enabling improved diagnosis, grading, prognosis, and treatment planning of cancer. To the best of our knowledge, no previous work has reviewed instance segmentation in histology images with this focus. Comment: 60 pages, 14 figures

    Using spatial-temporal ensembles of convolutional neural networks for lumen segmentation in ureteroscopy

    Purpose: Ureteroscopy is an efficient endoscopic minimally invasive technique for the diagnosis and treatment of upper tract urothelial carcinoma (UTUC). During ureteroscopy, automatic segmentation of the hollow lumen is of primary importance, since it indicates the path the endoscope should follow. In order to obtain an accurate segmentation of the hollow lumen, this paper presents an automatic method based on Convolutional Neural Networks (CNNs). Methods: The proposed method is based on an ensemble of 4 parallel CNNs that simultaneously process single- and multi-frame information. Of these, two architectures are taken as core models, namely a U-Net based on residual blocks (m_1) and Mask-RCNN (m_2), which are fed with single still frames I(t). The other two models (M_1, M_2) are modifications of the former, consisting of an additional stage that uses 3D convolutions to process temporal information. M_1 and M_2 are fed with triplets of frames (I(t-1), I(t), I(t+1)) to produce the segmentation for I(t). Results: The proposed method was evaluated on a custom dataset of 11 videos (2,673 frames) collected and manually annotated from 6 patients. We obtain a Dice similarity coefficient of 0.80, outperforming previous state-of-the-art methods. Conclusion: The obtained results show that spatial-temporal information can be effectively exploited by the ensemble model to improve hollow lumen segmentation in ureteroscopic images. The method is also effective in the presence of poor visibility, occasional bleeding, or specular reflections.
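    The abstract's core design can be illustrated with a small sketch: a 3D-convolution stage condenses a triplet of frames into a single feature map that a 2D segmentation backbone can consume, and the ensemble averages the probability maps of its members. The sketch below (PyTorch) uses illustrative module and variable names that are assumptions, not the authors' code.

```python
# Minimal sketch of the spatial-temporal ensemble idea described above.
import torch
import torch.nn as nn

class TemporalStage(nn.Module):
    """Collapse a 3-frame clip (B, C, 3, H, W) into a 2D tensor (B, C, H, W)."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=(3, 3, 3), padding=(0, 1, 1)),
            nn.ReLU(inplace=True),
        )

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # (B, C, 3, H, W) -> (B, C, 1, H, W) -> (B, C, H, W)
        return self.conv3d(clip).squeeze(2)

class SpatioTemporalModel(nn.Module):
    """Temporal stage followed by any single-frame segmentation backbone."""
    def __init__(self, backbone: nn.Module, channels: int = 1):
        super().__init__()
        self.temporal = TemporalStage(channels)
        self.backbone = backbone  # e.g. a residual U-Net producing logits

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        return self.backbone(self.temporal(clip))

def ensemble_predict(models, inputs) -> torch.Tensor:
    """Average the sigmoid probability maps of all ensemble members.

    `inputs` pairs each model with its expected input: a single frame for the
    2D members, a 3-frame clip for the spatio-temporal ones.
    """
    probs = [torch.sigmoid(m(x)) for m, x in zip(models, inputs)]
    return torch.stack(probs, dim=0).mean(dim=0)
```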

    Active Learning Pipeline for Brain Mapping in a High Performance Computing Environment

    This paper describes a scalable active learning pipeline prototype for large-scale brain mapping that leverages high-performance computing power. It enables high-throughput evaluation of algorithm results, which, after human review, are used for iterative machine learning model training. Image processing and machine learning are performed in a batch layer. Benchmark testing of image processing using pMATLAB shows that a 100× (10,000%) increase in throughput can be achieved while total processing time increases by only 9% on Xeon-G6 CPUs and by 22% on Xeon-E5 CPUs, indicating robust scalability. The images and algorithm results are provided through a serving layer to a browser-based user interface for interactive review. This pipeline has the potential to greatly reduce the manual annotation burden and improve the overall performance of machine learning-based brain mapping. Comment: 6 pages, 5 figures, submitted to IEEE HPEC 2020 proceedings
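    A minimal sketch of the iterative loop described above may help: batch inference over a shard of images, ranking of results by model confidence so that human review effort goes to the least certain outputs, and retraining on the corrected annotations. All function names below are hypothetical placeholders, not the pipeline's actual API.

```python
# Hedged sketch of one active-learning round: predict, review the least
# confident results, retrain. Callables are injected so the loop stays generic.
from typing import Callable, List, Tuple

def active_learning_round(
    model,
    unlabeled_batch: List[str],                               # paths to image tiles
    predict: Callable[[object, str], Tuple[object, float]],   # -> (mask, confidence)
    review: Callable[[str, object], object],                  # human-corrected mask
    retrain: Callable[[object, List[Tuple[str, object]]], object],
    review_budget: int = 100,
):
    # Batch layer: run inference on every tile and keep the confidence score.
    scored = [(path, *predict(model, path)) for path in unlabeled_batch]

    # Serving layer: send the lowest-confidence results to the reviewers first.
    scored.sort(key=lambda item: item[2])
    to_review = scored[:review_budget]
    corrected = [(path, review(path, mask)) for path, mask, _ in to_review]

    # Close the loop: retrain on the newly corrected annotations.
    return retrain(model, corrected)
```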

    CPP-Net: Context-aware Polygon Proposal Network for Nucleus Segmentation

    Nucleus segmentation is a challenging task due to the crowded distribution and blurry boundaries of nuclei. Recent approaches represent nuclei by means of polygons to differentiate between touching and overlapping nuclei and have accordingly achieved promising performance. Each polygon is represented by a set of centroid-to-boundary distances, which are in turn predicted from features of the centroid pixel of a single nucleus. However, using the centroid pixel alone does not provide sufficient contextual information for robust prediction. To handle this problem, we propose a Context-aware Polygon Proposal Network (CPP-Net) for nucleus segmentation. First, we sample a point set rather than one single pixel within each cell for distance prediction. This strategy substantially enhances contextual information and thereby improves the robustness of the prediction. Second, we propose a Confidence-based Weighting Module, which adaptively fuses the predictions from the sampled point set. Third, we introduce a novel Shape-Aware Perceptual (SAP) loss that constrains the shape of the predicted polygons. Here, the SAP loss is based on an additional network that is pre-trained by means of mapping the centroid probability map and the pixel-to-boundary distance maps to a different nucleus representation. Extensive experiments justify the effectiveness of each component in the proposed CPP-Net. Finally, CPP-Net is found to achieve state-of-the-art performance on three publicly available databases, namely DSB2018, BBBC06, and PanNuke. The code of this paper will be released.
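    The confidence-based fusion idea can be sketched for a single ray of a star-convex polygon: each sampled point along the ray contributes its own estimate of the centroid-to-boundary distance (its offset from the centroid plus the residual distance it predicts), and the estimates are combined with softmax-normalized confidence weights. Names and shapes below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of confidence-weighted fusion along one ray.
import torch

def fuse_distances(point_distances: torch.Tensor,
                   point_offsets: torch.Tensor,
                   confidences: torch.Tensor) -> torch.Tensor:
    """
    point_distances: (N,) residual boundary distance predicted at each sampled point
    point_offsets:   (N,) distance of each sampled point from the centroid along the ray
    confidences:     (N,) unnormalized confidence score for each sampled point
    Returns a single fused centroid-to-boundary distance.
    """
    # Each sampled point's estimate of the full centroid-to-boundary distance.
    estimates = point_offsets + point_distances
    # Confidence-based weighting: softmax turns scores into convex weights.
    weights = torch.softmax(confidences, dim=0)
    return (weights * estimates).sum()

# Example: three points sampled along one ray of a star-convex polygon.
d = fuse_distances(torch.tensor([9.0, 6.5, 3.8]),
                   torch.tensor([0.0, 3.0, 6.0]),
                   torch.tensor([0.2, 1.1, 0.7]))
```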

    Detection and segmentation of macrophages in Quantitative Phase Images by Deep Learning using a Mask Region-based Convolutional Neural Network

    Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science. Quantitative Phase Imaging (QPI) has been demonstrated to be a versatile tool for minimally invasive, label-free imaging of biological specimens and time-resolved cellular analysis. RAW 264.7 mouse macrophages were imaged by Digital Holographic Microscopy (DHM), an interferometry-based variant of QPI, in toxicological studies and cellular growth experiments. Robust detection and segmentation of cells in QPI images by Deep Learning facilitates automated data evaluation in high-throughput microscopy. Detection, segmentation and the subsequent analysis of single cellular specimens in QPI images yield essential toxicity-related physical parameters, such as dry mass, at the single-cell level. Deep Learning models such as the Mask Region-based Convolutional Neural Network (Mask R-CNN) have been proven to achieve robust results for object detection in fluorescence microscopy images. Thus, a Mask R-CNN was applied with the aim of obtaining deeper cellular knowledge from DHM QPI images. This work shows that the combination of label-free DHM and a state-of-the-art Deep Learning model achieves reliable machine-generated data at the single-cell level and promises to enhance both the amount and the quality of physical data that can be extracted from QPI images in biomedical experiments and label-free high-throughput microscopy.
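    The downstream single-cell measurement mentioned in the abstract can be sketched directly: given an instance mask (e.g. from a Mask R-CNN) and the corresponding quantitative phase map, the dry mass of one cell follows the standard relation m = λ/(2πα) · ∫ φ dA. The numeric defaults below (wavelength, pixel area, specific refraction increment) are typical literature values and assumptions, not those of this dissertation.

```python
# Hedged sketch: integrate the phase over one segmented cell and convert the
# optical volume to dry mass in picograms.
import numpy as np

def cell_dry_mass_pg(phase_rad: np.ndarray,
                     cell_mask: np.ndarray,
                     wavelength_um: float = 0.532,
                     pixel_area_um2: float = 0.25,
                     alpha_um3_per_pg: float = 0.19) -> float:
    """Dry mass [pg] of one cell from its phase map [rad] and binary mask."""
    phase_sum = float(phase_rad[cell_mask > 0].sum())                            # rad
    optical_volume = wavelength_um / (2.0 * np.pi) * phase_sum * pixel_area_um2  # um^3
    return optical_volume / alpha_um3_per_pg                                     # pg
```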

    CoNIC Challenge: Pushing the Frontiers of Nuclear Detection, Segmentation, Classification and Counting

    Nuclear detection, segmentation and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, the associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state-of-the-art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.
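    The cellular-composition quantity that the counting task targets can be sketched as a small helper: given an instance map (one integer label per nucleus) and a class map, count how many nuclei of each type appear in a patch. The class names follow the six CoNIC categories; the helper itself is an illustrative assumption, not challenge code.

```python
# Hedged sketch: per-class nuclear counts (cellular composition) for one patch.
import numpy as np
from collections import Counter

CONIC_CLASSES = ["neutrophil", "epithelial", "lymphocyte",
                 "plasma", "eosinophil", "connective"]

def cellular_composition(instance_map: np.ndarray, class_map: np.ndarray) -> dict:
    """Return {class name: number of nuclei} for one image patch."""
    counts = Counter()
    for inst_id in np.unique(instance_map):
        if inst_id == 0:                       # 0 is background
            continue
        pixels = class_map[instance_map == inst_id]
        # Assign the nucleus the majority class label of its pixels (1-indexed).
        majority = int(np.bincount(pixels[pixels > 0]).argmax())
        counts[CONIC_CLASSES[majority - 1]] += 1
    return dict(counts)
```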