Nuclei & Glands Instance Segmentation in Histology Images: A Narrative Review
Instance segmentation of nuclei and glands in histology images is an important step in the computational pathology workflow for cancer diagnosis, treatment planning, and survival analysis. With the advent of modern hardware, the recent availability of large-scale, high-quality public datasets, and community-organized grand challenges, there has been a surge in automated methods addressing domain-specific problems, which is pivotal for technology advancement and clinical translation. In this survey, 126 papers on AI-based methods for nuclei and glands instance segmentation published in the last five years (2017-2022) are analyzed in depth, and the limitations of current approaches and the open challenges are discussed. Moreover, potential future research directions are presented and the contributions of state-of-the-art methods are summarized. Further, a generalized summary of publicly available datasets and detailed insights into the grand challenges, highlighting the top-performing methods for each challenge, are also provided. In addition, we aim to give the reader the current state of existing research and pointers to future directions for developing methods that can be used in clinical practice, enabling improved diagnosis, grading, prognosis, and treatment planning of cancer. To the best of our knowledge, no previous work has reviewed instance segmentation in histology images with this focus.
Comment: 60 pages, 14 figures
Fully automated convolutional neural network-based affine algorithm improves liver registration and lesion co-localization on hepatobiliary phase T1-weighted MR images.
Background: Liver alignment between series/exams is challenged by dynamic morphology or variability in patient positioning or motion. Image registration can improve image interpretation and lesion co-localization. We assessed the performance of a convolutional neural network algorithm to register cross-sectional liver imaging series and compared its performance to manual image registration.
Methods: Three hundred fourteen patients, including internal and external datasets, who underwent gadoxetate disodium-enhanced magnetic resonance imaging for clinical care from 2011 to 2018, were retrospectively selected. Automated registration was applied to all 2,663 within-patient series pairs derived from these datasets. Additionally, 100 within-patient series pairs from the internal dataset were independently manually registered by expert readers. Liver overlap, image correlation, and intra-observation distances for manual versus automated registrations were compared using paired t tests. The influence of patient demographics, imaging characteristics, and liver uptake function was evaluated using univariate and multivariate mixed models.
Results: Compared to manual registration, automated registration produced significantly lower intra-observation distance (p < 0.001) and higher liver overlap and image correlation (p < 0.001). Intra-exam automated registration achieved 0.88 mean liver overlap and 0.44 mean image correlation for the internal dataset and 0.91 and 0.41, respectively, for the external dataset. For inter-exam registration, mean overlap was 0.81 and image correlation 0.41. Older age, female sex, greater inter-series time interval, differing uptake, and greater voxel size differences independently reduced automated registration performance (p ≤ 0.020).
Conclusion: A fully automated algorithm accurately registered the liver within and between examinations, yielding better liver and focal observation co-localization compared to manual registration.
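The "image correlation" metric reported above can be illustrated with a minimal sketch, taken here as the Pearson correlation of paired voxel intensities from the fixed and registered moving images; the abstract does not give the exact definition, so this is an assumption for illustration only.

```python
# Hedged sketch: Pearson correlation of two intensity images,
# given as flat lists of paired voxel values. The paper's exact
# implementation of "image correlation" may differ.

def image_correlation(img_a, img_b):
    """Pearson correlation coefficient of paired intensities."""
    n = len(img_a)
    mean_a = sum(img_a) / n
    mean_b = sum(img_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(img_a, img_b))
    var_a = sum((a - mean_a) ** 2 for a in img_a)
    var_b = sum((b - mean_b) ** 2 for b in img_b)
    return cov / (var_a * var_b) ** 0.5

# Perfectly aligned identical images correlate at 1.0.
print(image_correlation([10, 20, 30, 40], [10, 20, 30, 40]))  # 1.0
```

A mean correlation around 0.4, as reported, reflects that even well-registered series differ in contrast phase and intensity, not only in alignment.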
Deep Learning Techniques for Liver Tumor Recognition in Ultrasound Images
Cancer is one of the most severe diseases of our time, so detecting tumors in a non-invasive and accurate manner is a challenging subject. Among these tumors, liver cancer is one of the most dangerous and also very common. Hepatocellular Carcinoma (HCC) is the most frequent malignant liver tumor. The gold standard for diagnosing HCC is the biopsy, which is, however, invasive and risky, potentially leading to infection or to spreading of the tumor through the body. We develop computerized techniques for abdominal tumor recognition in medical images. Initially, traditional texture-based methods were employed for this purpose, including both classical texture analysis methods and advanced, original texture analysis techniques based on higher-order statistics. The higher-order Gray-Level Co-occurrence Matrix (GLCM), as well as the Textural Microstructure Co-occurrence Matrices (TMCM), were employed and assessed. More recently, deep learning techniques based on Convolutional Neural Networks (CNN), their fusion with the conventional techniques, and their combinations among themselves were assessed in this field. We present the most relevant aspects of this study in the current paper.
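The classical (second-order) GLCM mentioned above can be sketched in a few lines: count how often each pair of gray levels co-occurs at a fixed pixel offset, then derive Haralick-style features from the counts. The higher-order variants used in the paper are not shown; this is a minimal illustration only.

```python
# Hedged sketch of a second-order Gray-Level Co-occurrence Matrix
# (GLCM) for a single offset (dx, dy), plus the Haralick contrast
# feature. Pure Python, illustrative only.

def glcm(image, levels, dx=1, dy=0):
    """Co-occurrence counts for pixel pairs at offset (dx, dy).

    `image` is a list of rows of integer gray levels in [0, levels).
    Returns a levels x levels matrix of pair counts.
    """
    m = [[0] * levels for _ in range(levels)]
    for y, row in enumerate(image):
        for x, g in enumerate(row):
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(image) and 0 <= nx < len(image[ny]):
                m[g][image[ny][nx]] += 1
    return m

def contrast(m):
    """Haralick contrast: sum of (i - j)^2 weighted by P(i, j)."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * v / total
               for i, row in enumerate(m) for j, v in enumerate(row))

img = [[0, 0, 1],
       [0, 1, 2],
       [2, 2, 2]]
m = glcm(img, 3)
print(contrast(m))  # 0.5
```

Texture features such as contrast, energy, and homogeneity computed from such matrices were the standard inputs to classifiers before CNN-based approaches.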
Deep learning-based instance segmentation for the precise automated quantification of digital breast cancer immunohistochemistry images
This is the accepted, peer-reviewed version of the article, made available after the 24-month embargo; it does not reflect post-acceptance improvements or corrections. The published version is available online (2022-01-14) at: https://doi.org/10.1016/j.eswa.2021.116471.
The quantification of biomarkers on immunohistochemistry breast cancer images is essential for defining appropriate therapy for breast cancer patients, as well as for extracting relevant information on disease prognosis. This is an arduous and time-consuming task that may introduce bias into the results due to intra- and inter-observer variability, which could be alleviated by making use of automatic quantification tools. However, this is not a simple processing task, given the heterogeneity of breast tumors, which results in non-uniformly distributed tumor cells exhibiting different staining colors and intensity, size, shape, and texture of the nucleus, cytoplasm, and membrane.
In this research work we demonstrate the feasibility of using a deep learning-based instance segmentation architecture for the automatic quantification of both nuclear and membrane biomarkers applied to IHC-stained slides. We have solved the cumbersome task of training set generation with the design and implementation of a web platform, which has served as a hub for communication and feedback between researchers and pathologists, as well as a system for the validation of the automatic image processing models. Through this tool, we have collected annotations over samples of HE, ER, and Ki-67 (nuclear biomarkers) and HER2 (membrane biomarker) IHC-stained images. Using the same deep learning network architecture, we have trained two models, so-called nuclei- and membrane-aware segmentation models, which, once successfully validated, have proven to be a promising method for segmenting nuclei instances in IHC-stained images. The quantification method proposed in this work has been integrated into the developed web platform and is currently being used as a decision support tool by pathologists.
Computational Pathology: A Survey Review and The Way Forward
Computational Pathology (CPath) is an interdisciplinary science that applies computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer that are mainly addressed by CPath tools. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends being taken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges of such a multidisciplinary science. We overview this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical developments and clinical integration of CPath (https://github.com/AtlasAnalyticsLab/CPath_Survey).
Comment: Accepted in Elsevier Journal of Pathology Informatics (JPI) 202
Artificial intelligence in gastroenterology: a state-of-the-art review
The development of artificial intelligence (AI) has increased dramatically in the last 20 years, with clinical applications progressively being explored for most medical specialties. The field of gastroenterology and hepatology, substantially reliant on vast amounts of imaging studies, is not an exception. The clinical applications of AI systems in this field include the identification of premalignant or malignant lesions (e.g., identification of dysplasia or esophageal adenocarcinoma in Barrett's esophagus, pancreatic malignancies), detection of lesions (e.g., polyp identification and classification, small-bowel bleeding lesions on capsule endoscopy, pancreatic cystic lesions), development of objective scoring systems for risk stratification, prediction of disease prognosis or treatment response (e.g., determining survival in patients after resection of hepatocellular carcinoma, or determining which patients with inflammatory bowel disease (IBD) will benefit from biologic therapy), and evaluation of metrics such as bowel preparation score or quality of endoscopic examination. The objective of this comprehensive review is to analyze the available AI-related studies pertaining to the entirety of the gastrointestinal tract, including the upper, middle and lower tracts; IBD; the hepatobiliary system; and the pancreas, discussing the findings and clinical applications, as well as outlining the current limitations and future directions in this field.
Cellular mechanisms in basic and clinical gastroenterology and hepatology
Modeling Single Cell Properties from Histological Images
In modern pathology, digitized images of histological sections are routinely used to define phenotypes and characteristics for different areas of the tissue. Digital images are created from stained histological slides by specialized scanners and are further analyzed with a computer. Traditionally, this type of digital pathology analysis is limited to analyzing the tissue section in local patches or at the sub-sampled section level. We propose a novel approach, derived from methodology in precision digital pathology and network analysis, to study the single-cell-level local neighborhoods of the tissue while preserving spatial information in the form of network connections. We show that our tool can successfully be used for advanced and precise assessment of local properties combining multiple stainings, and we further apply this method to a multi-stained histological mouse aortic root dataset.
Segmentation of Pathology Images: A Deep Learning Strategy with Annotated Data
Cancer has significantly threatened human life and health for many years. In the clinic, histopathology image segmentation is the gold standard for evaluating patient prognosis and treatment outcome. Generally, manually labelling tumour regions in hundreds of high-resolution histopathological images is time-consuming and expensive for pathologists. Recently, advancements in hardware and computer vision have allowed deep-learning-based methods to become the mainstream for automatic tumour segmentation, significantly reducing the workload of pathologists. However, most current methods rely on large-scale labelled histopathological images. Therefore, this research studies label-effective tumour segmentation methods using deep-learning paradigms to relieve the annotation limitations.
Chapter 3 proposes an ensemble framework for fully-supervised tumour segmentation. Usually, the performance of an individually trained network is limited by the significant morphological variance in histopathological images. We propose a fully-supervised ensemble fusion model that uses both shallow and deep U-Nets, trained with images of different resolutions and subsets of images, for robust prediction of tumour regions. Noise elimination is achieved with convolutional Conditional Random Fields. Two open datasets are used to evaluate the proposed method: the ACDC@LungHP challenge at ISBI 2019 and the DigestPath challenge at MICCAI 2019. With a Dice coefficient of 79.7%, the proposed method takes third place in ACDC@LungHP. In DigestPath 2019, the proposed method achieves a Dice coefficient of 77.3%. Well-annotated images are an indispensable part of training fully-supervised segmentation strategies. However, large-scale histopathology images are rarely finely annotated in clinical practice: labels are often of poor quality, or only a few images are manually marked by experts. Consequently, fully-supervised methods cannot perform well in these cases.
Chapter 4 proposes self-supervised contrastive learning for tumour segmentation. A self-supervised cancer segmentation framework is proposed to reduce label dependency. An innovative contrastive learning scheme is developed to represent tumour features based on unlabelled images. Unlike a normal U-Net, the backbone is a patch-based segmentation network. Additionally, data augmentation and contrastive losses are applied to improve the discriminability of tumour features. A convolutional Conditional Random Field is used to smooth and eliminate noise. Three labelled and fourteen unlabelled images are collected from a private skin cancer dataset called BSS. Experimental results show that the proposed method achieves better tumour segmentation performance than other popular self-supervised methods. However, when evaluated on the same public datasets as Chapter 3, the proposed self-supervised method struggles with fine-grained segmentation around tumour boundaries compared to the supervised method we proposed.
Chapter 5 proposes a sketch-based weakly-supervised tumour segmentation method. To segment tumour regions precisely from coarse annotations, a sketch-supervised method is proposed, containing a dual CNN-Transformer network and a globally normalised class activation map. The CNN-Transformer network simultaneously models global and local tumour features. With the globally normalised class activation map, a gradient-based tumour representation can be obtained from the dual network predictions. We invited experts to mark fine and coarse annotations in the private BSS and the public PAIP2019 datasets to facilitate reproducible performance comparisons. On the BSS dataset, the proposed method achieves 76.686% IoU and 86.6% Dice scores, outperforming state-of-the-art methods. Additionally, the proposed method achieves a Dice gain of 8.372% compared with U-Net on the PAIP2019 dataset.
The thesis thus presents three approaches to segmenting cancers from histology images: fully-supervised, self-supervised, and weakly-supervised. This research effectively segments tumour regions based on histopathological annotations and well-designed modules. Our studies comprehensively demonstrate label-effective automatic histopathological image segmentation, and experimental results show that our methods achieve state-of-the-art segmentation performance on private and public datasets. In the future, we plan to integrate more tumour feature representation technologies with other medical modalities and apply them to clinical research.
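The Dice and IoU scores quoted throughout the abstract above are standard overlap measures between a predicted mask and its ground truth; a minimal sketch on flat binary masks (synthetic data, illustrative only):

```python
# Hedged sketch of the Dice coefficient and IoU (Jaccard index)
# on flat 0/1 masks: prediction vs. ground truth.

def dice(pred, truth):
    """Dice = 2 * |P ∩ T| / (|P| + |T|)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    denom = sum(pred) + sum(truth)
    return 2.0 * inter / denom if denom else 1.0

def iou(pred, truth):
    """IoU = |P ∩ T| / |P ∪ T|."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

pred  = [1, 1, 1, 1, 0, 0, 0, 0]
truth = [0, 1, 1, 1, 1, 0, 0, 0]
print(dice(pred, truth), iou(pred, truth))  # 0.75 0.6
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why a method's Dice score is always the larger of the pair, as in the 86.6% Dice versus 76.686% IoU reported above.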
PAIP 2019: Liver cancer segmentation challenge
The Pathology Artificial Intelligence Platform (PAIP) is a free research platform in support of pathological artificial intelligence (AI). The main goal of the platform is to construct a high-quality pathology learning data set with greater accessibility. The PAIP Liver Cancer Segmentation Challenge, organized in conjunction with the Medical Image Computing and Computer Assisted Intervention Society (MICCAI 2019), is the first image analysis challenge to use PAIP datasets. The goal of the challenge was to evaluate new and existing algorithms for automated detection of liver cancer in whole-slide images (WSIs). Additionally, this year's PAIP attempted to address potential future problems of AI applicability in clinical settings. Participants were asked to use analytical data and statistical metrics to evaluate the performance of automated algorithms on two different tasks: Task 1 involved liver cancer segmentation and Task 2 involved viable tumor burden estimation. Team performance was strongly correlated across the two tasks: teams that performed well on Task 1 also performed well on Task 2. After evaluation, we summarized the top 11 teams' algorithms. We then discussed the pathological implications of the images that were easy to predict for cancer segmentation and those that were challenging for viable tumor burden estimation. Of the 231 participants in the PAIP challenge, a total of 64 submissions were received from 28 teams. The submitted algorithms automatically segmented liver cancer in WSIs with a score of up to 0.78. The PAIP challenge was created in an effort to combat the lack of research addressing liver cancer using digital pathology. It remains unclear how the applicability of AI algorithms created during the challenge can affect clinical diagnoses. However, the dataset and evaluation metric provided have the potential to aid the development and benchmarking of cancer diagnosis and segmentation. (C) 2020 The Authors. Published by Elsevier B.V.