Augmented Mitotic Cell Count using Field Of Interest Proposal
Histopathological prognostication of neoplasia, including most tumor grading
systems, is based upon a number of criteria. Probably the most important is the
number of mitotic figures, most commonly determined as the mitotic
count (MC), i.e. the number of mitotic figures within 10 consecutive high-power
fields. Often the area with the highest mitotic activity is to be selected for
the MC. However, since mitotic activity is not known in advance, an arbitrary
choice of this region is considered one important cause of high variability in
prognostication and grading.
In this work, we present an algorithmic approach that first computes a
mitotic cell map using a deep convolutional network. In a second step, this
map is used to construct a mitotic activity estimate. Lastly, we select
the image segment the size of ten high-power fields with the
overall highest mitotic activity as a region proposal for expert MC
determination. We evaluate the approach using a dataset of 32 completely
annotated whole slide images, of which 22 were used for training the network
and 10 for testing. We find a correlation of r = 0.936 between the estimated and ground-truth mitotic counts.
Comment: 6 pages, submitted to BVM 2019 (bvm-workshop.org)
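The region-proposal step described above (finding the fixed-size window, e.g. ten high-power fields, with the highest summed mitotic activity) can be sketched with a summed-area table. This is a rough illustration, not the authors' released code; the function name and the 2D activity-map input format are assumptions:

```python
import numpy as np

def propose_mc_region(activity_map, win_h, win_w):
    """Return the top-left corner and score of the window of size
    (win_h, win_w) with the highest summed mitotic activity."""
    # Integral image: ii[i, j] = sum of activity_map[:i, :j]
    ii = np.pad(activity_map, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    H, W = activity_map.shape
    # Window sums for every valid placement, each in O(1)
    sums = (ii[win_h:H + 1, win_w:W + 1]
            - ii[:H - win_h + 1, win_w:W + 1]
            - ii[win_h:H + 1, :W - win_w + 1]
            + ii[:H - win_h + 1, :W - win_w + 1])
    r, c = np.unravel_index(np.argmax(sums), sums.shape)
    return int(r), int(c), float(sums[r, c])
```

After one cumulative-sum pass, every window placement is scored in constant time, which matters when the map covers an entire whole slide image.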
HistoMIL: A Python package for training multiple instance learning models on histopathology slides
Hematoxylin and eosin (H&E) stained slides are widely used in disease diagnosis. Remarkable advances in deep learning have made it possible to detect complex molecular patterns in these histopathology slides, suggesting automated approaches could help inform pathologists’ decisions. Multiple instance learning (MIL) algorithms have shown promise in this context, outperforming transfer learning (TL) methods for various tasks, but their implementation and usage remain complex. We introduce HistoMIL, a Python package designed to streamline the implementation, training and inference of MIL-based algorithms for computational pathologists and biomedical researchers. It integrates a self-supervised learning module for feature encoding, and a full pipeline encompassing TL and three MIL algorithms: ABMIL, DSMIL, and TransMIL. The PyTorch Lightning framework enables effortless customization and algorithm implementation. We illustrate HistoMIL's capabilities by building predictive models for 2,487 cancer hallmark genes on breast cancer histology slides, achieving AUROC performances of up to 85%.
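HistoMIL's own API is not shown in the abstract; as a hedged NumPy sketch of the attention pooling that ABMIL-style models perform (the names `feats`, `V`, and `w` are hypothetical stand-ins for learned parameters, and the real models operate on slide-level bags of tile embeddings):

```python
import numpy as np

def abmil_pool(feats, V, w):
    """Attention-based MIL pooling in the spirit of ABMIL: score each
    instance embedding, softmax the scores over the bag, and return the
    attention weights plus the weighted-average bag embedding.
    feats: (n_instances, d), V: (d, h), w: (h,)."""
    scores = np.tanh(feats @ V) @ w      # one scalar score per instance
    a = np.exp(scores - scores.max())    # numerically stable softmax
    a = a / a.sum()                      # attention weights sum to 1
    return a, a @ feats                  # weights (n,), bag embedding (d,)
```

A slide-level classifier head would then act on the pooled embedding; the attention weights themselves are often visualised to see which tiles drove the prediction.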
Whole-Slide Mitosis Detection in H&E Breast Histology Using PHH3 as a Reference to Train Distilled Stain-Invariant Convolutional Networks
Manual counting of mitotic tumor cells in tissue sections constitutes one of
the strongest prognostic markers for breast cancer. This procedure, however, is
time-consuming and error-prone. We developed a method to automatically detect
mitotic figures in breast cancer tissue sections based on convolutional neural
networks (CNNs). Application of CNNs to hematoxylin and eosin (H&E) stained
histological tissue sections is hampered by: (1) noisy and expensive reference
standards established by pathologists, (2) lack of generalization due to
staining variation across laboratories, and (3) high computational requirements
needed to process gigapixel whole-slide images (WSIs). In this paper, we
present a method to train and evaluate CNNs to specifically solve these issues
in the context of mitosis detection in breast cancer WSIs. First, by combining
image analysis of mitotic activity in phosphohistone-H3 (PHH3) restained slides
and registration, we built a reference standard for mitosis detection in entire
H&E WSIs requiring minimal manual annotation effort. Second, we designed a data
augmentation strategy that creates diverse and realistic H&E stain variations
by modifying the hematoxylin and eosin color channels directly. Using it during
training combined with network ensembling resulted in a stain invariant mitosis
detector. Third, we applied knowledge distillation to reduce the computational
requirements of the mitosis detection ensemble with a negligible loss of
performance. The system was trained in a single-center cohort and evaluated in
an independent multicenter cohort from The Cancer Genome Atlas on the three
tasks of the Tumor Proliferation Assessment Challenge (TUPAC). We obtained a
performance within the top-3 best methods for most of the tasks of the
challenge.
Comment: Accepted to appear in IEEE Transactions on Medical Imaging
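The augmentation described above modifies the hematoxylin and eosin color channels directly. A minimal NumPy sketch of that idea, assuming the standard Ruifrok-Johansson stain matrix and hypothetical scale/shift parameters (the authors' exact transform may differ):

```python
import numpy as np

# Ruifrok & Johansson optical-density stain vectors
# (rows: hematoxylin, eosin, residual)
STAINS = np.array([[0.650, 0.704, 0.286],
                   [0.072, 0.990, 0.105],
                   [0.268, 0.570, 0.776]])

def augment_he(rgb, alpha=(1.0, 1.0), beta=(0.0, 0.0)):
    """Perturb the H and E channels in optical-density space: scale (alpha)
    and shift (beta) the per-pixel stain concentrations, then reconstruct."""
    od = -np.log(np.clip(rgb.reshape(-1, 3) / 255.0, 1e-6, 1.0))
    conc = od @ np.linalg.inv(STAINS)              # stain concentrations
    conc[:, 0] = conc[:, 0] * alpha[0] + beta[0]   # hematoxylin channel
    conc[:, 1] = conc[:, 1] * alpha[1] + beta[1]   # eosin channel
    out = np.exp(-conc @ STAINS) * 255.0           # back to RGB
    return np.clip(np.round(out), 0, 255).reshape(rgb.shape).astype(np.uint8)
```

Sampling alpha and beta per training image yields diverse but plausible stain appearances, which is the mechanism behind the stain-invariant detector reported above.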
Artificial intelligence for breast cancer precision pathology
Breast cancer is the most common cancer type in women globally but is associated with a
continuous decline in mortality rates. The improved prognosis can be partially attributed to
effective treatments developed for subgroups of patients. However, it remains
challenging to optimise treatment plans for each individual patient. To improve disease
outcome and to decrease the burden associated with unnecessary treatment and adverse drug
effects, this thesis aimed to develop artificial intelligence-based tools to improve
individualised medicine for breast cancer patients.
In study I, we developed a deep learning based model (DeepGrade) to stratify patients at
intermediate risk. The model was optimised on haematoxylin and eosin
(HE) stained whole slide images (WSIs) of grade 1 and 3 tumours and applied to stratify
grade 2 tumours into grade 1-like (DG2-low) and grade 3-like (DG2-high) subgroups. The
efficacy of the DeepGrade model was validated using recurrence-free survival, where the
dichotomised groups exhibited an adjusted hazard ratio (HR) of 2.94 (95% confidence interval
[CI] 1.24-6.97, P = 0.015). The observation was further confirmed in the external test cohort
with an adjusted HR of 1.91 (95% CI: 1.11-3.29, P = 0.019).
In study II, we investigated whether deep learning models were capable of predicting gene
expression levels using the morphological patterns from tumours. We optimised convolutional
neural networks (CNNs) to predict mRNA expression for 17,695 genes using HE stained WSIs
from the training set. An initial evaluation on the validation set showed a significant
correlation between the RNA-seq measurements and model predictions for
52.75% of the genes. The models were further tested in the internal and external test sets.
In addition, we compared the model's efficacy in predicting RNA-seq based proliferation scores.
Lastly, the ability of the optimised CNNs to capture spatial gene expression variations was
evaluated and confirmed using spatial transcriptomics profiling.
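The per-gene validation described above (a significant correlation for 52.75% of genes) can be illustrated roughly as follows; the function and array layout are hypothetical stand-ins, not the thesis code:

```python
import numpy as np
from scipy.stats import spearmanr

def fraction_significant(pred, truth, alpha=0.05):
    """For each gene (column), Spearman-correlate model predictions with
    RNA-seq measurements across samples (rows); return the fraction of
    genes whose correlation is significant at level alpha."""
    n_genes = pred.shape[1]
    n_sig = 0
    for g in range(n_genes):
        rho, p = spearmanr(pred[:, g], truth[:, g])
        if p < alpha:
            n_sig += 1
    return n_sig / n_genes
```

With 17,695 genes tested, a real analysis would also need multiple-testing correction (e.g. Benjamini-Hochberg), which this sketch omits.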
In study III, we investigated the relationship between intra-tumour gene expression
heterogeneity and patient survival outcomes. Deep learning models optimised in study II
were applied to generate spatial gene expression predictions for the PAM50 gene panel. A set
of 11 texture-based features and one slide-average gene expression feature per gene were
extracted as input to train a Cox proportional hazards regression model with elastic net
regularisation to predict patient risk of recurrence. Through nested cross-validation, the model
dichotomised the training cohort into low and high risk groups with an adjusted HR of 2.1
(95% CI: 1.30-3.30, P = 0.002). The model was further validated on two external cohorts.
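The abstract does not list the 11 texture-based features, so the statistics below are illustrative stand-ins for summarising one gene's spatial prediction map into a slide-average plus heterogeneity descriptors of the kind such a pipeline might use:

```python
import numpy as np

def spatial_gene_features(expr_map):
    """Summarise a 2D map of per-tile expression predictions for one gene
    into a slide-average plus simple heterogeneity statistics
    (illustrative stand-ins for the thesis's 11 texture features)."""
    v = expr_map.ravel()
    hist, _ = np.histogram(v, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]                                  # drop empty bins
    return {
        "mean": float(v.mean()),                  # slide-average expression
        "std": float(v.std()),                    # spread across tiles
        "iqr": float(np.percentile(v, 75) - np.percentile(v, 25)),
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram entropy
    }
```

Per-gene feature vectors like this, concatenated over the PAM50 panel, would form the input to the elastic-net Cox model described above.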
In study IV, we investigated the agreement between the Stratipath Breast, which is the
modified, commercialised DeepGrade model developed in study I, and the Prosigna® test.
Both tests aim to stratify patients into groups with distinct prognoses. The outputs from Stratipath Breast
comprise a risk score and a two-level risk stratification, whereas the outputs from Prosigna®
include the risk of recurrence score and a three-tier risk stratification. By comparing the number
of patients assigned to ‘low’ or ‘high’ risk groups, we found an overall moderate agreement
(76.09%) between the two tests. In addition, the risk scores from the two tests showed a good
correlation (Spearman's rho = 0.59, P = 1.16E-08). A good correlation was also observed
between the risk score from each test and the Ki67 index. The comparison was also carried out
in the subgroup of patients with grade 2 tumours, where similar but slightly weaker correlations
were found.
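The agreement analysis above (overall percent agreement between risk groups plus Spearman's rho between the continuous risk scores) can be sketched as follows. The function and inputs are hypothetical, and the real comparison collapses Prosigna's three tiers onto the two-level 'low'/'high' grouping before computing agreement:

```python
import numpy as np
from scipy.stats import spearmanr

def compare_tests(scores_a, scores_b, groups_a, groups_b):
    """Overall percent agreement between two categorical risk
    stratifications, plus the Spearman correlation (and p-value)
    of the two continuous risk scores."""
    groups_a = np.asarray(groups_a)
    groups_b = np.asarray(groups_b)
    agreement = float(np.mean(groups_a == groups_b) * 100.0)
    rho, p = spearmanr(scores_a, scores_b)
    return agreement, float(rho), float(p)
```

Percent agreement is the simplest concordance measure; a fuller analysis would typically also report Cohen's kappa to correct for chance agreement.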
The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis
Recently, deep learning frameworks have rapidly become the main methodology for analyzing medical images. Due to their powerful learning ability and advantages in dealing with complex patterns, deep learning algorithms are ideal for image analysis challenges, particularly in the field of digital pathology. The variety of image analysis tasks in the context of deep learning includes classification (e.g., healthy vs. cancerous tissue), detection (e.g., lymphocytes and mitosis counting), and segmentation (e.g., nuclei and glands segmentation). The majority of recent machine learning methods in digital pathology have a pre- and/or post-processing stage which is integrated with a deep neural network. These stages, based on traditional image processing methods, are employed to make the subsequent classification, detection, or segmentation problem easier to solve. Several studies have shown how the integration of pre- and post-processing methods within a deep learning pipeline can further increase the model's performance when compared to the network by itself. The aim of this review is to provide an overview of the types of methods that are used within deep learning frameworks either to optimally prepare the input (pre-processing) or to improve the results of the network output (post-processing), focusing on digital pathology image analysis. Many of the techniques presented here, especially the post-processing methods, are not limited to digital pathology but can be extended to almost any image analysis field.
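As a small example of the post-processing methods such reviews cover, a common step after nuclei or gland segmentation is removing spurious small connected components from the predicted binary mask. A generic sketch, not tied to any specific paper in the review:

```python
import numpy as np
from scipy import ndimage

def remove_small_objects(mask, min_size):
    """Segmentation post-processing: drop connected components smaller
    than min_size pixels from a binary mask (4-connectivity)."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask, dtype=bool)
    # Pixel count of each labelled component (labels 1..n)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)       # index 0 = background
    keep[1:] = sizes >= min_size
    return keep[labels]
```

Analogous morphological steps (hole filling, opening/closing, watershed splitting of touching nuclei) are applied the same way, on the network output rather than inside it.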
Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions
Breast cancer has reached the highest incidence rate worldwide among all
malignancies since 2020. Breast imaging plays a significant role in early
diagnosis and intervention to improve the outcome of breast cancer patients. In
the past decade, deep learning has shown remarkable progress in breast cancer
imaging analysis, holding great promise in interpreting the rich information
and complex context of breast imaging modalities. Considering the rapid
improvement in the deep learning technology and the increasing severity of
breast cancer, it is critical to summarize past progress and identify future
challenges to be addressed. In this paper, we provide an extensive survey of
deep learning-based breast cancer imaging research, covering studies on
mammogram, ultrasound, magnetic resonance imaging, and digital pathology images
over the past decade. The major deep learning methods, publicly available
datasets, and applications on imaging-based screening, diagnosis, treatment
response prediction, and prognosis are described in detail. Drawn from the
findings of this survey, we present a comprehensive discussion of the
challenges and potential avenues for future research in deep learning-based
breast cancer imaging.
Comment: Survey, 41 pages