Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions
Breast cancer has had the highest incidence rate among all malignancies
worldwide since 2020. Breast imaging plays a significant role in early
diagnosis and intervention to improve the outcome of breast cancer patients. In
the past decade, deep learning has shown remarkable progress in breast cancer
imaging analysis, holding great promise in interpreting the rich information
and complex context of breast imaging modalities. Considering the rapid
improvement in deep learning technology and the increasing severity of
breast cancer, it is critical to summarize past progress and identify future
challenges to be addressed. In this paper, we provide an extensive survey of
deep learning-based breast cancer imaging research, covering studies on
mammogram, ultrasound, magnetic resonance imaging, and digital pathology images
over the past decade. The major deep learning methods, publicly available
datasets, and applications on imaging-based screening, diagnosis, treatment
response prediction, and prognosis are described in detail. Drawn from the
findings of this survey, we present a comprehensive discussion of the
challenges and potential avenues for future research in deep learning-based
breast cancer imaging.
Comment: Survey, 41 pages
Discriminative Representations for Heterogeneous Images and Multimodal Data
Histology images of tumor tissue are an important diagnostic and prognostic tool for pathologists. Recently developed molecular methods group tumors into subtypes to further guide treatment decisions, but they are not routinely performed on all patients. A lower-cost and repeatable method to predict tumor subtypes from histology could bring benefits to more cancer patients. Further, combining imaging and genomic data types provides a more complete view of the tumor and may improve prognostication and treatment decisions. While molecular and genomic methods capture the state of a small sample of the tumor, histological image analysis provides a spatial view and can identify multiple subtypes in a single tumor. This intra-tumor heterogeneity has yet to be fully understood, and its quantification may lead to future insights into tumor progression. In this work, I develop methods to learn appropriate features directly from images using dictionary learning or deep learning. I use multiple instance learning to account for intra-tumor variations in subtype during training, improving subtype predictions and providing insights into tumor heterogeneity. I also integrate image and genomic features to learn a projection to a shared space that is also discriminative. This method can be used for cross-modal classification or to improve predictions from images by also learning from genomic data during training, even if only image data is available at test time.
Doctor of Philosophy
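The multiple instance learning idea described above can be sketched minimally: a slide is a bag of tiles, and under the standard MIL assumption the slide expresses a subtype if at least one tile does, so instance scores are max-pooled to the bag level. The linear scorer, toy features, and dimensions below are illustrative assumptions, not the dissertation's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def mil_bag_score(tile_features, w, b):
    """Score a bag (one slide) of tile features under the classic MIL
    assumption: a slide is positive for a subtype if at least one of
    its tiles is, so instance scores are max-pooled to bag level."""
    instance_scores = tile_features @ w + b  # one linear score per tile
    return instance_scores.max()             # max-pooling aggregation

# Toy data: 3 slides, each a bag of 5 tiles with 4-dim features.
w, b = rng.normal(size=4), 0.0
bags = [rng.normal(size=(5, 4)) for _ in range(3)]
bag_scores = [mil_bag_score(t, w, b) for t in bags]
print(bag_scores)
```

In practice the max (or a soft variant such as noisy-OR or attention pooling) lets the bag gradient flow only through the tiles that drive the prediction, which is what makes the tile-level heterogeneity insights possible.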
Pathway-Based Multi-Omics Data Integration for Breast Cancer Diagnosis and Prognosis.
Ph.D. Thesis. University of Hawaiʻi at Mānoa, 2017.
Semi-supervised ViT knowledge distillation network with style transfer normalization for colorectal liver metastases survival prediction
Colorectal liver metastases (CLM) significantly impact colon cancer patients,
influencing survival based on systemic chemotherapy response. Traditional
methods like tumor grading scores (e.g., the tumor regression grade, TRG) for
prognosis suffer from subjectivity, time constraints, and expertise demands.
Current machine learning approaches often focus on radiological data, yet the
relevance of histological images for survival predictions, capturing intricate
tumor microenvironment characteristics, is gaining recognition. To address
these limitations, we propose an end-to-end approach for automated prognosis
prediction using histology slides stained with H&E and HPS. We first employ a
Generative Adversarial Network (GAN) for slide normalization to reduce staining
variations and improve the overall quality of the images that are used as input
to our prediction pipeline. We propose a semi-supervised model to perform
tissue classification from sparse annotations, producing feature maps. We use
an attention-based approach that weighs the importance of different slide
regions in producing the final classification results. We exploit the extracted
features for the metastatic nodules and surrounding tissue to train a prognosis
model. In parallel, we train a vision Transformer (ViT) in a knowledge
distillation framework to replicate and enhance the performance of the
prognosis prediction. In our evaluation on a clinical dataset of 258 patients,
our approach demonstrates superior performance, with c-indexes of 0.804 (0.014)
for overall survival (OS) and 0.733 (0.014) for time to recurrence (TTR).
Achieving 86.9% to 90.3% accuracy in predicting TRG dichotomization and 78.5%
to 82.1% accuracy for the 3-class TRG
classification task, our approach outperforms comparative methods. Our proposed
pipeline can provide automated prognosis for pathologists and oncologists, and
can greatly promote precision medicine progress in managing CLM patients.
Comment: 16 pages, 7 figures and 7 tables. Submitted to the Medical Image
Analysis (MedIA) journal
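The attention-based weighting of slide regions described in this abstract is commonly realized as attention MIL pooling (in the spirit of Ilse et al., 2018): each region gets a learned weight, and the slide embedding is the weighted average of region features. The parameters `V` and `w` and the toy dimensions below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(region_features, V, w):
    """Attention-based MIL pooling: score each slide region with a
    small tanh network, normalize the scores to attention weights,
    and return the weighted average as the slide-level embedding."""
    scores = np.tanh(region_features @ V) @ w  # (n_regions,)
    alpha = softmax(scores)                    # weights sum to 1
    return alpha @ region_features, alpha

rng = np.random.default_rng(1)
feats = rng.normal(size=(8, 16))  # 8 regions, 16-dim features (toy)
V, w = rng.normal(size=(16, 8)), rng.normal(size=8)
embedding, alpha = attention_pool(feats, V, w)
print(embedding.shape, round(float(alpha.sum()), 6))
```

The attention weights `alpha` double as an interpretability map: regions with high weight are the ones the prognosis model relied on, which is useful when presenting predictions to pathologists.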
Automated Grading of Bladder Cancer using Deep Learning
PhD thesis in Information Technology.
Urothelial carcinoma is the most common type of bladder cancer and is among the cancer types with the highest recurrence rate and lifetime treatment cost per patient. Diagnosed patients are stratified into risk groups, mainly based on the histological grade and stage. However, it is well known that correct grading of bladder cancer suffers from intra- and interobserver variability and inconsistent reproducibility between pathologists, potentially leading to under- or overtreatment of the patients. The economic burden, unnecessary patient suffering, and additional load on the health care system illustrate the importance of developing new tools to aid pathologists.
With the introduction of digital pathology, large amounts of data have been made available in the form of digital histological whole-slide images (WSI). However, despite the massive amount of data, annotations for the given data are lacking. Another potential problem is that the tissue samples of urothelial carcinoma contain a mixture of damaged tissue, blood, stroma, muscle, and urothelium, where it is mainly the urothelium tissue that is diagnostically relevant for grading.
A method for tissue segmentation is investigated, where the aim is to segment WSIs into the six tissue classes: urothelium, stroma, muscle, damaged tissue, blood, and background. Several methods based on convolutional neural networks (CNN) for tile-wise classification are proposed. Both single-scale and multiscale models were explored to see if including more magnification levels would improve the performance. Different techniques, such as unsupervised learning, semi-supervised learning, and domain adaptation techniques, are explored to mitigate the challenge of missing large quantities of annotated data.
It is necessary to extract tiles from the WSI since it is intractable to process the entire WSI at full resolution at once. We have proposed a method to parameterize and automate the task of extracting tiles from different scales with a region of interest (ROI) defined at one of the scales. The method is reproducible and easy to describe by reporting the parameters.
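The parameterized tile extraction described above can be sketched roughly as a coordinate mapping: an ROI outlined at one pyramid level is rescaled to the target level and covered with a grid of non-overlapping tile origins. The pyramid downsample factors, levels, and tile size below are invented for illustration and are not the thesis's reported parameters:

```python
def tile_coords(roi, roi_level, target_level, tile_size, downsamples):
    """Map an ROI defined at one pyramid level of a WSI to a grid of
    non-overlapping tile origins at another level.  downsamples[i] is
    the downsample factor of level i relative to full resolution."""
    scale = downsamples[roi_level] / downsamples[target_level]
    x0, y0, w, h = (int(v * scale) for v in roi)
    coords = []
    for y in range(y0, y0 + h - tile_size + 1, tile_size):
        for x in range(x0, x0 + w - tile_size + 1, tile_size):
            coords.append((x, y))
    return coords

# Hypothetical pyramid: level 0 is full resolution, each level 4x smaller.
downsamples = [1, 4, 16]
# A 1024 x 1024 px ROI outlined at level 2, tiled at level 1 with 256 px tiles.
coords = tile_coords((0, 0, 1024, 1024), roi_level=2, target_level=1,
                     tile_size=256, downsamples=downsamples)
print(len(coords))  # 16 x 16 grid of tile origins
```

Because the whole procedure is determined by (roi, roi_level, target_level, tile_size), a tiling is fully reproducible from a handful of reported parameters, which is the property the thesis emphasizes.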
A pipeline for automated diagnostic grading is proposed, called TRIgrade. First, the tissue segmentation method is utilized to find the diagnostically relevant urothelium tissue. Then, the parameterized tile extraction method is used to extract tiles from the urothelium regions at three magnification levels from 300 WSIs. The extracted tiles form the training, validation, and test data used to train and test a diagnostic model. The final system outputs a segmented tissue image showing all the tissue regions in the WSI, a WHO grade heatmap indicating low- and high-grade carcinoma regions, and finally, a slide-level WHO grade prediction. The proposed TRIgrade pipeline correctly graded 45 of 50 WSIs, achieving an accuracy of 90%.
Negative Pseudo Labeling Using Class Proportion for Semantic Segmentation in Pathology
16th European Conference on Computer Vision (ECCV), Glasgow, UK, August 23–28, 2020. Part of the Lecture Notes in Computer Science book series (LNCS, volume 12360) and of the Image Processing, Computer Vision, Pattern Recognition, and Graphics subseries (LNIP, volume 12360).
In pathological diagnosis, since the proportion of adenocarcinoma subtypes is related to the recurrence rate and the survival time after surgery, the proportion of cancer subtypes in pathological images has been recorded as diagnostic information in some hospitals. In this paper, we propose a subtype segmentation method that uses such proportion labels as weakly supervised labels. If the estimated class rate is higher than the annotated class rate, we generate negative pseudo labels, which indicate that "the input image does not belong to this class," in addition to standard pseudo labels. This forces out low-confidence samples and mitigates the problem of positive pseudo-label learning, which cannot label low-confidence unlabeled samples. Our method outperformed state-of-the-art semi-supervised learning (SSL) methods.
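A minimal sketch of the proportion-based rule as the abstract describes it: when the estimated rate of a class exceeds its annotated slide-level proportion, the excess, least-confident predictions receive a negative pseudo label. The function name, toy probabilities, and the excess-count heuristic are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def negative_pseudo_labels(probs, annotated_rate, cls):
    """If the fraction of samples currently predicted as `cls` exceeds
    the slide's annotated proportion for that class, mark the excess,
    least-confident predictions with a negative pseudo label
    ('this sample does not belong to cls')."""
    pred = probs.argmax(axis=1)
    idx = np.where(pred == cls)[0]
    est_rate = len(idx) / len(probs)
    if est_rate <= annotated_rate:
        return np.array([], dtype=int)
    n_excess = int(round((est_rate - annotated_rate) * len(probs)))
    conf = probs[idx, cls]
    return idx[np.argsort(conf)[:n_excess]]  # least confident first

# Toy slide: 10 patches, 2 classes, annotated as 30% class 1.
p1 = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.52, 0.4, 0.3, 0.2, 0.1])
probs = np.stack([1 - p1, p1], axis=1)
neg = negative_pseudo_labels(probs, annotated_rate=0.3, cls=1)
print(sorted(neg.tolist()))
```

The negative labels then enter the loss as "not class c" constraints, complementing the usual positive pseudo labels that only high-confidence samples can receive.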