A Compact Representation of Histopathology Images using Digital Stain Separation & Frequency-Based Encoded Local Projections
In recent years, histopathology images have been increasingly used as a
diagnostic tool in the medical field. The process of accurately diagnosing a
biopsy sample requires significant expertise in the field, and as such can be
time-consuming and is prone to uncertainty and error. With the advent of
digital pathology, using image recognition systems to highlight problem areas
or locate similar images can aid pathologists in making quick and accurate
diagnoses. In this paper, we specifically consider the encoded local
projections (ELP) algorithm, which has previously shown some success as a tool
for classification and recognition of histopathology images. We build on the
success of the ELP algorithm as a means for image classification and
recognition by proposing a modified algorithm which captures the local
frequency information of the image. The proposed algorithm estimates local
frequencies by quantifying the changes in multiple projections in local windows
of greyscale images. By doing so we remove the need to store the full
projections, thus significantly reducing the histogram size and decreasing
computation time for image retrieval and classification tasks. Furthermore, we
investigate the effectiveness of applying our method to histopathology images
which have been digitally separated into their hematoxylin and eosin stain
components. The proposed algorithm is tested on the publicly available invasive
ductal carcinoma (IDC) data set. The histograms are used to train an SVM to
classify the data. The experiments showed that the proposed method outperforms
the original ELP algorithm in image retrieval tasks. On classification tasks,
the results are found to be comparable to state-of-the-art deep learning
methods and better than many handcrafted features from the literature.
Comment: Accepted for publication in the International Conference on Image Analysis and Recognition (ICIAR 2019).
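As an illustration of the two ingredients this abstract describes, the sketch below separates H&E stains by colour deconvolution (the Ruifrok and Johnston method, with the commonly published stain vectors) and counts direction changes along one projection of a local window as a crude frequency proxy. The function names and the single-angle projection are simplifying assumptions, not the authors' exact ELP variant.

```python
import numpy as np

# Commonly published H&E stain absorption vectors (Ruifrok & Johnston);
# the paper does not spell out its separation method, so treat these as
# illustrative defaults rather than the authors' exact choice.
HE_STAINS = np.array([
    [0.650, 0.704, 0.286],   # hematoxylin absorption in R, G, B
    [0.072, 0.990, 0.105],   # eosin
    [0.268, 0.570, 0.776],   # residual channel
])
HE_STAINS /= np.linalg.norm(HE_STAINS, axis=1, keepdims=True)

def separate_stains(rgb):
    """Map an 8-bit RGB image (H, W, 3) to per-stain concentrations."""
    od = -np.log10(np.clip(rgb.astype(float), 1, 255) / 255.0)  # optical density
    conc = od.reshape(-1, 3) @ np.linalg.inv(HE_STAINS)
    return conc.reshape(rgb.shape)

def projection_change_count(window):
    """Crude local-frequency proxy: count direction changes along one
    projection (column sums) of a greyscale window. The ELP variant in
    the paper uses multiple projection angles."""
    proj = window.sum(axis=0)
    d = np.diff(proj)
    return int(np.sum(np.sign(d[:-1]) != np.sign(d[1:])))
```

A window containing higher spatial frequencies yields more direction changes in its projection, which is the intuition behind replacing stored projections with compact frequency counts.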
Feature Fusion of Raman Chemical Imaging and Digital Histopathology using Machine Learning for Prostate Cancer Detection
The diagnosis of prostate cancer is challenging due to the heterogeneity of
its presentations, leading to the overdiagnosis and treatment of
non-clinically important disease. Accurate diagnosis can directly benefit a
patient's quality of life and prognosis. Towards addressing this issue, we
present a learning model for the automatic identification of prostate cancer.
While many prostate cancer studies have adopted Raman spectroscopy approaches,
none have utilised the combination of Raman Chemical Imaging (RCI) and other
imaging modalities. This study uses multimodal images formed from stained
Digital Histopathology (DP) and unstained RCI. The approach was developed and
tested on a set of 178 clinical samples from 32 patients, containing a range of
non-cancerous, Gleason grade 3 (G3) and grade 4 (G4) tissue microarray samples.
For each histological sample, there is a pathologist-labelled DP-RCI image
pair. The hypothesis tested was whether multimodal image models can outperform
single modality baseline models in terms of diagnostic accuracy. Binary
non-cancer/cancer models and the more challenging G3/G4 differentiation were
investigated. Regarding G3/G4 classification, the multimodal approach achieved
a sensitivity of 73.8% and specificity of 88.1% while the baseline DP model
showed a sensitivity and specificity of 54.1% and 84.7% respectively. The
multimodal approach demonstrated a statistically significant 12.7% AUC
advantage over the baseline with a value of 85.8% compared to 73.1%, also
outperforming models based solely on RCI and median Raman spectra. Feature
fusion of DP and RCI does not improve the simpler task of tumour
identification but does deliver an observed advantage in G3/G4 discrimination.
Building on these promising findings, future work could include the acquisition
of larger datasets for enhanced model generalization.
Comment: 19 pages, 8 tables, 18 figures.
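The feature-level fusion this abstract reports can be pictured as follows. The paper's exact scheme is not given here, so this is a minimal sketch in which each modality's feature matrix is standardised (so neither dominates by scale) and the two are concatenated before classification; the function name `fuse_features` is hypothetical.

```python
import numpy as np

def fuse_features(dp_feats, rci_feats):
    """Early (feature-level) fusion of Digital Histopathology (DP) and
    Raman Chemical Imaging (RCI) feature matrices, both shaped
    (n_samples, n_features). Each modality is z-scored per feature,
    then the columns are concatenated for a downstream classifier."""
    def zscore(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-12)
    return np.hstack([zscore(dp_feats), zscore(rci_feats)])
```

The fused matrix can then be fed to any standard classifier; the point of standardising first is that Raman-derived and histology-derived features typically live on very different numeric scales.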
Histopathological image classification using salient point patterns
Ankara: The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2011. Thesis (Master's) -- Bilkent University, 2011. Includes bibliographical references (leaves 69-79).
Over the last decade, computer-aided diagnosis (CAD) systems have gained great
importance to help pathologists improve the interpretation of histopathological
tissue images for cancer detection. These systems offer valuable opportunities to
reduce and eliminate the inter- and intra-observer variations in diagnosis, which
is very common in the current practice of histopathological examination. Many
studies have been dedicated to developing such systems for cancer diagnosis and
grading, especially based on textural and structural tissue image analysis. Although
the recent textural and structural approaches yield promising results for
different types of tissues, they are still unable to make use of the potential biological
information carried by different tissue components. Yet these tissue
components help better represent a tissue and, hence, better quantify
the tissue changes caused by cancer.
This thesis introduces a new textural approach, called Salient Point Patterns
(SPP), for the utilization of tissue components in order to represent colon biopsy
images. This textural approach first defines a set of salient points that correspond
to nuclear, stromal, and luminal components of a colon tissue. Then, it
extracts some features around these salient points to quantify the images. Finally,
it classifies the tissue samples by using the extracted features. Working
with 3236 colon biopsy samples that are taken from 258 different patients, our
experiments demonstrate that the Salient Point Patterns approach improves classification
accuracy compared to its counterparts, which do not make use of tissue
components in defining their texture descriptors. These experiments also show
that different sets of features can be used within the SPP approach for better representation of a tissue image.
Çığır, Celal (M.S.)
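The general idea, extracting features in windows around salient points, can be sketched as below. The thesis uses richer texture descriptors around nuclear, stromal, and luminal points; `salient_point_features` with mean/std/range statistics is a placeholder assumption, not the thesis method.

```python
import numpy as np

def salient_point_features(img, points, half=4):
    """For each salient point (row, col) in a greyscale image, take the
    surrounding (2*half+1)^2 window (truncated at the borders) and
    summarise it with simple statistics: mean, std, and value range."""
    feats = []
    for r, c in points:
        win = img[max(r - half, 0):r + half + 1,
                  max(c - half, 0):c + half + 1]
        feats.append([win.mean(), win.std(), win.max() - win.min()])
    return np.array(feats)
```

The resulting per-point feature vectors would then be aggregated per image and passed to a classifier, mirroring the extract-then-classify pipeline the abstract describes.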
Automatic registration of multi-modal microscopy images for integrative analysis of prostate tissue sections
Background: Prostate cancer is one of the leading causes of cancer-related deaths. For diagnosis, predicting the outcome of the disease, and for assessing potential new biomarkers, pathologists and researchers routinely analyze histological samples. Morphological and molecular information may be integrated by aligning microscopic histological images in a multiplex fashion. This process is usually time-consuming and results in intra- and inter-user variability. The aim of this study is to investigate the feasibility of using modern image analysis methods for automated alignment of microscopic images from differently stained adjacent paraffin sections of prostatic tissue specimens.
Methods: Tissue samples, obtained from biopsy or radical prostatectomy, were sectioned and stained with either hematoxylin & eosin (H&E), immunohistochemistry for p63 and AMACR, or Time-Resolved Fluorescence (TRF) for the androgen receptor (AR). Image pairs were aligned allowing for translation, rotation, and scaling. The registration was performed automatically by first detecting landmarks in both images using the scale-invariant feature transform (SIFT), then finding point correspondences with the well-known RANSAC protocol, and finally aligning with a Procrustes fit. The registration results were evaluated using both visual and quantitative criteria as defined in the text.
Results: Three experiments were carried out. First, images of consecutive tissue sections stained with H&E and p63/AMACR were successfully aligned in 85 of 88 cases (96.6%). The failures occurred in 3 out of 13 cores with highly aggressive cancer (Gleason score ≥ 8). Second, TRF and H&E image pairs were aligned correctly in 103 out of 106 cases (97%). The third experiment considered the alignment of image pairs with the same staining (H&E) coming from a stack of 4 sections. The success rate for alignment dropped from 93.8% in adjacent sections to 22% for the sections furthest apart.
Conclusions: The proposed method is both reliable and fast and therefore well suited for automatic segmentation and analysis of specific areas of interest, combining morphological information with protein expression data from three consecutive tissue sections. Finally, the performance of the algorithm appears largely unaffected by the Gleason grade of the prostate tissue samples examined, at least up to Gleason score 7.
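The final alignment step, a least-squares similarity (Procrustes) fit from point correspondences, has a closed-form solution that can be sketched as below. SIFT landmark detection and RANSAC outlier rejection are assumed to have produced the correspondences upstream; this follows the standard Umeyama formulation, not necessarily the authors' exact implementation.

```python
import numpy as np

def similarity_fit(src, dst):
    """Closed-form least-squares similarity transform (scale s, rotation R,
    translation t) mapping 2-D point set src onto dst, both shaped (N, 2),
    so that dst_i ~ s * R @ src_i + t (Umeyama's solution)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d                 # centre both sets
    cov = xd.T @ xs / len(src)                      # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))              # guard against reflection
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / xs.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

With clean correspondences this recovers the transform exactly, which is why the robust RANSAC filtering step before the fit matters in practice.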