Learning Autoencoded Radon Projections
Autoencoders have recently been used to encode medical images. In this
study, we design and validate a new framework for retrieving medical images by
classifying Radon projections compressed in the deepest layer of an
autoencoder. Because the autoencoder reduces dimensionality, a multilayer
perceptron (MLP) can be employed to classify the images. Integrating the MLP
yields a rather shallow learning architecture, which makes training
faster. We conducted a comparative study to examine the capabilities of
autoencoders for different inputs such as raw images, Histogram of Oriented
Gradients (HOG) and normalized Radon projections. Our framework is benchmarked
on the IRMA dataset, which contains x-ray images distributed across
different classes. Experiments show an IRMA error of (equivalent to
accuracy), outperforming state-of-the-art works on retrieval from the
IRMA dataset using autoencoders.

Comment: To appear in the proceedings of the IEEE Symposium Series on
Computational Intelligence (IEEE SSCI 2017), Honolulu, Hawaii, USA, Nov. 27 -- Dec. 1, 2017.
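As a rough illustration (not the authors' code), a Radon projection at 0° or 90° reduces to a column or row sum of the image; normalizing each projection gives the kind of compact input vector that could be fed to the autoencoder. A minimal NumPy sketch under these assumptions:

```python
import numpy as np

# Hypothetical sketch: Radon projections at 0 and 90 degrees are just
# axis sums of the image; each projection is normalized to sum to 1.
def radon_0_90(img):
    p0 = img.sum(axis=0).astype(float)   # projection at 0 degrees (column sums)
    p90 = img.sum(axis=1).astype(float)  # projection at 90 degrees (row sums)
    return p0 / p0.sum(), p90 / p90.sum()

# Tiny binary "image" (a plus sign) as a toy x-ray stand-in
img = np.array([[0, 1, 0],
                [1, 1, 1],
                [0, 1, 0]])
p0, p90 = radon_0_90(img)
# Both normalized projections are [0.2, 0.6, 0.2]
```

Projections at other angles require rotating the image first (e.g., `skimage.transform.radon` in practice); only the two axis-aligned cases are shown here for brevity.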
Pan-Cancer Diagnostic Consensus Through Searching Archival Histopathology Images Using Artificial Intelligence
The emergence of digital pathology has opened new horizons for histopathology
and cytology. Artificial-intelligence algorithms are able to operate on
digitized slides to assist pathologists with diagnostic tasks. Whereas
machine-learning methods for classification and segmentation have obvious
benefits for image analysis in pathology, image search represents a fundamental
shift in computational pathology. Matching the pathology of new patients with
already diagnosed and curated cases offers pathologists a novel approach to
improve diagnostic accuracy through visual inspection of similar cases and
computational majority vote for consensus building. In this study, we report
the results from searching the largest public repository (The Cancer Genome
Atlas [TCGA] program by National Cancer Institute, USA) of whole slide images
from almost 11,000 patients depicting different types of malignancies. For the
first time, we successfully indexed and searched almost 30,000 high-resolution
digitized slides constituting 16 terabytes of data, comprising 20 million
1000x1000-pixel image patches. The TCGA image database covers 25 anatomic
sites and contains 32 cancer subtypes. High-performance storage and GPU power
were employed for experimentation. The results were assessed with conservative
"majority voting" to build consensus for subtype diagnosis through vertical
search and demonstrated high accuracy values for both frozen-section slides
(e.g., bladder urothelial carcinoma 93%, kidney renal clear cell carcinoma 97%,
and ovarian serous cystadenocarcinoma 99%) and permanent histopathology slides
(e.g., prostate adenocarcinoma 98%, skin cutaneous melanoma 99%, and thymoma
100%). The key finding of this validation study was that computational
consensus appears to be possible for rendering diagnoses if a sufficiently
large number of searchable cases are available for each cancer subtype.
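A minimal sketch of the "majority voting" step described above, assuming the search returns the subtype labels of the top-k most similar archived cases (the label codes here are illustrative, not the study's implementation):

```python
from collections import Counter

# Sketch: the diagnoses attached to the top-k retrieved slides vote,
# and the modal subtype becomes the consensus diagnosis.
def majority_consensus(retrieved_labels):
    counts = Counter(retrieved_labels)
    subtype, votes = counts.most_common(1)[0]
    return subtype, votes / len(retrieved_labels)

# e.g., subtype labels of the 5 most similar archived cases (hypothetical)
label, agreement = majority_consensus(["KIRC", "KIRC", "KIRP", "KIRC", "KICH"])
# label == "KIRC", agreement == 0.6
```

The agreement ratio gives a simple confidence signal: the study's observation that consensus needs many searchable cases per subtype corresponds here to the vote pool being large enough for the modal label to be reliable.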