G2C: A Generator-to-Classifier Framework Integrating Multi-Stained Visual Cues for Pathological Glomerulus Classification
Pathological glomerulus classification plays a key role in the diagnosis of
nephropathy. Because the differences between subcategories are subtle,
doctors often refer to slides from different staining methods to make
decisions. However, establishing correspondence across various stains is
labor-intensive, which makes it difficult to collect data and to train a
vision-based algorithm to assist nephropathy diagnosis. This paper provides an
alternative solution for integrating multi-stained visual cues for glomerulus
classification. Our approach, named generator-to-classifier (G2C), is a
two-stage framework. Given an input image from a specified stain, several
generators are first applied to estimate its appearances in other staining
methods, and a classifier follows to combine visual cues from different stains
for prediction (whether it is pathological, or which type of pathology it has).
We optimize these two stages in a joint manner. To provide a reasonable
initialization, we pre-train the generators in an unlabeled reference set under
an unpaired image-to-image translation task, and then fine-tune them together
with the classifier. We conduct experiments on a glomerulus type classification
dataset collected by ourselves (there are no publicly available datasets for
this purpose). Although joint optimization slightly harms the authenticity of
the generated patches, it boosts classification performance, suggesting more
effective visual cues are extracted in an automatic way. We also transfer our
model to a public dataset for breast cancer classification, and outperform the
state of the art significantly.
Comment: Accepted by AAAI 201
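The two-stage design described above can be sketched as follows. This is a minimal numpy illustration of the data flow only: the generator and classifier below are hypothetical stand-ins for the trained translation networks and CNN, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_generator(shift):
    # Stand-in for a trained image-to-image translation network that
    # estimates a patch's appearance in another staining method
    # (here modelled as a fixed intensity transform).
    def generator(patch):
        return np.clip(patch + shift, 0.0, 1.0)
    return generator

def classifier(stacked):
    # Stand-in for the CNN classifier: pools the multi-stain stack into
    # one feature per stain and applies a linear layer + softmax.
    features = stacked.mean(axis=(1, 2))                  # (n_stains,)
    w = np.array([[1.5, -1.0, 0.5],
                  [-1.5, 1.0, -0.5]])                     # hypothetical weights
    logits = w @ features
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def g2c_predict(patch, generators):
    # Stage 1: estimate the patch's appearance under the other stains.
    views = [patch] + [g(patch) for g in generators]
    # Stage 2: combine visual cues from all stains and classify.
    stacked = np.stack(views)                             # (n_stains, H, W)
    return classifier(stacked)

patch = rng.random((64, 64))                              # single-stain input
generators = [make_generator(0.1), make_generator(-0.2)]
probs = g2c_predict(patch, generators)
print(probs.shape, float(probs.sum()))
```

In the paper both stages are optimized jointly after pre-training the generators on unpaired translation; here they are fixed functions purely to show how the generated views feed the classifier.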
Abnormality Detection in Mammography using Deep Convolutional Neural Networks
Breast cancer is the most common cancer in women worldwide. The most common
screening technology is mammography. To reduce the cost and workload of
radiologists, we propose a computer aided detection approach for classifying
and localizing calcifications and masses in mammogram images. To improve on
conventional approaches, we apply deep convolutional neural networks (CNN) for
automatic feature learning and classifier building. In computer-aided
mammography, deep CNN classifiers cannot be trained directly on full mammogram
images because of the loss of image details from resizing at input layers.
Instead, our classifiers are trained on labelled image patches and then adapted
to work on full mammogram images for localizing the abnormalities.
State-of-the-art deep convolutional neural networks are compared on their
performance of classifying the abnormalities. Experimental results indicate
that VGGNet achieves the best overall accuracy of 92.53% in classification.
For localizing abnormalities, ResNet is selected for computing class activation
maps because it is ready to be deployed without structural change or further
training. Our approach demonstrates that deep convolutional neural network
classifiers have remarkable localization capabilities even though no
supervision on the location of abnormalities is provided.
Comment: 6 pages
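The class activation maps mentioned above can be computed directly from a trained network's final convolutional features and fully connected weights, which is why no structural change or retraining is needed. A minimal numpy sketch, with random arrays standing in for the real features and weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# Final convolutional feature maps for one image: (channels, H, W).
feature_maps = rng.random((8, 14, 14))
# Weights of the final fully connected layer: (n_classes, channels).
fc_weights = rng.random((2, 8))

def class_activation_map(features, weights, class_idx):
    # The CAM for a class is the channel-wise weighted sum of the last
    # conv layer's feature maps, using that class's FC weights.
    cam = np.tensordot(weights[class_idx], features, axes=1)   # (H, W)
    # Normalise to [0, 1] so the map can be overlaid as a heatmap.
    cam -= cam.min()
    return cam / cam.max()

cam = class_activation_map(feature_maps, fc_weights, class_idx=1)
print(cam.shape, float(cam.min()), float(cam.max()))
```

Upsampling the resulting low-resolution map back to the mammogram's size then highlights the regions that drove the abnormality prediction, giving localization without location supervision.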
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 201
A New Computer-Aided Diagnosis System with Modified Genetic Feature Selection for BI-RADS Classification of Breast Masses in Mammograms
Mammography remains the most prevalent imaging tool for early breast cancer
screening. The language used to describe abnormalities in mammographic reports
is based on the Breast Imaging Reporting and Data System (BI-RADS). Assigning a
correct BI-RADS category to each examined mammogram is a strenuous and
challenging task even for experts. This paper proposes a new and effective
computer-aided diagnosis (CAD) system to classify mammographic masses into four
assessment categories in BI-RADS. The mass regions are first enhanced by means
of histogram equalization and then semiautomatically segmented based on the
region growing technique. A total of 130 handcrafted BI-RADS features are then
extracted from the shape, margin, and density of each mass, together with the
mass size and the patient's age, as mentioned in BI-RADS mammography. Then, a
modified feature selection method based on the genetic algorithm (GA) is
proposed to select the most clinically significant BI-RADS features. Finally, a
back-propagation neural network (BPN) is employed for classification, and its
accuracy serves as the fitness function in the GA. A set of 500 mammogram
images from the
Digital Database for Screening Mammography (DDSM) is used for evaluation. Our
system achieves classification accuracy, positive predictive value, negative
predictive value, and Matthews correlation coefficient of 84.5%, 84.4%, 94.8%,
and 79.3%, respectively. To the best of our knowledge, this is the best result
reported to date for BI-RADS classification of breast masses in mammography,
which makes the proposed system promising for supporting radiologists in
deciding proper patient management based on the automatically assigned BI-RADS
categories.
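The feature-selection step above can be sketched as a standard genetic algorithm over binary feature masks. In this hedged toy version the 130 BI-RADS features become 20 synthetic ones and the BPN fitness is replaced by a simple threshold classifier; only the selection/crossover/mutation loop mirrors the described method.

```python
import numpy as np

rng = np.random.default_rng(2)

N_FEATURES = 20          # stand-in for the 130 handcrafted BI-RADS features
POP_SIZE, N_GEN, MUT_RATE = 12, 15, 0.05

# Synthetic data: only the first 5 features carry signal.
X = rng.random((200, N_FEATURES))
y = (X[:, :5].sum(axis=1) > 2.5).astype(int)

def fitness(mask):
    # Stand-in for BPN classification accuracy: a median-threshold
    # classifier on the mean of the selected features. The paper trains
    # a back-propagation network here instead.
    if not mask.any():
        return 0.0
    score = X[:, mask].mean(axis=1)
    pred = (score > np.median(score)).astype(int)
    return float((pred == y).mean())

population = rng.random((POP_SIZE, N_FEATURES)) < 0.5     # random masks
for _ in range(N_GEN):
    scores = np.array([fitness(m) for m in population])
    order = np.argsort(scores)[::-1]
    parents = population[order[: POP_SIZE // 2]]          # elitist selection
    children = []
    for _ in range(POP_SIZE - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, N_FEATURES)                 # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(N_FEATURES) < MUT_RATE          # bit-flip mutation
        children.append(child ^ flip)
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(m) for m in population])]
print(int(best.sum()), round(fitness(best), 3))
```

The surviving mask plays the role of the "most clinically significant" feature subset; in the actual system each fitness evaluation retrains and scores the BPN.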
Pre and Post-hoc Diagnosis and Interpretation of Malignancy from Breast DCE-MRI
We propose a new method for breast cancer screening from DCE-MRI based on a
post-hoc approach that is trained using weakly annotated data (i.e., labels are
available only at the image level without any lesion delineation). Our proposed
post-hoc method automatically diagnoses the whole volume and, for positive
cases, it localizes the malignant lesions that led to such diagnosis.
Conversely, traditional approaches follow a pre-hoc approach that initially
localises suspicious areas that are subsequently classified to establish the
breast malignancy -- this approach is trained using strongly annotated data
(i.e., it needs a delineation and classification of all lesions in an image).
Another goal of this paper is to establish the advantages and disadvantages of
both approaches when applied to breast screening from DCE-MRI. Relying on
experiments on a breast DCE-MRI dataset that contains scans of 117 patients,
our results show that the post-hoc method is more accurate for diagnosing the
whole volume per patient, achieving an AUC of 0.91, while the pre-hoc method
achieves an AUC of 0.81. However, the performance for localising the malignant
lesions remains challenging for the post-hoc method due to the weakly labelled
dataset employed during training.
Comment: Submitted to Medical Image Analysi
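The volume-level AUC figures that separate the two approaches are rank statistics over patient scores. A small self-contained sketch of that metric, using hypothetical scores (not the paper's data) to show how a post-hoc and a pre-hoc pipeline would be compared:

```python
import numpy as np

def roc_auc(labels, scores):
    # Rank-based AUC: the probability that a positive volume scores
    # higher than a negative one (ties count one half).
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

# Hypothetical volume-level malignancy scores for six patients.
labels = [1, 0, 1, 0, 1, 0]
post_hoc = [0.9, 0.2, 0.8, 0.4, 0.7, 0.1]   # trained on image-level labels
pre_hoc = [0.8, 0.3, 0.4, 0.6, 0.9, 0.2]    # lesion-first pipeline
print(roc_auc(labels, post_hoc), roc_auc(labels, pre_hoc))
```

The same pairwise-ranking computation underlies the reported 0.91 vs. 0.81 comparison, just over the 117-patient test set rather than these illustrative numbers.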