Abnormality Detection in Mammography using Deep Convolutional Neural Networks
Breast cancer is the most common cancer in women worldwide. The most common
screening technology is mammography. To reduce the cost and workload of
radiologists, we propose a computer aided detection approach for classifying
and localizing calcifications and masses in mammogram images. To improve on
conventional approaches, we apply deep convolutional neural networks (CNN) for
automatic feature learning and classifier building. In computer-aided
mammography, deep CNN classifiers cannot be trained directly on full mammogram
images because of the loss of image details from resizing at input layers.
Instead, our classifiers are trained on labelled image patches and then adapted
to work on full mammogram images for localizing the abnormalities.
State-of-the-art deep convolutional neural networks are compared on their
performance in classifying the abnormalities. Experimental results indicate
that VGGNet achieves the best overall accuracy of 92.53% in classification.
For localizing abnormalities, ResNet is selected for computing class activation
maps because it is ready to be deployed without structural change or further
training. Our approach demonstrates that deep convolutional neural network
classifiers have remarkable localization capabilities even though no
supervision on the location of abnormalities is provided.
Comment: 6 pages
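The class activation map (CAM) technique mentioned above relies on a network that ends in global average pooling followed by a single linear layer, which is why a ResNet-style classifier can produce localization maps without structural change. A minimal numpy sketch of the idea, with illustrative shapes and random weights that are not from the paper:

```python
import numpy as np

# Minimal sketch of a class activation map (CAM), assuming a network that
# ends in global average pooling followed by one linear layer (as in ResNet).
# Shapes and weights here are illustrative, not taken from the paper.
def class_activation_map(feature_maps, fc_weights, class_idx):
    """feature_maps: (C, H, W) activations from the last conv layer.
    fc_weights: (num_classes, C) weights of the final linear layer."""
    # CAM is the weighted sum of feature maps, using the target class's weights.
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()  # normalize to [0, 1] for visualization
    return cam

# Toy usage with random activations
rng = np.random.default_rng(0)
features = rng.random((8, 7, 7))   # 8 channels on a 7x7 spatial grid
weights = rng.random((2, 8))       # 2 classes (e.g., mass vs. calcification)
cam = class_activation_map(features, weights, class_idx=1)
print(cam.shape)  # (7, 7)
```

In practice the resulting map is upsampled to the full mammogram resolution and thresholded to localize the abnormality; no bounding-box supervision is needed because the classifier weights themselves select the relevant feature maps.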
Breast density classification with deep convolutional neural networks
Breast density classification is an essential part of breast cancer
screening. Although a lot of prior work has treated this problem as a
learning task, to our knowledge, all of it used small and clinically
unrealistic data sets for both training and evaluation of the models. In
this work, we explore the limits of this task with a data set coming from over
200,000 breast cancer screening exams. We use this data to train and evaluate a
strong convolutional neural network classifier. In a reader study, we find that
our model can perform this task comparably to a human expert.
Intelligent Breast Cancer Diagnosis with Heuristic-assisted Trans-Res-U-Net and Multiscale DenseNet using Mammogram Images
Breast cancer (BC) significantly contributes to cancer-related mortality in
women, underscoring the criticality of early detection for optimal patient
outcomes. Mammography is a key tool for identifying and diagnosing breast
abnormalities; however, accurately distinguishing malignant mass lesions
remains challenging. To address this issue, we propose a novel deep learning
approach for BC screening utilizing mammography images. Our proposed model
comprises three distinct stages: data collection from established benchmark
sources, image segmentation employing an Atrous Convolution-based Attentive and
Adaptive Trans-Res-UNet (ACA-ATRUNet) architecture, and BC identification via
an Atrous Convolution-based Attentive and Adaptive Multi-scale DenseNet
(ACA-AMDN) model. The hyperparameters within the ACA-ATRUNet and ACA-AMDN
models are optimised using the Modified Mussel Length-based Eurasian
Oystercatcher Optimization (MML-EOO) algorithm. Performance evaluation,
leveraging multiple metrics, is conducted, and a comparative analysis against
conventional methods is presented. Our experimental findings reveal that the
proposed BC detection framework attains superior precision rates in early
disease detection, demonstrating its potential to enhance mammography-based
screening methodologies.
Comment: 22 pages, 17 figures, 4 Tables and Appendix A: Supplementary Material
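Atrous (dilated) convolution, the building block named in both the ACA-ATRUNet and ACA-AMDN architectures, enlarges the receptive field by inserting gaps between kernel taps rather than adding parameters. A minimal 1-D sketch in plain Python, purely illustrative and not the paper's implementation:

```python
# Minimal 1-D atrous (dilated) convolution sketch in plain Python.
# A dilation rate d samples the input every d positions under each kernel
# tap, so the receptive field grows without extra parameters.
def atrous_conv1d(signal, kernel, dilation):
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = []
    for i in range(len(signal) - span + 1):
        out.append(sum(kernel[j] * signal[i + j * dilation] for j in range(k)))
    return out

x = [1, 2, 3, 4, 5, 6, 7]
print(atrous_conv1d(x, [1, 0, -1], dilation=1))  # standard convolution
print(atrous_conv1d(x, [1, 0, -1], dilation=2))  # taps two positions apart
```

Stacking such layers with increasing dilation rates lets a segmentation network aggregate context from large image regions, which is the motivation for using them on high-resolution mammograms.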
Ordinal HyperPlane Loss
This research presents the development of a new framework for analyzing ordered class data, commonly called “ordinal class” data. The focus of the work is the development of classifiers (predictive models) that predict classes from available data. Rating scales, medical classification scales, socio-economic scales, meaningful groupings of continuous data, facial emotional intensity, and facial age estimation are examples of ordinal data for which data scientists may be asked to develop predictive classifiers. It is possible to treat ordinal classification like any other classification problem with more than two classes, but specifying a model with this strategy does not fully utilize the ordering information of the classes. Alternatively, the researcher may treat the ordered classes as though they were continuous values. This strategy imposes the strong assumption that the real “distance” between any two adjacent classes is equal (e.g., that a rating of ‘0’ versus ‘1’ on an 11-point scale is the same distance as ‘9’ versus ‘10’). For Deep Neural Networks (DNNs), the problem of predicting k ordinal classes is typically addressed by performing k-1 binary classifications. These models may be estimated within a single DNN and require an evaluation strategy to determine the class prediction. Another common option is to treat ordinal classes as continuous values for regression and then adjust the cutoff points that represent the class boundaries differentiating one class from another. This research reviews a novel loss function called Ordinal Hyperplane Loss (OHPL) that is specifically designed for data with ordinal classes. OHPLnet has been demonstrated to be a significant advancement in predicting ordinal classes for industry-standard structured datasets. The loss function also enables deep learning techniques to be applied to the ordinal classification problem of unstructured data.
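The k-1 binary-classification scheme described above is commonly implemented as a cumulative encoding: binary task j asks whether the true label exceeds class j. A small self-contained sketch (the decoding rule shown is one common choice, not necessarily the one used in this work):

```python
# Sketch of the k-1 binary-classifier encoding for k ordinal classes.
# Class labels run 0..k-1; the target for binary task j is 1 iff label > j.
def ordinal_encode(label, k):
    return [1 if label > j else 0 for j in range(k - 1)]

def ordinal_decode(binary_preds):
    # One common evaluation strategy: the predicted class is the number of
    # binary tasks answered "yes". Other decoding rules exist.
    return sum(binary_preds)

print(ordinal_encode(2, 5))          # [1, 1, 0, 0]
print(ordinal_decode([1, 1, 0, 0]))  # 2
```

A single DNN can produce all k-1 sigmoid outputs at once, but nothing forces those outputs to be monotone, which is one reason an evaluation strategy is needed to turn them into a single class prediction.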
By minimizing OHPL, a deep neural network learns to map data to an optimal space in which the distance between points and their class centroids is minimized while a nontrivial ordering relationship among classes is maintained. The research reported in this document advances the OHPL loss from a minimally viable loss function to a more complete deep learning methodology. New analysis strategies were developed and tested that improve model performance as well as algorithmic consistency in developing classification models. In the applications chapters, a new algorithm variant is introduced that enables OHPLall to be used when large data records severely limit the batch size available for training the related Deep Neural Network.
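The centroid-plus-ordering idea can be illustrated with a toy loss: pull each embedded point toward its class centroid while penalizing adjacent centroids that are not separated by a margin in ascending order. This numpy sketch captures only the concept; the exact OHPL formulation in the document differs, and all names and values here are illustrative:

```python
import numpy as np

# Illustrative sketch of the idea behind OHPL: embed points near their class
# centroid while keeping centroids of ordered classes in order along a 1-D
# projection. This is NOT the paper's exact loss, only the concept.
def ohpl_like_loss(z, labels, margin=1.0):
    """z: (n,) 1-D embeddings; labels: (n,) integer ordinal classes."""
    classes = np.unique(labels)
    centroids = np.array([z[labels == c].mean() for c in classes])
    # Term 1: squared distance of each point to its own class centroid.
    within = np.mean((z - centroids[np.searchsorted(classes, labels)]) ** 2)
    # Term 2: hinge penalty when adjacent centroids are not separated by at
    # least `margin` in ascending order (enforces a nontrivial ordering).
    gaps = np.diff(centroids)
    ordering = np.sum(np.maximum(0.0, margin - gaps))
    return within + ordering

z = np.array([0.1, 0.2, 1.9, 2.0, 4.1, 4.0])  # tight, correctly ordered
y = np.array([0, 0, 1, 1, 2, 2])
print(ohpl_like_loss(z, y))  # small: clusters are tight and well ordered
```

Note the batch-size sensitivity the text alludes to: the centroids are estimated per batch, so a batch must contain enough examples of each class for them to be meaningful, which is exactly what becomes difficult with large data records.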