Cancer diagnosis using deep learning: A bibliographic review
In this paper, we first describe the basics of the field of cancer diagnosis: the steps of cancer diagnosis, followed by the classification methods typically used by doctors, giving readers a historical overview of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered efficient enough to achieve the best performance. To keep the review accessible to a broad audience, the basic evaluation criteria are also discussed: the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Since previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed, and artificial intelligence is gaining attention as a way to build better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. This study outlines the basic framework of how such machine learning operates on medical imaging: pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DAEs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code so that interested readers can experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles the deep learning models that have been successfully applied to different types of cancer. To keep the manuscript to a reasonable length, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who intend to apply deep learning and artificial neural networks to cancer diagnosis a from-scratch overview of the state-of-the-art achievements in the field.
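The evaluation criteria listed in the abstract can all be derived from the binary confusion matrix. As an illustrative sketch (not code from the reviewed paper), the following NumPy function computes accuracy, sensitivity, specificity, precision, F1, Dice coefficient, and Jaccard index from 0/1 label vectors; note that for binary label vectors the Dice coefficient coincides with the F1 score.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Common evaluation criteria for binary diagnostic predictions.

    y_true, y_pred: sequences of 0/1 labels (e.g. per-pixel mask values
    for Dice/Jaccard, or per-case labels for the other metrics).
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),          # recall / true-positive rate
        "specificity": tn / (tn + fp),          # true-negative rate
        "precision":   tp / (tp + fp),
        "f1":          2 * tp / (2 * tp + fp + fn),
        "dice":        2 * tp / (2 * tp + fp + fn),  # equals F1 in the binary case
        "jaccard":     tp / (tp + fp + fn),
    }
```

The ROC curve and AUC additionally require continuous scores rather than hard labels, so they are omitted from this minimal sketch.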
Deep learning cardiac motion analysis for human survival prediction
Motion analysis is used in computer vision to understand the behaviour of
moving objects in sequences of images. Optimising the interpretation of dynamic
biological systems requires accurate and precise motion tracking as well as
efficient representations of high-dimensional motion trajectories so that these
can be used for prediction tasks. Here we use image sequences of the heart,
acquired using cardiac magnetic resonance imaging, to create time-resolved
three-dimensional segmentations using a fully convolutional network trained on
anatomical shape priors. This dense motion model formed the input to a
supervised denoising autoencoder (4Dsurvival), which is a hybrid network
consisting of an autoencoder that learns a task-specific latent code
representation trained on observed outcome data, yielding a latent
representation optimised for survival prediction. To handle right-censored
survival outcomes, our network used a Cox partial likelihood loss function. In
a study of 302 patients, the predictive accuracy (quantified by Harrell's
C-index) was significantly higher (p < 0.0001) for our model, C = 0.73 (95% CI:
0.68-0.78), than for the human benchmark, C = 0.59 (95% CI: 0.53-0.65). This
work demonstrates how a complex computer vision task using high-dimensional
medical image data can efficiently predict human survival.
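Harrell's C-index, the accuracy measure quoted above, is the fraction of comparable patient pairs in which the predicted risk ordering agrees with the observed survival ordering; under right-censoring, only patients with an observed event can anchor a pair. A minimal pure-NumPy sketch (an illustration of the metric, not the 4Dsurvival code; pairs with tied survival times are simply excluded):

```python
import numpy as np

def harrell_c_index(time, event, risk):
    """Harrell's concordance index for right-censored survival data.

    time:  observed time (time of event, or of censoring)
    event: 1 if the event was observed, 0 if the subject was censored
    risk:  predicted risk score (higher risk = shorter expected survival)
    """
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    risk = np.asarray(risk, dtype=float)
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        if event[i] != 1:
            continue  # censored subjects cannot anchor a comparable pair
        for j in range(n):
            if time[j] > time[i]:  # j is known to have outlived i
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0   # predicted ordering matches observed
                elif risk[i] == risk[j]:
                    concordant += 0.5   # tied risks count as half-concordant
    return concordant / comparable
```

A value of 0.5 corresponds to random ordering and 1.0 to perfect risk ranking, which is how the reported C = 0.73 versus the human benchmark C = 0.59 should be read.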
Authentication Based on Periocular Biometrics and Skin Tone
Masks covering key facial features such as the nose and mouth have a major effect on the identification and authentication of masked people from face images. In this paper, we propose to use the periocular region and skin tone to authenticate users with masked faces. We first extract the periocular region of faces with masks, then detect the skin tone for each face. We then train models using the machine-learning algorithms Random Forest, XGBoost, and Decision Tree on the skin-tone information and perform classification on two datasets. Experimental results show that these models achieve good performance.
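The training step described above can be sketched as follows. The abstract does not specify the extracted periocular/skin-tone feature vectors, so this illustrative sketch substitutes a synthetic, well-separated dataset purely to show the fit/predict workflow for one of the named models (a Random Forest) with scikit-learn; all dataset parameters are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for periocular + skin-tone feature vectors: the real feature
# extraction (periocular cropping, skin-tone estimation) is not given in the
# abstract, so we generate a separable synthetic binary-classification task.
X, y = make_classification(n_samples=500, n_features=12, n_informative=8,
                           class_sep=2.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

Swapping in XGBoost or a single Decision Tree changes only the classifier line; the surrounding split/fit/score scaffolding stays the same.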
Denoising Adversarial Autoencoders: Classifying Skin Lesions Using Limited Labelled Training Data
We propose a novel deep learning model for classifying medical images in the
setting where there is a large amount of unlabelled medical data available, but
labelled data is in limited supply. We consider the specific case of
classifying skin lesions as either malignant or benign. In this setting, the
proposed approach -- the semi-supervised, denoising adversarial autoencoder --
is able to utilise vast amounts of unlabelled data to learn a representation
for skin lesions, and small amounts of labelled data to assign class labels
based on the learned representation. We analyse the contributions of both the
adversarial and denoising components of the model and find that the combination
yields superior classification performance in the setting of limited labelled
training data.
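A minimal sketch of the denoising component described above (the adversarial part is omitted here): a single-hidden-layer autoencoder trained by plain gradient descent to reconstruct clean inputs from corrupted ones. The toy data, layer sizes, and hyperparameters are all illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "unlabelled" data: 200 samples of 16-D signals near a 4-D subspace,
# standing in for unlabelled image features.
X = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 16))
X = (X - X.mean(0)) / (X.std(0) + 1e-8)

n_in, n_hidden = 16, 8
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in)); b2 = np.zeros(n_in)

lr, noise_std = 0.1, 0.3
losses = []
for epoch in range(300):
    Xn = X + noise_std * rng.normal(size=X.shape)  # corrupt the input
    H = np.tanh(Xn @ W1 + b1)                      # encoder
    R = H @ W2 + b2                                # linear decoder
    err = R - X                                    # reconstruct the CLEAN input
    losses.append((err ** 2).mean())
    # Manual backprop through the mean-squared-error loss.
    dR = 2 * err / X.size
    dW2, db2 = H.T @ dR, dR.sum(0)
    dH = (dR @ W2.T) * (1 - H ** 2)                # tanh derivative
    dW1, db1 = Xn.T @ dH, dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

In the semi-supervised setting of the paper, the learned hidden code H would then feed a classifier trained on the small labelled subset; the adversarial term additionally shapes the latent distribution, which this sketch does not attempt.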
A new Stack Autoencoder: Neighbouring Sample Envelope Embedded Stack Autoencoder Ensemble Model
The stack autoencoder (SAE), as a representative deep network, has distinctive
and excellent performance in feature learning and has received extensive
attention from researchers. However, existing deep SAEs focus on the original samples without
considering the hierarchical structural information between samples. To address
this limitation, this paper proposes a new SAE model, the neighbouring envelope
embedded stack autoencoder ensemble (NE_ESAE). First, a neighbouring sample
envelope learning mechanism (NSELM) is proposed to preprocess the input of the
SAE. NSELM constructs sample pairs by combining neighbouring samples. In
addition, NSELM constructs multilayer sample spaces through multilayer iterative mean
clustering, which groups similar samples and generates layers of
envelope samples with hierarchical structural information. Second, an embedded
stack autoencoder (ESAE) is proposed and trained in each layer of the sample
space; it incorporates the original samples both during training and in the
network structure, thereby better capturing the relationship between original
feature samples and deep feature samples. Third, feature reduction and base
classifiers are applied to each layer of envelope samples, producing
classification results for every layer. Finally, the classification
results of the layers of envelope sample space are fused through the ensemble
mechanism. In the experimental section, the proposed algorithm is validated
with over ten representative public datasets. The results show that our method
has significantly better performance than existing traditional feature learning
methods and representative deep autoencoders.
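The abstract does not detail the ensemble mechanism that fuses the per-layer classification results. As one common illustrative choice (an assumption, not necessarily the NE_ESAE mechanism), the following sketch fuses the layer-wise predictions by majority vote:

```python
import numpy as np

def majority_vote(layer_predictions):
    """Fuse per-layer class predictions by majority vote.

    layer_predictions: array-like of shape (n_layers, n_samples) holding the
    integer class label each layer's base classifier assigned to each sample.
    Returns one fused label per sample.
    """
    P = np.asarray(layer_predictions)
    n_classes = int(P.max()) + 1
    fused = np.empty(P.shape[1], dtype=int)
    for j in range(P.shape[1]):
        # Count votes for sample j across all layers; ties go to the
        # lowest-numbered class (np.bincount(...).argmax() behaviour).
        fused[j] = np.bincount(P[:, j], minlength=n_classes).argmax()
    return fused
```

A natural refinement, also common in ensemble methods, is to weight each layer's vote by its validation accuracy rather than counting votes equally.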
- …