Deep neural network or dermatologist?
Deep learning techniques have demonstrated high accuracy in identifying melanoma
in digitised dermoscopic images. A strength is that these methods are not
constrained by features that are pre-defined by human semantics. A down-side is
that it is difficult to understand the rationale of the model predictions and
to identify potential failure modes. This is a major barrier to adoption of
deep learning in clinical practice. In this paper we ask if two existing local
interpretability methods, Grad-CAM and Kernel SHAP, can shed light on
convolutional neural networks trained in the context of melanoma detection. Our
contributions are (i) we first explore the domain space via a reproducible,
end-to-end learning framework that creates a suite of 30 models, all trained on
a publicly available data set (HAM10000), (ii) we next explore the reliability
of Grad-CAM and Kernel SHAP in this context via basic sanity-check
experiments, and (iii) finally, we investigate a random selection of models from our
suite using Grad-CAM and Kernel SHAP. We show that despite high accuracy, the
models will occasionally assign importance to features that are not relevant to
the diagnostic task. We also show that models of similar accuracy will produce
different explanations as measured by these methods. This work represents first
steps in bridging the gap between model accuracy and interpretability in the
domain of skin cancer classification.
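The Grad-CAM procedure discussed above reduces to a few array operations: the gradients of the class score with respect to a convolutional layer's activations are global-average-pooled into per-channel weights, a weighted sum of the activation maps is formed, and a ReLU keeps only positively contributing regions. A minimal NumPy sketch, where the `activations` and `gradients` arrays are hypothetical stand-ins for values captured from a trained CNN:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap from a conv layer's activations and
    the gradients of the class score w.r.t. those activations.

    activations, gradients: arrays of shape (channels, height, width).
    Returns a (height, width) heatmap, ReLU'd and max-normalised.
    """
    # Global-average-pool the gradients to get one weight per channel.
    weights = gradients.mean(axis=(1, 2))             # shape (channels,)
    # Weighted sum of the activation maps.
    cam = np.tensordot(weights, activations, axes=1)  # (height, width)
    # Keep only features with a positive influence on the class score.
    cam = np.maximum(cam, 0)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 2 channels on a 2x2 map.
acts = np.array([[[1.0, 0.0], [0.0, 0.0]],
                 [[0.0, 1.0], [0.0, 0.0]]])
grads = np.array([[[1.0, 1.0], [1.0, 1.0]],   # mean weight 1.0
                  [[0.0, 0.0], [0.0, 0.0]]])  # mean weight 0.0
heatmap = grad_cam(acts, grads)
```

In a real pipeline the activations and gradients would be captured with forward/backward hooks on the trained network; the sanity-check experiments mentioned above probe whether such heatmaps actually depend on the learned weights.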
Benchmarking Deep Learning Architectures for Predicting Readmission to the ICU and Describing Patients-at-Risk
Objective: To compare different deep learning architectures for predicting
the risk of readmission within 30 days of discharge from the intensive care
unit (ICU). The interpretability of attention-based models is leveraged to
describe patients-at-risk. Methods: Several deep learning architectures making
use of attention mechanisms, recurrent layers, neural ordinary differential
equations (ODEs), and medical concept embeddings with time-aware attention were
trained using publicly available electronic medical record data (MIMIC-III)
associated with 45,298 ICU stays for 33,150 patients. Bayesian inference was
used to compute the posterior over weights of an attention-based model. Odds
ratios associated with an increased risk of readmission were computed for
static variables. Diagnoses, procedures, medications, and vital signs were
ranked according to the associated risk of readmission. Results: A recurrent
neural network, with time dynamics of code embeddings computed by neural ODEs,
achieved the highest average precision of 0.331 (AUROC: 0.739, F1-Score:
0.372). Predictive accuracy was comparable across neural network architectures.
Groups of patients at risk included those suffering from infectious
complications, with chronic or progressive conditions, and for whom standard
medical care was not suitable. Conclusions: Attention-based networks may be
preferable to recurrent networks if an interpretable model is required, at only
marginal cost in predictive accuracy.
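The odds ratios for static variables mentioned above are obtained by exponentiating model coefficients on the log-odds scale. A small illustrative sketch; the variable names and coefficient values are invented for illustration, not taken from the study:

```python
import math

# Hypothetical log-odds coefficients for static variables in a
# readmission model (illustrative values only).
coefficients = {"age_per_decade": 0.18, "elective_admission": -0.45}

# An odds ratio is exp(coefficient): OR > 1 raises the odds of
# readmission, OR < 1 lowers them.
odds_ratios = {name: math.exp(beta) for name, beta in coefficients.items()}
```

With a Bayesian posterior over weights, as in the paper, one would exponentiate samples of each coefficient to obtain a distribution over the odds ratio rather than a point estimate.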
Objective auditory brainstem response classification using machine learning
The objective of this study was to use machine learning, in the form of a deep neural network, to objectively classify paired auditory brainstem response (ABR) waveforms into one of three categories: ‘clear response’, ‘inconclusive’ or ‘response absent’. A deep convolutional neural network was constructed and fine-tuned using stratified 10-fold cross-validation on 190 paired ABR waveforms. The final model was evaluated on a test set of 42 paired waveforms. The full dataset comprised 232 paired ABR waveforms recorded from eight normal-hearing individuals and was obtained from the PhysioBank database. The paired waveforms were independently labelled by two audiological scientists in order to train the network and evaluate its performance. The trained neural network was able to classify paired ABR waveforms with 92.9% accuracy; the sensitivity and specificity were 92.9% and 96.4%, respectively. This neural network may have clinical utility in assisting clinicians with waveform classification for the purpose of hearing threshold estimation. Further evaluation using a large clinically obtained dataset would provide further validation of the clinical potential of the neural network in diagnostic adult testing, newborn testing and automated newborn hearing screening.
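The accuracy, sensitivity, and specificity figures above follow directly from confusion-matrix counts. A quick sketch of how such figures are derived; the counts below are invented for illustration, not the study's actual data:

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall on positives) and specificity
    (recall on negatives) from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # positives correctly flagged
    specificity = tn / (tn + fp)   # negatives correctly flagged
    return accuracy, sensitivity, specificity

# Hypothetical counts on a small 42-waveform test set.
acc, sens, spec = binary_metrics(tp=13, fp=1, tn=27, fn=1)
```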
Skin lesion classification from dermoscopic images using deep learning techniques
The recent emergence of deep learning methods for medical image analysis has enabled the development of intelligent medical imaging-based diagnosis systems that can assist the human expert in making better decisions about a patient’s health. In this paper we focus on the problem of skin lesion classification, particularly early melanoma detection, and present a deep learning-based approach to classifying a dermoscopic image containing a skin lesion as malignant or benign. The proposed solution is built around the VGGNet convolutional neural network architecture and uses the transfer learning paradigm. Experimental results are encouraging: on the ISIC Archive dataset, the proposed method achieves a sensitivity value of 78.66%, which is significantly higher than the current state of the art on that dataset.
Survey on Therapy Prediction using Deep Learning for Pores and Skin Diseases
Introduction: Prediction and detection of skin ailments has long been a difficult and important task for health care specialists. At present, most skin care practitioners use traditional techniques to diagnose disease, which can take a large amount of time. Skin diseases are a serious problem today, arising from environmental factors, socioeconomic factors, incomplete diet, and so on. Identifying a particular skin disease by computer vision is introduced as a novel task; based on the diagnosed disease, a suitable therapy can be suggested. In the proposed study, different deep learning applications are examined together with computer vision tasks to improve the performance of the proposed application. Well-known deep learning algorithms include CNNs (convolutional neural networks), RNNs (recurrent neural networks), etc.
Objective: To diagnose skin disease from dermoscopic images automatically, and to develop automated strategies that improve the accuracy of analysis for psoriasis and multiple other skin diseases.
Methods: Existing techniques employ many machine learning models that have high complexity and require more time for analysis. In this study, different deep learning models are therefore examined to understand the performance differences between them. This paper is a comparative review of skin illnesses related to common skin problems as well as cosmetology. Image selection, segmentation, and classification are the main steps used for detecting disease in oily, dry, and normal skin.
Result: The field of dermatology has seen promising results from studies of various convolutional neural network (CNN) algorithms for classifying skin diseases from clinical images. These studies have concentrated on using the strength of deep learning and computer vision techniques to precisely classify and diagnose different skin conditions from facial images.
Conclusion: A survey of numerous papers is carried out on the basis of the technologies used, the reported accuracy, ethical considerations, the number of illnesses diagnosed, and the datasets. Different existing research methodologies are compared with current deep learning architectures to understand the superior performance of deep learning models. Using deep learning, skin diseases can be predicted; the proposed study reviews different deep learning algorithms combined with computer vision tasks for finding skin disease, and a therapy can be suggested based on the type of disease identified.
A Novel Fuzzy Multilayer Perceptron (F-MLP) for the Detection of Irregularity in Skin Lesion Border Using Dermoscopic Images
Skin lesion border irregularity, which represents the B feature in the ABCD rule, is considered one of the most significant factors in melanoma diagnosis. Since the signs that clinicians rely on in melanoma diagnosis involve subjective judgment, including visual signs such as border irregularity, it is necessary to develop an objective approach to measuring border irregularity. Research into neural networks has increased in recent years, driven mainly by advances in deep learning. Artificial neural networks (ANNs), or multilayer perceptrons, have been shown to perform well in supervised learning tasks. However, such networks usually do not incorporate information pertaining to the ambiguity of the inputs when training the network, which in turn can affect how the weights are updated in the learning process and eventually degrade the performance of the network when applied to test data. In this paper, we propose a fuzzy multilayer perceptron (F-MLP) that takes the ambiguity of the inputs into consideration and subsequently reduces the effects of ambiguous inputs on the learning process. A new optimization function, the fuzzy gradient descent, is proposed to reflect those changes. Moreover, a type-II fuzzy sigmoid activation function is also proposed, which enables finding the range of performance the fuzzy neural network is able to attain. The fuzzy neural network was used to predict skin lesion border irregularity: the lesion was first segmented from the skin, the lesion border extracted, border irregularity measured using a proposed measure vector, and the extracted border irregularity measures used to train the neural network. The proposed approach outperformed most state-of-the-art classification methods in general and its standard neural network counterpart in particular. However, the proposed fuzzy neural network was more time-consuming to train.
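As one concrete illustration of quantifying border irregularity (a standard shape descriptor, not the paper's proposed measure vector), the compactness index perimeter² / (4π·area) equals 1 for a perfect circle and grows as the border becomes more irregular:

```python
import math

def compactness(perimeter, area):
    """Compactness index of a closed contour: 1.0 for a circle,
    larger for irregular borders. A common shape descriptor, shown
    here only to illustrate the idea of an objective border measure."""
    return perimeter ** 2 / (4 * math.pi * area)

# A circle of radius 1: perimeter 2*pi, area pi -> compactness 1.0.
circle = compactness(2 * math.pi, math.pi)
# A square of side 2: perimeter 8, area 4 -> compactness 4/pi (~1.27).
square = compactness(8.0, 4.0)
```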
Analyzing Digital Image by Deep Learning for Melanoma Diagnosis
Image classification is an important task in many medical applications, in order to achieve an adequate diagnosis of different lesions. Melanoma is a frequent kind of skin cancer, most cases of which can be detected by visual exploration. Heterogeneity and database size are the most important difficulties to overcome in order to obtain good classification performance. In this work, a deep learning-based method for accurate classification of wound regions is proposed. Raw images are fed into a convolutional neural network (CNN), producing a probability of being a melanoma or a non-melanoma. AlexNet and GoogLeNet were used due to their well-known effectiveness. Moreover, data augmentation was used to increase the number of input images. Experiments show that the compared models can achieve high performance in terms of mean accuracy with very few data and without any preprocessing.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
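The data augmentation step mentioned above can be illustrated with simple array transforms: flips and 90-degree rotations alone multiply each image into several training examples. A minimal NumPy sketch, independent of the AlexNet/GoogLeNet pipelines used in the paper:

```python
import numpy as np

def augment(image):
    """Return simple geometric variants of an image array
    (horizontal/vertical flips and a 90-degree rotation)."""
    return [
        image,             # original
        np.fliplr(image),  # horizontal flip
        np.flipud(image),  # vertical flip
        np.rot90(image),   # 90-degree counter-clockwise rotation
    ]

# A tiny 2x2 "image": augmentation quadruples the training examples.
img = np.array([[1, 2],
                [3, 4]])
variants = augment(img)
```

Because lesions have no canonical orientation, such label-preserving transforms enlarge the effective training set without collecting new dermoscopic images.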
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Cancer diagnosis using deep learning: A bibliographic review
In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis followed by the typical classification methods used by doctors, to give readers a historical picture of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient. Moreover, with a broad audience in mind, the basic evaluation criteria are also discussed. The criteria include the receiver operating characteristic curve (ROC curve), area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Because the previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed. Artificial intelligence applied to cancer diagnosis is gaining attention as a way to build better diagnostic tools; in particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how such machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DAEs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNN), and multi-instance learning convolutional neural networks (MIL-CNN). For each technique, we provide Python code, to allow interested readers to experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles the successfully applied deep learning models for different types of cancers. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis with from-scratch knowledge of the state-of-the-art achievements.
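Several of the evaluation criteria listed above (precision, sensitivity, accuracy, F1 score, Jaccard index, Dice coefficient) reduce to simple arithmetic on binary confusion-matrix counts. A compact sketch with illustrative counts only:

```python
def classification_metrics(tp, fp, tn, fn):
    """Common evaluation criteria from binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)        # a.k.a. recall
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    jaccard = tp / (tp + fp + fn)       # intersection over union
    dice = 2 * tp / (2 * tp + fp + fn)  # equals the F1 score here
    return {"precision": precision, "sensitivity": sensitivity,
            "accuracy": accuracy, "f1": f1, "jaccard": jaccard,
            "dice": dice}

# Illustrative counts, not results from any cited study.
m = classification_metrics(tp=80, fp=20, tn=90, fn=10)
```

Note that for binary classification the Dice coefficient and the F1 score coincide algebraically, which is why segmentation and classification papers often report them interchangeably.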