Cancer diagnosis using deep learning: A bibliographic review
In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis followed by the typical classification methods used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point checklist, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although their diagnostic performance is limited. Moreover, to serve all types of audience, the basic evaluation criteria are also discussed. The criteria include the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Because the previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed. Artificial intelligence applied to cancer diagnosis is gaining attention as a way to build better diagnostic tools; in particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code, to allow interested readers to experiment with the cited algorithms on their own diagnostic problems.
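The evaluation criteria listed in this abstract can all be derived from a binary confusion matrix. The sketch below is illustrative only (it is not code from the reviewed paper); the function name and the sample counts are hypothetical:

```python
# Illustrative sketch: the evaluation criteria named above, computed
# from binary confusion-matrix counts (tp, fp, tn, fn).
def binary_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # recall / true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    dice = 2 * tp / (2 * tp + fp + fn)    # equals F1 for binary labels
    jaccard = tp / (tp + fp + fn)         # intersection over union
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision,
            "f1": f1, "dice": dice, "jaccard": jaccard}

# Example with made-up counts: 40 true positives, 10 false positives,
# 45 true negatives, 5 false negatives.
metrics = binary_metrics(40, 10, 45, 5)
```

Note that the Dice coefficient and F1 score coincide for binary classification, which is why segmentation papers often report one or the other interchangeably.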
The third part of this manuscript compiles the deep learning models successfully applied to different types of cancers. Given the length of the manuscript, we restrict ourselves to discussing breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis with a from-scratch overview of state-of-the-art achievements.
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201
Spatio-Temporal Hybrid Fusion of CAE and SWin Transformers for Lung Cancer Malignancy Prediction
The paper proposes a novel hybrid discovery Radiomics framework that
simultaneously integrates temporal and spatial features extracted from non-thin
chest Computed Tomography (CT) slices to predict Lung Adenocarcinoma (LUAC)
malignancy with minimum expert involvement. Lung cancer is the leading cause of
mortality from cancer worldwide and has various histologic types, among which
LUAC has recently been the most prevalent. LUACs are classified as
pre-invasive, minimally invasive, and invasive adenocarcinomas. Timely and
accurate knowledge of lung nodule malignancy leads to a proper treatment
plan and reduces the risk of unnecessary or late surgeries. Currently, chest CT
scan is the primary imaging modality to assess and predict the invasiveness of
LUACs. However, the radiologists' analysis based on CT images is subjective and
suffers from a low accuracy compared to the ground truth pathological reviews
provided after surgical resections. The proposed hybrid framework, referred to
as the CAET-SWin, consists of two parallel paths: (i) the Convolutional
Auto-Encoder (CAE) Transformer path, which extracts and captures informative
features related to inter-slice relations via a modified Transformer
architecture, and (ii) the Shifted Window (SWin) Transformer path, a
hierarchical vision transformer that extracts nodule-related spatial features
from a volumetric CT scan. The extracted temporal features (from the CAET path)
and spatial features (from the SWin path) are then fused through a fusion path
to classify LUACs.
Experimental results on our in-house dataset of 114 pathologically proven
Sub-Solid Nodules (SSNs) demonstrate that the CAET-SWin significantly improves
reliability of the invasiveness prediction task while achieving an accuracy of
82.65%, sensitivity of 83.66%, and specificity of 81.66% using 10-fold
cross-validation.
Comment: arXiv admin note: substantial text overlap with arXiv:2110.0872
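The two-path design described above can be illustrated with a minimal late-fusion sketch: two feature vectors (one temporal, one spatial) are concatenated and scored by a linear classifier. Everything here (the function name, the toy weights, the single-logit classifier) is a hypothetical simplification; the actual CAET-SWin fuses learned deep features from the two transformer paths:

```python
import math

# Illustrative late-fusion sketch (hypothetical, not the paper's code):
# concatenate temporal and spatial feature vectors, then apply a linear
# classifier with a sigmoid to produce an invasiveness score in (0, 1).
def fuse_and_classify(temporal_feats, spatial_feats, weights, bias):
    fused = list(temporal_feats) + list(spatial_feats)  # fusion path
    z = sum(w * x for w, x in zip(weights, fused)) + bias
    return 1.0 / (1.0 + math.exp(-z))                   # sigmoid

# Toy usage: one temporal feature, one spatial feature, made-up weights.
score = fuse_and_classify([1.0], [0.0], [2.0, 1.0], 0.0)
```

In practice the two paths would each emit high-dimensional embeddings and the fusion classifier would be trained end to end, but the concatenate-then-classify structure is the same.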