Computer-aided diagnosis of lung nodule using gradient tree boosting and Bayesian optimization
We aimed to evaluate computer-aided diagnosis (CADx) system for lung nodule
classification focusing on (i) usefulness of gradient tree boosting (XGBoost)
and (ii) effectiveness of parameter optimization using Bayesian optimization
(Tree Parzen Estimator, TPE) and random search. 99 lung nodules (62 lung
cancers and 37 benign lung nodules) were included from public databases of CT
images. A variant of local binary pattern was used for calculating feature
vectors. Support vector machine (SVM) or XGBoost was trained using the feature
vectors and their labels. TPE or random search was used for parameter
optimization of SVM and XGBoost. Leave-one-out cross-validation was used for
optimizing and evaluating the performance of our CADx system. Performance was
evaluated using area under the curve (AUC) of receiver operating characteristic
analysis. AUC was calculated 10 times, and its average was obtained. The best
averaged AUCs of SVM and XGBoost were 0.850 and 0.896, respectively; both were
obtained using TPE. XGBoost was generally superior to SVM. Optimal parameters
for achieving a high AUC were obtained in fewer trials with TPE than with
random search. In conclusion, XGBoost was better than SVM for classifying lung
nodules, and TPE was more efficient than random search for parameter
optimization.
Comment: 29 pages, 4 figures
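The AUC metric used to evaluate the CADx system above can be computed without any plotting: AUC equals the probability that a randomly chosen positive (cancer) case receives a higher score than a randomly chosen negative (benign) case, the Mann-Whitney U formulation. A minimal pure-Python sketch, with made-up labels and scores purely for illustration:

```python
def roc_auc(labels, scores):
    """ROC AUC for binary labels (1 = positive) and real-valued scores,
    via the Mann-Whitney U statistic: count positive-vs-negative score
    comparisons won, with ties counting as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # tied scores contribute half
    return wins / (len(pos) * len(neg))

# Hypothetical example: 3 cancers, 3 benign nodules
labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
auc = roc_auc(labels, scores)
```

For a leave-one-out evaluation as in the abstract, each case's score would come from a model trained on the other 98 nodules, and the scores would then be pooled into one AUC computation like this.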
Cancer diagnosis using deep learning: A bibliographic review
In this paper, we first describe the basics of the field of cancer diagnosis: the steps of cancer diagnosis followed by the typical classification methods used by doctors, giving readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient. To serve a broad audience, the basic evaluation criteria are also discussed, including the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Because previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed. Artificial intelligence is gaining attention as a way to build better diagnostic tools; in particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how such machine learning operates on medical imaging is provided in this study: pre-processing, image segmentation, and post-processing. The second part of this manuscript describes different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code, allowing interested readers to experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles deep learning models that have been successfully applied to different types of cancer. Given the length of the manuscript, we restrict the discussion to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to give researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis a from-scratch overview of state-of-the-art achievements.
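Several of the evaluation criteria listed above (accuracy, precision, sensitivity, specificity, F1, Jaccard index, Dice coefficient) all derive from the same four binary confusion counts. A small sketch with hypothetical predictions, not code from the review itself:

```python
def binary_metrics(y_true, y_pred):
    """Common evaluation criteria from binary confusion counts
    (labels: 1 = positive class, 0 = negative class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # also called sensitivity
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision,
        "sensitivity": recall,
        "specificity": tn / (tn + fp),
        "f1": 2 * precision * recall / (precision + recall),
        "jaccard": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),  # equals F1 for binary labels
    }

# Illustrative ground truth vs. predictions
m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```

Note that for binary classification the Dice coefficient and F1 score coincide; they diverge only when generalized, e.g., to soft segmentation masks.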
An Interpretable Deep Hierarchical Semantic Convolutional Neural Network for Lung Nodule Malignancy Classification
While deep learning methods are increasingly being applied to tasks such as
computer-aided diagnosis, these models are difficult to interpret, do not
incorporate prior domain knowledge, and are often considered as a "black-box."
The lack of model interpretability hinders them from being fully understood by
target users such as radiologists. In this paper, we present a novel
interpretable deep hierarchical semantic convolutional neural network (HSCNN)
to predict whether a given pulmonary nodule observed on a computed tomography
(CT) scan is malignant. Our network provides two levels of output: 1) low-level
radiologist semantic features, and 2) a high-level malignancy prediction score.
The low-level semantic outputs quantify the diagnostic features used by
radiologists and serve to explain how the model interprets the images in an
expert-driven manner. The information from these low-level tasks, along with
the representations learned by the convolutional layers, is then combined and
used to infer the high-level task of predicting nodule malignancy. This unified
architecture is trained by optimizing a global loss function including both
low- and high-level tasks, thereby learning all the parameters within a joint
framework. Our experimental results using the Lung Image Database Consortium
(LIDC) show that the proposed method not only produces interpretable lung
cancer predictions but also achieves significantly better results compared to
common 3D CNN approaches.
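The global loss described above, combining low- and high-level tasks so all parameters are learned jointly, is structurally a weighted sum of per-task losses. A hedged sketch of that structure; the task weights and probabilities here are invented for illustration and are not the paper's actual configuration:

```python
import math

def bce(y, p, eps=1e-12):
    """Binary cross-entropy for a single label y and predicted probability p."""
    p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def global_loss(malignancy, semantic_tasks, w_high=1.0, w_low=0.5):
    """Joint multi-task objective: a high-level malignancy loss plus a
    weighted sum of low-level semantic-feature losses.
    malignancy: (label, predicted prob); semantic_tasks: list of such pairs."""
    high = bce(*malignancy)
    low = sum(bce(y, p) for y, p in semantic_tasks)
    return w_high * high + w_low * low

# Illustrative: one malignancy prediction, two semantic-feature predictions
loss = global_loss((1, 0.8), [(0, 0.3), (1, 0.7)])
```

Because a single scalar loss is produced, backpropagation through it updates the shared convolutional layers and every task head at once, which is what "learning all the parameters within a joint framework" amounts to.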
Medical imaging analysis with artificial neural networks
Neural networks have been widely reported in the medical imaging research community. We therefore provide a focused literature survey on recent neural network developments in computer-aided diagnosis; medical image segmentation and edge detection for visual content analysis; and medical image registration, including its pre-processing and post-processing, with the aims of raising awareness of how neural networks can be applied to these areas and providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail through inspiring examples illustrating: (i) how a known neural network with a fixed structure and training procedure can be applied to a medical imaging problem; (ii) how medical images can be analysed, processed, and characterised by neural networks; and (iii) how neural networks can be extended to solve problems relevant to medical imaging. The concluding section highlights comparisons among many neural network applications to provide a global view of computational intelligence with neural networks in medical imaging.
Dimensionality Reduction in Deep Learning for Chest X-Ray Analysis of Lung Cancer
The efficiency of several dimensionality reduction techniques, namely lung
segmentation, bone shadow exclusion, and t-distributed stochastic neighbor
embedding (t-SNE) for excluding outliers, is estimated for deep learning
analysis of 2D chest X-ray (CXR) images to help radiologists identify marks of
lung cancer in CXR. Training and validation of a simple convolutional neural
network (CNN) were performed on the open JSRT dataset (dataset #01), JSRT
after bone shadow exclusion - BSE-JSRT (dataset #02), JSRT after lung
segmentation (dataset #03), BSE-JSRT after lung segmentation (dataset #04),
and segmented BSE-JSRT after exclusion of outliers by t-SNE (dataset #05).
The results show that the dataset pre-processed with lung segmentation, bone
shadow exclusion, and t-SNE outlier filtering (dataset #05) achieves the
highest training rate and best accuracy compared with the other pre-processed
datasets.
Comment: 6 pages, 14 figures
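The dataset variants above are produced by composing pre-processing stages, with each stage's output feeding the next. A hedged structural sketch: the stage functions below are hypothetical stand-ins (real implementations operate on pixel arrays), and the paper's t-SNE-based outlier exclusion is replaced here by a trivial score threshold purely to keep the example self-contained:

```python
def lung_segmentation(images):
    # Placeholder stand-in: a real implementation crops to the lung fields.
    return list(images)

def bone_shadow_exclusion(images):
    # Placeholder stand-in: a real implementation suppresses rib shadows.
    return list(images)

def exclude_outliers(images, score, threshold=2.0):
    # Stand-in for t-SNE-based filtering: drop items whose score exceeds
    # a cutoff. The paper instead flags outliers in a t-SNE embedding.
    return [img for img in images if score(img) <= threshold]

def pipeline(images, *stages):
    """Apply pre-processing stages in order, as in datasets #01-#05."""
    for stage in stages:
        images = stage(images)
    return images

# Toy "images" represented by single scores; 5.0 plays the outlier
dataset = [1.0, 1.5, 5.0, 0.5]
filtered = pipeline(dataset, lung_segmentation, bone_shadow_exclusion,
                    lambda imgs: exclude_outliers(imgs, score=lambda x: x))
```

Dataset #01 corresponds to the empty pipeline, #04 to the first two stages, and #05 to all three, which is how the abstract's comparison across variants is organized.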