155 research outputs found
Use of deep learning to detect cardiomegaly on thoracic radiographs in dogs
The purpose of this study was to develop a computer-aided detection (CAD) device based on convolutional neural networks (CNNs) to detect cardiomegaly from plain radiographs in dogs. Right lateral chest radiographs (n = 1465) were retrospectively selected from archives. The radiographs were classified as having a normal cardiac silhouette (No-vertebral heart scale [VHS]-Cardiomegaly) or an enlarged cardiac silhouette (VHS-Cardiomegaly) based on the breed-specific VHS. The database was divided into a training set (1153 images) and a test set (315 images). The diagnostic accuracy of four different CNN models in the detection of cardiomegaly was calculated using the test set. All tested models had an area under the curve >0.9, demonstrating high diagnostic accuracy. There was a statistically significant difference between Model C and the remaining models (Model A vs. Model C, P = 0.0298; Model B vs. Model C, P = 0.003; Model C vs. Model D, P = 0.0018), but there were no significant differences between the other combinations of models (Model A vs. Model B, P = 0.395; Model A vs. Model D, P = 0.128; Model B vs. Model D, P = 0.373). Convolutional neural networks could therefore assist veterinarians in detecting cardiomegaly in dogs from plain radiographs.
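The ground-truth labelling rule described in the abstract reduces to a threshold comparison against a breed-specific reference limit. A minimal sketch, assuming hypothetical breed limits (the numbers below are illustrative placeholders, not the study's published reference values):

```python
# Hypothetical sketch of the VHS-based labelling rule: a radiograph is
# tagged "VHS-Cardiomegaly" when its measured vertebral heart scale
# exceeds the breed-specific upper reference limit.
# All limit values below are illustrative placeholders.
BREED_VHS_UPPER_LIMIT = {
    "labrador_retriever": 10.8,  # placeholder value
    "boxer": 11.6,               # placeholder value
    "default": 10.5,             # generic fallback (placeholder)
}

def vhs_label(measured_vhs: float, breed: str) -> str:
    """Return the training label for one right-lateral radiograph."""
    limit = BREED_VHS_UPPER_LIMIT.get(breed, BREED_VHS_UPPER_LIMIT["default"])
    return "VHS-Cardiomegaly" if measured_vhs > limit else "No-VHS-Cardiomegaly"
```

The CNNs then learn this binary label directly from pixels, without being given the VHS measurement itself.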
Analysis of radiograph and detection of cardiomegaly
This paper presents a procedure that automatically and reliably determines the presence of heart enlargement, also known as cardiomegaly, from a chest radiograph. We took advantage of several well-established image processing methods and adapted a few of them to our needs. The methods used include image filtering with convolution masks, segmentation by thresholding, and edge detection. The procedure for detecting the heart and chest cavity boundaries, and the corresponding boundary points, using modified and custom image processing methods is presented. The final result is the confirmation or rejection of cardiomegaly.
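The named building blocks (convolution filtering, threshold segmentation, edge detection) can be sketched in pure Python; the Sobel mask and the toy step image below are illustrative stand-ins, not the paper's adapted methods:

```python
# Minimal sketch of the classical pipeline: convolve with a gradient
# mask, then threshold the response to get a binary edge map.

def convolve2d(img, kernel):
    """Valid-mode 2D convolution of a grayscale image with a small mask."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            acc = sum(img[i + u][j + v] * kernel[u][v]
                      for u in range(kh) for v in range(kw))
            row.append(acc)
        out.append(row)
    return out

def threshold(img, t):
    """Binary segmentation: 1 where the response exceeds t, else 0."""
    return [[1 if p > t else 0 for p in row] for row in img]

# Horizontal-gradient (Sobel) mask, an illustrative choice of edge filter.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
```

For example, a vertical intensity step produces a strong gradient response along the boundary, which thresholding turns into a one-pixel-wide edge, the kind of boundary point the procedure collects along the heart and chest cavity outlines.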
Rethinking annotation granularity for overcoming deep shortcut learning: A retrospective study on chest radiographs
Deep learning has demonstrated radiograph screening performances that are
comparable or superior to radiologists. However, recent studies show that deep
models for thoracic disease classification usually show degraded performance
when applied to external data. Such phenomena can be categorized as shortcut
learning, where deep models learn unintended decision rules that fit the
identically distributed training and test sets but fail to generalize to
other distributions. A natural way to alleviate this defect is to explicitly
indicate the lesions and focus the model on learning the intended
features. In this paper, we conduct extensive retrospective experiments to
compare a popular thoracic disease classification model, CheXNet, and a
thoracic lesion detection model, CheXDet. We first showed that the two models
achieved similar image-level classification performance on the internal test
set, with no significant differences under many scenarios. Meanwhile, we found
that incorporating external training data even led to performance degradation
for CheXNet. Then, we compared the models' internal performance on the lesion
localization task and showed that CheXDet achieved significantly better
performance than CheXNet even when given 80% less training data. By further
visualizing the models' decision-making regions, we revealed that CheXNet
learned patterns other than the target lesions, demonstrating its shortcut
learning defect. Moreover, CheXDet achieved significantly better external
performance than CheXNet on both the image-level classification task and the
lesion localization task. Our findings suggest that improving the annotation
granularity used to train deep learning systems is a promising way to advance
deep learning-based diagnosis systems toward clinical usage.
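The shortcut-learning failure mode the paper studies can be illustrated with a fully synthetic toy: a "model" that keys on a spurious acquisition feature fits the internal distribution but collapses on an external site where the correlation vanishes. Nothing below reflects CheXNet or CheXDet internals; the data and decision rule are invented for illustration.

```python
# Toy shortcut learner: predicts "disease" purely from a spurious
# scanner tag that happens to correlate with the label internally.

def shortcut_model(sample):
    """Ignore the image entirely; decide from the acquisition metadata."""
    return 1 if sample["scanner"] == "portable" else 0

def accuracy(model, data):
    return sum(model(s) == s["label"] for s in data) / len(data)

# Internal set: the portable scanner is used almost only for sick patients,
# so the shortcut fits well (18/20 correct).
internal = ([{"scanner": "portable", "label": 1}] * 9
            + [{"scanner": "fixed", "label": 0}] * 9
            + [{"scanner": "fixed", "label": 1}] * 2)

# External site: scanner choice is unrelated to disease, so the shortcut
# degenerates to chance (10/20 correct).
external = ([{"scanner": "portable", "label": 0}] * 5
            + [{"scanner": "portable", "label": 1}] * 5
            + [{"scanner": "fixed", "label": 0}] * 5
            + [{"scanner": "fixed", "label": 1}] * 5)
```

Finer-grained lesion annotations, as the paper argues, make it harder for training to reward such rules, because the supervision points at the intended image regions rather than at whatever separates the labels.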
End-to-End Deep Diagnosis of X-ray Images
In this work, we present an end-to-end deep learning framework for X-ray
image diagnosis. As the first step, our system determines whether a submitted
image is an X-ray or not. After it classifies the type of the X-ray, it runs
the dedicated abnormality classification network. In this work, we focus only
on chest X-rays for abnormality classification. However, the system can be
extended to other X-ray types easily. Our deep learning classifiers are based
on DenseNet-121 architecture. The test set accuracy obtained for 'X-ray or
Not', 'X-ray Type Classification', and 'Chest Abnormality Classification' tasks
are 0.987, 0.976, and 0.947, respectively, resulting in an end-to-end
accuracy of 0.91. To achieve better results than the state of the art on the
'Chest Abnormality Classification' task, we utilize the new RAdam optimizer. We
also use Gradient-weighted Class Activation Mapping for visual explanation of
the results. Our results show the feasibility of a generalized online
projectional radiography diagnosis system.
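If the three cascaded stages are assumed to fail independently (a simplifying assumption, not one stated by the authors), the per-stage accuracies compose multiplicatively, which reproduces the reported end-to-end figure:

```python
from math import prod

def end_to_end_accuracy(stage_accuracies):
    """Approximate cascade accuracy as the product of per-stage accuracies,
    assuming stage errors are independent (an illustrative simplification)."""
    return prod(stage_accuracies)

# Reported stage accuracies: X-ray-or-not, X-ray type, chest abnormality.
approx = end_to_end_accuracy([0.987, 0.976, 0.947])  # ~0.912
```

Rounded to two decimals this matches the 0.91 the abstract reports, which is consistent with the stages being chained so that an error at any stage sinks the whole prediction.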
Can Deep Learning Reliably Recognize Abnormality Patterns on Chest X-rays? A Multi-Reader Study Examining One Month of AI Implementation in Everyday Radiology Clinical Practice
In this study, we developed a deep-learning-based automatic detection
algorithm (DLAD, Carebot AI CXR) to detect and localize seven specific
radiological findings (atelectasis (ATE), consolidation (CON), pleural effusion
(EFF), pulmonary lesion (LES), subcutaneous emphysema (SCE), cardiomegaly
(CMG), pneumothorax (PNO)) on chest X-rays (CXR). We collected 956 CXRs and
compared the performance of the DLAD with that of six individual radiologists
who assessed the images in a hospital setting. The proposed DLAD achieved high
sensitivity (ATE 1.000 (0.624-1.000), CON 0.864 (0.671-0.956), EFF 0.953
(0.887-0.983), LES 0.905 (0.715-0.978), SCE 1.000 (0.366-1.000), CMG 0.837
(0.711-0.917), PNO 0.875 (0.538-0.986)), even when compared to the radiologists
(LOWEST: ATE 0.000 (0.000-0.376), CON 0.182 (0.070-0.382), EFF 0.400
(0.302-0.506), LES 0.238 (0.103-0.448), SCE 0.000 (0.000-0.634), CMG 0.347
(0.228-0.486), PNO 0.375 (0.134-0.691), HIGHEST: ATE 1.000 (0.624-1.000), CON
0.864 (0.671-0.956), EFF 0.953 (0.887-0.983), LES 0.667 (0.456-0.830), SCE
1.000 (0.366-1.000), CMG 0.980 (0.896-0.999), PNO 0.875 (0.538-0.986)). The
findings of the study demonstrate that the proposed DLAD holds potential for
integration into everyday clinical practice as a decision support system,
effectively mitigating the false negative rates associated with junior and
intermediate radiologists.
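The metrics above are per-finding sensitivities with 95% confidence intervals. A sketch of how such figures are produced; the counts are hypothetical, and the Wilson score interval is used as a stdlib-only stand-in, since the abstract does not state which interval method (e.g. exact Clopper-Pearson) was used:

```python
from math import sqrt

def sensitivity(tp, fn):
    """Sensitivity (recall): true positives over all actual positives."""
    return tp / (tp + fn)

def wilson_ci(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion.
    Illustrative stand-in; the paper's interval method is not specified."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half
```

For instance, hypothetical counts of 41 true positives and 8 false negatives give a sensitivity of 0.837, and a perfect 8-of-8 result still carries a wide interval, which is why findings with few positive cases (e.g. SCE) report bounds as loose as 0.366-1.000.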
Weakly Supervised Deep Learning for Thoracic Disease Classification and Localization on Chest X-rays
Chest X-ray is one of the most commonly available and affordable
radiological examinations in clinical practice. Detecting thoracic
diseases on chest X-rays remains a challenging task for machine intelligence,
due to 1) the highly varied appearance of lesion areas on X-rays from patients
with different thoracic diseases and 2) the shortage of accurate pixel-level
annotations by radiologists for model training. Existing machine learning
methods are unable to address the challenge that thoracic diseases usually
occur in localized, disease-specific areas. In this article, we propose a
weakly supervised deep learning framework equipped with squeeze-and-excitation
blocks, multi-map transfer, and max-min pooling for classifying thoracic
diseases as well as localizing suspicious lesion regions. The comprehensive
experiments and discussions are performed on the ChestX-ray14 dataset. Both
numerical and visual results demonstrate the effectiveness of the proposed
model and its superior performance over state-of-the-art pipelines.
Accepted by the ACM BCB 201
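The "max-min pooling" named above can be sketched as reducing each per-class activation map to a single score that rewards a strong peak response while penalising a high background floor; the weighting `alpha` and the linear combination are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch of a max-min pooling readout for weak supervision:
# a class is scored high only if its activation map has a strong peak
# (max) without the whole map being uniformly bright (min).

def max_min_pool(activation_map, alpha=0.7):
    """Pool a 2D per-class activation map to one score:
    alpha * max - (1 - alpha) * min. `alpha` is an illustrative choice."""
    flat = [v for row in activation_map for v in row]
    return alpha * max(flat) - (1 - alpha) * min(flat)
```

Because only image-level labels supervise training, a pooling rule like this pushes the network to concentrate activation in small regions, which is what makes the same maps usable for localizing suspicious lesions.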