Transfer Learning with Deep Convolutional Neural Network (CNN) for Pneumonia Detection using Chest X-ray
Pneumonia is a life-threatening lung disease caused by either bacterial or
viral infection. It can be fatal if not treated in time, so early diagnosis
of pneumonia is vital. The
aim of this paper is to automatically detect bacterial and viral pneumonia
using digital X-ray images. It reviews advances made toward accurate
pneumonia detection and then presents the methodology
adopted by the authors. Four pre-trained deep convolutional neural networks
(CNNs), AlexNet, ResNet18, DenseNet201, and SqueezeNet, were used for
transfer learning. A total of 5,247 bacterial, viral, and normal chest X-ray
images were preprocessed and then used for the transfer-learning-based
classification task. The authors report three classification schemes: normal
vs. pneumonia, bacterial vs. viral pneumonia, and normal vs. bacterial vs.
viral pneumonia. The classification
accuracies for the normal vs. pneumonia, bacterial vs. viral pneumonia, and
normal vs. bacterial vs. viral pneumonia schemes were 98%, 95%, and 93.3%,
respectively. These accuracies exceed those reported in the literature for
each scheme. The proposed study can therefore help radiologists diagnose
pneumonia faster and can support fast airport screening of pneumonia patients.
Comment: 13 figures, 5 tables. arXiv admin note: text overlap with
arXiv:2003.1314
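The transfer-learning recipe the abstract describes, reusing a network pretrained elsewhere and training only a new classification head, can be sketched with synthetic data. The random "frozen backbone", dimensions, and labels below are illustrative stand-ins, not the paper's AlexNet/ResNet18 setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" backbone: a fixed random ReLU projection standing in for
# the convolutional layers of e.g. ResNet18 (a toy stand-in, not the paper's model).
W_frozen = rng.standard_normal((64, 16)) / np.sqrt(64)

def extract_features(x):
    # The backbone's weights are never updated during transfer learning.
    return np.maximum(x @ W_frozen, 0.0)

# Synthetic "normal vs. pneumonia" data: 64-dim flattened images, binary labels.
X = rng.standard_normal((200, 64))
y = (X[:, 0] > 0).astype(int)

# Only the new classification head is trained (logistic regression via
# gradient descent) -- the essence of the transfer-learning recipe.
F = extract_features(X)
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= 0.5 * F.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((F @ w + b) > 0) == (y == 1))
print(f"training accuracy of the new head: {acc:.2f}")
```

In practice the frozen projection would be a real pretrained CNN and the head a new fully connected layer, but the division of labor, frozen features plus a small trained classifier, is the same.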
Detailed Annotations of Chest X-Rays via CT Projection for Report Understanding
In clinical radiology reports, doctors capture important information about
the patient's health status. They convey their observations from raw medical
imaging data about the inner structures of a patient. As such, formulating
reports requires medical experts to possess wide-ranging knowledge about
anatomical regions with their normal, healthy appearance as well as the ability
to recognize abnormalities. This explicit grasp on both the patient's anatomy
and their appearance is missing from current medical image-processing systems,
as such annotations are especially difficult to gather. This renders the
models narrow experts, e.g. for identifying specific diseases. In this work,
we recover
this missing link by adding human anatomy into the mix and enable the
association of content in medical reports to their occurrence in associated
imagery (medical phrase grounding). To exploit anatomical structures in this
scenario, we present a sophisticated automatic pipeline to gather and integrate
human bodily structures from computed tomography datasets, which we incorporate
in our PAXRay: A Projected dataset for the segmentation of Anatomical
structures in X-Ray data. Our evaluation shows that methods that take advantage
of anatomical information benefit heavily in visually grounding radiologists'
findings, as our anatomical segmentations yield up to a 50% absolute
improvement in grounding results on the OpenI dataset compared to commonly
used region
proposals. The PAXRay dataset is available at
https://constantinseibold.github.io/paxray/.
Comment: 33rd British Machine Vision Conference (BMVC 2022)
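Grounding quality in this setting is typically scored by the overlap between a predicted region and the reference region. A minimal sketch of the standard intersection-over-union measure (the 0.5 hit threshold mentioned in the comment is an assumed convention, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# A grounding prediction "hits" when its IoU with the annotated region
# exceeds a threshold (0.5 is a common, here assumed, choice).
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```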
The Effectiveness of Transfer Learning Systems on Medical Images
Deep neural networks have revolutionized the performance of many machine learning tasks, such as medical image classification and segmentation. Current deep learning (DL) algorithms, specifically convolutional neural networks, are increasingly becoming the methodological choice for medical image analysis. However, training these deep neural networks requires high computational resources and very large amounts of labeled data, which are often expensive and laborious to obtain. Meanwhile, recent studies have shown that the transfer learning (TL) paradigm offers promising solutions to the shortage of labeled medical images. Accordingly, TL enables us to leverage knowledge learned from related data to solve a new problem.
The objective of this dissertation is to examine the effectiveness of TL systems on medical images. First, a comprehensive systematic literature review was performed to provide an up-to-date status of TL systems on medical images; specifically, we proposed a novel conceptual framework to organize the review. Second, a novel DL network was pretrained on natural images and used to evaluate the effectiveness of TL on a very large medical image dataset, specifically chest X-ray images. Lastly, domain adaptation using an autoencoder was evaluated on the medical image dataset, and the results confirmed the effectiveness of TL through fine-tuning strategies.
We make several contributions to TL systems on medical image analysis. Firstly, we present a novel survey of TL on medical images and propose a new conceptual framework to organize the findings. Secondly, we propose a novel DL architecture that improves the learned representations of medical images while mitigating the problem of vanishing gradients. Additionally, we identified the optimal cut-off layer (OCL) that provided the best model performance, and found that the higher layers in the proposed deep model give a better feature representation for our medical image task. Finally, we analyzed the effect of domain adaptation by fine-tuning an autoencoder on our medical images and provide theoretical contributions on the application of the transductive TL approach. The contributions herein reveal several research gaps to motivate future research and contribute to the body of literature in this active research area of TL systems on medical image analysis.
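The optimal cut-off layer (OCL) idea, truncating a frozen pretrained network at the depth whose features best serve the target task, can be illustrated with a toy stand-in network. The random layers, synthetic labels, and nearest-class-mean head are all assumptions; the dissertation's actual architecture is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical frozen "pretrained" network as a stack of random ReLU layers.
layers = [rng.standard_normal((32, 32)) * np.sqrt(2.0 / 32) for _ in range(3)]

def features_at(x, cut):
    """Forward pass truncated after `cut` layers (the candidate cut-off)."""
    for W in layers[:cut]:
        x = np.maximum(x @ W, 0.0)
    return x

# Synthetic target task: 32-dim inputs with a linearly separable label.
X = rng.standard_normal((300, 32))
y = (X.sum(axis=1) > 0).astype(int)

def head_accuracy(F, y):
    # Nearest-class-mean classifier as a cheap stand-in for a trained head.
    mu0, mu1 = F[y == 0].mean(axis=0), F[y == 1].mean(axis=0)
    pred = np.linalg.norm(F - mu1, axis=1) < np.linalg.norm(F - mu0, axis=1)
    return np.mean(pred.astype(int) == y)

# Score every candidate cut-off depth and keep the best-performing one.
scores = {cut: head_accuracy(features_at(X, cut), y) for cut in range(len(layers) + 1)}
ocl = max(scores, key=scores.get)
print("optimal cut-off layer:", ocl, "accuracy:", round(scores[ocl], 2))
```

In a real experiment the candidate depths would be the blocks of a pretrained CNN and the head a classifier evaluated on held-out data, but the search loop over truncation depths is the same.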
Deep Learning Framework for Covid-19 Detection and Severity Classification towards Clinical Decision Support System
Chest CT scans are widely used for COVID-19 diagnosis. Existing methods have focused mainly on detecting the disease; however, severity must also be assessed to decide on a suitable course of action. To this end, we propose a deep learning framework for automatic COVID-19 diagnosis and severity detection. Our framework is based on an enhanced Convolutional Neural Network (CNN) model, which is efficient for medical image analysis. We propose two algorithms to realize the framework. The first, Deep Learning based Automatic COVID-19 Diagnosis (DL-ACD), diagnoses COVID-19 using learned features. The second, Automatic COVID-19 Severity Detection (ACSD), estimates the severity of the disease to help make treatment appropriate. Our framework was evaluated against existing deep learning models and found to have superior performance.
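The two-stage structure, DL-ACD for diagnosis followed by ACSD for severity, can be sketched as a simple pipeline. The functions and the `opacity_score` feature below are hypothetical stand-ins for the paper's trained models:

```python
# Hypothetical stubs for the two stages: a diagnosis model followed, only for
# positive cases, by a severity model (thresholds are invented for illustration).
def diagnose(scan_features):          # stand-in for DL-ACD
    return scan_features["opacity_score"] > 0.5

def severity(scan_features):          # stand-in for ACSD
    s = scan_features["opacity_score"]
    return "severe" if s > 0.8 else "moderate" if s > 0.65 else "mild"

def clinical_decision_support(scan_features):
    # Severity is only assessed once the diagnosis stage flags the scan.
    if not diagnose(scan_features):
        return {"covid": False}
    return {"covid": True, "severity": severity(scan_features)}

print(clinical_decision_support({"opacity_score": 0.9}))
```

The design point is that the severity model never runs on negative scans, which keeps the decision-support output consistent with the diagnosis stage.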
Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models
Advances in deep neural networks (DNNs) have shown tremendous promise in the
medical domain. However, the deep learning tools that help the domain can
also be used against it. Given the prevalence of fraud in the healthcare
domain, it is important to consider the adversarial use of DNNs in manipulating
sensitive data that is crucial to patient healthcare. In this work, we present
the design and implementation of a DNN-based image translation attack on
biomedical imagery. More specifically, we propose Jekyll, a neural style
transfer framework that takes as input a biomedical image of a patient and
translates it to a new image that indicates an attacker-chosen disease
condition. The potential for fraudulent claims based on such generated 'fake'
medical images is significant, and we demonstrate successful attacks on both
X-rays and retinal fundus image modalities. We show that these attacks manage
to mislead both medical professionals and algorithmic detection schemes.
Lastly, we also investigate defensive measures based on machine learning to
detect images generated by Jekyll.
Comment: Published in proceedings of the 5th European Symposium on Security
and Privacy (EuroS&P '20)
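One family of machine-learning defenses of the kind mentioned here looks for statistical artifacts in generated imagery. A toy numpy sketch under the (assumed, not the paper's) premise that generated images have damped high-frequency content:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: rows are flattened 1-D "images"; generated ones are assumed to
# carry less high-frequency variance than real ones (an illustrative premise).
real = rng.standard_normal((100, 64))
fake = rng.standard_normal((100, 64)) * 0.7

def hf_energy(imgs):
    # Mean squared difference of neighbouring "pixels" as a crude
    # high-frequency statistic.
    return np.mean(np.diff(imgs, axis=1) ** 2, axis=1)

# Threshold detector: flag an image as generated when its high-frequency
# energy falls below the midpoint of the two training means.
thresh = (hf_energy(real).mean() + hf_energy(fake).mean()) / 2.0
flagged = hf_energy(fake) < thresh
print(f"detection rate on generated images: {flagged.mean():.2f}")
```

A practical detector would train a classifier on richer forensic features, but the principle, separating real from synthesized images by their statistics, is the same.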
Deep Learning for Distinguishing Normal versus Abnormal Chest Radiographs and Generalization to Unseen Diseases
Chest radiography (CXR) is the most widely-used thoracic clinical imaging
modality and is crucial for guiding the management of cardiothoracic
conditions. The detection of specific CXR findings has been the main focus of
several artificial intelligence (AI) systems. However, the wide range of
possible CXR abnormalities makes it impractical to build specific systems to
detect every possible condition. In this work, we developed and evaluated an AI
system to classify CXRs as normal or abnormal. For development, we used a
de-identified dataset of 248,445 patients from a multi-city hospital network in
India. To assess generalizability, we evaluated our system using 6
international datasets from India, China, and the United States. Of these
datasets, 4 focused on diseases that the AI was not trained to detect: 2
datasets with tuberculosis and 2 datasets with coronavirus disease 2019. Our
results suggest that the AI system generalizes to new patient populations and
abnormalities. In a simulated workflow where the AI system prioritized abnormal
cases, the turnaround time for abnormal cases was reduced by 7-28%. These results
represent an important step towards evaluating whether AI can be safely used to
flag cases in a general setting where previously unseen abnormalities exist.
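The simulated prioritization workflow, re-ordering the reading queue by the model's abnormality score, can be sketched as follows. The worklist, scores, and queue-position metric are invented for illustration; the paper's 7-28% turnaround figure comes from its own simulation, not this toy:

```python
# Each study: (study_id, model_abnormal_probability, truly_abnormal).
worklist = [
    ("A", 0.10, False), ("B", 0.85, True), ("C", 0.20, False),
    ("D", 0.95, True), ("E", 0.40, False),
]

def mean_abnormal_position(order):
    # Average 1-based queue position of the truly abnormal studies,
    # used here as a crude proxy for turnaround time.
    positions = [i for i, (_, _, abn) in enumerate(order, start=1) if abn]
    return sum(positions) / len(positions)

fifo = worklist                                   # baseline: first-in, first-out
triaged = sorted(worklist, key=lambda s: -s[1])   # AI-prioritized queue

before = mean_abnormal_position(fifo)    # B is 2nd and D is 4th -> 3.0
after = mean_abnormal_position(triaged)  # D and B move to the front -> 1.5
print(f"mean queue position of abnormal cases: {before} -> {after}")
```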