353 research outputs found
Artificial intelligence/machine learning in respiratory medicine and potential role in asthma and COPD diagnosis
Acknowledgments: We thank Ian Wright, PhD, of Novartis Ireland Ltd, for providing medical writing support in accordance with Good Publication Practice (GPP3) guidelines (http://www.ismpp.org/gpp3).
Deep Learning to Quantify Pulmonary Edema in Chest Radiographs
Purpose: To develop a machine learning model to classify the severity grades
of pulmonary edema on chest radiographs.
Materials and Methods: In this retrospective study, 369,071 chest radiographs
and associated radiology reports from 64,581 (mean age, 51.71 years; 54.51% women)
patients from the MIMIC-CXR chest radiograph dataset were included. This
dataset was split into patients with and without congestive heart failure
(CHF). Pulmonary edema severity labels from the associated radiology reports
were extracted from patients with CHF as four different ordinal levels: 0, no
edema; 1, vascular congestion; 2, interstitial edema; and 3, alveolar edema.
Deep learning models were developed using two approaches: a semi-supervised
model using a variational autoencoder and a pre-trained supervised learning
model using a dense neural network. Receiver operating characteristic curve
analysis was performed on both models.
Results: The area under the receiver operating characteristic curve (AUC) for
differentiating alveolar edema from no edema was 0.99 for the semi-supervised
model and 0.87 for the pre-trained models. Performance of the algorithm was
inversely related to the difficulty in categorizing milder states of pulmonary
edema (shown as AUCs for semi-supervised model and pre-trained model,
respectively): 2 versus 0, 0.88 and 0.81; 1 versus 0, 0.79 and 0.66; 3 versus
1, 0.93 and 0.82; 2 versus 1, 0.69 and 0.73; and, 3 versus 2, 0.88 and 0.63.
Conclusion: Deep learning models were trained on a large chest radiograph
dataset and could grade the severity of pulmonary edema on chest radiographs
with high performance. Comment: The first two authors contributed equally.
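The study above reports one-vs-one AUCs between every pair of ordinal edema grades (e.g. 3 versus 0, 2 versus 1). A minimal sketch of that evaluation protocol, with illustrative synthetic labels and scores rather than the study's data, might look like:

```python
# Hypothetical sketch of pairwise (one-vs-one) AUCs between ordinal
# severity grades 0-3; labels/scores are synthetic, not from MIMIC-CXR.
from itertools import combinations

import numpy as np
from sklearn.metrics import roc_auc_score


def pairwise_aucs(labels, scores, n_grades=4):
    """AUC for every one-vs-one pair of ordinal grades."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    out = {}
    for lo, hi in combinations(range(n_grades), 2):
        # restrict to samples of the two grades being compared
        mask = (labels == lo) | (labels == hi)
        out[(hi, lo)] = roc_auc_score(
            (labels[mask] == hi).astype(int), scores[mask]
        )
    return out


# toy ordinal labels (0 = no edema .. 3 = alveolar edema) and model
# scores that correlate with grade, so wider gaps separate more easily
rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=400)
scores = labels + rng.normal(0, 0.8, size=400)

for (hi, lo), auc in pairwise_aucs(labels, scores).items():
    print(f"{hi} vs {lo}: AUC = {auc:.2f}")
```

As in the abstract, adjacent grades (e.g. 2 versus 1) yield lower AUCs than widely separated ones (e.g. 3 versus 0), since milder distinctions are harder to score apart.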
PadChest: A large chest x-ray image dataset with multi-label annotated reports
We present a labeled large-scale, high resolution chest x-ray dataset for the
automated exploration of medical images along with their associated reports.
This dataset includes more than 160,000 images obtained from 67,000 patients
that were interpreted and reported by radiologists at San Juan Hospital
(Spain) from 2009 to 2017, covering six different position views and
additional information on image acquisition and patient demography. The reports
were labeled with 174 different radiographic findings, 19 differential
diagnoses and 104 anatomic locations organized as a hierarchical taxonomy and
mapped onto standard Unified Medical Language System (UMLS) terminology. Of
these reports, 27% were manually annotated by trained physicians and the
remaining set was labeled using a supervised method based on a recurrent neural
network with attention mechanisms. The labels generated were then validated in
an independent test set achieving a 0.93 Micro-F1 score. To the best of our
knowledge, this is one of the largest public chest x-ray databases suitable for
training supervised models concerning radiographs, and the first to contain
radiographic reports in Spanish. The PadChest dataset can be downloaded from
http://bimcv.cipf.es/bimcv-projects/padchest/
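PadChest validates its automatically generated multi-label annotations with a micro-F1 score of 0.93. A small sketch of how micro-F1 is computed on multi-label data, using toy indicator matrices rather than PadChest labels, could be:

```python
# Hypothetical sketch: micro-F1 between reference multi-label
# annotations and predicted ones; the matrices are illustrative,
# not PadChest data.
import numpy as np
from sklearn.metrics import f1_score

# binary indicator matrices: rows = reports, columns = findings
truth = np.array([[1, 0, 1, 0],
                  [0, 1, 0, 0],
                  [1, 1, 0, 1]])
pred = np.array([[1, 0, 1, 0],
                 [0, 1, 1, 0],
                 [1, 1, 0, 0]])

# micro averaging pools true/false positives across all labels before
# computing F1, so frequent findings weigh more than rare ones
micro_f1 = f1_score(truth, pred, average="micro")
print(f"Micro-F1: {micro_f1:.2f}")  # → Micro-F1: 0.83
```

Here there are 5 true positives, 1 false positive, and 1 false negative pooled over all labels, giving precision = recall = 5/6 and hence F1 = 5/6.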
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed. Comment: Revised survey includes an expanded
discussion section and a reworked introductory section on common deep
architectures. Added missed papers from before Feb 1st 201
Computer-aided diagnosis tool for the detection of cancerous nodules in X-ray images
This thesis involves the development of a computer-aided diagnosis (CAD) tool for the detection of cancerous nodules in X-ray images. Both cancerous and non-cancerous regions appear with little distinction on an X-ray image, so accurate detection requires differentiating cancerous nodules from non-cancerous regions. We developed an artificial neural network (ANN) to make this distinction. ANNs find wide application in medical imaging; they work in a manner loosely analogous to the brain and, when trained appropriately, provide good decision-making criteria. We trained the neural network with the backpropagation algorithm and tested it on images from a database of thoracic radiographs (chest X-rays) of dogs from the LSU Veterinary Medical Center. Giving X-ray images directly as input to the ANN incurs substantial complexity and training time; a pre-processing stage involving image enhancement techniques mitigates this problem. The CAD tool developed in this thesis therefore works in two stages. We pre-process the digitized images obtained after scanning the X-rays (by contrast enhancement, thresholding, filtering, and blob analysis) and then separate the suspected nodule areas (SNAs) from the image by a segmentation process. We then input the enhanced SNAs to the backpropagation-trained ANN. With these enhanced SNAs, the neural network's recognition accuracy improved from 70% (with unprocessed images as inputs) to 83.33%.
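The first stage described above (enhancement, thresholding, blob analysis, segmentation of SNAs) can be sketched in a few lines. This is an illustrative reconstruction, not the thesis's implementation: the function name, threshold, and minimum blob area are assumptions, and a toy array stands in for a digitized radiograph.

```python
# Hypothetical sketch of the SNA-extraction stage: contrast
# enhancement, thresholding, and blob analysis. Parameter values
# (threshold=0.6, min_area=4) are illustrative assumptions.
import numpy as np
from scipy import ndimage


def extract_snas(image, threshold=0.6, min_area=4):
    """Return bounding-box slices of candidate nodule regions."""
    # contrast enhancement: stretch intensities to [0, 1]
    img = (image - image.min()) / (np.ptp(image) + 1e-9)
    # thresholding: keep bright candidate pixels
    mask = img > threshold
    # blob analysis: label connected components, drop tiny blobs
    labeled, n_blobs = ndimage.label(mask)
    boxes = []
    for i, sl in enumerate(ndimage.find_objects(labeled), start=1):
        if (labeled[sl] == i).sum() >= min_area:
            boxes.append(sl)
    return boxes


# toy 16x16 "radiograph" with one bright blob
img = np.zeros((16, 16))
img[5:9, 5:9] = 1.0
print(extract_snas(img))  # one bounding box around the blob
```

Each returned bounding box would then be cropped, enhanced, and fed to the backpropagation-trained ANN in the second stage.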
Deep learning in medical imaging and radiation therapy
Peer reviewed. https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd
COVID-XNet: a custom Deep Learning system to diagnose and locate COVID-19 in chest X-ray images
The COVID-19 pandemic caused by the new coronavirus SARS-CoV-2 has changed the world as we know it. An early diagnosis is crucial in order to prevent new outbreaks and control its rapid spread. Medical imaging techniques, such as X-ray or chest computed tomography, are commonly used for this purpose due to their reliability for COVID-19 diagnosis. Computer-aided diagnosis systems could play an essential role in aiding radiologists in the screening process. In this work, a novel Deep Learning-based system, called COVID-XNet, is presented for COVID-19 diagnosis in chest X-ray images. The proposed system applies a set of preprocessing algorithms to the input images for variability reduction and contrast enhancement, which are then fed to a custom Convolutional Neural Network in order to extract relevant features and perform the classification between COVID-19 and normal cases. The system is trained and validated using a 5-fold cross-validation scheme, achieving an average accuracy of 94.43% and an AUC of 0.988. The output of the system can be visualized using Class Activation Maps, highlighting the main findings for COVID-19 in X-ray images. These promising results indicate that COVID-XNet could be used as a tool to aid radiologists and contribute to the fight against COVID-19. Funding: European Regional Development Fund COFNET TEC2016-77785-P; Andalusian Regional Government (Spain)/FEDER Project PAIDI2020; Andalusian Regional Government/FEDER PROMETEO AT17-5410-US.
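The 5-fold cross-validation scheme described above, reporting average accuracy and AUC across folds, can be sketched as follows. A logistic regression on synthetic features stands in for the custom CNN on preprocessed X-ray images; only the evaluation protocol mirrors the abstract.

```python
# Hypothetical sketch of 5-fold cross-validated accuracy and AUC.
# The data and classifier are stand-ins, not COVID-XNet itself.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# synthetic binary-classification data in place of X-ray features
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

accs, aucs = [], []
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    # stand-in model; the paper uses a custom CNN here
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    accs.append(clf.score(X[test_idx], y[test_idx]))
    aucs.append(
        roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1])
    )

print(f"mean accuracy: {np.mean(accs):.3f}, mean AUC: {np.mean(aucs):.3f}")
```

Stratified folds keep the class ratio stable across splits, which matters when one class (here, COVID-19 positives) may be the minority.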