POCOVID-Net: Automatic Detection of COVID-19 From a New Lung Ultrasound Imaging Dataset (POCUS)
With the rapid development of COVID-19 into a global pandemic, there is an
ever more urgent need for cheap, fast and reliable tools that can assist
physicians in diagnosing COVID-19. Medical imaging such as CT can take a key
role in complementing conventional diagnostic tools from molecular biology,
and several automatic systems based on deep learning techniques have
demonstrated promising performance using CT or X-ray data. Here, we advocate a
more prominent role of point-of-care ultrasound imaging to guide COVID-19
detection. Ultrasound is non-invasive and ubiquitous in medical facilities
around the globe. Our contribution is threefold. First, we gather a lung
ultrasound (POCUS) dataset consisting of 1103 images (654 COVID-19, 277
bacterial pneumonia and 172 healthy controls), sampled from 64 videos. This
dataset was assembled from various online sources, processed specifically for
deep learning models and is intended to serve as a starting point for an
open-access initiative. Second, we train a deep convolutional neural network
(POCOVID-Net) on this 3-class dataset and achieve an accuracy of 89% and, by a
majority vote, a video accuracy of 92%. For detecting COVID-19 in particular,
the model performs with a sensitivity of 0.96, a specificity of 0.79 and
F1-score of 0.92 in a 5-fold cross validation. Third, we provide an open-access
web service (POCOVIDScreen) that is available at: https://pocovidscreen.org.
The website deploys the predictive model, allowing users to perform predictions on
ultrasound lung images. In addition, it grants medical staff the option to
(bulk) upload their own screenings in order to contribute to the growing public
database of pathological lung ultrasound images.
Dataset and code are available from:
https://github.com/jannisborn/covid19_pocus_ultrasound.
NOTE: This preprint is superseded by our paper in Applied Sciences:
https://doi.org/10.3390/app11020672
Comment: 7 pages, 4 figures
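The frame-to-video aggregation described above (a majority vote over per-frame predictions) can be sketched in plain Python; the class names and frame predictions below are illustrative, not taken from the POCUS dataset:

```python
from collections import Counter

def video_label(frame_predictions):
    """Aggregate per-frame class predictions into a single video-level
    label by majority vote (ties broken by first-seen class)."""
    return Counter(frame_predictions).most_common(1)[0][0]

# Illustrative outputs of a 3-class frame classifier for one video:
frames = ["covid19", "covid19", "pneumonia", "covid19", "healthy"]
print(video_label(frames))  # majority class: covid19
```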
COVID-Net: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest X-Ray Images
The COVID-19 pandemic continues to have a devastating effect on the health
and well-being of the global population. A critical step in the fight against
COVID-19 is effective screening of infected patients, with one of the key
screening approaches being radiology examination using chest radiography.
Motivated by this and inspired by the open source efforts of the research
community, in this study we introduce COVID-Net, a deep convolutional neural
network design tailored for the detection of COVID-19 cases from chest X-ray
(CXR) images that is open source and available to the general public. To the
best of the authors' knowledge, COVID-Net is one of the first open source
network designs for COVID-19 detection from CXR images at the time of initial
release. We also introduce COVIDx, an open access benchmark dataset that we
generated, comprising 13,975 CXR images across 13,870 patient cases,
with the largest number of publicly available COVID-19 positive cases to the
best of the authors' knowledge. Furthermore, we investigate how COVID-Net makes
predictions using an explainability method in an attempt to not only gain
deeper insights into critical factors associated with COVID cases, which can
aid clinicians in improved screening, but also audit COVID-Net in a responsible
and transparent manner to validate that it is making decisions based on
relevant information from the CXR images. While by no means a production-ready
solution, our hope is that the open access COVID-Net, along with the
description of how to construct the open source COVIDx dataset, will be
leveraged and built upon by both researchers and citizen data scientists alike to
accelerate the development of highly accurate yet practical deep learning
solutions for detecting COVID-19 cases and accelerate treatment of those who
need it the most.
Comment: 12 pages
Diagnosis of Coronavirus Disease 2019 (COVID-19) with Structured Latent Multi-View Representation Learning
Recently, the outbreak of Coronavirus Disease 2019 (COVID-19) has spread
rapidly across the world. Due to the large number of affected patients and
heavy labor for doctors, computer-aided diagnosis with machine learning
algorithm is urgently needed, and could largely reduce the efforts of
clinicians and accelerate the diagnosis process. Chest computed tomography (CT)
has been recognized as an informative tool for diagnosis of the disease. In
this study, we propose to conduct the diagnosis of COVID-19 with a series of
features extracted from CT images. To fully explore multiple features
describing CT images from different views, a unified latent representation is
learned which can completely encode information from different aspects of
features and is endowed with promising class structure for separability.
Specifically, the completeness is guaranteed with a group of backward neural
networks (each for one type of features), while by using class labels the
representation is enforced to be compact within COVID-19/community-acquired
pneumonia (CAP) and also a large margin is guaranteed between different types
of pneumonia. In this way, our model avoids overfitting better than directly
projecting high-dimensional features onto classes. Extensive experimental
results show that the proposed method outperforms all comparison methods, and
stable performance is observed when varying the amount of training data.
Deep Learning COVID-19 Features on CXR using Limited Training Data Sets
Under the global pandemic of COVID-19, the use of artificial intelligence to
analyze chest X-ray (CXR) image for COVID-19 diagnosis and patient triage is
becoming important. Unfortunately, due to the emergent nature of the COVID-19
pandemic, a systematic collection of the CXR data set for deep neural network
training is difficult. To address this problem, here we propose a patch-based
convolutional neural network approach with a relatively small number of
trainable parameters for COVID-19 diagnosis. The proposed method is inspired by
our statistical analysis of the potential imaging biomarkers of the CXR
radiographs. Experimental results show that our method achieves
state-of-the-art performance and provides clinically interpretable saliency
maps, which are useful for COVID-19 diagnosis and patient triage.
Comment: Accepted for IEEE Trans. on Medical Imaging Special Issue on
Imaging-based Diagnosis of COVID-19
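The patch-based classification idea can be illustrated with a minimal sketch: classify several patches cropped from one CXR, then average the per-patch class probabilities into an image-level decision. The class order and probabilities below are invented for illustration:

```python
def classify_image(patch_probs):
    """Average per-patch class probabilities into an image-level
    prediction; returns (predicted class index, mean probabilities)."""
    n = len(patch_probs)
    k = len(patch_probs[0])
    mean = [sum(p[j] for p in patch_probs) / n for j in range(k)]
    return mean.index(max(mean)), mean

# Illustrative [p_normal, p_other, p_covid19] for three patches:
patches = [[0.2, 0.1, 0.7], [0.1, 0.3, 0.6], [0.4, 0.2, 0.4]]
label, mean = classify_image(patches)  # label 2: covid19
```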
Hypergraph Learning for Identification of COVID-19 with CT Imaging
The coronavirus disease, named COVID-19, has become the largest global public
health crisis since it started in early 2020. CT imaging has been used as a
complementary tool to assist early screening, especially for the rapid
identification of COVID-19 cases from community acquired pneumonia (CAP) cases.
The main challenge in early screening is how to model the confusing cases in
the COVID-19 and CAP groups, with very similar clinical manifestations and
imaging features. To tackle this challenge, we propose an Uncertainty
Vertex-weighted Hypergraph Learning (UVHL) method to identify COVID-19 from CAP
using CT images. In particular, multiple types of features (including regional
features and radiomics features) are first extracted from CT image for each
case. Then, the relationship among different cases is formulated by a
hypergraph structure, with each case represented as a vertex in the hypergraph.
The uncertainty of each vertex is further computed with an uncertainty score
measurement and used as a weight in the hypergraph. Finally, a learning process
of the vertex-weighted hypergraph is used to predict whether a new testing case
belongs to COVID-19 or not. Experiments on a large multi-center pneumonia
dataset, consisting of 2,148 COVID-19 cases and 1,182 CAP cases from five
hospitals, are conducted to evaluate the performance of the proposed method.
Results demonstrate the effectiveness and robustness of our proposed method on
the identification of COVID-19 in comparison to state-of-the-art methods.
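The role of the uncertainty-derived vertex weights can be conveyed with a heavily simplified sketch: score each class by summing the reliability weights of related labeled cases, so that low-uncertainty cases contribute more. The case names and weights below are invented; the actual method operates on a learned hypergraph, not pairwise voting:

```python
def weighted_vote(related_cases, labels, weights):
    """Score each class by the summed reliability weights of related
    labeled cases; a low-uncertainty case contributes a larger weight."""
    scores = {}
    for case in related_cases:
        scores[labels[case]] = scores.get(labels[case], 0.0) + weights[case]
    return max(scores, key=scores.get)

labels  = {"a": "covid19", "b": "cap", "c": "covid19"}
weights = {"a": 0.9, "b": 0.5, "c": 0.3}  # illustrative reliability weights
print(weighted_vote(["a", "b", "c"], labels, weights))  # covid19
```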
Robust Screening of COVID-19 from Chest X-ray via Discriminative Cost-Sensitive Learning
This paper addresses the new problem of automated screening of coronavirus
disease 2019 (COVID-19) based on chest X-rays, which is urgently demanded
toward fast stopping the pandemic. However, robust and accurate screening of
COVID-19 from chest X-rays is still a globally recognized challenge because of
two bottlenecks: 1) imaging features of COVID-19 share some similarities with
other pneumonia on chest X-rays, and 2) the misdiagnosis rate of COVID-19 is
very high, and the cost of misdiagnosis is severe. While a few pioneering
works have made progress, they underestimate both of these crucial
bottlenecks. In this paper, we report our solution, discriminative
cost-sensitive learning (DCSL), designed for clinical settings that need
assisted screening of COVID-19 from chest X-rays. DCSL combines the advantages
of fine-grained
classification and cost-sensitive learning. Firstly, DCSL develops a
conditional center loss that learns deep discriminative representation.
Secondly, DCSL establishes score-level cost-sensitive learning that can
adaptively enlarge the cost of misclassifying COVID-19 examples into other
classes. DCSL is so flexible that it can apply in any deep neural network. We
collected a large-scale multi-class dataset comprised of 2,239 chest X-ray
examples: 239 examples from confirmed COVID-19 cases, 1,000 examples with
confirmed bacterial or viral pneumonia cases, and 1,000 examples of healthy
people. Extensive experiments on the three-class classification show that our
algorithm remarkably outperforms state-of-the-art algorithms. It achieves an
accuracy of 97.01%, a precision of 97%, a sensitivity of 97.09%, and an
F1-score of 96.98%. These results endow our algorithm as an efficient tool for
the fast large-scale screening of COVID-19.
Comment: Under review
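The cost-sensitive idea, penalising the misclassification of COVID-19 examples more heavily than other errors, can be sketched as an expected-cost term added to the cross-entropy loss. The cost matrix and probabilities below are illustrative assumptions, not the paper's actual formulation:

```python
import math

# Illustrative cost matrix COST[true][pred]: misclassifying a COVID-19
# example (index 0) into any other class is penalised more heavily.
COST = [[0.0, 5.0, 5.0],   # true: covid19
        [1.0, 0.0, 1.0],   # true: pneumonia
        [1.0, 1.0, 0.0]]   # true: healthy

def cost_sensitive_loss(probs, true_idx):
    """Cross-entropy on the true class plus the expected
    misclassification cost; a simplified stand-in for DCSL."""
    ce = -math.log(probs[true_idx])
    expected_cost = sum(c * p for c, p in zip(COST[true_idx], probs))
    return ce + expected_cost

# The same 0.6 confidence on the true class is punished harder when the
# remaining probability mass sits on misclassified COVID-19 examples:
loss_covid   = cost_sensitive_loss([0.6, 0.3, 0.1], true_idx=0)
loss_healthy = cost_sensitive_loss([0.1, 0.3, 0.6], true_idx=2)
```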
A Review of Automated Diagnosis of COVID-19 Based on Scanning Images
The pandemic of COVID-19 has caused millions of infections, which has led to
a great loss all over the world, socially and economically. Due to the
false-negative rate and the time-consuming nature of conventional Reverse
Transcription Polymerase Chain Reaction (RT-PCR) tests, diagnosis based on
X-ray and Computed Tomography (CT) images has been widely adopted.
Therefore, researchers of the computer vision area have developed many
automatic diagnosing models based on machine learning or deep learning to
assist radiologists and improve diagnostic accuracy. In this paper, we
present a review of these recently emerging automatic diagnosis models,
covering 70 models proposed between February 14, 2020, and July 21, 2020. We
analyzed the models from the perspective of preprocessing, feature extraction,
classification, and evaluation. Based on the limitation of existing models, we
pointed out that domain adaptation in transfer learning and interpretability
promotion would be possible future directions.
Comment: In ICRAI 2020: 2020 6th International Conference on Robotics and
Artificial Intelligence
Dual-Sampling Attention Network for Diagnosis of COVID-19 from Community Acquired Pneumonia
The coronavirus disease (COVID-19) is rapidly spreading all over the world,
and has infected more than 1,436,000 people in more than 200 countries and
territories as of April 9, 2020. Detecting COVID-19 at early stage is essential
to deliver proper healthcare to the patients and also to protect the uninfected
population. To this end, we develop a dual-sampling attention network to
automatically diagnose COVID-19 from the community acquired pneumonia (CAP) in
chest computed tomography (CT). In particular, we propose a novel online
attention module with a 3D convolutional neural network (CNN) to focus on the
infection regions in lungs when making decisions of diagnoses. Note that there
exists imbalanced distribution of the sizes of the infection regions between
COVID-19 and CAP, partially due to fast progress of COVID-19 after symptom
onset. Therefore, we develop a dual-sampling strategy to mitigate the
imbalanced learning. Our method is evaluated on (to the best of our knowledge)
the largest multi-center CT dataset for COVID-19, from 8 hospitals. In the
training-validation stage, we collect 2186 CT scans from 1588 patients for a
5-fold cross-validation. In the testing stage, we employ another independent
large-scale testing dataset including 2796 CT scans from 2057 patients. Results
show that our algorithm can identify the COVID-19 images with the area under
the receiver operating characteristic curve (AUC) value of 0.944, accuracy of
87.5%, sensitivity of 86.9%, specificity of 90.1%, and F1-score of 82.0%. With
this performance, the proposed algorithm could potentially aid radiologists
with COVID-19 diagnosis from CAP, especially in the early stage of the COVID-19
outbreak.
Comment: accepted by IEEE Transactions on Medical Imaging, 202
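The dual-sampling strategy can be conveyed with a small sketch that mixes uniform sampling with inverse-group-frequency sampling over (for instance) binned infection-region sizes, so that samples from the under-represented group are drawn more often. The grouping and mixing weight below are assumptions for illustration, not the paper's exact scheme:

```python
def dual_sampling_probs(groups, alpha=0.5):
    """Per-sample sampling probabilities mixing a uniform component
    (weight alpha) with an inverse-group-frequency component, so that
    samples from rare groups are drawn more often."""
    n = len(groups)
    freq = {}
    for g in groups:
        freq[g] = freq.get(g, 0) + 1
    inverse = [1.0 / (len(freq) * freq[g]) for g in groups]
    return [alpha / n + (1 - alpha) * w for w in inverse]

# Illustrative: group 0 = small infection regions (rare), group 1 = large
probs = dual_sampling_probs([0, 1, 1, 1])  # rare-group sample drawn more often
```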
M3Lung-Sys: A Deep Learning System for Multi-Class Lung Pneumonia Screening from CT Imaging
To counter the outbreak of COVID-19, the accurate diagnosis of suspected
cases plays a crucial role in timely quarantine, medical treatment, and
preventing the spread of the pandemic. Considering the limited training cases
and resources (e.g., time and budget), we propose a Multi-task Multi-slice Deep
Learning System (M3Lung-Sys) for multi-class lung pneumonia screening from CT
imaging, which only consists of two 2D CNN networks, i.e., slice- and
patient-level classification networks. The former aims to seek the feature
representations from abundant CT slices instead of limited CT volumes, while
for overall pneumonia screening the latter recovers the temporal information
by feature refinement and aggregation across different slices. In
addition to distinguishing COVID-19 from healthy, H1N1, and CAP cases, our
M3Lung-Sys is also able to locate the areas of relevant lesions, without any
pixel-level annotation. To further demonstrate the effectiveness of our model,
we conduct extensive experiments on a chest CT imaging dataset with a total of
734 patients (251 healthy people, 245 COVID-19 patients, 105 H1N1 patients, and
133 CAP patients). The quantitative results with plenty of metrics indicate the
superiority of our proposed model on both slice- and patient-level
classification tasks. More importantly, the generated lesion location maps make
our system interpretable and more valuable to clinicians.
Comment: IEEE Journal of Biomedical and Health Informatics (JBHI), 202
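The slice-to-patient aggregation can be illustrated with a minimal sketch that averages per-slice class probabilities into a patient-level class; the real system uses a learned refinement network, so this is only a conceptual stand-in with invented numbers:

```python
def patient_prediction(slice_probs):
    """Aggregate per-slice class probabilities into a patient-level
    class index by simple averaging."""
    n = len(slice_probs)
    k = len(slice_probs[0])
    mean = [sum(p[j] for p in slice_probs) / n for j in range(k)]
    return mean.index(max(mean))

# Illustrative [healthy, covid19, h1n1, cap] probabilities per slice:
slices = [[0.1, 0.6, 0.2, 0.1],
          [0.2, 0.5, 0.2, 0.1],
          [0.3, 0.3, 0.2, 0.2]]
label = patient_prediction(slices)  # 1: covid19
```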
Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation and Diagnosis for COVID-19
(This paper was submitted as an invited paper to IEEE Reviews in Biomedical
Engineering on April 6, 2020.) The pandemic of coronavirus disease 2019
(COVID-19) is spreading all over the world. Medical imaging such as X-ray and
computed tomography (CT) plays an essential role in the global fight against
COVID-19, whereas the recently emerging artificial intelligence (AI)
technologies further strengthen the power of the imaging tools and help medical
specialists. We hereby review the rapid responses in the community of medical
imaging (empowered by AI) toward COVID-19. For example, AI-empowered image
acquisition can significantly help automate the scanning procedure and also
reshape the workflow with minimal contact to patients, providing the best
protection to the imaging technicians. Also, AI can improve work efficiency by
accurate delineation of infections in X-ray and CT images, facilitating
subsequent quantification. Moreover, the computer-aided platforms help
radiologists make clinical decisions, e.g., for disease diagnosis, tracking,
and prognosis. In this review paper, we thus cover the entire pipeline of
medical imaging and analysis techniques involved with COVID-19, including image
acquisition, segmentation, diagnosis, and follow-up. We particularly focus on
the integration of AI with X-ray and CT, both of which are widely used in the
frontline hospitals, in order to depict the latest progress of medical imaging
and radiology fighting against COVID-19.
Comment: Added journal submission information