3D Tomographic Pattern Synthesis for Enhancing the Quantification of COVID-19
The Coronavirus Disease (COVID-19) has affected 1.8 million people and
resulted in more than 110,000 deaths as of April 12, 2020. Several studies have
shown that tomographic patterns seen on chest Computed Tomography (CT), such as
ground-glass opacities, consolidations, and the crazy-paving pattern, are
correlated with the disease severity and progression. CT imaging can thus
emerge as an important modality for the management of COVID-19 patients.
AI-based solutions can be used to support CT-based quantitative reporting and
make reading efficient and reproducible if quantitative biomarkers, such as the
Percentage of Opacity (PO), can be automatically computed. However, COVID-19
has posed unique challenges to the development of AI, specifically concerning
the availability of appropriate image data and annotations at scale. In this
paper, we propose to use synthetic datasets to augment an existing COVID-19
database to tackle these challenges. We train a Generative Adversarial Network
(GAN) to inpaint COVID-19 related tomographic patterns on chest CTs from
patients without infectious diseases. Additionally, we leverage location priors
derived from manually labeled chest CTs of COVID-19 patients to generate
appropriate abnormality distributions. Synthetic data are used to improve both
lung segmentation and COVID-19 pattern segmentation by adding 20% synthetic
data to the real COVID-19 training data. We collected 2143 chest CTs,
containing 327 COVID-19 positive cases, acquired from 12 sites across 7
countries. By testing on 100 COVID-19 positive and 100 control cases, we show
that synthetic data can help improve both lung segmentation (+6.02% lesion
inclusion rate) and abnormality segmentation (+2.78% Dice coefficient), leading
to an overall more accurate PO computation (+2.82% Pearson coefficient).
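The abstract centers on automatically computing the Percentage of Opacity (PO) from segmentation outputs. A minimal sketch of such a computation, assuming binary lung and opacity masks (the function name and exact definition here are illustrative, not the paper's implementation), could look like:

```python
import numpy as np

def percentage_of_opacity(lung_mask: np.ndarray, opacity_mask: np.ndarray) -> float:
    """Hypothetical PO biomarker: percentage of the segmented lung
    volume covered by abnormal (opacity) voxels."""
    lung = lung_mask.astype(bool)
    # Count only opacity voxels that fall inside the lung segmentation.
    opacity = opacity_mask.astype(bool) & lung
    lung_voxels = lung.sum()
    if lung_voxels == 0:
        return 0.0
    return 100.0 * opacity.sum() / lung_voxels

# Toy 3D example: a 4x4x4 "lung" with one full slice (16 of 64 voxels)
# marked abnormal gives PO = 25%.
lung = np.ones((4, 4, 4), dtype=np.uint8)
opacity = np.zeros((4, 4, 4), dtype=np.uint8)
opacity[0] = 1
print(percentage_of_opacity(lung, opacity))  # 25.0
```

This also illustrates why accurate lung segmentation matters for PO: opacities missed by the lung mask (the lesion inclusion rate the paper reports) never enter the numerator.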