Restoration of poor-quality images corrupted by a blend of artifacts plays a
vital role in reliable diagnosis. Existing studies have focused on specific
restoration problems, such as image deblurring, denoising, and exposure
correction, where a strong assumption is usually made on the artifact type and
severity. As a pioneering study in blind X-ray restoration, we propose a joint
model for generic image restoration and classification: Restore-to-Classify
Generative Adversarial Networks (R2C-GANs). Such a jointly optimized model
keeps any disease patterns intact during restoration and therefore naturally
leads to higher diagnostic performance thanks to the improved X-ray image
quality. To accomplish this objective, we define the restoration task as an
image-to-image translation problem from the poor-quality domain, containing
noisy, blurry, or over-/under-exposed images, to the high-quality image
domain. The proposed R2C-GAN model learns forward and inverse transforms
between the two domains using unpaired training samples, while the joint
classification objective preserves the disease label during restoration.
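
As an illustration, the following PyTorch-style sketch shows one plausible form
of the joint generator objective: adversarial terms for each translation
direction, cycle-consistency terms enabling unpaired training, and
classification terms tying the predicted label to the input. The module
interfaces (G_PQ, G_QP, D_Q, D_P), the least-squares adversarial form, and the
loss weights are illustrative assumptions, not the authors' exact formulation.

    import torch
    import torch.nn.functional as F

    def r2c_generator_loss(G_PQ, G_QP, D_Q, D_P,
                           x_poor, y_poor, x_good, y_good,
                           lam_cyc=10.0, lam_cls=1.0):
        """One generator update of a cycle-consistent restore-to-classify model.

        Assumed interfaces: G_PQ maps poor -> good quality, G_QP maps
        good -> poor; each returns (translated image, class logits).
        D_Q / D_P are discriminators on the two image domains.
        """
        # Forward transform: restore the poor-quality image, predict its class.
        fake_good, logits_poor = G_PQ(x_poor)
        # Inverse transform: degrade the good-quality image, predict its class.
        fake_poor, logits_good = G_QP(x_good)

        # Least-squares adversarial terms: fool each domain discriminator.
        d_fake_good = D_Q(fake_good)
        d_fake_poor = D_P(fake_poor)
        adv = F.mse_loss(d_fake_good, torch.ones_like(d_fake_good)) \
            + F.mse_loss(d_fake_poor, torch.ones_like(d_fake_poor))

        # Cycle consistency: translating forth and back must recover the
        # input, which is what permits training with unpaired samples.
        rec_poor, _ = G_QP(fake_good)
        rec_good, _ = G_PQ(fake_poor)
        cyc = F.l1_loss(rec_poor, x_poor) + F.l1_loss(rec_good, x_good)

        # Joint classification: the disease label must survive translation.
        cls = F.cross_entropy(logits_poor, y_poor) \
            + F.cross_entropy(logits_good, y_good)

        return adv + lam_cyc * cyc + lam_cls * cls
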
Moreover, R2C-GANs are equipped with operational layers/neurons, which reduce
the network depth while further boosting both restoration and classification
performance.
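
To make the notion of an operational layer concrete, the minimal sketch below
approximates each neuron's nonlinearity with a truncated Maclaurin series,
summing separate convolutions of the input raised to successive powers. The
class name, the order Q = 3, and the kernel size are illustrative assumptions
under a self-organized operational neuron formulation.

    import torch
    import torch.nn as nn

    class OperationalLayer(nn.Module):
        """Sketch of an operational layer with generative neurons.

        Replaces the purely linear convolution with a learnable Q-th order
        Maclaurin-series approximation of a nonlinear transform:
            y = sum_{q=1..Q} Conv_q(x ** q)
        """
        def __init__(self, in_ch, out_ch, q=3, kernel_size=3):
            super().__init__()
            self.convs = nn.ModuleList(
                nn.Conv2d(in_ch, out_ch, kernel_size,
                          padding=kernel_size // 2)
                for _ in range(q)
            )

        def forward(self, x):
            # Sum convolution responses of the power terms x, x^2, ..., x^Q.
            return sum(conv(x ** (i + 1))
                       for i, conv in enumerate(self.convs))

Because each neuron carries higher-order terms, a single operational layer is
more expressive than a plain convolutional layer, which is how a shallower
network can match or exceed the performance of a deeper conventional one.
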
The proposed joint model is extensively evaluated on the QaTa-COV19 dataset
for Coronavirus Disease 2019 (COVID-19) classification. The proposed
restoration approach achieves an F1-score of over 90%, which is significantly
higher than the performance of any compared deep model. Moreover, in the
qualitative analysis, the restoration performance of R2C-GANs is confirmed by
a group of medical doctors. We share the software implementation at
https://github.com/meteahishali/R2C-GAN