SRTGAN: Triplet Loss based Generative Adversarial Network for Real-World Super-Resolution
Many applications such as forensics, surveillance, satellite imaging, medical
imaging, etc., demand High-Resolution (HR) images. However, obtaining an HR
image is not always possible due to the limitations of optical sensors and
their costs. An alternative solution called Single Image Super-Resolution
(SISR) is a software-driven approach that aims to take a Low-Resolution (LR)
image and obtain the HR image. Most supervised SISR solutions use the
ground-truth HR image as a target and do not use the information provided
in the LR image, which could be valuable. In this work, we introduce a
Triplet Loss-based Generative Adversarial Network, hereafter referred to as
SRTGAN, for the Image Super-Resolution problem on real-world degradations.
We introduce a new
triplet-based adversarial loss function that exploits the information provided
in the LR image by using it as a negative sample. Providing the patch-based
discriminator with access to both HR and LR images allows it to better
differentiate between HR and LR images, thereby improving the adversary.
Further, we propose to fuse the adversarial loss, content loss, perceptual
loss, and quality loss to obtain a Super-Resolution (SR) image with high
perceptual fidelity. We validate the superior performance of the proposed
method over other existing methods on the RealSR dataset in terms of
quantitative and qualitative metrics.

Comment: Affiliated with the Sardar Vallabhbhai National Institute of
Technology (SVNIT), India and Norwegian University of Science and Technology
(NTNU), Norway. Presented at the 7th International Conference on Computer
Vision and Image Processing (CVIP) 202
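
The abstract does not spell out the exact formulation of the triplet-based
adversarial term or the loss fusion. The following is a minimal PyTorch-style
sketch of how such a scheme could look, assuming the LR input is upsampled to
HR size before being scored by the patch discriminator. The function names,
margin, and loss weights are illustrative placeholders, not the authors'
published formulation or values.

```python
import torch
import torch.nn.functional as F

def triplet_adversarial_loss(d_sr, d_hr, d_lr, margin=1.0):
    """Triplet-style adversarial term for the generator (illustrative).

    d_sr, d_hr, d_lr are patch-discriminator scores for the super-resolved
    output (anchor), the ground-truth HR image (positive), and the upsampled
    LR input (negative). The generator is rewarded when its output is scored
    closer to HR than to LR by at least `margin`.
    """
    pos = F.mse_loss(d_sr, d_hr)  # pull SR scores toward HR scores
    neg = F.mse_loss(d_sr, d_lr)  # push SR scores away from LR scores
    return torch.clamp(pos - neg + margin, min=0.0)

def generator_loss(sr, hr, d_sr, d_hr, d_lr, vgg_features, quality_score,
                   w_adv=1e-3, w_content=1.0, w_perc=6e-3, w_qual=1e-3):
    """Weighted fusion of adversarial, content, perceptual, and quality terms.

    `vgg_features` maps an image to deep features (perceptual space) and
    `quality_score` is a no-reference quality predictor; both are stand-ins
    for whichever networks the paper actually uses, and the weights are
    placeholders rather than the published values.
    """
    l_adv = triplet_adversarial_loss(d_sr, d_hr, d_lr)
    l_content = F.l1_loss(sr, hr)                           # pixel fidelity
    l_perc = F.l1_loss(vgg_features(sr), vgg_features(hr))  # perceptual
    l_qual = -quality_score(sr).mean()                      # maximize quality
    return (w_adv * l_adv + w_content * l_content
            + w_perc * l_perc + w_qual * l_qual)
```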