Most existing video face super-resolution (VFSR) methods are trained and
evaluated on VoxCeleb1, a dataset designed specifically for speaker
identification whose frames are of low quality. As a consequence, VFSR
models trained on this dataset cannot produce visually pleasing results.
In this paper, we develop an automatic and scalable
pipeline to collect a high-quality video face dataset (VFHQ), which contains
over 16,000 high-fidelity clips of diverse interview scenarios. To verify the
necessity of VFHQ, we further conduct experiments and demonstrate that VFSR
models trained on our VFHQ dataset can generate results with sharper edges and
finer textures than those trained on VoxCeleb1. In addition, we show that
temporal information plays a pivotal role in eliminating video consistency
issues as well as in further improving visual quality. Based on VFHQ, we also
conduct a benchmarking study of several state-of-the-art algorithms under
bicubic and blind settings. See our project page:
https://liangbinxie.github.io/projects/vfhq