487 research outputs found
CTCNet: A CNN-Transformer Cooperation Network for Face Image Super-Resolution
Recently, deep convolutional neural network (CNN)-based face
super-resolution methods have achieved great progress in restoring degraded
facial details by jointly training with facial priors. However, these methods
have some obvious limitations. On the one hand, multi-task joint learning
requires additional annotations on the dataset, and the introduced prior network
significantly increases the computational cost of the model. On the other
hand, the limited receptive field of CNNs reduces the fidelity and
naturalness of the reconstructed facial images, yielding suboptimal
results. In this work, we propose an efficient CNN-Transformer
Cooperation Network (CTCNet) for face super-resolution tasks, which uses the
multi-scale connected encoder-decoder architecture as the backbone.
Specifically, we first devise a novel Local-Global Feature Cooperation Module
(LGCM), which is composed of a Facial Structure Attention Unit (FSAU) and a
Transformer block, to simultaneously promote consistent restoration of local
facial details and the global facial structure. Then, we design an efficient Local
Feature Refinement Module (LFRM) to enhance the local facial structure
information. Finally, to further improve the restoration of fine facial
details, we present a Multi-scale Feature Fusion Unit (MFFU) to adaptively fuse
the features from different stages of the encoder. Comprehensive
evaluations on various datasets demonstrate that the proposed CTCNet
significantly outperforms other state-of-the-art methods.
Comment: 12 pages, 10 figures, 8 tables
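The contrast between a CNN's limited receptive field and a Transformer's global one, which motivates the LGCM design, can be illustrated with a minimal single-head self-attention computation. This is a generic plain-Python sketch of the standard attention mechanism, not the authors' implementation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(tokens):
    """Single-head self-attention with identity Q/K/V projections.

    Every output token is a weighted mix of *all* input tokens, so the
    receptive field is global -- unlike a convolution, which only mixes
    a local neighborhood of fixed size.
    """
    d = len(tokens[0])
    out = []
    for q in tokens:
        # Scaled dot-product scores of this query against every key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        weights = softmax(scores)
        # Weighted sum over all value vectors (convex combination).
        out.append([sum(w * v[j] for w, v in zip(weights, tokens))
                    for j in range(d)])
    return out
```

Because each output is a convex combination over the whole input, a Transformer block in the LGCM can relate, say, the two eyes of a face in one step, while the CNN branch (FSAU) refines local detail.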
A Unified Framework to Super-Resolve Face Images of Varied Low Resolutions
Existing face image super-resolution (FSR) algorithms usually train a
separate model for each specific low input resolution to obtain optimal results.
In contrast, in this work we explore a unified framework that is trained once and
then used to super-resolve input face images of varied low resolutions. For
that purpose, we propose a novel neural network architecture that is composed
of three anchor auto-encoders, one feature weight regressor and a final image
decoder. The three anchor auto-encoders each target optimal FSR at one of three
pre-defined low input resolutions, termed anchor resolutions.
An input face image of an arbitrary low resolution is first up-scaled to the
target resolution by bi-cubic interpolation and then fed to the three
auto-encoders in parallel. The three encoded anchor features are then fused
with weights determined by the feature weight regressor. Finally, the fused
feature is passed to the final image decoder to produce the super-resolution
result. As shown by experiments, the proposed algorithm achieves robust and
state-of-the-art performance over a wide range of low input resolutions with a
single framework. Code and models will be made available after the publication
of this work.
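The fusion step described above can be sketched in plain Python. This is a hypothetical illustration of weighted feature fusion only; the weights here stand in for the output of the paper's feature weight regressor, whose design is not specified in the abstract:

```python
def fuse_anchor_features(features, weights):
    """Weighted fusion of encoded anchor features.

    features: list of three equal-length feature vectors, one per
              anchor auto-encoder.
    weights:  non-negative weights, e.g. predicted by a feature
              weight regressor (assumed here, not the paper's code).
    """
    total = sum(weights)
    norm = [w / total for w in weights]   # normalize so weights sum to 1
    dim = len(features[0])
    return [sum(w * f[j] for w, f in zip(norm, features))
            for j in range(dim)]

# An input close to, say, the middle anchor resolution would plausibly
# receive a dominant weight for that branch:
fused = fuse_anchor_features(
    [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
    [0.1, 0.8, 0.1],
)
```

With these toy numbers, the fused feature lies close to the middle anchor's feature, which matches the intuition that the regressor should favor the anchor nearest the input resolution.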
Face Restoration via Plug-and-Play 3D Facial Priors
State-of-the-art face restoration methods employ deep convolutional neural networks (CNNs) to learn a mapping between degraded and sharp facial patterns by exploring local appearance knowledge. However, most of these methods do not fully exploit facial structure and identity information, and handle only task-specific face restoration (e.g., face super-resolution or deblurring). In this paper, we propose cross-task and cross-model plug-and-play 3D facial priors that explicitly embed sharp facial structures into the network for general face restoration tasks. Our 3D priors are the first to exploit 3D morphable knowledge based on the fusion of parametric descriptions of face attributes (e.g., identity, facial expression, texture, illumination, and face pose). Furthermore, the priors can easily be incorporated into any network and are very efficient at improving performance and accelerating convergence. First, a 3D face rendering branch is set up to obtain 3D priors of salient facial structures and identity knowledge. Second, to better exploit this hierarchical information (i.e., intensity similarity, 3D facial structure, and identity content), a spatial attention module is designed for image restoration. Extensive face restoration experiments, including face super-resolution and deblurring, demonstrate that the proposed 3D priors achieve superior results over state-of-the-art algorithms.
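A spatial attention module of the kind described reweights image features location by location using an attention map, here assumed to be derived from the rendered 3D facial-structure priors. The following is a generic plain-Python sketch of such elementwise gating, not the paper's actual module:

```python
def spatial_attention(features, attention_map):
    """Gate a 2D feature map elementwise by a per-location attention map.

    features:      H x W grid of feature values.
    attention_map: H x W grid of weights in [0, 1], e.g. obtained from
                   rendered 3D facial priors (an assumption here).
    Locations the priors mark as salient (weight near 1) pass through;
    others are suppressed toward zero.
    """
    return [[f * a for f, a in zip(frow, arow)]
            for frow, arow in zip(features, attention_map)]

# Toy 2x2 example: the map keeps the left-top and right-bottom values,
# suppresses or halves the rest.
gated = spatial_attention([[2.0, 4.0], [6.0, 8.0]],
                          [[1.0, 0.0], [0.5, 1.0]])
```

In a real network the attention map would be learned and the features multi-channel, but the gating principle is the same.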
A Survey of Deep Face Restoration: Denoise, Super-Resolution, Deblur, Artifact Removal
Face Restoration (FR) aims to restore High-Quality (HQ) faces from
Low-Quality (LQ) input images, which is a domain-specific image restoration
problem in the low-level computer vision area. Early face restoration
methods mainly relied on statistical priors and degradation models, which
struggle to meet the requirements of real-world applications. In recent
years, face restoration has witnessed great progress after stepping into the
deep learning era. However, few works have studied deep learning-based
face restoration methods systematically. Thus, this paper comprehensively
surveys recent advances in deep learning techniques for face restoration.
Specifically, we first summarize different problem formulations and analyze the
characteristics of face images. Second, we discuss the challenges of face
restoration. Concerning these challenges, we present a comprehensive review of
existing FR methods, including prior-based methods and deep learning-based
methods. Then, we explore developed techniques in the task of FR covering
network architectures, loss functions, and benchmark datasets. We also conduct
a systematic benchmark evaluation on representative methods. Finally, we
discuss future directions, including network designs, metrics, benchmark
datasets, applications, etc. We also provide an open-source repository of all
the discussed methods, available at
https://github.com/TaoWangzj/Awesome-Face-Restoration.
Comment: 21 pages, 19 figures
Dual Associated Encoder for Face Restoration
Restoring facial details from low-quality (LQ) images has remained a
challenging problem due to its ill-posedness induced by various degradations in
the wild. The existing codebook prior mitigates the ill-posedness by leveraging
an autoencoder and learned codebook of high-quality (HQ) features, achieving
remarkable quality. However, existing approaches in this paradigm frequently
depend on a single encoder pre-trained on HQ data for restoring HQ images,
disregarding the domain gap between LQ and HQ images. As a result, the encoding
of LQ inputs may be insufficient, resulting in suboptimal performance. To
tackle this problem, we propose a novel dual-branch framework named DAEFR. Our
method introduces an auxiliary LQ branch that extracts crucial information from
the LQ inputs. Additionally, we incorporate association training to promote
effective synergy between the two branches, enhancing code prediction and
output quality. We evaluate the effectiveness of DAEFR on both synthetic and
real-world datasets, demonstrating its superior performance in restoring facial
details.
Comment: Technical Report
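The codebook prior that DAEFR builds on replaces each encoded feature with its nearest entry in a learned high-quality codebook. A minimal nearest-neighbor lookup, shown below as a generic VQ-style sketch in plain Python (not the authors' code), captures the core operation:

```python
def quantize(feature, codebook):
    """Replace a feature vector with its nearest codebook entry
    (squared Euclidean distance), as in VQ-style codebook priors.

    codebook: list of learned HQ feature vectors (toy values here).
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(codebook, key=lambda code: dist2(feature, code))

# Toy codebook of three learned HQ "codes"; an imperfect LQ encoding
# snaps to the nearest clean entry.
codebook = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]]
nearest = quantize([0.9, 1.1], codebook)
```

The quality of this lookup is exactly what a domain gap hurts: if the LQ encoder places features far from where the HQ-trained codebook expects them, the wrong code is selected, which is the failure mode DAEFR's auxiliary LQ branch and association training aim to reduce.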
Fantômas: Understanding Face Anonymization Reversibility
Face images are a rich source of information that can be used to identify
individuals and infer private information about them. To mitigate this privacy
risk, anonymization techniques apply transformations to clear images to
obfuscate sensitive information while retaining some utility. Although
published with impressive claims, these techniques are sometimes not evaluated
with convincing methodology.
Reversing anonymized images to resemble their real input -- and even be
identified by face recognition approaches -- represents the strongest indicator
for flawed anonymization. Some recent results indeed indicate that this is
possible for some approaches. It is, however, not well understood which
approaches are reversible, and why. In this paper, we provide an exhaustive
investigation into the phenomenon of face anonymization reversibility. Among
other things, we find that 11 out of 15 tested face anonymizations are at least
partially reversible, and we highlight how both reconstruction and inversion
are the underlying processes that make reversal possible.
- …