FSRNet: End-to-End Learning Face Super-Resolution with Facial Priors
Face Super-Resolution (SR) is a domain-specific super-resolution problem.
Specific facial prior knowledge can be leveraged to better super-resolve
face images. We present a novel deep end-to-end trainable Face Super-Resolution
Network (FSRNet), which makes full use of the geometry prior, i.e., facial
landmark heatmaps and parsing maps, to super-resolve very low-resolution (LR)
face images without requiring well-aligned inputs. Specifically, we first construct
a coarse SR network to recover a coarse high-resolution (HR) image. Then, the
coarse HR image is sent to two branches: a fine SR encoder and a prior
information estimation network, which extract image features and estimate
landmark heatmaps/parsing maps, respectively. Both image features and
prior information are sent to a fine SR decoder to recover the HR image. To
further generate realistic faces, we propose the Face Super-Resolution
Generative Adversarial Network (FSRGAN) to incorporate the adversarial loss
into FSRNet. Moreover, we introduce two related tasks, face alignment and
parsing, as the new evaluation metrics for face SR, which address the
inconsistency of classic metrics w.r.t. visual perception. Extensive benchmark
experiments show that FSRNet and FSRGAN significantly outperform the state of
the art for very LR face SR, both quantitatively and qualitatively. Code will be
made available upon publication.
Comment: Chen and Tai contributed equally to this paper
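The coarse-to-fine data flow the abstract describes (coarse SR network, then a fine SR encoder and a prior estimation branch whose outputs are fused by a fine SR decoder) can be sketched as follows. This is a minimal illustration of the pipeline shape using plain NumPy arrays as stand-ins for the learned networks; every function here is a hypothetical placeholder, not the paper's implementation, and the channel counts (68 landmarks, 11 parsing classes) are assumed examples.

```python
import numpy as np

def coarse_sr(lr):
    """Stand-in for the coarse SR network: nearest-neighbor 4x upsampling."""
    return lr.repeat(4, axis=0).repeat(4, axis=1)

def fine_encoder(img):
    """Stand-in for the fine SR encoder: produces a toy feature map."""
    return img.mean(axis=-1, keepdims=True)

def prior_network(img):
    """Stand-in for the prior estimator: landmark heatmaps + parsing maps."""
    h, w, _ = img.shape
    heatmaps = np.zeros((h, w, 68))   # e.g. 68 facial landmark heatmaps
    parsing = np.zeros((h, w, 11))    # e.g. 11 face-parsing classes
    return np.concatenate([heatmaps, parsing], axis=-1)

def fine_decoder(features, priors):
    """Stand-in for the fine SR decoder: fuses features with priors."""
    fused = np.concatenate([features, priors], axis=-1)
    return fused[..., :3]  # toy projection back to an RGB image

def fsrnet(lr):
    coarse = coarse_sr(lr)           # coarse HR image
    feats = fine_encoder(coarse)     # image features
    priors = prior_network(coarse)   # landmark heatmaps / parsing maps
    return coarse, fine_decoder(feats, priors)

lr = np.random.rand(16, 16, 3)       # a 16x16 LR face
coarse, hr = fsrnet(lr)
print(coarse.shape, hr.shape)        # (64, 64, 3) (64, 64, 3)
```

The point of the sketch is the topology: the prior branch sees the coarse HR image, not the raw LR input, and the decoder receives both feature streams.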
Generative Face Completion
In this paper, we propose an effective face completion algorithm using a deep
generative model. Different from well-studied background completion, the face
completion task is more challenging as it often requires generating
semantically new pixels for the missing key components (e.g., eyes and mouths)
that contain large appearance variations. Unlike existing nonparametric
algorithms that search for patches to synthesize, our algorithm directly
generates contents for missing regions based on a neural network. The model is
trained with a combination of a reconstruction loss, two adversarial losses and
a semantic parsing loss, which ensures pixel faithfulness and local-global
content consistency. With extensive experimental results, we demonstrate
qualitatively and quantitatively that our model is able to deal with a large
area of missing pixels in arbitrary shapes and generate realistic face
completion results.
Comment: Accepted by CVPR 201
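The training objective the abstract names (a reconstruction loss, two adversarial losses for local and global realism, and a semantic parsing loss) is a weighted sum; a minimal sketch of such a combination is below. The weights and all helper functions are hypothetical stand-ins, not the paper's actual losses.

```python
import numpy as np

def reconstruction_loss(pred, target):
    """L2 pixel loss between completed and ground-truth face."""
    return float(np.mean((pred - target) ** 2))

def adversarial_loss(critic_score):
    """Toy generator-side GAN loss: -log D(G(x))."""
    return float(-np.log(critic_score + 1e-8))

def parsing_loss(pred_labels, target_labels):
    """Toy semantic parsing loss: fraction of mismatched parsing labels."""
    return float(np.mean(pred_labels != target_labels))

def total_loss(pred, target, local_score, global_score,
               pred_labels, target_labels,
               w_adv=0.001, w_parse=0.1):
    # reconstruction + two adversarial terms (local + global) + parsing
    return (reconstruction_loss(pred, target)
            + w_adv * (adversarial_loss(local_score)
                       + adversarial_loss(global_score))
            + w_parse * parsing_loss(pred_labels, target_labels))
```

The local adversarial term judges only the filled-in region while the global term judges the whole face, which is one common way to get two adversarial losses to cooperate on local-global consistency.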
An Efficient Implementation of the Head-Corner Parser
This paper describes an efficient and robust implementation of a
bi-directional, head-driven parser for constraint-based grammars. This parser
is developed for the OVIS system: a Dutch spoken dialogue system in which
information about public transport can be obtained by telephone.
After a review of the motivation for head-driven parsing strategies, and
head-corner parsing in particular, a non-deterministic version of the
head-corner parser is presented. A memoization technique is applied to obtain a
fast parser. A goal-weakening technique is introduced which greatly improves
average case efficiency, both in terms of speed and space requirements.
I argue in favor of such a memoization strategy with goal-weakening in
comparison with ordinary chart parsers, because such a strategy can be applied
selectively and therefore enormously reduces the space requirements of the
parser, while no practical loss in time-efficiency is observed. On the
contrary, experiments are described in which head-corner and left-corner
parsers implemented with selective memoization and goal weakening outperform
`standard' chart parsers. The experiments include the grammar of the OVIS
system and the Alvey NL Tools grammar.
Head-corner parsing is a mix of bottom-up and top-down processing. Certain
approaches towards robust parsing require purely bottom-up processing.
Therefore, it seems that head-corner parsing is unsuitable for such robust
parsing techniques. However, it is shown how underspecification (which arises
very naturally in a logic programming environment) can be used in the
head-corner parser to allow such robust parsing techniques. A particular robust
parsing model is described which is implemented in OVIS.
Comment: 31 pages, uses cl.st
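The selective memoization the abstract argues for (table only the goal categories worth tabling, rather than every edge as a chart parser does) can be illustrated with a toy recursive recognizer. This is a simplified sketch, not the OVIS head-corner parser; the grammar, the choice of memoized categories, and all names are invented for illustration.

```python
# A toy grammar: category -> list of right-hand sides (tuples of
# categories or terminal words).
GRAMMAR = {
    "s":   [("np", "vp")],
    "np":  [("det", "n"), ("n",)],
    "vp":  [("v", "np"), ("v",)],
    "det": [("the",)],
    "n":   [("train",), ("station",)],
    "v":   [("leaves",)],
}
TERMINALS = {"the", "train", "station", "leaves"}

# Only these goals are memoized ("selective memoization"); the rest are
# re-derived on demand, which keeps the memo table small.
MEMOIZED = {"np", "vp"}

def parse(cat, words, i, memo):
    """Return the set of end positions j such that `cat` spans words[i:j]."""
    if cat in TERMINALS:
        return {i + 1} if i < len(words) and words[i] == cat else set()
    key = (cat, i)
    if cat in MEMOIZED and key in memo:
        return memo[key]            # reuse a previously computed goal
    ends = set()
    for rhs in GRAMMAR.get(cat, []):
        positions = {i}
        for sym in rhs:             # thread positions through the RHS
            positions = {j for p in positions
                           for j in parse(sym, words, p, memo)}
        ends |= positions
    if cat in MEMOIZED:
        memo[key] = ends
    return ends

def recognize(words):
    return len(words) in parse("s", words, 0, {})

print(recognize(["the", "train", "leaves"]))  # True
```

The analogue of goal-weakening would be to coarsen the `key` before table lookup (e.g. drop fine-grained feature constraints from the category), so more goals hit the same table entry.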
Apportioning Development Effort in a Probabilistic LR Parsing System through Evaluation
We describe an implemented system for robust domain-independent syntactic
parsing of English, using a unification-based grammar of part-of-speech and
punctuation labels coupled with a probabilistic LR parser. We present
evaluations of the system's performance along several different dimensions;
these enable us to assess the contribution that each individual part is making
to the success of the system as a whole, and thus prioritise the effort to be
devoted to its further enhancement. Currently, the system is able to parse
around 80% of sentences in a substantial corpus of general text containing a
number of distinct genres. On a random sample of 250 such sentences the system
has a mean crossing bracket rate of 0.71 and recall and precision of 83% and
84% respectively when evaluated against manually-disambiguated analyses.
Comment: 10 pages, 1 Postscript figure. To appear in Proceedings of the
Conference on Empirical Methods in Natural Language Processing, University of
Pennsylvania, May 199
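The crossing-bracket rate and bracket recall/precision used in the evaluation above are computed by comparing the span sets of the system's bracketing against the manually-disambiguated one. A minimal sketch of the standard PARSEVAL-style definitions, with invented example spans:

```python
def crossing(test, gold):
    """Count test spans that overlap some gold span without nesting."""
    def crosses(a, b):
        return (a[0] < b[0] < a[1] < b[1]) or (b[0] < a[0] < b[1] < a[1])
    return sum(any(crosses(t, g) for g in gold) for t in test)

def precision_recall(test, gold):
    """Labeled-free bracket precision and recall over (start, end) spans."""
    correct = len(test & gold)
    return correct / len(test), correct / len(gold)

# Spans are (start, end) word indices; these particular trees are made up.
gold = {(0, 5), (0, 2), (2, 5), (3, 5)}
test = {(0, 5), (0, 2), (2, 5), (1, 3)}

print(crossing(test, gold))         # 1: (1, 3) crosses (0, 2)
print(precision_recall(test, gold)) # (0.75, 0.75)
```

A mean crossing-bracket rate of 0.71, as reported above, is the `crossing` count averaged over all sentences in the sample.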