Conversation Style Transfer using Few-Shot Learning
Conventional text style transfer approaches for natural language focus on
sentence-level style transfer without considering contextual information, and
the style is described with attributes (e.g., formality). When applying style
transfer on conversations such as task-oriented dialogues, existing approaches
suffer from these limitations as context can play an important role and the
style attributes are often difficult to define in conversations. In this paper,
we introduce conversation style transfer as a few-shot learning problem, where
the model learns to perform style transfer by observing only the target-style
dialogue examples. We propose a novel in-context learning approach to solve the
task with style-free dialogues as a pivot. Human evaluation shows that by
incorporating multi-turn context, the model is able to match the target style
while having better appropriateness and semantic correctness compared to
utterance-level style transfer. Additionally, we show that conversation style
transfer can also benefit downstream tasks. Results on multi-domain intent
classification tasks show improvement in F1 scores after transferring the style
of training data to match the style of test data.
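As a rough illustration of treating conversation style transfer as few-shot in-context learning with a style-free dialogue as a pivot, the sketch below prompts a language model in two stages: first neutralizing the source dialogue, then rewriting the neutral version in the target style using only a handful of target-style example dialogues. The prompt formats and the `generate` helper are assumptions for illustration, not the paper's actual implementation.

```python
# Illustrative sketch only: two-stage in-context style transfer with a
# style-free (neutral) dialogue as the pivot. `generate(prompt)` stands in
# for any large language model completion call.

def generate(prompt: str) -> str:
    """Placeholder for an LLM completion call; plug in any text-generation API."""
    raise NotImplementedError

def to_style_free(dialogue: list[str],
                  pivot_examples: list[tuple[list[str], list[str]]]) -> list[str]:
    """Stage 1: rewrite the source dialogue into a style-free form, prompted
    with a few (styled dialogue, neutral dialogue) example pairs."""
    shots = "\n\n".join(
        "Styled dialogue:\n" + "\n".join(src) + "\nNeutral dialogue:\n" + "\n".join(tgt)
        for src, tgt in pivot_examples
    )
    prompt = (shots + "\n\nStyled dialogue:\n" + "\n".join(dialogue)
              + "\nNeutral dialogue:\n")
    return generate(prompt).splitlines()

def to_target_style(neutral: list[str],
                    target_examples: list[list[str]]) -> list[str]:
    """Stage 2: rewrite the neutral dialogue in the target style, observing
    only a few target-style dialogues as in-context examples."""
    shots = "\n\n".join("Target-style dialogue:\n" + "\n".join(d) for d in target_examples)
    prompt = (shots + "\n\nRewrite the following neutral dialogue in the same "
              "style, keeping each turn's meaning:\nNeutral dialogue:\n"
              + "\n".join(neutral) + "\nTarget-style dialogue:\n")
    return generate(prompt).splitlines()
```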
Adversarial nets with perceptual losses for text-to-image synthesis
Recent approaches in generative adversarial networks (GANs) can automatically
synthesize realistic images from descriptive text. Despite the overall fair
quality, the generated images often exhibit visible flaws, such as a lack of
structural definition for the object of interest. In this paper, we aim to
extend the state of the art in GAN-based text-to-image synthesis by improving
the perceptual quality of
of generated images. Differentiated from previous work, our synthetic image
generator optimizes on perceptual loss functions that measure pixel, feature
activation, and texture differences against a natural image. We present
visually more compelling synthetic images of birds and flowers generated from
text descriptions in comparison to some of the most prominent existing work.
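The perceptual losses mentioned above (pixel, feature-activation, and texture differences against a natural image) can be sketched as a single combined objective. The snippet below is a minimal sketch assuming a pretrained VGG-16 feature extractor with an illustrative layer choice and loss weights; it is not the paper's exact formulation.

```python
# Minimal sketch of a combined perceptual loss: pixel, feature-activation,
# and texture (Gram matrix) differences between a generated image and a
# natural reference image. Layer choice and weights are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

# Fixed feature extractor (assumed: VGG-16 up to relu3_3).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def perceptual_loss(fake: torch.Tensor, real: torch.Tensor,
                    w_pix: float = 1.0, w_feat: float = 1.0,
                    w_tex: float = 1.0) -> torch.Tensor:
    """Weighted sum of pixel, feature-activation, and texture losses."""
    f_fake, f_real = vgg(fake), vgg(real)
    pixel = F.l1_loss(fake, real)                                # pixel difference
    feat = F.mse_loss(f_fake, f_real)                            # activation difference
    tex = F.mse_loss(gram_matrix(f_fake), gram_matrix(f_real))   # texture difference
    return w_pix * pixel + w_feat * feat + w_tex * tex
```

In a GAN setting, a term like this would typically be added to the generator's adversarial objective rather than used alone.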
Contextual-based Image Inpainting: Infer, Match, and Translate
We study the task of image inpainting, which is to fill in the missing region
of an incomplete image with plausible contents. To this end, we propose a
learning-based approach to generate visually coherent completion given a
high-resolution image with missing components. To overcome the difficulty of
directly learning the distribution of high-dimensional image data, we divide
the task into inference and translation as two separate steps and
model each step with a deep neural network. We also use simple heuristics to
guide the propagation of local textures from the boundary to the hole. We show
that, by using such techniques, inpainting reduces to the problem of learning
two image-feature translation functions in a much smaller space, which are
hence easier to train. We evaluate our method on several public datasets and
show that we
generate results of better visual quality than previous state-of-the-art
methods.
Comment: ECCV 2018 camera ready.
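As a structural sketch of the two-step formulation described above (an inference network followed by a translation network), the code below wires two hypothetical placeholder networks into a simple pipeline; it is not the authors' architecture, and the heuristic propagation of boundary textures is omitted.

```python
# Structural sketch only: inpainting split into an inference step and a
# translation step, each modeled by its own (placeholder) network.
import torch
import torch.nn as nn

class InferenceNet(nn.Module):
    """Step 1: infer a coarse completion from the masked image and the mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),   # RGB + mask channel
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([image * (1 - mask), mask], dim=1))

class TranslationNet(nn.Module):
    """Step 2: translate the coarse completion into a refined, coherent image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, coarse: torch.Tensor) -> torch.Tensor:
        return self.net(coarse)

def inpaint(image: torch.Tensor, mask: torch.Tensor,
            infer: nn.Module, translate: nn.Module) -> torch.Tensor:
    """image: (B, 3, H, W); mask: (B, 1, H, W), where 1 marks the missing region."""
    coarse = infer(image, mask)
    refined = translate(coarse)
    # Keep known pixels; fill only the hole region with the prediction.
    return image * (1 - mask) + refined * mask
```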