Few shot font generation via transferring similarity guided global style and quantization local style
Automatic few-shot font generation (AFFG), aiming at generating new fonts
with only a few glyph references, reduces the labor cost of manually designing
fonts. However, the traditional AFFG paradigm of style-content disentanglement
cannot capture the diverse local details of different fonts. Many
component-based approaches have therefore been proposed to tackle this problem.
However, these approaches usually require special pre-defined glyph components,
e.g., strokes and radicals, which is infeasible for AFFG across different
languages. In this paper, we present a novel font generation approach
by aggregating styles from character similarity-guided global features and
stylized component-level representations. We calculate similarity scores
between the target character and the reference samples by measuring the
distance along the corresponding channels of the content features, and assign
these scores as the weights for aggregating the global style features. To better capture the local
styles, a cross-attention-based style transfer module is adopted to transfer
the styles of reference glyphs to the components, where the components are
discrete latent codes learned automatically through vector quantization, without manual
definition. With these designs, our AFFG method can obtain a complete set of
component-level style representations and also control the global glyph
characteristics. The experimental results demonstrate the effectiveness and
generalization of the proposed method on different linguistic scripts, and
show its superiority over other state-of-the-art methods. The
source code can be found at https://github.com/awei669/VQ-Font.
Comment: Accepted by ICCV 2023
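The similarity-weighted aggregation described above can be illustrated compactly. The following is a minimal sketch, not the authors' VQ-Font code: the per-channel absolute distance, the softmax normalization over references, and all tensor shapes are assumptions made purely for illustration.

```python
# Hedged sketch of similarity-guided global style aggregation (not the VQ-Font code).
import torch
import torch.nn.functional as F

def aggregate_global_style(target_content, ref_contents, ref_styles):
    """
    target_content: (C,)   content feature of the target character
    ref_contents:   (K, C) content features of the K reference glyphs
    ref_styles:     (K, C) global style features extracted from the references
    returns:        (C,)   similarity-weighted global style for the target
    """
    # Per-channel distance between the target and each reference content feature.
    dist = (ref_contents - target_content.unsqueeze(0)).abs()  # (K, C)
    # Closer references receive larger weights; normalize over the K references.
    weights = F.softmax(-dist, dim=0)                           # (K, C)
    # Channel-wise weighted aggregation of the reference style features.
    return (weights * ref_styles).sum(dim=0)                    # (C,)

# Toy usage: C=64 channels, K=4 reference glyphs.
if __name__ == "__main__":
    torch.manual_seed(0)
    style = aggregate_global_style(torch.randn(64), torch.randn(4, 64), torch.randn(4, 64))
    print(style.shape)  # torch.Size([64])
```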
Few-shot Font Generation with Localized Style Representations and Factorization
Automatic few-shot font generation is a practical and widely studied problem
because manual designs are expensive and sensitive to the expertise of
designers. Existing few-shot font generation methods aim to learn to
disentangle the style and content elements from a few reference glyphs, and
mainly focus on a universal style representation for each font style. However,
such an approach limits the model in representing diverse local styles, and thus
makes it unsuitable for the most complex writing systems, e.g., Chinese, whose
characters consist of a varying number of components (often called "radicals")
with a highly complex structure. In this paper, we propose a novel font
generation method by learning localized styles, namely component-wise style
representations, instead of universal styles. The proposed style
representations enable us to synthesize complex local details in text designs.
However, learning component-wise styles solely from reference glyphs is
infeasible in the few-shot font generation scenario, when a target script has a
large number of components, e.g., over 200 for Chinese. To reduce the number of
reference glyphs, we simplify component-wise styles by a product of component
factor and style factor, inspired by low-rank matrix factorization. Thanks to
the combination of a strong representation and a compact factorization strategy,
our method shows remarkably better few-shot font generation results (with only
8 reference glyph images) than other state-of-the-art methods, without utilizing
strong locality supervision, e.g., the location of each component, skeleton, or
strokes. The source code is available at https://github.com/clovaai/lffont.
Comment: Accepted at AAAI 2021, 12 pages, 11 figures, the first two authors contributed equally
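The factorization idea above admits a compact sketch. This is not the released lffont implementation; the embedding dimension, the rank, and the einsum combination rule are assumptions chosen only to show how a per-font style factor and per-component factors can be multiplied into component-wise styles.

```python
# Hedged sketch of component/style factorization (not the lffont code).
import torch
import torch.nn as nn

class FactorizedComponentStyle(nn.Module):
    def __init__(self, num_components: int, num_fonts: int, dim: int = 128, rank: int = 8):
        super().__init__()
        # Low-rank factor per component, shared across fonts.
        self.component_factor = nn.Embedding(num_components, dim * rank)
        # Low-rank factor per font, shared across components.
        self.style_factor = nn.Embedding(num_fonts, rank)
        self.dim, self.rank = dim, rank

    def forward(self, component_ids: torch.Tensor, font_id: torch.Tensor) -> torch.Tensor:
        # (N, dim, rank): component-specific bases.
        comp = self.component_factor(component_ids).view(-1, self.dim, self.rank)
        # (rank,): font-specific mixing coefficients.
        style = self.style_factor(font_id).view(self.rank)
        # Component-wise style = component basis combined with the font factor.
        return torch.einsum("ndr,r->nd", comp, style)  # (N, dim)

# Toy usage: styles for three components of one font.
if __name__ == "__main__":
    model = FactorizedComponentStyle(num_components=200, num_fonts=400)
    out = model(torch.tensor([3, 17, 42]), torch.tensor(5))
    print(out.shape)  # torch.Size([3, 128])
```

Under such a factorization, a new font only needs enough reference glyphs to estimate its style factor rather than one reference per component, which is the motivation for the reduced number of references stated above.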
GenText: Unsupervised Artistic Text Generation via Decoupled Font and Texture Manipulation
Automatic artistic text generation is an emerging topic that is receiving
increasing attention due to its wide applications. Artistic text can be
divided into three components: content, font, and texture. Existing artistic
text generation models usually focus on manipulating only one of these
components, which is a sub-optimal solution for controllable general artistic
text generation. To remedy this issue, we propose
a novel approach, namely GenText, to achieve general artistic text style
transfer by separately migrating the font and texture styles from different
source images to the target images in an unsupervised manner. Specifically, our
current work incorporates three stages, namely stylization, destylization, and
font transfer, into a unified platform with a single powerful
encoder network and two separate style generator networks, one for font
transfer, the other for stylization and destylization. The destylization stage
first extracts the font style of the font reference image, then the font
transfer stage generates the target content with the desired font style.
Finally, the stylization stage renders the resulting font image with respect to
the texture style of the reference image. Moreover, considering the difficulty
of acquiring paired artistic text images, our model is designed under the
unsupervised setting, where all stages can be effectively optimized from
unpaired data. Qualitative and quantitative evaluations on artistic text
benchmarks demonstrate the superior performance of our proposed model. The
code and models will be made publicly available in the future.
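To make the ordering of the three stages concrete, here is a minimal sketch, not the GenText model: the stand-in networks, channel layouts, and the way stage outputs are chained are all assumptions for illustration only.

```python
# Hedged sketch of a destylization / font-transfer / stylization pipeline
# (not the GenText implementation).
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    """Stand-in for the shared encoder and the two style generator networks."""
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class GenTextSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = TinyConvNet()             # shared feature encoder
        self.font_generator = TinyConvNet(6)     # font transfer stage
        self.texture_generator = TinyConvNet(6)  # stylization / destylization stage

    def forward(self, content_img, font_ref, texture_ref):
        # 1) Destylization: strip texture from the font reference to expose its font style.
        plain_font = self.texture_generator(
            torch.cat([self.encoder(font_ref), font_ref], dim=1))
        # 2) Font transfer: render the target content in the extracted font style.
        font_img = self.font_generator(
            torch.cat([self.encoder(content_img), plain_font], dim=1))
        # 3) Stylization: apply the texture style from the texture reference.
        return self.texture_generator(
            torch.cat([self.encoder(texture_ref), font_img], dim=1))

if __name__ == "__main__":
    x = torch.randn(1, 3, 64, 64)
    out = GenTextSketch()(x, x.clone(), x.clone())
    print(out.shape)  # torch.Size([1, 3, 64, 64])
```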
DiffUTE: Universal Text Editing Diffusion Model
Diffusion-model-based language-guided image editing has achieved great
success recently. However, existing state-of-the-art diffusion models struggle
with rendering correct text and text style during generation. To tackle this
problem, we propose a universal self-supervised text editing diffusion model
(DiffUTE), which aims to replace or modify words in the source image with
new ones while maintaining a realistic appearance. Specifically, we build
our model on a diffusion model and carefully modify the network structure to
enable the model to draw multilingual characters with the help of glyph and
position information. Moreover, we design a self-supervised learning framework
to leverage large amounts of web data to improve the representation ability of
the model. Experimental results show that our method achieves impressive
performance and enables controllable editing of in-the-wild images with high
fidelity. Our code will be available at https://github.com/chenhaoxing/DiffUTE.
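One plausible way to expose the glyph and position information mentioned above to a diffusion backbone is to stack it with the masked source image as extra conditioning channels. The sketch below is not DiffUTE's pipeline; the channel layout, the rendered-glyph guidance map, and the use of Pillow's default font are assumptions for illustration.

```python
# Hedged sketch of assembling glyph- and position-conditioned input (not the DiffUTE code).
import numpy as np
from PIL import Image, ImageDraw

def build_condition(source: np.ndarray, box: tuple, text: str) -> np.ndarray:
    """
    source: (H, W, 3) uint8 image; box: (x0, y0, x1, y1) region to edit; text: target string.
    Returns an (H, W, 5) float array: masked source RGB + position mask + glyph map.
    """
    h, w, _ = source.shape
    x0, y0, x1, y1 = box

    # Position mask: 1 inside the region being edited, 0 elsewhere.
    pos_mask = np.zeros((h, w), dtype=np.float32)
    pos_mask[y0:y1, x0:x1] = 1.0

    # Masked source: blank out the region so the model must redraw it.
    masked = source.astype(np.float32) / 255.0
    masked[y0:y1, x0:x1] = 0.0

    # Glyph map: render the target text as a grayscale guidance image.
    glyph = Image.new("L", (w, h), 0)
    ImageDraw.Draw(glyph).text((x0, y0), text, fill=255)
    glyph = np.asarray(glyph, dtype=np.float32) / 255.0

    return np.concatenate([masked, pos_mask[..., None], glyph[..., None]], axis=-1)

# Toy usage on a blank image.
if __name__ == "__main__":
    img = np.zeros((128, 256, 3), dtype=np.uint8)
    cond = build_condition(img, (20, 40, 200, 90), "hello")
    print(cond.shape)  # (128, 256, 5)
```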