Generating synthetic images of handwritten text in a writer-specific style is
a challenging task, especially in the case of unseen styles and new words, and
even more so when the words contain characters that are rarely encountered
during training. While emulating a writer's style has recently been addressed
by generative models, generalization to rare characters has been
disregarded. In this work, we devise a Transformer-based model for Few-Shot
styled handwritten text generation and focus on obtaining a robust and
informative representation of both the text and the style. In particular, we
propose a novel representation of the textual content as a sequence of dense
vectors obtained from images of symbols written as standard GNU Unifont glyphs,
which can be considered their visual archetypes. This strategy is better suited
to generating characters that, despite being rarely seen during training, may
share visual details with the frequently observed ones. As
for the style, we obtain a robust representation of unseen writers' calligraphy
by exploiting specific pre-training on a large synthetic dataset. Quantitative
and qualitative results demonstrate the effectiveness of our proposal in
generating words in unseen styles and with rare characters more faithfully than
existing approaches relying on independent one-hot encodings of the characters.
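
As a concrete illustration of the content representation described above, the
following is a minimal Python sketch (not the authors' code) of how characters
could be rendered as GNU Unifont glyph images and flattened into dense vectors;
the font file name, the 16x16 glyph size, and the function names are
assumptions made for illustration only.

    # A minimal sketch of the "visual archetype" content representation:
    # each character is rendered as a GNU Unifont glyph bitmap and flattened
    # into a dense vector, so rare characters can reuse visual details shared
    # with frequent ones. The font path and glyph size are assumptions.
    from PIL import Image, ImageDraw, ImageFont
    import numpy as np

    GLYPH_SIZE = 16  # GNU Unifont glyphs are 16 px tall (half-width ones are 8 px wide)

    def archetype_vector(char: str, font_path: str = "unifont.ttf") -> np.ndarray:
        """Render `char` as a Unifont glyph and flatten it to a dense vector."""
        font = ImageFont.truetype(font_path, GLYPH_SIZE)
        canvas = Image.new("L", (GLYPH_SIZE, GLYPH_SIZE), color=0)
        ImageDraw.Draw(canvas).text((0, 0), char, fill=255, font=font)
        return np.asarray(canvas, dtype=np.float32).flatten() / 255.0

    def encode_text(text: str) -> np.ndarray:
        """A word becomes a sequence of glyph vectors, e.g. input to a Transformer."""
        return np.stack([archetype_vector(c) for c in text])

    # Example: an accented character still shares strokes with frequent ones
    # in this representation, unlike independent one-hot encodings.
    content = encode_text("caffe")
    print(content.shape)  # (5, 256): five characters, 256-dim glyph vectors

Unlike a one-hot lookup table, this representation places visually similar
symbols close together in input space, which is the property the abstract
relies on for generalizing to rarely seen characters.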