
    STEFANN: Scene Text Editor using Font Adaptive Neural Network

    Textual information in a captured scene plays an important role in scene interpretation and decision making. Though there exist methods that can successfully detect and interpret complex text regions present in a scene, to the best of our knowledge, there is no significant prior work that aims to modify the textual information in an image. The ability to edit text directly on images has several advantages, including error correction, text restoration and image reusability. In this paper, we propose a method to modify text in an image at the character level. We approach the problem in two stages. First, the unobserved character (target) is generated from an observed character (source) being modified. We propose two different neural network architectures: (a) FANnet, to achieve structural consistency with the source font, and (b) Colornet, to preserve the source color. Next, we replace the source character with the generated character, maintaining both geometric and visual consistency with neighboring characters. Our method works as a unified platform for modifying text in images. We present the effectiveness of our method on the COCO-Text and ICDAR datasets, both qualitatively and quantitatively.
    Comment: Accepted at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020
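
    The two-stage pipeline described above (a shape network followed by a colour network) can be pictured with a short PyTorch sketch. The layer sizes, the module names FANnetSketch/ColornetSketch and the 64x64 glyph resolution are assumptions for illustration, not the authors' released architecture.

    import torch
    import torch.nn as nn

    class FANnetSketch(nn.Module):
        """Stage 1: generate the target glyph (grayscale) from a source glyph and a one-hot target character."""
        def __init__(self, n_chars=26):
            super().__init__()
            self.encode = nn.Sequential(
                nn.Flatten(),                        # 64x64 source glyph -> 4096
                nn.Linear(64 * 64, 512), nn.ReLU(),
                nn.Linear(512, 512), nn.ReLU(),
            )
            self.embed = nn.Sequential(nn.Linear(n_chars, 512), nn.ReLU())
            self.decode = nn.Sequential(
                nn.Linear(1024, 1024), nn.ReLU(),
                nn.Linear(1024, 64 * 64), nn.Sigmoid(),
            )

        def forward(self, src_glyph, target_onehot):
            z = torch.cat([self.encode(src_glyph), self.embed(target_onehot)], dim=1)
            return self.decode(z).view(-1, 1, 64, 64)

    class ColornetSketch(nn.Module):
        """Stage 2: transfer the colour of the source glyph onto the generated grayscale glyph."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3 + 1, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
            )

        def forward(self, src_rgb, gen_gray):
            return self.net(torch.cat([src_rgb, gen_gray], dim=1))

    # Generate an unseen character in the source font, then recolour it; the
    # recoloured glyph would finally be placed back into the scene image.
    fannet, colornet = FANnetSketch(), ColornetSketch()
    src = torch.rand(1, 1, 64, 64)             # observed (binarised) source glyph
    tgt = torch.zeros(1, 26); tgt[0, 4] = 1.0  # request the letter 'E'
    rgb = colornet(torch.rand(1, 3, 64, 64), fannet(src, tgt))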

    Multi-Content GAN for Few-Shot Font Style Transfer

    In this work, we focus on the challenge of taking partial observations of highly-stylized text and generalizing the observations to generate unobserved glyphs in the ornamented typeface. To generate a set of multi-content images following a consistent style from very few examples, we propose an end-to-end stacked conditional GAN model considering content along channels and style along network layers. Our proposed network transfers the style of given glyphs to the contents of unseen ones, capturing highly stylized fonts found in the real world, such as those on movie posters or infographics. We seek to transfer both the typographic stylization (e.g., serifs and ears) and the textual stylization (e.g., color gradients and effects). We base our experiments on our collected dataset of 10,000 fonts with different styles and demonstrate effective generalization from a very small number of observed glyphs.
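
    The "content along channels" idea above can be illustrated with a small sketch: all 26 glyphs of a font live in one 26-channel tensor, the few observed glyphs occupy their channels, and the missing ones are left blank for the generator to fill in. The module name, layer sizes and the omission of the second (ornamentation) stage are assumptions for illustration, not the paper's released model.

    import torch
    import torch.nn as nn

    class GlyphStackGenerator(nn.Module):
        """Maps a partially observed 26-channel glyph stack to a complete stack."""
        def __init__(self, n_glyphs=26, width=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(n_glyphs, width, 3, padding=1), nn.ReLU(),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
                nn.Conv2d(width, n_glyphs, 3, padding=1), nn.Sigmoid(),
            )

        def forward(self, partial_stack):
            return self.net(partial_stack)

    # Five observed glyphs of a stylised font; the remaining 21 channels are empty.
    stack = torch.zeros(1, 26, 64, 64)
    observed = [0, 4, 11, 17, 19]              # e.g. the letters A, E, L, R, T
    stack[:, observed] = torch.rand(1, len(observed), 64, 64)
    full_stack = GlyphStackGenerator()(stack)  # predicted glyphs for all 26 letters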

    Chinese Font Style Transfer with Neural Network

    Font design is an important area in digital art. However, designers have to design characters one by one manually, and Chinese contains more than 20,000 characters. The official Chinese dataset GB 18030-2000 has 27,533 characters. ZhongHuaZiHai, an official Chinese dictionary, contains 85,568 characters. And JinXiWenZiJing, a dataset published by the AINet company, includes about 160,000 Chinese characters. Thus Chinese font design is a hard task. In this paper, we introduce a method to help designers finish the process faster. With this method, designers only need to design a small set of Chinese characters; the remaining characters are generated automatically. Deep neural networks have developed rapidly in recent years and are very powerful. We tried many kinds of deep neural networks with different structures and finally use the one we introduce here. The generated characters have a style similar to the ones designed by the designer, as shown in the experiments section.
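
    As a rough illustration of the workflow described above (design a small set of characters by hand, generate the rest automatically), the sketch below uses the common image-to-image formulation: a character rendered in a standard reference font is translated into the designer's style. This is an assumed formulation for illustration, not the network actually used in the paper.

    import torch
    import torch.nn as nn

    class FontTransferSketch(nn.Module):
        """Translate a glyph in a standard reference font into the target style."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, reference_glyph):
            return self.decoder(self.encoder(reference_glyph))

    model = FontTransferSketch()
    ref = torch.rand(8, 1, 128, 128)   # characters rendered in a standard font
    out = model(ref)                   # the same characters in the designer's style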

    Few shot font generation via transferring similarity guided global style and quantization local style

    Automatic few-shot font generation (AFFG), aiming at generating new fonts with only a few glyph references, reduces the labor cost of manually designing fonts. However, the traditional AFFG paradigm of style-content disentanglement cannot capture the diverse local details of different fonts, so many component-based approaches have been proposed to tackle this problem. The issue with component-based approaches is that they usually require special pre-defined glyph components, e.g., strokes and radicals, which is infeasible for AFFG of different languages. In this paper, we present a novel font generation approach by aggregating styles from character similarity-guided global features and stylized component-level representations. We calculate the similarity scores of the target character and the referenced samples by measuring the distance along the corresponding channels of the content features, and assign them as the weights for aggregating the global style features. To better capture the local styles, a cross-attention-based style transfer module is adopted to transfer the styles of reference glyphs to the components, where the components are self-learned discrete latent codes obtained through vector quantization without manual definition. With these designs, our AFFG method can obtain a complete set of component-level style representations and also control the global glyph characteristics. The experimental results reflect the effectiveness and generalization of the proposed method on different linguistic scripts, and also show its superiority when compared with other state-of-the-art methods. The source code can be found at https://github.com/awei669/VQ-Font.
    Comment: Accepted by ICCV 2023
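
    The similarity-guided aggregation of global style described above can be sketched in a few lines. Tensor shapes and function names are assumptions, and the channel-wise distance of the paper is collapsed here into a single distance per reference for brevity; this is not the released VQ-Font code.

    import torch
    import torch.nn.functional as F

    def aggregate_global_style(target_content, ref_contents, ref_styles):
        """
        target_content: (C,)   content feature of the target character
        ref_contents:   (K, C) content features of the K reference glyphs
        ref_styles:     (K, D) global style features of the K reference glyphs
        returns:        (D,)   similarity-weighted global style for the target
        """
        # Similarity = negative distance between content features; softmax -> weights.
        dists = torch.norm(ref_contents - target_content.unsqueeze(0), dim=1)  # (K,)
        weights = F.softmax(-dists, dim=0)                                     # (K,)
        return (weights.unsqueeze(1) * ref_styles).sum(dim=0)                  # (D,)

    # Toy usage: 3 reference glyphs, 256-d content features, 128-d style features.
    style = aggregate_global_style(torch.rand(256), torch.rand(3, 256), torch.rand(3, 128))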

    Tracing the origins of incunabula through the automatic identification of fonts in digitised documents

    Incunabula are the texts printed mainly during the second half of the 15th century and are a key cultural element in a revolutionary period in the history and evolution of the book and of printing. In these books, the identification of their origin largely affects their academic, cultural, patrimonial, and economic value. This paper proposes a process to automate the identification of the origin of a digitised incunable document using the Proctor/Haebler method, a commonly established procedure in the field. The process has been validated with a selected dataset obtained from the incunabula collection at the digital repository of the University of Zaragoza.
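
    At the core of the Proctor/Haebler method is the measurement of the height of 20 lines of type, which is compared against a catalogue of known types to attribute the printing. The sketch below illustrates only that measurement step on detected baselines; the catalogue entries, tolerance and function names are illustrative assumptions, not taken from the paper.

    def twenty_line_height_mm(baselines_px, dpi):
        """baselines_px: y-coordinates of consecutive text baselines detected in a digitised page."""
        if len(baselines_px) < 21:
            raise ValueError("need at least 21 baselines to span 20 lines of type")
        span_px = baselines_px[20] - baselines_px[0]   # 20 line intervals
        return span_px / dpi * 25.4                    # pixels -> millimetres

    def match_type(height_mm, catalogue, tolerance=0.5):
        """Return catalogue entries whose 20-line measurement lies within the tolerance."""
        return [name for name, h in catalogue.items() if abs(h - height_mm) <= tolerance]

    # Toy catalogue keyed by 20-line measurement in mm (values are fictitious).
    catalogue = {"Type 1: 110G": 110.0, "Type 3: 91R": 91.0}
    baselines = [100 + 130 * i for i in range(21)]     # baselines detected at 600 dpi
    print(match_type(twenty_line_height_mm(baselines, dpi=600), catalogue))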