
    Multi-Content GAN for Few-Shot Font Style Transfer

    In this work, we focus on the challenge of taking partial observations of highly-stylized text and generalizing the observations to generate unobserved glyphs in the ornamented typeface. To generate a set of multi-content images following a consistent style from very few examples, we propose an end-to-end stacked conditional GAN model that considers content along channels and style along network layers. Our proposed network transfers the style of given glyphs to the contents of unseen ones, capturing highly stylized fonts found in the real world, such as those on movie posters or infographics. We seek to transfer both the typographic stylization (e.g., serifs and ears) and the textual stylization (e.g., color gradients and effects). We base our experiments on our collected dataset of 10,000 fonts with different styles and demonstrate effective generalization from a very small number of observed glyphs.
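
    The "content along channels" idea can be illustrated with a minimal sketch: the few observed glyph images are stacked as input channels of a conditional generator, which predicts all glyphs of the alphabet at once as output channels. The layer sizes, channel counts, and class name below are placeholders for illustration, not the authors' architecture, which additionally stacks a second (ornamentation) GAN on top.

    # Minimal PyTorch sketch: observed glyphs stacked along input channels,
    # full alphabet predicted along output channels. Sizes are illustrative.
    import torch
    import torch.nn as nn

    class GlyphGenerator(nn.Module):
        def __init__(self, n_observed=5, n_total=26, base=64):
            super().__init__()
            # content along channels: one input channel per observed glyph
            self.encoder = nn.Sequential(
                nn.Conv2d(n_observed, base, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
                nn.BatchNorm2d(base * 2),
                nn.LeakyReLU(0.2),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
                nn.BatchNorm2d(base),
                nn.ReLU(),
                nn.ConvTranspose2d(base, n_total, 4, stride=2, padding=1),
                nn.Tanh(),  # one output channel per glyph of the alphabet
            )

        def forward(self, observed_glyphs):                  # (B, n_observed, H, W)
            return self.decoder(self.encoder(observed_glyphs))  # (B, n_total, H, W)

    gen = GlyphGenerator()
    fake_alphabet = gen(torch.randn(2, 5, 64, 64))  # e.g. 5 observed 64x64 glyphs
    print(fake_alphabet.shape)                      # torch.Size([2, 26, 64, 64])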

    Weakly-supervised Caricature Face Parsing through Domain Adaptation

    A caricature is an artistic rendering of a person in which certain striking characteristics are abstracted or exaggerated to create a humorous or sarcastic effect. For numerous caricature-related applications, such as attribute recognition and caricature editing, face parsing is an essential pre-processing step that provides a complete understanding of the facial structure. However, current state-of-the-art face parsing methods require large amounts of pixel-level labeled data, and such annotation for caricatures is tedious and labor-intensive. For real photos, by contrast, numerous labeled face parsing datasets exist. We therefore formulate caricature face parsing as a domain adaptation problem, where real photos play the role of the source domain, adapting to the target caricatures. Specifically, we first leverage a spatial transformer based network to enable shape domain shifts. A feed-forward style transfer network is then utilized to capture texture-level domain gaps. With these two steps, we synthesize face caricatures from real photos, and thus we can use the parsing ground truths of the original photos to learn the parsing model. Experimental results on synthetic and real caricatures demonstrate the effectiveness of the proposed domain adaptation algorithm. Code is available at https://github.com/ZJULearning/CariFaceParsing. Comment: Accepted at ICIP 2019; code and models are available at https://github.com/ZJULearning/CariFaceParsing.
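
    The core trick is that geometric warping moves the parsing labels together with the photo, so the synthesized caricature inherits free ground truth. The sketch below illustrates only the shape-shift step with a spatial-transformer-style affine warp; the module names (ShapeWarper), layer sizes, and the affine restriction are assumptions for illustration, not the released CariFaceParsing code, and the texture-level style transfer step would follow afterwards.

    # Minimal PyTorch sketch: warp a photo and its parsing labels with the same
    # predicted transform, so the warped labels remain valid supervision.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ShapeWarper(nn.Module):
        """Predicts a coarse affine warp (stand-in for the shape domain shift)."""
        def __init__(self):
            super().__init__()
            self.loc = nn.Sequential(
                nn.Conv2d(3, 8, 7, stride=4), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                nn.Flatten(), nn.Linear(8, 6),
            )
            # initialize the localization head to the identity transform
            self.loc[-1].weight.data.zero_()
            self.loc[-1].bias.data.copy_(torch.tensor([1., 0, 0, 0, 1, 0]))

        def forward(self, photo, labels):
            theta = self.loc(photo).view(-1, 2, 3)
            grid = F.affine_grid(theta, photo.size(), align_corners=False)
            warped_img = F.grid_sample(photo, grid, align_corners=False)
            warped_lab = F.grid_sample(labels, grid, mode='nearest',
                                       align_corners=False)
            return warped_img, warped_lab  # labels follow the same geometric shift

    photo = torch.randn(1, 3, 128, 128)                       # source-domain photo
    labels = torch.randint(0, 11, (1, 1, 128, 128)).float()   # parsing ground truth
    warper = ShapeWarper()
    cari_img, cari_lab = warper(photo, labels)  # texture transfer would follow here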

    A micromechanics-inspired constitutive model for shape-memory alloys

    This paper presents a three-dimensional constitutive model for shape-memory alloys that generalizes the one-dimensional model presented earlier (Sadjadpour and Bhattacharya 2007 Smart Mater. Struct. 16 S51–62). These models build on recent micromechanical studies of the underlying microstructure of shape-memory alloys, and a key idea is that of an effective transformation strain of the martensitic microstructure. This paper explains the thermodynamic setting of the model, demonstrates it through examples involving proportional and non-proportional loading, and shows that the model can be fitted to incorporate the effect of texture in polycrystalline shape-memory alloys.
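
    Schematically, the "effective transformation strain" idea amounts to splitting the macroscopic strain into an elastic part and a transformation part scaled by the martensite volume fraction. The following equations are a hedged outline of that structure, not the paper's exact free energy; the symbols (the admissible set P, modulus C, chemical energy omega) are generic placeholders.

    \[
      \varepsilon \;=\; \varepsilon^{e} + \lambda\,\varepsilon_{m},
      \qquad 0 \le \lambda \le 1, \quad \varepsilon_{m} \in \mathcal{P},
    \]
    \[
      W(\varepsilon,\lambda,\varepsilon_{m},\theta)
      \;=\; \tfrac{1}{2}\,(\varepsilon - \lambda\varepsilon_{m})
            : \mathbb{C} : (\varepsilon - \lambda\varepsilon_{m})
            \;+\; \lambda\,\omega(\theta),
    \]
    % \mathcal{P} is the set of effective transformation strains realizable by the
    % martensitic microstructure, \mathbb{C} the elastic modulus, and \omega(\theta)
    % the temperature-dependent chemical energy difference between austenite and
    % martensite; the evolution of \lambda and \varepsilon_m then follows from the
    % associated thermodynamic driving forces through kinetic relations.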

    Advances in martensitic transformations in Cu-based shape memory alloys achieved by in situ neutron and synchrotron X-ray diffraction methods

    This article deals with the application of several X-ray and neutron diffraction methods to investigate the mechanics of stress-induced martensitic transformation in Cu-based shape memory alloy polycrystals. It puts experimental results obtained by two different research groups on different length scales into context with the mechanics of stress-induced martensitic transformation in a polycrystalline environment.
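
    For orientation, the standard relation underlying such in situ diffraction studies (not a formula quoted from this article) is Bragg's law: the shift of an (hkl) reflection under load gives the elastic lattice strain of that grain family, while the appearance and growth of martensite reflections track the stress-induced transformation.

    \[
      \lambda \;=\; 2\,d_{hkl}\sin\theta_{hkl},
      \qquad
      \varepsilon_{hkl} \;=\; \frac{d_{hkl} - d^{0}_{hkl}}{d^{0}_{hkl}}
      \;=\; -\cot\theta^{0}_{hkl}\,\Delta\theta_{hkl},
    \]
    % so tracking individual hkl reflections during loading resolves how stress
    % partitions among differently oriented grains in the polycrystal.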