    The Evaluation of Stylized Facial Expressions

    Stylized rendering aims to abstract information in an image, making it useful not only for artistic but also for visualization purposes. Recent advances in computer graphics techniques have made it possible to render many varieties of stylized imagery efficiently. So far, however, few attempts have been made to characterize the perceptual impact and effectiveness of stylization. In this paper, we report several experiments that evaluate three different stylization techniques in the context of dynamic facial expressions. Going beyond the usual questionnaire approach, the experiments compare the techniques according to several criteria ranging from introspective measures (subjective preference) to task-dependent measures (recognizability, intensity). Our results shed light on how stylization of image contents affects the perception and subjective evaluation of facial expressions.

    Realtime Fewshot Portrait Stylization Based On Geometric Alignment

    This paper presents a portrait stylization method designed for real-time mobile applications with limited style examples available. Previous learning-based stylization methods suffer from the geometric and semantic gaps between the portrait domain and the style domain, which obstruct the correct transfer of style information to the portrait images, leading to poor stylization quality. Based on the geometric prior of human facial attributes, we propose to utilize geometric alignment to tackle this issue. Firstly, we apply Thin-Plate-Spline (TPS) transformations to feature maps in the generator network and also directly to style images in pixel space, generating aligned portrait-style image pairs with identical landmarks, which closes the geometric gap between the two domains. Secondly, adversarial learning maps the textures and colors of portrait images to the style domain. Finally, geometry-aware cycle consistency keeps the content and identity information unchanged, and a deformation-invariant constraint suppresses artifacts and distortions. Qualitative and quantitative comparisons validate that our method outperforms existing methods, and experiments prove that our method can be trained with limited style examples (100 or fewer) and runs in real time (more than 40 FPS) on mobile devices. An ablation study demonstrates the effectiveness of each component in the framework. Comment: 10 pages, 10 figures
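    To make the pixel-space half of the TPS alignment step concrete, here is a minimal NumPy sketch of thin-plate-spline warping that backward-warps a style image so its landmarks land on a portrait's landmarks. The function names, the nearest-neighbour sampling, and the assumption that landmarks are (x, y) arrays are illustrative choices, not the paper's implementation.

```python
import numpy as np

def tps_fit(ctrl, vals):
    """Fit a thin-plate spline f with f(ctrl[i]) == vals[i].

    ctrl, vals: (N, 2) arrays of 2D points.
    Returns the (N+3, 2) coefficient matrix and the control points.
    """
    n = len(ctrl)
    r2 = np.sum((ctrl[:, None] - ctrl[None, :]) ** 2, axis=-1)
    K = r2 * np.log(r2 + 1e-12)                 # TPS kernel U(r) = r^2 log r^2
    P = np.hstack([np.ones((n, 1)), ctrl])      # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = vals
    return np.linalg.solve(A, b), ctrl

def tps_apply(coef, ctrl, pts):
    """Evaluate the fitted spline at pts, an (M, 2) array."""
    r2 = np.sum((pts[:, None] - ctrl[None, :]) ** 2, axis=-1)
    U = r2 * np.log(r2 + 1e-12)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ coef[:len(ctrl)] + P @ coef[len(ctrl):]

def warp_style_to_portrait(style_img, style_lm, portrait_lm):
    """Backward-warp style_img so style_lm lands on portrait_lm.

    style_lm, portrait_lm: (N, 2) arrays of (x, y) facial landmarks.
    Nearest-neighbour sampling is used for brevity.
    """
    # Fit the inverse map: output (portrait) coords -> source (style) coords.
    coef, ctrl = tps_fit(portrait_lm, style_lm)
    h, w = style_img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    src = tps_apply(coef, ctrl, grid)
    sx = np.clip(np.round(src[:, 0]), 0, w - 1).astype(int)
    sy = np.clip(np.round(src[:, 1]), 0, h - 1).astype(int)
    return style_img[sy, sx].reshape(style_img.shape)
```

    The abstract notes that the same alignment is also applied to generator feature maps; the sketch above only illustrates the pixel-space version applied to style images.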

    AlteredAvatar: Stylizing Dynamic 3D Avatars with Fast Style Adaptation

    This paper presents a method that can quickly adapt dynamic 3D avatars to arbitrary text descriptions of novel styles. Among existing approaches for avatar stylization, direct optimization methods can produce excellent results for arbitrary styles, but they are unpleasantly slow. Furthermore, they require redoing the optimization process from scratch for every new input. Fast approximation methods using feed-forward networks trained on a large dataset of style images can generate results for new inputs quickly, but tend not to generalize well to novel styles and fall short in quality. We therefore investigate a new approach, AlteredAvatar, that combines those two approaches using the meta-learning framework. In the inner loop, the model learns to optimize to match a single target style well, while in the outer loop, the model learns to stylize efficiently across many styles. After training, AlteredAvatar has learned an initialization that can adapt to a novel style within a small number of update steps; the style can be given as text, a reference image, or a combination of both. We show that AlteredAvatar can achieve a good balance between speed, flexibility, and quality, while maintaining consistency across a wide range of novel views and facial expressions. Comment: 10 main pages, 14 figures. Project page: https://alteredavatar.github.i
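    The inner/outer loop described above follows the standard meta-learning recipe. The following PyTorch sketch shows one common instantiation (a Reptile-style update); the abstract does not specify the exact update rule, and `stylize_loss`, the style sampling, and all hyperparameters are hypothetical placeholders rather than the authors' algorithm.

```python
import copy
import torch

def meta_train(model, styles, stylize_loss,
               inner_steps=4, inner_lr=1e-3, meta_lr=0.1, meta_iters=1000):
    """Reptile-style meta-learning over a set of styles (a sketch).

    model:        avatar stylization network whose meta-initialization we learn
    styles:       list of style targets (e.g. text embeddings or images)
    stylize_loss: callable(model, style) -> scalar loss (hypothetical)
    """
    for it in range(meta_iters):
        style = styles[it % len(styles)]           # outer loop: pick one style
        adapted = copy.deepcopy(model)             # fresh copy for the inner loop
        opt = torch.optim.Adam(adapted.parameters(), lr=inner_lr)
        for _ in range(inner_steps):               # inner loop: match this style
            loss = stylize_loss(adapted, style)
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():                      # outer update: move the meta-
            for p, q in zip(model.parameters(),    # weights toward adapted ones
                            adapted.parameters()):
                p.add_(meta_lr * (q - p))
    return model
```

    At test time the same inner loop is run from the learned initialization, which is why a novel style needs only a handful of update steps.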

    Digital Manipulation of Human Faces: Effects on Emotional Perception and Brain Activity

    The study of human face processing has granted insight into key adaptations across various social and biological functions. However, there is an overall lack of consistency regarding digital alteration styles of human-face stimuli. To investigate this, two independent studies were conducted examining unique effects of image construction and presentation. In the first study, three primary stimulus presentation styles (color, black and white, cutout) were used across iterations of non-thatcherized/thatcherized and non-inverted/inverted presentations. Outcome measures included subjective reactions measured via ratings of perceived “grotesqueness,” and objective outcomes of N170 event-related potentials (ERPs) measured via electroencephalography. Results of subjective measures indicated that thatcherized images were associated with an increased level of grotesque perception, regardless of overall condition variant and inversion status. A significantly larger N170 component was found in response to cutout-style images of human faces, thatcherized images, and inverted images. Results suggest that cutout image morphology may be considered a well-suited image presentation style when examining ERPs and facial processing of otherwise unaltered human faces. Moreover, less emphasis can be placed on decision making regarding main condition morphology of human face stimuli as it relates to negatively valenced reactions. The second study explored commonalities between thatcherized and uncanny images, with the purpose of establishing a link between previously disparate areas of human-face processing research. Subjective reactions to stimuli were measured via participant ratings of “off-puttingness.” ERP data were gathered to explore whether any unique effects emerged in N170 and N400 responses. Two main “morph continuums” of stimuli with uncanny features, provided by Eduard Zell (see Zell et al., 2015), were utilized, and a novel approach of thatcherizing images along these continuums was used. Thatcherized images across both continuums were regarded as more off-putting than non-thatcherized images, indicating a robust subjective effect of thatcherization that was relatively unimpacted by additional manipulation of key featural components. Conversely, results from brain activity indicated no significant differences in N170 between levels of shape stylization and their thatcherized counterparts. Unique effects between continuums and exploratory N400 results are discussed.
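    For readers unfamiliar with the manipulation, thatcherization rotates local facial features (typically the eyes and mouth) by 180° within an otherwise upright face. A minimal NumPy sketch follows; the region boxes are hypothetical and would normally come from a facial landmark detector.

```python
import numpy as np

def thatcherize(face_img, feature_boxes):
    """Rotate selected facial features 180 degrees in place.

    face_img:      (H, W, C) image array of an upright face
    feature_boxes: list of (y0, y1, x0, x1) boxes covering the eyes and
                   mouth (hypothetical; usually found via landmark detection)
    """
    out = face_img.copy()
    for y0, y1, x0, x1 in feature_boxes:
        # Reversing both axes of the patch rotates it by 180 degrees.
        out[y0:y1, x0:x1] = out[y0:y1, x0:x1][::-1, ::-1]
    return out
```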

    DeformToon3D: Deformable 3D Toonification from Neural Radiance Fields

    In this paper, we address the challenging problem of 3D toonification, which involves transferring the style of an artistic domain onto a target 3D face with stylized geometry and texture. Although fine-tuning a pre-trained 3D GAN on the artistic domain can produce reasonable performance, this strategy has limitations in the 3D domain. In particular, fine-tuning can deteriorate the original GAN latent space, which affects subsequent semantic editing, and it requires independent optimization and storage for each new style, limiting flexibility and efficient deployment. To overcome these challenges, we propose DeformToon3D, an effective toonification framework tailored for hierarchical 3D GANs. Our approach decomposes 3D toonification into subproblems of geometry and texture stylization to better preserve the original latent space. Specifically, we devise a novel StyleField that predicts conditional 3D deformation to align a real-space NeRF to the style space for geometry stylization. Because the StyleField formulation already handles geometry stylization well, texture stylization can be achieved conveniently via adaptive style mixing that injects information from the artistic domain into the decoder of the pre-trained 3D GAN. Thanks to this design, our method enables flexible style-degree control and shape- and texture-specific style swapping. Furthermore, we achieve efficient training without any real-world 2D-3D training pairs, using only proxy samples synthesized from off-the-shelf 2D toonification models. Comment: ICCV 2023. Code: https://github.com/junzhezhang/DeformToon3D Project page: https://www.mmlab-ntu.com/project/deformtoon3d
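    As a rough illustration of what a conditional deformation field might look like, the PyTorch sketch below maps a 3D sample point plus a style code to an offset applied before querying the pre-trained decoder. The architecture (a small MLP with these widths) and all names are assumptions; the abstract does not specify the StyleField's internals.

```python
import torch
import torch.nn as nn

class StyleField(nn.Module):
    """Minimal sketch of a style-conditioned 3D deformation field.

    Given a real-space sample point and a style code, predict an offset
    that moves the point into the style space before the pre-trained
    NeRF decoder is queried. Widths and depths here are assumptions.
    """
    def __init__(self, style_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + style_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, xyz, style_code):
        # xyz: (N, 3) real-space sample points; style_code: (style_dim,)
        cond = style_code.unsqueeze(0).expand(xyz.shape[0], -1)
        offset = self.net(torch.cat([xyz, cond], dim=-1))
        return xyz + offset  # deformed points, queried in the style space
```

    Because only this small field is trained per style family while the 3D GAN stays frozen, the original latent space is left intact, which is the property the paper emphasizes.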

    Hybridization of silhouette rendering and pen-and-ink illustration of non-photorealistic rendering technique for 3D object

    This study proposes a hybrid of non-photorealistic rendering techniques. Non-photorealistic rendering (NPR) covers the part of computer graphics concerned with generating many kinds of 2D digital art styles from 3D data, for instance output that looks like painting and drawing. NPR includes the painterly, interpretative, expressive, and artistic styles, among others. NPR research deals with different issues, such as stylization driven by human perception and the harmonization of science and art with the techniques used. Some of the approaches used in NPR are discussed, such as cartoon rendering, watercolour painting, silhouette rendering, pen-and-ink illustration, and so on. This study proposes a hybridization of two NPR techniques: silhouette rendering and pen-and-ink illustration. The integration process of these rendering techniques takes on the lighting mapping and the construction of colour regions of the model in order to ensure the pen-and-ink illustration texture can be applied to the object. The evaluation process is based on the visualization of the image resulting from the hybridization process. Based on the findings, the hybridization of NPR techniques was able to create interesting results and is considered an alternative for producing a new variety of visualization imagery in NPR.
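    To give a feel for how the two techniques can combine, here is a minimal image-space sketch in NumPy: silhouettes are inked where the surface normal is nearly perpendicular to the view direction, and pen-and-ink tone is approximated with hatching whose density follows the diffuse lighting term. The per-pixel normal buffer, thresholds, and hatch spacing are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

def silhouette_pen_and_ink(normals, view_dir, light_dir,
                           edge_thresh=0.2, spacing=6):
    """Hybrid silhouette + hatching render in image space (a sketch).

    normals:   (H, W, 3) unit surface normals per pixel (e.g. from a G-buffer)
    view_dir:  (3,) unit vector toward the camera
    light_dir: (3,) unit vector toward the light
    Returns an (H, W) image: 1.0 = white paper, 0.0 = ink.
    """
    h, w, _ = normals.shape
    ndotv = np.abs((normals * view_dir).sum(axis=-1))              # view-facing term
    ndotl = np.clip((normals * light_dir).sum(axis=-1), 0.0, 1.0)  # diffuse tone
    img = np.ones((h, w))                                          # white paper
    ys, xs = np.mgrid[0:h, 0:w]
    hatch = (xs + ys) % spacing == 0              # diagonal hatch lines
    img[hatch & (ndotl < 0.5)] = 0.0              # mid-tones: single hatching
    cross = (xs - ys) % spacing == 0
    img[cross & (ndotl < 0.25)] = 0.0             # dark tones: cross-hatching
    img[ndotv < edge_thresh] = 0.0                # silhouette: normal ⟂ view
    return img
```

    The two components correspond directly to the hybridization described above: the lighting term drives the pen-and-ink texture, and the silhouette pass outlines the object.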