
    Self-similar texture for coherent line stylization

    Stylized line rendering for animation has traditionally traded off between two undesirable artifacts: stroke-texture sliding and stroke-texture stretching. This paper proposes a new stroke-texture representation, the self-similar line artmap (SLAM), which avoids both artifacts. SLAM textures provide continuous, infinite zoom while maintaining an approximately constant appearance in screen space, and can be produced automatically from a single exemplar. SLAMs can be used as drop-in replacements for conventional stroke textures in 2D illustration and animation. Furthermore, SLAMs enable a new, simple approach to temporally coherent rendering of 3D paths that is suitable for interactive applications. We demonstrate results for 2D and 3D animations.
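
    The core trick can be illustrated with a fractional-level lookup: doubling the zoom advances one self-similar artmap level, and wrapping around the level sequence yields the "infinite" zoom. Below is a minimal 1D sketch of that blending idea, assuming each level spans a 2x scale range; the function names and the toy random artmap are invented for illustration and are not the paper's implementation.

```python
import numpy as np

def slam_level_blend(zoom: float, num_levels: int):
    """Map a continuous zoom factor to two bracketing artmap levels and a
    blend weight, so stroke appearance stays roughly constant in screen
    space. Assumes each level covers a 2x range of scales (an assumption,
    not necessarily the paper's exact parameterization)."""
    t = np.log2(max(zoom, 1e-6)) % num_levels  # wrap levels -> "infinite" zoom
    lo = int(np.floor(t)) % num_levels
    hi = (lo + 1) % num_levels
    return lo, hi, t - np.floor(t)

def sample_stroke(levels, zoom: float, u: float) -> float:
    """Sample stroke opacity at arc-length parameter u in [0, 1) by
    linearly blending the two bracketing artmap levels."""
    lo, hi, w = slam_level_blend(zoom, len(levels))
    x_lo = int(u * levels[lo].shape[0]) % levels[lo].shape[0]
    x_hi = int(u * levels[hi].shape[0]) % levels[hi].shape[0]
    return (1.0 - w) * levels[lo][x_lo] + w * levels[hi][x_hi]

# Toy artmap: four 1D opacity profiles standing in for real SLAM levels.
rng = np.random.default_rng(0)
levels = [rng.random(256) for _ in range(4)]
print(sample_stroke(levels, zoom=2.7, u=0.4))
```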

    Implicit Brushes for Stylized Line-based Rendering

    We introduce a new technique called Implicit Brushes to render animated 3D scenes with stylized lines in real time with temporal coherence. An Implicit Brush is defined at a given pixel by the convolution of a brush footprint along a feature skeleton; the skeleton itself is obtained by locating surface features in the pixel neighborhood. Features are identified via image-space fitting techniques that extract not only their location but also their profile, which makes it possible to distinguish between sharp and smooth features. Profile parameters are then mapped to stylistic parameters such as brush orientation, size, or opacity to give rise to a wide range of line-based styles.
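
    As a rough illustration of the pipeline (locate features in a pixel neighborhood, map the local profile to style, then convolve a brush footprint), the sketch below uses gradient magnitude as a stand-in for the fitted feature profile and an isotropic Gaussian as the footprint; the real method fits oriented profiles in image space and supports much richer, styled footprints.

```python
import numpy as np
from scipy import ndimage

def implicit_brush_response(gray: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """Toy sketch of the Implicit Brush idea, not the paper's algorithm:
    detect edge-like features, treat gradient magnitude as the feature
    profile, map it to brush opacity, and splat a footprint by convolution."""
    gx = ndimage.sobel(gray.astype(float), axis=1)
    gy = ndimage.sobel(gray.astype(float), axis=0)
    mag = np.hypot(gx, gy)
    # Assumed stylistic mapping: a sharper profile gives a more opaque stroke.
    opacity = mag / (mag.max() + 1e-8)
    # Convolve the per-pixel opacity with an isotropic Gaussian footprint.
    return ndimage.gaussian_filter(opacity, sigma=sigma)
```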

    Human Emotional Care Purposed Automatic Remote Portrait Drawing Generation and Display System Using Wearable Heart Rate Sensor and Smartphone Camera with Depth Perception

    We propose a system that automatically generates portrait drawings for the purpose of human emotional care. Our system comprises two parts: a smartphone application and a server. The smartphone application enables the user to take photographs throughout the day while acquiring heart rates from the smartwatch worn by the user. The server collects the photographs and heart rates and displays portrait drawings automatically stylized from the photograph of the most exciting moment of the day. With this system, the user can recall the exciting and happy moments of the day by admiring the drawings, supporting emotional well-being. To stylize photographs as portrait drawings, we employ nonphotorealistic rendering (NPR) methods, including a portrait etude stylization proposed in this paper. Finally, the effectiveness of our system is demonstrated through user studies.
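
    One plausible reading of the selection step is that the server simply picks the capture whose associated heart rate is highest; the sketch below assumes exactly that, with invented types and field names, and the paper's actual criterion may be more elaborate.

```python
from dataclasses import dataclass

@dataclass
class Capture:
    photo_path: str
    heart_rate_bpm: float  # heart rate sampled near the capture time

def most_exciting(captures: list[Capture]) -> Capture:
    """Pick the day's 'most exciting moment' as the capture with the
    highest heart rate (an assumed selection rule, not the paper's)."""
    return max(captures, key=lambda c: c.heart_rate_bpm)

day = [Capture("morning.jpg", 72.0), Capture("hike.jpg", 128.0),
       Capture("dinner.jpg", 85.0)]
print(most_exciting(day).photo_path)  # -> hike.jpg
```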

    Perceptually Inspired Real-time Artistic Style Transfer for Video Stream

    This study presents a real-time texture transfer method for artistic style transfer on video streams. We propose a parallel framework using a T-shaped kernel to enhance computational performance. For accelerated motion estimation, which is required to maintain temporal coherence, we present a method using a downscaled motion field that achieves high real-time performance for texture transfer on video streams. In addition, to enhance artistic quality, we calculate the level of abstraction using visual saliency and integrate it with the texture transfer algorithm. Thus, our algorithm can stylize video with perceptual enhancements.
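
    The downscaled-motion-field idea can be sketched as: estimate optical flow at a fraction of the input resolution, upsample and rescale the vectors, then warp the previous stylized frame toward the current one so the transferred texture follows the motion. The code below is an assumption-laden sketch using OpenCV's Farneback flow; the paper's flow algorithm and scale factor may differ.

```python
import cv2
import numpy as np

def downscaled_flow_warp(prev_gray, curr_gray, prev_stylized, scale=0.25):
    """Warp the previous stylized frame into the current frame using flow
    estimated at reduced resolution (illustrative parameters throughout)."""
    h, w = curr_gray.shape
    small_prev = cv2.resize(prev_gray, None, fx=scale, fy=scale)
    small_curr = cv2.resize(curr_gray, None, fx=scale, fy=scale)
    flow = cv2.calcOpticalFlowFarneback(small_prev, small_curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow = cv2.resize(flow, (w, h)) / scale  # rescale vectors to full res
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x - flow[..., 0]).astype(np.float32)
    map_y = (grid_y - flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_stylized, map_x, map_y, cv2.INTER_LINEAR)
```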

    Controllable Neural Synthesis for Natural Images and Vector Art

    Neural image synthesis approaches have become increasingly popular in recent years due to their ability to generate photorealistic images useful for several applications, such as digital entertainment, mixed reality, synthetic dataset creation, and computer art, to name a few. Despite this progress, current approaches lack two important aspects: (a) they often fail to capture long-range interactions in the image, and as a result fail to generate scenes with complex dependencies between their different objects or parts; (b) they often ignore the underlying 3D geometry of the shape or scene in the image, and as a result frequently lose coherency and detail.

    My thesis proposes novel solutions to the above problems. First, I propose a neural transformer architecture that captures long-range interactions and context for image synthesis at high resolutions, leading to the synthesis of interesting phenomena in scenes, such as reflections of landscapes onto water or flora consistent with the rest of the landscape, which were not possible to generate reliably with previous ConvNet- and transformer-based approaches. The key idea of the architecture is to sparsify the transformer's attention matrix at high resolutions, guided by dense attention extracted at a lower image resolution. I present qualitative and quantitative results, along with user studies, demonstrating the effectiveness of the method and its superiority over the state of the art. Second, I propose a method that generates artistic images with the guidance of input 3D shapes. In contrast to previous methods, the use of a geometric representation of 3D shape enables the synthesis of more precise stylized drawings with fewer artifacts. My method outputs the synthesized images in a vector representation, enabling richer downstream analysis or editing in interactive applications. I also show that the method produces substantially better results than existing image-based methods, both in predicting artists' drawings and in user evaluation of results.
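
    The sparsified-attention idea behind the first contribution can be sketched as: compute dense attention at low resolution, keep each low-res query's top-k keys, and lift that mask to the high-resolution token grid. The sketch below shows only the masking logic, with invented shapes and parameters; an efficient implementation would avoid materializing the full high-resolution score matrix.

```python
import torch
import torch.nn.functional as F

def guided_sparse_attention(q, k, v, guide_attn, upscale, top_k=8):
    """q, k, v: (N_hi, d) high-res tokens; guide_attn: (N_lo, N_lo) dense
    low-res attention; upscale: high-res tokens per low-res token.
    Keeps only key positions whose low-res parents rank in the guide's
    top-k for the query's parent (illustrative, not the thesis code)."""
    N_hi, d = q.shape
    parents = torch.arange(N_hi, device=q.device) // upscale  # hi -> lo index
    topk = guide_attn.topk(top_k, dim=-1).indices             # (N_lo, top_k)
    allowed = torch.zeros(guide_attn.shape, dtype=torch.bool, device=q.device)
    allowed.scatter_(1, topk, True)                           # low-res mask
    mask = allowed[parents][:, parents]                       # lift to hi res
    scores = (q @ k.T) / d ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v
```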

    Temporally Coherent Video Stylization

    The transformation of video clips into stylized animations remains an active research topic in Computer Graphics. A key challenge is to reproduce the look of traditional artistic styles whilst minimizing distracting flickering and sliding artifacts, i.e. with temporal coherence. This chapter surveys the spectrum of available video stylization techniques, focusing on algorithms encouraging the temporally coherent placement of rendering marks, and discusses the trade-offs necessary to achieve coherence. We begin with flow-based adaptations of stroke-based rendering (SBR) and texture advection capable of painting video. We then chart the development of the field, and its fusion with Computer Vision, to deliver coherent mid-level scene representations. These representations enable the rotoscoping of rendering marks onto temporally coherent video regions, enhancing the diversity and temporal coherence of stylization. In discussing coherence, we formalize the problem of temporal coherence in terms of three defined criteria, and compare and contrast video stylization techniques using them.

    Learning to Warp for Style Transfer


    StylerDALLE: Language-Guided Style Transfer Using a Vector-Quantized Tokenizer of a Large-Scale Generative Model

    Despite the progress made on the style transfer task, most previous work focuses on transferring only relatively simple features like color or texture, while missing more abstract concepts such as overall artistic expression or painter-specific traits. However, these abstract semantics can be captured by models like DALL-E or CLIP, which have been trained on huge datasets of images and textual documents. In this paper, we propose StylerDALLE, a style transfer method that exploits both of these models and uses natural language to describe abstract art styles. Specifically, we formulate the language-guided style transfer task as non-autoregressive token sequence translation, i.e., from input content image to output stylized image, in the discrete latent space of a large-scale pretrained vector-quantized tokenizer, e.g., the discrete variational autoencoder (dVAE) of DALL-E. To incorporate style information, we propose a reinforcement learning strategy with CLIP-based language supervision that ensures stylization and content preservation simultaneously. Experimental results demonstrate the superiority of our method, which can effectively transfer art styles using language instructions at different granularities. Code is available at https://github.com/zipengxuc/StylerDALLE.
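
    A hedged sketch of the kind of CLIP-based supervision described: reward stylization as image-text similarity to the style prompt and content preservation as image-image similarity to the input, then combine the two. The weighting, cosine form, and function names below are assumptions, not StylerDALLE's actual reward.

```python
import torch
import clip  # OpenAI CLIP, https://github.com/openai/CLIP

@torch.no_grad()
def clip_reward(stylized, content, style_text, model, alpha=0.5):
    """stylized, content: CLIP-preprocessed image batches (B, 3, 224, 224).
    Returns a per-image reward blending style and content terms; alpha is
    an assumed trade-off weight, not a value from the paper."""
    device = stylized.device
    txt = model.encode_text(clip.tokenize([style_text]).to(device))  # (1, d)
    f_sty = model.encode_image(stylized)                             # (B, d)
    f_con = model.encode_image(content)                              # (B, d)
    cos = torch.nn.functional.cosine_similarity
    style_score = cos(f_sty, txt.expand_as(f_sty))   # stylization term
    content_score = cos(f_sty, f_con)                # content preservation
    return alpha * style_score + (1 - alpha) * content_score
```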