3,185 research outputs found

    Exploring the structure of a real-time, arbitrary neural artistic stylization network

    In this paper, we present a method that combines the flexibility of the neural algorithm of artistic style with the speed of fast style-transfer networks, allowing real-time stylization using any content/style image pair. We build upon recent work leveraging conditional instance normalization for multi-style transfer networks by learning to predict the conditional instance normalization parameters directly from a style image. The model is successfully trained on a corpus of roughly 80,000 paintings and is able to generalize to previously unobserved paintings. We demonstrate that the learned embedding space is smooth, contains rich structure, and organizes semantic information associated with paintings in an entirely unsupervised manner.
    Comment: Accepted as an oral presentation at the British Machine Vision Conference (BMVC) 201
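    The core mechanism the abstract describes, conditional instance normalization, can be sketched in a few lines: features are normalized per channel and then re-scaled and re-shifted with style-specific parameters. Below is a minimal numpy sketch under the assumption of a single (C, H, W) feature map; in the paper these `gamma`/`beta` vectors are predicted by a network from a style image, while here they are simply passed in.

```python
import numpy as np

def conditional_instance_norm(x, gamma, beta, eps=1e-5):
    """Hypothetical sketch of conditional instance normalization.

    x: feature maps of shape (C, H, W) for one image.
    gamma, beta: per-channel style parameters of shape (C,); the paper's
    network predicts these directly from a style image.
    """
    mu = x.mean(axis=(1, 2), keepdims=True)   # per-channel spatial mean
    var = x.var(axis=(1, 2), keepdims=True)   # per-channel spatial variance
    x_norm = (x - mu) / np.sqrt(var + eps)    # instance-normalize
    return gamma[:, None, None] * x_norm + beta[:, None, None]

# Switching styles amounts to swapping (gamma, beta); the convolutional
# weights stay fixed, which is what makes arbitrary stylization fast.
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 8, 8))
styled = conditional_instance_norm(feats, gamma=np.ones(3), beta=np.zeros(3))
```

    With identity parameters (`gamma=1`, `beta=0`) this reduces to plain instance normalization, which is why a single network can host many styles.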

    A framework for digital sunken relief generation based on 3D geometric models

    Sunken relief is a special art form of sculpture in which the depicted shapes are sunk into a given surface. It is traditionally created by laboriously carving materials such as stone. Sunken reliefs often utilize engraved lines or strokes to strengthen the impression of a 3D presence and to highlight features that would otherwise remain hidden. In other types of reliefs, smooth surfaces and their shadows convey such information in a coherent manner. Existing methods for relief generation focus on forming a smooth surface with a shallow depth that conveys the presence of 3D figures. Such methods unfortunately do not serve the art form of sunken reliefs, as they omit feature lines. We propose a framework that produces sunken reliefs from a known 3D geometry, transforming the 3D objects into three layers of input to incorporate the contour lines seamlessly with the smooth surfaces. The three input layers take advantage of geometric information and visual cues to assist the relief generation. This framework adapts existing techniques in line drawing and relief generation, and then combines them organically for this particular purpose.

    Human Emotional Care Purposed Automatic Remote Portrait Drawing Generation and Display System Using Wearable Heart Rate Sensor and Smartphone Camera with Depth Perception

    We propose a system that automatically generates portrait drawings for the purpose of human emotional care. Our system comprises two parts: a smartphone application and a server. The smartphone application enables the user to take photographs throughout the day while acquiring heart rates from the smartwatch worn by the user. The server collects the photographs and heart rates and displays portrait drawings automatically stylized from the photograph taken at the most exciting moment of the day. With the system, the user can recall the exciting and happy moments of the day by admiring the drawings, providing emotional comfort. To stylize photographs as portrait drawings, we employ nonphotorealistic rendering (NPR) methods, including a portrait etude stylization proposed in this paper. Finally, the effectiveness of our system is demonstrated through user studies.

    Implicit Brushes for Stylized Line-based Rendering

    We introduce a new technique called Implicit Brushes to render animated 3D scenes with stylized lines in real time with temporal coherence. An Implicit Brush is defined at a given pixel by the convolution of a brush footprint along a feature skeleton; the skeleton itself is obtained by locating surface features in the pixel neighborhood. Features are identified via image-space fitting techniques that extract not only their location but also their profile, which makes it possible to distinguish between sharp and smooth features. Profile parameters are then mapped to stylistic parameters such as brush orientation, size, or opacity to give rise to a wide range of line-based styles.
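    The central idea of the abstract, convolving a brush footprint along an extracted feature skeleton, can be illustrated with a toy splatting sketch. The names `render_implicit_brush`, the skeleton, and the footprint below are all hypothetical stand-ins; the paper extracts the skeleton from image-space surface features and modulates the footprint by per-feature profile parameters, which this sketch omits.

```python
import numpy as np

def render_implicit_brush(skeleton, footprint):
    """Splat a brush footprint at every skeleton pixel, i.e. a discrete
    convolution of the footprint along the extracted feature skeleton."""
    h, w = skeleton.shape
    fh, fw = footprint.shape
    canvas = np.zeros((h + fh - 1, w + fw - 1))
    for y, x in zip(*np.nonzero(skeleton)):
        canvas[y:y + fh, x:x + fw] += footprint
    return canvas

# A short horizontal feature line rendered with a 3x3 soft footprint.
skeleton = np.zeros((8, 8), dtype=bool)
skeleton[4, 2:6] = True
footprint = np.array([[0.1, 0.2, 0.1],
                      [0.2, 1.0, 0.2],
                      [0.1, 0.2, 0.1]])
stroke = render_implicit_brush(skeleton, footprint)
```

    In the full technique, footprint size, orientation, and opacity would vary per pixel according to the fitted feature profile, which is what yields the range of line styles.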

    ColorSketch: A Drawing Assistant for Generating Color Sketches from Photos


    AlteredAvatar: Stylizing Dynamic 3D Avatars with Fast Style Adaptation

    This paper presents a method that can quickly adapt dynamic 3D avatars to arbitrary text descriptions of novel styles. Among existing approaches for avatar stylization, direct optimization methods can produce excellent results for arbitrary styles but are unpleasantly slow; furthermore, they require redoing the optimization process from scratch for every new input. Fast approximation methods using feed-forward networks trained on a large dataset of style images can generate results for new inputs quickly, but tend not to generalize well to novel styles and fall short in quality. We therefore investigate a new approach, AlteredAvatar, that combines those two approaches using the meta-learning framework. In the inner loop, the model learns to optimize to match a single target style well, while in the outer loop, the model learns to stylize efficiently across many styles. After training, AlteredAvatar has learned an initialization that can adapt to a novel style within a small number of update steps, where the style can be given as text, a reference image, or a combination of both. We show that AlteredAvatar achieves a good balance between speed, flexibility, and quality, while maintaining consistency across a wide range of novel views and facial expressions.
    Comment: 10 main pages, 14 figures. Project page: https://alteredavatar.github.i
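    The inner/outer loop recipe the abstract describes can be sketched with a Reptile-style meta-update on a toy quadratic "stylization loss". This is an illustrative assumption, not the paper's implementation: AlteredAvatar optimizes avatar network weights against a learned style objective, whereas here each style is reduced to a target vector and the loss is simply the squared distance to it.

```python
import numpy as np

rng = np.random.default_rng(1)
style_targets = rng.normal(size=(5, 4))   # one optimum per training style
w = np.zeros(4)                           # shared initialization (meta-learned)

inner_lr, meta_lr, inner_steps = 0.1, 0.5, 5
for _ in range(200):                      # outer loop: learn across styles
    s = style_targets[rng.integers(len(style_targets))]
    w_i = w.copy()
    for _ in range(inner_steps):          # inner loop: adapt to one style
        grad = 2.0 * (w_i - s)            # gradient of ||w_i - s||^2
        w_i -= inner_lr * grad
    w += meta_lr * (w_i - w)              # Reptile-style meta-update

# After meta-training, a few inner steps suffice for a novel style.
novel = rng.normal(size=4)
w_fast = w.copy()
for _ in range(inner_steps):
    w_fast -= inner_lr * 2.0 * (w_fast - novel)
```

    The point of the meta-update is that the learned initialization `w` sits where a handful of gradient steps make large progress toward any style in (or near) the training distribution, which is the speed/quality trade-off the abstract claims.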