16 research outputs found

    Unifying Color and Texture Transfer for Predictive Appearance Manipulation

    Recent color transfer methods use local information to learn the transformation from a source to an exemplar image, and then transfer this appearance change to a target image. These solutions achieve very successful results for general mood changes, e.g., changing the appearance of an image from "sunny" to "overcast". However, such methods have a hard time creating new image content, such as leaves on a bare tree. Texture transfer, on the other hand, can synthesize such content but tends to destroy image structure. We propose the first algorithm that unifies color and texture transfer, outperforming both by leveraging their respective strengths. A key novelty of our approach lies in teasing apart appearance changes that can be modeled simply as changes in color from those that require new image content to be generated. Our method starts with an analysis phase that evaluates the success of color transfer by comparing the exemplar with the source. This analysis then drives a selective, iterative texture transfer algorithm that simultaneously predicts the success of color transfer on the target and synthesizes new content where needed. We demonstrate our unified algorithm by transferring large temporal changes between photographs, such as a change of season (e.g., leaves on bare trees, piles of snow on a street) and flooding.
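
    The color-transfer half of such a pipeline can be illustrated by a much simpler global variant: matching per-channel statistics of the source to the exemplar (Reinhard-style transfer). The sketch below shows only that idea, not the paper's local, learned method; the function name `color_transfer` is illustrative.

```python
import numpy as np

def color_transfer(source, exemplar):
    """Match per-channel mean and standard deviation of `source`
    to those of `exemplar` (Reinhard-style global transfer).

    Both inputs are float arrays of shape (H, W, 3) in [0, 1].
    """
    src_mean, src_std = source.mean(axis=(0, 1)), source.std(axis=(0, 1))
    ex_mean, ex_std = exemplar.mean(axis=(0, 1)), exemplar.std(axis=(0, 1))
    # Shift and scale each channel; guard against zero variance.
    out = (source - src_mean) * (ex_std / np.maximum(src_std, 1e-6)) + ex_mean
    return np.clip(out, 0.0, 1.0)
```

    A global transfer like this succeeds for overall mood shifts but, as the abstract notes, cannot create new content, which is exactly the gap the selective texture transfer fills.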

    Video Motion Stylization by 2D Rigidification

    This paper introduces a video stylization method that increases the apparent rigidity of motion. Existing stylization methods often retain the 3D motion of the original video, making the result look like a 3D scene covered in paint rather than a 2D painting of a scene. In contrast, traditional hand-drawn animations often exhibit simplified in-plane motion, as in cutout animations where the animator moves pieces of paper from frame to frame. Inspired by this technique, we propose to modify a video so that its content undergoes 2D rigid transforms. To achieve this goal, our approach applies motion segmentation and optimization to best approximate the input optical flow with piecewise-rigid transforms, and re-renders the video so that its content follows the simplified motion. The output of our method is a new video and its optical flow, which can be fed to any existing video stylization algorithm.
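
    The core fitting step, approximating the flow of one motion segment with a single 2D rigid transform, reduces to an orthogonal Procrustes problem. Below is a minimal sketch of that least-squares fit (Kabsch algorithm), assuming point correspondences have already been sampled from the dense flow; the name `fit_rigid_2d` is illustrative, not from the paper.

```python
import numpy as np

def fit_rigid_2d(points, flow):
    """Least-squares 2D rigid transform (rotation R, translation t)
    mapping `points` onto `points + flow` (Kabsch / Procrustes).

    points, flow: arrays of shape (N, 2). Returns (R, t) with R a
    2x2 rotation matrix, so that dst ~ points @ R.T + t.
    """
    src = points
    dst = points + flow
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # SVD of the cross-covariance yields the optimal rotation.
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

    Running such a fit per segment, and re-rendering each segment with its fitted transform, is the "piecewise-rigid" simplification the abstract describes.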

    Non-photorealistic rendering of portraits

    We describe an image-based non-photorealistic rendering pipeline for creating portraits in two styles. The first is a somewhat “puppet”-like rendering that treats the face as a relatively uniform smooth surface, with the geometry emphasised by shading. The second style is inspired by the artist Julian Opie, in which the human face is reduced to its essentials: homogeneous skin, thick black lines, and facial features such as the eyes and nose represented in a cartoon manner. Our method automatically generates these stylisations without requiring the input images to be tightly cropped or in a direct frontal view, and moreover performs abstraction while maintaining the distinctiveness of the portraits (i.e., they remain recognisable).
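
    A toy version of the Opie-inspired look, flat colour bands plus thick dark lines at strong gradients, can be sketched in plain NumPy. This is an illustrative stand-in (`opie_stylize` and its thresholds are assumptions), not the paper's face-aware pipeline:

```python
import numpy as np

def opie_stylize(image, levels=4, edge_thresh=0.2):
    """Posterise colours into a few flat bands and overlay thick
    black lines where the luminance gradient is strong.

    image: float array of shape (H, W, 3) in [0, 1].
    """
    # Flatten colour into `levels` uniform bands per channel.
    flat = np.floor(image * levels) / levels + 0.5 / levels
    # Mark pixels with a strong luminance gradient as line pixels.
    gray = image.mean(axis=2)
    gy, gx = np.gradient(gray)
    edges = np.hypot(gx, gy) > edge_thresh
    # Thicken the lines by one pixel in each direction.
    thick = edges.copy()
    thick[1:, :] |= edges[:-1, :]; thick[:-1, :] |= edges[1:, :]
    thick[:, 1:] |= edges[:, :-1]; thick[:, :-1] |= edges[:, 1:]
    out = flat.copy()
    out[thick] = 0.0
    return np.clip(out, 0.0, 1.0)
```

    The paper's pipeline additionally localises facial features so the eyes, nose, and mouth survive the abstraction; this sketch applies the same treatment everywhere.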

    PatchTable: efficient patch queries for large datasets and applications

    This paper presents a data structure that reduces approximate nearest neighbor query times for image patches in large datasets. Previous work in texture synthesis has demonstrated real-time synthesis from small exemplar textures. However, high performance has proved elusive for modern patch-based optimization techniques, which frequently use many exemplar images of tens of megapixels or more. Our new algorithm, PatchTable, offloads as much of the computation as possible to a pre-computation stage that takes modest time, so patch queries can be as efficient as possible. There are three key insights behind our algorithm: (1) a lookup table similar to locality-sensitive hashing can be precomputed and used to seed sufficiently good initial patch correspondences during querying, (2) missing entries in the table can be filled during pre-computation with our fast Voronoi transform, and (3) the initially seeded correspondences can be improved with a precomputed k-nearest-neighbors mapping. We show experimentally that this accelerates the patch query operation by up to 9x over k-coherence, up to 12x over TreeCANN, and up to 200x over PatchMatch. Our fast algorithm allows us to explore efficient and practical imaging and computational photography applications. We show results for artistic video stylization, light field super-resolution, and multi-image inpainting.

    An Approach To Painterly Rendering

    An often overlooked key component of 3D animations is the rendering engine. However, some rendering techniques are hard to implement or are too restrictive in terms of the imagery they can produce. The goal of this thesis is to make easy-to-use software that artists can use to create stylistic animations and that minimizes the technical constraints placed on the art. For this project, I present a tool that allows artists to create temporally coherent, painterly animations using Autodesk Maya and Corel Painter. I then use that tool to create proof-of-concept animations. This rendering technique offers artists a different avenue through which to showcase their art, along with certain freedoms that current computer graphics techniques lack. Accompanying this paper are animations demonstrating possible outcomes; they are available in the Texas A&M online library catalog system. The painting system used for this project expands upon an algorithm designed by Barbara Meier of the Disney Research Group that involves spreading particles across a surface and using those particles to define brush strokes. The first step is to infer the general syntax of Painter’s commands by using Painter and its ability to record a painting made by an artist. The next step is to use the commands and syntax that Painter uses in the automated creation of scripts that generate the paintings used for the animation. As this thesis is designed to showcase a rendering technique, I selected animations made by fellow candidates for the Master of Science and Master of Fine Arts degrees in Visualization whose qualities are accented by a painterly treatment, and rendered them using this technique.
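
    The particle-seeding step that a Meier-style system builds on can be sketched in a 2D setting: scatter particles on a jittered grid and let each one sample the reference render to become a stroke record. A minimal sketch under those assumptions (`stroke_particles` is an illustrative name; Meier's original places particles on 3D surfaces so they stay coherent across frames):

```python
import numpy as np

def stroke_particles(reference, spacing=4, jitter=0.5, seed=0):
    """Place particles on a jittered grid over `reference` and turn
    each into a stroke record (position, colour, size).

    reference: float array of shape (H, W, 3); jitter is a fraction
    of the grid spacing.
    """
    rng = np.random.default_rng(seed)
    H, W = reference.shape[:2]
    strokes = []
    for y in np.arange(spacing / 2, H, spacing):
        for x in np.arange(spacing / 2, W, spacing):
            # Jitter the grid position, then sample the nearest pixel.
            jy = y + rng.uniform(-jitter, jitter) * spacing
            jx = x + rng.uniform(-jitter, jitter) * spacing
            iy = int(np.clip(jy, 0, H - 1))
            ix = int(np.clip(jx, 0, W - 1))
            strokes.append({"pos": (jy, jx),
                            "color": tuple(reference[iy, ix]),
                            "size": spacing * 1.5})
    return strokes
```

    In the full system each particle persists across frames and is re-projected before the strokes are painted, which is what gives the animation its temporal coherence.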