
    Painterly rendering techniques: A state-of-the-art review of current approaches

    In this publication we look at the different methods presented over the past few decades that attempt to recreate digital paintings. While previous surveys concentrate on the broader subject of non-photorealistic rendering, the focus of this paper is firmly placed on painterly rendering techniques. We compare methods used to produce different output painting styles such as abstract, colour pencil, watercolour, oriental, oil and pastel. Whereas some methods demand a high level of interaction from a skilled artist, others require only simple parameters provided by a user with little or no artistic experience. Many methods attempt to provide more automation through varying forms of reference data, ranging from still photographs and video to 3D polygonal meshes or even 3D point clouds. The techniques presented here endeavour to provide tools and styles that are not traditionally available to an artist. Copyright © 2012 John Wiley & Sons, Ltd.

    A Process to Create Dynamic Landscape Paintings Using Barycentric Shading with Control Paintings

    In this work, we present a process that uses a Barycentric shading method to create dynamic landscape paintings that change based on the time of day. Our process allows for the creation of dynamic paintings for any time of day using only a limited number of control paintings. As a proof of concept, we have used landscape paintings by Edgar Payne, one of the leading landscape painters of the American West. His specific style, which blends Impressionism with the styles of other painters of the American West, is particularly appropriate for demonstrating the power of our Barycentric shading method.
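The core idea, blending a small set of control paintings with non-negative weights that sum to one, can be sketched as follows. The function name and the toy colour values are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

def barycentric_blend(control_paintings, weights):
    """Blend K control paintings (each an H x W x 3 array) with
    barycentric weights (non-negative, summing to 1)."""
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "weights must sum to 1"
    stack = np.stack(control_paintings)          # (K, H, W, 3)
    return np.tensordot(weights, stack, axes=1)  # weighted sum -> (H, W, 3)

# Toy 1x1 "paintings" for three control times of day.
dawn = np.array([[[0.9, 0.4, 0.2]]])
noon = np.array([[[1.0, 1.0, 1.0]]])
dusk = np.array([[[0.2, 0.3, 0.7]]])

# A time of day maps to a weight triple; halfway between dawn and noon:
mid_morning = barycentric_blend([dawn, noon, dusk], [0.5, 0.5, 0.0])
```

In this reading, each control painting sits at a vertex of a simplex and any intermediate time of day is a barycentric point inside it, so a handful of paintings covers the whole day.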

    Deadly Pleasure

    In this work, a girl’s father is one of the victims of a mass political persecution by Emperor Yongzheng in eighteenth-century China, and he is killed in front of his family. The Emperor is present at the killing and notices the girl and her gorgeous, long hair. Because of this, the girl survives the massacre of her entire family and is sent away to be trained as a concubine. The girl’s goal becomes to work hard to be chosen as a concubine by the Emperor so she can avenge the murder of her family with her gorgeous, magical hair. Deadly Pleasure is my Master of Fine Arts thesis film with a total runtime of five minutes and thirteen seconds. It is a 2D animation produced primarily in TVPaint, Photoshop, and After Effects. This paper outlines the entire process of creating the film, from the very beginning of the story ideas to the final screening version. It describes my intentions, obstacles, challenges and successes, as well as the problem-solution process.

    Shading with Painterly Filtered Layers: A Process to Obtain Painterly Portraits

    In this thesis, I study how color data from different styles of paintings can be extracted from photography, with the end result maintaining the artistic integrity of the art style while having the look and feel of skin. My inspiration for this work came from the impasto-style portraitures of painters such as Rembrandt and Greg Cartmell. I analyzed the important visual characteristics of both Rembrandt’s and Cartmell’s styles of painting. These include how the artist develops shadow and shading, creates the illusion of subsurface scattering, and applies color to the canvas; they serve as references for developing the final renders in computer graphics. I also examined how color information can be extracted from portrait photography in order to gather accurate dark, medium, and light skin shades. Based on this analysis, I have developed a process for creating portrait paintings from 3D facial models. My process consists of four stages: (1) modeling a 3D portrait of the subject, (2) collecting data by photographing the subject, (3) developing a Barycentric shader using the photographs, and (4) compositing with filtered layers. My contributions have been in stages (3) and (4), as follows: development of an impasto-style Barycentric shader that extracts color information from the gathered photographic images and can produce realistic-looking skin rendering; and development of a compositing technique that filters layers of images corresponding to different effects such as diffuse, specular and ambient. As a proof of concept, I have created a few animations of the impasto-style portrait painting for a single subject. For these animations, I have also sculpted a high-polygon-count 3D model of the torso and head of my subject. Using my shading and compositing techniques, I have created rigid-body animations that demonstrate the power of my techniques to obtain impasto-style portraiture during animation under different lighting conditions.
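Stage (4), compositing filtered render layers into a final frame, can be illustrated with a minimal sketch. The identity filter here is a stand-in for a real painterly filter, and all names and values are hypothetical rather than the thesis code:

```python
import numpy as np

def composite_painterly(layers, filters, weights):
    """Apply a per-layer filter to each render layer (e.g. diffuse,
    specular, ambient) and sum the weighted results."""
    out = np.zeros_like(layers[0])
    for layer, filt, w in zip(layers, filters, weights):
        out += w * filt(layer)
    return out

identity = lambda img: img           # swap in a real painterly filter
diffuse  = np.full((2, 2, 3), 0.5)   # toy constant-colour layers
specular = np.full((2, 2, 3), 0.2)
ambient  = np.full((2, 2, 3), 0.1)

frame = composite_painterly([diffuse, specular, ambient],
                            [identity, identity, identity],
                            [1.0, 1.0, 1.0])
```

Separating the effects into layers before filtering is what lets each effect (skin highlights, shadow tone) receive its own painterly treatment before the additive composite.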

    Art Directed Shader for Real Time Rendering - Interactive 3D Painting

    In this work, I develop an approach to include Global Illumination (GI) effects in non-photorealistic real-time rendering; real-time rendering is one of the main areas of focus in the gaming industry and in the booming virtual reality (VR) and augmented reality (AR) industries. My approach is based on adapting the Barycentric shader to create a wide variety of painting effects. This shader helps achieve the look of a 2D painting in an interactively rendered 3D scene, and it accommodates robust computation of artistic reflection and refraction. My contributions can be summarized as follows: development of a generalized Barycentric shader that provides artistic control, integration of this generalized Barycentric shader into an interactive ray tracer, and interactive rendering of a 3D scene that closely represents the reference painting.

    Wholetoning: Synthesizing Abstract Black-and-White Illustrations

    Black-and-white imagery is a popular and interesting depiction technique in the visual arts, in which varying tints and shades of a single colour are used. Within the realm of black-and-white images, there is a set of black-and-white illustrations that depict only salient features, ignoring details and reducing colour to pure black and white with no intermediate tones. These illustrations hold tremendous potential to enrich decoration, human communication and entertainment. Producing abstract black-and-white illustrations by hand is a time-consuming and difficult process that requires both artistic talent and technical expertise. Previous work has not explored this style of illustration in much depth, and simple approaches such as thresholding are insufficient for stylization and artistic control. I use the word wholetoning to refer to illustrations that feature a high degree of shape and tone abstraction. In this thesis, I explore computer algorithms for generating wholetoned illustrations. First, I offer a general-purpose framework, “artistic thresholding”, to control the generation of wholetoned illustrations in an intuitive way. The basic artistic thresholding algorithm is an optimization framework based on simulated annealing that produces the final bi-level result. I design an extensible objective function from my observations of many wholetoned images; it is a weighted sum over terms that encode features common to wholetoned illustrations. Based on the framework, I then explore two specific wholetoned styles: papercutting and representational calligraphy. I define a paper-cut design as a wholetoned image with connectivity constraints that ensure it can be cut out of a single piece of paper. My computer-generated papercutting technique can convert an original wholetoned image into a paper-cut design, and it can also synthesize the stylized and geometric patterns often found in traditional designs. Representational calligraphy is defined as a wholetoned image with the constraint that all depiction elements must be letters. The procedure of generating representational calligraphy designs is formalized as a “calligraphic packing” problem. I provide a semi-automatic technique that can warp a sequence of letters to fit a shape while preserving their readability.
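As a rough illustration of the simulated-annealing idea (not the thesis implementation, whose objective sums many weighted terms over the bi-level image), one can anneal a single global threshold against a hypothetical coverage term:

```python
import random
import numpy as np

def artistic_threshold(gray, objective, steps=2000, t0=1.0, seed=0):
    """Search for a threshold in [0, 1] by simulated annealing,
    minimizing an objective evaluated on the bi-level result."""
    rng = random.Random(seed)
    t = 0.5                                        # starting threshold
    e = objective(gray < t)
    best_t, best_e = t, e
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-6    # linear cooling
        cand = min(1.0, max(0.0, t + rng.gauss(0, 0.05)))
        ce = objective(gray < cand)
        # Accept improvements, and worse moves with Boltzmann probability.
        if ce < e or rng.random() < np.exp(-(ce - e) / temp):
            t, e = cand, ce
            if e < best_e:
                best_t, best_e = t, e
    return best_t

# Hypothetical objective term: penalize deviation from 40% black coverage.
def coverage_term(bw, target=0.4):
    return abs(bw.mean() - target)

gray = np.linspace(0.0, 1.0, 101)   # toy 1-D "image" of intensities
t = artistic_threshold(gray, coverage_term)
```

A richer objective would be a weighted sum of such terms, which is what makes the framework extensible: each term encodes one observed property of wholetoned illustrations.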

    TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models

    We present TexFusion (Texture Diffusion), a new method to synthesize textures for given 3D geometries, using large-scale text-guided image diffusion models. In contrast to recent works that leverage 2D text-to-image diffusion models to distill 3D objects using a slow and fragile optimization process, TexFusion introduces a new 3D-consistent generation technique specifically designed for texture synthesis that employs regular diffusion model sampling on different 2D rendered views. Specifically, we leverage latent diffusion models, apply the diffusion model's denoiser on a set of 2D renders of the 3D object, and aggregate the different denoising predictions on a shared latent texture map. Final output RGB textures are produced by optimizing an intermediate neural color field on the decodings of 2D renders of the latent texture. We thoroughly validate TexFusion and show that we can efficiently generate diverse, high quality and globally coherent textures. We achieve state-of-the-art text-guided texture synthesis performance using only image diffusion models, while avoiding the pitfalls of previous distillation-based methods. The text-conditioning offers detailed control and we also do not rely on any ground truth 3D textures for training. This makes our method versatile and applicable to a broad range of geometry and texture types. We hope that TexFusion will advance AI-based texturing of 3D assets for applications in virtual reality, game design, simulation, and more.

    Comment: Videos and more results on https://research.nvidia.com/labs/toronto-ai/texfusion
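The aggregation step, scattering per-view denoiser outputs into one shared latent texture and averaging where views overlap, might be sketched as the toy NumPy version below. The function, the integer UV lookup, and the shapes are simplifying assumptions, not the paper's code:

```python
import numpy as np

def aggregate_view_predictions(preds, uv_maps, weights, tex_shape):
    """Scatter each view's denoising prediction into a shared latent
    texture map, averaging texels covered by several views."""
    tex = np.zeros(tex_shape)
    wsum = np.zeros(tex_shape[:2] + (1,))
    for pred, uv, w in zip(preds, uv_maps, weights):
        us = uv[..., 0].ravel()                # texel row per pixel
        vs = uv[..., 1].ravel()                # texel column per pixel
        vals = w * pred.reshape(-1, pred.shape[-1])
        np.add.at(tex, (us, vs), vals)         # accumulate predictions
        np.add.at(wsum, (us, vs), w)           # accumulate weights
    return tex / np.maximum(wsum, 1e-8)        # weighted average

# Toy case: one 1x2 view whose two pixels both land on texel (0, 0).
pred = np.array([[[1.0], [3.0]]])              # (H=1, W=2, C=1) prediction
uv = np.zeros((1, 2, 2), dtype=int)            # both pixels -> texel (0, 0)
tex = aggregate_view_predictions([pred], [uv], [1.0], (2, 2, 1))
```

Because every view writes into the same texture map before the next sampling step, the denoising trajectories of the different views stay consistent, which is the 3D-consistency mechanism the abstract describes.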

    The Wellesley News (04-29-1926)
