124 research outputs found
Painterly rendering techniques: A state-of-the-art review of current approaches
In this publication we look at the different methods presented over the past few decades that attempt to recreate digital paintings. While previous surveys concentrate on the broader subject of non-photorealistic rendering, the focus of this paper is firmly placed on painterly rendering techniques. We compare methods used to produce different output painting styles, such as abstract, colour pencil, watercolour, oriental, oil and pastel. Whereas some methods demand a high level of interaction from a skilled artist, others require only simple parameters provided by a user with little or no artistic experience. Many methods attempt to provide more automation through varying forms of reference data, ranging from still photographs and video to 3D polygonal meshes or even 3D point clouds. The techniques presented here endeavour to provide tools and styles that are not traditionally available to an artist. Copyright © 2012 John Wiley & Sons, Ltd.
Stroke Based Painterly Rendering
Many traditional art forms are produced by an artist sequentially placing a set of marks, such as brush strokes, on a canvas. Stroke Based Rendering (SBR) is inspired by this process, and underpins many early and contemporary artistic stylization algorithms. This chapter outlines the origins of SBR, and describes key algorithms for the placement of brush strokes to create painterly renderings from source images. The chapter explores both local greedy and global optimization based approaches to stroke placement. The issue of creative control in SBR is also briefly discussed.
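The local greedy strategy mentioned in the abstract can be illustrated with a toy sketch. This is not the chapter's actual algorithm, just a minimal grayscale illustration under invented names and parameters: square "strokes" of the locally averaged source colour are proposed at random positions and kept only when they reduce reconstruction error against the source image.

```python
import random

def greedy_stroke_painting(src, n_strokes=200, radius=2, seed=0):
    """Toy greedy stroke-based rendering (illustrative only).
    src: 2D list of grayscale values in [0, 1]. Paints square strokes of the
    local mean colour, keeping a stroke only if it lowers reconstruction error."""
    rng = random.Random(seed)
    h, w = len(src), len(src[0])
    canvas = [[0.5] * w for _ in range(h)]  # start from a mid-gray canvas

    def region(y, x):
        # cells covered by a square stroke centred at (y, x)
        return [(i, j)
                for i in range(max(0, y - radius), min(h, y + radius + 1))
                for j in range(max(0, x - radius), min(w, x + radius + 1))]

    for _ in range(n_strokes):
        y, x = rng.randrange(h), rng.randrange(w)
        cells = region(y, x)
        color = sum(src[i][j] for i, j in cells) / len(cells)  # stroke colour = local mean
        before = sum((canvas[i][j] - src[i][j]) ** 2 for i, j in cells)
        after = sum((color - src[i][j]) ** 2 for i, j in cells)
        if after < before:  # greedy accept: keep only error-reducing strokes
            for i, j in cells:
                canvas[i][j] = color
    return canvas
```

Each accepted stroke strictly lowers the global squared error, so the painting converges toward the source while retaining visible stroke structure; global optimization approaches instead score and rearrange whole stroke sets.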
Efficient Example-Based Painting and Synthesis of 2D Directional Texture
We present a new method for converting a photo or image to a synthesized painting following the painting style of an example painting. Treating painting styles of brush strokes as sample textures, we reduce the problem of learning an example painting to a texture synthesis problem. The proposed method uses a hierarchical patch-based approach to the synthesis of directional textures. The key features of our method are: 1) painting styles are represented as one or more blocks of sample textures selected by the user from the example painting; 2) image segmentation and brush stroke directions defined by the medial axis are used to better represent and communicate shapes and objects present in the synthesized painting; 3) image masks and a hierarchy of texture patches are used to efficiently synthesize high-quality directional textures. The synthesis process is further accelerated through texture direction quantization and the use of Gaussian pyramids. Our method has the following advantages. First, the synthesized stroke textures can follow a direction field determined by the shapes of the regions to be painted. Second, the method is very efficient: the generation time of a synthesized painting ranges from a few seconds to about one minute on a commodity PC, rather than the hours required by other existing methods. Furthermore, the technique presented here provides a new and efficient solution to the problem of synthesizing a 2D directional texture. We use a number of test examples to demonstrate the efficiency of the proposed method and the high quality of the results it produces.
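The abstract mentions accelerating synthesis through texture direction quantization. A minimal sketch of that idea (illustrative only; function name, bin count, and representation are assumptions): snap each stroke direction to one of a small number of canonical orientations in [0, π), so only that many rotated copies of the sample texture need to be prepared.

```python
import math

def quantize_directions(angles, n_bins=8):
    """Snap stroke directions (radians) to n_bins canonical orientations in
    [0, pi). Stroke direction is orientation-only, so theta and theta + pi
    are treated as the same direction."""
    step = math.pi / n_bins
    out = []
    for a in angles:
        a = a % math.pi                      # fold into [0, pi)
        bin_idx = int(round(a / step)) % n_bins  # nearest canonical orientation
        out.append(bin_idx * step)
    return out
```

With the field quantized, a patch-based synthesizer can pre-rotate the user-selected sample blocks once per bin instead of rotating per pixel, which is one plausible source of the reported speedup.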
A Van Gogh inspired 3D Shader Methodology
This study develops a surface shading approach for computer-generated 3D head models that adapts aesthetics from the post-impressionist portrait painting style of Vincent Van Gogh. This research is an attempt to reconcile a 2D expressionist style of painting with 3D digital computer-generated imagery. The focus of this research is on developing a surface shading methodology for creating 3D impasto painterly renderings informed by Van Gogh's self-portrait paintings.
Visual analysis of several of Van Gogh's self-portraits reveals the characteristics of his overall rendering style that are essential in designing methods for shading and texturing 3D head models. A shading method is proposed that uses existing surfacing and rendering tools to create 3D digital heads rendered in Van Gogh's style. The designed shading methodology describes procedures that generate brushstroke patterns. User controls for brushstroke profile, size, color and direction are provided to allow variations in the brushstroke patterns. These patterns are used to define thick oil paint surface properties for 3D digital models.
A discussion of the range of results achieved using the designed shading methodology reveals the variations in rendering style that can be achieved, reflecting a wide range of expressive 3D portrait rendering styles. This study is therefore useful in understanding Van Gogh's expressive portrait painting style and in applying the essence of his work to synthesized 3D portraits.
Using Texture Synthesis for Non-Photorealistic Shading from Paint Samples
This paper presents several methods for shading meshes from scanned paint samples that represent dark-to-light transitions. Our techniques emphasize artistic control of brush stroke texture and color. We first demonstrate how the texture of a paint sample can be separated from its color gradient. We then demonstrate three methods, two real-time and one off-line, for producing rendered, shaded images from the texture samples. All three techniques use texture synthesis to generate additional paint samples. Finally, we develop metrics for evaluating how well each method achieves our goal in terms of texture similarity, shading correctness and temporal coherence.
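The separation of a paint sample into texture and color gradient can be sketched in a simplified 1D grayscale form. This is an assumption-laden illustration, not the paper's method: the smooth dark-to-light gradient is estimated with a moving average, and the stroke texture is taken as the residual.

```python
def separate_texture(sample, window=3):
    """Split a 1D grayscale paint sample into a smooth dark-to-light gradient
    plus residual stroke texture (illustrative sketch only).
    Returns (gradient, texture) with sample[i] == gradient[i] + texture[i]."""
    n = len(sample)
    gradient = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        gradient.append(sum(sample[lo:hi]) / (hi - lo))  # moving average = gradient
    texture = [s - g for s, g in zip(sample, gradient)]  # roughly zero-mean residual
    return gradient, texture
```

Once separated, the residual texture can be fed to a texture synthesizer to generate additional samples, and the gradient reapplied at shading time so brightness follows the lighting rather than the scanned sample.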
Higher level techniques for the artistic rendering of images and video
Video painting with space-time-varying style parameters
Artists use different means of stylization to control the
focus on different objects in the scene. This allows them to portray complex meaning and achieve certain artistic effects. Most
prior work on painterly rendering of videos, however, uses only
a single painting style, with fixed global parameters, irrespective of objects and their layout in the images. This often leads to inadequate artistic control. Moreover, brush stroke orientation is typically assumed to follow an everywhere continuous directional field. In this paper, we propose a video painting system that accounts for the spatial support of objects in the images or video, and uses this information to specify style parameters and stroke orientation for painterly rendering. Since objects occupy distinct image locations and move relatively smoothly from one video frame to another, our object-based painterly rendering approach is characterized by style parameters that coherently vary in space and time. Spatiotemporal coherence of varying style parameters enables more artistic freedom, such as emphasis/deemphasis, increase or decrease of contrast, exaggeration or trivialization of different objects in the scene in a temporally coherent fashion.
Given a video that has been segmented into temporally moving
or deforming objects as well as the background, the user can
specify style parameters, such as stroke color, size, and opacity
for each target object in some keyframes. Due to spatiotemporal
coherence of the video object segmentation, this information can
be propagated to the target object in all other frames. The user
may easily reselect new objects and their style parameters in any
frame, which will be then automatically propagated to the entire
video.
Similarly, brush stroke orientations can be specified per object,
and thus can be spatially discontinuous. In addition, brush
stroke orientations can be also be specified for a target object
in some keyframes and propagated to other frames. Unlike
style parameters, brush stroke orientations are transported by
taking into account the underlying objectâs movement such as
translation and rotation.
To generate the painterly processed video, we have developed
a novel image-based renderer that supports object-based style parameters and orientation fields. Our renderer is inspired by flow visualization techniques. Given a frame, it first places the
seeds of brush strokes such as solid disks on the canvas. The
canvas is then advected according to the brush stroke orientation
field. By iteratively blending the advected image with the original canvas of stroke seeds, we can interactively obtain a painted image in which contrast between neighboring strokes is less obvious than traditional renderers in which each curved strokes are constructed explicitly. This helps alleviate the flickering effect often associated with video painting.
The utility of our approach is demonstrated through a number of artistic operations on images and videos, resulting in high-quality, multi-style renderings.
Index Terms: Non-photorealistic rendering, video painting, multi-style painting, tensor field design.
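The advect-and-blend renderer described in the abstract can be sketched in miniature. This is an illustrative reduction, not the authors' implementation: seed disks become single bright pixels, the orientation field is a user-supplied function returning integer per-pixel offsets, and all names and parameters are assumptions.

```python
import random

def paint_frame(h, w, orient, iters=5, n_seeds=40, alpha=0.5, seed=0):
    """Toy advect-and-blend painterly renderer (illustrative sketch).
    orient(y, x) -> (dy, dx): integer advection step at each pixel.
    Scatters stroke seeds, repeatedly advects the canvas along the
    orientation field, and blends the result back with the seed image."""
    rng = random.Random(seed)
    seeds = [[0.0] * w for _ in range(h)]
    for _ in range(n_seeds):  # seed "disks" reduced to single bright pixels here
        seeds[rng.randrange(h)][rng.randrange(w)] = 1.0
    canvas = [row[:] for row in seeds]
    for _ in range(iters):
        # pull-back advection: each pixel samples upstream along the field
        advected = [[canvas[(y - orient(y, x)[0]) % h][(x - orient(y, x)[1]) % w]
                     for x in range(w)] for y in range(h)]
        # blend the advected image with the original canvas of stroke seeds
        canvas = [[alpha * advected[y][x] + (1 - alpha) * seeds[y][x]
                   for x in range(w)] for y in range(h)]
    return canvas
```

With a horizontal orientation field the seeds smear into stroke-like streaks, much as line integral convolution smears noise in flow visualization; because strokes emerge from blending rather than explicit curve construction, neighboring-stroke contrast stays soft, which is the property the abstract credits for reduced flicker.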
- …