Painterly rendering techniques: A state-of-the-art review of current approaches
In this publication we look at the different methods presented over the past few decades that attempt to recreate digital paintings. While previous surveys concentrate on the broader subject of non-photorealistic rendering, the focus of this paper is firmly placed on painterly rendering techniques. We compare methods used to produce different output painting styles such as abstract, colour pencil, watercolour, oriental, oil and pastel. Whereas some methods demand a high level of interaction from a skilled artist, others require simple parameters provided by a user with little or no artistic experience. Many methods attempt to provide more automation through the use of varying forms of reference data, which can range from still photographs and video to 3D polygonal meshes or even 3D point clouds. The techniques presented here endeavour to provide tools and styles that are not traditionally available to an artist. Copyright © 2012 John Wiley & Sons, Ltd.
Benchmarking non-photorealistic rendering of portraits
We present a set of images for helping NPR practitioners evaluate their image-based portrait stylisation algorithms. Using a standard set both facilitates comparisons with other methods and helps ensure that presented results are representative. We give two levels of difficulty, each consisting of 20 images selected systematically so as to provide good coverage of several possible portrait characteristics. We applied three existing portrait-specific stylisation algorithms, two general-purpose stylisation algorithms, and one general learning-based stylisation algorithm to the first level of the benchmark, corresponding to the type of constrained images that have often been used in portrait-specific work. We found that the existing methods are generally effective on this new image set, demonstrating that level one of the benchmark is tractable; challenges remain at level two. Results revealed several advantages conferred by portrait-specific algorithms over general-purpose algorithms: portrait-specific algorithms can use domain-specific information to preserve key details such as eyes and to eliminate extraneous details, and they have more scope for semantically meaningful abstraction due to the underlying face model. Finally, we provide some thoughts on systematically extending the benchmark to higher levels of difficulty.
Stroke Based Painterly Rendering
Many traditional art forms are produced by an artist sequentially placing a set of marks, such as brush strokes, on a canvas. Stroke Based Rendering (SBR) is inspired by this process, and underpins many early and contemporary artistic stylization algorithms. This chapter outlines the origins of SBR, and describes key algorithms for placement of brush strokes to create painterly renderings from source images. The chapter explores both local greedy, and global optimization based approaches to stroke placement. The issue of creative control in SBR is also briefly discussed.
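The local greedy approach mentioned above can be illustrated with a minimal sketch: sample candidate stroke positions, colour each stroke with the source's local mean, and keep a stroke only if it reduces reconstruction error. This is a hypothetical toy (square strokes, L1 error), not any specific published SBR algorithm.

```python
import numpy as np

def greedy_stroke_render(source, stroke_radius=4, n_strokes=2000, seed=0):
    """Greedily place square 'brush strokes' on a blank canvas.

    Each candidate stroke copies the source's local mean colour and is
    kept only if it reduces the local reconstruction error. A toy
    illustration of greedy SBR, not a specific published method.
    """
    rng = np.random.default_rng(seed)
    h, w, _ = source.shape
    canvas = np.full_like(source, 255.0)  # start from a white canvas
    for _ in range(n_strokes):
        # sample a candidate stroke centre away from the borders
        y = rng.integers(stroke_radius, h - stroke_radius)
        x = rng.integers(stroke_radius, w - stroke_radius)
        ys = slice(y - stroke_radius, y + stroke_radius)
        xs = slice(x - stroke_radius, x + stroke_radius)
        colour = source[ys, xs].mean(axis=(0, 1))  # local mean colour
        err_before = np.abs(source[ys, xs] - canvas[ys, xs]).sum()
        err_after = np.abs(source[ys, xs] - colour).sum()
        if err_after < err_before:  # greedy acceptance test
            canvas[ys, xs] = colour
    return canvas
```

A global optimization variant would instead score a whole stroke configuration at once (e.g. via simulated annealing) rather than accepting each stroke independently.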
Transforming photos to comics using convolutional neural networks
In this paper, inspired by Gatys's recent work, we propose a novel approach that transforms photos to comics using deep convolutional neural networks (CNNs). While Gatys's method, which uses a pre-trained VGG network, generally works well for transferring artistic styles such as painting from a style image to a content image, for more minimalist styles such as comics it often fails to produce satisfactory results. To address this, we further introduce a dedicated comic-style CNN, which is trained for classifying comic images and photos. This new network is effective in capturing various comic styles and thus helps to produce better comic stylization results. Even with a grayscale style image, Gatys's method can still produce colored output, which is not desirable for comics. We develop a modified optimization framework such that a grayscale image is guaranteed to be synthesized. To avoid converging to poor local minima, we further initialize the output image using the grayscale version of the content image. Various examples show that our method synthesizes better comic images than the state-of-the-art method.
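The grayscale guarantee described above can be sketched as follows: rather than optimising an RGB image (which, as the abstract notes, can drift into colour), one optimises a single-channel image and only replicates it to three channels when evaluating the loss, so the output is grayscale by construction. This toy uses a quadratic loss as a stand-in for the paper's real content/style terms, and is an assumption about the general idea, not the authors' implementation.

```python
import numpy as np

def synthesise_grayscale(content, steps=200, lr=0.5):
    """Optimise a single-channel image so the result is grayscale
    by construction; the RGB lift happens only inside the loss.

    Toy stand-in loss: 0.5 * ||repeat(x) - content||^2.
    """
    target = content.mean(axis=2)  # grayscale version of the content
    x = np.zeros_like(target)      # init (the paper initialises with target)
    for _ in range(steps):
        rgb = np.repeat(x[..., None], 3, axis=2)  # lift to RGB for the loss
        grad_rgb = rgb - content                  # gradient of the toy loss
        grad_x = grad_rgb.sum(axis=2)             # chain rule through the repeat
        x -= lr * grad_x / 3.0                    # gradient descent step
    return x
```

With this stand-in loss, x converges to the per-pixel channel mean of the content; in the real framework the same single-channel parameterisation would be driven by VGG-based content and style losses instead.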