    Painterly rendering techniques: A state-of-the-art review of current approaches

    In this publication we look at the different methods presented over the past few decades which attempt to recreate the look of paintings digitally. While previous surveys concentrate on the broader subject of non-photorealistic rendering, the focus of this paper is firmly placed on painterly rendering techniques. We compare methods used to produce output painting styles such as abstract, colour pencil, watercolour, oriental, oil, and pastel. Whereas some methods demand a high level of interaction from a skilled artist, others require only simple parameters provided by a user with little or no artistic experience. Many methods attempt to provide more automation through the use of varying forms of reference data, ranging from still photographs and video to 3D polygonal meshes and even 3D point clouds. The techniques presented here endeavour to provide tools and styles that are not traditionally available to an artist. Copyright © 2012 John Wiley & Sons, Ltd.

    Towards Simulation of Handmade Painterly Animation

    This poster presents research in progress towards the simulation of handmade painterly animation. As research on painterly animation mostly focuses on temporal coherence, generating an animation that could have been made by hand remains an open and challenging problem. To produce a handmade painterly animation, the artist paints on a back-lit glass canvas, creating successive frames one over the other: the artist erases, smears, and adds paint to produce each frame. We propose to render painterly animations with a visual aspect tending towards handmade results. To this end, we combine two techniques: an automatic stroke generator and a paint simulator. We observe that previous approaches could not be adapted as-is to our goals, for two reasons. First, paint in painterly animation is back-lit, so most of the contrast comes from paint thickness, which is usually not computed by paint-simulation approaches. Second, automatic stroke generation assumes that a stroke covers underlying strokes without thickness accumulation; in our case, paint thickness increases quickly if no special treatment is applied. We also investigate new kinds of strokes that are only possible with a paint-simulation approach, such as smear and erase strokes.
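
    As a rough illustration of the thickness problem described above, the sketch below models a back-lit canvas as a height map of paint thickness with add, erase, and smear operations. All names and parameter values are hypothetical, and the smear is a crude diffusion stand-in rather than the authors' simulator.

        # Hypothetical back-lit canvas: brightness comes from how much light
        # passes through the accumulated paint layer.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        class BacklitCanvas:
            def __init__(self, h, w):
                self.thickness = np.zeros((h, w))  # accumulated paint thickness

            def add(self, mask, amount=1.0):
                """Deposit paint under a boolean stroke mask."""
                self.thickness[mask] += amount

            def erase(self, mask, amount=0.5):
                """Scrape paint away, as the animator does between frames."""
                self.thickness[mask] = np.maximum(0.0, self.thickness[mask] - amount)

            def smear(self, sigma=1.5):
                """Crude smear: diffuse thickness locally (a real simulator
                would advect paint along the tool's motion instead)."""
                self.thickness = gaussian_filter(self.thickness, sigma)

            def render(self, absorption=0.8):
                """Back-lit appearance: thicker paint transmits less light."""
                return np.exp(-absorption * self.thickness)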

    Automatic painting with economized strokes

    We present a method that takes a raster image as input and produces a painting-like image composed of strokes rather than pixels. Unlike previous automatic painting methods, we attempt to use very few brush strokes. This is accomplished by first segmenting the image into features, finding the medial-axis points of these features, converting the medial-axis points into ordered lists of image tokens, and finally rendering these lists as brush strokes. Our process creates images reminiscent of modern realist painters, who often want an abstract or sketchy quality in their work.
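
    The abstract's pipeline (segment, medial axes, ordered point lists, strokes) can be sketched in a few lines, assuming scikit-image. The SLIC parameters and the axis-sort ordering below are illustrative stand-ins for the paper's feature segmentation and image-token ordering.

        import numpy as np
        from skimage import io, segmentation, morphology

        def economized_strokes(path, n_segments=80):
            img = io.imread(path)
            # 1. Segment the image into coherent features.
            labels = segmentation.slic(img, n_segments=n_segments, compactness=10)
            strokes = []
            for lab in np.unique(labels):
                mask = labels == lab
                # 2. Medial-axis points of the feature; the distance map gives
                #    a natural brush radius at each skeleton point.
                skel, dist = morphology.medial_axis(mask, return_distance=True)
                ys, xs = np.nonzero(skel)
                if len(xs) == 0:
                    continue
                # 3. Order the points into a stroke path (sorting along the
                #    dominant axis, a stand-in for the paper's token ordering).
                order = np.argsort(xs if np.ptp(xs) >= np.ptp(ys) else ys)
                colour = img[mask].mean(axis=0)
                strokes.append((xs[order], ys[order], dist[ys, xs][order], colour))
            # 4. Each tuple can then be rendered as one thick, coloured stroke.
            return strokes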

    Hierarchical Motion Brushes for Animation Instancing

    Our work on "motion brushes" provides a new workflow for the creation and reuse of 3D animation, with a focus on stylized movement and depiction. Conceptually, motion brushes expand existing brush models by incorporating hierarchies of 3D animated content, including geometry, appearance information, and motion data, as core brush primitives that are instantiated using a painting interface. Because motion brushes can encompass all the richness of detail and movement offered by animation software, they accommodate complex, varied effects that are not easily created by other means. To support reuse and provide an effective means for managing complexity, we propose a hierarchical representation that allows simple brushes to be combined into more complex ones. Our system provides stroke-based control over motion-brush parameters, including tools to effectively manage the temporal nature of the motion-brush instances. We demonstrate the flexibility and richness of our system with motion brushes for splashing rain, footsteps appearing in snow, and stylized visual effects.
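
    The hierarchical representation can be illustrated with a small data-structure sketch: a brush bundles animated content and child brushes, and painting instantiates the hierarchy along a stroke with staggered start times. Every name below is an assumption made for exposition, not the authors' system.

        from dataclasses import dataclass, field
        from typing import List, Tuple

        @dataclass
        class MotionBrush:
            name: str
            geometry: str = ""                 # e.g. a mesh or asset reference
            motion_clip: str = ""              # animation data to replay
            children: List["MotionBrush"] = field(default_factory=list)

        @dataclass
        class BrushInstance:
            brush: MotionBrush
            position: Tuple[float, float, float]
            start_time: float                  # offsets the clip in time
            scale: float = 1.0

        def paint_stroke(brush, samples, stroke_start, spacing_dt=0.1):
            """Instance the brush (and, recursively, its children) along a
            stroke, staggering start times so the effect unfolds with it."""
            instances = []
            for i, pos in enumerate(samples):
                t = stroke_start + i * spacing_dt
                instances.append(BrushInstance(brush, pos, t))
                for child in brush.children:
                    instances.extend(paint_stroke(child, [pos], t, spacing_dt))
            return instances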

    Artistic vision: painterly rendering using computer vision techniques

    We present a method that takes a raster image as input and produces a painting-like image composed of strokes rather than pixels. Unlike previous automatic painting methods, we attempt to keep the number of brush strokes small. This is accomplished by first segmenting the image into features, finding the medial-axis points of these features, converting the medial-axis points into ordered lists of image tokens, and finally rendering these lists as brush strokes. Our process creates images reminiscent of modern realist painters, who often want an abstract or sketchy quality in their work.

    Painterly rendering using human vision

    Painterly rendering has been linked to computer vision, but we propose to link it to human vision, because perception and painting are interwoven processes. Recent progress in developing computational models allows us to establish this link. We show that completely automatic rendering can be obtained by applying four image representations from the visual system: (1) colour constancy can be used to correct colours, (2) coarse background brightness, in combination with colour coding in cytochrome-oxidase blobs, can be used to create a background with a big brush, (3) the multi-scale line and edge representation provides a very natural way to render finer brush strokes, and (4) the multi-scale keypoint representation serves to create saliency maps for Focus-of-Attention (FoA), and FoA can be used to render important structures. Basic processes are described, renderings are shown, and important ideas for future research are discussed.
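
    A rough sketch of the coarse-to-fine idea behind representations (2) and (3): paint the background with one heavy blur, then stroke finer detail back in only where edges respond at each scale. Generic Gaussian and Canny operators stand in here for the paper's cortical models, and the threshold is arbitrary.

        import numpy as np
        from skimage import io, color, feature, filters

        def coarse_to_fine(path, sigmas=(8.0, 4.0, 2.0)):
            img = io.imread(path) / 255.0
            gray = color.rgb2gray(img)
            # Big-brush background: a heavy blur stands in for the coarse
            # brightness/colour-blob representation.
            canvas = filters.gaussian(img, sigma=sigmas[0], channel_axis=-1)
            for s in sigmas[1:]:
                edges = feature.canny(gray, sigma=s)   # line/edge map at scale s
                detail = filters.gaussian(img, sigma=s, channel_axis=-1)
                # "Stroke" finer detail back in only near this scale's edges.
                near = filters.gaussian(edges.astype(float), sigma=s) > 0.05
                canvas[near] = detail[near]
            return canvas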

    A system for image-based modeling and photo editing

    Ph.D. thesis by Byong Mok Oh, Massachusetts Institute of Technology, Dept. of Architecture, 2002. Includes bibliographical references (p. 169-178).
    Traditionally in computer graphics, a scene is represented by geometric primitives composed of various materials and a collection of lights. Recently, techniques for modeling and rendering scenes from a set of pre-acquired images have emerged as an alternative approach, known as image-based modeling and rendering. Much of the research in this field has focused on reconstructing and re-rendering from a set of photographs, while little work has been done to address the problem of editing and modifying these scenes. On the other hand, photo-editing systems, such as Adobe Photoshop, provide a powerful, intuitive, and practical means to edit images; however, these systems are limited by their two-dimensional nature. In this thesis, we present a system that extends photo editing to 3D. Starting from a single input image, the system enables the user to reconstruct a 3D representation of the captured scene and edit it with the ease and versatility of 2D photo editing. The scene is represented as layers of images with depth, where each layer is an image that encodes both color and depth. A suite of user-assisted tools, based on a painting metaphor, is employed to extract layers and assign depths. The system enables editing from different viewpoints, extracting and grouping of image-based objects, and modifying the shape, color, and illumination of these objects. As part of the system, we introduce three powerful new editing tools. These include two new clone-brushing tools: the non-distorted clone brush and the structure-preserving clone brush. They permit copying of parts of an image to another via a brush interface, but alleviate distortions due to perspective foreshortening and object geometry. The non-distorted clone brush works on arbitrary 3D geometry, while the structure-preserving clone brush, a 2D version, assumes a planar surface but has the added advantage of working directly in 2D photo-editing systems that lack depth information. The third tool, a texture-illuminance decoupling filter, discounts the effect of illumination on uniformly textured areas by decoupling large- and small-scale features via bilateral filtering. This tool is crucial for relighting and changing the materials of the scene. There are many applications for such a system, for example architectural, lighting, and landscape design; entertainment and special effects; games; and virtual TV sets. The system allows the user to superimpose scaled architectural models into real environments, or to quickly paint a desired lighting scheme of an interior, while being able to navigate within the scene for a fully immersive 3D experience. We present examples and results of complex architectural scenes, 360-degree panoramas, and even paintings, where the user can change viewpoints, edit the geometry and materials, and relight the environment.
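
    The texture-illuminance decoupling filter mentioned above can be approximated in a few lines: treat the bilateral-smoothed log image as the large-scale (illuminance) layer and the residual as small-scale texture. This sketch assumes OpenCV, and the parameter values are illustrative, not those used in the thesis.

        import cv2
        import numpy as np

        def decouple(img_gray):
            """Split a grayscale image into illuminance and texture layers."""
            log_i = np.log1p(img_gray.astype(np.float32))
            # Edge-preserving large-scale layer ~ illuminance.
            illum = cv2.bilateralFilter(log_i, d=15, sigmaColor=0.4, sigmaSpace=15)
            texture = log_i - illum           # small-scale residual ~ texture
            return np.expm1(illum), np.exp(texture)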

    Looking through the eyes of the painter: from visual perception to non-photorealistic rendering

    In this paper we present a brief overview of the processing in the primary visual cortex, the multi-scale line/edge and keypoint representations, and a model of brightness perception. This model, which is being extended from 1D to 2D, is based on a symbolic line and edge interpretation: lines are represented by scaled Gaussians and edges by scaled, Gaussian-windowed error functions. We show that this model, in combination with standard techniques from graphics, provides a very fertile basis for non-photorealistic image rendering.
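
    The symbolic interpretation can be written out directly: a line at position x0 and scale s is a scaled Gaussian, and an edge is a scaled error function under a Gaussian window. The amplitude handling and window width below are assumptions, since the abstract does not specify them.

        import numpy as np
        from scipy.special import erf

        def line_profile(x, x0, s, a=1.0):
            """Line: scaled Gaussian of width s at position x0."""
            return a * np.exp(-((x - x0) ** 2) / (2 * s * s))

        def edge_profile(x, x0, s, a=1.0):
            """Edge: scaled, Gaussian-windowed error function."""
            window = np.exp(-((x - x0) ** 2) / (2 * (3 * s) ** 2))
            return a * window * erf((x - x0) / (s * np.sqrt(2)))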