
    Painterly rendering techniques: A state-of-the-art review of current approaches

    In this publication we review the methods presented over the past few decades that attempt to recreate paintings digitally. While previous surveys concentrate on the broader subject of non-photorealistic rendering, the focus of this paper is firmly placed on painterly rendering techniques. We compare methods used to produce different output painting styles such as abstract, colour pencil, watercolour, oriental, oil, and pastel. Whereas some methods demand a high level of interaction from a skilled artist, others require simple parameters provided by a user with little or no artistic experience. Many methods attempt to provide more automation through the use of varying forms of reference data, ranging from still photographs and video to 3D polygonal meshes or even 3D point clouds. The techniques presented here endeavour to provide tools and styles that are not traditionally available to an artist. Copyright © 2012 John Wiley & Sons, Ltd.

    Glass Patterns and Artistic Imaging


    Image preprocessing for artistic robotic painting

    Artistic robotic painting implies creating a picture on canvas according to a brushstroke map computed in advance from a source image. To make the painting look closer to a human artwork, the source image should be preprocessed to render the effects usually created by artists. In this paper, we consider three preprocessing effects: aerial perspective, gamut compression and brushstroke coherence. We propose an algorithm for aerial perspective amplification based on principles of light scattering using a depth map, an algorithm for gamut compression using a nonlinear hue transformation, and an algorithm for image gradient filtering that yields a coherent brushstroke map with a reduced number of brushstrokes, as required for practical robotic painting. The described algorithms allow interactive image correction and make the final rendering look closer to a manually painted artwork. To illustrate our proposals, we render several test images on a computer and paint a monochromatic image on canvas with a painting robot.
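    The abstract does not specify the exact scattering model, but aerial-perspective amplification from a depth map is commonly approximated with a single-scattering haze model: each pixel is blended toward an airlight (haze) colour in proportion to its depth. The Python sketch below illustrates that idea only; the `haze_color` and `beta` parameters are assumptions, not values from the paper.

```python
import numpy as np

def amplify_aerial_perspective(image, depth, haze_color=(0.75, 0.8, 0.9), beta=1.5):
    """Blend distant pixels toward a haze colour using a depth map.

    image : float array (H, W, 3), values in [0, 1]
    depth : float array (H, W), 0 = near, 1 = far
    beta  : scattering coefficient controlling how quickly haze builds up
    """
    transmittance = np.exp(-beta * depth)[..., None]   # (H, W, 1), light surviving the path
    haze = np.asarray(haze_color, dtype=image.dtype)   # airlight colour added by scattering
    return image * transmittance + haze * (1.0 - transmittance)
```

    Increasing `beta` exaggerates the effect, which is the "amplification" aspect: distant regions become lighter and less saturated, as a painter would render them.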

    A pointillism style for the non-photorealistic display of augmented reality scenes

    The ultimate goal of augmented reality is to provide the user with a view of the surroundings enriched by virtual objects. Practically all augmented reality systems rely on standard real-time rendering methods for generating the images of virtual scene elements. Although such conventional computer graphics algorithms are fast, they often fail to produce sufficiently realistic renderings. The use of simple lighting and shading methods, as well as the lack of knowledge about actual lighting conditions in the real surroundings, causes virtual objects to appear artificial. We have recently proposed a novel approach for generating augmented reality images. Our method is based on the idea of applying stylization techniques to reduce the visual realism of both the camera image and the virtual graphical objects. Special non-photorealistic image filters are applied to the camera video stream, and the virtual scene elements are rendered using non-photorealistic rendering methods. Since both the camera image and the virtual objects are stylized in a corresponding way, they appear very similar. As a result, graphical objects can become indistinguishable from the real surroundings. Here, we present a new method for the stylization of augmented reality images. This approach generates a painterly "brush stroke" rendering; the resulting stylized augmented reality video frames look similar to paintings created in the "pointillism" style. We describe the implementation of the camera image filter and the non-photorealistic renderer for virtual objects. These components have been newly designed or adapted for this purpose. They are fast enough for generating augmented reality images in real time and are customizable. The results obtained using our approach are very promising and show that the method improves immersion in augmented reality.
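    As a rough, CPU-only illustration of what a pointillist camera-image filter does (the system described above uses fast, customizable real-time filters, not this code), the sketch below samples colours from a frame at random positions and paints them back as small dots. All parameters (`dot_radius`, `coverage`) are illustrative assumptions.

```python
import random
from PIL import Image, ImageDraw

def pointillize(frame, dot_radius=3, coverage=2.0, background=(255, 255, 255)):
    """Simplified offline approximation of a pointillist image filter.

    frame      : PIL.Image in RGB mode (e.g. one video frame)
    dot_radius : radius of each painted dot in pixels
    coverage   : average number of dots per (2 * dot_radius)^2 pixel cell
    """
    w, h = frame.size
    canvas = Image.new("RGB", (w, h), background)
    draw = ImageDraw.Draw(canvas)
    n_dots = int(coverage * (w * h) / (2 * dot_radius) ** 2)
    for _ in range(n_dots):
        x, y = random.randrange(w), random.randrange(h)
        colour = frame.getpixel((x, y))            # sample the source colour
        draw.ellipse([x - dot_radius, y - dot_radius,
                      x + dot_radius, y + dot_radius], fill=colour)
    return canvas
```

    Applying a filter of this kind to both the camera stream and the rendered virtual objects is what makes the two layers visually consistent in the stylized augmented reality frames.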

    Design of 2D time-varying vector fields

    Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics. Existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, for both planar domains and manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors, including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design that generates the time-varying vector fields via a sequence of basis field summations or spatially constrained optimizations at the sampled times. Key-frame design and field deformation are also introduced to support other design scenarios; a spatio-temporal constrained optimization and a time-varying transformation, respectively, are employed to generate the desired fields for these two scenarios. We apply the time-varying vector fields generated using our design system to a number of important computer graphics applications that require controllable dynamic effects, such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation. Many of these are difficult or impossible to achieve via prior simulation-based methods. In these applications, the time-varying vector fields have been applied as either orientation fields or advection fields to control the instantaneous appearance or evolving trajectories of the dynamic effects.
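    To make the element-based design idea concrete, the sketch below sums rotational basis elements with Gaussian falloff and linearly interpolates each element's centre and strength between two key frames, yielding a vector field that varies with time t. This is only a minimal illustration of basis field summation plus key-frame interpolation; the element type, falloff, and interpolation scheme are assumptions, and the paper's constrained optimizations are not reproduced here.

```python
import numpy as np

def vortex_element(px, py, cx, cy, strength, radius):
    """Rotational basis element centred at (cx, cy) with Gaussian falloff."""
    dx, dy = px - cx, py - cy
    falloff = np.exp(-(dx**2 + dy**2) / radius**2)
    # perpendicular direction produces rotation around the centre
    return strength * falloff * (-dy), strength * falloff * dx

def field_at_time(px, py, keyframed_elements, t):
    """Sum basis elements whose centres and strengths are linearly
    interpolated between two key frames (t in [0, 1])."""
    u, v = np.zeros_like(px), np.zeros_like(py)
    for (c0, s0, r), (c1, s1, _) in keyframed_elements:
        cx, cy = (1 - t) * np.array(c0) + t * np.array(c1)
        s = (1 - t) * s0 + t * s1
        du, dv = vortex_element(px, py, cx, cy, s, r)
        u += du
        v += dv
    return u, v

# sample the interpolated field on a 32 x 32 grid at t = 0.5
xs, ys = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
elements = [(((0.3, 0.3), 1.0, 0.2), ((0.7, 0.7), 0.5, 0.2))]
u, v = field_at_time(xs, ys, elements, 0.5)
```

    Sampling such a field over time and using it as an orientation or advection field is, in spirit, how the resulting designs can drive dynamic effects such as painterly animation or crowd movement.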