35 research outputs found

    Painterly rendering techniques: A state-of-the-art review of current approaches

    In this publication we survey the methods presented over the past few decades that attempt to recreate digital paintings. While previous surveys concentrate on the broader subject of non-photorealistic rendering, the focus of this paper is firmly placed on painterly rendering techniques. We compare the methods used to produce different output painting styles, such as abstract, coloured pencil, watercolour, oriental, oil and pastel. Whereas some methods demand a high level of interaction from a skilled artist, others require only simple parameters provided by a user with little or no artistic experience. Many methods attempt to provide more automation through varying forms of reference data, ranging from still photographs and video to 3D polygonal meshes and even 3D point clouds. The techniques presented here endeavour to provide tools and styles that are not traditionally available to an artist. Copyright © 2012 John Wiley & Sons, Ltd.

    Interactive toon shading using mesh smoothing

    Toon shading mimics a style of a few colour bands and hence offers an effective way to convey cartoon-style rendering. Despite an increasing amount of research on toon shading, little has been reported on generating toon shading styles with greater simplicity. In this paper, we present a method to create a simplified form of toon shading from 3D objects using mesh smoothing. The proposed method exploits Laplacian smoothing to emphasise the simplicity of the 3D objects. Motivated by a simplified form of the Phong lighting model, we create a non-photorealistic style that enhances the cartoonish appearance. An enhanced toon shading algorithm is then applied to the smoothed 3D objects in order to convey simpler visual cues of tone. The experimental results demonstrate the ability of the proposed method to produce simpler, more cartoonish effects.
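    The band-quantisation step at the heart of toon shading can be sketched as follows — a generic Python illustration, not the paper's implementation; it assumes the mesh-smoothing stage has already produced the (smoothed) normals, and `toon_shade` and its parameters are illustrative names:

```python
import numpy as np

def toon_shade(normal, light_dir, bands=3):
    """Quantise a Lambertian intensity into a few discrete colour bands.

    Only the banding step is shown; the Laplacian smoothing that the
    abstract applies to the mesh is assumed to have happened upstream.
    """
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    intensity = max(np.dot(n, l), 0.0)   # Lambert term of the Phong model
    # Snap the continuous intensity to the nearest of `bands` levels.
    level = np.floor(intensity * bands)
    return min(level, bands - 1) / (bands - 1)
```

    With three bands, a surface facing the light snaps to full brightness and a surface at grazing angle snaps to the darkest band, producing the characteristic flat cartoon tone steps.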

    A Natural Image Pointillism with Controlled Ellipse Dots

    This paper presents an image-based artistic rendering algorithm for an automatic Pointillism style. First, ellipse dot locations are randomly generated based on a source image; then dot orientations are precalculated with the help of a direction map, and a saliency map of the source image determines the long and short radii of each ellipse dot. Finally, the rendering runs layer by layer, from large dots to small dots, so as to preserve the detailed parts of the image. Although only an ellipse dot shape is adopted, the final Pointillism style performs well because of the variable characteristics of the dots.
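    The layered, large-to-small ellipse-dot pass described above can be sketched roughly as follows. This is a minimal Python illustration under stated assumptions: the image gradient direction stands in for the paper's direction map, the gradient magnitude stands in for its saliency map, and all function names and constants are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def render_pointillism(src, layers=(8, 4, 2), dots_per_layer=400):
    """Layer-by-layer ellipse-dot rendering, large dots first.

    src is a greyscale image in [0, 1]. Dot centres are random, the
    orientation follows the local gradient (direction-map stand-in),
    and the gradient magnitude (saliency stand-in) shrinks dots in
    detailed regions so later, smaller layers can restore detail.
    """
    h, w = src.shape
    out = np.full_like(src, 1.0)            # white canvas
    gy, gx = np.gradient(src)
    saliency = np.hypot(gx, gy)
    saliency /= saliency.max() + 1e-8
    for base_r in layers:                   # large -> small dots
        for _ in range(dots_per_layer):
            cy, cx = rng.integers(0, h), rng.integers(0, w)
            theta = np.arctan2(gy[cy, cx], gx[cy, cx]) + np.pi / 2
            a = base_r * (1.0 - 0.5 * saliency[cy, cx])   # long radius
            b = max(a * 0.5, 1.0)                         # short radius
            paint_ellipse(out, cy, cx, a, b, theta, src[cy, cx])
    return out

def paint_ellipse(img, cy, cx, a, b, theta, value):
    """Rasterise one filled, rotated ellipse by point-in-ellipse tests."""
    h, w = img.shape
    r = int(np.ceil(a)) + 1
    ys, xs = np.mgrid[max(cy - r, 0):min(cy + r + 1, h),
                      max(cx - r, 0):min(cx + r + 1, w)]
    dy, dx = ys - cy, xs - cx
    u = dx * np.cos(theta) + dy * np.sin(theta)
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    mask = (u / a) ** 2 + (v / b) ** 2 <= 1.0
    img[ys[mask], xs[mask]] = value
```

    Rendering coarse layers first and fine layers last is what preserves detail: small dots painted late overwrite the large dots in salient regions.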

    Real-Time Stylized Rendering for Large-Scale 3D Scenes

    While modern digital entertainment has seen a major shift toward photorealism in animation, there is still significant demand for stylized rendering tools. Stylized, or non-photorealistic rendering (NPR), applications generally sacrifice physical accuracy for artistic or functional visual output. Oftentimes, NPR applications focus on extracting specific features from a 3D environment and highlighting them in a unique manner. One application of interest involves recreating 2D hand-drawn art styles in a 3D-modeled environment. This task poses challenges in the form of spatial coherence, feature extraction, and stroke line rendering. Previous research on this topic has also struggled to overcome specific performance bottlenecks, which have limited the use of this technology in real-time applications. Specifically, many stylized rendering techniques have difficulty operating on large-scale scenes, such as open-world terrain environments. In this paper, we describe various novel rendering techniques for mimicking hand-drawn art styles in a large-scale 3D environment, including modifications to existing methods for stroke rendering and hatch-line texturing. Our system focuses on providing various complex styles while maintaining real-time performance, to maximize user interactivity. Our results demonstrate improved performance over existing real-time methods and offer a few unique style options for users, though the system still suffers from some visual inconsistencies.
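    Hatch-line texturing of the kind the abstract mentions typically starts by selecting a hatch texture from the surface luminance, with darker surfaces mapped to denser hatch patterns. A generic sketch of that selection step (not this paper's method; the function name and level count are illustrative):

```python
import numpy as np

def hatch_level(luminance, levels=4):
    """Map per-pixel luminance in [0, 1] to a hatch-texture index.

    Index 0 is the sparsest hatch pattern (bright surfaces) and
    index levels-1 the densest (dark surfaces).
    """
    l = np.clip(luminance, 0.0, 1.0)
    band = np.minimum((l * levels).astype(int), levels - 1)
    return (levels - 1) - band
```

    A renderer would use the returned index to pick one texture from a pre-drawn set of hatch tiles of increasing stroke density.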

    A workflow for designing stylized shading effects

    In this report, we describe a workflow for designing stylized shading effects on a 3D object, targeted at technical artists. Shading design, the process of making the illumination of an object in a 3D scene match an artist's vision, is usually a time-consuming task because of the complex interactions between materials, geometry, and the lighting environment. Physically based methods tend to provide an intuitive and coherent workflow for artists, but they are of limited use in the context of non-photorealistic shading styles. On the other hand, existing stylized shading techniques are either too specialized or require considerable hand-tuning of unintuitive parameters to give a satisfactory result. Our contribution is to separate the design process of an individual shading effect into three independent stages: control of its global behavior on the object, addition of procedural details, and colorization. Inspired by the formulation of existing shading models, we expose different shading behaviors to the artist through parametrizations that have a meaningful visual interpretation. Multiple shading effects can then be composited to obtain complex dynamic appearances. The proposed workflow is fully interactive, with real-time feedback, and allows the intuitive exploration of stylized shading effects while keeping coherence under varying viewpoints and light configurations. Furthermore, our method makes use of the deferred shading technique, making it easy to integrate into existing rendering pipelines.
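    The three-stage decomposition described in this abstract — global behavior, procedural detail, colorization — can be sketched as a simple pipeline. This is a toy illustration under assumptions, not the report's API: `n_dot_l` stands in for one of its parametrizations, the noise term for its procedural details, and the ramp lookup for its colorization stage:

```python
import numpy as np

def shading_effect(n_dot_l, detail_noise, ramp):
    """Compose one stylised shading effect in three independent stages.

    n_dot_l      : per-pixel parametrisation in [0, 1] (here simply N.L)
    detail_noise : per-pixel procedural perturbation
    ramp         : (k, 3) colour lookup table for colorisation
    """
    # Stage 1: global behaviour -- reshape the parametrisation.
    behaviour = np.clip(n_dot_l, 0.0, 1.0) ** 0.75
    # Stage 2: procedural details -- perturb before colorisation.
    detailed = np.clip(behaviour + 0.15 * detail_noise, 0.0, 1.0)
    # Stage 3: colorisation -- index into the colour ramp.
    idx = (detailed * (len(ramp) - 1)).astype(int)
    return ramp[idx]
```

    Because each stage only consumes the previous stage's output, an artist can swap the ramp, the noise, or the behaviour curve independently — the property the workflow is built around.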

    Three-dimensional interactive maps: theory and practice


    Studying and solving visual artifacts occurring when procedural texturing with paradoxical requirements

    Textures are images widely used by computer graphics artists to add visual detail to their work. Textures may come from different sources, such as pictures of real-world surfaces, images created manually with graphics editors, or algorithmic processes. “Procedural texturing” refers to the creation of textures through algorithmic processes. Procedural textures offer many advantages, including the ability to manipulate their appearance through parameters. Many applications rely on changing those parameters to evolve the look of a texture over time or space. This often introduces requirements that contradict the structure of the unaltered texture, frequently resulting in visible rendering artifacts. As an example, to animate a lava flow the rendered texture should be an effective representation of the simulated flow, but features such as rocks floating on the surface should not be distorted, nor abruptly appear or disappear and so disrupt the illusion. Informally, we want our lava texture to “change, but stay the same”. This example is an instance of the consistency problem that arises when changing the parameters of a texture, resulting in noticeable artifacts in the rendered result. In this project, we seek to classify these artifacts by their causes and their effects on textures, and to determine how we can objectively detect and explain their presence, and so predict their occurrence. Analytical and statistical analyses of procedural texturing processes will be performed in order to relate them to the corresponding artifacts.
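    The consistency problem this project studies can be seen in even the simplest procedural texture. The toy 1-D value noise below (a generic construction, not one of the project's textures) places its features on a lattice, so continuously animating the `frequency` parameter makes features slide and pop instead of "changing but staying the same":

```python
import numpy as np

def lattice_noise(x, frequency, seed=0):
    """Toy 1-D value noise: hash lattice points, interpolate between them.

    Features are anchored to the lattice t = x * frequency, so varying
    `frequency` over time rescales the lattice and displaces every
    feature -- a minimal instance of the parameter-consistency artifact.
    """
    rng = np.random.default_rng(seed)
    values = rng.random(256)                # fixed random lattice values
    t = x * frequency
    i = np.floor(t).astype(int) % 256
    f = t - np.floor(t)
    f = f * f * (3 - 2 * f)                 # smoothstep interpolation
    return values[i] * (1 - f) + values[(i + 1) % 256] * f
```

    Sampling the same positions at frequency 4.0 and 4.1 yields visibly different feature positions, even though the parameter change is tiny — exactly the kind of artifact the project aims to classify and predict.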

    Putting the Art in Artificial: Aesthetic Responses to Computer-generated Art

    As artificial intelligence (AI) technology increasingly becomes a feature of everyday life, it is important to understand how creative acts, regarded as uniquely human, can be valued if produced by a machine. The current studies sought to investigate how observers respond to works of visual art created either by humans or by computers. Study 1 tested observers’ ability to discriminate between computer-generated and man-made art, and then examined how the categorisation of art works affected their perceived aesthetic value, revealing a bias against computer-generated art. In Study 2 this bias was reproduced in the context of robotic art; however, it was reversed when observers were given the opportunity to see robotic artists in action. These findings reveal an explicit prejudice against computer-generated art, driven largely by the kind of art observers believe computer algorithms are capable of producing. These prejudices can be overridden in circumstances in which observers are able to infer anthropomorphic characteristics in the computer programs, a finding which has implications for the future of artistic AI.

    Sketch-based skeleton-driven 2D animation and motion capture.

    This research is concerned with the development of a set of novel sketch-based skeleton-driven 2D animation techniques, which allow the user to produce realistic 2D character animation efficiently. The technique consists of three parts: sketch-based skeleton-driven 2D animation production, 2D motion capture, and a cartoon animation filter. For 2D animation production, the traditional way is for experienced animators to draw the key-frames manually, a laborious and time-consuming process. With the proposed techniques, the user only inputs one image of a character and sketches a skeleton for each subsequent key-frame. The system then deforms the character according to the sketches and produces the animation automatically. To perform 2D shape deformation, a variable-length needle model is developed, which divides the deformation into two stages: skeleton-driven deformation and nonlinear deformation in joint areas. This approach preserves the local geometric features and global area during animation. Compared with existing 2D shape deformation algorithms, it reduces the computational complexity while still yielding plausible deformation results. To capture the motion of a character from existing 2D image sequences, a 2D motion capture technique is presented. Since this technique is skeleton-driven, the motion of a 2D character is captured by tracking the joint positions. Using both geometric and visual features, this problem can be solved by optimization, which prevents self-occlusion and feature disappearance. After tracking, the motion data are retargeted to a new character using the deformation algorithm proposed in the first part. This facilitates the reuse of the characteristics of motion contained in existing moving images, making the process of cartoon generation easy for artists and novices alike. Subsequent to the 2D animation production and motion capture, a “Cartoon Animation Filter” is implemented and applied. Following the animation principles, this filter processes two types of cartoon input: a single frame of a cartoon character and motion capture data from an image sequence. It adds anticipation and follow-through to the motion, with related squash and stretch effects.
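    The first of the two deformation stages described above — rigid, skeleton-driven deformation, before any nonlinear blending in joint areas — amounts to rotating the shape points bound to a bone about that bone's joint. A minimal Python sketch (the variable-length needle model and the joint-area nonlinearity are omitted; function and argument names are illustrative):

```python
import numpy as np

def deform_by_skeleton(points, joint, angle):
    """Rigidly rotate the 2D points bound to one bone about its joint.

    points : (n, 2) array of shape vertices attached to the bone
    joint  : (2,) position of the bone's joint
    angle  : rotation in radians taken from the user's sketched skeleton
    """
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    return (points - joint) @ rot.T + joint
```

    In the full method, the second stage would then smooth the seams where points bound to adjacent bones meet, which is what preserves local geometric features around the joints.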