3,294 research outputs found

    Painterly rendering techniques: A state-of-the-art review of current approaches

    In this publication we look at the different methods presented over the past few decades that attempt to recreate paintings digitally. While previous surveys concentrate on the broader subject of non-photorealistic rendering, the focus of this paper is firmly placed on painterly rendering techniques. We compare methods used to produce output painting styles such as abstract, colour pencil, watercolour, oriental, oil and pastel. Whereas some methods demand a high level of interaction from a skilled artist, others require simple parameters provided by a user with little or no artistic experience. Many methods attempt to provide more automation through the use of varying forms of reference data, ranging from still photographs and video to 3D polygonal meshes or even 3D point clouds. The techniques presented here endeavour to provide tools and styles that are not traditionally available to an artist. Copyright © 2012 John Wiley & Sons, Ltd

    A study of how Chinese ink painting features can be applied to 3D scenes and models in real-time rendering

    Past research has addressed mature techniques for non-photorealistic rendering. However, little of it deals with efficient methods for simulating Chinese ink painting features when rendering 3D scenes. Considering that Chinese ink painting has achieved many worldwide awards, the potential to develop 3D animations and games in this style effectively and automatically indicates a need for the appropriate technology in the future market. The goal of this research is to render 3D meshes in a Chinese ink painting style that is both appealing and realistic: specifically, how can the output image appear similar to a hand-drawn Chinese ink painting, and how efficient must the rendering pipeline be to produce a real-time scene? For this study the researcher designed two rendering pipelines, one for static objects and one for moving objects in the final scene. The entire rendering process includes interior shading, silhouette extraction, texture integration, and background rendering. The methodology involved silhouette detection, multiple rendering passes, Gaussian blur for anti-aliasing, smoothstep functions, and noise textures for simulating ink textures. Based on the output of each pipeline, the rendering process of the scene that best captures the Chinese ink painting style is illustrated in detail. The speed of the proposed rendering pipeline was tested: the framerate of the final scenes created with this pipeline was higher than 30 fps, a level considered to be real-time. One can conclude that the main objective of the research study was met, even though other methods for generating Chinese ink painting rendering are available and should be explored
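    The interior-shading step this abstract describes (a smoothstep tone transition modulated by a noise texture) can be sketched in a few lines. The following is a minimal NumPy illustration under assumed thresholds and weights; the thesis's actual shader would run on the GPU in a shading language, so every name and constant here is hypothetical:

```python
import numpy as np

def smoothstep(edge0, edge1, x):
    """Hermite smooth interpolation, matching GLSL's smoothstep."""
    t = np.clip((x - edge0) / (edge1 - edge0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def ink_shade(n_dot_l, noise, dark=0.15, light=0.95):
    """Map diffuse intensity (N.L) to an ink tone, then modulate the
    tone with a precomputed noise texture to break up flat washes.
    The thresholds and the 0.7/0.3 noise blend are assumed values."""
    tone = smoothstep(dark, light, n_dot_l)          # soft tone transition
    return np.clip(tone * (0.7 + 0.3 * noise), 0.0, 1.0)

# Shade three sample surface points of increasing illumination
n_dot_l = np.array([0.1, 0.5, 0.9])
noise = np.array([0.2, 0.8, 0.5])
shaded = ink_shade(n_dot_l, noise)
```

    In a real pipeline `n_dot_l` would come from the fragment's normal and light direction, and `noise` from a texture lookup per fragment.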

    Volumetric cloud generation using a Chinese brush calligraphy style

    Clouds are an important feature of any real or simulated environment in which the sky is visible. Their amorphous, ever-changing and illuminated features make the sky vivid and beautiful. However, these features increase the complexity of both real-time rendering and modelling. It is difficult to design and build volumetric clouds in an easy and intuitive way, particularly if the interface is intended for artists rather than programmers. We propose a novel modelling system motivated by an ancient painting style, Chinese Landscape Painting, to address this problem. With the use of only one brush and one colour, an artist can paint a vivid and detailed landscape efficiently. In this research, we develop three emulations of a Chinese brush: a skeleton-based brush, a 2D texture footprint and a dynamic 3D footprint, all driven by the motion and pressure of a stylus pen. We propose a hybrid mapping to generate both the body and surface of volumetric clouds from the brush footprints. Our interface integrates these components along with 3D canvas control and GPU-based volumetric rendering into an interactive cloud modelling system. Our cloud modelling system is able to create various types of clouds occurring in nature. User tests indicate that our brush calligraphy approach is preferred to conventional volumetric cloud modelling and that it produces convincing 3D cloud formations in an intuitive and interactive fashion. While traditional modelling systems focus on surface generation of 3D objects, our brush calligraphy technique constructs the interior structure. This forms the basis of a new modelling style for objects with amorphous shape
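    The pressure-driven 2D texture footprint mentioned above can be approximated by stamping soft circular footprints along a stroke path, with stylus pressure scaling both radius and ink deposit. This NumPy sketch is only an illustration of that idea under assumed falloff and accumulation rules, not the system's actual brush model:

```python
import numpy as np

def stamp_stroke(canvas, points, pressures, base_radius=4.0):
    """Stamp circular brush footprints along a stroke path.

    Footprint radius and ink deposit both scale with stylus pressure,
    loosely mimicking a pressure-driven 2D texture footprint. Linear
    radial falloff and max-accumulation are assumed choices.
    """
    h, w = canvas.shape
    ys, xs = np.mgrid[0:h, 0:w]
    for (px, py), p in zip(points, pressures):
        r = base_radius * p
        dist = np.sqrt((xs - px) ** 2 + (ys - py) ** 2)
        footprint = np.clip(1.0 - dist / max(r, 1e-6), 0.0, 1.0) * p
        canvas = np.maximum(canvas, footprint)   # ink accumulates by max
    return canvas

canvas = stamp_stroke(np.zeros((32, 32)),
                      points=[(8, 16), (16, 16), (24, 16)],
                      pressures=[0.3, 1.0, 0.5])
```

    In the paper's system such footprints would then feed the hybrid mapping that lifts the 2D stroke into a volumetric cloud body.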

    Chinese Ink-and-Brush Painting with Film Lighting Aesthetics in 3D Computer Graphics

    This thesis explores the topic of recreating Chinese ink-and-brush painting in 3D computer graphics and introducing film lighting aesthetics into the result. The method is primarily based on non-photorealistic shader development and digital compositing. The goal of this research is to study how to bring the visual aesthetics of Chinese ink-and-brush painting into 3D computer graphics, as well as to explore the artistic possibility of using film lighting principles in Chinese painting for visual storytelling. In this research, we use the Jiangnan water country paintings by renowned contemporary Chinese artist Yang Ming-Yi as our primary visual reference. An analysis of the paintings is performed to study the visual characteristics of Yang's work: how the artist expresses shading, forms, shadow, reflection and compositing principles. These serve as the guidelines for recreating the painting in computer graphics. 3D meshes are used to represent the subjects in the painting, such as houses, boats and water. Procedural non-photorealistic shaders are then developed and applied to the 3D meshes to give the models an ink look. Additionally, different types of 3D data are organized and rendered into separate layers, including shading, depth, and geometric information. Those layers are then composited together using 2D image processing algorithms with custom artistic controls to achieve a more natural-looking ink-painting result. As a result, a short animation of Chinese ink-and-brush painting in 3D computer graphics will be created in which the same environment is rendered with different lighting designs to demonstrate the artistic intention
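    The layer-compositing step described above, combining a shading layer with a depth layer in 2D, can be sketched as follows. This is a minimal NumPy stand-in under assumed blending rules (depth-based fading toward paper white), not the thesis's actual compositing network:

```python
import numpy as np

def composite_ink_layers(shading, depth, ink_strength=0.8):
    """Combine a shading layer and a normalized depth layer in 2D:
    ink density comes from shading, and distant pixels fade toward
    paper white, a simple stand-in for atmospheric falloff. The
    ink_strength value and fade rule are assumptions."""
    density = (1.0 - np.clip(shading, 0.0, 1.0)) * ink_strength
    density *= 1.0 - np.clip(depth, 0.0, 1.0)   # far pixels lose ink
    return 1.0 - density                         # 1.0 = paper white

shading = np.array([[0.0, 0.0], [1.0, 1.0]])   # top row dark, bottom light
depth   = np.array([[0.0, 1.0], [0.0, 1.0]])   # right column far away
result  = composite_ink_layers(shading, depth)
```

    In a production pipeline each layer would be a rendered AOV (shading, depth, geometric data), and the artistic controls would expose parameters like `ink_strength` per layer.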

    Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation

    Large text-to-image diffusion models have exhibited impressive proficiency in generating high-quality images. However, when applying these models to the video domain, ensuring temporal consistency across video frames remains a formidable challenge. This paper proposes a novel zero-shot text-guided video-to-video translation framework to adapt image models to videos. The framework includes two parts: key frame translation and full video translation. The first part uses an adapted diffusion model to generate key frames, with hierarchical cross-frame constraints applied to enforce coherence in shapes, textures and colors. The second part propagates the key frames to other frames with temporal-aware patch matching and frame blending. Our framework achieves global style and local texture temporal consistency at a low cost (without re-training or optimization). The adaptation is compatible with existing image diffusion techniques, allowing our framework to take advantage of them, such as customizing a specific subject with LoRA, and introducing extra spatial guidance with ControlNet. Extensive experimental results demonstrate the effectiveness of our proposed framework over existing methods in rendering high-quality and temporally-coherent videos. Comment: Accepted to SIGGRAPH Asia 2023. Project page: https://www.mmlab-ntu.com/project/rerender
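    The second stage above propagates styled key frames to the remaining frames. A heavily simplified stand-in for that idea is temporal interpolation between the two nearest key frames; note the actual paper uses temporal-aware patch matching and frame blending, so this NumPy sketch only illustrates the key-frame/in-between structure, and all names are hypothetical:

```python
import numpy as np

def propagate_keyframes(key_frames, key_idx, n_frames):
    """Fill in non-key frames by linearly blending the two nearest
    styled key frames, weighted by temporal distance. (A crude
    stand-in for patch matching + frame blending.)"""
    out = [None] * n_frames
    for t in range(n_frames):
        prev = max(i for i in key_idx if i <= t)  # nearest key before t
        nxt = min(i for i in key_idx if i >= t)   # nearest key after t
        if prev == nxt:
            out[t] = key_frames[prev].copy()
        else:
            w = (t - prev) / (nxt - prev)
            out[t] = (1 - w) * key_frames[prev] + w * key_frames[nxt]
    return out

# Two styled key frames (as flat grayscale arrays) at t=0 and t=4
key_frames = {0: np.zeros((2, 2)), 4: np.ones((2, 2))}
frames = propagate_keyframes(key_frames, sorted(key_frames), 5)
```

    Patch matching replaces this pixel-aligned blend with correspondences that follow motion, which is what preserves local texture consistency when content moves between key frames.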

    Dyeing in Computer Graphics


    The Admonitions of the Instructress to the Court Ladies picture-scroll by Hironobu Kohara

    An edited translation of Kohara Hironobu’s 2000 revision of his study, ‘Joshi shin zukan’ 〈女史箴図巻〉, about the painting attributed to Gu Kaizhi (c. 344-c. 406) in the British Museum, originally published in Kokka 《国華》 nos 908 (Nov 1967), 17-31 (part 1) & 909 (Dec 1967), 13-27 (part 2)

    Art Directed Shader for Real Time Rendering - Interactive 3D Painting

    In this work, I develop an approach to include Global Illumination (GI) effects in non-photorealistic real-time rendering; real-time rendering is one of the main areas of focus in the gaming industry and in the booming virtual reality (VR) and augmented reality (AR) industries. My approach is based on adapting the Barycentric shader to create a wide variety of painting effects. This shader helps achieve the look of a 2D painting in an interactively rendered 3D scene, and it supports robust computation of artistic reflection and refraction. My contributions can be summarized as follows: development of a generalized Barycentric shader that provides artistic control, integration of this generalized Barycentric shader into an interactive ray tracer, and interactive rendering of a 3D scene that closely represents the reference painting
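    The core idea of a Barycentric shader, blending artist-chosen control colors with barycentric (partition-of-unity) weights of a shading parameter, can be sketched as follows. This NumPy illustration uses quadratic Bernstein weights as one possible weight choice; the thesis's generalized shader and its exact weight functions are not reproduced here:

```python
import numpy as np

def barycentric_shade(t, colors):
    """Blend three artist-chosen control colors with quadratic
    Bernstein weights of a shading parameter t in [0, 1].
    The weights are nonnegative and sum to 1 (barycentric), so the
    result always lies inside the artist's chosen color triangle.
    colors: (3, 3) array of RGB rows for dark, mid, light tones."""
    t = np.clip(t, 0.0, 1.0)
    w = np.array([(1 - t) ** 2, 2 * t * (1 - t), t ** 2])  # sums to 1
    return w @ colors

# Hypothetical control colors sampled from a reference painting
dark, mid, light = [0.1, 0.1, 0.3], [0.6, 0.4, 0.3], [1.0, 0.95, 0.8]
colors = np.array([dark, mid, light])
rgb = barycentric_shade(0.5, colors)  # halfway tone
```

    In a ray tracer, `t` would typically be derived from an illumination term (e.g. a diffuse or GI estimate), letting the artist repaint the scene's tonal response by editing only the control colors.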