465 research outputs found

    A study of how Chinese ink painting features can be applied to 3D scenes and models in real-time rendering

    Get PDF
    Past research has produced mature techniques for non-photorealistic rendering; however, little of it addresses efficient methods for simulating Chinese ink painting features when rendering 3D scenes. Considering that Chinese ink painting has won many awards worldwide, the potential to effectively and automatically develop 3D animations and games in this style indicates a need for appropriate technology for the future market. The goal of this research is to render 3D meshes in a Chinese ink painting style that is both appealing and realistic: specifically, how can the output image be made to resemble a hand-drawn Chinese ink painting, and how efficient must the rendering pipeline be to produce a real-time scene? For this study the researcher designed two rendering pipelines, one for static objects and one for moving objects in the final scene. The entire rendering process includes interior shading, silhouette extraction, texture integration, and background rendering. The methodology involved silhouette detection, multiple rendering passes, Gaussian blur for anti-aliasing, smooth step functions, and noise textures for simulating ink textures. Based on the output of each rendering pipeline, the rendering process of the scene that best captures the Chinese ink painting style is illustrated in detail. The speed of the proposed rendering pipeline was tested: the frame rate of the final scenes was higher than 30 fps, a level considered real-time. One can conclude that the main objective of the study was met, even though other methods for generating Chinese ink painting renderings exist and should be explored.
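As a rough illustration of the smooth step functions and noise textures this abstract mentions, the following sketch quantizes a diffuse lighting term into an ink-wash tone and perturbs it with noise; the function names, thresholds, and noise weighting are my own illustrative assumptions, not the paper's actual shader:

```python
import numpy as np

def smoothstep(e0, e1, x):
    # Hermite interpolation clamped to [0, 1], matching the GLSL builtin
    t = np.clip((x - e0) / (e1 - e0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def ink_shade(n_dot_l, noise, dark=0.15, light=0.55):
    """Map a diffuse term (N.L) to an ink tone: smoothstep gives the soft
    wash-to-dark transition, and a noise texture sample perturbs the result
    to mimic paper and brush grain."""
    tone = smoothstep(dark, light, n_dot_l)
    return np.clip(tone + 0.15 * (noise - 0.5), 0.0, 1.0)

# Toy example: a ramp of diffuse values shaded with uniform noise
rng = np.random.default_rng(0)
n_dot_l = np.linspace(0.0, 1.0, 8)
shade = ink_shade(n_dot_l, rng.random(8))
```

In a real pipeline this logic would live in a fragment shader, with the noise supplied as a texture lookup rather than a random array.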

    Inter-color NPR lines: A comparison of rendering techniques

    Get PDF
    Renders of 3D scenes can feature lines drawn automatically along sharp edges between colored areas on object textures, in order to imitate certain conventional styles of hand-drawn line art. However, such inter-color lines have been studied very little. Two algorithms for rendering these lines were compared in this study - a faster one utilizing lines baked into the textures themselves and a more complex one that dynamically generated the lines in image space on each frame - for the purpose of determining which of the two better imitated traditional, hand-drawn art styles and which was more visually appealing. Test subjects compared results of the two algorithms side by side in a real-time rendering program, which let them view a 3D scene both passively and interactively from a moving camera, and they noted the differences between each technique's relative line thicknesses - the key visual disparity - in order to reach final judgments as to which better adhered to artistic conventions and which was more appealing. Statistical analysis of the sample proportions that preferred each algorithm failed to prove that any significant difference existed between the two algorithms in terms of either of the above metrics. Thus the algorithm using baked lines appeared to be more recommendable overall, as it was known to be computationally faster, whereas the dynamic algorithm was not shown to be preferred by viewers in terms of conventionality or aesthetics.
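The per-frame "dynamic" variant can be sketched as an image-space pass that marks a pixel as a line pixel wherever its color region differs from a neighbour; the array layout and neighbour choice here are my own simplification, not the study's actual implementation:

```python
import numpy as np

def intercolor_lines(color_ids):
    """Image-space detection of inter-color lines: a pixel becomes a line
    pixel when its color-region id differs from the right or bottom
    neighbour. The 'baked' variant would instead store these line pixels
    in the texture once, offline, avoiding the per-frame cost."""
    edges = np.zeros_like(color_ids, dtype=bool)
    edges[:, :-1] |= color_ids[:, :-1] != color_ids[:, 1:]   # horizontal pairs
    edges[:-1, :] |= color_ids[:-1, :] != color_ids[1:, :]   # vertical pairs
    return edges

# Three color regions (0, 1, 2) on a tiny 3x3 texture
ids = np.array([[0, 0, 1],
                [0, 0, 1],
                [2, 2, 1]])
mask = intercolor_lines(ids)
```

The trade-off the study measured follows directly from this shape: the dynamic pass runs every frame over every pixel, while baked lines cost nothing at runtime but have a fixed thickness in texture space.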

    Expressive rendering of mountainous terrain

    Get PDF
    Technical report. Painters and cartographers have developed artistic landscape rendering techniques for centuries. Such renderings can visualize complex three-dimensional landscapes in a pleasing and understandable way. In this work we examine a particular type of artistic depiction, panorama maps, in terms of function and style, and we develop methods to automatically generate panorama map reminiscent renderings from GIS data. In particular, we develop image-based procedural surface textures for mountainous terrain. Our methods use the structural information present in the terrain and are developed with perceptual metrics and artistic considerations in mind.
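One simple form of the "structural information present in the terrain" is per-cell slope and orientation derived from the GIS heightfield; a texture pass could then orient or weight strokes by these quantities. This sketch only shows the derivation step, and the use suggested in the comment is my own assumption:

```python
import numpy as np

def slope_aspect(height):
    """Derive per-cell slope magnitude and orientation from a heightfield.
    A panorama-map-style texture pass might orient brush strokes along the
    downhill direction and densify them on steep faces (an assumption here)."""
    gy, gx = np.gradient(height)          # derivatives along rows, then columns
    slope = np.hypot(gx, gy)              # steepness per cell
    aspect = np.arctan2(gy, gx)           # direction of steepest ascent, radians
    return slope, aspect

# A plane tilted along the row axis: constant slope, uniform aspect
h = np.outer(np.linspace(0.0, 1.0, 5), np.ones(5))
slope, aspect = slope_aspect(h)
```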

    3D Line textures and the visualization of confidence in Architecture

    Get PDF
    Technical report. This work introduces a technique for interactive walkthroughs of non-photorealistically rendered (NPR) scenes using 3D line primitives to define architectural features of the model, as well as indicate textural qualities. Line primitives are not typically used in this manner in favor of texture mapping techniques which can encapsulate a great deal of information in a single texture map, and take advantage of GPU optimizations for accelerated rendering. However, texture mapped images may not maintain the visual quality or aesthetic appeal that is possible when using 3D lines to simulate NPR scenes such as hand-drawn illustrations or architectural renderings. In addition, line textures can be modified interactively, for instance changing the sketchy quality of the lines, and can be exported as vectors to allow the automatic generation of illustrations and further modification in vector-based graphics programs. The technique introduced here extracts feature edges from a model, and using these edges, generates a reduced set of line textures which indicate material properties while maintaining interactive frame rates. A clipping algorithm is presented to enable 3D lines to reside only in the interior of the 3D model without exposing the underlying triangulated mesh. The resulting system produces interactive illustrations with high visual quality that are free from animation artifacts.
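The feature-edge extraction step this abstract mentions is commonly done by thresholding the dihedral angle between the two faces sharing an edge; the sketch below shows that common approach, with the threshold value and mesh representation as my own illustrative choices rather than the report's actual method:

```python
import numpy as np
from collections import defaultdict

def feature_edges(vertices, faces, angle_deg=60.0):
    """Mark an edge as a feature edge when the angle between its two
    adjacent face normals exceeds a threshold - one common way to pick the
    edges that line-based NPR styles draw."""
    normals = {}
    for i, (a, b, c) in enumerate(faces):
        n = np.cross(vertices[b] - vertices[a], vertices[c] - vertices[a])
        normals[i] = n / np.linalg.norm(n)
    edge_faces = defaultdict(list)        # undirected edge -> adjacent faces
    for i, f in enumerate(faces):
        for e in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            edge_faces[tuple(sorted(e))].append(i)
    cos_t = np.cos(np.radians(angle_deg))
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and np.dot(normals[fs[0]], normals[fs[1]]) < cos_t]

# Two perpendicular triangles sharing edge (1, 2): a sharp crease
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]], float)
tris = [(0, 1, 2), (1, 3, 2)]
edges = feature_edges(verts, tris)
```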

    Controlling Perceptual Factors in Neural Style Transfer

    Full text link
    Neural Style Transfer has shown very exciting results enabling new forms of image manipulation. Here we extend the existing method to introduce control over spatial location, colour information and across spatial scale. We demonstrate how this enhances the method by allowing high-resolution controlled stylisation and helps to alleviate common failure cases such as applying ground textures to sky regions. Furthermore, by decomposing style into these perceptual factors we enable the combination of style information from multiple sources to generate new, perceptually appealing styles from existing ones. We also describe how these methods can be used to more efficiently produce large size, high-quality stylisation. Finally we show how the introduced control measures can be applied in recent methods for Fast Neural Style Transfer. (Comment: Accepted at CVPR 2017.)
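The spatial-control idea (e.g. keeping ground textures out of sky regions) amounts to computing style statistics only within a region mask. The sketch below computes a masked Gram matrix over toy feature maps; the shapes, normalisation, and mask semantics are my own illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def masked_gram(features, mask):
    """Gram matrix restricted to a spatial region. features: (C, H, W)
    feature maps; mask: (H, W) in {0, 1} selecting e.g. 'sky' or 'ground'
    pixels. Matching per-region Gram matrices, instead of one global one,
    is the basic mechanism behind spatially controlled stylisation."""
    C = features.shape[0]
    f = features.reshape(C, -1) * mask.reshape(1, -1)  # zero out other regions
    n = max(float(mask.sum()), 1.0)                    # normalise by region size
    return (f @ f.T) / n

rng = np.random.default_rng(1)
feats = rng.standard_normal((4, 8, 8))
sky = np.zeros((8, 8)); sky[:4] = 1.0    # top half plays the role of "sky"
ground = 1.0 - sky                       # bottom half plays "ground"
g_sky = masked_gram(feats, sky)
g_ground = masked_gram(feats, ground)
```

A style-transfer loss would then compare each region's Gram matrix against the corresponding region of the style image, so each region receives its own texture statistics.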

    Ref-NPR: Reference-Based Non-Photorealistic Radiance Fields for Controllable Scene Stylization

    Full text link
    Current 3D scene stylization methods transfer textures and colors as styles using arbitrary style references, lacking meaningful semantic correspondences. We introduce Reference-Based Non-Photorealistic Radiance Fields (Ref-NPR) to address this limitation. This controllable method stylizes a 3D scene using radiance fields with a single stylized 2D view as a reference. We propose a ray registration process based on the stylized reference view to obtain pseudo-ray supervision in novel views. Then we exploit semantic correspondences in content images to fill occluded regions with perceptually similar styles, resulting in non-photorealistic and continuous novel view sequences. Our experimental results demonstrate that Ref-NPR outperforms existing scene and video stylization methods regarding visual quality and semantic correspondence. The code and data are publicly available on the project page at https://ref-npr.github.io. (Comment: Accepted by CVPR 2023. 17 pages, 20 figures. Project page: https://ref-npr.github.io, Code: https://github.com/dvlab-research/Ref-NPR.)

    Real-Time Stylized Rendering for Large-Scale 3D Scenes

    Get PDF
    While modern digital entertainment has seen a major shift toward photorealism in animation, there is still significant demand for stylized rendering tools. Stylized, or non-photorealistic, rendering (NPR) applications generally sacrifice physical accuracy for artistic or functional visual output. Oftentimes, NPR applications focus on extracting specific features from a 3D environment and highlighting them in a unique manner. One application of interest involves recreating 2D hand-drawn art styles in a 3D-modeled environment. This task poses challenges in the form of spatial coherence, feature extraction, and stroke line rendering. Previous research on this topic has also struggled to overcome specific performance bottlenecks, which have limited use of this technology in real-time applications. Specifically, many stylized rendering techniques have difficulty operating on large-scale scenes, such as open-world terrain environments. In this paper, we describe various novel rendering techniques for mimicking hand-drawn art styles in a large-scale 3D environment, including modifications to existing methods for stroke rendering and hatch-line texturing. Our system focuses on providing various complex styles while maintaining real-time performance, to maximize user interactivity. Our results demonstrate improved performance over existing real-time methods, and offer a few unique style options for users, though the system still suffers from some visual inconsistencies.
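Hatch-line texturing of the kind this abstract mentions is often driven by surface tone: dark areas receive dense cross-hatching, bright areas little or none. The sketch below only shows the level-selection step; the level count and mapping are illustrative assumptions, and a real renderer would blend between pre-rendered hatch textures rather than quantise:

```python
import numpy as np

def hatch_level(luminance, n_levels=4):
    """Pick a hatching density level from surface luminance: level 0 means
    no hatching (bright), the top level means dense cross-hatching (dark)."""
    lum = np.clip(luminance, 0.0, 1.0)
    level = np.floor((1.0 - lum) * n_levels).astype(int)
    return level.clip(0, n_levels - 1)

# Luminance ramp from near-black to near-white
lums = np.array([0.05, 0.3, 0.6, 0.95])
levels = hatch_level(lums)
```

Interpolating between adjacent hatch textures (as in tonal art maps) instead of hard quantisation avoids popping as lighting changes, which matters for the spatial coherence concern raised above.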

    5th SC@RUG 2008 proceedings:Student Colloquium 2007-2008

    Get PDF
