
    Painterly rendering techniques: A state-of-the-art review of current approaches

    In this publication we look at the different methods presented over the past few decades that attempt to recreate paintings digitally. While previous surveys concentrate on the broader subject of non-photorealistic rendering, the focus of this paper is placed firmly on painterly rendering techniques. We compare methods used to produce output painting styles such as abstract, colour pencil, watercolour, oriental, oil and pastel. Whereas some methods demand a high level of interaction from a skilled artist, others require only simple parameters provided by a user with little or no artistic experience. Many methods attempt to provide more automation through varying forms of reference data, ranging from still photographs and video to 3D polygonal meshes and even 3D point clouds. The techniques presented here endeavour to provide tools and styles that are not traditionally available to an artist. Copyright © 2012 John Wiley & Sons, Ltd.
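
    Many of the photograph-driven methods surveyed share a common core: cover the canvas with short strokes whose color is sampled from the source image and whose orientation follows local image structure. The sketch below illustrates that core idea only; the stroke length, spacing, and Sobel-gradient orientation are illustrative assumptions, not any specific surveyed method.

    # Minimal stroke-based painterly rendering sketch (illustrative only).
    import cv2
    import numpy as np

    def painterly(img, stroke_len=8, stroke_width=3, spacing=4):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)  # horizontal gradient
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)  # vertical gradient
        canvas = cv2.medianBlur(img, 7)         # base layer hides gaps between strokes
        h, w = gray.shape
        for y in range(0, h, spacing):
            for x in range(0, w, spacing):
                # Orient each stroke perpendicular to the local gradient so it
                # follows edges rather than crossing them.
                angle = np.arctan2(gy[y, x], gx[y, x]) + np.pi / 2
                dx = int(np.cos(angle) * stroke_len / 2)
                dy = int(np.sin(angle) * stroke_len / 2)
                color = tuple(int(c) for c in img[y, x])
                cv2.line(canvas, (x - dx, y - dy), (x + dx, y + dy),
                         color, stroke_width, cv2.LINE_AA)
        return canvas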

    Real-time Illumination and Visual Coherence for Photorealistic Augmented/Mixed Reality

    Realistically inserting a virtual object into the physical environment in real time is a desirable feature in augmented reality (AR) applications and mixed reality (MR) in general. This problem is considered a vital research area in computer graphics, a field that is experiencing ongoing discovery. The algorithms and methods used for dynamic, real-time illumination measurement, estimation, and rendering of augmented reality scenes are utilized in many applications to achieve a realistic perception by humans. The continuous development of computer vision and machine learning techniques, combined with established computer graphics and image processing methods, has produced a significant range of novel AR/MR techniques. These include methods for light source acquisition through image-based lighting or sampling, registering and estimating the lighting conditions, and composition of global illumination. In this review, we discuss the pipeline stages in detail, elaborating on the methods and techniques that have contributed to photo-realistic rendering, visual coherence, and interactive real-time illumination results in AR/MR.
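
    One concrete stage the review covers is light source acquisition through image-based lighting. A hedged sketch of that stage appears below: it extracts a few dominant light directions from an equirectangular HDR environment map. The luminance weighting, pole compensation, and greedy argmax sampling are illustrative assumptions; production systems typically use importance sampling or median-cut variants.

    # Sketch: pick dominant lights from an equirectangular HDR environment map.
    import numpy as np

    def dominant_lights(env_map, n_lights=4):
        """env_map: float32 HDR image of shape (H, W, 3), equirectangular."""
        h, w, _ = env_map.shape
        lum = env_map @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminance
        # Compensate for equirectangular oversampling near the poles.
        theta = (np.arange(h) + 0.5) / h * np.pi
        lum = lum * np.sin(theta)[:, None]
        lights = []
        for _ in range(n_lights):
            y, x = np.unravel_index(np.argmax(lum), lum.shape)
            phi = (x + 0.5) / w * 2 * np.pi   # azimuth
            th = (y + 0.5) / h * np.pi        # polar angle from +Y (a convention choice)
            direction = np.array([np.sin(th) * np.cos(phi),
                                  np.cos(th),
                                  np.sin(th) * np.sin(phi)])
            lights.append((direction, env_map[y, x].copy()))
            lum[max(0, y - 8):y + 8, max(0, x - 8):x + 8] = 0  # suppress neighborhood
        return lights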

    Marker hiding methods: Applications in augmented reality

    © 2015 Taylor & Francis Group, LLC. In augmented reality, markers are conspicuous by design: a rectangular image with black and white areas that disturbs the realism of the overall view. Because markerless techniques are usually not robust enough, hiding the markers has valuable uses, and many researchers have focused on it. Categorizing the marker hiding methods is the main motivation of this study, which explains each of them in detail and discusses the advantages and shortcomings of each. The main ideas, enhancements, and future work of the well-known techniques are also comprehensively summarized and analyzed in depth. The main goal of this study is to give researchers who are interested in markerless or marker-hiding methods an easier way to choose the method best suited to their aims. This work reviews the different methods that hide the augmented reality marker using information from its surrounding area. These methods differ considerably in how smoothly they continue the textures that hide the marker area, as well as in their performance at hiding the marker in real time. It is also hoped that our analysis helps researchers find solutions to the drawbacks of each method.
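
    The simplest family of methods reviewed fills the marker region from its surrounding area by image inpainting. Below is a minimal sketch of that idea, assuming the tracker supplies the marker's corner points; the OpenCV Telea inpainting call is a stand-in for the per-paper techniques, not a method from the survey.

    # Sketch: hide a detected marker by inpainting from the surrounding texture.
    import cv2
    import numpy as np

    def hide_marker(frame, marker_corners, radius=5):
        """frame: BGR image; marker_corners: 4x2 array from the AR tracker."""
        mask = np.zeros(frame.shape[:2], np.uint8)
        cv2.fillConvexPoly(mask, marker_corners.astype(np.int32), 255)
        # Dilate slightly so the black marker border is covered as well.
        mask = cv2.dilate(mask, np.ones((7, 7), np.uint8))
        return cv2.inpaint(frame, mask, radius, cv2.INPAINT_TELEA)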

    Live User-guided Intrinsic Video For Static Scenes

    We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes, and editing of material appearance.
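
    For context, the intrinsic model behind this decomposition explains every pixel as image = reflectance × shading. The toy sketch below solves that model in the log domain for a single grayscale image, treating user "constant shading" strokes as hard constraints; the Jacobi-style smoothing prior is an illustrative stand-in for the paper's volumetric fusion, not its actual solver.

    # Toy sketch: log-domain intrinsic decomposition with a user stroke constraint.
    import numpy as np

    def decompose(img, const_shading_mask, smooth=0.5):
        """img: (H, W) grayscale in (0, 1]; const_shading_mask: bool array,
        True where a user stroke marked shading as constant."""
        log_i = np.log(np.clip(img, 1e-4, 1.0))
        # Start shading from the image itself and repeatedly smooth it; the
        # smoothed result captures low-frequency illumination (Retinex-style prior).
        log_s = log_i.copy()
        for _ in range(200):  # Jacobi-style smoothing iterations
            avg = 0.25 * (np.roll(log_s, 1, 0) + np.roll(log_s, -1, 0) +
                          np.roll(log_s, 1, 1) + np.roll(log_s, -1, 1))
            log_s = (1 - smooth) * log_s + smooth * avg
            # Enforce the user stroke: constant shading inside the marked region.
            if const_shading_mask.any():
                log_s[const_shading_mask] = log_s[const_shading_mask].mean()
        log_r = log_i - log_s  # reflectance absorbs the remainder
        return np.exp(log_r), np.exp(log_s)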

    Art Directed Watercolor Shader for Non-photorealistic Rendering with a Focus on Reflections

    In this research, I demonstrated that emulating painterly reflections is impossible using existing modeling, compositing, and rendering software that does not provide programming capabilities. To obtain painterly reflections, we need to emulate three aspects of them: (1) the shape of reflections; (2) the glossiness of reflections; and (3) the colors of reflections. The first two turn out to be relatively easy. However, despite the perceived simplicity of color reproduction, the third turned out to be the hardest without developing our own proprietary tools. To demonstrate the difficulty, I developed a shader using commercial rendering and shading software that does not provide explicit programming power. I assigned my shader as a surface material to 3D objects and was able to create computer-generated watercolor-style renderings without reflections. My shader provides rendering effects such as diffuse, contours, specularity, shadow, and reflections. Although I can faithfully emulate the non-reflected regions of given watercolor paintings, I demonstrate that my shader cannot produce reflection colors that are faithful to the colors of the original reflections.
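
    To make the shading terms listed above concrete, here is a hedged per-pixel sketch of quantized diffuse, contour, and specular components in the spirit of a watercolor look. The band count, contour threshold, and highlight cutoff are assumptions standing in for the thesis's node-based shader, not its actual implementation.

    # Sketch: per-pixel watercolor-style shading (quantized diffuse + contour + spec).
    import numpy as np

    def watercolor_shade(normal, light_dir, view_dir, base_color,
                         bands=3, contour_cos=0.25, shininess=16.0):
        n = normal / np.linalg.norm(normal)
        l = light_dir / np.linalg.norm(light_dir)
        v = view_dir / np.linalg.norm(view_dir)
        # Quantized diffuse: collapse Lambert shading into a few flat washes.
        diff = max(np.dot(n, l), 0.0)
        diff = np.floor(min(diff, 0.999) * bands) / (bands - 1)
        # Contour: darken pixels whose normal is nearly perpendicular to the view.
        contour = 0.0 if np.dot(n, v) < contour_cos else 1.0
        # Specular: a small, hard-edged highlight instead of a smooth Phong lobe.
        h = (l + v) / np.linalg.norm(l + v)
        spec = 1.0 if max(np.dot(n, h), 0.0) ** shininess > 0.5 else 0.0
        color = base_color * diff * contour + spec * np.ones(3) * 0.4
        return np.clip(color, 0.0, 1.0)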

    A Survey on Video-based Graphics and Video Visualization


    Perceptually Inspired Real-time Artistic Style Transfer for Video Stream

    This study presents a real-time texture transfer method for artistic style transfer on video streams. We propose a parallel framework using a T-shaped kernel to enhance computational performance. For the accelerated motion estimation required to maintain temporal coherence, we present a method using a downscaled motion field that achieves high real-time performance for texture transfer on video streams. In addition, to enhance the artistic quality, we calculate the level of abstraction using visual saliency and integrate it with the texture transfer algorithm. Our algorithm can thus stylize video with perceptual enhancements.
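
    The downscaled-motion-field idea can be sketched as follows: estimate optical flow on a reduced frame pair, upsample and rescale the flow, and backward-warp the previous stylized frame so strokes follow scene motion. Farneback flow and the 1/4 scale factor below are illustrative choices, not the paper's accelerated estimator.

    # Sketch: temporal coherence via a downscaled motion field.
    import cv2
    import numpy as np

    def warp_stylized(prev_stylized, prev_gray, cur_gray, scale=0.25):
        small_prev = cv2.resize(prev_gray, None, fx=scale, fy=scale)
        small_cur = cv2.resize(cur_gray, None, fx=scale, fy=scale)
        # Flow from the current frame to the previous one, so each output
        # pixel knows where to fetch color in the previous stylized frame.
        flow = cv2.calcOpticalFlowFarneback(small_cur, small_prev, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = cur_gray.shape
        flow = cv2.resize(flow, (w, h)) / scale  # upsample and rescale vectors
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        return cv2.remap(prev_stylized, map_x, map_y, cv2.INTER_LINEAR)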

    Ref-NPR: Reference-Based Non-Photorealistic Radiance Fields for Controllable Scene Stylization

    Current 3D scene stylization methods transfer textures and colors as styles using arbitrary style references, lacking meaningful semantic correspondences. We introduce Reference-Based Non-Photorealistic Radiance Fields (Ref-NPR) to address this limitation. This controllable method stylizes a 3D scene using radiance fields with a single stylized 2D view as a reference. We propose a ray registration process based on the stylized reference view to obtain pseudo-ray supervision in novel views. Then we exploit semantic correspondences in content images to fill occluded regions with perceptually similar styles, resulting in non-photorealistic and continuous novel view sequences. Our experimental results demonstrate that Ref-NPR outperforms existing scene and video stylization methods regarding visual quality and semantic correspondence. The code and data are publicly available on the project page at https://ref-npr.github.io.
    Comment: Accepted by CVPR 2023. 17 pages, 20 figures. Project page: https://ref-npr.github.io, Code: https://github.com/dvlab-research/Ref-NP
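
    The ray registration step can be pictured schematically: lift every pixel of the stylized reference view to a 3D point using depths rendered by the trained radiance field, then supervise novel-view samples that land near registered points with the reference colors (pseudo-rays). In the sketch below, render_depth and nearest_registered are hypothetical placeholders, not the Ref-NPR API.

    # Schematic sketch of reference-view ray registration (placeholder functions).
    import numpy as np

    def build_registration(ref_rgb, ref_pose, intrinsics, render_depth):
        depth = render_depth(ref_pose, intrinsics)  # H x W depths from the NeRF
        h, w = depth.shape
        ys, xs = np.mgrid[0:h, 0:w]
        # Back-project every reference pixel to a 3D point in camera space.
        cam = np.stack([(xs - intrinsics[0, 2]) * depth / intrinsics[0, 0],
                        (ys - intrinsics[1, 2]) * depth / intrinsics[1, 1],
                        depth], axis=-1)
        world = cam.reshape(-1, 3) @ ref_pose[:3, :3].T + ref_pose[:3, 3]
        return world, ref_rgb.reshape(-1, 3)  # registered points + stylized colors

    def pseudo_ray_loss(novel_points, novel_colors, reg_points, reg_colors,
                        nearest_registered, tol=0.01):
        # Match each novel-view sample to its nearest registered 3D point and
        # penalize color deviation from the stylized reference (pseudo-rays).
        idx, dist = nearest_registered(novel_points, reg_points)
        valid = dist < tol
        diff = novel_colors[valid] - reg_colors[idx[valid]]
        return np.mean(diff ** 2) if valid.any() else 0.0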