
    Interactive toon shading using mesh smoothing

    Toon shading mimics the style of a few flat colour bands and hence offers an effective way to convey cartoon-style rendering. Despite a growing amount of research on toon shading, little has been reported on generating toon shading styles of greater simplicity. In this paper, we present a method to create a simplified form of toon shading from 3D objects using mesh smoothing. The proposed method exploits Laplacian smoothing to emphasise the simplicity of 3D objects. Motivated by a simplified form of the Phong lighting model, we create a non-photorealistic style capable of enhancing the cartoonish appearance. An enhanced toon shading algorithm is then applied to the simplified 3D objects to convey simpler visual cues of tone. The experimental results demonstrate the ability of the proposed method to produce more simplistic, cartoonish effects.
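The two core steps described above — uniform Laplacian smoothing of the mesh, then quantising the diffuse term into a few flat bands — can be sketched as follows (a minimal NumPy illustration; the function names, uniform neighbour weights and band count are my assumptions, not the paper's exact formulation):

```python
import numpy as np

def laplacian_smooth(verts, neighbours, lam=0.5, iterations=10):
    """Uniform Laplacian smoothing: each vertex moves a fraction `lam`
    toward the centroid of its one-ring neighbours."""
    v = verts.astype(float).copy()
    for _ in range(iterations):
        centroids = np.array([v[nb].mean(axis=0) for nb in neighbours])
        v += lam * (centroids - v)
    return v

def toon_shade(n_dot_l, bands=3):
    """Quantise the clamped diffuse term (N.L) into `bands` flat tones,
    a band-limited simplification of Phong diffuse lighting."""
    t = np.clip(n_dot_l, 0.0, 1.0)
    idx = np.minimum(np.floor(t * bands), bands - 1)
    return idx / (bands - 1)
```

With `bands=3`, any diffuse value collapses to one of the tones 0, 0.5 or 1, which is what produces the characteristic cartoon banding.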

    Reflectance Transformation Imaging (RTI) System for Ancient Documentary Artefacts

    This tutorial summarises our uses of reflectance transformation imaging in archaeological contexts. It introduces the UK AHRC-funded project Reflectance Transformation Imaging for Ancient Documentary Artefacts and demonstrates imaging methodologies.
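RTI datasets of this kind are commonly fitted with Polynomial Texture Maps (PTM): for each pixel, luminance is modelled as a biquadratic function of the projected light direction (lu, lv). A minimal per-pixel least-squares fit might look like this (illustrative only; the project's actual processing pipeline is not described in the abstract):

```python
import numpy as np

def fit_ptm(light_dirs, luminances):
    """Fit the six standard PTM coefficients for one pixel:
    L(lu, lv) ~ a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5,
    given the (lu, lv) light direction for each captured image."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, luminances, rcond=None)
    return coeffs

def relight(coeffs, lu, lv):
    """Evaluate the fitted model under a new light direction."""
    return coeffs @ np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
```

Fitting once per pixel over all captured exposures yields a relightable image: the viewer evaluates `relight` interactively as the user moves a virtual light.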

    Geometry-based shading for shape depiction enhancement

    Recent work on Non-Photorealistic Rendering (NPR) shows that object shape enhancement requires sophisticated effects such as surface detail detection and stylized shading. To date, several rendering techniques have been proposed to address this issue, but most of them tie shape enhancement functionality to surface feature variations, so the problem persists, especially in NPR. This paper addresses it by presenting a new approach for enhancing the shape depiction of 3D objects in NPR. We first introduce a tweakable shape descriptor that offers versatile functionalities for describing the salient features of 3D objects. Then, to enhance classical shading models, we propose a new technique called Geometry-based Shading, which controls reflected lighting intensities based on local geometry. Our approach works without any constraint on the choice of material or illumination. We demonstrate results obtained with Blinn-Phong shading, Gooch shading, and cartoon shading; these results show that our approach produces more satisfying output than previous shape depiction techniques. Finally, our approach runs in real time on modern graphics hardware, making it well suited to interactive 3D visualization.
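The central idea — modulating reflected lighting intensity by a local geometry descriptor — can be illustrated very roughly as follows (a sketch only: the exponential curvature-to-gain mapping and the `alpha` parameter are my assumptions, not the paper's actual shape descriptor):

```python
import numpy as np

def geometry_weighted_diffuse(n_dot_l, curvature, alpha=1.0):
    """Scale the clamped diffuse term by an exponential of signed local
    curvature: convex regions (curvature > 0) are brightened, concave
    regions (curvature < 0) darkened, which exaggerates surface relief."""
    diffuse = np.clip(n_dot_l, 0.0, 1.0)
    gain = np.exp(alpha * np.asarray(curvature, dtype=float))
    return np.clip(diffuse * gain, 0.0, 1.0)
```

Because the modulation multiplies the shading term rather than replacing it, the same weighting can be applied on top of Blinn-Phong, Gooch or cartoon shading, matching the material- and illumination-independence the abstract claims.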

    Ink-and-Ray: Bas-Relief Meshes for Adding Global Illumination Effects to Hand-Drawn Characters

    We present a new approach for generating global illumination renderings of hand-drawn characters using only a small set of simple annotations. Our system exploits the concept of bas-relief sculptures, making it possible to generate 3D proxies suitable for rendering without requiring side views or extensive user input. We formulate an optimization process that automatically constructs approximate geometry sufficient to evoke the impression of a consistent 3D shape. The resulting renders provide the richer stylization capabilities of 3D global illumination while still retaining the 2D hand-drawn look-and-feel. We demonstrate our approach on a varied set of hand-drawn images and animations, showing that, even in comparison to ground-truth renderings of full 3D objects, our bas-relief approximation is able to produce convincing global illumination effects, including self-shadowing, glossy reflections, and diffuse color bleeding.

    Importance-Driven Composition of Multiple Rendering Styles

    We introduce a non-uniform composition that integrates multiple rendering styles in a picture, driven by an importance map. This map, either issued from saliency estimation or designed by a user, is introduced both in the creation of the multiple styles and in the final composition. Our approach accommodates a variety of stylization techniques, such as color desaturation, line drawing, blurring, edge-preserving smoothing and enhancement. We illustrate the versatility of the proposed approach and the variety of rendering styles in different applications such as images, videos, 3D scenes and even mixed reality. We also demonstrate that such an approach may help in directing user attention.
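At its simplest, the final composition step can be seen as a per-pixel blend of two stylised renderings weighted by the importance map (a minimal linear sketch; the paper also feeds the map into the creation of each style, which this omits):

```python
import numpy as np

def compose_by_importance(style_low, style_high, importance):
    """Per-pixel blend of two pre-stylised images: the abstract style
    dominates where importance is low, the detailed style where it is
    high. `importance` is a per-pixel map in [0, 1]."""
    w = np.clip(np.asarray(importance, dtype=float), 0.0, 1.0)
    if style_low.ndim == 3:        # broadcast the map over colour channels
        w = w[..., None]
    return (1.0 - w) * style_low + w * style_high
```

The same blend applies unchanged whether the map comes from a saliency estimator or from a user-painted mask, which is what makes the approach usable across images, videos and 3D scenes.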

    A volume filtering and rendering system for an improved visual balance of feature preservation and noise suppression in medical imaging

    Preserving or enhancing salient features and effectively suppressing noise-derived artifacts and extraneous detail have been two consistent yet competing objectives in volumetric medical image processing. Illustrative techniques (and methods inspired by them) can help to enhance and, if desired, isolate the depiction of specific regions of interest whilst retaining overall context. However, highlighting or enhancing specific features can have the undesirable side-effect of highlighting noise. Second-derivative-based methods can be employed effectively in both the rendering and volume filtering stages of a visualisation pipeline to enhance the depiction of feature detail whilst minimising noise-based artifacts. We develop a new 3D anisotropic-diffusion PDE for an improved balance of feature retention and noise reduction; furthermore, we present a feature-enhancing visualisation pipeline that can be applied to multiple modalities and has been shown to be particularly effective in the context of 3D ultrasound.
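The family of PDEs this work builds on can be illustrated with the classic Perona-Malik scheme in 2D (an illustration of the general idea only; the paper develops its own 3D PDE, which this is not):

```python
import numpy as np

def perona_malik(img, iterations=20, kappa=0.1, dt=0.2):
    """Classic Perona-Malik anisotropic diffusion. The edge-stopping
    function g(d) = exp(-(d/kappa)^2) lets flat, noisy regions diffuse
    freely while damping diffusion across strong gradients (edges).
    Boundaries wrap via np.roll, a simplification of proper Neumann
    boundary conditions."""
    u = np.asarray(img, dtype=float).copy()
    for _ in range(iterations):
        # differences to the four axis-aligned neighbours
        diffs = [np.roll(u, s, axis) - u for axis in (0, 1) for s in (-1, 1)]
        u = u + dt * sum(np.exp(-(d / kappa) ** 2) * d for d in diffs)
    return u
```

The `kappa` threshold sets the balance the abstract describes: gradients well below it are treated as noise and smoothed away, gradients well above it are treated as features and preserved.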

    An Image-based model for 3D shape quality measure

    In light of increased research on 3D shapes and the increased processing capability of GPUs, there has been a significant increase in available 3D applications, and in many of them assessment of the perceptual quality of 3D shapes is required. Due to the nature of 3D representation, this quality assessment may take various forms. While it is straightforward to measure geometric distortions directly on the 3D shape geometry, such measures are often inconsistent with human perception of quality. In most cases, human viewers perceive 3D shapes through their 2D renderings, so it is plausible to measure shape quality using those renderings. In this paper, we present an image-based quality metric for evaluating 3D shape quality given the original and distorted shapes. To provide good coverage of the 3D geometry from different views, we render each shape from 12 equally spaced views, using a variety of rendering styles to capture different aspects of its visual characteristics. Image-based metrics such as SSIM (Structural Similarity Index Measure) are then used to measure the quality of the 3D shapes. Our experiments show that by selecting a suitable combination of rendering styles and building a neural-network-based model, we achieve significantly better prediction of subjective perceptual quality than existing methods.
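The measurement side of such a pipeline can be sketched as: render both shapes from the same set of views, score each pair with SSIM, and pool the scores. (The single-window SSIM below is a simplification; practical code would use a sliding-window implementation such as `skimage.metrics.structural_similarity`, and the plain mean here stands in for the paper's learned pooling model.)

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM computed over the whole image, using the
    standard stabilisation constants c1 = (0.01*L)^2, c2 = (0.03*L)^2."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def view_pooled_quality(renders_ref, renders_dist):
    """Mean SSIM over paired renderings from several viewpoints,
    e.g. the 12 equally spaced views described in the abstract."""
    return float(np.mean([ssim_global(a, b)
                          for a, b in zip(renders_ref, renders_dist)]))
```

Identical render sets score 1.0; any view-visible distortion pulls the pooled score below that, giving a scalar that can be regressed against subjective ratings.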

    Colour videos with depth: acquisition, processing and evaluation

    The human visual system lets us perceive the world around us in three dimensions by integrating evidence from depth cues into a coherent visual model of the world. The equivalents in computer vision and computer graphics are geometric models, which provide a wealth of information about the represented objects, such as depth and surface normals. Videos do not contain this information, but only provide per-pixel colour information. In this dissertation, I therefore investigate a combination of the two: videos with per-pixel depth (also known as RGBZ videos). I consider the full life cycle of these videos: from their acquisition, via filtering and processing, to stereoscopic display.

    I propose two approaches to capture videos with depth. The first is a spatiotemporal stereo matching approach based on the dual-cross-bilateral grid, a novel real-time technique derived by accelerating a reformulation of an existing stereo matching approach. This is the basis for an extension which incorporates temporal evidence in real time, resulting in increased temporal coherence of disparity maps, particularly in the presence of image noise. The second acquisition approach is a sensor fusion system which combines data from a noisy, low-resolution time-of-flight camera and a high-resolution colour video camera into a coherent, noise-free video with depth. The system consists of a three-step pipeline that aligns the video streams, efficiently removes and fills invalid and noisy geometry, and finally uses a spatiotemporal filter to increase the spatial resolution of the depth data and strongly reduce depth measurement noise.

    I show that these videos with depth enable a range of video processing effects that are not achievable using colour video alone. These effects critically rely on the geometric information, such as a proposed video relighting technique which requires high-quality surface normals to produce plausible results. In addition, I demonstrate enhanced non-photorealistic rendering techniques and the ability to synthesise stereoscopic videos, which allows these effects to be applied stereoscopically. These stereoscopic renderings inspired me to study stereoscopic viewing discomfort. The result is a surprisingly simple computational model that predicts the visual comfort of stereoscopic images. I validated this model with a perceptual study, which showed that it correlates strongly with human comfort ratings. This makes it ideal for automatic comfort assessment, without the need for costly and lengthy perceptual studies.
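The depth-refinement idea running through both acquisition approaches — smoothing noisy depth while respecting edges in the aligned colour image — can be illustrated with a brute-force joint (cross-)bilateral filter (a simplified sketch of the principle behind the dual-cross-bilateral grid, not its accelerated real-time formulation; the parameter names are mine):

```python
import numpy as np

def joint_bilateral_depth(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Cross-bilateral filter: each depth pixel becomes a weighted average
    of its neighbours, with weights combining spatial distance AND
    similarity in the (here greyscale) colour guide image, so smoothing
    stops at colour edges rather than blurring depth discontinuities."""
    h, w = depth.shape
    out = np.empty_like(depth, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2)
                             / (2 * sigma_s ** 2))
            similar = np.exp(-((guide[y0:y1, x0:x1] - guide[y, x]) ** 2)
                             / (2 * sigma_r ** 2))
            weights = spatial * similar
            out[y, x] = (weights * depth[y0:y1, x0:x1]).sum() / weights.sum()
    return out
```

The grid-based formulation in the dissertation exists precisely because this brute-force version is far too slow for video; the per-pixel result, however, is the same in spirit.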