13,895 research outputs found

    Improving Shape Depiction under Arbitrary Rendering

    Based on the observation that shading conveys shape information through intensity gradients, we present a new technique called Radiance Scaling that modifies the classical shading equations to offer versatile shape depiction functionalities. It works by scaling reflected light intensities depending on both surface curvature and material characteristics. As a result, diffuse shading or highlight variations become correlated with surface feature variations, enhancing concavities and convexities. The first advantage of such an approach is that it produces satisfying results with any kind of material for direct and global illumination: we demonstrate results obtained with Phong and Ashikhmin-Shirley BRDFs, cartoon shading, sub-Lambertian materials, and perfectly reflective or refractive objects. Another advantage is that there is no restriction on the choice of lighting environment: it works with a single light, area lights, and inter-reflections. Third, it may be adapted to enhance surface shape through the use of precomputed radiance data such as ambient occlusion, prefiltered environment maps, or lit spheres. Finally, our approach works in real time on modern graphics hardware, making it suitable for interactive 3D visualization.
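    The core idea of this abstract, scaling reflected intensity by surface curvature, can be sketched in a few lines. This is a minimal illustration, not the paper's actual formulation: the function name `radiance_scaling`, the `tanh` mapping, and the `alpha` strength parameter are all assumptions made for the sketch.

```python
import numpy as np

def radiance_scaling(intensity, curvature, alpha=0.5):
    """Hedged sketch of curvature-dependent intensity scaling.

    Scales reflected light intensity by a factor derived from surface
    curvature, so convexities (curvature > 0) brighten and concavities
    (curvature < 0) darken. The paper's actual scaling function also
    accounts for material characteristics; this sketch does not.
    """
    # Map curvature to a multiplicative factor around 1; alpha controls
    # enhancement strength (an assumed parameter, not from the paper).
    scale = 1.0 + alpha * np.tanh(np.asarray(curvature, dtype=float))
    return np.clip(np.asarray(intensity, dtype=float) * scale, 0.0, 1.0)
```

    Applied per pixel after shading, a flat region (zero curvature) is left unchanged, while surface features are exaggerated in proportion to their curvature.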

    Geometry-based shading for shape depiction enhancement

    Recent works on Non-Photorealistic Rendering (NPR) show that object shape enhancement requires sophisticated effects such as surface detail detection and stylized shading. To date, several rendering techniques have been proposed to address this issue, but most of them are limited to correlating shape enhancement functionalities with surface feature variations. Therefore, the problem persists, especially in NPR. This paper addresses it by presenting a new approach for enhancing the shape depiction of 3D objects in NPR. We first introduce a tweakable shape descriptor that offers versatile functionalities for describing the salient features of 3D objects. Then, to enhance the classical shading models, we propose a new technique called Geometry-based Shading, which controls reflected lighting intensities based on local geometry. Our approach works without any constraint on the choice of material or illumination. We demonstrate results obtained with Blinn-Phong shading, Gooch shading, and cartoon shading. These results show that our approach produces more satisfying results than previous shape depiction techniques. Finally, our approach runs in real time on modern graphics hardware, making it well suited to interactive 3D visualization.
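    The control scheme this abstract describes, modulating a classical shading model by a local shape descriptor, can be sketched as follows. The helper `blinn_phong`, the descriptor range, and the `strength` parameter are illustration-only assumptions, not the paper's actual definitions.

```python
import numpy as np

def blinn_phong(n, l, v, shininess=32.0):
    """Standard Blinn-Phong shading for unit normal n, light l, view v."""
    h = (l + v) / np.linalg.norm(l + v)           # half vector
    diffuse = max(float(np.dot(n, l)), 0.0)       # Lambertian term
    specular = max(float(np.dot(n, h)), 0.0) ** shininess
    return diffuse + specular

def geometry_based_shading(n, l, v, descriptor, strength=0.6):
    """Hedged sketch: modulate reflected intensity by a per-point shape
    descriptor assumed to lie in [-1, 1] (e.g. derived from local
    curvature). The paper's descriptor and control function are more
    elaborate than this linear modulation."""
    base = blinn_phong(n, l, v)
    return base * (1.0 + strength * descriptor)
```

    A descriptor of 0 reproduces the unmodified shading; positive values brighten salient convex features and negative values darken concave ones.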

    Transport-Based Neural Style Transfer for Smoke Simulations

    Artistically controlling fluids has always been a challenging task. Optimization techniques rely on approximating simulation states towards target velocity or density field configurations, which are often handcrafted by artists to indirectly control smoke dynamics. Patch synthesis techniques transfer image textures or simulation features to a target flow field. However, these are either limited to adding structural patterns or augmenting coarse flows with turbulent structures, and hence cannot capture the full spectrum of different styles and semantically complex structures. In this paper, we propose the first Transport-based Neural Style Transfer (TNST) algorithm for volumetric smoke data. Our method is able to transfer features from natural images to smoke simulations, enabling general content-aware manipulations ranging from simple patterns to intricate motifs. The proposed algorithm is physically inspired, since it computes the density transport from a source input smoke to a desired target configuration. Our transport-based approach allows direct control over the divergence of the stylization velocity field by optimizing incompressible and irrotational potentials that transport smoke towards stylization. Temporal consistency is ensured by transporting and aligning subsequent stylized velocities, and 3D reconstructions are computed by seamlessly merging stylizations from different camera viewpoints.

    Comment: ACM Transactions on Graphics (SIGGRAPH Asia 2019); additional materials: http://www.byungsoo.me/project/neural-flow-styl
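    The incompressible-plus-irrotational decomposition mentioned in the abstract can be illustrated on a 2D grid: the curl of a scalar stream function is divergence-free by construction, while the gradient of a scalar potential is curl-free. This is a minimal sketch of that decomposition, not the paper's optimization; the function name and the finite-difference discretization are assumptions.

```python
import numpy as np

def velocity_from_potentials(psi, phi, h=1.0):
    """Build a 2D velocity field as curl(psi) + grad(phi).

    psi : 2D stream function  -> divergence-free component.
    phi : 2D scalar potential -> curl-free component.
    Arrays are indexed [y, x]; h is the grid spacing.
    """
    # Curl of a scalar stream function in 2D: (d psi/dy, -d psi/dx).
    dpsi_dy, dpsi_dx = np.gradient(psi, h)
    # Gradient of the scalar potential: (d phi/dx, d phi/dy).
    dphi_dy, dphi_dx = np.gradient(phi, h)
    u = dpsi_dy + dphi_dx
    v = -dpsi_dx + dphi_dy
    return u, v
```

    Because finite-difference operators along different axes commute, the stream-function part contributes exactly zero discrete divergence, which is why optimizing such potentials gives direct control over the divergence of the stylization velocity.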

    Longitudinal visualization for exploratory analysis of multiple sclerosis lesions

    In multiple sclerosis (MS), the amount of brain damage, its anatomical location, shape, and changes over time are important aspects that help medical researchers and clinicians understand the temporal patterns of the disease. Interactive visualization of longitudinal MS data can support studies aimed at exploratory analysis of lesion and healthy-tissue topology. Existing visualizations in this context comprise bar charts and summary measures, such as absolute lesion counts and volumes over time, as well as volume changes. These techniques can work well for datasets with dual time-point comparisons, but for frequent follow-up scans, understanding patterns from multimodal data is difficult without suitable visualization approaches. As a solution, we propose a visualization application that presents lesion exploration tools through interactive visualizations suitable for large time-series data. In addition to various volumetric and temporal exploration facilities, we include an interactive stacked area graph with other integrated features that enable comparison of lesion features, such as intensity or volume change. We derive the input data for the longitudinal visualizations from automated lesion tracking. For cases with a larger number of follow-ups, our visualization design can provide useful summary information while allowing medical researchers and clinicians to study features at lower granularities. We demonstrate the utility of our visualization on simulated datasets through an evaluation with domain experts.

    Semi-automatic transfer function generation for volumetric data visualization using contour tree analyses
