Improving Shape Depiction under Arbitrary Rendering
Based on the observation that shading conveys shape information through intensity gradients, we present a new technique called Radiance Scaling that modifies the classical shading equations to offer versatile shape depiction functionalities. It works by scaling reflected light intensities depending on both surface curvature and material characteristics. As a result, diffuse shading or highlight variations become correlated with surface feature variations, enhancing concavities and convexities. The first advantage of this approach is that it produces satisfying results with any kind of material under direct and global illumination: we demonstrate results obtained with Phong and Ashikhmin-Shirley BRDFs, cartoon shading, sub-Lambertian materials, and perfectly reflective or refractive objects. Another advantage is that there is no restriction on the choice of lighting environment: it works with a single light, area lights, and inter-reflections. Third, it may be adapted to enhance surface shape through the use of precomputed radiance data such as ambient occlusion, prefiltered environment maps, or lit spheres. Finally, our approach runs in real time on modern graphics hardware, making it suitable for any interactive 3D visualization.
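The scaling idea can be sketched in a few lines, assuming a per-pixel signed curvature estimate is available; the exponential mapping and the `alpha` parameter below are illustrative choices, not the paper's exact scaling function:

```python
import numpy as np

def radiance_scaling(intensity, curvature, alpha=1.0):
    """Illustrative sketch: scale reflected light intensity by curvature.

    `intensity` is the shaded value in [0, 1]; `curvature` is a signed
    local curvature estimate. Convexities (positive curvature) brighten
    and concavities (negative curvature) darken. The exponential mapping
    and `alpha` are invented for illustration.
    """
    scale = np.exp(alpha * curvature)
    return np.clip(intensity * scale, 0.0, 1.0)

flat = radiance_scaling(0.5, 0.0)      # unchanged on flat regions
convex = radiance_scaling(0.5, 0.5)    # brightened
concave = radiance_scaling(0.5, -0.5)  # darkened
```

Because the scaling multiplies whatever the shading model produced, the same function applies unchanged to diffuse terms, highlights, or precomputed radiance.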
Geometry-Based Shading for Shape Depiction Enhancement
Recent work on Non-Photorealistic Rendering (NPR) shows that enhancing object shape requires sophisticated effects such as surface detail detection and stylized shading. Several rendering techniques have been proposed to this end, but most are limited to correlating shape enhancement with surface feature variations, so the problem persists, especially in NPR. This paper addresses it by presenting a new approach for enhancing shape depiction of 3D objects in NPR. We first introduce a tweakable shape descriptor that offers versatile functionalities for describing the salient features of 3D objects. To enhance classical shading models, we then propose a new technique called Geometry-based Shading, which controls reflected lighting intensities based on local geometry. Our approach works without any constraint on the choice of material or illumination. We demonstrate results obtained with Blinn-Phong shading, Gooch shading, and cartoon shading, which compare favorably with previous shape depiction techniques. Finally, our approach runs in real time on modern graphics hardware, which makes it efficient for interactive 3D visualization.
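A tweakable descriptor of this kind might look as follows; the curvature measures, weights, and the shading modulation are hypothetical stand-ins for the paper's formulation:

```python
import numpy as np

def shape_descriptor(mean_curv, gauss_curv, w_mean=0.7, w_gauss=0.3):
    """Hypothetical tweakable descriptor: a weighted blend of curvature
    measures squashed into [0, 1). Measures and weights are invented for
    illustration, not the paper's actual definition."""
    d = w_mean * np.abs(mean_curv) + w_gauss * np.abs(gauss_curv)
    return d / (1.0 + d)

def geometry_based_shading(base_shade, descriptor, strength=0.5):
    # Boost reflected intensity near salient features, attenuate elsewhere.
    return np.clip(base_shade * (1.0 + strength * (2.0 * descriptor - 1.0)),
                   0.0, 1.0)

flat = geometry_based_shading(0.8, shape_descriptor(0.0, 0.0))
salient = geometry_based_shading(0.8, shape_descriptor(5.0, 2.0))
```

Because the descriptor only modulates the output of an existing shading model, it composes with Blinn-Phong, Gooch, or cartoon shading without changing those models.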
Transport-Based Neural Style Transfer for Smoke Simulations
Artistically controlling fluids has always been a challenging task.
Optimization techniques rely on approximating simulation states towards target
velocity or density field configurations, which are often handcrafted by
artists to indirectly control smoke dynamics. Patch synthesis techniques
transfer image textures or simulation features to a target flow field. However,
these are either limited to adding structural patterns or augmenting coarse
flows with turbulent structures, and hence cannot capture the full spectrum of
different styles and semantically complex structures. In this paper, we propose
the first Transport-based Neural Style Transfer (TNST) algorithm for volumetric
smoke data. Our method is able to transfer features from natural images to
smoke simulations, enabling general content-aware manipulations ranging from
simple patterns to intricate motifs. The proposed algorithm is physically
inspired, since it computes the density transport from a source input smoke to
a desired target configuration. Our transport-based approach allows direct
control over the divergence of the stylization velocity field by optimizing
incompressible and irrotational potentials that transport smoke towards
stylization. Temporal consistency is ensured by transporting and aligning
subsequent stylized velocities, and 3D reconstructions are computed by
seamlessly merging stylizations from different camera viewpoints.
Comment: ACM Transactions on Graphics (SIGGRAPH ASIA 2019); additional materials: http://www.byungsoo.me/project/neural-flow-styl
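The divergence control described above rests on a Helmholtz-style split of the stylization velocity into an irrotational part grad(phi) and a divergence-free part derived from a stream function psi. The grid, potentials, and finite-difference setup below are illustrative assumptions, shown in 2D for brevity rather than the paper's 3D formulation:

```python
import numpy as np

n = 64
h = 1.0 / (n - 1)                     # uniform grid spacing (assumed)
y, x = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")

phi = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)   # scalar potential
psi = np.cos(2 * np.pi * x) * np.sin(2 * np.pi * y)   # 2D stream function

# Central differences (each drops one cell on both ends of its axis).
def ddx(f):
    return (f[:, 2:] - f[:, :-2]) / (2 * h)

def ddy(f):
    return (f[2:, :] - f[:-2, :]) / (2 * h)

# Stylization velocity on the grid interior:
# irrotational part grad(phi) plus divergence-free part from psi.
u = ddx(phi)[1:-1, :] + ddy(psi)[:, 1:-1]
v = ddy(phi)[:, 1:-1] - ddx(psi)[1:-1, :]
```

With central differences the mixed derivatives of `psi` commute, so the stream-function part contributes no divergence; optimizing `phi` alone then gives direct control over the divergence of the total field.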
Longitudinal visualization for exploratory analysis of multiple sclerosis lesions
In multiple sclerosis (MS), the amount of brain damage, anatomical location, shape, and changes are important aspects that help medical researchers and clinicians to understand the temporal patterns of the disease. Interactive visualization for longitudinal MS data can support studies aimed at exploratory analysis of lesion and healthy tissue topology. Existing visualizations in this context comprise bar charts and summary measures, such as absolute numbers and volumes to summarize lesion trajectories over time, as well as summary measures such as volume changes. These techniques can work well for datasets having dual time point comparisons. For frequent follow-up scans, understanding patterns from multimodal data is difficult without suitable visualization approaches. As a solution, we propose a visualization application, wherein we present lesion exploration tools through interactive visualizations that are suitable for large time-series data. In addition to various volumetric and temporal exploration facilities, we include an interactive stacked area graph with other integrated features that enable comparison of lesion features, such as intensity or volume change. We derive the input data for the longitudinal visualizations from automated lesion tracking. For cases with a larger number of follow-ups, our visualization design can provide useful summary information while allowing medical researchers and clinicians to study features at lower granularities. We demonstrate the utility of our visualization on simulated datasets through an evaluation with domain experts.
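The input for a stacked area graph of lesion load reduces to cumulative per-lesion volumes over follow-up scans. A minimal sketch, with invented lesion trajectories:

```python
import numpy as np

# Rows: tracked lesions; columns: follow-up scans (volumes in mm^3).
# The values are invented for illustration.
volumes = np.array([
    [120.0, 135.0, 150.0, 140.0],   # growing, then shrinking lesion
    [ 40.0,  42.0,  55.0,  60.0],   # steadily growing lesion
    [  0.0,  10.0,  25.0,  30.0],   # lesion first appearing at scan 2
])

bands = np.cumsum(volumes, axis=0)   # upper boundary of each stacked band
total = bands[-1]                    # total lesion load per scan
change = np.diff(total)              # load change between consecutive scans
```

Each row of `bands` is one band boundary of the stacked area graph, and `change` is the kind of per-interval summary measure the abstract mentions for comparing volume change across follow-ups.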
Image-Based 3D Photography Using Opacity Hulls
We have built a system for acquiring and displaying high quality graphical models of objects that are impossible to scan with traditional scanners. Our system can acquire highly specular and fuzzy materials, such as fur and feathers. The hardware set-up consists of a turntable, two plasma displays, an array of cameras, and a rotating array of directional lights. We use multi-background matting techniques to acquire alpha mattes of the object from multiple viewpoints. The alpha mattes are used to construct an opacity hull. The opacity hull is a new shape representation, defined as the visual hull of the object with view-dependent opacity. It enables visualization of complex object silhouettes and seamless blending of objects into new environments. Our system also supports relighting of objects with arbitrary appearance using surface reflectance fields, a purely image-based appearance representation. Our system is the first to acquire and render surface reflectance fields under varying illumination from arbitrary viewpoints. We have built three generations of digitizers with increasing sophistication. In this paper, we present our results from digitizing hundreds of models.
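The multi-background matting step can be sketched with the classic two-background (triangulation) identity: photographing the object against two known backgrounds B1, B2 gives composites C_i = F + (1 - alpha) * B_i, so alpha = 1 - (C1 - C2) / (B1 - B2). The function name and the epsilon guard below are illustrative:

```python
import numpy as np

def alpha_from_two_backgrounds(c1, c2, b1, b2, eps=1e-6):
    """Recover per-pixel alpha from shots of the object composited over
    two known backgrounds b1, b2 (observed images c1, c2). The eps guard
    avoids division where the backgrounds coincide."""
    den = b1 - b2
    den = np.where(np.abs(den) < eps, eps, den)
    return np.clip(1.0 - (c1 - c2) / den, 0.0, 1.0)

# Synthetic check: foreground F = 0.2 composited with alpha = 0.3
# over backgrounds 1.0 and 0.0.
c1 = 0.2 + (1.0 - 0.3) * 1.0
c2 = 0.2 + (1.0 - 0.3) * 0.0
alpha = alpha_from_two_backgrounds(c1, c2, 1.0, 0.0)
```

Repeating this from many viewpoints on the turntable yields the per-view alpha mattes from which the view-dependent opacity hull is built.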