2,284 research outputs found

    Interactive Vegetation Rendering with Slicing and Blending

    Get PDF
    Detailed and interactive 3D rendering of vegetation is one of the challenges of traditional polygon-oriented computer graphics, due to the large geometric complexity of even simple plants. In this paper we introduce a simplified image-based rendering approach based solely on alpha-blended textured polygons. The simplification exploits the limitations of human perception of complex geometry. Our approach renders dozens of detailed trees in real time on off-the-shelf hardware, while providing significantly improved image quality over existing real-time techniques. The method uses ordinary mesh-based rendering for the solid parts of a tree, its trunk and limbs. The sparse parts of a tree, its twigs and leaves, are instead represented with a set of slices, an image-based representation. A slice is a planar layer, represented with an ordinary alpha or color-keyed texture; a set of parallel slices is a slicing. Rendering from an arbitrary viewpoint in a 360-degree circle around the center of a tree is achieved by blending between the two nearest slicings. In our implementation, only 6 slicings with 5 slices each are sufficient to visualize a tree for a moving or stationary observer with perceptual quality similar to that of the original model.
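
    A minimal sketch of the view-dependent blending described above, assuming the paper's setup of 6 evenly spaced slicings (a set of parallel slices covers opposite views, so the angle folds over 180 degrees); the function name and layout are illustrative, not from the paper:

        import math

        NUM_SLICINGS = 6
        SECTOR = math.pi / NUM_SLICINGS  # angular spacing between slicing orientations

        def blend_weights(view_azimuth: float):
            """Return (index_a, weight_a, index_b, weight_b) for the two slicings
            nearest to the camera azimuth (radians). The weights sum to 1, so the
            two slicings can be alpha-blended to hide the transition."""
            # A set of parallel slices looks the same rotated by 180 degrees,
            # so fold the azimuth into [0, pi).
            a = view_azimuth % math.pi
            i = int(a // SECTOR)             # nearest slicing "below" the view angle
            t = (a - i * SECTOR) / SECTOR    # fractional position toward the next one
            j = (i + 1) % NUM_SLICINGS
            return i, 1.0 - t, j, t

        # Example: a camera 40 degrees around the tree blends slicings 1 and 2.
        print(blend_weights(math.radians(40.0)))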

    Inviwo -- A Visualization System with Usage Abstraction Levels

    Full text link
    The complexity of today's visualization applications demands specific visualization systems tailored for the development of these applications. Frequently, such systems utilize levels of abstraction to improve the application development process, for instance by providing a data flow network editor. Unfortunately, these abstractions result in several issues, which need to be circumvented through an abstraction-centered system design. Often, a high level of abstraction hides low-level details, making it difficult to directly access the underlying computing platform, which is important for achieving optimal performance. Therefore, we propose a layer structure developed for modern and sustainable visualization systems that allows developers to interact with all contained abstraction levels. We refer to these interaction capabilities as usage abstraction levels, since we target application developers with various levels of experience. We formulate the requirements for such a system, derive the desired architecture, and present how the concepts have been realized, as an exemplar, within the Inviwo visualization system. Furthermore, we address several specific challenges that arise during the realization of such a layered architecture, such as communication between different computing platforms, performance-centered encapsulation, and layer-independent development supported by cross-layer documentation and debugging capabilities.
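
    A toy sketch of the layered idea: each layer wraps the one below it but deliberately keeps it reachable, so a developer can work at the network-editor level or drop down to the computing platform when performance matters. All class and method names here are hypothetical, not Inviwo's actual API:

        class GPUBuffer:                       # lowest layer: the computing platform
            def __init__(self, size): self.size = size

        class Processor:                       # middle layer: a node in the data flow network
            def __init__(self, name):
                self.name = name
                self.buffer = GPUBuffer(1024)  # low-level detail kept reachable
            def process(self):
                print(f"{self.name}: processing {self.buffer.size} bytes")

        class Network:                         # highest layer: what the network editor edits
            def __init__(self): self.processors = []
            def add(self, p): self.processors.append(p); return p
            def evaluate(self):
                for p in self.processors: p.process()

        net = Network()
        src = net.add(Processor("VolumeSource"))
        net.evaluate()           # high-level usage
        src.buffer.size = 4096   # ...but low-level access is not hidden away
        net.evaluate()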

    A directional occlusion shading model for interactive direct volume rendering

    Get PDF
    Volumetric rendering is widely used to examine 3D scalar fields from CT/MRI scanners and numerical simulation datasets. One key aspect of volumetric rendering is the ability to provide perceptual cues that aid in understanding the structure contained in the data. While shading models that reproduce natural lighting conditions have been shown to better convey depth information and spatial relationships, they traditionally require considerable (pre)computation. In this paper, a shading model for interactive direct volume rendering is proposed that provides perceptual cues similar to those of ambient occlusion, for both solid and transparent surface-like features. An image-space occlusion factor is derived from the radiative transport equation based on a specialized phase function. The method does not rely on any precomputation and thus allows for interactive exploration of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions, while modifications to the volume via clipping planes are incorporated into the resulting occlusion-based shading.
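
    A rough numpy sketch of one way such an image-space occlusion factor can be accumulated, sweeping slices front to back and diffusing an occlusion buffer at each step; the box blur stands in for the paper's phase-function-derived filter, and all shapes and constants are assumptions:

        import numpy as np

        def shade_volume(alpha_slices):
            """alpha_slices: (num_slices, H, W) opacities, front slice first.
            Returns per-slice shading factors in [0, 1] (1 = fully lit)."""
            occlusion = np.zeros_like(alpha_slices[0])
            shading = np.empty_like(alpha_slices)
            for s, a in enumerate(alpha_slices):
                shading[s] = 1.0 - occlusion           # light surviving to this slice
                occlusion = np.clip(occlusion + a, 0.0, 1.0)
                # A cheap box blur spreads occlusion over a cone of directions,
                # standing in for the specialized phase function's filter.
                for axis in (0, 1):
                    occlusion = (np.roll(occlusion, 1, axis) + occlusion
                                 + np.roll(occlusion, -1, axis)) / 3.0
            return shading

        slices = np.random.default_rng(0).uniform(0.0, 0.2, size=(32, 64, 64))
        print(shade_volume(slices).mean())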

    Foundry: Hierarchical Material Design for Multi-Material Fabrication

    Get PDF
    We demonstrate a new approach for designing functional material definitions for multi-material fabrication using our system called Foundry. Foundry provides an interactive and visual process for hierarchically designing spatially-varying material properties (e.g., appearance, mechanical, optical). The resulting meta-materials exhibit structure at the micro and macro level and can surpass the qualities of traditional composites. The material definitions are created by composing a set of operators into an operator graph. Each operator performs a volume decomposition operation, remaps space, or constructs and assigns a material composition. The operators are implemented using a domain-specific language for multi-material fabrication; users can easily extend the library by writing their own operators. Foundry can be used to build operator graphs that describe complex, parameterized, resolution-independent, and reusable material definitions. We also describe how to stage the evaluation of the final material definition, which, in conjunction with progressive refinement, allows for interactive material evaluation even for complex designs. We show sophisticated and functional parts designed with our system.
    Funding: National Science Foundation (U.S.) (1138967, 1409310, 1547088); National Science Foundation (U.S.) Graduate Research Fellowship Program; Massachusetts Institute of Technology Undergraduate Research Opportunities Program.
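
    A small Python sketch of the operator-graph idea under stated assumptions: one volume-decomposition operator, one space-remapping operator, and one material-assignment operator composed into a per-point evaluation. The operator names and the mixture representation are illustrative, not Foundry's actual DSL:

        import math

        def checker(p, cell=1.0):
            """Volume-decomposition operator: splits space into two labeled regions."""
            return (int(p[0] // cell) + int(p[1] // cell) + int(p[2] // cell)) % 2

        def twist(p, turns=0.25):
            """Space-remapping operator: twists space around the z axis."""
            a = turns * 2.0 * math.pi * p[2]
            c, s = math.cos(a), math.sin(a)
            return (c * p[0] - s * p[1], s * p[0] + c * p[1], p[2])

        def assign(label):
            """Material-assignment operator: maps a region label to a composition."""
            return {0: {"rigid": 1.0}, 1: {"rubber": 0.7, "rigid": 0.3}}[label]

        def material_at(p):
            """The operator graph: remap -> decompose -> assign, evaluated per point,
            which keeps the definition parameterized and resolution-independent."""
            return assign(checker(twist(p)))

        print(material_at((0.2, 0.8, 0.5)))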

    Semi-transparent textures based on opaque and transparent texels augmented with a thickness

    Full text link
    Real-time rendering is built upon compromises between performance and realism. One such compromise is to represent thin materials like textiles as infinitely thin in order to save memory and rendering time. However, this loss of dimension robs the surface of properties key to some visual effects. In this thesis, we present a method to simulate the effects of thickness on semi-transparent surfaces using textures consisting of opaque and transparent texels. We analyze the holes formed by transparent texels and store information about the contours of the holes in a hierarchical structure compatible with MIP-map texture filtering. We derive equations representing the proportion of light passing through a hole with interior walls as a function of the incident angle of the light rays. The proportions of texel top, texel side wall, and hole are computed accurately. We combine these equations with the stored information to compute a transparency term at different levels of detail in real time.
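
    A simple slab-geometry sketch of the thickness effect: at grazing angles, the interior wall of a hole occludes part of the opening, so less light passes through. This single-direction, square-hole model is an illustrative stand-in for the thesis's derived equations:

        import math

        def hole_transmission(w, d, theta):
            """Fraction of a square hole of width w through a layer of thickness d
            that remains open at incidence angle theta (radians), assuming opaque
            interior walls and light arriving from a single direction."""
            open_width = max(0.0, w - d * math.tan(theta))
            return (open_width * w) / (w * w)   # wall shadows along one axis only

        # Transmission drops toward zero as the light grazes the surface.
        for deg in (0, 30, 60, 75):
            print(deg, round(hole_transmission(1.0, 0.5, math.radians(deg)), 3))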

    MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures

    Full text link
    Neural Radiance Fields (NeRFs) have demonstrated amazing ability to synthesize images of 3D scenes from novel views. However, they rely upon specialized volumetric rendering algorithms based on ray marching that are mismatched to the capabilities of widely deployed graphics hardware. This paper introduces a new NeRF representation based on textured polygons that can synthesize novel images efficiently with standard rendering pipelines. The NeRF is represented as a set of polygons with textures representing binary opacities and feature vectors. Traditional rendering of the polygons with a z-buffer yields an image with features at every pixel, which are interpreted by a small, view-dependent MLP running in a fragment shader to produce a final pixel color. This approach enables NeRFs to be rendered with the traditional polygon rasterization pipeline, which provides massive pixel-level parallelism, achieving interactive frame rates on a wide range of compute platforms, including mobile phones.
    Comment: CVPR 2023. Project page: https://mobile-nerf.github.io, code: https://github.com/google-research/jax3d/tree/main/jax3d/projects/mobilener
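
    A numpy sketch of the deferred step: rasterization leaves an (H, W, F) feature image, and a small view-dependent MLP maps each pixel's features plus the view direction to a color, as the fragment-shader MLP does. The layer sizes and random weights below are placeholders, not the trained network:

        import numpy as np

        rng = np.random.default_rng(0)
        H, W, F = 4, 4, 8
        features = rng.normal(size=(H, W, F))        # from the z-buffered polygon pass
        view_dir = np.array([0.0, 0.0, 1.0])

        W1 = rng.normal(size=(F + 3, 16)); b1 = np.zeros(16)
        W2 = rng.normal(size=(16, 3));     b2 = np.zeros(3)

        def shade(features, view_dir):
            """Run the tiny view-dependent MLP on every pixel at once."""
            d = np.broadcast_to(view_dir, features.shape[:2] + (3,))
            x = np.concatenate([features, d], axis=-1)
            h = np.maximum(x @ W1 + b1, 0.0)             # ReLU hidden layer
            return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid -> RGB in [0, 1]

        print(shade(features, view_dir).shape)           # (4, 4, 3) final pixel colors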

    Real-time rendering of physically-based cloud simulations

    Get PDF
    Computers today employ simulations of physical phenomena such as wind and fire in many common applications, including programs meant for training and entertainment. We focus particularly on the realistic simulation of cloud formation and evolution on current commercially available computers. One of the challenges associated with this simulation is its display onto a computer screen, often referred to as rendering. We present a brief overview of existing cloud rendering techniques and compare their effectiveness to rendering a simulation as it occurs. We then describe our rendering method, which relies upon three-dimensional textures and modified Gaussian transfer functions for the self-shadowing properties associated with clouds. We analyze the results, focusing on frame rates and visual appearance, and conclude by suggesting further work on this topic.
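
    A compact ray-marching sketch of the described approach, assuming a density field sampled from a 3D texture and a Gaussian transfer function mapping density to opacity; the self-shadowing pass is reduced to a constant light term and all parameters are illustrative:

        import numpy as np

        def gaussian_tf(density, center=0.5, width=0.15):
            """Modified Gaussian transfer function: peak opacity near `center`."""
            return np.exp(-((density - center) ** 2) / (2.0 * width ** 2))

        def march(density_volume, origin, direction, steps=64, step_len=0.02):
            """Accumulate color along a ray through the volume, front to back."""
            color, transmittance = 0.0, 1.0
            p = np.asarray(origin, dtype=float)
            d = np.asarray(direction, dtype=float)
            n = np.array(density_volume.shape)
            for _ in range(steps):
                idx = (p * n).astype(int)
                if np.any(idx < 0) or np.any(idx >= n):
                    break                                  # ray left the volume
                alpha = gaussian_tf(density_volume[tuple(idx)]) * step_len * 10.0
                color += transmittance * alpha * 1.0       # white light, no shadow term
                transmittance *= 1.0 - alpha
                p = p + d * step_len
            return color

        vol = np.random.default_rng(1).uniform(size=(16, 16, 16))
        print(march(vol, origin=(0.5, 0.5, 0.0), direction=(0.0, 0.0, 1.0)))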