    Live User-guided Intrinsic Video For Static Scenes

    We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes, and editing of material appearance.
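
    The key mechanism the abstract describes is re-projecting constraints fused at 3D positions into a novel camera view. Below is a minimal NumPy sketch of that step only; the function name, argument layout, and pinhole-projection assumptions are illustrative and not taken from the paper, whose pipeline runs on a dense volumetric reconstruction.

```python
import numpy as np

def reproject_constraints(points_world, K, T_world_to_cam, image_shape):
    """Project 3D constraint points (e.g. constant-reflectance strokes fused
    into the reconstruction) into a novel camera view.

    points_world   : (N, 3) constraint positions in world space
    K              : (3, 3) camera intrinsics
    T_world_to_cam : (4, 4) rigid transform of the novel view
    image_shape    : (height, width) of the target image
    Returns integer pixel coordinates of constraints that land inside the image.
    """
    n = points_world.shape[0]
    homog = np.hstack([points_world, np.ones((n, 1))])   # homogeneous coordinates
    cam = (T_world_to_cam @ homog.T).T[:, :3]            # points in camera space
    in_front = cam[:, 2] > 1e-6                          # keep points with positive depth
    proj = (K @ cam[in_front].T).T
    pix = proj[:, :2] / proj[:, 2:3]                     # perspective divide
    h, w = image_shape
    valid = (pix[:, 0] >= 0) & (pix[:, 0] < w) & (pix[:, 1] >= 0) & (pix[:, 1] < h)
    return pix[valid].astype(int)
```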

    Noise-based volume rendering for the visualization of multivariate volumetric data


    3D Line textures and the visualization of confidence in Architecture

    This work introduces a technique for interactive walkthroughs of non-photorealistically rendered (NPR) scenes using 3D line primitives to define architectural features of the model, as well as indicate textural qualities. Line primitives are not typically used in this manner in favor of texture mapping techniques, which can encapsulate a great deal of information in a single texture map and take advantage of GPU optimizations for accelerated rendering. However, texture mapped images may not maintain the visual quality or aesthetic appeal that is possible when using 3D lines to simulate NPR scenes such as hand-drawn illustrations or architectural renderings. In addition, line textures can be modified interactively, for instance changing the sketchy quality of the lines, and can be exported as vectors to allow the automatic generation of illustrations and further modification in vector-based graphics programs. The technique introduced here extracts feature edges from a model, and using these edges, generates a reduced set of line textures which indicate material properties while maintaining interactive frame rates. A clipping algorithm is presented to enable 3D lines to reside only in the interior of the 3D model without exposing the underlying triangulated mesh. The resulting system produces interactive illustrations with high visual quality that are free from animation artifacts.
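
    The first step the abstract names is extracting feature edges from the mesh. A common way to do this, sketched below, is a dihedral-angle test between the two faces sharing each edge; the function name, the 40-degree default, and treating boundary edges as features are assumptions for illustration, not details from the paper.

```python
import numpy as np

def feature_edges(vertices, faces, angle_deg=40.0):
    """Return edges whose adjacent faces meet at a dihedral angle sharper than
    the threshold -- a typical first step for sketch-style line rendering.

    vertices : (V, 3) float array of positions
    faces    : (F, 3) int array of triangle vertex indices
    """
    # Per-face unit normals.
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    n = np.cross(v1 - v0, v2 - v0)
    n /= np.linalg.norm(n, axis=1, keepdims=True)

    # Map each undirected edge to the faces that share it.
    edge_faces = {}
    for f_idx, tri in enumerate(faces):
        for a, b in ((0, 1), (1, 2), (2, 0)):
            e = tuple(sorted((tri[a], tri[b])))
            edge_faces.setdefault(e, []).append(f_idx)

    cos_thresh = np.cos(np.radians(angle_deg))
    edges = []
    for e, fs in edge_faces.items():
        if len(fs) == 1:                                   # boundary edge: always a feature
            edges.append(e)
        elif len(fs) == 2 and np.dot(n[fs[0]], n[fs[1]]) < cos_thresh:
            edges.append(e)                                # sharp crease between the two faces
    return edges
```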

    Space-Time Transfinite Interpolation of Volumetric Material Properties

    The paper presents a novel technique, based on an extension of the general mathematical method of transfinite interpolation, to solve a practical problem in heterogeneous volume modelling. It deals with time-dependent changes to the volumetric material properties (material density, colour and others) as a transformation of the volumetric material distributions in space-time accompanying geometric shape transformations such as metamorphosis. The main idea is to represent the geometry of both objects by scalar fields with distance properties, to establish in a higher-dimensional space a time gap during which the geometric transformation takes place, and to use these scalar fields to apply the new space-time transfinite interpolation to volumetric material attributes within this time gap. The proposed solution is analytical in nature, does not require heavy numerical computations, and can be used in real-time applications. Applications of this technique also include texturing and displacement mapping of time-variant surfaces, and parametric design of volumetric microstructures.
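
    To make the idea of transfinite interpolation concrete, a rough Shepard-style (inverse-distance) sketch is given below for two attribute fields A_1, A_2 blended within the time gap; the weights w_i, the space-time distances d_i, and the exponent q are notation introduced here for illustration and are not the paper's exact formulation.

```latex
A(\mathbf{p}, t) =
  \frac{w_1(\mathbf{p}, t)\, A_1(\mathbf{p}) + w_2(\mathbf{p}, t)\, A_2(\mathbf{p})}
       {w_1(\mathbf{p}, t) + w_2(\mathbf{p}, t)},
\qquad
w_i(\mathbf{p}, t) = \frac{1}{d_i(\mathbf{p}, t)^{\,q}},
```

    where d_i is a distance in space-time from the point (p, t) to the region associated with object i (built from its scalar field and the corresponding end of the time gap), and q >= 1 controls how sharply the material blend follows each object.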

    Interactive volume rendering using multi-dimensional transfer functions and direct manipulation widgets

    Most direct volume renderings produced today employ one-dimensional transfer functions, which assign color and opacity to the volume based solely on the single scalar quantity which comprises the dataset. Though they have not received widespread attention, multi-dimensional transfer functions are a very effective way to extract specific material boundaries and convey subtle surface properties. However, identifying good transfer functions is difficult enough in one dimension, let alone two or three dimensions. This paper demonstrates an important class of three-dimensional transfer functions for scalar data (based on data value, gradient magnitude, and a second directional derivative), and describes a set of direct manipulation widgets which make specifying such transfer functions intuitive and convenient. We also describe how to use modern graphics hardware to interactively render with multi-dimensional transfer functions. The transfer functions, widgets, and hardware combine to form a powerful system for interactive volume exploration.
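
    The three axes named in the abstract (data value, gradient magnitude, second directional derivative) can be precomputed per voxel. The NumPy sketch below uses central differences and approximates the second directional derivative as the derivative of the gradient magnitude along the gradient direction; the function name and that approximation are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def transfer_function_axes(volume):
    """Compute the three quantities a 3D transfer function can be indexed by:
    data value, gradient magnitude, and an approximate second directional
    derivative along the gradient direction.

    volume : 3D float array of scalar samples
    """
    g0, g1, g2 = np.gradient(volume)                    # first derivatives per axis
    grad_mag = np.sqrt(g0**2 + g1**2 + g2**2)

    # Approximate the second directional derivative as the directional
    # derivative of |grad f| along the (normalized) gradient.
    h0, h1, h2 = np.gradient(grad_mag)
    eps = 1e-8
    second_deriv = (g0 * h0 + g1 * h1 + g2 * h2) / (grad_mag + eps)

    return volume, grad_mag, second_deriv
```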

    Fast Rendering of Forest Ecosystems with Dynamic Global Illumination

    Real-time rendering of large-scale forest ecosystems remains a challenging problem, in that important global illumination effects, such as leaf transparency and inter-object light scattering, are difficult to capture, given tight timing constraints and scenes that typically contain hundreds of millions of primitives. We propose a new lighting model, adapted from a model previously used to light convective clouds and other participating media, together with GPU ray tracing, in order to achieve these global illumination effects while maintaining near real-time performance. The lighting model is based on a lattice-Boltzmann method in which reflectance, transmittance, and absorption parameters are taken from measurements of real plants. The lighting model is solved as a preprocessing step, requires only seconds on a single GPU, and allows dynamic lighting changes at run-time. The ray tracing engine, which runs on one or multiple GPUs, combines multiple acceleration structures to achieve near real-time performance for large, complex scenes. Both the preprocessing step and the ray tracing engine make extensive use of NVIDIA's Compute Unified Device Architecture (CUDA).
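
    As a schematic illustration of the lattice-based light propagation the abstract refers to, the sketch below performs one collide-and-stream step on a simple 6-neighbour lattice with per-cell absorption and isotropic scattering; the function name, the lattice, and the scattering rule are assumptions for illustration, not the paper's GPU implementation with measured plant parameters.

```python
import numpy as np

# Directions of a simple 6-neighbour lattice (+/-x, +/-y, +/-z).
DIRS = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]])

def lbm_light_step(flux, absorb, scatter):
    """One schematic lattice-Boltzmann-style propagation step for light flux.

    flux    : (6, X, Y, Z) directional flux, one slice per lattice direction
    absorb  : (X, Y, Z) fraction of flux absorbed per cell per step
    scatter : (X, Y, Z) fraction of surviving flux scattered per cell per step
    Each step: attenuate by absorption, redistribute the scattered part
    isotropically over all directions, then stream one cell along each direction.
    """
    survived = flux * (1.0 - absorb)             # absorption
    scattered = survived * scatter               # portion redirected by scattering
    direct = survived - scattered                # portion that keeps its direction
    iso = scattered.sum(axis=0) / len(DIRS)      # isotropic redistribution (flux-conserving)
    post = direct + iso                          # per-direction flux after collision

    # Streaming: shift each directional component one cell along its direction.
    out = np.empty_like(post)
    for i, d in enumerate(DIRS):
        out[i] = np.roll(post[i], shift=tuple(d), axis=(0, 1, 2))
    return out
```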