Efficient From-Point Visibility for Global Illumination in Virtual Scenes with Participating Media
Visibility determination is one of the fundamental building blocks of photorealistic image synthesis. However, since visibility is extremely expensive to compute, nearly all of the rendering time is spent on it. In this work we present new methods for storing, computing, and approximating visibility in scenes with scattering media that accelerate the computation considerably while still delivering high-quality, artifact-free results.
Efficient Many-Light Rendering of Scenes with Participating Media
We present several approaches based on virtual lights that aim to capture the light transport without compromising quality, while preserving the elegance and efficiency of many-light rendering. By reformulating the integration scheme, we obtain two numerically efficient techniques: one tailored specifically for interactive, high-quality lighting on surfaces, and one for handling scenes with participating media.
Real-Time Volumetric Shadows using 1D Min-Max Mipmaps
Light scattering in a participating medium is responsible for several important effects we see in the natural world. In the presence of occluders, computing single scattering requires integrating the illumination scattered towards the eye along the camera ray, modulated by the visibility towards the light at each point. Unfortunately, incorporating volumetric shadows into this integral, while maintaining real-time performance, remains challenging.
In this paper we present a new real-time algorithm for computing volumetric shadows in single-scattering media on the GPU. This computation requires evaluating the scattering integral over the intersections of camera rays with the shadow map, expressed as a 2D height field. We observe that by applying epipolar rectification to the shadow map, each camera ray only travels through a single row of the shadow map (an epipolar slice), which allows us to find the visible segments by considering only 1D height fields. At the core of our algorithm is an acceleration structure (a 1D min-max mipmap) that allows us to quickly find the lit segments for all pixels in an epipolar slice in parallel. The simplicity of this data structure and its traversal allows for efficient implementation using only pixel shaders on the GPU.
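The 1D min-max mipmap idea described above can be sketched as follows. This is a CPU illustration with hypothetical names, assuming a power-of-two slice length and, for simplicity, a constant ray height per query (the paper's rays have varying height after rectification); it is a sketch of the technique, not the authors' GPU implementation:

```python
def build_minmax_mipmap(heights):
    """Build coarser levels over a 1D height field (one epipolar slice),
    each node storing the (min, max) of its two children.
    Assumes len(heights) is a power of two."""
    levels = [[(h, h) for h in heights]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([
            (min(a[0], b[0]), max(a[1], b[1]))
            for a, b in zip(prev[0::2], prev[1::2])
        ])
    return levels

def classify(levels, lo, hi, ray_height, level=None, idx=0):
    """Classify shadow-map texels in [lo, hi) against a constant ray height.
    A node whose max occluder height is below the ray is entirely lit; one
    whose min is above the ray is entirely shadowed; only inconclusive
    nodes are descended. Returns (start, end, label) segments, unmerged."""
    if level is None:
        level = len(levels) - 1
    node_size = 1 << level
    start, end = idx * node_size, (idx + 1) * node_size
    if end <= lo or start >= hi:
        return []  # node lies outside the query range
    mn, mx = levels[level][idx]
    if ray_height >= mx:
        return [(max(start, lo), min(end, hi), 'lit')]
    if ray_height < mn:
        return [(max(start, lo), min(end, hi), 'shadowed')]
    # bounds straddle the ray height: recurse into both children
    return (classify(levels, lo, hi, ray_height, level - 1, 2 * idx) +
            classify(levels, lo, hi, ray_height, level - 1, 2 * idx + 1))
```

The payoff is that long runs of texels that are conclusively lit or shadowed are resolved at a coarse mipmap level without touching individual texels, which is what makes the per-slice traversal cheap on the GPU.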
Isogeometric FEM-BEM coupled structural-acoustic analysis of shells using subdivision surfaces
We introduce a coupled finite and boundary element formulation for acoustic scattering analysis over thin shell structures. A triangular Loop subdivision surface discretisation is used for both geometry and analysis fields. The Kirchhoff-Love shell equation is discretised with the finite element method and the Helmholtz equation for the acoustic field with the boundary element method. The use of the boundary element formulation allows the elegant handling of infinite domains and precludes the need for volumetric meshing. In the present work the subdivision control meshes for the shell displacements and the acoustic pressures have the same resolution. The corresponding smooth subdivision basis functions have the continuity property required for the Kirchhoff-Love formulation and are highly efficient for the acoustic field computations. We validate the proposed isogeometric formulation through a closed-form solution of acoustic scattering over a thin shell sphere. Furthermore, we demonstrate the ability of the proposed approach to handle complex geometries with arbitrary topology, providing an integrated isogeometric design and analysis workflow for coupled structural-acoustic analysis of shells.
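For context, the acoustic field equation referenced above has the following standard form (this is the textbook statement, not notation taken from the paper):

```latex
% Helmholtz equation for the acoustic pressure p with wavenumber k,
% posed on the unbounded exterior domain:
\nabla^2 p + k^2 p = 0
% together with the Sommerfeld radiation condition at infinity,
% which the boundary element formulation satisfies by construction:
\lim_{r \to \infty} r \left( \frac{\partial p}{\partial r} - \mathrm{i} k p \right) = 0
```

Because the boundary integral formulation builds the radiation condition into its Green's function, only the shell surface needs to be discretised, which is why no volumetric mesh of the infinite exterior is required.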
Fast Rendering of Forest Ecosystems with Dynamic Global Illumination
Real-time rendering of large-scale forest ecosystems remains a challenging problem, in that important global illumination effects, such as leaf transparency and inter-object light scattering, are difficult to capture, given tight timing constraints and scenes that typically contain hundreds of millions of primitives. We propose a new lighting model, adapted from a model previously used to light convective clouds and other participating media, together with GPU ray tracing, in order to achieve these global illumination effects while maintaining near real-time performance. The lighting model is based on a lattice-Boltzmann method in which reflectance, transmittance, and absorption parameters are taken from measurements of real plants. The lighting model is solved as a preprocessing step, requires only seconds on a single GPU, and allows dynamic lighting changes at run-time. The ray tracing engine, which runs on one or multiple GPUs, combines multiple acceleration structures to achieve near real-time performance for large, complex scenes. Both the preprocessing step and the ray tracing engine make extensive use of NVIDIA's Compute Unified Device Architecture (CUDA).
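The general idea of solving lighting on a lattice as a preprocessing step can be illustrated with a much simpler relaxation scheme than the paper's lattice-Boltzmann solver: radiance enters at a boundary, and each lattice cell receives light from its neighbours, attenuated by a local extinction coefficient, with a fraction of the extinguished light scattered back into the cell. All names and coefficients below are illustrative assumptions, not the authors' model:

```python
import math

def solve_lighting(sigma_t, albedo, light_top, iters=100):
    """Jacobi-style relaxation of radiance on a 2D lattice. Row 0 is a
    fixed light source; every other cell receives the mean radiance of
    its neighbours, of which exp(-sigma_t) is transmitted directly and
    a fraction albedo of the extinguished remainder is scattered back.
    This is a toy stand-in for a lattice-based lighting solve."""
    rows, cols = len(sigma_t), len(sigma_t[0])
    L = [[0.0] * cols for _ in range(rows)]
    L[0] = [light_top] * cols  # fixed source along the top boundary
    for _ in range(iters):
        nxt = [row[:] for row in L]  # Jacobi: read old, write new
        for r in range(1, rows):
            for c in range(cols):
                nbrs = [L[r - 1][c]]
                if r + 1 < rows:
                    nbrs.append(L[r + 1][c])
                if c > 0:
                    nbrs.append(L[r][c - 1])
                if c + 1 < cols:
                    nbrs.append(L[r][c + 1])
                trans = math.exp(-sigma_t[r][c])
                incoming = sum(nbrs) / len(nbrs)
                nxt[r][c] = incoming * (trans + (1.0 - trans) * albedo)
        L = nxt
    return L
```

Like the paper's preprocessing step, the solve is decoupled from rendering: the converged radiance field is computed once and then sampled at run-time while the camera moves.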