11 research outputs found

    The dimension of C^1 splines of arbitrary degree on a tetrahedral partition

    No full text
    We consider the linear space of piecewise polynomials in three variables which are globally smooth, i.e., trivariate C^1 splines. The splines are defined on a uniform tetrahedral partition \Delta, which is a natural generalization of the four-directional mesh. By using Bernstein-Bézier techniques, we establish formulae for the dimension of the C^1 splines of arbitrary degree.
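
    For orientation only, the sketch below recalls the textbook Bernstein-Bézier representation that such dimension arguments build on; it is standard background, not the paper's specific dimension formulae.

```latex
% Textbook Bernstein-Bezier form of a degree-d polynomial on one tetrahedron T of
% \Delta, in barycentric coordinates b = (b_1, b_2, b_3, b_4); the paper's formulae
% count the coefficients c_lambda that remain free after imposing C^1 smoothness.
\[
  p\big|_T(b) = \sum_{|\lambda| = d} c_\lambda \, B^d_\lambda(b),
  \qquad
  B^d_\lambda(b) = \frac{d!}{\lambda_1!\,\lambda_2!\,\lambda_3!\,\lambda_4!}\,
  b_1^{\lambda_1} b_2^{\lambda_2} b_3^{\lambda_3} b_4^{\lambda_4},
  \qquad \lambda \in \mathbb{Z}_{\ge 0}^4 .
\]
\[
  \text{Each tetrahedron carries } \binom{d+3}{3} \text{ coefficients; } C^1
  \text{ continuity across interior faces imposes linear conditions among them.}
\]
```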

    Experiments with iterative improvement algorithms on completely unimodal hypercubes

    No full text

    A custom designed density estimation method for light transport

    No full text
    We present a new Monte Carlo method for solving the global illumination problem in environments with general geometry descriptions and light emission and scattering properties. Current Monte Carlo global illumination algorithms are based on generic density estimation techniques that do not take into account any knowledge about the nature of the data points (light and potential particle hit points) from which a global illumination solution is to be reconstructed. We propose a novel estimator, especially designed for solving linear integral equations such as the rendering equation. The resulting single-pass global illumination algorithm promises to combine the flexibility and robustness of bidirectional path tracing with the efficiency of algorithms such as photon mapping.
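
    To make the contrast concrete, the sketch below shows a generic reconstruction step of the kind the abstract argues against: a fixed-radius kernel density estimate over stored particle hit points, as used in photon-mapping-style algorithms. The array names, kernel choice, and query interface are illustrative assumptions, not the paper's custom estimator.

```python
# Generic fixed-radius kernel density estimate over stored light-particle hits,
# i.e. the kind of "generic" reconstruction step the abstract contrasts with its
# custom estimator. Array names and the Epanechnikov kernel are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def density_estimate(query, hit_points, hit_flux, radius, tree=None):
    """Surface density at `query` (3,) from hit points (N, 3) carrying flux (N,)."""
    if tree is None:
        tree = cKDTree(hit_points)
    idx = tree.query_ball_point(query, r=radius)
    if not idx:
        return 0.0
    d2 = np.sum((hit_points[idx] - query) ** 2, axis=1)
    # Epanechnikov kernel normalized over a disc of the given radius
    # (hits are assumed to lie on a surface around the query point).
    weights = 2.0 / (np.pi * radius ** 2) * (1.0 - d2 / radius ** 2)
    return float(np.sum(hit_flux[idx] * weights))

# Toy usage: uniformly scattered hits, each carrying the same flux.
rng = np.random.default_rng(0)
hits = rng.uniform(-1.0, 1.0, size=(10000, 3))
flux = np.full(10000, 1e-3)
print(density_estimate(np.zeros(3), hits, flux, radius=0.1))
```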

    Sixth Biennial Report: August 2001 - May 2003

    No full text

    Towards Predictive Rendering in Virtual Reality

    Get PDF
    Generating predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding goal in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, generation of predictive imagery is still an unsolved problem for a number of reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling through efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The techniques proposed in this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying BTFs to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming that problems remain to be solved before truly predictive image generation is achieved.
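
    To give a sense of the data volumes involved, the sketch below shows one widely used way of compressing a BTF: a truncated factorization of the (texel) x (view, light, channel) data matrix into a few shared basis functions plus per-texel weights. The resolutions, component count, and random stand-in data are illustrative assumptions and need not match the thesis's actual compression scheme.

```python
# Truncated-SVD compression of a BTF: factor the (texel) x (view, light, channel)
# data matrix into a few shared "eigen-ABRDFs" plus per-texel weights. One common
# way of taming BTF data volumes; resolutions, component count and the random
# stand-in data are illustrative assumptions, not the thesis's exact scheme.
import numpy as np

T, V, L, C = 32 * 32, 16, 16, 3                        # texels, view dirs, light dirs, RGB
btf = np.random.rand(T, V * L * C).astype(np.float32)  # stand-in for measured data

k = 16                                                 # retained components
U, s, Vt = np.linalg.svd(btf, full_matrices=False)
texel_weights = U[:, :k] * s[:k]                       # (T, k) per-texel coefficients
eigen_abrdfs = Vt[:k]                                  # (k, V*L*C) shared basis

def reconstruct(texel, view, light):
    """Reconstruct the RGB reflectance of one texel for one (view, light) pair."""
    cols = (view * L + light) * C + np.arange(C)       # RGB columns of this pair
    return texel_weights[texel] @ eigen_abrdfs[:, cols]

ratio = btf.size / (texel_weights.size + eigen_abrdfs.size)
print("compression ratio: %.1fx" % ratio)
print(reconstruct(0, 3, 7))
```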

    Seventh Biennial Report: June 2003 - March 2005

    No full text

    Hardware-supported cloth rendering

    Get PDF
    Many computer graphics applications involve rendering humans and their natural surroundings, which inevitably requires displaying textiles. To accurately reproduce the appearance of, e.g., clothing or furniture, reflection models are needed that are capable of modeling the highly complex reflection effects exhibited by textiles. This thesis focuses on generating realistic, high-quality images of textiles by developing suitable reflection models and introducing algorithms for illumination computation on cloth surfaces. As efficiency is essential for illumination computation, we additionally place great importance on exploiting graphics hardware to achieve high frame rates. To this end, we present a variety of hardware-accelerated methods to compute the illumination in textile micro-geometry. We begin by showing how indirect illumination and shadows can be efficiently accounted for in heightfields, parametric surfaces, and triangle meshes. Using these methods, we can considerably speed up the computation of data structures like tabulated bidirectional reflectance distribution functions (BRDFs) and bidirectional texture functions (BTFs), and also efficiently illuminate heightfield geometry and bump maps. Furthermore, we develop two shading models which account for all important reflection properties exhibited by textiles. While the first model is suited for rendering textiles with general micro-geometry, the second, based on volumetric textures, is specially tailored to rendering knitwear. To apply the second model, e.g., to the triangle mesh of a garment, we finally introduce a new rendering algorithm for displaying semi-transparent volumetric textures at high interactive rates.
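
    As a rough illustration of the kind of micro-geometry visibility query involved, the sketch below ray-marches a heightfield to decide whether a texel lies in shadow for a given light direction. It is a plain CPU placeholder, not the hardware-accelerated methods of the thesis, and all names in it are assumptions.

```python
# Simple shadow test for heightfield micro-geometry: march from a texel toward
# the light and check whether the heightfield occludes the ray. A CPU sketch of
# the kind of visibility query the thesis accelerates on graphics hardware.
import numpy as np

def in_shadow(height, x, y, light_dir, step=0.5, max_steps=256):
    """height: 2D array; (x, y): texel indices; light_dir: unit vector toward the light."""
    h, w = height.shape
    px, py, pz = float(x), float(y), float(height[y, x])
    for _ in range(max_steps):
        px += light_dir[0] * step
        py += light_dir[1] * step
        pz += light_dir[2] * step
        ix, iy = int(round(px)), int(round(py))
        if not (0 <= ix < w and 0 <= iy < h):
            return False                   # ray left the heightfield: unoccluded
        if height[iy, ix] > pz:
            return True                    # terrain rises above the ray: occluded
    return False

# Example: a single bump casts a shadow toward -x for a low light from +x.
hf = np.zeros((64, 64))
hf[30:34, 30:34] = 10.0
light = np.array([0.8, 0.0, 0.3])
light /= np.linalg.norm(light)
print(in_shadow(hf, 20, 32, light))        # texel west of the bump -> shadowed
```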

    Efficient light transport using precomputed visibility

    No full text
    Visibility computations are the most time-consuming part of global illumination algorithms. The cost is amplified by the fact that quite often identical or similar information is recomputed multiple times. In particular, this is the case when multiple images of the same scene are to be generated under varying lighting conditions and/or viewpoints. But even for a single image with static illumination, the computations could be accelerated by reusing visibility information for many different light paths. In this report we describe a general method of precomputing, storing, and reusing visibility information for light transport in a number of different types of scenes. In particular, we consider general parametric surfaces, triangle meshes without a global parameterization, and participating media. We also reorder the light transport in such a way that the visibility information is accessed in structured memory access patterns. This yields a method that is well suited for SIMD-style parallelization of the light transport and can be implemented efficiently both in software and using graphics hardware. We finally demonstrate applications of the method to highly efficient precomputation of BRDFs, bidirectional texture functions, and light fields, as well as near-interactive volume lighting.
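
    The reuse idea can be sketched independently of the scene types considered in the report: precompute a visibility matrix between fixed surface samples and light samples once, then reuse it for any number of lighting conditions. The brute-force visibility test and the purely diffuse transfer below are placeholder assumptions, not the report's method.

```python
# Precompute a visibility matrix between fixed surface samples and light samples
# once, then reuse it to relight the scene under many emission patterns. The
# brute-force visibility test and the purely diffuse transfer are placeholder
# assumptions standing in for the report's scene-specific machinery.
import numpy as np

def precompute_visibility(surface_pts, light_pts, occluded):
    """occluded(p, q) -> True if the segment p-q is blocked; returns an (S, L) 0/1 matrix."""
    V = np.ones((len(surface_pts), len(light_pts)))
    for i, p in enumerate(surface_pts):
        for j, q in enumerate(light_pts):
            if occluded(p, q):
                V[i, j] = 0.0
    return V

def relight(V, light_intensities):
    """Reuse the same visibility for a new lighting condition (diffuse transfer only)."""
    return V @ light_intensities / V.shape[1]

# Toy scene: random samples, a blocker shadowing everything left of x = 0.
surf = np.random.rand(100, 3) - 0.5
lights = np.random.rand(8, 3) + np.array([0.0, 0.0, 2.0])
occluded = lambda p, q: p[0] < 0.0                  # placeholder blocker
V = precompute_visibility(surf, lights, occluded)   # expensive, done once
print(relight(V, np.full(8, 10.0))[:5])             # cheap per lighting setup
print(relight(V, np.linspace(0.0, 20.0, 8))[:5])    # another lighting, same V reused
```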

    Efficient light transport using precomputed visibility

    Get PDF
    SIGLE. Available from TIB Hannover: RR 1912(2001-4-003) / FIZ - Fachinformationszentrum Karlsruhe / TIB - Technische Informationsbibliothek. DE, German.

    Efficient Light Transport Using Precomputed Visibility

    No full text
    Visibility computations are the most time-consuming part of global illumination algorithms. The cost is amplified by the fact that quite often identical or similar information is recomputed multiple times. In particular, this is the case when multiple images of the same scene are to be generated under varying lighting conditions and/or viewpoints. But even for a single image with static illumination, the computations could be accelerated by reusing visibility information for many different light paths. In this paper we describe a general method of precomputing, storing, and reusing visibility information for light transport in a number of different types of scenes. In particular, we consider general parametric surfaces, triangle meshes without a global parameterization, and participating media. We also reorder the light transport in such a way that the visibility information is accessed in structured memory access patterns. This yields a method that is well suited for SIMD-style parallelization of the light transport and can be implemented efficiently both in software and using graphics hardware. We finally demonstrate applications of the method to highly efficient precomputation of BRDFs, bidirectional texture functions, and light fields, as well as near-interactive volume lighting.