8 research outputs found

    A Discrete Radiosity Method

    We present a completely new principle for computing radiosity values in a 3D scene. The method is based on a voxel approximation of the objects, and all occlusion calculations involve only integer arithmetic operations. The method is proven to converge, and some experimental results are presented.
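    The abstract does not detail the occlusion computation; the key property is only that voxel-to-voxel visibility can be decided with integer arithmetic. The sketch below is a generic illustration of that idea (a Bresenham-style integer traversal between two voxel centres), not the paper's algorithm; is_solid is a hypothetical occupancy query.

```python
# Illustrative sketch only, not the paper's algorithm: a voxel-to-voxel
# visibility test using nothing but integer arithmetic, via a Bresenham-style
# 3D traversal. `is_solid(x, y, z) -> bool` is a hypothetical occupancy query.

def visible(a, b, is_solid):
    """True if no solid voxel lies strictly between voxel centres a and b."""
    x, y, z = a
    dx, dy, dz = (abs(b[i] - a[i]) for i in range(3))
    sx, sy, sz = (1 if b[i] > a[i] else -1 for i in range(3))
    steps = max(dx, dy, dz)
    if steps == 0:
        return True
    ex = ey = ez = steps // 2           # integer error accumulators
    for _ in range(steps - 1):          # interior voxels only, endpoints skipped
        ex -= dx; ey -= dy; ez -= dz
        if ex < 0: x += sx; ex += steps
        if ey < 0: y += sy; ey += steps
        if ez < 0: z += sz; ez += steps
        if is_solid(x, y, z):
            return False
    return True
```

    With a dense boolean occupancy grid, is_solid could simply index the grid; every quantity above stays an integer throughout.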

    Cached Multi-Bounce Solution and Reconstruction for Voxel-Based Global Illumination

    We address the main shortcomings of the voxel-based multi-bounce global illumination method of Chatelier and Malgouyres (2006) by introducing an iterated cached method that allows the sampling coarseness to increase at each bounce for improved efficiency, and a ray-tracing-based reconstruction process for better final image quality. The result is a competitive, accurate multi-bounce global illumination method with octree voxel-based irradiance caching.
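    One way to read "iterated cached method with increasing sampling coarseness at each bounce" is that each additional bounce is computed from a progressively coarser version of the previous bounce's irradiance cache and then accumulated. The sketch below is a hypothetical rendering of that idea on dense 3D grids, not the paper's octree implementation; one_bounce stands in for the actual transport step.

```python
import numpy as np

# Hypothetical sketch of per-bounce coarsening on dense 3D irradiance grids;
# the paper itself uses an octree voxel cache and a ray-traced reconstruction.

def downsample(grid, f):
    """Average-pool a 3D grid by integer factor f (shape assumed divisible by f)."""
    x, y, z = (s // f for s in grid.shape)
    return grid.reshape(x, f, y, f, z, f).mean(axis=(1, 3, 5))

def upsample(grid, f):
    """Nearest-neighbour upsampling of a 3D grid by integer factor f."""
    return np.repeat(np.repeat(np.repeat(grid, f, 0), f, 1), f, 2)

def multi_bounce(direct, one_bounce, n_bounces=3, coarsen=2):
    """Accumulate n_bounces of indirect light, each computed on a coarser grid.

    direct     : direct-lighting irradiance cache (3D ndarray)
    one_bounce : callable(cache) -> next-bounce cache at the same resolution
    """
    total = direct.copy()
    current = direct
    factor = 1
    for _ in range(n_bounces):
        factor *= coarsen                               # coarser at each bounce
        coarse = one_bounce(downsample(current, factor))
        current = upsample(coarse, factor)              # back to full resolution
        total += current
    return total

# e.g. multi_bounce(np.ones((32, 32, 32)), lambda cache: 0.5 * cache)
```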

    A Low Complexity Discrete Radiosity Method

    Rather than using Monte Carlo sampling techniques or patch projections to compute radiosity, it is possible to discretize a scene into voxels and perform discrete geometry calculus to quickly compute visibility information. In such a framework, the radiosity method can be as precise as a patch-based radiosity using hemicube computation of form factors, but it lowers the overall theoretical complexity to O(N log N) + O(N), where the O(N) term is largely dominant in practice. Hence, the apparent complexity is linear in both time and space with respect to the number of voxels in the scene. The method does not require storing pre-computed form factors, since they are computed efficiently on the fly. The algorithm described does not use 3D discrete line traversal and is not similar to simple ray tracing. In its present form, the voxel-based radiosity equation assumes the ideal diffuse case and uses solid angles in a manner similar to the hemicube.
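    For reference, the ideal diffuse assumption mentioned here corresponds to the classical discrete radiosity equation, with B_i the unknown radiosity of element i (a patch or, here, a voxel), E_i its emission, \rho_i its diffuse reflectance and F_{ij} the form factor toward element j; the contribution of the method lies in computing the visibility part of F_{ij} on the fly with discrete geometry rather than storing it. The equation below is the textbook form, not quoted from the paper.

```latex
B_i \;=\; E_i \;+\; \rho_i \sum_{j} F_{ij}\, B_j
```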

    How forward‐scattering snow and terrain change the Alpine radiation balance with application to solar panels

    Rough terrain in mid- and high latitudes is often covered with highly reflective snow, giving rise to a very complex transfer of incident sunlight. To simplify the radiative transfer in weather and climate models, snow is generally treated as an isotropically reflecting material. We develop a new model of radiative transfer over mountainous terrain which, for the first time, considers the forward-scattering properties of snow. Combining ground-measured meteorological data and high-resolution digital elevation models, we show that the forward-scattering peak of snow leads to a strong local redistribution of incident terrain-reflected radiation. In particular, the effect of multiple terrain reflections is enhanced. While local effects are large, the area-wide albedo is only marginally decreased. In addition, we show that solar panels on snowy ground can clearly benefit from forward scattering, helping to maximize winter electricity production.
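    For context on why multiple terrain reflections matter over snow: in the usual isotropic treatment that this paper goes beyond, light bouncing repeatedly between a snow surface of albedo \alpha and surrounding terrain seen with view factor V_t forms a geometric series, so the terrain-reflected contribution to incident irradiance E_inc scales roughly as below. This is a textbook-style approximation, not a formula from the paper; since fresh snow can have \alpha close to 0.9, the higher-order terms of the series are not negligible.

```latex
E_{\mathrm{terrain}} \;\approx\; E_{\mathrm{inc}}\,\alpha V_t \left(1 + \alpha V_t + (\alpha V_t)^2 + \dots\right) \;=\; \frac{E_{\mathrm{inc}}\,\alpha V_t}{1 - \alpha V_t}
```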

    Data-driven approaches for interactive appearance editing

    This thesis proposes several techniques for interactive editing of digital content and fast rendering of virtual 3D scenes. Editing digital content, such as images or 3D scenes, is difficult and requires artistic talent and technical expertise. To alleviate these difficulties, we exploit data-driven approaches that use easily accessible Internet data (e.g., images, videos, materials) to develop new tools for digital content manipulation. Our proposed techniques allow casual users to achieve high-quality editing by interactively exploring manipulations, without the need to understand the underlying physical models of appearance. First, the thesis presents a fast algorithm for realistic image synthesis of virtual 3D scenes. This serves as the core framework for a new method that allows artists to fine-tune the appearance of a rendered 3D scene: artists directly paint the final appearance, and the system automatically solves for the material parameters that best match the desired look. Along the same lines, an example-based material assignment approach is proposed, in which the 3D models of a virtual scene can be "materialized" simply by providing a guidance source (image/video). Next, the thesis proposes shape and color subspaces of an object that are learned from a collection of exemplar images. These subspaces can be used to constrain image manipulations to valid shapes and colors, or to provide suggestions for manipulations. Finally, data-driven color manifolds, which contain the colors of a specific context, are proposed. Such color manifolds can be used to improve color-picking performance, color stylization, compression, and white balancing.
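    A minimal sketch of the "paint the appearance, solve for materials" idea, under the simplifying assumption (mine, not necessarily the thesis') that rendered pixel values are linear in the per-object material parameters, so the fit reduces to a clamped least-squares problem; the transport matrix T and the [0, 1] clamp are hypothetical.

```python
import numpy as np

# Hypothetical sketch: recover material parameters from a painted target image,
# assuming rendered pixels are linear in the parameters: pixels ≈ T @ params.
# T (n_pixels x n_params) would come from a precomputed light-transport pass.

def fit_materials(T, painted):
    """Least-squares material parameters matching the painted appearance."""
    params, *_ = np.linalg.lstsq(T, painted, rcond=None)
    return np.clip(params, 0.0, 1.0)   # keep e.g. diffuse albedos physical

# e.g. fit_materials(np.random.rand(10000, 8), np.random.rand(10000))
```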