
    Gestion de la complexité géométrique dans le calcul d'éclairement pour la présentation publique de scènes archéologiques complexes [Managing geometric complexity in illumination computation for the public presentation of complex archaeological scenes]

    For cultural heritage, more and more 3D objects are acquired using 3D scanners [Levoy 2000]. The resulting objects are highly detailed and visually rich, but their geometric complexity requires specific rendering methods. We first show how to simplify these objects using a low-resolution mesh with associated normal maps [Boubekeur 2005] that encode the details. Using this representation, we then show how to add global illumination with a grid-based, vector-based representation [Pacanowski 2005]. The grid efficiently captures low-frequency indirect illumination. We use 3D textures (for large objects) and 2D textures (for quasi-planar objects) to store a fixed set of irradiance vectors. These grids are built in a preprocessing step using almost any existing stochastic global-illumination method. During rendering, the indirect illumination at a point is interpolated from the irradiance vectors of its grid cell, yielding a representation that is smooth everywhere. Furthermore, the vector-based representation offers additional robustness against local variations in the geometric properties of the scene.
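    To make the grid idea concrete, here is a minimal sketch of the irradiance-vector lookup described above, not the authors' implementation. Everything here is an illustrative assumption: a dense cubic grid over an axis-aligned bounding box, one monochrome irradiance vector per grid vertex (the cited systems store per-color-channel vectors and also support 2D grids for quasi-planar objects).

```python
import numpy as np

class IrradianceGrid:
    """Hypothetical dense 3D grid of precomputed irradiance vectors."""

    def __init__(self, bbox_min, bbox_max, vectors):
        # vectors: (res, res, res, 3) array of irradiance vectors, filled in
        # a preprocessing pass by any stochastic global-illumination method.
        self.bbox_min = np.asarray(bbox_min, dtype=float)
        self.bbox_max = np.asarray(bbox_max, dtype=float)
        self.vectors = vectors
        self.res = vectors.shape[0]

    def indirect_irradiance(self, p, n):
        """Smooth indirect irradiance at surface point p with normal n."""
        # Map the world position into continuous grid coordinates.
        t = (np.asarray(p) - self.bbox_min) / (self.bbox_max - self.bbox_min)
        g = np.clip(t * (self.res - 1), 0.0, self.res - 1 - 1e-6)
        i0 = g.astype(int)
        f = g - i0
        # Trilinearly interpolate the 8 surrounding irradiance vectors; this
        # per-cell interpolation is what makes the reconstruction smooth.
        E = np.zeros(3)
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((f[0] if dx else 1 - f[0]) *
                         (f[1] if dy else 1 - f[1]) *
                         (f[2] if dz else 1 - f[2]))
                    E += w * self.vectors[i0[0] + dx, i0[1] + dy, i0[2] + dz]
        # Scalar irradiance is the clamped projection of the interpolated
        # vector onto the normal; keeping a vector rather than a scalar is
        # what tolerates local variation in surface orientation.
        return max(float(np.dot(E, n)), 0.0)
```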

    Neural Free-Viewpoint Relighting for Glossy Indirect Illumination

    Precomputed Radiance Transfer (PRT) remains an attractive solution for real-time rendering of complex light-transport effects such as glossy global illumination. After precomputation, the scene can be relit with new environment maps while the viewpoint changes in real time. However, practical PRT methods are usually limited to low-frequency spherical-harmonic lighting. All-frequency techniques using wavelets are promising but have so far had little practical impact: the curse of dimensionality and much higher data requirements have typically limited them to relighting with a fixed view, or to direct lighting only via triple-product integrals. In this paper, we demonstrate a hybrid neural-wavelet PRT solution for high-frequency indirect illumination, including glossy reflection, for relighting with a changing view. Specifically, we seek to represent the light-transport function in the Haar wavelet basis. For global illumination, we learn the wavelet transport using a small multi-layer perceptron (MLP) applied to a feature field as a function of spatial location and wavelet index, with the reflected direction and material parameters as additional MLP inputs. We optimize/learn the feature field (compactly represented by a tensor decomposition) and the MLP parameters from multiple images of the scene under different lighting and viewing conditions. We demonstrate real-time (512 × 512 at 24 FPS, 800 × 600 at 13 FPS) precomputed rendering of challenging scenes involving view-dependent reflections and even caustics. Comment: 13 pages, 9 figures; to appear in Computer Graphics Forum (Proceedings of EGSR 2023).
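    The relighting step can be pictured as a sparse dot product in the wavelet basis: outgoing radiance is the sum over wavelet indices j of a learned transport coefficient T_j(x, ωo) times the environment map's Haar coefficient l_j. The PyTorch sketch below is a hypothetical rendering of that idea only; the layer sizes, the embedding table for wavelet indices, and the top-k coefficient selection are all illustrative assumptions, not the paper's architecture (which, per the abstract, also uses a tensor-decomposed feature field).

```python
import torch
import torch.nn as nn

class WaveletTransportMLP(nn.Module):
    """Toy MLP predicting RGB wavelet transport coefficients T_j(x, wo)."""

    def __init__(self, feat_dim=32, idx_dim=16, mat_dim=2, hidden=64,
                 num_wavelets=4096):  # assumed wavelet-basis size
        super().__init__()
        self.idx_embed = nn.Embedding(num_wavelets, idx_dim)
        self.net = nn.Sequential(
            nn.Linear(feat_dim + idx_dim + 3 + mat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB transport coefficient
        )

    def forward(self, feat, wavelet_idx, refl_dir, material):
        z = torch.cat([feat, self.idx_embed(wavelet_idx), refl_dir, material],
                      dim=-1)
        return self.net(z)

def relight(model, feat, refl_dir, material, env_wavelet_coeffs, k=128):
    # Keep only the k strongest Haar coefficients of the environment map;
    # all-frequency relighting is then L_o(x, wo) = sum_j T_j(x, wo) * l_j.
    idx = env_wavelet_coeffs.norm(dim=-1).topk(k).indices   # (k,)
    l = env_wavelet_coeffs[idx]                             # (k, 3)
    n = feat.shape[0]
    # Evaluate the MLP for every (pixel, wavelet) pair.
    T = model(
        feat.unsqueeze(1).expand(-1, k, -1).reshape(-1, feat.shape[-1]),
        idx.repeat(n),
        refl_dir.unsqueeze(1).expand(-1, k, -1).reshape(-1, 3),
        material.unsqueeze(1).expand(-1, k, -1).reshape(-1, material.shape[-1]),
    ).reshape(n, k, 3)
    return (T * l).sum(dim=1)  # (n_pixels, 3) outgoing radiance
```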

    CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering

    Intrinsic image decomposition is a challenging, long-standing computer vision problem for which ground-truth data is very difficult to acquire. We explore the use of synthetic data for training CNN-based intrinsic image decomposition models, which we then apply to real-world images. To that end, we present CGIntrinsics, a new, large-scale dataset of physically based rendered images of scenes with full ground-truth decompositions. The rendering process we use is carefully designed to yield high-quality, realistic images, which we find to be crucial for this problem domain. We also propose a new end-to-end training method that learns better decompositions by leveraging CGIntrinsics and, optionally, IIW and SAW, two recent datasets of sparse annotations on real-world images. Surprisingly, we find that a decomposition network trained solely on our synthetic data outperforms the state of the art on both IIW and SAW, and performance improves even further when IIW and SAW data are added during training. Our work demonstrates the surprising effectiveness of carefully rendered synthetic data for the intrinsic-images task. Comment: Published in ECCV 2018.
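    The core training signal that dense synthetic ground truth enables can be sketched as follows. This is an illustrative toy, not the paper's network or loss: it assumes the Lambertian image-formation model I = R · S (so log I = log R + log S), a network that predicts only log-reflectance with shading as the residual, and a scale-invariant MSE against the rendered ground truth.

```python
import torch
import torch.nn as nn

class DecompositionNet(nn.Module):
    """Toy CNN: predicts per-pixel log-reflectance from a log-image."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.reflectance_head = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, log_image):
        log_R = self.reflectance_head(self.encoder(log_image))
        log_S = log_image - log_R  # shading is the residual under I = R * S
        return log_R, log_S

def supervised_loss(log_R, log_S, gt_log_R, gt_log_S):
    # Dense ground-truth supervision is exactly what rendered synthetic data
    # provides; the global scale ambiguity (a constant log offset) is removed
    # by subtracting the per-image mean difference before the MSE.
    def si_mse(pred, gt):
        diff = pred - gt
        return ((diff - diff.mean(dim=(1, 2, 3), keepdim=True)) ** 2).mean()
    return si_mse(log_R, gt_log_R) + si_mse(log_S, gt_log_S)
```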