9 research outputs found

    Time and Space Coherent Occlusion Culling for Tileable Extended 3D Worlds

    In order to interactively render large virtual worlds, the amount of 3D geometry passed to the graphics hardware must be kept to a minimum. Typical solutions to this problem include the use of potentially visible sets and occlusion culling; however, these solutions scale poorly, in both time and memory, with the size of the virtual world. We propose a fast and inexpensive variant of occlusion culling tailored to a simple tiling scheme that improves scalability while maintaining very high performance. Tile visibilities are evaluated with hardware-accelerated occlusion queries, and in-tile rendering is rapidly computed using BVH instantiation and any visibility method; we use the CHC++ occlusion culling method for its good general performance. Tiles are instantiated only when tested locally for visibility, thus avoiding the need for a preconstructed global structure for the complete world. Our approach can render large-scale, diversified virtual worlds with complex geometry, such as cities or forests, at high performance and with a modest memory footprint.
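    The tile-level loop described above might be sketched as follows. This is an illustrative reconstruction, not the authors' code: the callback names are invented stand-ins for a hardware occlusion query, BVH instantiation, and the in-tile rendering pass (CHC++ in the paper).

    ```python
    # Illustrative sketch of lazy, query-driven tile culling (not the paper's code).
    # A tile's BVH instance is built only the first time its visibility test passes,
    # so no global structure for the whole world is ever constructed.

    def cull_and_render(tiles, is_tile_visible, instantiate_bvh, render_tile):
        """Render only tiles whose visibility query passes; instantiate lazily."""
        cache = {}          # tile id -> instantiated BVH (built on first visibility)
        rendered = []
        for tile in tiles:
            if not is_tile_visible(tile):   # stands in for a GPU occlusion query
                continue
            if tile not in cache:
                cache[tile] = instantiate_bvh(tile)
            render_tile(cache[tile])        # stands in for the in-tile CHC++ pass
            rendered.append(tile)
        return rendered, cache
    ```

    Keeping the instantiation cache across frames is what gives the temporal coherence: a tile that stays visible pays its construction cost only once.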

    Creació d'un algorisme per a l'automatització i la simplificació d'entorns gràfics complexos

    This document analyzes and describes the implementation of an application that includes a Random Visibility Sampling algorithm which, along with a visibility expansion algorithm based on the vicinity of triangles, gives one possible solution to the visibility problem.
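    A hedged, minimal sketch of the idea (illustrative only; the function names, the ray-casting interface, and the neighbour map are assumptions, not the thesis's code): random rays from a viewpoint record the triangles they hit, and each hit triangle's neighbours are added as well, filling holes the random sampling would otherwise leave.

    ```python
    # Hedged sketch of random visibility sampling with neighbourhood expansion.
    import random

    def sample_visible(cast_ray, neighbours, n_rays, rng=random.Random(0)):
        """cast_ray(direction) -> triangle id or None; neighbours(t) -> set of ids."""
        visible = set()
        for _ in range(n_rays):
            direction = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
            hit = cast_ray(direction)
            if hit is not None:
                visible.add(hit)
                visible.update(neighbours(hit))   # expansion by triangle vicinity
        return visible
    ```

    The expansion step trades exactness for robustness: a triangle adjacent to a hit triangle is likely visible too, so including it cheaply reduces the number of rays needed.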

    Visibility computation through image generalization

    This dissertation introduces the image generalization paradigm for computing visibility. The paradigm is based on the observation that an image is a powerful tool for computing visibility. An image can be rendered efficiently with the support of graphics hardware, and each of the millions of pixels in the image reports a visible geometric primitive. However, the visibility solution computed by a conventional image is far from complete. A conventional image has a uniform sampling rate, which can miss visible geometric primitives with a small screen footprint. A conventional image can only find geometric primitives to which there is direct line of sight from the center of projection (i.e. the eye) of the image; therefore, a conventional image cannot compute the set of geometric primitives that become visible as the viewpoint translates, or as time changes in a dynamic dataset. Finally, like any sample-based representation, a conventional image can only confirm that a geometric primitive is visible; it cannot confirm that a geometric primitive is hidden, as that would require an infinite number of samples to confirm that the primitive is hidden at all of its points.

    The image generalization paradigm overcomes the visibility computation limitations of conventional images. The paradigm has three elements. (1) Sampling pattern generalization entails adding sampling locations to the image plane where needed to find visible geometric primitives with a small footprint. (2) Visibility sample generalization entails replacing the conventional scalar visibility sample with a higher-dimensional sample that records all geometric primitives visible at a sampling location as the viewpoint translates or as time changes in a dynamic dataset; the higher-dimensional visibility sample is computed exactly, by solving visibility event equations, and not through sampling. Another form of visibility sample generalization is to enhance a sample with its trajectory as the geometric primitive it samples moves in a dynamic dataset. (3) Ray geometry generalization redefines a camera ray as the set of 3D points that project at a given image location; this generalization supports rays that are not straight lines, and enables designing cameras with non-linear rays that circumvent occluders to gather samples not visible from a reference viewpoint.

    The image generalization paradigm has been used to develop visibility algorithms for a variety of datasets, visibility parameter domains, and performance-accuracy tradeoff requirements. These include an aggressive from-point visibility algorithm that guarantees finding all geometric primitives with a visible fragment, no matter how small the primitive's image footprint; an efficient and robust exact from-point visibility algorithm that iterates between a sample-based and a continuous visibility analysis of the image plane to quickly converge to the exact solution; a from-rectangle visibility algorithm that uses 2D visibility samples to compute a visible set that is exact under viewpoint translation; a flexible pinhole camera that enables local modulation of the sampling rate over the image plane according to an input importance map; an animated depth image that stores not only color and depth per pixel but also a compact representation of pixel sample trajectories; and a curved ray camera that seamlessly integrates multiple viewpoints into a multiperspective image without the viewpoint transition distortion artifacts of prior methods.
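    Element (1), sampling pattern generalization, admits a minimal hedged sketch (the function name and pixel-space interface are illustrative assumptions, not the dissertation's API): starting from a uniform pixel grid, extra sampling locations are added wherever a primitive's screen footprint is sub-pixel, so small primitives cannot slip between samples.

    ```python
    # Hedged sketch of sampling pattern generalization: a uniform grid is
    # augmented with one extra sample per sub-pixel primitive footprint.

    def generalized_samples(width, height, prim_boxes):
        """prim_boxes: list of (xmin, ymin, xmax, ymax) in pixel coordinates."""
        # Uniform grid: one sample at each pixel center.
        samples = [(x + 0.5, y + 0.5) for y in range(height) for x in range(width)]
        for (x0, y0, x1, y1) in prim_boxes:
            if (x1 - x0) < 1.0 and (y1 - y0) < 1.0:     # sub-pixel footprint
                samples.append(((x0 + x1) / 2.0, (y0 + y1) / 2.0))
        return samples
    ```

    Primitives with a footprint of a pixel or more are already guaranteed a grid sample, so only the small ones trigger extra work.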

    Visibility-Based Optimizations for Image Synthesis

    Department of Computer Graphics and Interaction

    Efficient geometric sound propagation using visibility culling

    Simulating propagation of sound can improve the sense of realism in interactive applications such as video games and can lead to better designs in engineering applications such as architectural acoustics. In this thesis, we present geometric sound propagation techniques which are faster than prior methods and map well to upcoming parallel multi-core CPUs. We model specular reflections by using the image-source method and model finite-edge diffraction by using the well-known Biot-Tolstoy-Medwin (BTM) model. We accelerate the computation of specular reflections by applying novel visibility algorithms, FastV and AD-Frustum, which compute visibility from a point. We accelerate finite-edge diffraction modeling by applying a novel visibility algorithm which computes visibility from a region. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and allows small errors in computing specular reflection paths for higher computational efficiency. FastV and our visibility algorithm from a region are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are an order of magnitude faster than prior approaches for modeling specular reflections and two to ten times faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. Once sound propagation is performed, it is desirable to listen to the propagated sound in interactive and engineering applications. 
We can generate smooth, artifact-free output audio signals by applying efficient audio-processing algorithms. We also present the first efficient audio-processing algorithm for scenarios with a simultaneously moving source and moving receiver (MS-MR), which incurs less than 25% overhead compared to the static source and moving receiver (SS-MR) or moving source and static receiver (MS-SR) scenarios.
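    The image-source method mentioned above is easy to illustrate for a single specular reflection. This is a generic textbook sketch, not FastV or AD-Frustum, and the function names are invented for illustration: mirroring the source across the wall plane yields an image source, and the specular path length equals the straight-line distance from that image source to the receiver.

    ```python
    # Hedged sketch of the image-source method for one specular reflection.

    def reflect_point(p, plane_point, plane_normal):
        """Mirror point p across a plane (plane_normal assumed unit length)."""
        # Signed distance from p to the plane, then step twice that far back.
        d = sum((pi - qi) * ni for pi, qi, ni in zip(p, plane_point, plane_normal))
        return tuple(pi - 2.0 * d * ni for pi, ni in zip(p, plane_normal))

    def reflected_path_length(source, receiver, plane_point, plane_normal):
        """Length of the specular path source -> wall -> receiver."""
        image = reflect_point(source, plane_point, plane_normal)
        return sum((a - b) ** 2 for a, b in zip(image, receiver)) ** 0.5
    ```

    Higher-order reflections repeat the mirroring recursively, one image source per reflection sequence, which is why visibility culling of invalid image sources matters so much for performance.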

    Modélisation procédurale de mondes virtuels par pavage d'occultation

    Demonstration videos can be found at fr.linkedin.com/in/doriangomez/. This thesis deals with procedural modeling of extended virtual worlds for computer graphics. We study visibility applied to tiling patterns, aiming at two distinct objectives: (1) providing artists with efficient tools to generate 3D content for very extended virtual scenes, and (2) guaranteeing, from the moment the world is created, that this content yields efficient rendering and visualization performance. We propose several methods for 2D and 3D visibility determination, in order to achieve interactive or real-time evaluation of potentially visible sets (PVS). They are based on computations of separating and supporting lines/planes of objects, as well as on hierarchies of the objects associated with tiles. Our first (2D) method guarantees full occlusion of the visual field (view frustum) beyond a fixed distance, specified by the scene designer, regardless of the observer's location on the tiling. The second method enables fast estimation and localization of the tiles to which visibility propagates, and builds up the virtual world accordingly; to generate varied worlds, we then extend this method to 3D. Finally, we present two methods to optimize object placement on tiles, improving their occlusion properties and their impact on rendering performance while preserving the atmosphere created by the artist's initial placement choices.
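    The supporting-line machinery can be sketched in 2D for a single disc occluder (a deliberate simplification of the thesis's tile-based tests; the names and the disc shape are illustrative assumptions): the two tangent, i.e. supporting, lines from the eye to the disc bound a shadow wedge, and a target point is approximately occluded when it lies inside that wedge beyond the occluder.

    ```python
    # Hedged 2D sketch: occlusion of a point by a disc, via supporting lines.
    import math

    def is_occluded(eye, centre, radius, target):
        """Approximate test: target inside the tangent wedge, beyond the disc."""
        ex, ey = eye; cx, cy = centre; tx, ty = target
        dcx, dcy = cx - ex, cy - ey
        dtx, dty = tx - ex, ty - ey
        dist_c = math.hypot(dcx, dcy)
        if dist_c <= radius:                     # eye inside the occluder
            return True
        half_angle = math.asin(radius / dist_c)  # wedge half-angle of the tangents
        angle = math.atan2(dty, dtx) - math.atan2(dcy, dcx)
        angle = (angle + math.pi) % (2 * math.pi) - math.pi   # wrap to [-pi, pi]
        behind = math.hypot(dtx, dty) >= dist_c  # crude "beyond the disc" test
        return behind and abs(angle) <= half_angle
    ```

    Replacing the disc with a convex polygon swaps `asin` for explicit supporting-line computations between the eye and the polygon's silhouette vertices; the wedge logic is otherwise the same.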

    Occlusion culling et pipeline hybride CPU/GPU pour le rendu temps réel de scènes complexes pour la réalité virtuelle mobile

    Nowadays, 3D real-time rendering has become an essential tool for any modeling and maintenance work on complex mechanical systems, for the development of serious or entertainment games, and more generally for any interactive visualization application in industry, medicine, architecture, and beyond. Currently, this task is generally assigned to graphics hardware, due to its architecture specifically designed for fast 3D rendering, in particular its dedicated rasterization and texturing units. However, industrial applications run on a wide range of computers, heterogeneous in terms of computing power. These machines are not always equipped with high-end hardware, which restricts their use for applications that display complex 3D scenes. Current research is strongly oriented towards solutions based on modern, high-performance graphics hardware. On the contrary, we do not assume the existence of such hardware on all architectures, and we therefore propose to adapt our rendering pipeline to the computing architecture at hand in order to obtain efficient rendering. Our rendering engine adapts to the computer's capabilities, taking each computing unit, CPU and GPU, into account. The goal is to balance the workload between the two units as well as possible, so as to ensure real-time rendering of complex scenes even on low-end computers. This pipeline can easily be integrated into any conventional rendering engine and does not require any precomputation step.
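    The balancing idea can be caricatured with a proportional work split. This is a deliberately minimal sketch under assumed inputs (per-unit throughput in objects per millisecond, measured elsewhere); the actual pipeline's scheduling is more involved.

    ```python
    # Hedged sketch: split a frame's objects between CPU and GPU in proportion
    # to each unit's measured throughput, so both finish at about the same time.

    def split_workload(n_objects, cpu_rate, gpu_rate):
        """Return (cpu_share, gpu_share) summing to n_objects.

        cpu_rate / gpu_rate: measured throughput of each unit (objects/ms)."""
        cpu_share = round(n_objects * cpu_rate / (cpu_rate + gpu_rate))
        return cpu_share, n_objects - cpu_share
    ```

    Re-measuring the rates every few frames lets the split track load changes (thermal throttling, background processes) without any precomputation, in the spirit of the adaptive pipeline described above.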