
    QuadStream: A Quad-Based Scene Streaming Architecture for Novel Viewpoint Reconstruction


    Visualization and inspection of the geometry of particle packings

    The aim of this dissertation is to develop efficient techniques for visualizing and inspecting the geometry of particle packings. Simulations of such packings are used, e.g., in materials science to predict properties of granular materials. To better understand and supervise the behavior of these simulations, not only the particles themselves but also special regions formed by the particles, which can reveal the progress of the simulation and the spatial distribution of hot spots, should be visualized. This should be possible at interactive frame rates even for large-scale packings with millions of particles. Moreover, since the simulation runs on the GPU, the visualization techniques should make full use of the data already in GPU memory. To improve the performance of granular materials such as concrete, considerable attention has been paid to the particle size distribution, which is the main determinant of the space filling rate and therefore affects two of the most important properties of concrete: structural robustness and durability. Given the particle size distribution, the space filling rate can be determined by computer simulations, which in practice are often superior to analytical approaches due to particle irregularities and the wide range of particle sizes. One widely adopted simulation method is collective rearrangement: particles are first placed at random positions inside a container, and overlaps between particles are then resolved by pushing overlapping particles away from each other into empty space. By cleverly adjusting the container size during the simulation, the collective rearrangement method can produce a rather dense particle packing in the end. However, it is very hard to fine-tune or debug the whole simulation process without an interactive visualization tool. Starting from the well-established rasterization-based method for rendering spheres, this dissertation first provides new, fast, and pixel-accurate methods to visualize the overlaps and free spaces between spherical particles inside a container. The rasterization-based techniques perform well for small-scale particle packings but deteriorate for large-scale packings due to large memory requirements that are hard to estimate correctly in advance.

To address this problem, new methods based on ray tracing are provided, along with two new kinds of bounding volume hierarchies (BVHs) that accelerate the ray tracing process: the first can reuse the existing simulation data structure, and the second is more memory efficient. Both BVHs build on the idea of the loose octree and are the first of their kind to take the size of primitives into account for interactive ray tracing with frequently updated acceleration structures. Moreover, the visualization techniques in this dissertation can also be adapted to compute properties such as the volumes of specific regions. All of these visualization techniques are then extended to non-spherical particles, where a non-spherical particle is approximated by a rigid system of spheres so that the existing sphere-based simulation can be reused. To this end, a new GPU-based method is presented for efficiently filling a non-spherical particle with polydisperse, possibly overlapping spheres, so that a particle can be filled with fewer spheres without sacrificing the space filling rate. This eases both simulation and visualization. Based on the approaches presented in this dissertation, more sophisticated algorithms can be developed to visualize large-scale non-spherical particle mixtures more efficiently. Furthermore, the hardware ray tracing of more recent graphics cards could be used in place of the software ray tracing employed in this dissertation. The new techniques can also serve as the basis for interactively visualizing other particle-based simulations in which special regions such as free spaces or overlaps between particles are of interest.
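
    As a rough illustration of the collective rearrangement step described above, the following C++ sketch resolves overlaps between spherical particles by pushing each overlapping pair apart. The structure names and the brute-force pair loop are illustrative assumptions; the dissertation's simulation runs on the GPU and uses acceleration structures instead.

        // One collective-rearrangement iteration for spherical particles (CPU sketch).
        // Names (Vec3, Sphere, resolveOverlaps) are illustrative, not from the dissertation.
        #include <cmath>
        #include <cstddef>
        #include <vector>

        struct Vec3 { float x, y, z; };
        struct Sphere { Vec3 c; float r; };

        static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
        static float len(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

        // Push each overlapping pair apart along the line between the centers,
        // half the penetration depth per particle. A real implementation would
        // replace the O(n^2) pair loop with a spatial acceleration structure.
        void resolveOverlaps(std::vector<Sphere>& p) {
            for (std::size_t i = 0; i < p.size(); ++i) {
                for (std::size_t j = i + 1; j < p.size(); ++j) {
                    Vec3 d = sub(p[j].c, p[i].c);
                    float dist = len(d);
                    float pen = p[i].r + p[j].r - dist;
                    if (pen <= 0.0f || dist == 0.0f) continue;  // no overlap, or coincident centers
                    float s = 0.5f * pen / dist;                // half the penetration for each particle
                    p[i].c = {p[i].c.x - d.x * s, p[i].c.y - d.y * s, p[i].c.z - d.z * s};
                    p[j].c = {p[j].c.x + d.x * s, p[j].c.y + d.y * s, p[j].c.z + d.z * s};
                }
            }
        }

    Repeating this step while adjusting the container size is what lets the method converge towards a dense packing; the container adjustment itself is omitted here.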

    GPU point list generation through histogram pyramids

    Image pyramids are frequently used in porting non-local algorithms to graphics hardware. A histogram pyramid (short: HistoPyramid), a special version of an image pyramid, hierarchically sums up the number of active entries in a 2D image. We show how a HistoPyramid can be utilized as an implicit indexing data structure, allowing us to convert a sparse matrix into a coordinate list of its active cell entries (a point list) on graphics hardware. The algorithm reduces a highly sparse matrix with N elements to a list of its M active entries in O(N) + M log N steps, despite the restricted graphics hardware architecture. Applications are numerous, including feature detection, pixel classification and binning, conversion of 3D volumes to particle clouds, and sparse matrix compression.
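
    As a rough sketch of the HistoPyramid idea described above, the following C++ code builds the pyramid on the CPU for a square power-of-two image and extracts the coordinates of the k-th active cell by top-down traversal; in the actual method both phases run on graphics hardware, and all names here are illustrative.

        // Build: level 0 holds a 0/1 flag per cell; each coarser level sums 2x2 blocks.
        #include <cstdint>
        #include <utility>
        #include <vector>

        using Level = std::vector<uint32_t>;  // row-major counts for one pyramid level

        std::vector<Level> buildPyramid(const Level& activeFlags, int size) {
            std::vector<Level> pyr{activeFlags};
            for (int s = size; s > 1; s /= 2) {
                const Level& fine = pyr.back();
                Level coarse((s / 2) * (s / 2));
                for (int y = 0; y < s / 2; ++y)
                    for (int x = 0; x < s / 2; ++x)
                        coarse[y * (s / 2) + x] =
                            fine[(2 * y) * s + 2 * x]     + fine[(2 * y) * s + 2 * x + 1] +
                            fine[(2 * y + 1) * s + 2 * x] + fine[(2 * y + 1) * s + 2 * x + 1];
                pyr.push_back(coarse);
            }
            return pyr;  // pyr.back()[0] is M, the total number of active cells
        }

        // Traversal: descend from the root, choosing the child whose cumulative
        // count first exceeds k; returns the (x, y) coordinates of active cell k.
        std::pair<int, int> extractPoint(const std::vector<Level>& pyr, uint32_t k) {
            int x = 0, y = 0;
            for (int level = (int)pyr.size() - 2; level >= 0; --level) {
                int s = 1 << ((int)pyr.size() - 1 - level);  // side length of this level
                x *= 2; y *= 2;
                for (int child = 0; child < 4; ++child) {
                    int cx = x + (child & 1), cy = y + (child >> 1);
                    uint32_t n = pyr[level][cy * s + cx];
                    if (k < n) { x = cx; y = cy; break; }
                    k -= n;
                }
            }
            return {x, y};
        }

    Running extractPoint for every k in [0, M) yields the point list; on the GPU each output element performs its own logarithmic traversal, which is where the M log N term in the cost above comes from.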

    RLFC: Random Access Light Field Compression using Key Views and Bounded Integer Encoding

    We present a new hierarchical compression scheme for encoding light field images (LFI) that is suitable for interactive rendering. Our method (RLFC) exploits redundancies in the light field images by constructing a tree structure. The top level (root) of the tree captures the common high-level details across the LFI, and the other levels (children) of the tree capture specific low-level details of the LFI. Our decompression algorithm corresponds to tree traversal operations that gather the values stored at different levels of the tree. Furthermore, we use bounded integer sequence encoding, which provides random access and fast hardware decoding, for compressing the blocks of children of the tree. We have evaluated our method for 4D two-plane parameterized light fields. The compression rates vary from 0.08 to 2.5 bits per pixel (bpp), resulting in compression ratios of around 200:1 to 20:1 for a PSNR quality of 40 to 50 dB. The decompression times for decoding the blocks of LFI are 1 to 3 microseconds per channel on an NVIDIA GTX 960, and we can render new views with a resolution of 512x512 at 200 fps. Our overall scheme is simple to implement and involves only bit manipulations and integer arithmetic operations. Accepted for publication at the Symposium on Interactive 3D Graphics and Games (I3D '19).
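
    The decompression above is described as a tree traversal that gathers values from every level; the C++ sketch below shows that general gather-and-sum pattern for a single pixel. The tree layout, the residual representation, and the clamping are assumptions made for illustration and are not RLFC's actual data format, which additionally relies on bounded integer sequence encoding for the child blocks.

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // One decoded residual block per node of a tree level (illustrative layout).
        struct TreeLevel {
            std::vector<std::vector<int16_t>> residuals;  // residuals[node][pixel]
        };

        // Reconstruct one pixel by summing the value gathered at each level along
        // the path from leaf to root, then clamping to an 8-bit channel.
        uint8_t reconstructPixel(const std::vector<TreeLevel>& levels,
                                 const std::vector<uint32_t>& pathLeafToRoot,
                                 std::size_t pixel) {
            int value = 0;
            for (std::size_t level = 0; level < levels.size(); ++level)
                value += levels[level].residuals[pathLeafToRoot[level]][pixel];
            if (value < 0) value = 0;      // clamp to the valid 8-bit range
            if (value > 255) value = 255;
            return static_cast<uint8_t>(value);
        }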

    A Survey on Video-based Graphics and Video Visualization


    A heterogeneous data-based proposal for procedural 3D cities visualization and generalization

    This thesis project was born from a collaboration between the research team VORTEX / Visual Objects: from Reality to Expression (now REVA: Real Expression Artificial Life) at IRIT (Institut de Recherche en Informatique de Toulouse) on the one hand, and education professionals, companies, and public entities on the other. The SCOLA collaborative project is essentially an online learning platform based on the use of serious games in schools. It helps users acquire and track predefined skills. The platform provides teachers with a new, flexible tool for creating pedagogical scenarios and personalizing student records. Several contributions were assigned to IRIT. One of them is to propose a solution for the automatic creation of 3D environments to be integrated into the game scenario. This solution aims to spare 3D graphic artists from manually modeling detailed and large 3D environments, which can be very expensive and time-consuming. Various applications and prototypes have been developed to allow users to generalize and visualize their own virtual world, primarily from a set of rules. Consequently, there is no single representation scheme for the virtual world, due to the heterogeneity and diversity of 3D content design, especially of city models. This constraint led us to rely heavily, in our project, on real 3D urban data instead of custom data predefined by the game designer. Advances in computer graphics, high computing capabilities, and Web technologies have revolutionized data reconstruction and visualization techniques. These techniques are applied in a variety of areas, from video games and simulations to movies that use procedurally generated spaces and character animations. Although modern computer games do not have the same hardware and memory restrictions as older games, procedural generation is frequently used to create games, maps, levels, characters, or other random facets that are unique to each playthrough. Currently, the trend is shifting towards GIS (Geographic Information Systems) for creating urban worlds, especially after their successful adoption worldwide to support many application areas. GIS are mostly dedicated to applications such as simulation, disaster management, and urban planning, and their use in games remains rather limited; one example is the game "Minecraft", whose latest version offers maps built from real-world cities (Geodata in Minecraft). The use of existing urban data is becoming increasingly widespread in cartographic applications for two main reasons: first, it allows the spatial content of urban objects to be understood in a more logical way, and second, it provides a common platform for integrating city-level information from different environments or resources and making it available to users.

A 3D virtual city model is a digital representation of urban space that describes the geometric, topological, semantic, and appearance properties of its components. In general, a 3D city model (MV3D) serves as an integration platform for many facets of an urban information space, as Batty pointed out: "In short, the new models are not just the digital geometry of traditional models, but large-scale databases that can be visualized in 3D. As such, they already represent a way to merge more abstract symbolic or thematic data, even symbolic patterns, into this mode of representation."

    Scalable Real-Time Rendering for Extremely Complex 3D Environments Using Multiple GPUs

    In 3D visualization, real-time rendering of high-quality meshes in complex 3D environments is still one of the major challenges in computer graphics. New data acquisition techniques like 3D modeling and scanning have drastically increased the demand for more complex models and higher display resolutions in recent years. Most existing acceleration techniques using a single GPU for rendering suffer from the limited GPU memory budget, time-consuming sequential execution, and the finite display resolution. Recently, people have started building commodity workstations with multiple GPUs and multiple displays. As a result, more GPU memory is available across a distributed cluster of GPUs, more computational power is provided by combining multiple GPUs, and a higher display resolution can be achieved by connecting each GPU to a display monitor (resulting in a tiled large-display configuration). However, using a multi-GPU workstation does not always give the desired rendering performance, due to imbalanced rendering workloads among GPUs and overheads caused by inter-GPU communication. In this dissertation, I contribute a multi-GPU, multi-display parallel rendering approach for complex 3D environments. The approach supports high-performance, high-quality rendering of static and dynamic 3D environments. A novel parallel load balancing algorithm is developed based on a screen partitioning strategy to dynamically balance the number of vertices and triangles rendered by each GPU. The overhead of inter-GPU communication is minimized by transferring only a small amount of image pixels rather than chunks of 3D primitives, using a novel frame exchanging algorithm. State-of-the-art parallel mesh simplification and GPU out-of-core techniques are integrated into the multi-GPU, multi-display system to accelerate the rendering process.
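
    As a rough illustration of the screen-partitioning idea, the sketch below splits the screen into one vertical strip per GPU so that each strip covers approximately the same number of projected primitives; the per-column primitive histogram and the strip representation are illustrative assumptions, not the dissertation's exact load balancing algorithm.

        #include <cstdint>
        #include <vector>

        // Returns the exclusive right boundary (in pixels) of each GPU's strip,
        // given how many primitives project onto each screen column.
        std::vector<int> partitionScreen(const std::vector<uint64_t>& primsPerColumn,
                                         int numGpus) {
            uint64_t total = 0;
            for (uint64_t c : primsPerColumn) total += c;

            std::vector<int> boundaries;
            uint64_t accum = 0;
            int gpu = 1;
            for (int x = 0; x < (int)primsPerColumn.size() && gpu < numGpus; ++x) {
                accum += primsPerColumn[x];
                // Close the current strip once it holds its fair share of primitives.
                if (accum * (uint64_t)numGpus >= total * (uint64_t)gpu) {
                    boundaries.push_back(x + 1);
                    ++gpu;
                }
            }
            boundaries.push_back((int)primsPerColumn.size());  // last strip ends at the screen edge
            return boundaries;
        }

    Recomputing the boundaries every few frames keeps the per-GPU vertex and triangle counts roughly equal as the camera moves, which is the dynamic balancing the abstract refers to.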