Interactive inspection of complex multi-object industrial assemblies
The final publication is available at Springer via http://dx.doi.org/10.1016/j.cad.2016.06.005
The use of virtual prototypes and digital models containing thousands of individual objects is commonplace in complex industrial applications such as the cooperative design of huge ships. Designers are interested in selecting and editing specific sets of objects during interactive inspection sessions; this is, however, not supported by standard visualization systems for huge models. In this paper we discuss in detail the concept of the rendering front in multiresolution trees, its properties, and the algorithms that construct the hierarchy and render it efficiently, applied to very complex CAD models so that the model structure and the identities of objects are preserved. We also propose an algorithm for the interactive inspection of huge models that uses a rendering budget and supports selection of individual objects and sets of objects, displacement of the selected objects, and real-time collision detection during these displacements. Our solution, based on the analysis of several existing view-dependent visualization schemes, uses a Hybrid Multiresolution Tree that mixes layers of exact geometry, simplified models and impostors, together with a time-critical, view-dependent algorithm and a Constrained Front. The algorithm has been successfully tested in real industrial environments; the models involved are presented and discussed in the paper.
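The budgeted, view-dependent refinement described in this abstract can be sketched as a greedy front selection over a multiresolution tree: refine the node with the largest screen-space error until the triangle budget is exhausted. The `Node` layout, error values, and budget policy below are illustrative assumptions, not the paper's actual data structures:

```python
import heapq

class Node:
    def __init__(self, tri_count, error, children=None):
        self.tri_count = tri_count   # triangles rendered for this node
        self.error = error           # screen-space error if rendered as-is
        self.children = children or []  # finer-level representations

def select_front(root, budget):
    """Greedily refine the front node with the largest error while the
    triangle budget allows it; the surviving set is the rendering front."""
    front = {id(root): root}
    used = root.tri_count
    heap = [(-root.error, id(root), root)]
    while heap:
        _neg_err, key, node = heapq.heappop(heap)
        if key not in front or not node.children:
            continue  # stale entry, or a leaf that cannot be refined
        extra = sum(c.tri_count for c in node.children) - node.tri_count
        if used + extra > budget:
            continue  # refining this node would exceed the budget
        del front[key]
        used += extra
        for c in node.children:
            front[id(c)] = c
            heapq.heappush(heap, (-c.error, id(c), c))
    return list(front.values()), used
```

A Constrained Front, as the paper uses it, would additionally forbid refining or coarsening across selected-object boundaries; this sketch shows only the budget mechanism.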
Multiresolution Techniques for Real-Time Visualization of Urban Environments and Terrains
In recent times we have witnessed a steep increase in the availability of data coming from real-life environments.
Nowadays, virtually everyone connected to the Internet has instant access to a tremendous amount of data coming from satellite elevation maps, airborne time-of-flight scanners and digital cameras, street-level photographs and even cadastral maps.
As with other, more traditional types of media such as pictures and videos, users of digital exploration software expect commodity hardware to exhibit good performance for interactive purposes, regardless of the dataset size.
In this thesis we propose novel solutions to the problem of rendering large terrain and urban models on commodity platforms, both for local and remote exploration.
Our solutions build on the concept of multiresolution representation, where alternative representations of the same data at different accuracies are used to selectively distribute the computational power, and consequently the visual accuracy, where it is most needed based on the user's point of view.
In particular, we will introduce an efficient multiresolution data compression technique for planar and spherical surfaces, applied to terrain datasets, which is able to handle huge amounts of information at a planetary scale.
We will also describe a novel data structure for the compact storage and rendering of urban entities such as buildings, allowing real-time exploration of cityscapes from a remote online repository.
Moreover, we will show how recent technologies can be exploited to transparently integrate virtual exploration and general computer graphics techniques with web applications.
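The view-dependent accuracy distribution described above is commonly driven by projecting each node's object-space error to screen pixels and picking the coarsest level that stays under a pixel tolerance. A minimal sketch; the field-of-view, viewport, and tolerance parameters are assumed defaults, not values from the thesis:

```python
import math

def projected_error(geom_error, distance, fov_y, viewport_h):
    """Project an object-space error (world units) to screen pixels
    for a perspective camera at the given distance."""
    return geom_error * viewport_h / (2.0 * distance * math.tan(fov_y / 2.0))

def choose_level(level_errors, distance,
                 fov_y=math.radians(60), viewport_h=1080, tol_px=1.0):
    """Pick the coarsest level (index 0 = coarsest) whose projected
    error drops below tol_px; fall back to the finest level."""
    for level, err in enumerate(level_errors):
        if projected_error(err, distance, fov_y, viewport_h) <= tol_px:
            return level
    return len(level_errors) - 1
```

Far-away terrain thus resolves to coarse levels automatically, concentrating triangles where the viewer is close.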
Supporting multi-resolution out-of-core rendering of massive LiDAR point clouds through non-redundant data structures
This is an Accepted Manuscript of an article published by Taylor & Francis in INTERNATIONAL JOURNAL OF GEOGRAPHICAL INFORMATION SCIENCE on 28 Nov 2018, available at: https://doi.org/10.1080/13658816.2018.1549734
[Abstract]: In recent years, the evolution and improvement of LiDAR (Light Detection and Ranging) hardware have increased the quality and quantity of the gathered data, making its storage, processing and management particularly challenging. In this work we present a novel, multi-resolution, out-of-core technique for web-based visualization, implemented through a non-redundant data point organization method, which we call Hierarchically Layered Tiles (HLT), and a tree-like structure called the Tile Grid Partitioning Tree (TGPT). The design of these elements is mainly focused on attaining very low levels of memory consumption, disk storage usage and network traffic on both client and server side, while delivering high-performance interactive visualization of massive LiDAR point clouds (up to 28 billion points) on multiplatform environments (mobile devices or desktop computers). HLT and TGPT were incorporated and tested in ViLMA (Visualization for LiDAR data using a Multi-resolution Approach), our own web-based visualization software specially designed to work with massive LiDAR point clouds.
This research was supported by Xunta de Galicia under the Consolidation Programme of Competitive Reference Groups, co-funded by ERDF funds from the EU [Ref. ED431C 2017/04]; the Consolidation Programme of Competitive Research Units, co-funded by ERDF funds from the EU [Ref. R2016/037]; Xunta de Galicia (Centro Singular de Investigación de Galicia accreditation 2016/2019) and the European Union (European Regional Development Fund, ERDF) under Grant [Ref. ED431G/01]; and the Ministry of Economy and Competitiveness of Spain and ERDF funds from the EU [TIN2016-75845-P].
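The non-redundant organization that HLT achieves can be illustrated with a toy level-assignment scheme in which each point is stored at exactly one level, instead of coarse levels repeating points that also live at finer ones. The grid-based subsampling below is a hypothetical stand-in for the actual HLT/TGPT construction:

```python
def build_nonredundant_levels(points, num_levels, cell0=1.0):
    """Assign each (x, y, z) point to exactly one level: the first
    (coarsest) level whose grid cell is still empty claims it, so no
    point is duplicated across levels."""
    levels = [[] for _ in range(num_levels)]
    occupied = [set() for _ in range(num_levels)]
    for p in points:
        for lvl in range(num_levels):
            cell = cell0 / (2 ** lvl)  # cells halve at each finer level
            key = (int(p[0] // cell), int(p[1] // cell), int(p[2] // cell))
            if key not in occupied[lvl]:
                occupied[lvl].add(key)
                levels[lvl].append(p)
                break
        else:
            levels[-1].append(p)  # overflow: keep at the finest level
    return levels
```

Rendering a view then streams the union of levels 0..L for the required accuracy L, with no point transferred twice, which is what keeps client, server, and network costs low.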
Distributed texture-based terrain synthesis
Terrain synthesis is an important field of computer graphics that deals with the generation of 3D landscape models for use in virtual environments. The field has evolved to a stage where large and even infinite landscapes can be generated in real time. However, user control of the generation process is still minimal, as is the creation of virtual landscapes that mimic real terrain. This thesis investigates the use of texture synthesis techniques on real landscapes to improve realism, and the use of sketch-based interfaces to enable intuitive user control.
Beyond high-resolution geometry in 3D Cultural Heritage: enhancing visualization realism in interactive contexts
This thesis, in the field of interactive 3D computer graphics, describes the definition and development of algorithms for improved realism in the visualization of large three-dimensional models, with particular attention to the application of these 3D visualization technologies to cultural heritage.
Scalable exploration of 3D massive models
Programa Oficial de Doutoramento en Tecnoloxías da Información e as Comunicacións. 5032V01
[Abstract] This thesis introduces scalable techniques that advance the state-of-the-art in massive model creation and exploration. Concerning model creation, we present methods for improving reality-based scene acquisition and processing, introducing an efficient
implementation of scalable out-of-core point clouds and a data-fusion approach for
creating detailed colored models from cluttered scene acquisitions. The core of this
thesis concerns enabling technology for the exploration of general large datasets.
Two novel solutions are introduced. The first is an adaptive out-of-core technique
exploiting the GPU rasterization pipeline and hardware occlusion queries in order
to create coherent batches of work for localized shader-based ray tracing kernels,
opening the door to out-of-core ray tracing with shadowing and global illumination.
The second is an aggressive compression method that exploits redundancy in large
models to compress data so that it fits, in fully renderable format, in GPU memory.
The method is targeted to voxelized representations of 3D scenes, which are widely
used to accelerate visibility queries on the GPU. Compression is achieved by merging
subtrees that are identical through a similarity transform and by exploiting the skewed
distribution of references to shared nodes to store child pointers using a variable bitrate
encoding. The capability and performance of all methods are evaluated on many
very massive real-world scenes from several domains, including cultural heritage,
engineering, and gaming.
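The subtree-merging compression summarized in this abstract can be sketched as bottom-up hashing of an octree into a DAG: identical subtrees collapse into one shared node. This toy version merges exact duplicates only (the thesis additionally matches subtrees up to a similarity transform), and the dictionary-based node layout is an assumption for illustration:

```python
def merge_subtrees(node, pool=None):
    """Bottom-up deduplication of a sparse voxel octree into a DAG.
    Each subtree is reduced to a canonical signature (its value plus
    the identities of its already-canonicalized children); subtrees
    with the same signature share one node in `pool`."""
    if pool is None:
        pool = {}
    if node is None:
        return None
    # Canonicalize children first, then look this subtree up by signature.
    kids = tuple(merge_subtrees(c, pool) for c in node.get("children", ()))
    key = (node.get("value"), tuple(id(k) for k in kids))
    if key not in pool:
        pool[key] = {"value": node.get("value"), "children": list(kids)}
    return pool[key]
```

Because canonical children are unique objects, their `id`s form a stable signature within one pool; a production encoder would also pack the child pointers with the variable bit-rate scheme the abstract mentions, spending fewer bits on frequently referenced shared nodes.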
Towards Predictive Rendering in Virtual Reality
The quest for generating predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, the generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis also contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials.
The techniques proposed by this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying them to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved to achieve truly predictive image generation.
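The BTF compression step can be illustrated with the classic linear-basis approach: sample the BTF as a (view/light pairs × texels) matrix, factor it with a truncated SVD, and keep a few "eigen-textures" plus per-sample weights. This is a generic sketch of that family of methods, not the thesis's actual codec:

```python
import numpy as np

def compress_btf(btf, k):
    """Compress a BTF matrix of shape (num_view_light_pairs, num_texels)
    by truncated SVD: k basis 'eigen-textures' plus per-pair weights."""
    u, s, vt = np.linalg.svd(btf, full_matrices=False)
    weights = u[:, :k] * s[:k]   # (pairs, k) coefficients
    basis = vt[:k]               # (k, texels) eigen-textures
    return weights, basis

def decompress_btf(weights, basis):
    """Reconstruct the (approximate) BTF matrix on the fly; at render
    time a shader would evaluate only the texel and view/light pair
    it needs, interpolating the weights."""
    return weights @ basis
```

For a 20-pair, 50-texel sample and k = 5, storage drops from 1000 to 350 coefficients; real BTFs are vastly larger, which is why such factorizations are essential for fitting them in GPU memory.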