Multiresolution Ray Tracing For Point-Based Geometry [QA445. N832 2007 f rb]
The primary concern in this thesis is the incorporation of multiresolution-based optimization into ray tracing algorithms specially tailored for point-based geometry.
Time-varying volume visualization
Volume rendering is a very active research field in Computer Graphics because of its wide range of applications in various sciences, from medicine to flow mechanics. In this report, we survey the state of the art in time-varying volume rendering. We state several basic concepts and then establish several criteria to classify the studied works: IVR versus DVR, 4D versus 3D+time, compression techniques, involved architectures, use of parallelism, and image-space versus object-space coherence. We also address related problems such as transfer functions and the computation of 2D cross-sections of time-varying volume data. All the reviewed papers are classified into several tables based on these criteria and, finally, several conclusions are presented.
Interactive global illumination on the CPU
Computing realistic physically-based global illumination in real time remains one of the major goals in the fields of rendering and visualisation; one that has not yet been achieved due to its inherent computational complexity. This thesis focuses on CPU-based interactive global illumination approaches, with the aim of developing generalisable, hardware-agnostic algorithms. Interactive ray tracing relies on spatial and cache coherence to achieve interactive rates, which conflicts with the needs of global illumination solutions, which require a large number of incoherent secondary rays to be computed. Methods that reduce the total number of rays to be processed, such as selective rendering, were investigated to determine how best they can be utilised.

The impact of selective rendering on interactive ray tracing was analysed and quantified, and two novel global illumination algorithms were developed, with the structured methodology used presented as a framework. Adaptive Interleaved Sampling is a generalisable approach that combines interleaved sampling with an adaptive approach, using efficient component-specific adaptive guidance methods to drive the computation. Results of up to 11 frames per second were demonstrated for multiple components, including participating media. Temporal Instant Caching is a caching scheme for accelerating the computation of diffuse interreflections to interactive rates. This approach achieved frame rates exceeding 9 frames per second for the majority of scenes. Validation of the results for both approaches showed little perceptual difference when compared against a gold-standard path-traced image. Further research into caching led to the development of a new wait-free data access control mechanism for sharing the irradiance cache among multiple rendering threads on a shared-memory parallel system. By not serialising accesses to the shared data structure, irradiance values were shared among all threads without any overhead or contention when reading and writing simultaneously. This new approach achieved efficiencies between 77% and 92% for 8 threads when calculating static images and animations.

This work demonstrates that, due to the flexibility of the CPU, CPU-based algorithms remain a valid and competitive choice for achieving global illumination interactively, and an alternative to the generally brute-force GPU-centric algorithms.
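The wait-free sharing idea described above can be illustrated with a minimal sketch: writers claim a unique slot and publish a complete record with a single store, so readers never block and never see a partial write. The class and method names here are illustrative assumptions, not the thesis's actual API, and this models the concept in CPython rather than a low-level atomic implementation.

```python
# Sketch of a wait-free shared irradiance cache: fixed-capacity slot
# array, single-writer-per-slot, lock-free readers. Illustrative only.
import itertools

class IrradianceCache:
    def __init__(self, capacity=1024):
        self._slots = [None] * capacity   # each slot is written at most once
        self._claim = itertools.count()   # slot allocator (atomic under the GIL)
        self._capacity = capacity

    def insert(self, position, irradiance):
        """Writers claim a unique slot and publish it; no locks are taken."""
        i = next(self._claim)
        if i >= self._capacity:
            return False                  # cache full: drop the record, don't block
        # Publishing the whole record with one reference store means readers
        # see either the complete record or None -- never a partial write.
        self._slots[i] = (position, irradiance)
        return True

    def lookup(self, position, radius):
        """Readers scan published slots; unpublished slots are simply skipped."""
        hits = []
        for rec in self._slots:
            if rec is None:
                continue                  # not yet published by any writer
            p, e = rec
            if sum((a - b) ** 2 for a, b in zip(p, position)) <= radius ** 2:
                hits.append(e)
        return hits
```

Because no access is serialised, concurrent readers and writers proceed without contention; the cost is that a reader may miss a record that is being inserted at that instant, which is acceptable for an irradiance cache since a miss only means recomputing one value.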
VolumeEVM: A new surface/volume integrated model
Volume visualization is a very active research area in the field of scientific visualization. The Extreme Vertices Model (EVM) has proven to be a complete intermediate model for visualizing and manipulating volume data using a surface rendering approach. However, integrating the advantages of the surface rendering approach with the superiority of volume rendering for visual exploration would produce a very complete visualization and editing system for volume data. We therefore define an enhanced EVM-based model which incorporates the volumetric information required to achieve a nearly direct volume visualization technique. Thus, VolumeEVM was designed to maintain the same EVM-based data structure plus a sorted list of density values corresponding to the interior voxels of the EVM-based volumes of interest. A function relating the interior voxels of the EVM to the set of densities had to be defined. This report presents the definition of this new surface/volume integrated model, based on the well-known EVM encoding, and proposes implementations of the main software-based direct volume rendering techniques through the proposed model.
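The voxel-to-density function mentioned above can be sketched as a pairing by canonical rank: if interior voxels are enumerated in a fixed scan order, the i-th density in the stored list belongs to the i-th voxel. The function name and the lexicographic ordering convention are illustrative assumptions, not the EVM specification.

```python
# Sketch: pair each interior voxel with its stored density by canonical
# rank. The lexicographic (x, y, z) ordering is an assumed convention.
def voxel_density_map(interior_voxels, densities):
    """Return a dict mapping each interior voxel to its density value."""
    ordered = sorted(interior_voxels)   # canonical (x, y, z) scan order
    if len(ordered) != len(densities):
        raise ValueError("one density per interior voxel is required")
    return {v: d for v, d in zip(ordered, densities)}
```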
Scalable exploration of 3D massive models
This thesis introduces scalable techniques that advance the state of the art in massive model creation and exploration. Concerning model creation, we present methods for improving reality-based scene acquisition and processing, introducing an efficient
implementation of scalable out-of-core point clouds and a data-fusion approach for
creating detailed colored models from cluttered scene acquisitions. The core of this
thesis concerns enabling technology for the exploration of general large datasets.
Two novel solutions are introduced. The first is an adaptive out-of-core technique
exploiting the GPU rasterization pipeline and hardware occlusion queries in order
to create coherent batches of work for localized shader-based ray tracing kernels,
opening the door to out-of-core ray tracing with shadowing and global illumination.
The second is an aggressive compression method that exploits redundancy in large
models to compress data so that it fits, in fully renderable format, in GPU memory.
The method is targeted to voxelized representations of 3D scenes, which are widely
used to accelerate visibility queries on the GPU. Compression is achieved by merging
subtrees that are identical through a similarity transform and by exploiting the skewed
distribution of references to shared nodes to store child pointers using a variable bitrate
encoding. The capability and performance of all methods are evaluated on many very massive real-world scenes from several domains, including cultural heritage, engineering, and gaming.
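The subtree-merging compression described above can be sketched as bottom-up deduplication: identical subtrees hash to the same canonical key and collapse into a single shared node. This sketch omits the similarity transforms and variable bit-rate child pointers the thesis also uses, and the function name is an illustrative assumption.

```python
# Sketch: compress a voxel octree by merging identical subtrees. A node
# is either a leaf value or a tuple of 8 children (None = empty octant).
def merge_subtrees(node, unique=None):
    """Return (node_id, unique), where `unique` maps canonical subtree
    keys to node ids; identical subtrees receive the same id."""
    if unique is None:
        unique = {}
    if isinstance(node, tuple):                 # internal node: recurse first
        child_ids = tuple(
            None if c is None else merge_subtrees(c, unique)[0] for c in node
        )
        key = ("node", child_ids)               # identity via child ids
    else:
        key = ("leaf", node)
    if key not in unique:
        unique[key] = len(unique)               # first occurrence: allocate id
    return unique[key], unique
```

Two structurally identical subtrees anywhere in the tree end up as one shared entry, which is exactly the redundancy that makes large voxelized scenes fit in GPU memory.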
Efficient raytracing of deforming point-sampled surfaces
We present efficient data structures and caching schemes to accelerate ray-surface intersections for deforming point-sampled surfaces. By exploiting spatial and temporal coherence of the deformation during the animation, we are able to improve rendering performance by a factor of two to three compared to existing techniques. Starting from a tight bounding sphere hierarchy for the undeformed object, we use a lazy updating scheme to adapt the hierarchy to the deformed surface in each animation step. In addition, we achieve a significant speedup for ray-surface intersections by caching per-ray intersection points. We also present a technique for rendering sharp edges and corners in point-sampled models by introducing a novel surface clipping algorithm. © The Eurographics Association and Blackwell Publishing 2005
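The lazy updating scheme described above can be sketched as deferred refitting: after a deformation, nodes are only marked dirty, and a node's bounding sphere is recomputed only when a ray actually visits it. The class and function names are illustrative assumptions (2D points for brevity), not the paper's data structures.

```python
# Sketch: lazy bounding-sphere refitting for a deforming point set.
class SphereNode:
    def __init__(self, indices, children=None):
        self.indices = indices        # point indices covered by this node
        self.children = children or []
        self.center, self.radius = None, None
        self.dirty = True             # stale w.r.t. current point positions

    def refit(self, points):
        """Recompute this node's bounding sphere from current positions."""
        pts = [points[i] for i in self.indices]
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        self.center = (cx, cy)
        self.radius = max(((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5
                          for p in pts)
        self.dirty = False

def intersect(node, points, ray_origin, hit_test):
    """Descend the hierarchy, refitting a node only when a ray visits it."""
    if node.dirty:
        node.refit(points)            # lazy update: deformation cost on demand
    if not hit_test(node.center, node.radius, ray_origin):
        return []
    if not node.children:
        return list(node.indices)     # candidate points for exact intersection
    out = []
    for c in node.children:
        out += intersect(c, points, ray_origin, hit_test)
    return out
```

Each animation step only sets `dirty` flags; subtrees that no ray touches in that frame are never refitted, which is where the claimed speedup over eager per-frame rebuilding comes from.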
Image synthesis based on a model of human vision
Modern computer graphics systems are able to construct renderings of such high quality that viewers are deceived into regarding the images as coming from a photographic source. Large amounts of computing resources are expended in this rendering process, using complex mathematical models of lighting and shading.
However, psychophysical experiments have revealed that viewers attend only to certain informative regions within a presented image. Furthermore, it has been shown that these visually important regions contain low-level visual feature differences that attract the viewer's attention.
This thesis will present a new approach to image synthesis that exploits these experimental findings by modulating the spatial quality of image regions by their visual importance. Efficiency gains are therefore reaped, without sacrificing much of the perceived quality of the image. Two tasks must be undertaken to achieve this goal. Firstly, the design of an appropriate region-based model of visual importance, and secondly, the modification of progressive rendering techniques to effect an importance-based rendering approach.
A rule-based fuzzy logic model is presented that computes, using spatial feature differences, the relative visual importance of regions in an image. This model improves upon previous work by incorporating threshold effects induced by global feature difference distributions and by using texture concentration measures.
A modified approach to progressive ray-tracing is also presented. This new approach uses the visual importance model to guide the progressive refinement of an image. In addition, this concept of visual importance has been incorporated into supersampling, texture mapping and computer animation techniques. Experimental results are presented, illustrating the efficiency gains reaped from using this method of progressive rendering.
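The importance-guided refinement idea above can be sketched as budget allocation: each refinement pass distributes a fixed ray budget over image regions in proportion to their computed visual importance. The region representation and the importance model itself are abstracted away; the function name is an illustrative assumption.

```python
# Sketch: distribute a per-pass ray budget across image regions in
# proportion to their visual importance weights.
def allocate_samples(importance, budget):
    """Split `budget` rays across regions proportionally to importance."""
    total = sum(importance.values())
    alloc = {r: int(budget * w / total) for r, w in importance.items()}
    # Hand any integer-rounding remainder to the most important region.
    alloc[max(importance, key=importance.get)] += budget - sum(alloc.values())
    return alloc
```

A visually important region (a face, say) thus converges to full quality in a few passes, while low-importance background regions stay coarse, trading unnoticed quality for rendering time.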
This visual importance-based rendering approach is expected to have applications in the entertainment industry, where image fidelity may be sacrificed for efficiency purposes as long as the overall visual impression of the scene is maintained. Different aspects of the approach should find many other applications in image compression, image retrieval, progressive data transmission and active robotic vision.