Analysis domain model for shared virtual environments
The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions with significant functional overlap, yet little interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges throughout the development process, starting with the architectural design of the underlying system. This paper has two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to better understand the whole rather than only its parts. The second is a reference domain model for discussing and describing solutions: the Analysis Domain Model.
Large Model Visualization: Techniques and Applications
The size of datasets in scientific computing is rapidly increasing. This increase is caused by the boost in processing power of recent years, which in turn has been invested in larger and more accurate models. A similar trend has enabled a significant improvement of medical scanners; in daily practice, modern scanners can generate more than 1000 slices at a resolution of 512x512. Even in computer-aided engineering, typical models easily contain several million polygons. Unfortunately, data complexity is growing faster than the rendering performance of modern computer systems. This is due not only to the more slowly growing performance of the graphics subsystems, but in particular to the significantly more slowly growing memory bandwidth for transferring geometry and image data from main memory to the graphics accelerator.
Large model visualization addresses this growing divide between data complexity and rendering performance. Most methods focus on reducing the geometric or pixel complexity, which in turn reduces the memory bandwidth requirements.
In this dissertation, we discuss new approaches from three different research areas. All approaches target a reduction of the processing complexity to achieve interactive visualization of large datasets. In the second part, we introduce applications of the presented approaches. Specifically, we introduce the new VIVENDI system for interactive virtual endoscopy, as well as other applications from mechanical engineering, scientific computing, and architecture.
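As a rough illustration of the bandwidth argument above, the following back-of-envelope sketch computes the raw size of the two example datasets mentioned in the abstract (a 1000-slice 512x512 volume and a multi-million-polygon CAE mesh) and how long a full per-frame transfer would take over an assumed 8 GB/s host-to-GPU bus. The voxel depth, triangle count, and bus rate are illustrative assumptions, not figures from the dissertation.

// Back-of-envelope estimate of the data volumes mentioned above and of the
// time needed to push them over a host-to-GPU bus per frame. The bandwidth
// figure is an assumed example value, not one taken from the dissertation.
#include <cstdio>

int main() {
    // Medical volume: 1000 slices of 512x512 voxels, 2 bytes per voxel (assumed 16-bit).
    const double volume_bytes = 1000.0 * 512 * 512 * 2;

    // CAE surface model: 5 million triangles (example figure), ~36 bytes per
    // triangle for three uncompressed 32-bit float vertices.
    const double mesh_bytes = 5e6 * 3 * 3 * sizeof(float);

    // Assumed host-to-GPU transfer rate of 8 GB/s (roughly PCIe 2.0 x16).
    const double bandwidth = 8e9;

    std::printf("CT volume: %.2f GB, %.1f ms per full transfer\n",
                volume_bytes / 1e9, 1e3 * volume_bytes / bandwidth);
    std::printf("CAE mesh : %.2f GB, %.1f ms per full transfer\n",
                mesh_bytes / 1e9, 1e3 * mesh_bytes / bandwidth);
    return 0;
}

Under these assumptions the volume alone takes roughly 65 ms to transfer, already below interactive frame rates if it were re-sent every frame, which is why the reduction of geometric, pixel, and bandwidth complexity matters.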
Doctor of Philosophy dissertation
Dataflow pipeline models are widely used in visualization systems. Despite recent advancements in parallel architectures, most systems still support only a single CPU or a small collection of CPUs, such as an SMP workstation. Even systems that are specifically tuned towards parallel visualization provide execution models that support only data-parallelism, ignoring task-parallelism and pipeline-parallelism. With the recent popularization of machines equipped with multicore CPUs and multiple GPUs, these visualization systems are undoubtedly falling further behind in reaching maximum efficiency. On the other hand, several libraries exist that can schedule program execution on multiple CPUs and/or multiple GPUs. However, because executing a task graph differs from executing a pipeline, and because their APIs are considerably low-level, integrating these run-time libraries into current visualization systems remains a challenge. Thus, there is a need for a redesigned dataflow architecture that fully supports and exploits the power of highly parallel machines in large-scale visualization. The new design must be able to schedule execution on heterogeneous platforms while supporting arbitrarily large datasets through the use of streaming data structures.
The primary goal of this dissertation is to develop a parallel dataflow architecture for streaming large-scale visualizations. The framework supports platforms ranging from multicore processors to clusters consisting of thousands of CPUs and GPUs. We achieve this by introducing the notions of Virtual Processing Elements and Task-Oriented Modules, along with a highly customizable scheduler that dynamically controls the assignment of tasks to elements. This provides an intuitive way to maintain multiple CPU/GPU kernels while still providing coherency and synchronization across module executions. We have implemented these techniques in HyperFlow, which consists of an API with all the basic dataflow constructs described in the dissertation and a distributed run-time library that can deploy those pipelines on multicore, multi-GPU, and cluster-based platforms.
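The interplay of processing elements, task-oriented modules, and a scheduler can be pictured with the minimal sketch below, in which the "processing elements" are plain CPU worker threads pulling tasks from a shared queue. The class and member names are illustrative assumptions only and do not reproduce the actual HyperFlow API, which also covers GPUs and clusters.

// Minimal sketch of a scheduler that hands dataflow task executions to a pool
// of worker "processing elements" (here: CPU threads). Illustrative only;
// not the HyperFlow API.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class Scheduler {
public:
    explicit Scheduler(unsigned workers) {
        for (unsigned i = 0; i < workers; ++i)
            elements_.emplace_back([this] { run(); });
    }
    ~Scheduler() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
        for (auto& t : elements_) t.join();
    }
    // Submit one module execution (e.g. a filter applied to one data chunk).
    void submit(std::function<void()> task) {
        { std::lock_guard<std::mutex> lk(m_); tasks_.push(std::move(task)); }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !tasks_.empty(); });
                if (done_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            // Data-, task-, and pipeline-parallelism all reduce here to
            // independent task executions assigned to idle elements.
            task();
        }
    }
    std::vector<std::thread> elements_;
    std::queue<std::function<void()>> tasks_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};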
Parallel interactive ray tracing and exploiting spatial coherence
Master's dissertation in Informatics Engineering
Ray tracing is a rendering technique that allows simulating a wide range of light transport phenomena, resulting in highly realistic computer-generated images. Ray tracing is, however, computationally very demanding compared to other techniques such as rasterization, which achieves shorter rendering times by greatly simplifying the physics of light propagation, at the cost of less realistic images.
The complexity of the ray tracing algorithm makes it unusable for interactive applications on machines without dedicated hardware such as GPUs. The highly task-independent nature of the algorithm, however, offers great potential for parallel processing, increasing the available computational power by using additional resources. This thesis studies different approaches to, and enhancements of, workload decomposition and load balancing on a distributed shared memory cluster in order to achieve interactive frame rates.
This thesis also studies approaches that enhance the ray tracing algorithm itself by reducing the computational demand without decreasing the quality of the results. To achieve this goal, optimizations that depend on the rays' processing order were implemented. In particular, an alternative to the traditional scan-line traversal order of the image plane is studied, based on space-filling curves.
Results have shown linear speed-ups of the ray tracer used on a distributed shared memory cluster. They have also shown that spatial coherence can be used to increase the performance of the ray tracing algorithm, and that the improvement depends on the traversal order of the image plane.
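A minimal sketch of the space-filling-curve idea is shown below: pixels of a power-of-two tile are visited in Z-order (Morton) rather than scan-line order, so that consecutively traced rays stay spatially close and tend to reuse the same acceleration-structure nodes. The Morton curve is used here only as a common example of a space-filling curve; the thesis may use a different curve or tile size.

// Traverse an 8x8 image tile along a Z-order (Morton) curve instead of
// scan lines. Illustrative sketch of space-filling-curve traversal.
#include <cstdint>
#include <cstdio>

// Extract the even-positioned bits of a 32-bit Morton code (inverse of bit
// interleaving); the odd-positioned bits are obtained by shifting right first.
static uint32_t compact1by1(uint32_t v) {
    v &= 0x55555555;
    v = (v | (v >> 1)) & 0x33333333;
    v = (v | (v >> 2)) & 0x0f0f0f0f;
    v = (v | (v >> 4)) & 0x00ff00ff;
    v = (v | (v >> 8)) & 0x0000ffff;
    return v;
}

int main() {
    const uint32_t width = 8, height = 8;  // power-of-two tile for simplicity
    for (uint32_t code = 0; code < width * height; ++code) {
        uint32_t x = compact1by1(code);       // even bits encode x
        uint32_t y = compact1by1(code >> 1);  // odd bits encode y
        std::printf("trace ray for pixel (%u, %u)\n", x, y);
    }
    return 0;
}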
Rectangular Selection of Components in Large 3D Models on the Web
We introduce a novel method for rectangular selection of components in large 3D models on the web. Our technique provides an easy-to-use solution developed for renderers with only partial fragment shader support, such as embedded systems running WebGL. The method was implemented using the Unity 3D game engine within the 3D Repo open source framework running in a web browser. A case study with industrial 3D models of varying complexity and object count shows that the solution performs within reasonable rendering expectations even on underpowered devices without a dedicated graphics card.
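One common way to realize rectangular component selection, shown in the hedged sketch below, is to render each component's unique ID into an offscreen buffer, read the buffer back, and collect the IDs that fall inside the selection rectangle. This is a generic illustration and not necessarily the exact fragment-shader technique of the paper; the background ID of 0 is an assumed convention.

// Collect the component IDs that appear inside a screen-space rectangle of a
// previously rendered ID buffer (one uint32 component ID per pixel).
#include <algorithm>
#include <cstdint>
#include <set>
#include <vector>

std::set<uint32_t> selectInRect(const std::vector<uint32_t>& idBuffer,
                                int bufWidth, int bufHeight,
                                int x0, int y0, int x1, int y1) {
    std::set<uint32_t> selected;
    for (int y = std::max(0, y0); y <= std::min(bufHeight - 1, y1); ++y)
        for (int x = std::max(0, x0); x <= std::min(bufWidth - 1, x1); ++x) {
            uint32_t id = idBuffer[y * bufWidth + x];
            if (id != 0)              // 0 = background, assumed convention
                selected.insert(id);  // component visible inside the rectangle
        }
    return selected;
}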
Real-time simulation and visualization of deformations on heightfields
Ankara: The Department of Computer Engineering and the Institute of Engineering and Science of Bilkent University, 2010. Thesis (Master's), Bilkent University, 2010. Includes bibliographical references, leaves 117-121.
The applications of computer graphics raise new expectations, such as realistic rendering, real-time dynamic scenes, and physically correct simulations. The aim of this thesis is to investigate these problems on the heightfield structure, an extended 2D model that can be processed efficiently by data-parallel architectures. This thesis presents methods for the simulation of deformations on the heightfield as caused by triangular objects, the physical simulation of objects interacting with the heightfield, and advanced visualization of deformations. The heightfield is stored in two different resolutions to support fast rendering and precise physical simulations as required. The methods are implemented as part of a large-scale heightfield management system, which applies additional level-of-detail and culling optimizations to the proposed methods and data structures. The solutions provide real-time interaction, and recent graphics hardware (GPU) capabilities are utilized to achieve real-time results. All the methods described in this thesis are demonstrated by a sample application, and performance characteristics and results are presented to support the conclusions.
Yalçın, M. Adil. M.S.
Visibility-Based Optimizations for Image Synthesis
Katedra počítačové grafiky a interakce (Department of Computer Graphics and Interaction)