High quality rendering of protein dynamics in space filling mode
Producing high-quality depictions of molecular structures has been an area of academic interest for years, with visualisation tools such as UCSF Chimera, YASARA and PyMOL providing a huge number of rendering modes and lighting effects. However, no visualisation program supports per-pixel lighting effects with shadows whilst rendering a molecular trajectory in space-filling mode. In this paper, a new approach to rendering high-quality visualisations of molecular trajectories is presented. To enhance depth perception, ambient occlusion is included within the render. Shadows are also included to help the user perceive the relative motions of parts of the protein as they move along their trajectories. Our approach requires a regular grid to be constructed every time the molecular structure deforms, allowing per-pixel lighting effects and ambient occlusion to be rendered every frame at interactive refresh rates. Two different regular grids are investigated: a fixed grid and a memory-efficient compact grid. The algorithms used allow trajectories of proteins comprising up to 300,000 atoms to be rendered at ninety frames per second on a desktop computer, using the GPU for general-purpose computations. Regular grid construction was found to take up only a small proportion of the total time to render a frame. Despite being slower to construct, the memory-efficient compact grid outperformed the theoretically faster fixed grid when the protein being rendered is large, owing to its more efficient memory access patterns. The techniques described could be implemented in other molecular rendering software.
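To make the grid idea concrete, here is a minimal sketch (our own, not the paper's code) of binning atom positions into a uniform grid with a counting sort, the layout usually behind a memory-efficient compact grid; the cell size, dimensions and names are illustrative assumptions:

```python
# Minimal "compact grid" construction for atom positions: a counting
# sort into a uniform grid, stored as per-cell offsets plus a
# permutation of atom indices grouped by cell.
import numpy as np

def build_compact_grid(positions, cell_size, origin, dims):
    # Cell coordinate of each atom, clamped to the grid bounds.
    cells = np.floor((positions - origin) / cell_size).astype(np.int64)
    cells = np.clip(cells, 0, np.asarray(dims) - 1)
    # Flatten (x, y, z) cell coordinates to a single cell index.
    flat = (cells[:, 0] * dims[1] + cells[:, 1]) * dims[2] + cells[:, 2]
    # Counting sort: per-cell counts -> exclusive prefix sum -> offsets.
    counts = np.bincount(flat, minlength=dims[0] * dims[1] * dims[2])
    cell_start = np.concatenate(([0], np.cumsum(counts)))
    order = np.argsort(flat, kind="stable")  # atom ids, grouped by cell
    return cell_start, order

# Usage: 1,000 random "atoms" in a unit cube, 16^3 cells.
pts = np.random.rand(1000, 3)
starts, order = build_compact_grid(pts, 1.0 / 16, np.zeros(3), (16, 16, 16))
```

Because the structure is rebuilt from scratch each time the molecule deforms, construction cost (here, one counting sort) is what must stay small relative to the frame budget.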
Parallel graphics and visualization
Computer Graphics and Visualization are two fields that continue to evolve at a fast pace, always addressing new application areas and achieving better and faster results. The volume of data processed by such applications keeps getting larger, and the illumination and light transport models used to generate pictorial representations of this data keep getting more sophisticated. Richer illumination and light transport models allow the generation of richer images that convey more information about the phenomena or virtual worlds represented by the data and are more realistic and engaging to the user. The combination of large data sets, rich illumination models and large, sophisticated displays results in huge workloads that cannot be processed sequentially while maintaining acceptable response times. Parallel processing is thus an obvious approach to such problems, creating the field of Parallel Graphics and Visualization.
The Eurographics Symposium on Parallel Graphics and Visualization (EGPGV) gathers researchers from all over the world to foster research on theoretical and applied issues critical to parallel and distributed computing and its application to all aspects of computer graphics, virtual reality, and scientific and engineering visualization. This special issue is a collection of five papers selected from those presented at the 7th EGPGV, which took place in Lugano, Switzerland, in May 2007.
The research presented at this symposium has evolved over the years, often reflecting the evolution of the underlying systems' architectures. While papers presented at the first few events focused on Single Instruction Multiple Data and Massively Parallel Multi-Processing systems, in recent years the focus has been mainly on Symmetric Multiprocessing machines and PC clusters, often also including the utilization of multiple Graphics Processing Units. The 2007 event witnessed the first papers addressing multicore processors, thus following the general trend in computer systems' architecture.
The paper by Wald, Ize and Parker discusses acceleration structures for interactive ray tracing of dynamic scenes. They propose the utilization of Bounding Volume Hierarchies (BVHs), which for deformable scenes can be rapidly updated by adjusting the bounding primitives while maintaining the hierarchy. To avoid a significant performance penalty due to a large mismatch between the scene geometry and the tree topology, the BVH is rebuilt asynchronously and concurrently with rendering. According to the authors, interactive ray tracers are expected to run on highly parallel multicore architectures in the near future; thus, all reported results were obtained on an 8-processor, dual-core system, totalling 16 cores.
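To make the refitting idea concrete, here is a small illustrative sketch (our own, with an assumed node layout, not the authors' code) that updates bounding boxes bottom-up while keeping the tree topology fixed:

```python
# BVH "refit" for deformable scenes: recompute AABBs bottom-up,
# leaving the hierarchy itself untouched.
import numpy as np

def refit(node, prims):
    """node: dict with either 'prims' (leaf: list of primitive ids) or
    'children' (inner node). prims: (N, 2, 3) per-primitive (lo, hi)."""
    if 'prims' in node:                      # leaf: union of its primitives
        boxes = prims[node['prims']]
        node['lo'] = boxes[:, 0].min(axis=0)
        node['hi'] = boxes[:, 1].max(axis=0)
    else:                                    # inner: union of refitted children
        for c in node['children']:
            refit(c, prims)
        node['lo'] = np.min([c['lo'] for c in node['children']], axis=0)
        node['hi'] = np.max([c['hi'] for c in node['children']], axis=0)

# Usage: two primitives' AABBs deform each frame; topology is reused.
prims = np.array([[[0, 0, 0], [1, 1, 1]], [[2, 0, 0], [3, 1, 1]]], float)
root = {'children': [{'prims': [0]}, {'prims': [1]}]}
refit(root, prims)   # root['lo'] == [0,0,0], root['hi'] == [3,1,1]
```

A refit is much cheaper than a rebuild but degrades as geometry drifts from the original topology, which is exactly why the paper rebuilds asynchronously in the background.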
Gribble, Brownlee and Parker propose two algorithms targeting highly parallel multicore architectures that enable interactive navigation and exploration of large particle data sets with global illumination effects. Rendering samples are lazily evaluated using Monte Carlo path tracing, while visualization occurs asynchronously through Dynamic Luminance Textures that cache the renderer's results. The combined use of particle-based simulation methods and global illumination enables the effective communication of subtle changes in the three-dimensional structure of the data. All results were likewise obtained on a 16-core architecture.
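A rough sketch of how such lazy, progressively refined caching can work (the class name and interface are our illustrative assumptions, not the paper's API):

```python
# Lazily evaluated, progressively refined samples cached per texel:
# each texel keeps a running Monte Carlo mean that improves as more
# path-traced samples arrive, while display reads whatever is cached.
import numpy as np

class LuminanceTexture:
    def __init__(self, shape):
        self.sum = np.zeros(shape)      # accumulated radiance
        self.n = np.zeros(shape, int)   # samples per texel

    def add_sample(self, i, j, radiance):
        self.sum[i, j] += radiance
        self.n[i, j] += 1

    def lookup(self, i, j, fallback=0.0):
        # Lazy: untouched texels return a fallback instead of blocking.
        return self.sum[i, j] / self.n[i, j] if self.n[i, j] else fallback

# A renderer thread would call add_sample() asynchronously while the
# viewer calls lookup() every frame; here we fake one noisy sample.
tex = LuminanceTexture((64, 64))
tex.add_sample(3, 5, np.random.rand())
print(tex.lookup(3, 5), tex.lookup(0, 0))
```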
The paper by Thomaszewski, Pabst and Blochinger analyzes parallel techniques for physically based simulation, in particular the time integration and collision handling phases. The former is addressed using the conjugate gradient algorithm and static problem decomposition, while the latter exhibits a dynamic structure, thus requiring fully dynamic task decomposition. Their results were obtained using three different quad-core systems.
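For reference, a minimal serial conjugate gradient solver of the kind used in implicit time integration looks as follows (a textbook sketch; the paper's parallel decomposition is omitted):

```python
# Conjugate gradient for A x = b with A symmetric positive definite,
# the linear solve at the heart of implicit time integration.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Usage on a small SPD system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # ~ [0.0909, 0.6364]
```

CG parallelizes well under a static decomposition because each iteration is dominated by a sparse matrix-vector product and a few reductions, which is why it contrasts with the dynamically structured collision phase.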
Hong and Shen derive an efficient parallel algorithm for symmetry computation in volume data represented by regular grids. Sequential detection of symmetric features in volumetric data sets has a prohibitive cost, thus requiring efficient parallel algorithms and powerful parallel systems. The authors obtained the reported results on a 64-node PC cluster with InfiniBand, each node being a dual-processor, single-core Opteron.
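As a toy illustration of what symmetry computation on a regular grid involves (a brute-force, axis-aligned sketch under our own assumptions, not Hong and Shen's algorithm):

```python
# Score reflective symmetry of a regular-grid volume by comparing the
# volume with its flipped copy about each central axis-aligned plane.
import numpy as np

def reflection_scores(vol):
    """Normalized mismatch per axis; 0 means perfect symmetry about
    the volume's central plane orthogonal to that axis."""
    scores = {}
    for axis in range(3):
        diff = vol - np.flip(vol, axis=axis)
        scores[axis] = np.abs(diff).mean() / (np.abs(vol).mean() + 1e-12)
    return scores

# Usage: a blob symmetric in x and y but not in z.
z, y, x = np.mgrid[0:32, 0:32, 0:32]
vol = np.exp(-((x - 15.5)**2 + (y - 15.5)**2 + (z - 10.0)**2) / 50.0)
print(reflection_scores(vol))   # axes 1 and 2 near 0, axis 0 larger
```

Even this naive version touches every voxel once per candidate plane, hinting at why general symmetry detection over many candidate planes becomes prohibitive without parallelism.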
Bettio, Gobbetti, Marton and Pintore describe a scalable multiresolution rendering system targeting massive triangle meshes and driving light field displays of different sizes. The larger light field display (1.6 × 0.9 m²) is based on a special arrangement of projectors and a holographic screen. It allows multiple freely moving viewers to see the scene from their respective points of view and enjoy continuous horizontal parallax without any specialized viewing devices. To drive this 35-Mbeam display they use a scalable parallel renderer, resorting to out-of-core and level-of-detail techniques, running on a 15-node PC cluster.
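A minimal sketch of the level-of-detail selection such renderers typically perform (the error model and all names are illustrative assumptions, not the system's actual criterion):

```python
# Pick the coarsest multiresolution level whose projected geometric
# error stays below a screen-space tolerance.
def select_lod(level_errors, distance, fov_scale, tol_pixels=1.0):
    """level_errors: per-level geometric error, coarse -> fine.
    fov_scale maps world-space error at a given distance to pixels,
    e.g. viewport_height / (2 * tan(fov / 2))."""
    for level, err in enumerate(level_errors):
        projected = err * fov_scale / max(distance, 1e-6)
        if projected <= tol_pixels:
            return level              # coarsest acceptable level
    return len(level_errors) - 1      # fall back to the finest level

# Usage: errors halve per level; nearer objects demand finer levels.
errors = [0.8, 0.4, 0.2, 0.1, 0.05]
print(select_lod(errors, distance=10.0, fov_scale=800.0))    # finest
print(select_lod(errors, distance=100.0, fov_scale=800.0))   # coarser
```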
The role of graphics super-workstations in a supercomputing environment
A new class of very powerful workstations, which integrates near-supercomputer computational performance with very powerful, high-quality graphics capability, has recently become available. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include off-loading the supercomputer (by serving as stand-alone processors, by post-processing the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding and communication of results), and real-time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).
Highly Parallel Geometric Characterization and Visualization of Volumetric Data Sets
Volumetric 3D data sets are being generated in many different application areas. Some examples are CAT scans and MRI data, 3D models of protein molecules represented by implicit surfaces, multi-dimensional numeric simulations of plasma turbulence, and stacks of confocal microscopy images of cells. The size of these data sets has been increasing, requiring the speed of analysis and visualization techniques to also increase to keep up.
Recent advances in processor technology have shifted from increasing clock speed to increasing parallelism, resulting in multi-core CPUs and many-core GPUs. To take advantage of these new parallel architectures, algorithms must be explicitly written to exploit parallelism. In this thesis we describe several algorithms and techniques for volumetric data set analysis and visualization that are amenable to these modern parallel architectures.
We first discuss modeling volumetric data with Gaussian Radial Basis Functions (RBFs). RBF representation of a data set has several advantages, including lossy compression, analytic differentiability, and analytic application of Gaussian blur. We also describe a parallel volume rendering algorithm that can create images of the data directly from the RBF representation.
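To illustrate why the RBF representation is convenient, here is a small sketch (ours, with illustrative parameters) that evaluates a sum of Gaussian RBFs and its analytic gradient, the quantity needed for shading and differential analysis:

```python
# A Gaussian RBF model of volume data: the field is a sum of
# Gaussians, so both the value and the gradient are analytic.
import numpy as np

def rbf_field(p, centers, weights, sigma):
    """Evaluate f(p) = sum_i w_i * exp(-|p - c_i|^2 / (2 sigma^2))
    and its analytic gradient at a single 3D point p."""
    d = p - centers                          # (N, 3) offsets
    r2 = (d * d).sum(axis=1)
    phi = weights * np.exp(-r2 / (2 * sigma**2))
    value = phi.sum()
    grad = (-(phi / sigma**2)[:, None] * d).sum(axis=0)
    return value, grad

# Usage: two Gaussian blobs; the gradient points toward rising density.
centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
weights = np.array([1.0, 0.5])
val, grad = rbf_field(np.array([0.25, 0.0, 0.0]), centers, weights, 0.5)
print(val, grad)
```

The same closed form also explains the analytic Gaussian blur mentioned above: convolving a Gaussian RBF with a Gaussian kernel yields another Gaussian, so blurring reduces to adjusting widths and weights.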
Next we discuss a parallel, stochastic algorithm for measuring the surface area of volumetric representations of molecules. The algorithm is suitable for implementation on a GPU and is also progressive, allowing it to return a rough answer almost immediately and refine the answer over time to the desired level of accuracy.
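The flavour of such a progressive, stochastic estimator can be sketched as follows (our own toy version on a signed-distance sphere, not the thesis's GPU algorithm):

```python
# Progressive Monte Carlo surface-area estimate: the area of {f = 0}
# for a signed-distance field f is approximated by the measure of the
# thin shell |f| < eps divided by 2*eps, sampled in batches so the
# estimate refines over time.
import numpy as np

def sphere_sdf(p, radius=1.0):
    return np.linalg.norm(p, axis=1) - radius

def progressive_area(sdf, box_lo, box_hi, eps=0.01, batches=20, n=100_000):
    box_vol = np.prod(box_hi - box_lo)
    hits = total = 0
    for _ in range(batches):
        p = np.random.uniform(box_lo, box_hi, size=(n, 3))
        hits += np.count_nonzero(np.abs(sdf(p)) < eps)
        total += n
        yield (hits / total) * box_vol / (2 * eps)   # refining estimate

# Usage: unit sphere, true area 4*pi ~ 12.566.
lo, hi = np.full(3, -1.2), np.full(3, 1.2)
for est in progressive_area(sphere_sdf, lo, hi):
    print(est)   # each batch tightens the estimate
```

Each batch is embarrassingly parallel and independent of the others, which is what makes this style of estimator both GPU-friendly and naturally progressive.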
After this we discuss the concept of Confluent Visualization, which allows the visualization of the interaction between a pair of volumetric data sets. The interaction is visualized through volume rendering, which is well suited to implementation on parallel architectures.
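A minimal sketch of the idea (our own assumptions: the interaction field is taken as the pointwise product of the two volumes, and rendering is simple front-to-back compositing along one axis):

```python
# Derive an interaction field from two co-registered volumes and
# volume-render it with emission-absorption compositing.
import numpy as np

def composite(vol, absorb=0.05):
    """Front-to-back emission-absorption compositing along axis 0."""
    color = np.zeros(vol.shape[1:])
    trans = np.ones(vol.shape[1:])           # remaining transparency
    for sl in vol:                           # march through slices
        alpha = 1.0 - np.exp(-absorb * sl)
        color += trans * alpha * sl          # emission weighted by sl
        trans *= 1.0 - alpha
    return color

# Two overlapping Gaussian blobs; their product highlights interaction.
z, y, x = np.mgrid[0:32, 0:32, 0:32].astype(float)
a = np.exp(-((x - 12)**2 + (y - 16)**2 + (z - 16)**2) / 40)
b = np.exp(-((x - 20)**2 + (y - 16)**2 + (z - 16)**2) / 40)
img = composite(a * b)                       # 32x32 image of the overlap
print(img.shape, img.max())
```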
Finally we discuss a parallel, stochastic algorithm for classifying stem cells as having been grown on a surface that induces differentiation or on a surface that does not induce differentiation. The algorithm takes as input 3D volumetric models of the cells generated from confocal microscopy. This algorithm builds on our algorithm for surface area measurement and, like that algorithm, this algorithm is also suitable for implementation on a GPU and is progressive
Lattice-Boltzmann simulations of cerebral blood flow
Computational haemodynamics plays a central role in the understanding of blood behaviour in the cerebral vasculature, increasing our knowledge of the onset of vascular diseases and their progression, improving diagnosis and ultimately providing better patient prognosis. Computer simulations hold the potential of accurately characterising the motion of blood and its interaction with the vessel wall, providing the capability to assess surgical treatments with no danger to the patient. These aspects contribute considerably to a better understanding of blood circulation processes as well as to augmented pre-treatment planning. Existing software environments for treatment planning consist of several stages, each requiring significant user interaction and processing time, significantly limiting their use in clinical scenarios.
The aim of this PhD is to provide clinicians and researchers with a tool to aid in the understanding of human cerebral haemodynamics. This tool employs a high-performance fluid solver based on the lattice-Boltzmann method (coined HemeLB), high-performance distributed computing and grid computing, and various advanced software applications useful for efficiently setting up and running patient-specific simulations. A graphical tool is used to segment the vasculature from patient-specific CT or MR data and to configure boundary conditions with ease, creating models of the vasculature in real time. Blood flow visualisation is done in real time using in situ rendering techniques implemented within the parallel fluid solver and aided by steering capabilities; these programming strategies allow the clinician to interactively display the simulation results on a local workstation. A separate software application is used to numerically compare simulation results carried out at different spatial resolutions, providing a strategy to approach numerical validation. The developed software and supporting computational infrastructure were used to study various patient-specific intracranial aneurysms with the collaborating interventionalists at the National Hospital for Neurology and Neurosurgery (London), using three-dimensional rotational angiography data to define the patient-specific vasculature. Blood flow motion was depicted in detail by the visualisation capabilities, clearly showing vortex fluid flow features and the stress distribution at the inner surface of the aneurysms and their surrounding vasculature. These investigations permitted the clinicians to rapidly assess the risk associated with the growth and rupture of each aneurysm. The ultimate goal of this work is to aid clinical practice with an efficient, easy-to-use toolkit for real-time decision support.
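For readers unfamiliar with the method, a minimal single-relaxation-time D2Q9 lattice-Boltzmann step looks as follows (a self-contained textbook sketch; HemeLB's 3D geometry handling, boundary conditions and parallelization are omitted):

```python
# Minimal BGK D2Q9 lattice-Boltzmann step on a periodic 2D grid.
import numpy as np

# D2Q9 lattice velocities and weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    # f_eq_q = w_q * rho * (1 + 3 c.u + 4.5 (c.u)^2 - 1.5 u^2)
    cu = np.einsum('qd,xyd->qxy', c, u)
    usq = (u * u).sum(axis=-1)
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.6):
    rho = f.sum(axis=0)                                   # density
    u = np.einsum('qxy,qd->xyd', f, c) / rho[..., None]   # velocity
    f += -(f - equilibrium(rho, u)) / tau                 # BGK collision
    for q in range(9):                                    # streaming
        f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1))
    return f, rho, u

# Usage: 64x64 fluid at rest with a small density bump.
f = np.ones((9, 64, 64)) * w[:, None, None]
f[:, 32, 32] *= 1.1
for _ in range(100):
    f, rho, u = lbm_step(f)
print(rho.mean(), np.abs(u).max())   # mass is conserved
```

The collide-and-stream structure touches only nearest-neighbour lattice sites, which is what makes the method so amenable to the distributed parallelization the thesis relies on.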
Doctor of Philosophy dissertation.
Ray tracing is an efficient rendering algorithm for scientific visualization: it integrates with common visualization tools, scales to increasingly large geometry counts, and allows accurate, physically based visualization and analysis, enabling enhanced rendering and new visualization techniques. Interactivity is of great importance for data exploration and analysis in order to gain insight into large-scale data. Increasingly large data sizes are pushing the limits of the brute-force rasterization algorithms present in the most widely used visualization software. Interactive ray tracing presents an alternative rendering solution which scales well on multicore shared-memory machines and multinode distributed systems while handling increasing geometry counts through logarithmic acceleration-structure traversals. Ray tracing within existing tools also provides enhanced rendering options over current implementations, giving users additional insight from better depth cues while also enabling publication-quality rendering and new models of visualization, such as replicating photographic visualization techniques.
Hierarchical N-Body problem on graphics processor unit
Galactic simulation is an important cosmological computation and represents a classical N-body problem suitable for implementation on vector processors. The Barnes-Hut algorithm is a hierarchical N-body method used to simulate such galactic evolution systems.
Stream processing architectures expose the data locality and concurrency available in multimedia applications. On the other hand, there are numerous compute-intensive scientific and engineering applications that can potentially benefit from such computational and communication models. These applications are traditionally implemented on vector processors.
Stream-architecture-based graphics processing units (GPUs) present a novel computational alternative for efficiently implementing such high-performance applications. Rendering on a stream architecture sustains high performance, while user-programmable modules allow complex algorithms to be implemented efficiently. GPUs have evolved over the years from fixed-function pipelines to user-programmable processors.
In this thesis, we focus on the implementation of the Barnes-Hut algorithm on typical current-generation programmable GPUs. We exploit the computation and communication requirements present in the Barnes-Hut algorithm to expose their suitability for user-programmable GPUs. Our implementation of the Barnes-Hut algorithm is formulated as a fragment shader targeting the selected GPU. We discuss implementation details, design issues, results, and challenges encountered in programming the fragment shader.
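To convey the structure of the algorithm, here is a compact CPU-side Barnes-Hut sketch in 2D (a quadtree with the standard opening-angle test; purely illustrative, unrelated to the thesis's fragment-shader formulation):

```python
# Barnes-Hut in 2D: build a quadtree of bodies, then approximate the
# force from distant subtrees by their centre of mass.
import numpy as np

THETA, G, EPS = 0.5, 1.0, 1e-2   # opening angle, gravity, softening

class Node:
    def __init__(self, center, size):
        self.center, self.size = np.asarray(center, float), size
        self.mass, self.com = 0.0, np.zeros(2)
        self.children, self.body = None, None

    def _child(self, pos):
        i = (pos[0] >= self.center[0]) * 2 + (pos[1] >= self.center[1])
        return self.children[i]

    def insert(self, pos, m):
        if self.mass == 0 and self.body is None:      # empty leaf
            self.body, self.mass, self.com = pos, m, pos.copy()
            return
        if self.children is None:                     # split the leaf
            self.children = [Node(self.center + self.size/4 * np.array(o),
                                  self.size/2)
                             for o in [(-1,-1), (-1,1), (1,-1), (1,1)]]
            old, self.body = self.body, None
            self._child(old).insert(old, self.mass)
        self._child(pos).insert(pos, m)
        self.com = (self.com * self.mass + pos * m) / (self.mass + m)
        self.mass += m

def force(node, pos):
    if node is None or node.mass == 0:
        return np.zeros(2)
    d = node.com - pos
    r = np.sqrt(d @ d) + EPS
    if node.children is None or node.size / r < THETA:
        if node.body is not None and np.allclose(node.body, pos):
            return np.zeros(2)                # skip self-interaction
        return G * node.mass * d / r**3       # treat subtree as one mass
    return sum(force(c, pos) for c in node.children)

# Usage: 200 random bodies in a unit box.
pts = np.random.rand(200, 2)
root = Node([0.5, 0.5], 1.0)
for p in pts:
    root.insert(p, 1.0)
print(force(root, pts[0]))
```

The opening-angle test is what reduces the per-body cost from O(N) to roughly O(log N); the irregular tree traversal it implies is precisely the part that is awkward to express in a fragment shader.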