Parallel graphics and visualization
Computer Graphics and Visualization are two fields that
continue to evolve at a fast pace, always addressing new
application areas and achieving better and faster results.
The volume of data processed by such applications keeps
getting larger and the illumination and light transport
models used to generate pictorial representations of this
data keep getting more sophisticated. Richer illumination
and light transport models allow the generation of richer
images that convey more information about the phenomena
or virtual worlds represented by the data and are
more realistic and engaging to the user. The combination
of large data sets, rich illumination models and large,
sophisticated displays results in huge workloads that
cannot be processed sequentially while still maintaining
acceptable response times. Parallel processing is thus an
obvious approach to such problems, creating the field of
Parallel Graphics and Visualization.
The Eurographics Symposium on Parallel Graphics and
Visualization (EGPGV) gathers together researchers from
all over the world to foster research focused on theoretical
and applied issues critical to parallel and distributed
computing and its application to all aspects of computer
graphics, virtual reality, scientific and engineering visualization.
This special issue is a collection of five papers
selected from those presented at the 7th EGPGV, which
took place in Lugano, Switzerland, in May 2007.
The research presented in this symposium has evolved
over the years, often reflecting the evolution of the
underlying systems’ architectures. While papers presented
in the first few events focused on Single Instruction
Multiple Data and Massively Parallel Multi-Processing
systems, in recent years the focus has been mainly on Symmetric
Multiprocessing machines and PC clusters, often also
including the utilization of multiple Graphics Processing
Units. The 2007 event witnessed the first papers addressing
multicore processors, thus following the general trend of
computer systems’ architecture.
The paper by Wald, Ize and Parker discusses acceleration
structures for interactive ray tracing of dynamic
scenes. They propose the utilization of Bounding Volume
Hierarchies (BVH), which for deformable scenes can be
rapidly updated by adjusting the bounding primitives while
maintaining the hierarchy. To avoid a significant performance
penalty due to a large mismatch between the scene
geometry and the tree topology, the BVH is rebuilt
asynchronously and concurrently with rendering. According
to the authors, in the near future interactive ray tracers
are expected to run on highly parallel multicore architectures.
Thus, all results reported were obtained on an
8-processor, dual-core system, totalling 16 cores.
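To give a flavour of the refitting idea, the sketch below recomputes axis-aligned bounding boxes bottom-up while leaving the tree topology untouched. This is a minimal serial illustration, not the paper's implementation; the type and function names (AABB, BVHNode, refit) are ours.

```cpp
#include <algorithm>
#include <vector>

// Illustrative axis-aligned bounding box.
struct AABB {
    float lo[3], hi[3];
    void expand(const AABB& b) {
        for (int i = 0; i < 3; ++i) {
            lo[i] = std::min(lo[i], b.lo[i]);
            hi[i] = std::max(hi[i], b.hi[i]);
        }
    }
};

// A node wraps either one primitive (leaf) or two children.
struct BVHNode {
    AABB box;
    int left = -1, right = -1;  // child indices; -1 marks a leaf
    int prim = -1;              // primitive index for leaves
};

// Refit: recompute the boxes bottom-up while keeping the tree
// topology fixed, which is cheap enough to run every frame.
AABB refit(std::vector<BVHNode>& nodes,
           const std::vector<AABB>& primBoxes, int idx) {
    BVHNode& n = nodes[idx];
    if (n.left < 0) {  // leaf: take the primitive's current box
        n.box = primBoxes[n.prim];
    } else {           // inner node: union of the refitted children
        n.box = refit(nodes, primBoxes, n.left);
        n.box.expand(refit(nodes, primBoxes, n.right));
    }
    return n.box;
}
```

Because only the boxes change, refitting costs time linear in the number of nodes; when accumulated deformation degrades the tree's quality, the asynchronous rebuild described above restores it without stalling rendering.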
Gribble, Brownlee and Parker propose two algorithms
targeting highly parallel multicore architectures enabling
interactive navigation and exploration of large particle data
sets with global illumination effects. Rendering samples are
lazily evaluated using Monte Carlo path tracing, while
visualization occurs asynchronously by using Dynamic
Luminance Textures that cache the renderer results. The
combined utilization of particle based simulation methods
and global illumination enables the effective communication
of subtle changes in the three-dimensional structure of the
data. All results were likewise obtained on a 16-core architecture.
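The caching idea behind such a texture can be illustrated with a minimal serial sketch: texels are filled on demand by an expensive shading routine and reused until invalidated. The class and method names are ours, and the actual system evaluates texels with a Monte Carlo path tracer and runs rendering and display asynchronously on separate threads.

```cpp
#include <algorithm>
#include <functional>
#include <utility>
#include <vector>

// Illustrative lazy cache in the spirit of a dynamic luminance
// texture: texels are computed on first use by an expensive
// renderer (e.g. a Monte Carlo estimate) and reused until
// explicitly invalidated.
class LuminanceTexture {
public:
    LuminanceTexture(int w, int h, std::function<float(int, int)> shade)
        : width(w), values(w * h, 0.0f), valid(w * h, false),
          shadeTexel(std::move(shade)) {}

    // Lookup: return the cached value, computing it only on first use.
    float lookup(int x, int y) {
        int i = y * width + x;
        if (!valid[i]) {
            values[i] = shadeTexel(x, y);  // lazy evaluation
            valid[i] = true;
        }
        return values[i];
    }

    // Invalidate all texels when lighting or geometry changes enough.
    void invalidate() { std::fill(valid.begin(), valid.end(), false); }

private:
    int width;
    std::vector<float> values;
    std::vector<bool> valid;
    std::function<float(int, int)> shadeTexel;
};
```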
The paper by Thomaszewski, Pabst and Blochinger
analyzes parallel techniques for physically based simulation,
in particular, the time integration and collision
handling phases. The former is addressed using the
conjugate gradient algorithm and static problem decomposition,
while the latter exhibits a dynamic structure, thus
requiring fully dynamic task decomposition. Their results
were obtained using three different quad-core systems.
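The contrast between the two phases can be made concrete with a sketch. The conjugate gradient inner loop is dominated by a sparse matrix-vector product whose rows can be split evenly ahead of time, which is why a static decomposition suffices there; collision handling has no such fixed structure, so work must be discovered and distributed at run time. The sketch below assumes OpenMP and a CSR matrix layout, neither of which the paper prescribes.

```cpp
#include <vector>

// Illustrative compressed sparse row (CSR) matrix. The dominant
// cost in each conjugate gradient iteration is the product y = A x.
struct CSRMatrix {
    std::vector<int> rowPtr, col;
    std::vector<double> val;
};

void spmv(const CSRMatrix& A, const std::vector<double>& x,
          std::vector<double>& y) {
    const int n = static_cast<int>(y.size());
    // Static schedule: each thread owns a fixed block of rows,
    // mirroring a static problem decomposition. Collision handling
    // has no such fixed structure and needs dynamic task creation.
    #pragma omp parallel for schedule(static)
    for (int r = 0; r < n; ++r) {
        double sum = 0.0;
        for (int k = A.rowPtr[r]; k < A.rowPtr[r + 1]; ++k)
            sum += A.val[k] * x[A.col[k]];
        y[r] = sum;
    }
}
```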
Hong and Shen derive an efficient parallel algorithm for
symmetry computation in volume data represented by
regular grids. Sequential detection of symmetric features in
volumetric data sets has a prohibitive cost, thus requiring
efficient parallel algorithms and powerful parallel systems.
The authors obtained the reported results on a 64-node PC
cluster with InfiniBand interconnect, each node being a
dual-processor, single-core Opteron.
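The paper's algorithm is considerably more sophisticated, but the reason the problem parallelizes well can be seen in a simple sketch: scoring a single candidate mirror plane on a regular grid reduces to independent per-voxel comparisons. The function below is a hypothetical illustration assuming OpenMP, not the authors' method.

```cpp
#include <cmath>
#include <vector>

// Hypothetical illustration: score one candidate reflective symmetry
// (a mirror plane through the volume's x-centre) on a regular grid.
// Every voxel comparison is independent, so the loop parallelizes
// trivially; a real detector scores many candidate planes and axes.
double reflectionError(const std::vector<float>& vol,
                       int nx, int ny, int nz) {
    double err = 0.0;
    #pragma omp parallel for reduction(+ : err)
    for (int z = 0; z < nz; ++z)
        for (int y = 0; y < ny; ++y)
            for (int x = 0; x < nx / 2; ++x) {
                int a = (z * ny + y) * nx + x;
                int b = (z * ny + y) * nx + (nx - 1 - x);
                err += std::fabs(vol[a] - vol[b]);
            }
    return err;  // a low error suggests symmetry about the plane
}
```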
Bettio, Gobbetti, Marton and Pintore describe a scalable
multiresolution rendering system targeting massive triangle
meshes and driving different sized light field displays. The
larger light field display (1.6 × 0.9 m²) is based on a special
arrangement of projectors and a holographic screen. It
allows multiple freely moving viewers to see the scene from
their respective points of view and enjoy continuous
horizontal parallax without any specialized viewing devices.
To drive this 35-Mbeam display they use a scalable
parallel renderer, resorting to out-of-core and level-of-detail
techniques, running on a 15-node PC cluster.
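A common ingredient of such level-of-detail renderers is a screen-space-error test that refines a node of the multiresolution hierarchy only while its geometric error, projected to the screen, exceeds a pixel tolerance. The sketch below shows one standard form of this test; all names are illustrative and not taken from the paper.

```cpp
#include <cmath>

// Hypothetical projected-error test for level-of-detail refinement:
// refine a node only while its world-space error covers more than
// tolerancePx pixels at the node's distance from the viewer.
bool needsRefinement(double geometricError, double distance,
                     double screenHeightPx, double fovY,
                     double tolerancePx = 1.0) {
    // Project the world-space error to pixels via the view frustum.
    double pixels = geometricError * screenHeightPx /
                    (2.0 * distance * std::tan(fovY / 2.0));
    return pixels > tolerancePx;
}
```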
From Big Data to Big Displays: High-Performance Visualization at Blue Brain
Blue Brain has pushed high-performance visualization (HPV) to complement its
HPC strategy since its inception in 2007. In 2011, this strategy was
accelerated to develop innovative visualization solutions through increased
funding and strategic partnerships with other research institutions.
We present the key elements of this HPV ecosystem, which integrates C++
visualization applications with novel collaborative display systems. We
motivate how our strategy of transforming visualization engines into services
enables a variety of use cases, not only for the integration with high-fidelity
displays, but also to build service-oriented architectures, to link into web
applications, and to provide remote services to Python applications.
Comment: ISC 2017 Visualization at Scale workshop
Recent Advances in Graph Partitioning
We survey recent trends in practical algorithms for balanced graph
partitioning, together with applications and future research directions.
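For context, the balanced graph partitioning problem the survey addresses is commonly stated as follows: given a graph $G = (V, E)$, an integer $k$ and an imbalance tolerance $\varepsilon \ge 0$, find disjoint blocks $V_1, \dots, V_k$ covering $V$ that minimize the number of cut edges,
\[
\min_{V_1,\dots,V_k} \bigl|\{\, \{u,v\} \in E : u \in V_i,\ v \in V_j,\ i \neq j \,\}\bigr|
\quad \text{subject to} \quad
|V_i| \le (1+\varepsilon)\,\lceil |V|/k \rceil \ \text{for all } i .
\]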
EPiK-a Workflow for Electron Tomography in Kepler.
Scientific workflows integrate data and computing interfaces as configurable, semi-automatic graphs to solve a scientific problem. Kepler is such a software system for designing, executing, reusing, evolving, archiving and sharing scientific workflows. Electron tomography (ET) enables high-resolution views of complex cellular structures, such as cytoskeletons, organelles, viruses and chromosomes. Imaging investigations produce large datasets. For instance, in electron tomography, the size of a 16-fold image tilt series is about 65 gigabytes, with each projection image comprising 4096 by 4096 pixels. When serial sections or montage techniques are used for large-field ET, the dataset becomes even larger. For higher-resolution images with multiple tilt series, the data size may reach the terabyte range. The demands of mass data processing and complex algorithms require the integration of diverse codes into flexible software structures. This paper describes a workflow for Electron Tomography Programs in Kepler (EPiK). The EPiK workflow embeds the tracking process of IMOD and realizes the main reconstruction algorithms, including filtered backprojection (FBP) from TxBR and iterative reconstruction methods. We have tested the three-dimensional (3D) reconstruction process using EPiK on ET data. EPiK is a potential toolkit for biology researchers, offering logical viewing, easy handling, convenient sharing and future extensibility.
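As a rough sanity check on the quoted figure (assuming 32-bit pixels, which the abstract does not specify): a single 4096 × 4096 projection occupies 4096 × 4096 × 4 B ≈ 67 MB, so 65 GB corresponds to roughly 970 projections, on the order of 60 projections per tilt series across the 16 series.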