A Variational Level Set Approach for Surface Area Minimization of Triply Periodic Surfaces
In this paper, we study triply periodic surfaces with minimal surface area
under a constraint in the volume fraction of the regions (phases) that the
surface separates. Using a variational level set method formulation, we present
a theoretical characterization of and a numerical algorithm for computing these
surfaces. We use our theoretical and computational formulation to study the
optimality of the Schwarz P, Schwarz D, and Schoen G surfaces when the volume
fractions of the two phases are equal, and explore the properties of optimal
structures when the volume fractions of the two phases are not equal. Due to
the computational cost of the full three-dimensional shape optimization problem,
we implement our numerical simulations using a parallel level set method
software package.
Comment: 28 pages, 16 figures, 3 tables
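As a rough illustration of the variational level set idea, here is a minimal 2-D sketch (the paper works in 3-D with a parallel solver): the interface is the zero set of a level set function, surface area shrinks under mean-curvature flow, and subtracting the mean of the curvature acts as a simple Lagrange multiplier that approximately preserves the volume fraction. The grid size, time step, and this particular multiplier form are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def curvature(phi, h):
    # Mean curvature of the level sets of phi on a periodic grid,
    # using central differences (np.roll gives periodic boundaries).
    px = (np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2 * h)
    py = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2 * h)
    pxx = (np.roll(phi, -1, 0) - 2 * phi + np.roll(phi, 1, 0)) / h**2
    pyy = (np.roll(phi, -1, 1) - 2 * phi + np.roll(phi, 1, 1)) / h**2
    pxy = (np.roll(np.roll(phi, -1, 0), -1, 1) - np.roll(np.roll(phi, -1, 0), 1, 1)
           - np.roll(np.roll(phi, 1, 0), -1, 1) + np.roll(np.roll(phi, 1, 0), 1, 1)) / (4 * h**2)
    grad2 = px**2 + py**2 + 1e-12          # guard against division by zero
    return (pxx * py**2 - 2 * px * py * pxy + pyy * px**2) / grad2**1.5

def step(phi, h, dt):
    # One explicit step of volume-constrained mean-curvature flow:
    # subtracting the mean curvature is a crude Lagrange multiplier
    # that approximately conserves the enclosed volume fraction.
    kappa = curvature(phi, h)
    kappa -= kappa.mean()
    gx = (np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2 * h)
    gy = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2 * h)
    return phi + dt * kappa * np.sqrt(gx**2 + gy**2)
```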
OPENMENDEL: A Cooperative Programming Project for Statistical Genetics
Statistical methods for genomewide association studies (GWAS) continue to
improve. However, the increasing volume and variety of genetic and genomic data
make computational speed and ease of data manipulation mandatory in future
software. In our view, a collaborative effort of statistical geneticists is
required to develop open source software targeted to genetic epidemiology. Our
attempt to meet this need is called the OPENMENDEL project
(https://openmendel.github.io). It aims to (1) enable interactive and
reproducible analyses with informative intermediate results, (2) scale to big
data analytics, (3) embrace parallel and distributed computing, (4) adapt to
rapid hardware evolution, (5) allow cloud computing, (6) allow integration of
varied genetic data types, and (7) foster easy communication between
clinicians, geneticists, statisticians, and computer scientists. This article
reviews and makes recommendations to the genetic epidemiology community in the
context of the OPENMENDEL project.
Comment: 16 pages, 2 figures, 2 tables
Parallel graphics and visualization
Computer Graphics and Visualization are two fields that
continue to evolve at a fast pace, always addressing new
application areas and achieving better and faster results.
The volume of data processed by such applications keeps
getting larger and the illumination and light transport
models used to generate pictorial representations of this
data keep getting more sophisticated. Richer illumination
and light transport models allow the generation of richer
images that convey more information about the phenomena
or virtual worlds represented by the data and are
more realistic and engaging to the user. The combination
of large data sets, rich illumination models and large,
sophisticated displays results in huge workloads that
cannot be processed sequentially and still maintain
acceptable response times. Parallel processing is thus an
obvious approach to such problems, creating the field of
Parallel Graphics and Visualization.
The Eurographics Symposium on Parallel Graphics and
Visualization (EGPGV) gathers together researchers from
all over the world to foster research focused on theoretical
and applied issues critical to parallel and distributed
computing and its application to all aspects of computer
graphics, virtual reality, scientific and engineering visualization.
This special issue is a collection of five papers
selected from those presented at the 7th EGPGV, which
took place in Lugano, Switzerland, in May, 2007.
The research presented in this symposium has evolved
over the years, often reflecting the evolution of the
underlying systems’ architectures. While papers presented
in the first few events focused on Single Instruction
Multiple Data and Massively Parallel Multi-Processing
systems, in recent years the focus was mainly on Symmetric
Multiprocessing machines and PC clusters, often also
including the utilization of multiple Graphics Processing
Units. The 2007 event witnessed the first papers addressing
multicore processors, thus following the general trend of
computer systems’ architecture.
The paper by Wald, Ize and Parker discusses acceleration
structures for interactive ray tracing of dynamic
scenes. They propose the utilization of Bounding Volume
Hierarchies (BVH), which for deformable scenes can be
rapidly updated by adjusting the bounding primitives while
maintaining the hierarchy. To avoid a significant performance
penalty due to a large mismatch between the scene
geometry and the tree topology, the BVH is rebuilt
asynchronously and concurrently with rendering. According
to the authors, in the near future interactive ray tracers
are expected to run on highly parallel multicore architectures.
Thus, all results reported were obtained on an
8-processor, dual-core system, totalling 16 cores.
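The refit idea behind fast BVH updates for deformable scenes can be sketched briefly: keep the tree topology fixed and recompute axis-aligned bounding boxes bottom-up after the vertices move. The `Node` layout and `refit` helper below are a hypothetical illustration, not the authors' data structure.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    lo: tuple = (0.0, 0.0, 0.0)   # AABB min corner
    hi: tuple = (0.0, 0.0, 0.0)   # AABB max corner
    left: "Node | None" = None
    right: "Node | None" = None
    tris: list = field(default_factory=list)  # triangles as vertex-index tuples (leaves only)

def refit(node, verts):
    # Recompute bounding boxes bottom-up without touching the hierarchy:
    # leaves bound their triangles' vertices, interior nodes bound their children.
    if node.left is None:
        pts = [verts[i] for tri in node.tris for i in tri]
    else:
        refit(node.left, verts)
        refit(node.right, verts)
        pts = [node.left.lo, node.left.hi, node.right.lo, node.right.hi]
    node.lo = tuple(min(p[k] for p in pts) for k in range(3))
    node.hi = tuple(max(p[k] for p in pts) for k in range(3))
```

Because only the boxes change, a refit is linear in the number of nodes and avoids the full rebuild cost, at the price of looser boxes as the geometry drifts from the original topology, which is exactly the mismatch the asynchronous rebuild addresses.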
Gribble, Brownlee and Parker propose two algorithms
targeting highly parallel multicore architectures enabling
interactive navigation and exploration of large particle data
sets with global illumination effects. Rendering samples are
lazily evaluated using Monte Carlo path tracing, while
visualization occurs asynchronously by using Dynamic
Luminance Textures that cache the renderer results. The
combined utilization of particle based simulation methods
and global illumination enables the effective communication
of subtle changes in the three-dimensional structure of the
data. All results were also obtained on a 16-core architecture.
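The caching idea, evaluate expensive renderer samples lazily and reuse them on later display requests so visualization stays decoupled from rendering, can be sketched generically; the class and callback below are hypothetical stand-ins for the authors' dynamic luminance textures, not their system.

```python
class LuminanceCache:
    """Minimal sketch of lazy sample evaluation: a texel is rendered
    (e.g. by a Monte Carlo path tracer) only the first time the viewer
    requests it; subsequent requests reuse the cached value."""

    def __init__(self, shade):
        self.shade = shade   # expensive renderer callback
        self.texels = {}     # texel key -> cached luminance
        self.misses = 0      # number of actual render invocations

    def lookup(self, key):
        if key not in self.texels:
            self.misses += 1
            self.texels[key] = self.shade(key)
        return self.texels[key]
```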
The paper by Thomaszewski, Pabst and Blochinger
analyzes parallel techniques for physically based simulation,
in particular, the time integration and collision
handling phases. The former is addressed using the
conjugate gradient algorithm and static problem decomposition,
while the latter exhibits a dynamic structure, thus
requiring fully dynamic task decomposition. Their results
were obtained using three different quad-core systems.
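The time-integration phase rests on the conjugate gradient method for the symmetric positive-definite systems that implicit integration produces; a minimal serial version (the paper parallelizes it with a static problem decomposition) looks like this:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Solve A x = b for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

In a static decomposition, the dominant costs, the matrix-vector product and the dot products, split naturally across a fixed partition of the unknowns, which is why this phase parallelizes well compared with collision handling.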
Hong and Shen derive an efficient parallel algorithm for
symmetry computation in volume data represented by
regular grids. Sequential detection of symmetric features in
volumetric data sets has a prohibitive cost, thus requiring
efficient parallel algorithms and powerful parallel systems.
The authors obtained the reported results on a 64-node PC cluster
with an InfiniBand interconnect, each node being a dual-processor,
single-core Opteron.
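A naive serial check for one kind of symmetry, reflection about a grid mid-plane, hints at why the cost becomes prohibitive: every candidate plane requires touching the whole volume. The scoring function below is an illustrative sketch, not the authors' algorithm.

```python
import numpy as np

def reflection_symmetry_score(vol, axis):
    """Score in [0, 1] for how well a scalar field on a regular grid
    matches its mirror image about the mid-plane perpendicular to
    `axis` (1.0 = perfectly symmetric)."""
    mirrored = np.flip(vol, axis=axis)
    diff = np.abs(vol - mirrored).mean()
    scale = np.abs(vol).mean() + 1e-12   # guard against all-zero volumes
    return float(1.0 - diff / (2.0 * scale))
```

Scanning many candidate axes and offsets multiplies this whole-volume pass, which is what motivates the parallel formulation.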
Bettio, Gobbetti, Marton and Pintore describe a scalable
multiresolution rendering system targeting massive triangle
meshes and driving different sized light field displays. The
larger light field display (1.6 × 0.9 m²) is based on a special
arrangement of projectors and a holographic screen. It
allows multiple freely moving viewers to see the scene from
their respective points of view and enjoy continuous
horizontal parallax without any specialized viewing devices.
To drive this 35-Mbeam display they use a scalable
parallel renderer, resorting to out-of-core and level-of-detail
techniques, running on a 15-node PC cluster.
Parallel load balancing strategy for Volume-of-Fluid methods on 3-D unstructured meshes
Volume-of-Fluid (VOF) is one of the methods of choice for reproducing interface motion in the simulation of multi-fluid flows. One of its main strengths is its accuracy in capturing sharp interface geometries, although this requires a number of geometric calculations. Under these circumstances, achieving parallel performance on current supercomputers is a must. The main obstacle to parallelization is that the computing costs are concentrated in the discrete elements that lie on the interface between fluids. Consequently, if the interface is not homogeneously distributed throughout the domain, standard domain decomposition (DD) strategies lead to imbalanced workload distributions. In this paper, we present a new parallelization strategy for general unstructured VOF solvers, based on a dynamic load balancing process complementary to the underlying DD. Its parallel efficiency has been analyzed and compared to that of DD alone using up to 1024 CPU cores on an Intel Sandy Bridge based supercomputer. The results obtained on several artificially generated test cases show a speedup of up to ~12x with respect to the standard DD, depending on the interface size, the initial distribution, and the number of parallel processes engaged. Moreover, the new parallelization strategy is of general purpose; it could be used to parallelize any VOF solver without requiring changes to the coupled flow solver. Finally, note that although designed for the VOF method, our approach could easily be adapted to other interface-capturing methods, such as the level-set method, which may present similar workload imbalances.
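The load balancing step can be reduced to a toy sketch: count the interface cells owned by each rank, then greedily plan transfers from overloaded to underloaded ranks. The function below is a hypothetical illustration of the idea only, not the paper's algorithm, which must also respect mesh connectivity and communication costs.

```python
def balance_interface_work(loads):
    """Greedy transfer plan for interface-cell work.

    `loads` is a list where loads[i] is the number of interface cells
    on rank i. Returns a list of (src, dst, amount) transfers that
    brings every rank toward the average load (up to integer rounding).
    """
    target = sum(loads) // len(loads)
    surplus = [[rank, load - target] for rank, load in enumerate(loads)]
    givers = [s for s in surplus if s[1] > 0]   # overloaded ranks
    takers = [s for s in surplus if s[1] < 0]   # underloaded ranks
    plan, gi, ti = [], 0, 0
    while gi < len(givers) and ti < len(takers):
        amount = min(givers[gi][1], -takers[ti][1])
        plan.append((givers[gi][0], takers[ti][0], amount))
        givers[gi][1] -= amount
        takers[ti][1] += amount
        if givers[gi][1] == 0:
            gi += 1
        if takers[ti][1] == 0:
            ti += 1
    return plan
```

This captures why the strategy is solver-agnostic: it reasons only about per-rank interface workload, independent of the coupled flow solver.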