Using 3D Voronoi grids in radiative transfer simulations
Probing the structure of complex astrophysical objects requires effective
three-dimensional (3D) numerical simulation of the relevant radiative transfer
(RT) processes. As with any numerical simulation code, the choice of an
appropriate discretization is crucial. Adaptive grids with cuboidal cells, such
as octrees, have proven very popular; however, several recently introduced
hydrodynamical and RT codes are based on a Voronoi tessellation of the spatial
domain. Such an unstructured grid poses new challenges in laying down the rays
(straight paths) needed in RT codes. We show that it is straightforward to
implement accurate and efficient RT on 3D Voronoi grids. We present a method
for computing straight paths between two arbitrary points through a 3D Voronoi
grid in the context of an RT code. We implement such a grid in our RT code
SKIRT, using the open source library Voro++ to obtain the relevant properties
of the Voronoi grid cells based solely on the generating points. We compare the
results obtained through the Voronoi grid with those generated by an octree
grid for two synthetic models, and we perform the well-known Pascucci RT
benchmark using the Voronoi grid. The presented algorithm produces correct
results for our test models. Shooting photon packages through the geometrically
much more complex 3D Voronoi grid is only about three times slower than the
equivalent process in an octree grid with the same number of cells, while in
fact the total number of Voronoi grid cells may be lower for an equally good
representation of the density field. We conclude that the benefits of using a
Voronoi grid in RT simulation codes will often outweigh the somewhat slower
performance.
Comment: 9 pages, 7 figures, accepted by A
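The cell-to-cell ray walk described above can be sketched using the defining property of a Voronoi cell: a ray leaves the cell of generator g_i through the bisector plane between g_i and a neighboring generator g_j, so the next cell is the face-sharing neighbor whose bisector is crossed first. The function below is a hypothetical illustration of that idea, not SKIRT's actual implementation; the neighbor lists are assumed to be precomputed (e.g. with a library such as Voro++).

```python
import numpy as np

def traverse_voronoi(p, d, generators, neighbors, cell, t_max):
    """Walk the ray p + t*d through a 3D Voronoi grid, yielding
    (cell index, path length inside that cell) for each cell
    crossed.  `generators[i]` is the generating point of cell i,
    `neighbors[i]` lists the cells sharing a face with cell i,
    and `cell` is the index of the starting cell."""
    t = 0.0
    while t < t_max:
        gi = generators[cell]
        t_exit, next_cell = t_max, None
        for j in neighbors[cell]:
            n = generators[j] - gi            # normal of the bisector plane
            denom = np.dot(d, n)
            if denom <= 0.0:                  # ray moves away from this plane
                continue
            m = 0.5 * (gi + generators[j])    # a point on the bisector plane
            tj = np.dot(m - p, n) / denom     # ray parameter at the crossing
            if t < tj < t_exit:
                t_exit, next_cell = tj, j
        yield cell, t_exit - t
        if next_cell is None:                 # no exit face before t_max
            break
        t, cell = t_exit, next_cell
```

Because only the generating points and neighbor lists are needed, no explicit face or vertex geometry has to be stored to lay down a straight path.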
Cosmological Simulations Using Special Purpose Computers: Implementing P3M on Grape
An adaptation of the Particle-Particle/Particle-Mesh (P3M) code to the
special purpose hardware GRAPE is presented. The short range force is
calculated by a four chip GRAPE-3A board, while the rest of the calculation is
performed on a Sun Sparc 10/51 workstation. The limited precision of the GRAPE
hardware and algorithm constraints introduce stochastic errors of the order of
a few percent in the gravitational forces. Tests of this new P3MG3A code show
that it is a robust tool for cosmological simulations. The code currently
achieves a peak efficiency of one third the speed of the vectorized P3M code on
a Cray C-90 and significant improvements are planned in the near future.
Special purpose computers like GRAPE are therefore an attractive alternative to
supercomputers for numerical cosmology.
Comment: 9 pages (ApJS style); uuencoded compressed PostScript file (371 kb)
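The PP/PM division of labor can be illustrated with one common choice of force splitting, the Ewald-style decomposition 1/r = erfc(a r)/r + erf(a r)/r: the smooth erf part is computed on the mesh, while the rapidly varying erfc part is summed directly over close pairs, which is the piece a GRAPE-like board accelerates. The sketch below shows only that short-range particle-particle piece, with unit masses and G = 1 assumed; it is an illustration of the splitting idea, not the P3MG3A code itself.

```python
import math

def pp_short_range_force(pos, alpha, r_cut):
    """Direct sum of the short-range (erfc-screened) part of the
    gravitational force for particles at positions `pos`, ignoring
    pairs beyond the cutoff `r_cut`.  The complementary smooth part
    would be handled by the particle-mesh step."""
    n = len(pos)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(c * c for c in dx)
            r = math.sqrt(r2)
            if r >= r_cut:
                continue
            # magnitude of -d/dr [erfc(a r)/r], the screened 1/r^2 force
            s = (math.erfc(alpha * r)
                 + 2.0 * alpha * r / math.sqrt(math.pi)
                 * math.exp(-(alpha * r) ** 2)) / r2
            for k in range(3):
                forces[i][k] += s * dx[k] / r   # attraction of i toward j
                forces[j][k] -= s * dx[k] / r
    return forces
```

Since the short-range sum involves only bounded neighbor lists, its modest per-pair precision requirements match the limited-precision GRAPE pipeline described above.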
Afivo: a framework for quadtree/octree AMR with shared-memory parallelization and geometric multigrid methods
Afivo is a framework for simulations with adaptive mesh refinement (AMR) on
quadtree (2D) and octree (3D) grids. The framework comes with a geometric
multigrid solver and shared-memory (OpenMP) parallelism, and it supports output
in Silo and VTK file formats. Afivo can be used to efficiently simulate AMR
problems with up to tens of millions of unknowns on desktops, workstations or
single compute nodes. For larger problems, existing distributed-memory
frameworks are
better suited. The framework has no built-in functionality for specific physics
applications, so users have to implement their own numerical methods. The
included multigrid solver can be used to efficiently solve elliptic partial
differential equations such as Poisson's equation. Afivo's design was kept
simple, which in combination with the shared-memory parallelism facilitates
modification and experimentation with AMR algorithms. The framework has already
been used to perform 3D simulations of streamer discharges, which required tens
of millions of cells.
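The core of a geometric multigrid solver like the one Afivo provides is the V-cycle: smooth, restrict the residual to a coarser grid, solve for a coarse correction recursively, prolong it back, and smooth again. The recursion below sketches this on a uniform 1D grid for -u'' = f with zero Dirichlet boundaries; it is an illustrative toy (Gauss-Seidel smoothing, injection restriction, linear prolongation), not Afivo's quadtree/octree implementation.

```python
import numpy as np

def v_cycle(u, f, h, n_smooth=3):
    """One geometric multigrid V-cycle for -u'' = f on a grid with
    spacing h and 2^k + 1 points, zero Dirichlet boundaries."""
    def smooth(u, f, h, iters):
        for _ in range(iters):                    # Gauss-Seidel sweeps
            for i in range(1, len(u) - 1):
                u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        return u

    u = smooth(u, f, h, n_smooth)                 # pre-smoothing
    if len(u) <= 3:                               # coarsest grid: done
        return u
    r = np.zeros_like(u)                          # residual r = f + u''
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
    rc = r[::2].copy()                            # restrict by injection
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h, n_smooth)
    e = np.zeros_like(u)                          # prolong the correction
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])            # linear interpolation
    return smooth(u + e, f, h, n_smooth)          # post-smoothing
```

Each cycle costs O(N) work, which is why geometric multigrid is attractive for the elliptic problems, such as Poisson's equation, mentioned above.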
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
Multiscale and inhomogeneous molecular systems are challenging topics in the
field of molecular simulation. In particular, modeling biological systems in
the context of multiscale simulations and exploring material properties are
driving the continual development of new simulation methods and optimization
algorithms. In computational terms, these methods require parallelization
schemes that make productive use of computational resources from the very start
of each simulation. Here we introduce the heterogeneous domain decomposition
approach, which combines a heterogeneity-sensitive spatial domain decomposition
with an \textit{a priori} rearrangement of subdomain walls. Within this
approach, theoretical models and scaling laws for the force-computation time
are proposed and studied as a function of the number of particles and the
spatial resolution ratio. We also
show the capabilities of the new approach by comparing it to both static domain
decomposition algorithms and dynamic load-balancing schemes. Specifically, two
representative molecular systems have been simulated with the heterogeneous
domain decomposition proposed in this work: an adaptive resolution simulation
of a biomolecule solvated in water and a phase-separated binary Lennard-Jones
fluid.
Comment: 14 pages, 12 figures
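The a priori rearrangement of subdomain walls can be illustrated in one dimension: given a per-particle cost estimate (higher, say, in the high-resolution region of an adaptive resolution simulation), place the walls at weighted quantiles of the cumulative cost so that each slab carries roughly equal work. The function below is a hypothetical sketch of this idea; its name and interface are invented, and it is not the paper's actual algorithm.

```python
import numpy as np

def place_walls(x, cost, n_domains):
    """Place n_domains - 1 subdomain walls along one axis so that
    each slab holds roughly the same total estimated work.
    `x` holds particle coordinates, `cost` a per-particle work
    estimate; returns the interior wall positions."""
    order = np.argsort(x)
    xs, cs = x[order], cost[order]
    cum = np.cumsum(cs)                           # cumulative work profile
    targets = cum[-1] * np.arange(1, n_domains) / n_domains
    idx = np.searchsorted(cum, targets)           # equal-work split points
    return xs[idx]
```

With a uniform cost the walls reduce to equal-count slabs; with a spatially varying cost they shift toward the expensive region, which is the load imbalance a static, purely geometric decomposition cannot absorb.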