A modified parallel tree code for N-body simulation of the Large Scale Structure of the Universe
N-body codes for simulating the origin and evolution of the Large Scale
Structure of the Universe have improved significantly over the past decade,
both in the resolution achieved and in the reduction of CPU time. However,
state-of-the-art N-body codes can hardly handle more than a few times 10^7
particles, even on the largest parallel systems.
To allow simulations at higher resolution, we first reconsidered the grouping
strategy described in Barnes (1990) (hereafter B90) and applied it, with some
modifications, to our WDSH-PT (Work and Data SHaring - Parallel Tree) code. In
the first part of this paper we give a short description of the code, which
adopts the Barnes and Hut algorithm \cite{barh86} (hereafter BH), and in
particular of the memory and work distribution strategy used to implement the
{\it data distribution} on a CC-NUMA machine such as the CRAY T3E. In the
second part of the paper we
describe the modification we have devised to the B90 grouping strategy in
order to improve the performance of the WDSH-PT code. We exploit the property
that nearby particles have similar interaction lists. This idea was already
examined in B90, where an interaction list is built that applies everywhere
within a cell C_{group} containing a small number of particles N_{crit}, and
is reused for each particle in the cell in turn. We instead assign every
particle p in the cell the very same interaction list. This reduces the CPU
time and improves performance, allowing simulations with large numbers of
particles (N ~ 10^7-10^9) to be run in acceptable times.
Comment: 13 pages and 7 figures
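To make the grouping idea concrete, the sketch below (Python, purely illustrative; THETA, N_CRIT and all names are our own, not taken from the WDSH-PT code) builds one interaction list per leaf cell and reuses it for every particle in that cell:

```python
import numpy as np

THETA = 0.6   # Barnes & Hut opening-angle parameter (illustrative value)
N_CRIT = 8    # maximum number of particles in a group cell (illustrative)

class Cell:
    def __init__(self, center, size, idx):
        self.center, self.size, self.idx = center, size, idx  # idx: particle indices
        self.children, self.mass, self.com = [], 0.0, np.zeros(3)

def build_tree(pos, mass, center, size, idx):
    """Recursively subdivide space until each leaf holds <= N_CRIT particles."""
    cell = Cell(center, size, idx)
    cell.mass = mass[idx].sum()
    cell.com = (pos[idx] * mass[idx, None]).sum(axis=0) / cell.mass
    if len(idx) > N_CRIT:
        for octant in range(8):
            offset = np.array([(octant >> b) & 1 for b in range(3)]) - 0.5
            child_center = center + offset * size / 2
            lo, hi = child_center - size / 4, child_center + size / 4
            inside = np.all((pos[idx] >= lo) & (pos[idx] < hi), axis=1)
            if inside.any():
                cell.children.append(
                    build_tree(pos, mass, child_center, size / 2, idx[inside]))
    return cell

def shared_interaction_list(node, group, out):
    """One tree walk per group cell: a node is accepted when it is well
    separated from the whole group, so the list applies to every member."""
    if node is group:
        return  # the group's own members would be handled by direct summation
    d = max(np.linalg.norm(node.com - group.center), 1e-12)
    if not node.children or node.size / d < THETA:
        out.append(node)  # far enough away: treat as a single monopole
    else:
        for child in node.children:
            shared_interaction_list(child, group, out)

def group_accelerations(pos, root, group):
    """Accelerations on all members of `group` from its shared list
    (monopole terms only, Plummer-softened; a toy force evaluation)."""
    nodes = []
    shared_interaction_list(root, group, nodes)
    acc = np.zeros((len(group.idx), 3))
    for node in nodes:
        r = node.com - pos[group.idx]
        r2 = (r * r).sum(axis=1) + 1e-4   # softening avoids divergences
        acc += node.mass * r / r2[:, None] ** 1.5
    return acc

# usage: build a tree on random data and evaluate one group (leaf) cell
rng = np.random.default_rng(42)
pos = rng.uniform(-0.5, 0.5, size=(2000, 3))
mass = np.full(2000, 1.0 / 2000)
root = build_tree(pos, mass, np.zeros(3), 1.0, np.arange(2000))
group = root
while group.children:          # descend to an arbitrary leaf cell
    group = group.children[0]
print(group_accelerations(pos, root, group)[:2])
```

The saving comes from walking the tree once per group of N_{crit} particles instead of once per particle, at the price of a slightly more conservative opening criterion measured from the whole group cell.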
Substructure recovery by 3D Discrete Wavelet Transforms
We present and discuss a method to identify substructures in combined
angular-redshift samples of galaxies within clusters. The method relies on the
Discrete Wavelet Transform (hereafter DWT) and has already been applied to the
analysis of the Coma cluster (Gambera et al. 1997). The main new ingredient of
our method with respect to previous studies is that we make use of a 3D data
set rather than a 2D one. We test the method on mock cluster catalogs with
spatially localized substructures and on an N-body
simulation. Our main conclusion is that our method is able to identify the
existing substructures provided that: a) the subclumps are detached in part or
all of phase space; b) one has a statistically significant number of
redshifts, increasing as the cluster's distance decreases owing to redshift
distortions; and c) one knows {\it a priori} the scale on which substructures
are to be expected. We have found that an accurate recovery requires both a
significant number of galaxies (about 800 for a cluster at z ~ 0.4, the
required number depending on redshift) and a limiting magnitude deep enough
for completeness.
The only true limitation to our method seems to be the necessity of knowing
{\it a priori} the scale on which the substructure is to be found. This is an
intrinsic drawback of the method, and no improvement in numerical codes based
on this technique could make up for it.
Comment: Accepted for publication in MNRAS. 7 pages, 2 figures
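As a rough illustration of the 3D DWT approach (not the authors' code: the grid size, wavelet, decomposition level and 3-sigma threshold below are all our assumptions), one can bin the catalog on a 3D grid, threshold the wavelet detail coefficients at the chosen scale, and reconstruct a denoised density field, e.g. with the PyWavelets package:

```python
import numpy as np
import pywt

def detect_substructure(ra, dec, z, grid=32, wavelet="haar", level=2, nsigma=3.0):
    # density field on a regular 3D grid (two sky coordinates + redshift)
    sample = np.stack([ra, dec, z], axis=1)
    density, _ = np.histogramdd(sample, bins=grid)

    coeffs = pywt.wavedecn(density, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]

    # keep only detail coefficients that stand out against the noise;
    # the scale of interest must be chosen a priori (the method's main limit)
    for d in details:
        for key, arr in list(d.items()):
            sigma = arr.std()
            d[key] = np.where(np.abs(arr) > nsigma * sigma, arr, 0.0)

    return pywt.waverecn([approx] + details, wavelet)

# usage with a mock catalog: a uniform background plus one localized clump
rng = np.random.default_rng(0)
background = rng.uniform(0, 1, size=(2000, 3))
clump = rng.normal(loc=0.7, scale=0.02, size=(200, 3))
cat = np.vstack([background, clump])
field = detect_substructure(cat[:, 0], cat[:, 1], cat[:, 2])
print("peak of reconstructed field:", np.unravel_index(field.argmax(), field.shape))
```

On this mock input the reconstructed field peaks near the injected clump, which is the behavior the mock-catalog tests above are probing.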
An Innovative Workspace for The Cherenkov Telescope Array
The Cherenkov Telescope Array (CTA) is an initiative to build the next
generation of ground-based gamma-ray observatories. We present a prototype
workspace developed at INAF that aims to provide innovative solutions for the
CTA community. The workspace leverages open-source technologies to provide web
access to a set of tools widely used by the CTA community. Two different user
interaction models, connected to an authentication and authorization
infrastructure, have been implemented in this workspace. The first one is a
workflow management system accessed via a science gateway (based on the Liferay
platform) and the second one is an interactive virtual desktop environment. The
integrated workflow system allows applications used in astronomy and physics
research to be run on distributed computing infrastructures (ranging from
clusters to grids and clouds). The interactive desktop environment makes many
software packages usable through their native graphical user interfaces,
without any installation on local desktops. The science gateway and the
interactive desktop environment are connected to the authentication and
authorization infrastructure, composed of a Shibboleth identity provider and a
Grouper authorization solution. The attributes released by Grouper are
consumed by the science gateway to authorize access to specific web resources,
and the role management mechanism in Liferay provides the attribute-role
mapping.
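As a purely illustrative sketch of that last step (the attribute name, group names and roles below are our assumptions, not the project's actual configuration), the mapping amounts to translating the Grouper group memberships released with the SAML assertion into gateway roles:

```python
# assumed Grouper group -> science-gateway role mapping (illustrative only)
ROLE_MAP = {
    "cta:workspace:users": "workspace-user",
    "cta:workspace:developers": "workflow-developer",
    "cta:workspace:admins": "gateway-admin",
}

def roles_from_attributes(saml_attributes: dict) -> set:
    """Return the set of gateway roles granted by the released attributes."""
    # "isMemberOf" is a commonly used group-membership attribute; the name
    # actually released by this infrastructure is an assumption on our part
    groups = saml_attributes.get("isMemberOf", [])
    return {ROLE_MAP[g] for g in groups if g in ROLE_MAP}

# usage: a user whose identity provider releases two Grouper groups
attrs = {"isMemberOf": ["cta:workspace:users", "cta:workspace:developers"]}
print(roles_from_attributes(attrs))  # {'workspace-user', 'workflow-developer'}
```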
Visualization, Exploration and Data Analysis of Complex Astrophysical Data
In this paper we show how advanced visualization tools can help researchers
investigate and extract information from data. The focus is
on VisIVO, a novel open source graphics application, which blends high
performance multidimensional visualization techniques and up-to-date
technologies to cooperate with other applications and to access remote,
distributed data archives. VisIVO supports the standards defined by the
International Virtual Observatory Alliance in order to make it interoperable
with VO data repositories. The paper describes the basic technical details and
features of the software, and devotes a large section to showing how VisIVO
can be used in several scientific cases.
Comment: 32 pages, 15 figures, accepted by PASP
Astrocomp: a web service for the use of high performance computers in Astrophysics
Astrocomp is a joint project developed by the INAF-Astrophysical Observatory
of Catania, the University of Roma La Sapienza and ENEA. The project has the
goal of providing the scientific community with a user-friendly web-based
interface that allows parallel codes to be run on a set of high-performance
computing (HPC) resources, without any need for specific knowledge of parallel
programming or operating system commands. Astrocomp also provides authorized
users with computing time on a set of parallel computing systems.
At present, the portal makes a few codes available, among which: FLY, a
cosmological code for studying three-dimensional collisionless self-gravitating
systems with periodic boundary conditions; ATD, a parallel tree-code for the
simulation of the dynamics of boundary-free collisional and collisionless
self-gravitating systems; and MARA, a code for stellar light-curve analysis.
Other codes are going to be added to the portal.
Comment: LaTeX with elsart.cls and harvard.sty (included). 7 pages. To be submitted to a specific journal
Properties of galaxy halos in Clusters and Voids
We use the results of a high resolution N-body simulation to investigate the
role of the environment on the formation and evolution of galaxy-sized halos.
Starting from a set of constrained initial conditions, we have produced a final
configuration hosting a double cluster in one octant and a large void extending
over two octants of the simulation box. We present results for two statistics:
the relationship between the 1-D velocity dispersion $\sigma_v$ and the mass
$M$, and the probability distribution of the spin parameter $\lambda$. The
$\sigma_v$-$M$ relationship is well
reproduced by the Truncated Isothermal Sphere (TIS) model introduced by Shapiro
et al. (1999), although the slope is different from the original prediction. A
series of $\sigma_v$-$M$ relationships for different values of the anisotropy
parameter $\beta$, obtained using the theoretical predictions by Lokas and
Mamon (2001) for NFW density profiles, are found to be only marginally
consistent with the
data. Using some properties of the equilibrium TIS models, we construct
subsamples of {\em fiducial} equilibrium TIS halos from each of the three
subregions, and we study their properties. For these halos, we do find an
environmental dependence of their properties, in particular of the spin
parameter distribution $P(\lambda)$. We study the TIS model in more detail, and
we find new relationships between the truncation radius and other structural
parameters. No gravitationally bound halo is found whose truncation radius
$r_t$ is larger than the critical value for gravothermal instability of TIS
halos, a fixed multiple of the core radius $r_0$ of the TIS solution. We do,
however, find that this relationship depends on the environment, as does the
$P(\lambda)$ statistic. These facts hint at a possible role of tidal fields in
determining the statistical properties of halos.
Comment: 12 pages, 14 figures. Accepted by MNRAS. Adopted an improved algorithm for halo finding and added a comparison with the NFW model
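For reference, the spin parameter $\lambda$ discussed above is the standard dimensionless definition (Peebles 1969), restated here rather than quoted from the paper:
\[
\lambda \;=\; \frac{J\,|E|^{1/2}}{G\,M^{5/2}},
\]
where $J$, $E$ and $M$ are the total angular momentum, total energy and mass of the halo; $\lambda$ measures the degree of rotational support, so an environmental dependence of $P(\lambda)$ points to environment-dependent angular momentum acquisition.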
The Global sphere reconstruction (GSR) - Demonstrating an independent implementation of the astrometric core solution for Gaia
Context. The Gaia ESA mission will estimate the astrometric and physical data
of more than one billion objects, providing the largest and most precise
catalog of absolute astrometry in the history of Astronomy. The core of this
process, the so-called global sphere reconstruction, is represented by the
reduction of a subset of these objects which will be used to define the
celestial reference frame. As the Hipparcos mission showed, and as is inherent
to all kinds of absolute measurements, possible errors in the data reduction
can hardly be identified from the catalog, thus potentially introducing
systematic errors in all derived work. Aims. Following up on the lessons
learned from Hipparcos, our aim is thus to develop an independent sphere
reconstruction method that helps to guarantee the quality of the astrometric
results without fully reproducing the main processing chain.
Methods. Indeed, given the unfeasibility of a complete replica of the data
reduction pipeline, an astrometric verification unit (AVU) was instituted by
the Gaia Data Processing and Analysis Consortium (DPAC). One of its jobs is to
implement and operate an independent global sphere reconstruction (GSR),
parallel to the baseline one (AGIS, namely Astrometric Global Iterative
Solution) but limited to the primary stars and for validation purposes, to
compare the two results, and to report on any significant differences. Results.
Tests performed on simulated data show that GSR is able to reproduce at the
sub-$\mu$as level the results of the AGIS demonstration run presented in
Lindegren et al. (2012). Conclusions. Further development is ongoing to improve
the treatment of real data and the software modules that compare the AGIS and
GSR solutions, in order to identify possible discrepancies above the tolerance
level set by the accuracy of the Gaia catalog.
Comment: Accepted for publication in Astronomy & Astrophysics
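To give a feel for the kind of computation involved, a global sphere reconstruction boils down to a very large, sparse, overdetermined least-squares system $A x \approx b$ linking observations to astrometric unknowns, solved iteratively. Below is a toy sketch using SciPy's LSQR solver; the problem size, sparsity and noise level are illustrative only, and the production solvers run at vastly larger scale on HPC systems:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
n_obs, n_unknowns = 5000, 500   # toy sizes; Gaia-scale systems are ~1e9 x 1e8

# sparse design matrix: each observation constrains only a few unknowns
A = sparse_random(n_obs, n_unknowns, density=0.01, format="csr", random_state=1)
x_true = rng.normal(size=n_unknowns)
b = A @ x_true + rng.normal(scale=1e-6, size=n_obs)   # observations + noise

# iterative least-squares solution of the overdetermined system
x_hat = lsqr(A, b, atol=1e-12, btol=1e-12)[0]
print("max |x_hat - x_true|:", np.abs(x_hat - x_true).max())
```

Comparing two independently implemented solutions of the same system, as GSR and AGIS do, is what allows discrepancies above the catalog's accuracy tolerance to be flagged.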
A Parallel Tree code for large N-body simulations: dynamic load balance and data distribution on the CRAY T3D system
N-body algorithms for long-range unscreened interactions like gravity belong
to a class of highly irregular problems whose optimal solution is a challenging
task for present-day massively parallel computers. In this paper we describe a
strategy for optimal memory and work distribution which we have applied to our
parallel implementation of the Barnes & Hut (1986) recursive tree scheme on a
Cray T3D using the CRAFT programming environment. We have performed a series
of tests to find an "optimal data distribution" in the T3D memory and to
identify a strategy for "Dynamic Load Balance", in order to obtain good
performance when running large simulations (more than 10 million particles).
The results of the tests show that the step duration depends on two main
factors: data locality and T3D network contention. By increasing data locality
we are able to minimize the step duration, provided the closest bodies (direct
interactions) tend to be located in the same PE's local memory (contiguous
block subdivision, high granularity), whereas the tree properties are given a
fine-grained distribution. In very large simulations, network contention gives
rise to an unbalanced load. To remedy this we have devised an automatic
work-redistribution mechanism which provides a good Dynamic Load Balance at
the price of an insignificant overhead.
Comment: 16 pages with 11 figures included (LaTeX, elsart.style). Accepted by Computer Physics Communications
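A hypothetical sketch of the work-redistribution idea (our own illustration, not the paper's mechanism): per-PE step timings are used to resize each PE's contiguous block of bodies so that the expected cost is equalized at the next step.

```python
import numpy as np

def rebalance(block_sizes, step_times):
    """Return new block sizes proportional to each PE's measured throughput."""
    block_sizes = np.asarray(block_sizes, dtype=float)
    step_times = np.asarray(step_times, dtype=float)
    speed = block_sizes / step_times        # bodies per second on the last step
    total = int(block_sizes.sum())
    new = np.floor(total * speed / speed.sum()).astype(int)
    new[0] += total - new.sum()             # keep the total body count unchanged
    return new

# usage: 4 PEs with equal blocks, one PE slowed down by network contention
print(rebalance([250_000] * 4, [1.0, 1.0, 1.0, 2.0]))
# -> roughly [285715, 285714, 285714, 142857]: the slow PE gets a smaller block
```

Because the blocks stay contiguous, this kind of resizing preserves the data locality the paper identifies as the other key factor in the step duration.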