
    Substructure recovery by 3D Discrete Wavelet Transforms

    We present and discuss a method to identify substructures in combined angular-redshift samples of galaxies within clusters. The method relies on the Discrete Wavelet Transform (hereafter DWT) and has already been applied to the analysis of the Coma cluster (Gambera et al. 1997). The main new ingredient of our method with respect to previous studies is that we use a 3D data set rather than a 2D one. We test the method on mock cluster catalogues with spatially localized substructures and on an N-body simulation. Our main conclusion is that our method is able to identify the existing substructures provided that: a) the subclumps are detached in part or all of the phase space; b) one has a statistically significant number of redshifts, increasing as the distance decreases due to redshift distortions; c) one knows a priori the scale on which substructures are to be expected. We have found that an accurate recovery requires both a significant number of galaxies (≈ 200 for clusters at z ≥ 0.4, or about 800 at z ≤ 0.4) and a limiting magnitude for completeness of m_B = 16. The only true limitation of our method seems to be the necessity of knowing a priori the scale on which the substructure is to be found. This is an intrinsic drawback of the method, and no improvement in numerical codes based on this technique could make up for it. Comment: Accepted for publication in MNRAS. 7 pages, 2 figures
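
    As a purely illustrative sketch (not the authors' code) of the idea described above, the snippet below bins a 3D (angular + redshift) galaxy sample onto a regular grid with NumPy and applies a multi-level 3D DWT with PyWavelets, flagging cells whose detail coefficients at a chosen (a priori) scale stand out. The grid size, wavelet family, decomposition level and the 3-sigma cut are assumptions made only for the example.

    import numpy as np
    import pywt

    def find_substructure(ra, dec, z, grid=32, wavelet="haar", level=2, nsigma=3.0):
        # Bin the angular-redshift sample onto a regular 3D grid.
        density, _edges = np.histogramdd(np.column_stack([ra, dec, z]), bins=grid)

        # Multi-level 3D DWT: coeffs[0] is the approximation, the following
        # dicts hold the directional detail coefficients, coarsest level first.
        coeffs = pywt.wavedecn(density, wavelet=wavelet, level=level)

        # Take the detail coefficients at the (a priori chosen) scale and
        # keep cells that deviate strongly from the overall fluctuation level.
        details = coeffs[1]
        response = np.stack([np.abs(v) for v in details.values()]).max(axis=0)
        threshold = nsigma * response.std()
        return response > threshold   # boolean mask of candidate substructure cells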

    A modified parallel tree code for N-body simulation of the Large Scale Structure of the Universe

    N-body codes to perform simulations of the origin and evolution of the Large Scale Structure of the Universe have improved significantly over the past decade, both in terms of the resolution achieved and of the reduction of CPU time. However, state-of-the-art N-body codes hardly allow one to deal with particle numbers larger than a few 10^7, even on the largest parallel systems. In order to allow simulations with larger resolution, we have first re-considered the grouping strategy described in Barnes (1990) (hereafter B90) and applied it, with some modifications, to our WDSH-PT (Work and Data SHaring - Parallel Tree) code. In the first part of this paper we give a short description of the code, which adopts the Barnes & Hut (1986) algorithm (hereafter BH), and in particular of the memory and work distribution strategy used to handle the data distribution on a CC-NUMA machine like the CRAY T3E system. In the second part of the paper we describe the modification to the Barnes grouping strategy that we have devised to improve the performance of the WDSH-PT code. We exploit the property that nearby particles have similar interaction lists. This idea was already explored in B90, where an interaction list is built that applies everywhere within a cell C_group containing a small number of particles N_crit, and is reused for each particle p ∈ C_group in turn. We instead assume that every particle p has the same interaction list. This makes it possible to reduce the CPU time and improve performance, allowing us to run simulations with a large number of particles (N ~ 10^7-10^9) in non-prohibitive times. Comment: 13 pages and 7 figures
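
    The following is a schematic Python rendering (not the WDSH-PT code itself) of the B90-style grouping: a single interaction list is built for a small cell C_group and reused for every particle it contains. The node layout, the opening criterion relative to the group's bounding radius and the softening are simplifying assumptions for illustration only.

    import numpy as np

    class Node:
        def __init__(self, center, size, mass, com, children=(), particles=()):
            self.center, self.size = np.asarray(center, float), float(size)
            self.mass, self.com = float(mass), np.asarray(com, float)
            self.children, self.particles = list(children), list(particles)

    def interaction_list(root, group, theta=0.6):
        """Collect nodes accepted for the whole group cell, not per particle."""
        accepted, stack = [], [root]
        r_group = 0.5 * np.sqrt(3.0) * group.size       # bounding radius of C_group
        while stack:
            node = stack.pop()
            if node is group:
                continue        # a real code would handle C_group's own particles directly
            d = np.linalg.norm(node.com - group.center) - r_group
            if not node.children or (d > 0 and node.size / d < theta):
                accepted.append(node)                   # treat the node as a pseudo-particle
            else:
                stack.extend(node.children)
        return accepted

    def group_forces(positions, group, ilist, eps=1e-2):
        """Reuse the shared interaction list for every particle p in C_group."""
        acc = np.zeros((len(group.particles), 3))
        for i, p in enumerate(group.particles):
            for node in ilist:
                dr = node.com - positions[p]
                r2 = dr.dot(dr) + eps**2
                acc[i] += node.mass * dr / r2**1.5      # G = 1 units
        return acc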

    Visualization, Exploration and Data Analysis of Complex Astrophysical Data

    In this paper we show how advanced visualization tools can help researchers investigate and extract information from data. The focus is on VisIVO, a novel open-source graphics application which blends high-performance multidimensional visualization techniques with up-to-date technologies to cooperate with other applications and to access remote, distributed data archives. VisIVO supports the standards defined by the International Virtual Observatory Alliance in order to make it interoperable with VO data repositories. The paper describes the basic technical details and features of the software and dedicates a large section to showing how VisIVO can be used in several scientific cases. Comment: 32 pages, 15 figures, accepted by PASP
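
    VisIVO itself is a compiled graphics application; as a language-agnostic illustration of the kind of multidimensional point-cloud exploration it performs, the sketch below maps columns of a tabular catalogue onto 3D position and colour with matplotlib. The file name and column layout are assumptions, and this is in no way VisIVO's actual API.

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical whitespace-separated table with columns: x, y, z, value.
    data = np.loadtxt("catalogue.txt", skiprows=1)
    x, y, z, value = data[:, 0], data[:, 1], data[:, 2], data[:, 3]

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    sc = ax.scatter(x, y, z, c=value, s=1, cmap="viridis")
    fig.colorbar(sc, label="value (e.g. velocity or density)")
    ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
    plt.show()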

    Astrocomp: a web service for the use of high performance computers in Astrophysics

    Astrocomp is a joint project developed by the INAF - Astrophysical Observatory of Catania, the University of Roma La Sapienza and ENEA. The project has the goal of providing the scientific community with a web-based, user-friendly interface that allows running parallel codes on a set of high-performance computing (HPC) resources, without any need for specific knowledge of parallel programming or operating system commands. Astrocomp also provides computing time on a set of parallel computing systems, available to authorized users. At present, the portal makes a few codes available, among which: FLY, a cosmological code for studying three-dimensional collisionless self-gravitating systems with periodic boundary conditions; ATD, a parallel tree code for the simulation of the dynamics of boundary-free collisional and collisionless self-gravitating systems; and MARA, a code for stellar light-curve analysis. Other codes are going to be added to the portal. Comment: LaTeX with elsart.cls and harvard.sty (included). 7 pages. To be submitted to a specific journal
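
    As a toy sketch of the pattern the portal describes (not the Astrocomp implementation), the snippet below shows a web endpoint that receives a run request for a named code and forwards it to a batch scheduler on an HPC resource. The Flask route, the Slurm call and the script registry are all assumptions made for illustration.

    import subprocess
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    # Hypothetical registry mapping portal code names to submission scripts.
    CODES = {"FLY": "run_fly.sh", "ATD": "run_atd.sh", "MARA": "run_mara.sh"}

    @app.route("/submit", methods=["POST"])
    def submit():
        job = request.get_json()
        script = CODES.get(job.get("code"))
        if script is None:
            return jsonify(error="unknown code"), 400
        # Hand the job to the scheduler (Slurm shown purely as an example);
        # a real portal would also authenticate the user and track the job.
        proc = subprocess.run(
            ["sbatch", f"--ntasks={job.get('ntasks', 1)}", script],
            capture_output=True, text=True,
        )
        return jsonify(queued=proc.returncode == 0,
                       scheduler_output=proc.stdout.strip())

    if __name__ == "__main__":
        app.run()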

    A Parallel Tree code for large N-body simulation: dynamic load balance and data distribution on CRAY T3D system

    N-body algorithms for long-range unscreened interactions like gravity belong to a class of highly irregular problems whose optimal solution is a challenging task for present-day massively parallel computers. In this paper we describe a strategy for optimal memory and work distribution which we have applied to our parallel implementation of the Barnes & Hut (1986) recursive tree scheme on a Cray T3D using the CRAFT programming environment. We have performed a series of tests to find an "optimal data distribution" in the T3D memory and to identify a strategy for the "Dynamic Load Balance" in order to obtain good performance when running large simulations (more than 10 million particles). The results of the tests show that the step duration depends on two main factors: data locality and T3D network contention. By increasing data locality we are able to minimize the step duration when the closest bodies (direct interactions) tend to be located in the same PE's local memory (contiguous block subdivision, high granularity), whereas the tree properties have a fine-grain distribution. In very large simulations an unbalanced load arises due to network contention. To remedy this we have devised an automatic work redistribution mechanism which provides a good Dynamic Load Balance at the price of an insignificant overhead. Comment: 16 pages with 11 figures included (LaTeX, elsart.style). Accepted by Computer Physics Communications
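
    One way to think of the automatic work redistribution described above is as re-cutting the particle ordering into contiguous blocks of roughly equal measured cost. The sketch below (not the actual CRAFT/T3D implementation) shows such a prefix-sum partitioning; the cost model (interaction-list length from the previous step) and the PE count are assumptions for the example.

    import numpy as np

    def rebalance(costs, n_pe):
        """Return, for each PE, the (start, stop) slice of particles it owns.

        costs[i] is the work measured for particle i in the last step;
        particles are assumed already ordered so that neighbours stay in
        the same contiguous block (preserving data locality).
        """
        cumulative = np.cumsum(costs, dtype=float)
        total = cumulative[-1]
        # Cut where the running cost crosses k/n_pe of the total work.
        targets = total * np.arange(1, n_pe) / n_pe
        cuts = np.searchsorted(cumulative, targets)
        starts = np.concatenate(([0], cuts))
        stops = np.concatenate((cuts, [len(costs)]))
        return list(zip(starts, stops))

    # Example: a million particles with skewed per-particle costs on 16 PEs.
    rng = np.random.default_rng(0)
    costs = rng.pareto(2.0, size=1_000_000) + 1.0
    blocks = rebalance(costs, n_pe=16)
    loads = [costs[a:b].sum() for a, b in blocks]
    print(max(loads) / np.mean(loads))   # imbalance factor, close to 1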