GADGET: A code for collisionless and gasdynamical cosmological simulations
We describe the newly written code GADGET which is suitable both for
cosmological simulations of structure formation and for the simulation of
interacting galaxies. GADGET evolves self-gravitating collisionless fluids with
the traditional N-body approach, and a collisional gas by smoothed particle
hydrodynamics. Along with the serial version of the code, we discuss a parallel
version that has been designed to run on massively parallel supercomputers with
distributed memory. While both versions use a tree algorithm to compute
gravitational forces, the serial version of GADGET can optionally employ the
special-purpose hardware GRAPE instead of the tree. Periodic boundary
conditions are supported by means of an Ewald summation technique. The code
uses individual and adaptive timesteps for all particles, and it combines this
with a scheme for dynamic tree updates. Due to its Lagrangian nature, GADGET
thus allows a very large dynamic range to be bridged, both in space and time.
So far, GADGET has been successfully used to run simulations with up to 7.5e7
particles, including cosmological studies of large-scale structure formation,
high-resolution simulations of the formation of clusters of galaxies, as well
as workstation-sized problems of interacting galaxies. In this study, we detail
the numerical algorithms employed, and show various tests of the code. We
publically release both the serial and the massively parallel version of the
code.

Comment: 32 pages, 14 figures, replaced to match published version in New
Astronomy. For download of the code, see
http://www.mpa-garching.mpg.de/gadget (new version 1.1 available)
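The tree algorithm mentioned above rests on replacing a distant group of particles by a low-order multipole of its mass distribution. The following sketch (illustrative only, not GADGET's actual code, and in 2D for brevity) compares a direct force summation against the monopole approximation for a clump whose opening angle s/d is small:

```python
# Sketch of the multipole idea behind tree gravity: a distant clump is
# replaced by its monopole (total mass at the centre of mass). A tree code
# accepts such a node when its opening angle s/d is below a threshold theta.
import math
import random

random.seed(1)
G = 1.0  # gravitational constant in code units

def direct_accel(pos, particles):
    """Exact pairwise acceleration at pos from a list of (x, y, m)."""
    ax = ay = 0.0
    for (x, y, m) in particles:
        dx, dy = x - pos[0], y - pos[1]
        r3 = (dx * dx + dy * dy) ** 1.5
        ax += G * m * dx / r3
        ay += G * m * dy / r3
    return ax, ay

def monopole_accel(pos, particles):
    """Monopole approximation: all mass concentrated at the centre of mass."""
    M = sum(m for (_, _, m) in particles)
    cx = sum(x * m for (x, _, m) in particles) / M
    cy = sum(y * m for (_, y, m) in particles) / M
    dx, dy = cx - pos[0], cy - pos[1]
    r3 = (dx * dx + dy * dy) ** 1.5
    return G * M * dx / r3, G * M * dy / r3

# A clump of side ~1 centred near (10.5, 0.5); evaluated from the origin,
# so the opening angle is s/d ~ 0.1 and the monopole error is ~(s/d)^2.
clump = [(10 + random.random(), random.random(), 1.0) for _ in range(100)]
ad = direct_accel((0.0, 0.0), clump)
am = monopole_accel((0.0, 0.0), clump)
err = math.hypot(ad[0] - am[0], ad[1] - am[1]) / math.hypot(*ad)
print(f"relative force error at s/d ~ 0.1: {err:.2e}")
```

The quadratic scaling of the error with opening angle is what lets a tree walk trade a controlled amount of force accuracy for an O(N log N) cost.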
Hydra: A Parallel Adaptive Grid Code
We describe the first parallel implementation of an adaptive
particle-particle, particle-mesh code with smoothed particle hydrodynamics.
Parallelisation of the serial code, "Hydra", is achieved by using CRAFT, a
Cray proprietary language which allows rapid implementation of a serial code on
a parallel machine by allowing global addressing of distributed memory.
The collisionless variant of the code has already completed several 16.8
million particle cosmological simulations on a 128 processor Cray T3D whilst
the full hydrodynamic code has completed several 4.2 million particle combined
gas and dark matter runs. The efficiency of the code now allows parameter-space
explorations to be performed routinely using particles of each species.
A complete run including gas cooling, from high redshift to the present epoch
requires approximately 10 hours on 64 processors.
In this paper we present implementation details and results of the
performance and scalability of the CRAFT version of Hydra under varying degrees
of particle clustering.

Comment: 23 pages, LaTeX plus encapsulated figures
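The particle-mesh half of an adaptive P3M scheme like the one described above begins by assigning particle masses to a density grid. A minimal 1-D cloud-in-cell (CIC) assignment, shown here as an illustrative sketch rather than Hydra's code, shares each particle's mass between its two nearest cell centres with periodic wrap-around:

```python
# Minimal 1-D cloud-in-cell (CIC) mass assignment with periodic boundaries.
# This is the standard mesh step of a PM/P3M code, sketched for illustration.
import math

def cic_assign(positions, masses, ngrid, boxsize):
    """Share each particle's mass between the two nearest cell centres."""
    rho = [0.0] * ngrid
    h = boxsize / ngrid                 # cell size
    for x, m in zip(positions, masses):
        s = x / h - 0.5                 # position in cell units, cell-centred
        i = math.floor(s)
        f = s - i                       # fraction going to the right-hand cell
        rho[i % ngrid] += m * (1.0 - f)
        rho[(i + 1) % ngrid] += m * f
    return rho

rho = cic_assign([0.1, 3.7, 9.9], [1.0, 2.0, 0.5], ngrid=8, boxsize=10.0)
print(f"total mass on grid: {sum(rho):.3f}")  # CIC conserves mass: 3.500
```

The gridded density is then Fourier-transformed to solve for the long-range potential, while close pairs are corrected by the direct particle-particle sum.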
The cosmological simulation code GADGET-2
We discuss the cosmological simulation code GADGET-2, a new massively
parallel TreeSPH code, capable of following a collisionless fluid with the
N-body method, and an ideal gas by means of smoothed particle hydrodynamics
(SPH). Our implementation of SPH manifestly conserves energy and entropy in
regions free of dissipation, while allowing for fully adaptive smoothing
lengths. Gravitational forces are computed with a hierarchical multipole
expansion, which can optionally be applied in the form of a TreePM algorithm,
where only short-range forces are computed with the `tree'-method while
long-range forces are determined with Fourier techniques. Time integration is
based on a quasi-symplectic scheme where long-range and short-range forces can
be integrated with different timesteps. Individual and adaptive short-range
timesteps may also be employed. The domain decomposition used in the
parallelisation algorithm is based on a space-filling curve, resulting in high
flexibility and tree force errors that do not depend on the way the domains are
cut. The code is efficient in terms of memory consumption and required
communication bandwidth. It has been used to compute the first cosmological
N-body simulation with more than 10^10 dark matter particles, reaching a
homogeneous spatial dynamic range of 10^5 per dimension in a 3D box. It has
also been used to carry out very large cosmological SPH simulations that
account for radiative cooling and star formation, reaching total particle
numbers of more than 250 million. We present the algorithms used by the code
and discuss their accuracy and performance using a number of test problems.
GADGET-2 is publicly released to the research community.

Comment: submitted to MNRAS, 31 pages, 20 figures (reduced resolution), code
available at http://www.mpa-garching.mpg.de/gadget
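The TreePM splitting mentioned above decomposes the Newtonian potential into a short-range piece handled by the tree walk and a smooth long-range piece handled on the mesh. A common way to do this, consistent with the erfc-based split used by TreePM codes, factors 1/r with complementary error functions (a sketch in code units with unit mass):

```python
# TreePM force splitting: 1/r = erfc(r/2rs)/r + erf(r/2rs)/r.
# The short-range part decays quickly beyond the split scale rs and is
# computed by the tree; the long-range part is smooth and suits an FFT mesh.
import math

def phi_short(r, rs):
    """Short-range potential, evaluated in real space by the tree walk."""
    return math.erfc(r / (2.0 * rs)) / r

def phi_long(r, rs):
    """Long-range potential, smooth enough for the particle-mesh solver."""
    return math.erf(r / (2.0 * rs)) / r

rs = 1.0
for r in (0.5, 1.0, 5.0):
    total = phi_short(r, rs) + phi_long(r, rs)
    print(f"r={r}: short={phi_short(r, rs):.3e}  "
          f"long={phi_long(r, rs):.3e}  sum={total:.3e}  1/r={1 / r:.3e}")
```

Because erf + erfc = 1, the two pieces always sum to the exact 1/r, while the rapid decay of erfc confines the tree walk to a few split scales around each particle, which is what allows the long-range and short-range forces to be integrated with different timesteps.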
Ptolemaic Indexing
This paper discusses a new family of bounds for use in similarity search,
related to those used in metric indexing, but based on Ptolemy's inequality,
rather than the metric axioms. Ptolemy's inequality holds for the well-known
Euclidean distance, but is also shown here to hold for quadratic form metrics
in general, with Mahalanobis distance as an important special case. The
inequality is examined empirically on both synthetic and real-world data sets
and is also found to hold approximately, with a very low degree of error, for
important distances such as the angular pseudometric and several Lp norms.
Indexing experiments demonstrate substantially increased filtering power
compared to existing, triangular methods. It is also shown that combining
Ptolemaic and triangular filtering can lead to better results than using
either approach on its own.
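The Ptolemaic bound can be made concrete with two pivots. Rearranging Ptolemy's inequality d(q,o)·d(p1,p2) ≥ |d(q,p1)·d(o,p2) − d(q,p2)·d(o,p1)| gives a lower bound on the query-object distance from precomputed pivot distances alone, which is what enables filtering. A small Euclidean sketch (illustrative, not the paper's code):

```python
# Ptolemaic lower-bounding for similarity search: given two pivots p1, p2
# and precomputed distances to them, bound d(q, o) from below without
# computing it. Objects whose bound exceeds the search radius are filtered.
import math

def dist(a, b):
    """Euclidean distance, for which Ptolemy's inequality holds exactly."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def ptolemaic_lower_bound(q, o, p1, p2):
    """Lower bound on d(q, o) via Ptolemy's inequality:
    d(q,o) * d(p1,p2) >= |d(q,p1)*d(o,p2) - d(q,p2)*d(o,p1)|."""
    return abs(dist(q, p1) * dist(o, p2)
               - dist(q, p2) * dist(o, p1)) / dist(p1, p2)

q, o = (0.0, 0.0), (3.0, 4.0)
p1, p2 = (1.0, 0.0), (0.0, 2.0)
lb = ptolemaic_lower_bound(q, o, p1, p2)
print(f"Ptolemaic lower bound {lb:.3f} <= true distance {dist(q, o):.3f}")
```

In an index, d(o, p1) and d(o, p2) are stored at build time, so the bound costs only a few multiplications per candidate, just like the familiar triangle-inequality bound it complements.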
The study of probability model for compound similarity searching
The main task of an Information Retrieval (IR) system is to retrieve documents relevant to the user's query. One of the most popular IR retrieval models is the Vector Space Model. This model assumes relevance based on similarity, defined as the distance between query and document in the concept space. All currently existing chemical compound database systems have adapted the vector space model to calculate the similarity of a database entry to a query compound. However, this model assumes that the fragments represented by the bits are independent of one another, which is not necessarily true. Hence, the possibility of applying another IR model, the Probabilistic Model, to chemical compound searching is explored. This model estimates the probability that a chemical structure has the same bioactivity as a target compound. It is envisioned that by ranking chemical structures in decreasing order of their probability of relevance to the query structure, the effectiveness of a molecular similarity searching system can be increased. Both the fragment-dependence and fragment-independence assumptions are considered in improving the compound similarity searching system. After conducting a series of simulated similarity searches, it is concluded that the Probabilistic Model approaches do perform better than the existing similarity searching, giving better results on all evaluation criteria. Of the probability models compared, the BD model showed improvement over the BIR model.
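The two ranking styles contrasted above can be sketched side by side: the vector-space baseline scores a candidate fingerprint by Tanimoto similarity to the query, while a binary-independence-style probabilistic score sums per-fragment log-odds weights. This is an illustrative sketch only; the per-bit probabilities below are hypothetical placeholders, not fitted values from the paper:

```python
# Two ways to rank binary chemical fingerprints against a query:
# (1) vector-space Tanimoto similarity, (2) a binary-independence-model
# style log-odds score. Probabilities here are made-up placeholders.
import math

def tanimoto(a, b):
    """Tanimoto similarity of two fingerprints given as sets of set bits."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def bir_score(query, candidate, p_active, p_background):
    """Sum relevance weights of fragment bits shared with the query,
    assuming fragments are independent (the BIR-style assumption)."""
    score = 0.0
    for bit in candidate & query:
        pa = p_active.get(bit, 0.5)      # P(bit set | active compound)
        pb = p_background.get(bit, 0.5)  # P(bit set | background compound)
        score += math.log((pa * (1 - pb)) / (pb * (1 - pa)))
    return score

query = {1, 2, 3, 5}
db = {"A": {1, 2, 3, 5}, "B": {1, 2, 9}, "C": {7, 8}}

ranked = sorted(db, key=lambda k: tanimoto(query, db[k]), reverse=True)
print("Tanimoto ranking:", ranked)

# Hypothetical fragment statistics: bits enriched in active compounds.
p_active = {1: 0.8, 2: 0.7, 3: 0.6, 5: 0.9}
p_background = {1: 0.3, 2: 0.3, 3: 0.3, 5: 0.3}
score_A = bir_score(query, db["A"], p_active, p_background)
score_C = bir_score(query, db["C"], p_active, p_background)
print(f"BIR-style scores: A={score_A:.2f}, C={score_C:.2f}")
```

The probabilistic score leaves room to model fragment dependencies (as in the BD variant) by replacing the per-bit sum with weights conditioned on co-occurring fragments.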
The Five Factor Model of personality and evaluation of drug consumption risk
The problem of evaluating an individual's risk of drug consumption and misuse
is highly important. An online survey methodology was employed to collect data
including Big Five personality traits (NEO-FFI-R), impulsivity (BIS-11),
sensation seeking (ImpSS), and demographic information. The data set contained
information on the consumption of 18 central nervous system psychoactive drugs.
Correlation analysis demonstrated the existence of groups of drugs with
strongly correlated consumption patterns. Three correlation pleiades were
identified, named after the central drug in each pleiad: the ecstasy, heroin, and
benzodiazepines pleiades. An exhaustive search was performed to select the most
effective subset of input features and data mining methods to classify users
and non-users for each drug and pleiad. A number of classification methods were
employed (decision tree, random forest, k-nearest neighbors, linear
discriminant analysis, Gaussian mixture, probability density function
estimation, logistic regression, and naïve Bayes) and the most effective
classifier was selected for each drug. The quality of classification was
surprisingly high with sensitivity and specificity (evaluated by leave-one-out
cross-validation) being greater than 70% for almost all classification tasks.
The best results, with sensitivity and specificity greater than 75%, were
achieved for cannabis, crack, ecstasy, legal highs, LSD, and volatile substance
abuse (VSA).

Comment: Significantly extended report with 67 pages, 27 tables, 21 figures
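The leave-one-out evaluation used above can be sketched in a few lines: each sample is held out in turn, a classifier is fit on the rest, and sensitivity and specificity are accumulated from the confusion counts. This toy sketch uses a 1-nearest-neighbour classifier on made-up, well-separated data purely for illustration; the paper compares eight different classifiers, not this one:

```python
# Leave-one-out estimation of sensitivity and specificity with a toy
# 1-nearest-neighbour classifier. Data points are (features, label) pairs
# with label 1 = user, 0 = non-user; the data below are invented.
def one_nn(train, x):
    """Label of the training point closest to x (squared Euclidean)."""
    return min(train,
               key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))[1]

def loo_sens_spec(data):
    """Leave-one-out confusion counts -> (sensitivity, specificity)."""
    tp = tn = fp = fn = 0
    for i, (x, y) in enumerate(data):
        pred = one_nn(data[:i] + data[i + 1:], x)  # hold sample i out
        if y == 1:
            tp += pred == 1
            fn += pred == 0
        else:
            tn += pred == 0
            fp += pred == 1
    return tp / (tp + fn), tn / (tn + fp)

# Toy 1-D data: non-users clustered near 0.0, users clustered near 1.0.
data = [((0.0,), 0), ((0.1,), 0), ((0.2,), 0),
        ((0.9,), 1), ((1.0,), 1), ((1.1,), 1)]
sens, spec = loo_sens_spec(data)
print(f"sensitivity={sens:.2f}  specificity={spec:.2f}")
```

Leave-one-out is attractive for modest sample sizes because every sample serves as a test case exactly once, at the cost of refitting the classifier once per sample.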