USHER: an algorithm for particle insertion in dense fluids
The insertion of solvent particles in molecular dynamics simulations of
complex fluids is required in many situations involving open systems, but this
challenging task has been scarcely explored in the literature. We propose a
simple and fast algorithm (USHER) that inserts the new solvent particles at
locations where the potential energy has the desired prespecified value. For
instance, this value may be set equal to the system's excess energy per
particle, in such way that the inserted particles are energetically
indistinguishable from the other particles present. During the search for the
insertion site, the USHER algorithm uses a steepest descent iterator with a
displacement whose magnitude is adapted to the local features of the energy
landscape. The only adjustable parameter in the algorithm is the maximum
displacement and we show that its optimal value can be extracted from an
analysis of the structure of the potential energy landscape. We present
insertion tests in periodic and non-periodic systems filled with a
Lennard-Jones fluid whose density ranges from moderate to high values.
Comment: 10 pages (LaTeX), 8 figures (PostScript); J. Chem. Phys. (in press)
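The adaptive-step descent described in the abstract can be illustrated with a minimal, hypothetical 1D sketch (not the authors' implementation, which works in three dimensions and includes special handling for particle overlaps): the trial position moves along the negative gradient with a step proportional to the remaining energy gap, capped by the maximum displacement.

```cpp
#include <cmath>
#include <cassert>
#include <functional>

// Hypothetical 1D sketch of a USHER-style search (illustration only):
// steepest descent toward a prespecified target potential energy, with the
// step magnitude adapted to the local landscape and capped by the maximum
// displacement ds_max -- the algorithm's only adjustable parameter.
double usher_search_1d(const std::function<double(double)>& U,
                       const std::function<double(double)>& dU,
                       double x, double target, double ds_max,
                       double tol = 1e-8, int max_iter = 1000) {
    for (int i = 0; i < max_iter; ++i) {
        double e = U(x);
        if (std::fabs(e - target) < tol) break;  // landed on the target level
        double g = dU(x);
        // Step proportional to the remaining energy gap (a damped Newton
        // step on U(x) = target), never larger in magnitude than ds_max.
        double step = (e - target) / (std::fabs(g) + 1e-12);
        if (std::fabs(step) > ds_max) step = (step > 0 ? ds_max : -ds_max);
        x -= (g > 0 ? 1.0 : -1.0) * step;        // move along -grad U
    }
    return x;
}
```

For example, with the toy potential U(x) = x², a target energy of 1, and ds_max = 0.5, a search started at x = 3 converges to a point where U ≈ 1.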
Practical Sparse Matrices in C++ with Hybrid Storage and Template-Based Expression Optimisation
Despite the importance of sparse matrices in numerous fields of science,
software implementations remain difficult to use for non-expert users,
generally requiring the understanding of underlying details of the chosen
sparse matrix storage format. In addition, to achieve good performance, several
formats may need to be used in one program, requiring explicit selection and
conversion between the formats. This can be both tedious and error-prone,
especially for non-expert users. Motivated by these issues, we present a
user-friendly and open-source sparse matrix class for the C++ language, with a
high-level application programming interface deliberately similar to the widely
used MATLAB language. This facilitates prototyping directly in C++ and aids the
conversion of research code into production environments. The class internally
uses two main approaches to achieve efficient execution: (i) a hybrid storage
framework, which automatically and seamlessly switches between three underlying
storage formats (compressed sparse column, Red-Black tree, coordinate list)
depending on which format is best suited and/or available for specific
operations, and (ii) a template-based meta-programming framework to
automatically detect and optimise execution of common expression patterns.
Empirical evaluations on large sparse matrices with various densities of
non-zero elements demonstrate the advantages of the hybrid storage framework
and the expression optimisation mechanism.
Comment: extended and revised version of an earlier conference paper
arXiv:1805.0338
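The format-switching idea in (i) can be sketched in a few lines of C++ (a hypothetical toy, not the library's actual internals): element insertions go into an ordered map standing in for the Red-Black-tree format, and the matrix is converted on demand to compressed sparse column (CSC) form when an operation that favours CSC, such as a matrix-vector product, is requested.

```cpp
#include <cassert>
#include <map>
#include <utility>
#include <vector>

// Hypothetical sketch of hybrid sparse storage (illustration only).
class HybridSparse {
public:
    HybridSparse(int rows, int cols) : n_rows(rows), n_cols(cols) {}

    // Cheap random-access insertion via the tree format.
    void set(int r, int c, double v) { tree[{c, r}] = v; csc_valid = false; }

    // Matrix-vector product: silently switches to CSC, the format
    // best suited for this operation.
    std::vector<double> multiply(const std::vector<double>& x) {
        if (!csc_valid) to_csc();
        std::vector<double> y(n_rows, 0.0);
        for (int c = 0; c < n_cols; ++c)
            for (int k = col_ptr[c]; k < col_ptr[c + 1]; ++k)
                y[row_idx[k]] += vals[k] * x[c];
        return y;
    }

private:
    void to_csc() {
        col_ptr.assign(n_cols + 1, 0);
        row_idx.clear(); vals.clear();
        for (const auto& e : tree) ++col_ptr[e.first.first + 1];
        for (int c = 0; c < n_cols; ++c) col_ptr[c + 1] += col_ptr[c];
        for (const auto& e : tree) {  // map iterates sorted by (col,row)
            row_idx.push_back(e.first.second);
            vals.push_back(e.second);
        }
        csc_valid = true;
    }

    int n_rows, n_cols;
    std::map<std::pair<int, int>, double> tree;  // (col,row) -> value
    std::vector<int> col_ptr, row_idx;
    std::vector<double> vals;
    bool csc_valid = false;
};
```

The point of the hybrid design is that the caller never selects a format: writes hit the insertion-friendly structure, reads trigger a one-off conversion to the read-friendly one.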
Revisiting Date and Party Hubs: Novel Approaches to Role Assignment in Protein Interaction Networks
The idea of 'date' and 'party' hubs has been influential in the study of
protein-protein interaction networks. Date hubs display low co-expression with
their partners, whilst party hubs have high co-expression. It was proposed that
party hubs are local coordinators whereas date hubs are global connectors. Here
we show that the reported importance of date hubs to network connectivity can
in fact be attributed to a tiny subset of them. Crucially, these few, extremely
central, hubs do not display particularly low expression correlation,
undermining the idea of a link between this quantity and hub function. The
date/party distinction was originally motivated by an approximately bimodal
distribution of hub co-expression; we show that this feature is not always
robust to methodological changes. Additionally, topological properties of hubs
do not in general correlate with co-expression. Thus, we suggest that a
date/party dichotomy is not meaningful and it might be more useful to conceive
of roles for protein-protein interactions rather than individual proteins. We
find significant correlations between interaction centrality and the functional
similarity of the interacting proteins.
Comment: 27 pages, 5 main figures, 4 supplementary figures
FPGA-based data partitioning
Implementing parallel operators on multi-core machines often involves a data partitioning step that divides the data into cache-sized blocks and arranges them so as to allow concurrent threads to process them in parallel. Data partitioning is expensive, in some cases accounting for up to 90% of the cost of, e.g., a parallel hash join. In this paper we explore the use of an FPGA to accelerate data partitioning. We do so in the context of new hybrid architectures where the FPGA resides as a co-processor on one socket, with coherent access to the same memory as the CPU on the other socket. Such an architecture reduces data transfer overheads between the CPU and the FPGA, enabling hybrid operator execution where the partitioning happens on the FPGA and the build and probe phases of a join happen on the CPU. Our experiments demonstrate that FPGA-based partitioning is significantly faster and more robust than CPU-based partitioning. The results open interesting options as FPGAs become more tightly integrated with the CPU.
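The partitioning step being offloaded can be illustrated with a simple CPU-side sketch (hypothetical code; the hash function and tuple layout are simplifying assumptions, not the paper's design): keys are scattered into 2^bits partitions by the low bits of their hash, so that each partition later fits in cache during the build and probe phases of a join.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative sketch of hash partitioning (assumed details, not the
// paper's FPGA implementation): scatter keys into 2^bits buckets using
// the low bits of a cheap multiplicative hash.
std::vector<std::vector<uint64_t>> partition(const std::vector<uint64_t>& keys,
                                             int bits) {
    const uint64_t mask = (1u << bits) - 1;
    std::vector<std::vector<uint64_t>> parts(1u << bits);
    for (uint64_t k : keys) {
        uint64_t h = k * 0x9e3779b97f4a7c15ULL;  // cheap multiplicative hash
        parts[h & mask].push_back(k);            // scatter by low hash bits
    }
    return parts;
}
```

The cost the paper targets is precisely this scatter loop: it is memory-bound and cache-unfriendly on a CPU, which is why moving it to a coherent FPGA co-processor pays off.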