Monte Carlo Algorithm for Simulating Reversible Aggregation of Multisite Particles
We present an efficient and exact Monte Carlo algorithm to simulate
reversible aggregation of particles with dedicated binding sites. The method
introduces a novel data structure, the dynamic bond tree, to record clusters and
the sequence of bond formations. The algorithm achieves a constant time cost for
processing cluster association and a cost between $O(\log n)$ and $O(n)$
for processing bond dissociation in clusters with $n$ bonds.
The algorithm is statistically exact and can reproduce results obtained by the
standard method. We applied the method to simulate a trivalent-ligand,
bivalent-receptor clustering system and observed sublinear average scaling of
the cost of processing bond dissociation in acyclic aggregation, compared to
the linear scaling with cluster size in standard methods. The algorithm also
demands substantially less memory than the conventional method.
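To make the cost comparison concrete, here is a minimal Python sketch of the standard bookkeeping the dynamic bond tree improves on: clusters stored as explicit adjacency sets, so dissociating a bond requires a traversal that is linear in the cluster size. All names are illustrative; this is the baseline, not the paper's algorithm.

```python
def associate(adj, u, v):
    """Bond particles u and v: constant-time bookkeeping."""
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def dissociate(adj, u, v):
    """Break bond (u, v) and return the fragment containing u.

    The traversal below touches every particle in the fragment; this is
    the linear cost in cluster size that the dynamic bond tree avoids.
    """
    adj[u].discard(v)
    adj[v].discard(u)
    fragment, stack = {u}, [u]
    while stack:
        x = stack.pop()
        for y in adj[x]:
            if y not in fragment:
                fragment.add(y)
                stack.append(y)
    return fragment
```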
Parallel Metric Tree Embedding based on an Algebraic View on Moore-Bellman-Ford
A \emph{metric tree embedding} of expected \emph{stretch} $\alpha$
maps a weighted $n$-node graph $G = (V, E, \omega)$ to a weighted tree
$T = (V_T, E_T, \omega_T)$ with $V \subseteq V_T$ such that, for all
$v, w \in V$, $\operatorname{dist}(v, w, G) \le \operatorname{dist}(v, w, T)$ and
$\operatorname{E}[\operatorname{dist}(v, w, T)] \le \alpha \operatorname{dist}(v, w, G)$.
Such embeddings are highly useful for designing
fast approximation algorithms, as many hard problems are easy to solve on tree
instances. However, to date the best parallel $\operatorname{polylog} n$-depth
algorithm that achieves an asymptotically optimal expected stretch of
$\alpha \in O(\log n)$ requires $\Omega(n^2)$ work and a metric as input.
In this paper, we show how to achieve the same guarantees using
$\operatorname{polylog} n$ depth and $\tilde{O}(m^{1+\epsilon})$
work, where $m = |E|$ and $\epsilon > 0$ is an arbitrarily small constant.
Moreover, one may further reduce the work to $\tilde{O}(m + n^{1+\epsilon})$
at the expense of increasing the expected stretch to $O(\epsilon^{-1} \log n)$.
Our main tool in deriving these parallel algorithms is an algebraic
characterization of a generalization of the classic Moore-Bellman-Ford
algorithm. We consider this framework, which subsumes a variety of previous
"Moore-Bellman-Ford-like" algorithms, to be of independent interest and discuss
it in depth. In our tree embedding algorithm, we leverage it for providing
efficient query access to an approximate metric that allows sampling the tree
using $\operatorname{polylog} n$ depth and $\tilde{O}(m^{1+\epsilon})$ work.
We illustrate the generality and versatility of our techniques by various
examples and a number of additional results.
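As a concrete illustration of the algebraic view described above (a sketch in our own notation, not the paper's framework in full generality): over the min-plus semiring, $h$ rounds of Moore-Bellman-Ford are $h$ matrix-vector products.

```python
import math

INF = math.inf

def min_plus_mv(A, x):
    """One min-plus matrix-vector product: y[v] = min_u (A[v][u] + x[u])."""
    n = len(x)
    return [min(A[v][u] + x[u] for u in range(n)) for v in range(n)]

def mbf(A, source, rounds):
    """Distances from `source` using at most `rounds` hops."""
    n = len(A)
    x = [INF] * n
    x[source] = 0.0
    for _ in range(rounds):
        x = min_plus_mv(A, x)
    return x

# Example: 3-node path graph 0 -1- 1 -2- 2; zero diagonal keeps old values.
A = [[0, 1, INF],
     [1, 0, 2],
     [INF, 2, 0]]
print(mbf(A, source=0, rounds=2))  # [0.0, 1.0, 3.0]
```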
Optimal path for a quantum teleportation protocol in entangled networks
Bellman's optimality principle has been of enormous importance in the
development of whole branches of applied mathematics, computer science, optimal
control theory, economics, decision making, and classical physics. Examples are
numerous: dynamic programming, Markov chains, stochastic dynamics, calculus of
variations, and the brachistochrone problem. Here we show that Bellman's
optimality principle is violated in a teleportation problem on a quantum
network. This implies that finding the optimal fidelity route for teleporting a
quantum state between two distant nodes on a quantum network with bipartite
entanglement is a hard problem that will require further investigation.
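To see why one might expect Bellman's principle to apply, note that if route fidelity composed multiplicatively over links, maximizing it would reduce to a shortest-path problem on weights $-\log f$, where optimal substructure holds; the paper's point is that teleportation fidelity does not compose this way. A sketch of that multiplicative baseline, with a made-up graph:

```python
import heapq
import math

def best_product_route(links, src, dst):
    """Dijkstra on -log(fidelity): valid only if fidelities multiply."""
    graph = {}
    for u, v, f in links:
        w = -math.log(f)
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return math.exp(-d)          # recover the route fidelity
        if d > dist.get(u, math.inf):
            continue
        for v, w in graph.get(u, ()):
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return 0.0

links = [("A", "B", 0.9), ("B", "C", 0.9), ("A", "C", 0.75)]
print(best_product_route(links, "A", "C"))  # 0.81: two good links beat one poor one
```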
Near-equilibrium measurements of nonequilibrium free energy
A central endeavor of thermodynamics is the measurement of free energy
changes. Regrettably, although we can measure the free energy of a system in
thermodynamic equilibrium, typically all we can say about the free energy of a
non-equilibrium ensemble is that it is larger than that of the same system at
equilibrium. Herein, we derive a formally exact expression for the probability
distribution of a driven system, which involves path ensemble averages of the
work over trajectories of the time-reversed system. From this we find a simple
near-equilibrium approximation for the free energy in terms of an excess mean
time-reversed work, which can be experimentally measured on real systems. With
analysis and computer simulation, we demonstrate the accuracy of our
approximations for several simple models.
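As a flavor of the path-ensemble ingredient involved, here is a minimal sketch of measuring work along trajectories of a driven overdamped Langevin particle; the protocol and parameters are illustrative, and the paper's specific time-reversed estimator is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
k, kT, dt, steps, speed = 1.0, 1.0, 1e-3, 2000, 2.0

def work_along_trajectory():
    """Drag a harmonic trap at constant speed and accumulate the work."""
    x, lam, work = 0.0, 0.0, 0.0
    for _ in range(steps):
        # dW = (dH/dlambda) * dlambda, with H = k/2 (x - lambda)^2
        dlam = speed * dt
        work += -k * (x - lam) * dlam
        # Euler-Maruyama step for overdamped Langevin dynamics
        x += -k * (x - lam) * dt + np.sqrt(2 * kT * dt) * rng.normal()
        lam += dlam
    return work

works = np.array([work_along_trajectory() for _ in range(200)])
# On average the work exceeds the free-energy change (zero for this protocol).
print("mean work:", works.mean())
```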
A Scalable Asynchronous Distributed Algorithm for Topic Modeling
Learning meaningful topic models from massive document collections containing
millions of documents and billions of tokens is challenging for two reasons.
First, one needs to deal with a large number of topics (typically on the order
of thousands). Second, one needs a scalable and efficient way of distributing
the computation across multiple machines. In this paper we present a novel
algorithm, F+Nomad LDA, which simultaneously tackles both problems. To handle
a large number of topics we use an appropriately modified Fenwick tree. This
data structure allows us to sample from a multinomial distribution over $K$
items in $O(\log K)$ time. Moreover, when topic counts change, the data
structure can be updated in $O(\log K)$ time.
To distribute the computation across multiple processors, we present a novel
asynchronous framework inspired by the Nomad algorithm of
\cite{YunYuHsietal13}. We show that F+Nomad LDA significantly outperforms the
state of the art on massive problems involving millions of documents,
billions of words, and thousands of topics.
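A minimal sketch of the Fenwick-tree trick described above (class and method names are ours, not from the paper's code): drawing from the multinomial and updating a weight both cost $O(\log K)$.

```python
import random

class FenwickSampler:
    def __init__(self, K):
        self.K = K
        self.tree = [0.0] * (K + 1)      # 1-indexed Fenwick array

    def add(self, i, delta):
        """Add `delta` to the weight of item i (0-indexed) in O(log K)."""
        i += 1
        while i <= self.K:
            self.tree[i] += delta
            i += i & (-i)

    def sample(self):
        """Draw an item with probability proportional to its weight, O(log K)."""
        total, i = 0.0, self.K           # total weight = prefix sum over all items
        while i > 0:
            total += self.tree[i]
            i -= i & (-i)
        r = random.random() * total
        pos = 0
        bit = 1 << self.K.bit_length()   # binary-lifting descent
        while bit:
            nxt = pos + bit
            if nxt <= self.K and self.tree[nxt] <= r:
                r -= self.tree[nxt]
                pos = nxt
            bit >>= 1
        return pos                       # 0-indexed sampled item

s = FenwickSampler(4)
for i, w in enumerate([0.1, 0.2, 0.3, 0.4]):
    s.add(i, w)
s.add(2, 1.0)                            # topic counts change: O(log K) update
print(s.sample())                        # most often 2 after the update
```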
Solving mazes with memristors: a massively-parallel approach
Solving mazes is not just a fun pastime. Mazes are prototype models in graph theory, topology, robotics, traffic optimization, psychology, and many other areas of science and technology. However, as maze complexity increases, finding a solution becomes cumbersome and very time-consuming. Here, we show that a network of memristors (resistors with memory) can solve such a non-trivial problem quite easily. In particular, maze solving by the network of memristors occurs in a massively parallel fashion, since all memristors in the network participate simultaneously in the calculation. The result of the calculation is then recorded in the memristors' states and can be used and/or recovered at a later time. Furthermore, the network of memristors finds all possible solutions in multiple-solution mazes and sorts the solution paths according to their length. Our results demonstrate not only the first application of memristive networks to massively parallel computing, but also a novel maze-solving algorithm that could find applications in different research fields.
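A toy numerical analogue of this idea, assuming a simple current-driven memristance model rather than the paper's device equations: maze corridors are memristive links, a voltage is applied across entrance and exit, and resistance drops where current flows, so the link states end up recording the solution. Geometry, update rule, and constants are all illustrative.

```python
import numpy as np

nodes = 6                     # 0 = entrance, 5 = exit
edges = [(0, 1), (1, 5),      # short route: 2 links
         (0, 2), (2, 3), (3, 4), (4, 5)]   # long route: 4 links
M = np.ones(len(edges))       # memristances, all equal initially

for _ in range(100):
    # Kirchhoff solve with V[0] = 1, V[5] = 0 (weighted graph Laplacian)
    G = np.zeros((nodes, nodes))
    for e, (u, v) in enumerate(edges):
        g = 1.0 / M[e]
        G[u, u] += g; G[v, v] += g
        G[u, v] -= g; G[v, u] -= g
    free = list(range(1, 5))                  # interior nodes
    V = np.zeros(nodes); V[0] = 1.0
    b = -G[free, 0] * 1.0                     # right-hand side from V[0] = 1
    V[free] = np.linalg.solve(G[np.ix_(free, free)], b)
    # memory effect: resistance decreases where current flows, within bounds
    I = np.array([(V[u] - V[v]) / M[e] for e, (u, v) in enumerate(edges)])
    M = np.clip(M - 0.02 * np.abs(I), 0.05, 1.0)

print(np.round(M, 2))   # links on the short route end up with the lowest M
```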
Finding flows in the one-way measurement model
The one-way measurement model is a framework for universal quantum
computation, in which algorithms are partially described by a graph G of
entanglement relations on a collection of qubits. A sufficient condition for an
algorithm to perform a unitary embedding between two Hilbert spaces is for the
graph G, together with input/output vertices I, O \subset V(G), to have a flow
in the sense introduced by Danos and Kashefi [quant-ph/0506062]. For the
special case of |I| = |O|, using a graph-theoretic characterization, I show
that such flows are unique when they exist. This leads to an efficient
algorithm for finding flows, by a reduction to solved problems in graph theory.
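One plausible shape of such a reduction, sketched under assumptions of our own (it mirrors, but is not necessarily, the paper's construction): a flow needs a successor map f with (i) f(v) adjacent to v, (ii) v before f(v), and (iii) v before every other neighbour of f(v). Here we pick f by bipartite matching from V\O into V\I along edges of G, then check that the induced order is acyclic.

```python
def find_flow(adj, inputs, outputs):
    """Try to build a successor map f and check it admits a consistent order."""
    left = [v for v in adj if v not in outputs]     # vertices needing successors
    right = {v for v in adj if v not in inputs}     # allowed successor vertices
    match = {}                                      # successor -> predecessor

    def augment(v, seen):                           # Kuhn's matching algorithm
        for w in adj[v]:
            if w in right and w not in seen:
                seen.add(w)
                if w not in match or augment(match[w], seen):
                    match[w] = v
                    return True
        return False

    for v in left:
        if not augment(v, set()):
            return None                             # no successor map exists
    f = {v: w for w, v in match.items()}

    # order constraints: v -> f(v), and v -> w for each w ~ f(v), w != v
    arcs = {v: set() for v in adj}
    for v, fv in f.items():
        arcs[v].add(fv)
        arcs[v].update(w for w in adj[fv] if w != v)
    indeg = {v: 0 for v in adj}
    for v in arcs:
        for w in arcs[v]:
            indeg[w] += 1
    queue = [v for v in adj if indeg[v] == 0]       # Kahn's algorithm
    seen = 0
    while queue:
        v = queue.pop()
        seen += 1
        for w in arcs[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return f if seen == len(adj) else None          # acyclic => consistent order

adj = {1: {2}, 2: {1, 3}, 3: {2}}                   # path graph, I = {1}, O = {3}
print(find_flow(adj, inputs={1}, outputs={3}))      # {1: 2, 2: 3}
```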
A box-covering algorithm for fractal scaling in scale-free networks
A random sequential box-covering algorithm recently introduced to measure the
fractal dimension in scale-free networks is investigated. The algorithm
contains Monte Carlo sequential steps of choosing the position of the center of
each box; vertices already assigned to earlier boxes can thereby split
subsequent boxes into more than one piece, but a split box is counted only
once. We find that allowing such box splitting is a crucial ingredient for
obtaining the fractal scaling of fractal networks; however, it is inessential
for regular lattices and conventional fractal objects embedded in Euclidean
space. Next, the algorithm is viewed from the cluster-growing perspective, in
which boxes are allowed to overlap so that vertices can belong to more than
one box. The number of distinct boxes a vertex belongs to is then distributed
in a heterogeneous manner for scale-free fractal networks, while it is
Poisson-distributed for conventional fractal objects.
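A minimal sketch of a random sequential box covering of this kind; one detail, drawing centers among still-uncovered vertices, is our assumption. Distances are measured on the full graph, so covered vertices inside a ball may split a box into pieces, which is still counted once.

```python
import random
from collections import deque

def box_count(adj, lB):
    """Cover the network with boxes of radius lB; return the number of boxes."""
    uncovered = set(adj)
    boxes = 0
    while uncovered:
        center = random.choice(tuple(uncovered))
        # breadth-first search up to distance lB on the full network
        dist = {center: 0}
        queue = deque([center])
        while queue:
            u = queue.popleft()
            if dist[u] == lB:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        uncovered -= dist.keys()      # assign whatever the ball caught
        boxes += 1
    return boxes
```

Plotting the box count against the box size then exposes the fractal scaling, when present.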
A model for the onset of transport in systems with distributed thresholds for conduction
We present a model supported by simulation to explain the effect of
temperature on the conduction threshold in disordered systems. Arrays with
randomly distributed local thresholds for conduction occur in systems ranging
from superconductors to metal nanocrystal arrays. Thermal fluctuations provide
the energy to overcome some of the local thresholds, effectively erasing them
as far as the global conduction threshold for the array is concerned. We
augment this thermal energy reasoning with percolation theory to predict the
temperature at which the global threshold reaches zero. We also study the
effect of capacitive nearest-neighbor interactions on the effective charging
energy. Finally, we present results from Monte Carlo simulations that find the
lowest-cost path across an array as a function of temperature. The main result
of the paper is the linear decrease of the conduction threshold with
increasing temperature: the global threshold vanishes at a temperature set by
an effective charging energy $E_c$, which depends on the particle radius and
interparticle distance, and by the percolation threshold $p_c$ of the
underlying lattice. The predictions of this theory compare well with
experiments on one- and two-dimensional systems.
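In the same spirit as the simulations described, a sketch that erases up to $k_BT$ from each random local threshold and finds the cheapest path across a small square array with Dijkstra's algorithm; lattice size, threshold distribution, and the erasure rule are illustrative assumptions.

```python
import heapq
import random

L = 30
random.seed(1)
thresholds = {}                        # (site, neighbour site) -> local threshold
for x in range(L):
    for y in range(L):
        for dx, dy in ((1, 0), (0, 1)):
            if x + dx < L and y + dy < L:
                thresholds[((x, y), (x + dx, y + dy))] = random.random()

def bond(u, v):
    return thresholds[(u, v)] if (u, v) in thresholds else thresholds[(v, u)]

def neighbours(u):
    x, y = u
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < L and 0 <= y + dy < L:
            yield (x + dx, y + dy)

def global_threshold(kT):
    """Cheapest left-to-right path cost after thermal erasure (Dijkstra)."""
    dist = {}
    heap = [(0.0, (0, y)) for y in range(L)]       # inject along the left edge
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if u in dist:
            continue
        dist[u] = d
        if u[0] == L - 1:
            return d                               # reached the right edge
        for v in neighbours(u):
            if v not in dist:
                heapq.heappush(heap, (d + max(0.0, bond(u, v) - kT), v))

for kT in (0.0, 0.2, 0.4):
    print(kT, round(global_threshold(kT), 3))      # threshold falls as kT grows
```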
Studying the Effect of Data Structures on the Efficiency of Collaborative Filtering Systems
Recommender systems are an active research area where the major focus has been
on how to improve the quality of generated recommendations, but less attention
has been paid to how to do so efficiently. This aspect is increasingly
important because the information to be considered by recommender systems is
growing exponentially. In this paper we study how different data structures
affect the performance of these systems. Our results on two public datasets
provide relevant insights into the optimal data structures in terms of memory
and time usage. Specifically, we show that classical data structures like
Binary Search Trees and Red-Black Trees can beat more complex and popular
alternatives like Hash Tables.
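A sketch of the kind of micro-benchmark behind such comparisons, in Python for illustration (the paper's measurements are implementation- and language-dependent): a minimal unbalanced binary search tree versus the built-in hash map.

```python
import random
import time

class BST:
    """Minimal unbalanced binary search tree for key -> value lookups."""
    __slots__ = ("key", "value", "left", "right")

    def __init__(self, key, value):
        self.key, self.value = key, value
        self.left = self.right = None

    def insert(self, key, value):
        node = self
        while True:
            if key < node.key:
                if node.left is None:
                    node.left = BST(key, value); return
                node = node.left
            elif key > node.key:
                if node.right is None:
                    node.right = BST(key, value); return
                node = node.right
            else:
                node.value = value; return

    def get(self, key):
        node = self
        while node is not None:
            if key < node.key:
                node = node.left
            elif key > node.key:
                node = node.right
            else:
                return node.value
        return None

keys = random.sample(range(10**6), 50_000)
tree, table = BST(keys[0], keys[0]), {}
for k in keys:
    tree.insert(k, k)
    table[k] = k

t0 = time.perf_counter(); s = sum(tree.get(k) for k in keys)
t1 = time.perf_counter(); s += sum(table[k] for k in keys)
t2 = time.perf_counter()
print(f"BST lookups:  {t1 - t0:.3f}s")
print(f"dict lookups: {t2 - t1:.3f}s")
```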