Ninth and Tenth Order Virial Coefficients for Hard Spheres in D Dimensions
We evaluate the virial coefficients B_k for k<=10 for hard spheres in
dimensions D=2,...,8. Virial coefficients with k even are found to be negative
when D>=5. This provides strong evidence that the leading singularity for the
virial series lies away from the positive real axis when D>=5. Further analysis
provides evidence that negative virial coefficients will be seen for some k>10
for D=4, and there is a distinct possibility that negative virial coefficients
will also eventually occur for D=3.
Comment: 33 pages, 12 figures
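As a concrete anchor for the virial series discussed above, the lowest-order coefficient has a closed form: for hard spheres of diameter sigma in D dimensions, B_2 is half the volume of the exclusion ball of radius sigma. The sketch below computes only this trivial case; the higher coefficients B_3,...,B_10 evaluated in the paper require numerical evaluation of cluster integrals and are not attempted here.

```python
import math

def b2_hard_spheres(D, sigma=1.0):
    """Second virial coefficient for hard spheres of diameter sigma in
    D dimensions: half the volume of the exclusion ball of radius sigma."""
    ball_volume = math.pi ** (D / 2) / math.gamma(D / 2 + 1) * sigma ** D
    return ball_volume / 2

# In D=3 with sigma=1 this gives 2*pi/3 (about 2.0944).
for D in range(2, 9):
    print(D, b2_hard_spheres(D))
```

Note that B_2 is always positive; the sign alternation reported in the paper only sets in at higher orders and larger D.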
Brief Announcement: Memory Lower Bounds for Self-Stabilization
In the context of self-stabilization, a silent algorithm guarantees that the communication registers of every node do not change once the algorithm has stabilized. At the end of the 90's, Dolev et al. [Acta Inf. '99] showed that, for finding the centers of a graph, for electing a leader, or for constructing a spanning tree, every silent deterministic algorithm must use a memory of Omega(log n) bits per register in n-node networks. Similarly, Korman et al. [Dist. Comp. '07] proved, using the notion of proof-labeling-scheme, that, for constructing a minimum-weight spanning tree (MST), every silent algorithm must use a memory of Omega(log^2 n) bits per register. It follows that requiring the algorithm to be silent has a cost in terms of memory space, while, in the context of self-stabilization, where every node constantly checks the states of its neighbors, the silence property can be of limited practical interest. In fact, it is known that relaxing this requirement results in algorithms with smaller space-complexity.
In this paper, we aim to measure how much gain in terms of memory can be expected by using arbitrary deterministic self-stabilizing algorithms, not necessarily silent. To our knowledge, the only known lower bound on the memory requirement for deterministic general algorithms, also established at the end of the 90's, is due to Beauquier et al. [PODC '99], who proved that registers of constant size are not sufficient for leader election algorithms. We improve this result by establishing the lower bound Omega(log log n) bits per register for deterministic self-stabilizing algorithms solving (Delta+1)-coloring, leader election, or constructing a spanning tree in networks of maximum degree Delta.
Cluster counting: The Hoshen-Kopelman algorithm vs. spanning tree approaches
Two basic approaches to the cluster counting task in the percolation and
related models are discussed. The Hoshen-Kopelman multiple labeling technique
for cluster statistics is redescribed. Modifications for random and aperiodic
lattices are sketched as well as some parallelised versions of the algorithm
are mentioned. The graph-theoretical basis for the spanning tree approaches is
given by describing the "breadth-first search" and "depth-first search"
procedures. Examples are given for extracting the elastic and geometric
"backbone" of a percolation cluster. An implementation of the "pebble game"
algorithm using a depth-first search method is also described.
Comment: LaTeX, uses ijmpc1.sty (included), 18 pages, 3 figures, submitted to Intern. J. of Modern Physics
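The multiple-labeling technique described above can be sketched in a few lines: a raster scan assigns provisional labels to occupied sites, and a union-find forest merges labels whenever a site touches two previously labeled clusters. A minimal illustrative implementation for a 2D lattice with 4-connectivity (the random-lattice modifications and parallelised variants mentioned above are not covered):

```python
def hoshen_kopelman(grid):
    """Label the occupied (truthy) sites of a 2D grid into clusters
    using the Hoshen-Kopelman scheme: raster scan, merging with the
    already-visited left and top neighbours via union-find."""
    rows, cols = len(grid), len(grid[0])
    parent = {}  # label -> parent label (union-find forest)

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    labels = [[0] * cols for _ in range(rows)]
    next_label = 1
    for i in range(rows):
        for j in range(cols):
            if not grid[i][j]:
                continue
            left = labels[i][j - 1] if j > 0 else 0
            top = labels[i - 1][j] if i > 0 else 0
            if not left and not top:          # new cluster
                parent[next_label] = next_label
                labels[i][j] = next_label
                next_label += 1
            elif left and top:                # merge two clusters
                a, b = find(left), find(top)
                parent[max(a, b)] = min(a, b)
                labels[i][j] = min(a, b)
            else:                             # extend one cluster
                labels[i][j] = find(left or top)
    # second pass: replace provisional labels by canonical roots
    for i in range(rows):
        for j in range(cols):
            if labels[i][j]:
                labels[i][j] = find(labels[i][j])
    return labels, len({find(l) for l in parent})

grid = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
labels, n_clusters = hoshen_kopelman(grid)
print(n_clusters)  # → 3
```

The key point, as in the original technique, is that the lattice is traversed once and cluster merges cost only near-constant amortized time, in contrast to the breadth-first or depth-first traversals discussed next, which walk each cluster explicitly.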
Memory lower bounds for deterministic self-stabilization
In the context of self-stabilization, a \emph{silent} algorithm guarantees
that the register of every node does not change once the algorithm has
stabilized. At the end of the 90's, Dolev et al. [Acta Inf. '99] showed that,
for finding the centers of a graph, for electing a leader, or for constructing
a spanning tree, every silent algorithm must use a memory of Omega(log n)
bits per register in n-node networks. Similarly, Korman et al. [Dist. Comp.
'07] proved, using the notion of proof-labeling-scheme, that, for constructing
a minimum-weight spanning tree (MST), every silent algorithm must use a memory
of Omega(log^2 n) bits per register. It follows that requiring the algorithm
to be silent has a cost in terms of memory space, while, in the context of
self-stabilization, where every node constantly checks the states of its
neighbors, the silence property can be of limited practical interest. In fact,
it is known that relaxing this requirement results in algorithms with smaller
space-complexity.
In this paper, we aim to measure how much gain in terms of memory
can be expected by using arbitrary self-stabilizing algorithms, not necessarily
silent. To our knowledge, the only known lower bound on the memory requirement
for general algorithms, also established at the end of the 90's, is due to
Beauquier et al.~[PODC '99] who proved that registers of constant size are not
sufficient for leader election algorithms. We improve this result by
establishing a tight lower bound of Omega(log log n) bits per
register for self-stabilizing algorithms solving (Delta+1)-coloring or
constructing a spanning tree in networks of maximum degree Delta. The lower
bound Omega(log log n) bits per register also holds for leader election.
Silent MST approximation for tiny memory
In network distributed computing, minimum spanning tree (MST) is one of the
key problems, and silent self-stabilization one of the most demanding
fault-tolerance properties. For this problem and this model, a polynomial-time
algorithm with O(log^2 n) memory is known for the state model. This is
memory optimal for weights in the classic [1, poly(n)] range (where n
is the size of the network). In this paper, we go below this O(log^2 n)
memory, using approximation and parametrized complexity.
More specifically, our contributions are two-fold. We introduce a second
parameter s, which is the space needed to encode a weight, and we design a
silent polynomial-time self-stabilizing algorithm, with space O(s log n).
In turn, this allows us to get an approximation algorithm for the problem,
with a trade-off between the approximation ratio of the solution and the space
used. For polynomial weights, this trade-off goes smoothly from O(log n)
memory for an n-approximation, to O(log^2 n) memory for exact solutions,
with for example O(log n log log n) memory for a 2-approximation.
Critical random graphs: limiting constructions and distributional properties
We consider the Erdos-Renyi random graph G(n,p) inside the critical window,
where p = 1/n + lambda * n^{-4/3} for some lambda in R. We proved in a previous
paper (arXiv:0903.4730) that considering the connected components of G(n,p) as
a sequence of metric spaces with the graph distance rescaled by n^{-1/3} and
letting n go to infinity yields a non-trivial sequence of limit metric spaces C
= (C_1, C_2, ...). These limit metric spaces can be constructed from certain
random real trees with vertex-identifications. For a single such metric space,
we give here two equivalent constructions, both of which are in terms of more
standard probabilistic objects. The first is a global construction using
Dirichlet random variables and Aldous' Brownian continuum random tree. The
second is a recursive construction from an inhomogeneous Poisson point process
on R_+. These constructions allow us to characterize the distributions of the
masses and lengths in the constituent parts of a limit component when it is
decomposed according to its cycle structure. In particular, this strengthens
results of Luczak, Pittel and Wierman by providing precise distributional
convergence for the lengths of paths between kernel vertices and the length of
a shortest cycle, within any fixed limit component.
Comment: 30 pages, 4 figures
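The critical scaling that motivates rescaling distances by n^{-1/3} can be observed directly in simulation: inside the critical window the largest component of G(n,p) has size of order n^{2/3}. The sketch below is an illustrative experiment only, not part of the paper's constructions; it samples G(n,p) at p = 1/n + lambda * n^{-4/3} and measures the largest component with a union-find pass.

```python
import random
from collections import Counter

def largest_component_size(n, lam, seed=0):
    """Sample an Erdos-Renyi graph G(n, p) with p = 1/n + lam * n**(-4/3)
    (the critical window) and return the size of its largest component."""
    rng = random.Random(seed)
    p = 1 / n + lam * n ** (-4 / 3)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Bernoulli(p) coin flip for every unordered pair of vertices.
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv

    sizes = Counter(find(x) for x in range(n))
    return max(sizes.values())

n = 1000
# Largest critical component is typically of order n**(2/3), here ~100.
print(largest_component_size(n, lam=0.0))
```

Repeating this over many seeds and several values of n (and dividing by n^{2/3}) gives an empirical view of the non-trivial limit law that the paper characterizes exactly.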
An Algorithmic Framework for Labeling Road Maps
Given an unlabeled road map, we consider, from an algorithmic perspective,
the cartographic problem of placing non-overlapping road labels embedded in
their roads. We first decompose the road network into logically coherent road
sections, e.g., parts of roads between two junctions. Based on this
decomposition, we present and implement a new and versatile framework for
placing labels in road maps such that the number of labeled road sections is
maximized. In an experimental evaluation with road maps of 11 major cities,
we show that our proposed labeling algorithm is fast in practice and
reaches near-optimal solution quality, where optimal solutions are obtained by
mixed-integer linear programming. In comparison to the standard OpenStreetMap
renderer Mapnik, our algorithm labels 31% more road sections on average.
Comment: extended version of a paper to appear at GIScience 201
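As a toy illustration of the selection step only (not the paper's actual framework or its MILP baseline): if each candidate label is abstracted as an interval along its road section, picking a maximum number of pairwise non-overlapping candidates per section reduces to interval scheduling, which the earliest-finishing-first greedy solves exactly on a line. All names and data below are illustrative.

```python
def greedy_label_selection(candidates):
    """Toy stand-in for label placement: each candidate is a tuple
    (section_id, start, end) along a road polyline. Per section, pick a
    maximum-size set of pairwise non-overlapping candidates greedily by
    earliest end (classic interval scheduling, optimal on a single line)."""
    chosen = []
    last_end = {}  # section_id -> end of the last label chosen there
    for sec, start, end in sorted(candidates, key=lambda c: (c[0], c[2])):
        if start >= last_end.get(sec, float("-inf")):
            chosen.append((sec, start, end))
            last_end[sec] = end
    return chosen

cands = [("A", 0, 4), ("A", 3, 5), ("A", 5, 8), ("B", 0, 2)]
print(greedy_label_selection(cands))
# → [('A', 0, 4), ('A', 5, 8), ('B', 0, 2)]
```

Real road labels are two-dimensional shapes that can also conflict across sections near junctions, which is why the paper's framework (and the MILP used for the optimal baseline) is considerably more involved than this one-dimensional sketch.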