Partitioning the vertex set of G to make an efficient open domination graph
A graph G is an efficient open domination graph if there exists a subset of
vertices whose open neighborhoods partition its vertex set. We characterize
those graphs G for which the Cartesian product of G and H is an efficient
open domination graph when H is a complete graph of order at least 3 or a
complete bipartite graph. The characterization is based on the existence of a
certain type of weak partition of V(G). For the class of trees, when H is
complete of order at least 3, the characterization is constructive. In
addition, a special type of efficient open domination graph is characterized
among Cartesian products when H is a 5-cycle or a 4-cycle.
Comment: 16 pages, 2 figures
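The defining condition is easy to verify for a given candidate set. A minimal sketch (the graph representation and function name are illustrative assumptions, not from the paper): check whether the open neighborhoods of a vertex subset D partition the whole vertex set.

```python
# Sketch: test whether a vertex subset D witnesses that a graph is an
# efficient open domination graph, i.e. whether the open neighborhoods
# N(v), v in D, partition the entire vertex set.

def is_efficient_open_dominating(adj, D):
    """adj maps each vertex to its set of neighbours; D is the candidate set."""
    covered = []
    for v in D:
        covered.extend(adj[v])          # open neighbourhood N(v) excludes v
    # the neighborhoods partition V iff every vertex is covered exactly once
    return sorted(covered) == sorted(adj)

# The 4-cycle 0-1-2-3-0 is an efficient open domination graph via D = {0, 1}:
# N(0) = {1, 3} and N(1) = {0, 2} are disjoint and together cover all vertices.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```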
On optimally partitioning a text to improve its compression
In this paper we investigate the problem of partitioning an input string T in
such a way that individually compressing its parts via a base-compressor C
yields a compressed output that is shorter than applying C over the entire T
at once. This problem was introduced in the context of table compression, and
then further elaborated and extended to strings and trees. Unfortunately, the
literature offers poor solutions: namely, we know either a cubic-time
algorithm for computing the optimal partition based on dynamic programming,
or a few heuristics that do not guarantee any bounds on the efficacy of their
computed partition, or algorithms that are efficient but work in some
specific scenarios (such as the Burrows-Wheeler Transform) and achieve
compression performance that might be worse than the optimal partitioning by
a multiplicative factor. Therefore, efficiently computing the optimal
solution is still open. In this paper we provide the first algorithm which is
guaranteed to compute in O(n \log_{1+\eps}n) time a partition of T whose
compressed output is guaranteed to be no more than (1+\eps)-worse than the
optimal one, where \eps may be any positive constant
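The cubic-time dynamic program the abstract refers to can be sketched directly: best[j] is the cheapest way to compress T[0:j], minimizing over the last cut point i. This is only an illustration; zlib stands in for the base-compressor C, which is an assumption, not the paper's setting.

```python
import zlib

def optimal_partition_cost(T, C=lambda s: len(zlib.compress(s))):
    """Naive dynamic program for the optimal partition of a byte string T:
    best[j] = min over i < j of best[i] + C(T[i:j]).
    O(n^2) states, each evaluating the compressor once -> the cubic-time
    scheme mentioned in the abstract. Returns (total cost, list of (i, j))."""
    n = len(T)
    best = [0] + [float("inf")] * n
    cut = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + C(T[i:j])
            if c < best[j]:
                best[j], cut[j] = c, i
    # recover the partition boundaries by walking the cut points backwards
    parts, j = [], n
    while j > 0:
        parts.append((cut[j], j))
        j = cut[j]
    return best[n], parts[::-1]
```

The paper's contribution is precisely to avoid this quadratic/cubic blow-up, replacing the exact minimum with a (1+\eps)-approximation computable in near-linear time.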
Simulations of lattice animals and trees
The scaling behaviour of randomly branched polymers in a good solvent is
studied in two to nine dimensions, using as microscopic models lattice animals
and lattice trees on simple hypercubic lattices. As a stochastic sampling
method we use a biased sequential sampling algorithm with re-sampling, similar
to the pruned-enriched Rosenbluth method (PERM) used extensively for linear
polymers. Essentially we start simulating percolation clusters (either site or
bond), re-weigh them according to the animal (tree) ensemble, and prune or
branch the further growth according to a heuristic fitness function. In
contrast to previous applications of PERM, this fitness function is {\it not}
the weight with which the actual configuration would contribute to the
partition sum, but is closely related to it. We obtain high statistics of
animals with up to several thousand sites in all dimensions 2 <= d <= 9. In
addition to the partition sum (number of different animals) we estimate
gyration radii and numbers of perimeter sites. In all dimensions we verify the
Parisi-Sourlas prediction, and we verify all exactly known critical exponents
in dimensions 2, 3, 4, and >= 8. In addition, we present the hitherto most
precise estimates for growth constants in d >= 3. For clusters with one site
attached to an attractive surface, we verify the superuniversality of the
cross-over exponent at the adsorption transition predicted by Janssen and
Lyssy. Finally, we discuss the collapse of animals and trees, arguing that our
present version of the algorithm is also efficient for some of the models
studied in this context, but showing that it is {\it not} very efficient for
the `classical' model for collapsing animals.
Comment: 17 pages RevTeX, 29 figures included
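The starting point of the scheme, growing percolation clusters, can be sketched in a few lines; the re-weighting, pruning, and branching stages of the actual algorithm are omitted here, and the function below is only an assumed minimal illustration of site percolation on Z^2.

```python
import random

def grow_site_cluster(p, max_sites=1000, seed=42):
    """Grow one site-percolation cluster on the square lattice Z^2:
    starting from the origin, each newly reached neighbour site is occupied
    with probability p, exactly once.  This is the 'simulate percolation
    clusters' step of the abstract's sampling scheme, without the
    PERM-style re-weighting/pruning (omitted in this sketch)."""
    rng = random.Random(seed)
    cluster = {(0, 0)}            # occupied sites reached from the origin
    decided = {(0, 0)}            # sites whose occupation has been drawn
    frontier = [(0, 0)]
    while frontier and len(cluster) < max_sites:
        x, y = frontier.pop()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb not in decided:
                decided.add(nb)
                if rng.random() < p:
                    cluster.add(nb)
                    frontier.append(nb)
    return cluster
```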
Beta-trees: Multivariate histograms with confidence statements
Multivariate histograms are difficult to construct due to the curse of
dimensionality. Motivated by k-d trees in computer science, we show how to
construct an efficient data-adaptive partition of Euclidean space that
possesses the following two properties: With high confidence the distribution
from which the data are generated is close to uniform on each rectangle of the
partition; and despite the data-dependent construction we can give guaranteed
finite sample simultaneous confidence intervals for the probabilities (and
hence for the average densities) of each rectangle in the partition. This
partition will automatically adapt to the sizes of the regions where the
distribution is close to uniform. The methodology produces confidence intervals
whose widths depend only on the probability content of the rectangles and not
on the dimensionality of the space, thus avoiding the curse of dimensionality.
Moreover, the widths essentially match the optimal widths in the univariate
setting. The simultaneous validity of the confidence intervals allows one to use
this construction, which we call {\sl Beta-trees}, for various data-analytic
purposes. We illustrate this by using Beta-trees for visualizing data and for
multivariate mode-hunting
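The k-d-tree motivation can be made concrete with a short sketch of a data-adaptive median-split partition. Note this is an assumed illustration only: the actual Beta-tree construction decides when to stop splitting via confidence statements, whereas the fixed leaf size below is a placeholder.

```python
import numpy as np

def kd_partition(X, min_leaf=25, depth=0):
    """Recursively split the data at the empirical median along coordinates
    in round-robin order, as in a k-d tree.  Stopping at a fixed leaf size
    is an assumption of this sketch (Beta-trees instead stop based on
    confidence statements).  Returns the leaf bounding boxes (lo, hi)."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    if len(X) <= min_leaf:
        return [(lo, hi)]
    d = depth % X.shape[1]                    # cycle through the coordinates
    m = np.median(X[:, d])
    left, right = X[X[:, d] <= m], X[X[:, d] > m]
    if len(left) == 0 or len(right) == 0:     # degenerate split: stop here
        return [(lo, hi)]
    return (kd_partition(left, min_leaf, depth + 1)
            + kd_partition(right, min_leaf, depth + 1))
```

Because each leaf holds roughly the same number of points, the cells automatically shrink where the data are dense, which is the adaptivity the abstract exploits.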
Practical Low-Dimensional Halfspace Range Space Sampling
We develop, analyze, implement, and compare new algorithms for creating
epsilon-samples of range spaces defined by halfspaces, which have size
sub-quadratic in 1/epsilon and runtime linear in the input size and
near-quadratic in 1/epsilon. The key to our solution is an efficient
construction of partition trees. Despite requiring no techniques developed
after the early 1990s, such a result apparently was never explicitly
described. We demonstrate that our implementations, including new
implementations of several variants of partition trees, do indeed run in time
linear in the input size, appear to run in time linear in the output size, and
observe smaller error for the same sample size compared to the ubiquitous
random sample (which requires size quadratic in 1/epsilon). This result has
direct applications in speeding up discrepancy evaluation, approximate range
counting, and spatial anomaly detection
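The baseline the abstract compares against is easy to sketch: a plain random sample is an epsilon-sample for halfspaces only at size Theta(1/epsilon^2). The snippet below (an assumed illustration, not the paper's partition-tree construction) measures the sampling error for a single halfspace; an epsilon-sample must bound this error simultaneously over every halfspace.

```python
import random

def halfspace_fraction(pts, a, b, c):
    """Fraction of 2-d points lying in the halfspace a*x + b*y <= c."""
    return sum(a * x + b * y <= c for x, y in pts) / len(pts)

# Error of a plain random sample on one halfspace: the gap between the
# fraction of all points inside and the fraction of sampled points inside.
rng = random.Random(1)
points = [(rng.random(), rng.random()) for _ in range(5000)]
sample = rng.sample(points, 400)
err = abs(halfspace_fraction(points, 1.0, 1.0, 1.0)
          - halfspace_fraction(sample, 1.0, 1.0, 1.0))
```

Partition-tree-based constructions, as in the abstract, achieve a smaller sample size for the same worst-case error than this quadratic-size random baseline.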