
    Self-improving Algorithms for Coordinate-wise Maxima

    Computing the coordinate-wise maxima of a planar point set is a classic and well-studied problem in computational geometry. We give an algorithm for this problem in the self-improving setting. We have $n$ (unknown) independent distributions $\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_n$ of planar points. An input point set $(p_1, p_2, \ldots, p_n)$ is generated by taking an independent sample $p_i$ from each $\mathcal{D}_i$, so the input distribution $\mathcal{D}$ is the product $\prod_i \mathcal{D}_i$. A self-improving algorithm repeatedly gets input sets from the distribution $\mathcal{D}$ (which is a priori unknown) and tries to optimize its running time for $\mathcal{D}$. Our algorithm uses the first few inputs to learn salient features of the distribution, and then becomes an optimal algorithm for distribution $\mathcal{D}$. Let $\mathrm{OPT}_{\mathcal{D}}$ denote the expected depth of an optimal linear comparison tree computing the maxima for distribution $\mathcal{D}$. Our algorithm eventually has an expected running time of $O(\mathrm{OPT}_{\mathcal{D}} + n)$, even though it did not know $\mathcal{D}$ to begin with. Our result requires new tools to understand linear comparison trees for computing maxima. We show how to convert general linear comparison trees to very restricted versions, which can then be related to the running time of our algorithm. An interesting feature of our algorithm is an interleaved search, where the algorithm tries to determine the likeliest point to be maximal with minimal computation. This allows the running time to be truly optimal for the distribution $\mathcal{D}$.
    Comment: To appear in Symposium on Computational Geometry 2012 (17 pages, 2 figures)
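    As background for the problem being optimized, here is a minimal Python sketch of the classic worst-case staircase sweep for coordinate-wise maxima; it is not the paper's self-improving algorithm, whose learning phase and distribution-dependent search are not reproduced here.

```python
# A minimal sketch of the classic O(n log n) sweep for the coordinate-wise
# maxima ("staircase") of a planar point set: a point is maximal if no other
# point has both a larger x- and a larger y-coordinate. This is the baseline
# problem only; the self-improving algorithm described above is not shown.

def coordinate_wise_maxima(points):
    """Return the maximal points, ordered by decreasing x."""
    maxima = []
    best_y = float("-inf")
    # Sweep from largest to smallest x (ties broken by larger y first);
    # a point survives only if its y beats every point to its right.
    for x, y in sorted(points, reverse=True):
        if y > best_y:
            maxima.append((x, y))
            best_y = y
    return maxima

if __name__ == "__main__":
    pts = [(1, 5), (2, 3), (3, 4), (4, 1), (2, 6)]
    print(coordinate_wise_maxima(pts))  # [(4, 1), (3, 4), (2, 6)]
```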

    A fast recursive coordinate bisection tree for neighbour search and gravity

    We introduce our new binary tree code for neighbour search and gravitational force calculations in an N-particle system. The tree is built in a "top-down" fashion by "recursive coordinate bisection" where on each tree level we split the longest side of a cell through its centre of mass. This procedure continues until the average number of particles in the lowest tree level has dropped below a prescribed value. To calculate the forces on the particles in each lowest-level cell we split the gravitational interaction into a near- and a far-field. Since our main intended applications are SPH simulations, we calculate the near-field by a direct, kernel-smoothed summation, while the far field is evaluated via a Cartesian Taylor expansion up to quadrupole order. Instead of applying the far-field approach for each particle separately, we use another Taylor expansion around the centre of mass of each lowest-level cell to determine the forces at the particle positions. Due to this "cell-cell interaction" the code performance is close to O(N), where N is the number of particles used. We describe in detail various technicalities that ensure a low memory footprint and an efficient cache use. In a set of benchmark tests we scrutinize our new tree and compare it to the "Press tree" that we have previously made ample use of. At a slightly higher force accuracy than the Press tree, our tree turns out to be substantially faster and increasingly more so for larger particle numbers. For four million particles our tree build is faster by a factor of 25 and the time for neighbour search and gravity is reduced by more than a factor of 6. In single-processor tests with up to 10^8 particles we confirm experimentally that the scaling behaviour is close to O(N). The current Fortran 90 code version is OpenMP-parallel and scales excellently with the processor number (=24) of our test machine.
    Comment: 12 pages, 16 figures, 1 table, accepted for publication in MNRAS on July 28, 201
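    A minimal Python sketch of the "recursive coordinate bisection" build described above, under the stated splitting rule (split a cell's longest side through the particles' centre of mass); the class name Cell, the leaf_size parameter, and the per-cell stopping rule are illustrative assumptions, and the near-/far-field force evaluation with multipole moments is omitted.

```python
# A sketch of a tree built by recursive coordinate bisection (not the authors'
# Fortran 90 code): each cell is split along the longest side of its bounding
# box, through the centre of mass of the particles it contains, until a cell
# holds at most `leaf_size` particles. Force evaluation is omitted.

import numpy as np

class Cell:
    def __init__(self, positions, masses, leaf_size=8):
        self.positions = positions                      # (N, 3) coordinates
        self.masses = masses                            # (N,) masses
        self.com = np.average(positions, axis=0, weights=masses)
        self.children = []
        if len(positions) > leaf_size:
            # The longest side of the cell's bounding box picks the split axis.
            extent = positions.max(axis=0) - positions.min(axis=0)
            axis = int(np.argmax(extent))
            left = positions[:, axis] <= self.com[axis]
            # Guard against a degenerate split that leaves one side empty.
            if left.any() and (~left).any():
                self.children = [
                    Cell(positions[left], masses[left], leaf_size),
                    Cell(positions[~left], masses[~left], leaf_size),
                ]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    root = Cell(rng.random((10_000, 3)), np.ones(10_000))
```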

    A rigorous evaluation of crossover and mutation in genetic programming

    The role of crossover and mutation in Genetic Programming (GP) has been the subject of much debate since the emergence of the field. In this paper, we contribute new empirical evidence to this debate using a rigorous and principled experimental method applied to six problems common in the GP literature. The approach tunes the algorithm parameters to enable a fair and objective comparison of two different GP algorithms, the first using a combination of crossover and reproduction, and the second using a combination of mutation and reproduction. We find that crossover does not significantly outperform mutation on most of the problems examined. In addition, we demonstrate that the use of a straightforward Design of Experiments methodology is effective at tuning GP algorithm parameters.
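    For readers unfamiliar with the two variation operators being compared, a generic illustration of subtree crossover and subtree mutation on expression trees follows; this is a textbook-style sketch, not the paper's tuned experimental setup, and the function and terminal sets are arbitrary placeholders.

```python
# A generic sketch of the two GP variation operators compared above (not the
# paper's setup): subtree crossover copies one parent and grafts in a random
# subtree from the other; subtree mutation replaces a random subtree with a
# freshly grown one. Trees are nested lists [op, child, child] or terminals.

import random

FUNCS, TERMS = ["+", "*", "-"], ["x", "y", "1"]

def grow(depth):
    """Grow a random expression tree of at most the given depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return [random.choice(FUNCS), grow(depth - 1), grow(depth - 1)]

def all_paths(tree, path=()):
    """Every subtree position, as a tuple of child indices from the root."""
    paths = [path]
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            paths += all_paths(child, path + (i,))
    return paths

def get(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace(tree, path, new):
    if not path:
        return new
    tree = list(tree)                       # copy, so parents stay intact
    tree[path[0]] = replace(tree[path[0]], path[1:], new)
    return tree

def subtree_mutation(tree):
    return replace(tree, random.choice(all_paths(tree)), grow(2))

def subtree_crossover(parent_a, parent_b):
    donor = get(parent_b, random.choice(all_paths(parent_b)))
    return replace(parent_a, random.choice(all_paths(parent_a)), donor)

if __name__ == "__main__":
    random.seed(1)
    a, b = grow(3), grow(3)
    print(subtree_mutation(a))
    print(subtree_crossover(a, b))
```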

    Structure and Problem Hardness: Goal Asymmetry and DPLL Proofs in SAT-Based Planning

    In Verification and in (optimal) AI Planning, a successful method is to formulate the application as boolean satisfiability (SAT), and solve it with state-of-the-art DPLL-based procedures. There is a lack of understanding of why this works so well. Focussing on the Planning context, we identify a form of problem structure concerned with the symmetrical or asymmetrical nature of the cost of achieving the individual planning goals. We quantify this sort of structure with a simple numeric parameter called AsymRatio, ranging between 0 and 1. We run experiments in 10 benchmark domains from the International Planning Competitions since 2000; we show that AsymRatio is a good indicator of SAT solver performance in 8 of these domains. We then examine carefully crafted synthetic planning domains that allow control of the amount of structure, and that are clean enough for a rigorous analysis of the combinatorial search space. The domains are parameterized by size, and by the amount of structure. The CNFs we examine are unsatisfiable, encoding one planning step less than the length of the optimal plan. We prove upper and lower bounds on the size of the best possible DPLL refutations, under different settings of the amount of structure, as a function of size. We also identify the best possible sets of branching variables (backdoors). With minimum AsymRatio, we prove exponential lower bounds, and identify minimal backdoors of size linear in the number of variables. With maximum AsymRatio, we identify logarithmic DPLL refutations (and backdoors), showing a doubly exponential gap between the two structural extreme cases. The reasons for this behavior -- the proof arguments -- illuminate the prototypical patterns of structure causing the empirical behavior observed in the competition benchmarks.
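    Since DPLL refutations are central to the analysis, here is a minimal generic DPLL procedure with unit propagation, in Python; it is a textbook sketch, not the competition solvers or the planning-to-SAT encodings used in the experiments. On an unsatisfiable CNF, the branching performed by such a procedure is what the refutation-size bounds above measure.

```python
# A minimal DPLL sketch: clauses are frozensets of integer literals, where a
# negative integer denotes a negated variable. Returns True iff the CNF is
# satisfiable; on an unsatisfiable formula, the recursion tree traced here
# plays the role of a DPLL refutation.

def dpll(clauses):
    clauses = list(clauses)
    # Unit propagation: repeatedly simplify with literals forced by unit clauses.
    while True:
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        simplified = []
        for c in clauses:
            if lit in c:
                continue                      # clause satisfied, drop it
            if -lit in c:
                c = c - {-lit}                # literal falsified, shrink clause
                if not c:
                    return False              # empty clause: conflict
            simplified.append(c)
        clauses = simplified
    if not clauses:
        return True                           # every clause satisfied
    # Branch on a variable from the first remaining clause.
    var = abs(next(iter(clauses[0])))
    return dpll(clauses + [frozenset({var})]) or dpll(clauses + [frozenset({-var})])

if __name__ == "__main__":
    # (x1 or x2) and (not x1 or x2) and (not x2) is unsatisfiable.
    print(dpll([frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2})]))  # False
```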