3,490 research outputs found

    Abstraction Heuristics, Cost Partitioning and Network Flows

    Cost partitioning is a well-known technique to make admissible heuristics for classical planning additive. The optimal cost partitioning of explicit-state abstraction heuristics can be computed in polynomial time with a linear program, but the size of the model is often prohibitive. We study this model from a dual perspective and develop several simplification rules to reduce its size. We use these rules to answer open questions about extensions of the state equation heuristic and their relation to cost partitioning.
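    To make the additivity condition concrete (notation ours, not necessarily the paper's): a cost partitioning splits the operator cost function cost into shares cost_1, ..., cost_n, one per component heuristic, and as long as the shares sum to at most the original cost, the sum of the component heuristics remains admissible:

        \sum_{i=1}^{n} \mathit{cost}_i(o) \le \mathit{cost}(o) \ \text{for every operator } o
        \quad\Longrightarrow\quad
        \sum_{i=1}^{n} h_i(s) \le h^*(s),

    where each h_i is evaluated under its share cost_i and h^* is the optimal goal distance under the original cost function.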

    Lagrangian Decomposition for Classical Planning (Extended Abstract)

    Optimal cost partitioning of classical planning heuristics has been shown to lead to excellent heuristic values but is often prohibitively expensive to compute. We analyze the application of Lagrangian decomposition, a classical tool in mathematical programming, to cost partitioning of operator-counting heuristics. This allows us to view the computation as an iterative process that can be seeded with any cost partitioning and that improves over time. In the case of non-negative cost partitioning of abstraction heuristics, the computation reduces to independent shortest path problems and does not require an LP solver.
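    As a minimal sketch of why no LP solver is needed in that case (illustrative code, not the paper's implementation): once the operator costs have been split non-negatively among explicit-state abstractions, each component heuristic is an independent shortest-path problem solvable with Dijkstra's algorithm, and the cost-partitioned heuristic is simply the sum of the components.

        import heapq

        def shortest_path_cost(transitions, goals, cost, start):
            """Dijkstra over an explicit abstract transition system.
            transitions: abstract state -> list of (operator, successor)."""
            dist = {start: 0.0}
            queue = [(0.0, start)]
            while queue:
                d, s = heapq.heappop(queue)
                if s in goals:
                    return d
                if d > dist.get(s, float("inf")):
                    continue  # stale queue entry
                for op, t in transitions.get(s, []):
                    nd = d + cost[op]
                    if nd < dist.get(t, float("inf")):
                        dist[t] = nd
                        heapq.heappush(queue, (nd, t))
            return float("inf")  # goal unreachable in this abstraction

        def partitioned_heuristic(abstractions, cost_shares, state):
            """abstractions: list of (transitions, goals, alpha), where alpha
            maps a concrete state to its abstract state; cost_shares: one
            non-negative cost function per abstraction whose per-operator
            values sum to at most the original operator cost."""
            return sum(
                shortest_path_cost(trans, goals, share, alpha(state))
                for (trans, goals, alpha), share in zip(abstractions, cost_shares)
            )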

    Gunrock: GPU Graph Analytics

    For large-scale graph analytics on the GPU, the irregularity of data access and control flow and the complexity of programming GPUs have presented two significant challenges to developing a programmable high-performance graph library. "Gunrock", our graph-processing system designed specifically for the GPU, uses a high-level, bulk-synchronous, data-centric abstraction focused on operations on a vertex or edge frontier. Gunrock achieves a balance between performance and expressiveness by coupling high-performance GPU computing primitives and optimization strategies with a high-level programming model that allows programmers to quickly develop new graph primitives with small code size and minimal GPU programming knowledge. We characterize the performance of various optimization strategies and evaluate Gunrock's overall performance on different GPU architectures across a wide range of graph primitives, from traversal-based and ranking algorithms to triangle counting and bipartite-graph-based algorithms. The results show that on a single GPU, Gunrock has on average at least an order of magnitude speedup over Boost and PowerGraph, performance comparable to the fastest GPU hardwired primitives and to CPU shared-memory graph libraries such as Ligra and Galois, and better performance than any other high-level GPU graph library.
    Comment: 52 pages; invited paper to ACM Transactions on Parallel Computing (TOPC); an extended version of the PPoPP'16 paper "Gunrock: A High-Performance Graph Processing Library on the GPU".
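    Gunrock itself is a C++/CUDA library; the short Python sketch below only illustrates the frontier-centric, bulk-synchronous model the abstract describes, using BFS as the example: each step "advances" the current vertex frontier along outgoing edges and "filters" out vertices that have already been visited.

        def bfs_levels(adj, source):
            """adj: vertex -> iterable of neighbor vertices."""
            level = {source: 0}
            frontier = [source]
            depth = 0
            while frontier:
                depth += 1
                # advance: expand the frontier along all outgoing edges
                candidates = (v for u in frontier for v in adj.get(u, ()))
                # filter: keep only vertices reached for the first time
                next_frontier = []
                for v in candidates:
                    if v not in level:
                        level[v] = depth
                        next_frontier.append(v)
                frontier = next_frontier
            return level

    On the GPU, both the advance and the filter step run as data-parallel bulk operations over the frontier, which is what makes the abstraction a good fit for irregular graph workloads.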

    Cost Partitioning Heuristics for Stochastic Shortest Path Problems

    In classical planning, cost partitioning is a powerful method that allows combining multiple admissible heuristics while retaining an admissible bound. In this paper, we extend the theory of cost partitioning to probabilistic planning by generalizing from deterministic transition systems to stochastic shortest path problems (SSPs). We show that fundamental results related to cost partitioning still hold in our extended theory. We also investigate how to optimally partition costs for a large class of abstraction heuristics for SSPs. Lastly, we analyze occupation measure heuristics for SSPs as well as the theory of approximate linear programming for reward-oriented Markov decision processes. All of these fit our framework and can be seen as cost-partitioned heuristics.
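    In the SSP setting, the quantity an admissible heuristic must lower-bound is the optimal expected cost-to-go J^* rather than a shortest-path distance (notation ours, sketching the generalization):

        J^*(s) = \min_{a \in A(s)} \Big( c(a) + \sum_{t} P(t \mid s, a)\, J^*(t) \Big),

    and the cost-partitioning condition carries over: if the per-action cost shares satisfy \sum_i c_i(a) \le c(a) and each h_i is admissible for the SSP with cost function c_i, then \sum_i h_i(s) \le J^*(s).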

    Leveraging Language to Learn Program Abstractions and Search Heuristics

    Inductive program synthesis, or inferring programs from examples of desired behavior, offers a general paradigm for building interpretable, robust, and generalizable machine learning systems. Effective program synthesis depends on two key ingredients: a strong library of functions from which to build programs, and an efficient search strategy for finding programs that solve a given task. We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis. When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization on three domains -- string editing, image composition, and abstract reasoning about scenes -- even when no natural language hints are available at test time.
    Comment: Appeared in the Thirty-eighth International Conference on Machine Learning (ICML 2021).
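    The two ingredients the abstract names can be pictured with a toy best-first enumeration over a primitive library, where fixed per-primitive weights stand in for the guidance that LAPS would learn from language annotations (everything below is illustrative; it is not the DreamCoder or LAPS API):

        import heapq
        import itertools

        LIBRARY = {"reverse": 1.0, "upper": 2.0, "strip": 2.5}  # primitive -> search cost

        def run(prog, s):
            ops = {"reverse": lambda t: t[::-1], "upper": str.upper, "strip": str.strip}
            for prim in prog:
                s = ops[prim](s)
            return s

        def guided_enumeration(examples, max_len=3):
            """Best-first search over primitive sequences, cheapest first."""
            tie = itertools.count()  # tie-breaker so programs are never compared
            queue = [(0.0, next(tie), [])]
            while queue:
                cost, _, prog = heapq.heappop(queue)
                if prog and all(run(prog, x) == y for x, y in examples):
                    return prog  # first program consistent with all examples
                if len(prog) < max_len:
                    for prim, w in LIBRARY.items():
                        heapq.heappush(queue, (cost + w, next(tie), prog + [prim]))
            return None

        print(guided_enumeration([("abc", "CBA"), ("xy", "YX")]))  # e.g. ['reverse', 'upper']

    A learned, language-conditioned model would replace the fixed weights (and a library learner would grow LIBRARY itself); the point here is only the interplay of library and guided search.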

    Error Analysis and Correction for Weighted A*'s Suboptimality (Extended Version)

    Weighted A* (wA*) is a widely used algorithm for rapidly, but suboptimally, solving planning and search problems. The cost of the solution it produces is guaranteed to be at most W times the optimal solution cost, where W is the weight wA* uses in prioritizing open nodes. W is therefore a suboptimality bound for the solution produced by wA*. There is broad consensus that this bound is not very accurate: the actual suboptimality of wA*'s solution is often much less than W times optimal. However, there is very little published evidence supporting that view, and no existing explanation of why W is a poor bound. This paper fills these gaps in the literature. We begin with a large-scale experiment demonstrating that, across a wide variety of domains and heuristics for those domains, W is indeed very often far from the true suboptimality of wA*'s solution. We then analytically identify the potential sources of error. Finally, we present a practical method for correcting for two of these sources of error and experimentally show that the correction frequently eliminates much of the error.
    Comment: Published as a short paper in the 12th Annual Symposium on Combinatorial Search (SoCS 2019).
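    A minimal weighted A* sketch (ours, for reference): nodes are expanded in order of f = g + W * h, and with an admissible heuristic h the returned solution cost is at most W times optimal, which is exactly the bound the paper argues is usually loose.

        import heapq

        def weighted_astar(succ, h, start, goal, W=2.0):
            """succ: state -> iterable of (neighbor, edge_cost); h admissible."""
            g = {start: 0.0}
            queue = [(W * h(start), start)]
            while queue:
                f, s = heapq.heappop(queue)
                if s == goal:
                    return g[s]  # guaranteed <= W * optimal cost
                for t, c in succ(s):
                    if g[s] + c < g.get(t, float("inf")):
                        g[t] = g[s] + c  # improved path; (re)open t
                        heapq.heappush(queue, (g[t] + W * h(t), t))
            return None  # goal unreachable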

    Boolean Abstractions for Realizability Modulo Theories (Extended version)

    In this paper, we address the problem of the (reactive) realizability of specifications over theories richer than Booleans, including arithmetic theories. Our approach transforms theory specifications into purely Boolean specifications by (1) substituting theory literals with Boolean variables, and (2) computing an additional Boolean requirement that captures the dependencies between the new variables imposed by the literals. The resulting specification can be passed to existing off-the-shelf Boolean realizability tools, and it is realizable if and only if the original specification is realizable. The first contribution is a brute-force version of our method, which requires a number of SMT queries that is doubly exponential in the number of input literals. We then present a faster method that exploits a nested encoding of the search for the extra requirement, using SAT solving to traverse the search space faster and SMT queries internally. Another contribution is a prototype in Z3-Python. Finally, we report an empirical evaluation using specifications inspired by real industrial cases. To the best of our knowledge, this is the first method that succeeds in non-Boolean LTL realizability.
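    Since the paper mentions a Z3-Python prototype, here is a heavily simplified, illustrative fragment (not the paper's code) of the consistency check behind the Booleanization: each theory literal is paired with a fresh Boolean variable, and an SMT solver rules out valuations of the Booleans that no theory assignment can realize; the surviving valuations are what the extra Boolean requirement must encode.

        from itertools import product
        from z3 import Bool, Int, Not, Solver, sat

        x = Int("x")
        literals = [x > 0, x > 10]                    # theory literals from the spec
        booleans = [Bool(f"b{i}") for i in range(len(literals))]

        consistent = []
        for valuation in product([True, False], repeat=len(literals)):
            s = Solver()
            for lit, value in zip(literals, valuation):
                s.add(lit if value else Not(lit))     # assert literal or its negation
            if s.check() == sat:                      # some value of x realizes it
                consistent.append(dict(zip(booleans, valuation)))

        # (b0=False, b1=True) is missing: x > 10 forces x > 0.
        print(consistent)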