
    A*+BFHS: A Hybrid Heuristic Search Algorithm

    We present a new algorithm, A*+BFHS, for solving problems with unit-cost operators where A* and IDA* fail due to memory limitations and/or the existence of many distinct paths between the same pair of nodes. A*+BFHS is based on A* and breadth-first heuristic search (BFHS). It combines advantages of both algorithms, namely A*'s node ordering, BFHS's memory savings, and both algorithms' duplicate detection. On easy problems, A*+BFHS behaves the same as A*. On hard problems, it is slower than A* but saves a large amount of memory. Compared to BFIDA*, A*+BFHS reduces the search time and/or memory requirement by several times on a variety of planning domains. Comment: 8 pages, 5 figures, 1 table.
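The BFHS component described above can be illustrated with a minimal sketch: a breadth-first search that expands nodes layer by layer (by g-value), prunes any node whose f = g + h exceeds a known upper bound, and keeps only the two most recent layers for duplicate detection. The interface below (`neighbors`, `manhattan`, the 4x4 grid) is a hypothetical toy setup, not the authors' implementation.

```python
def bfhs(start, goal, neighbors, h, upper_bound):
    """Sketch of breadth-first heuristic search for unit-cost problems.

    Expands layer by layer, prunes nodes with g + h > upper_bound, and
    detects duplicates against only the current and previous layers
    (the source of BFHS's memory savings).
    """
    if start == goal:
        return 0
    prev_layer, curr_layer = set(), {start}
    g = 0
    while curr_layer:
        next_layer = set()
        for node in curr_layer:
            for succ in neighbors(node):
                if succ in prev_layer or succ in curr_layer or succ in next_layer:
                    continue  # duplicate detection against recent layers
                if g + 1 + h(succ) > upper_bound:
                    continue  # prune: cannot beat the upper bound
                if succ == goal:
                    return g + 1
                next_layer.add(succ)
        prev_layer, curr_layer = curr_layer, next_layer
        g += 1
    return None  # no solution within the bound


# Toy usage: shortest path on a 4x4 grid with the Manhattan heuristic.
def grid_neighbors(p):
    x, y = p
    return [(nx, ny) for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            if 0 <= nx < 4 and 0 <= ny < 4]


def manhattan(p):
    return abs(3 - p[0]) + abs(3 - p[1])


length = bfhs((0, 0), (3, 3), grid_neighbors, manhattan, upper_bound=6)  # -> 6
```

In the full A*+BFHS algorithm, such breadth-first phases are run on portions of the A* frontier; the sketch only shows the layered expansion and pruning idea.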

    An Integrated Toolkit for Modern Action Planning

    Bützken M, Edelkamp S, Elalaoui A, et al. An Integrated Toolkit for Modern Action Planning. In: 19th Workshop on New Results in Planning, Scheduling and Design (PUK). 2005: 1-11. In this paper we introduce the architecture and the abilities of our design and analysis workbench for modern action planning. The toolkit provides automated domain analysis tools together with PDDL learning capabilities. New optimal and suboptimal planners extend state-of-the-art technology. With the tool, domain experts are assisted in solving hard combinatorial problems. Approximate or incremental solutions provided by the system are supervised. Intermediate results are accessible to improve domain modeling and to tune exploration in generating high-quality plans, which, in turn, can be bootstrapped for domain inference.

    From non-negative to general operator cost partitioning

    Operator cost partitioning is a well-known technique to make admissible heuristics additive by distributing the operator costs among individual heuristics. Planning tasks are usually defined with non-negative operator costs, and it therefore appears natural to demand the same for the distributed costs. We argue that this requirement is not necessary and demonstrate the benefit of using general cost partitioning. We show that LP heuristics for operator-counting constraints are cost-partitioned heuristics and that the state equation heuristic computes a cost partitioning over atomic projections. We also introduce a new family of potential heuristics and show their relationship to general cost partitioning.
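The core admissibility condition of cost partitioning is that, for every operator, the cost shares given to the individual heuristics sum to at most the original cost; general cost partitioning additionally permits negative shares. A minimal sketch of this validity check follows, using assumed dictionary representations (not any planner's actual data structures):

```python
def is_valid_partitioning(costs, shares, allow_negative=True):
    """Check the cost-partitioning condition.

    costs:  {operator: original cost}
    shares: list of {operator: cost share}, one dict per heuristic.
    For every operator, the shares must sum to at most the original
    cost; with allow_negative=False, shares must also be non-negative
    (classic non-negative cost partitioning).
    """
    for op, cost in costs.items():
        if sum(s.get(op, 0) for s in shares) > cost:
            return False
        if not allow_negative and any(s.get(op, 0) < 0 for s in shares):
            return False
    return True


# Toy example: a share of 3 for operator 'a' in the first heuristic is
# offset by a share of -1 in the second, which only general cost
# partitioning allows.
costs = {'a': 2, 'b': 0}
shares = [{'a': 3, 'b': -1}, {'a': -1, 'b': 1}]
is_valid_partitioning(costs, shares)                        # True (general)
is_valid_partitioning(costs, shares, allow_negative=False)  # False
```

Under this condition, the sum of the heuristics, each evaluated with its own cost share, remains admissible for the original task.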

    Probably approximately correct heuristic search

    A* is a best-first search algorithm that returns an optimal solution. w-admissible algorithms guarantee that the returned solution costs no more than w times the optimal solution. In this paper we introduce a generalization of the w-admissibility concept that we call PAC search, which is inspired by the PAC learning framework in machine learning. The task of a PAC search algorithm is to find a solution that is w-admissible with high probability. We formally define PAC search and present a framework for PAC search algorithms that can work on top of any search algorithm that produces a sequence of solutions. Experimental results on the 15-puzzle demonstrate that our framework, activated on top of Anytime Weighted A* (AWA*), expands significantly fewer nodes than regular AWA* while returning solutions of almost the same quality.
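The framework's stopping rule can be sketched as follows: the incumbent solution of cost C is w-admissible exactly when the optimal cost is at least C/w, so given an assumed estimate of P(optimal >= c), the search may stop once that probability reaches 1 - delta. The code below is an illustrative sketch with a made-up probability model, not the paper's actual estimator:

```python
def pac_stop(solutions, w, delta, prob_opt_at_least):
    """Consume a stream of improving solution costs from an anytime
    search and stop once the incumbent is w-admissible with
    probability at least 1 - delta.

    prob_opt_at_least(c): assumed estimate of P(optimal cost >= c).
    """
    best = float('inf')
    for cost in solutions:
        best = min(best, cost)
        # best <= w * optimal  iff  optimal >= best / w
        if prob_opt_at_least(best / w) >= 1 - delta:
            return best
    return best


# Toy model: the optimal cost is certainly at least 10 (e.g. an
# admissible heuristic value at the start state), with probability
# decaying linearly above that.
def prob_opt_at_least(c):
    return 1.0 if c <= 10 else max(0.0, 1 - (c - 10) / 10)


pac_stop(iter([18, 14, 12]), w=1.2, delta=0.05, prob_opt_at_least=prob_opt_at_least)  # -> 12
```

Here the stream of incumbents 18, 14, 12 stands in for the solutions an anytime algorithm such as AWA* would emit; the wrapper stops at 12 because 12/1.2 = 10 and the model assigns probability 1 to the optimum being at least 10.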

    Analyzing the performance of pattern database heuristics

    We introduce a model for predicting the performance of IDA* using pattern database heuristics, as a function of the branching factor of the problem, the solution depth, and the size of the pattern databases. While it is known that the larger the pattern database, the more efficient the search, we provide a quantitative analysis of this relationship. In particular, we show that for a single goal state, the number of nodes expanded by IDA* is a fraction (log_b s + 1)/s of the nodes expanded by a brute-force search, where b is the branching factor and s is the size of the pattern database. We also show that by taking the maximum of at least two pattern databases, the number of node expansions decreases linearly with s compared to a brute-force search. We compare our theoretical predictions with empirical performance data on Rubik's Cube. Our model is conservative, and overestimates the actual number of node expansions.