    Merge-and-Shrink Heuristics for Classical Planning: Efficient Implementation and Partial Abstractions

    Merge-and-shrink heuristics are a successful class of abstraction heuristics used for optimal classical planning. With the recent addition of generalized label reduction, merge-and-shrink can be understood as an algorithm framework that repeatedly applies transformations to a factored representation of a given planning task to compute an abstraction. In this paper, we describe an efficient implementation of the framework and its transformations, comparing it to its previous implementation in Fast Downward. We further discuss partial merge-and-shrink abstractions that do not consider all aspects of the concrete state space. To compute such partial abstractions, we stop the merge-and-shrink computation early by imposing simple limits on the resource consumption of the algorithm. Our evaluation shows that the efficient implementation indeed improves over the previous one, and that partial merge-and-shrink abstractions further push the efficiency of merge-and-shrink planners.
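
    To make the loop structure concrete, here is a minimal sketch of the framework's main cycle, including the early termination that yields partial abstractions. All names (merge_strategy, shrink, the factor objects and their merge method) are illustrative stand-ins, not the Fast Downward API.

        import time

        def merge_and_shrink(factors, merge_strategy, shrink, max_states, time_limit):
            # Repeatedly merge two factors and shrink the product until a
            # single factor remains or a resource limit is hit.
            start = time.time()
            while len(factors) > 1:
                if time.time() - start > time_limit:
                    break  # stop early: remaining factors form a partial abstraction
                i, j = merge_strategy(factors)           # choose two factors to merge
                product = factors[i].merge(factors[j])   # synchronized product
                product = shrink(product, max_states)    # abstract to respect the size limit
                factors = [f for k, f in enumerate(factors) if k not in (i, j)]
                factors.append(product)
            return factors  # one factor (full abstraction) or several (partial)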

    Merge-and-shrink abstractions for classical planning: theory, strategies, and implementation

    Classical planning is the problem of finding a sequence of deterministic actions in a state space that leads from an initial state to a state satisfying some goal condition. The dominant approach to solving planning tasks optimally is heuristic search, in particular A* search combined with an admissible heuristic. While there exist many different admissible heuristics, we focus on abstraction heuristics in this thesis, and in particular on the well-established merge-and-shrink heuristics. Our main theoretical contribution is to provide a comprehensive description of the merge-and-shrink framework in terms of transformations of transition systems. Unlike previous accounts, our description is fully compositional, i.e., it can be understood by understanding each transformation in isolation. In particular, in addition to the eponymous merge and shrink transformations, we also describe pruning and label reduction as such transformations. The latter is based on generalized label reduction, a new theory that removes all of the restrictions of the previous definition of label reduction. We study the four types of transformations in terms of desirable formal properties and explain how these properties transfer to heuristics being admissible and consistent or even perfect. We also describe an optimized implementation of the merge-and-shrink framework that substantially improves the efficiency compared to previous implementations. Furthermore, we investigate the expressive power of merge-and-shrink abstractions by analyzing factored mappings, the data structure they use for representing functions. In particular, we show that there exist certain families of functions that can be compactly represented by so-called non-linear factored mappings but not by linear ones. On the practical side, we contribute several non-linear merge strategies to the merge-and-shrink toolbox. In particular, we adapt a merge strategy from model checking to planning, provide a framework to enhance existing merge strategies based on symmetries, devise a simple score-based merge strategy that minimizes the maximum size of transition systems of the merge-and-shrink computation, and describe another framework to enhance merge strategies based on an analysis of causal dependencies of the planning task. In a large experimental study, we show the evolution of the performance of merge-and-shrink heuristics on planning benchmarks. Starting with the state of the art before the contributions of this thesis, we subsequently evaluate all of our techniques and show that state-of-the-art non-linear merge-and-shrink heuristics improve significantly over the previous state of the art.
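
    As one concrete instance of the score-based idea mentioned above, the following hypothetical strategy scores each candidate pair of factors by the size of its synchronized product and greedily picks the smallest; the thesis' actual scoring functions differ in detail.

        def min_product_size_merge_strategy(factors):
            # factors: objects with a num_states attribute (illustrative interface).
            # Score each candidate pair by the size of its product (an upper
            # bound on the merged factor) and pick the pair with the smallest score.
            best_pair, best_score = None, float("inf")
            for i in range(len(factors)):
                for j in range(i + 1, len(factors)):
                    score = factors[i].num_states * factors[j].num_states
                    if score < best_score:
                        best_pair, best_score = (i, j), score
            return best_pair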

    Merge-and-Shrink Task Reformulation for Classical Planning

    The performance of domain-independent planning systems heavily depends on how the planning task has been modeled. This makes task reformulation an important tool for getting rid of unnecessary complexity and increasing the robustness of planners with respect to the model chosen by the user. In this paper, we represent tasks as factored transition systems (FTS) and use the merge-and-shrink (M&S) framework to reformulate tasks for optimal and satisficing planning. We prove that the flexibility of the underlying representation makes the M&S reformulation methods more powerful than their counterparts based on the more popular finite-domain representation. We adapt delete-relaxation and M&S heuristics to work on the FTS representation and evaluate the impact of our reformulation.

    Cost-Partitioned Merge-and-Shrink Heuristics for Optimal Classical Planning


    Efficient Evaluation of Large Abstractions for Decoupled Search: Merge-and-Shrink and Symbolic Pattern Databases

    Abstraction heuristics are a state-of-the-art technique for solving classical planning problems optimally. A common approach is to precompute many small abstractions and combine them admissibly using cost partitioning. Recent work has shown that this approach does not work well when using such heuristics for decoupled state space search, where search nodes represent potentially large sets of states. This is because admissibly combining the estimates of several heuristics without sacrificing accuracy is NP-hard for decoupled states. In this work, we propose to use a single large abstraction instead. We focus on merge-and-shrink and symbolic pattern database heuristics, which are designed to produce such abstractions. For these heuristics, we prove that the evaluation of decoupled states is NP-hard in general, but we also identify conditions under which it is polynomial. We introduce algorithms for both the general and the polynomial case. Our experimental evaluation shows that single large abstraction heuristics lead to strong performance when the heuristic evaluation is polynomial.
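
    The admissible combination via cost partitioning that the opening sentences refer to can be sketched as follows; all names are illustrative. The sum of the per-heuristic estimates is admissible as long as the per-heuristic cost functions sum to at most the original operator costs, because then each unit of plan cost is counted by at most one heuristic.

        def cost_partitioned_estimate(state, heuristics, cost_functions):
            # heuristics[i](state, cost_fn) returns an admissible estimate
            # under cost_fn. Admissibility of the sum requires that, for
            # every operator o:
            #     sum_i cost_functions[i][o] <= original_cost[o]
            return sum(h(state, c) for h, c in zip(heuristics, cost_functions))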

    Simplified Planner Selection

    There exists no planning algorithm that outperforms all others. Therefore, it is important to know which algorithm works well on a given task. A recently published approach uses either image or graph convolutional neural networks to solve this problem and achieves top performance. However, the transformation from the task to an image in particular discards a lot of information. Thus, we would like to know what the network is learning and whether this is reasonable. As this is currently not possible, we take one step back. We identify a small set of simple graph features and show that elementary and interpretable machine learning techniques can use those features to outperform the neural-network-based approach. Furthermore, we evaluate the importance of those features and verify that the performance of our approach is robust to changes in the training and test data.
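
    A sketch of this kind of pipeline, assuming the task is represented by its causal graph as a networkx.DiGraph; the concrete features used in the paper may differ.

        import networkx as nx
        from sklearn.tree import DecisionTreeClassifier

        def graph_features(causal_graph):
            # A few simple, interpretable features of a task's causal graph.
            n = causal_graph.number_of_nodes()
            m = causal_graph.number_of_edges()
            return [
                n,                       # number of variables
                m,                       # number of dependencies
                m / max(n, 1),           # dependency density
                nx.number_weakly_connected_components(causal_graph),
            ]

        # Given feature vectors X and best-planner labels y for the training
        # tasks, an interpretable model replaces the convolutional network:
        #     model = DecisionTreeClassifier(max_depth=3).fit(X, y)
        #     model.predict([graph_features(new_task_graph)])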

    An Analysis of Merge Strategies for Merge-and-Shrink Heuristics

    The merge-and-shrink framework provides a general basis for the computation of abstraction heuristics for factored transition systems. Recent experimental and theoretical research has demonstrated the utility of non-linear merge strategies, which have not been studied in depth. We experimentally analyze the quality of state-of-the-art merge strategies by comparing them to random strategies and with respect to tie-breaking, showing that there is considerable room for improvement. Finally, we describe a new merge strategy that experimentally outperforms the current state of the art.
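
    The random baseline used in such a comparison is easy to state; a minimal sketch, with the same illustrative interface as above, where a merge strategy maps a list of factors to an index pair:

        import random

        def random_merge_strategy(factors, rng=random.Random(0)):
            # Baseline: choose two distinct factors uniformly at random.
            # Comparing informed strategies against this lower bound (and
            # against tie-breaking variants) shows how much room for
            # improvement remains.
            i, j = rng.sample(range(len(factors)), 2)
            return min(i, j), max(i, j)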

    Narrowing the Gap Between Saturated and Optimal Cost Partitioning for Classical Planning

    In classical planning, cost partitioning is a method for admissibly combining a set of heuristic estimators by distributing operator costs among the heuristics. An optimal cost partitioning is often prohibitively expensive to compute. Saturated cost partitioning is an alternative that is much faster to compute and has been shown to offer high-quality heuristic guidance on Cartesian abstractions. However, its greedy nature makes it highly susceptible to the order in which the heuristics are considered. We show that searching in the space of orders leads to significantly better heuristic estimates than with previously considered orders. Moreover, using multiple orders leads to a heuristic that is significantly better informed than any single-order heuristic. In experiments with Cartesian abstractions, the resulting heuristic approximates the optimal cost partitioning very closely.
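
    A sketch of the greedy procedure, assuming a saturate function that computes the minimum cost function under which one heuristic keeps all of its estimates (its details depend on the abstraction type); the order dependence discussed above is visible in the loop.

        def saturated_cost_partitioning(ordered_heuristics, costs, saturate):
            # Process heuristics in the given order; each keeps only the
            # costs it needs (its saturated cost function), and the leftover
            # costs remain available for the heuristics that follow.
            remaining = dict(costs)
            partition = []
            for h in ordered_heuristics:
                saturated = saturate(h, remaining)
                partition.append(saturated)
                for op in remaining:
                    remaining[op] -= saturated[op]
            return partition  # one cost function per heuristic; depends on the order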

    Machine Learning for Classical Planning: Neural Network Heuristics, Online Portfolios, and State Space Topologies

    State space search solves navigation tasks and many other real-world problems. Heuristic search, especially greedy best-first search, is one of the most successful algorithms for state space search. We improve the state of the art in heuristic search in three directions. In Part I, we present methods to train neural networks as powerful heuristics for a given state space. We present a universal approach to generate training data using random walks from a (partial) state. We demonstrate that our heuristics trained for a specific task are often better than heuristics trained for a whole domain. We show that the performance of the trained heuristics is highly complementary: there is no clear pattern as to which trained heuristic to prefer for a specific task. In general, model-based planners still outperform planners with trained heuristics, but our approaches exceed the model-based algorithms in the Storage domain. To our knowledge, a learning-based planner has exceeded the state-of-the-art model-based planners only once before, in the Spanner domain.

    A priori, it is unknown whether a heuristic, or more generally a planner, performs well on a task. Hence, we train online portfolios to select the best planner for a task. To date, all online portfolios are based on handcrafted features. In Part II, we present new online portfolios based on neural networks, which receive the complete task as input rather than just a few handcrafted features. Additionally, our portfolios can reconsider their choices. Both extensions greatly improve the state of the art of online portfolios. Finally, we show that explainable machine learning techniques, as an alternative to neural networks, also yield good online portfolios. Additionally, we present methods to improve our trust in their predictions.

    Even if we select the best search algorithm, we cannot solve some tasks in reasonable time. We can speed up the search if we know how it behaves in the future. In Part III, we inspect the behavior of greedy best-first search with a fixed heuristic on simple tasks of a domain to learn its behavior for any task of the same domain. Once greedy best-first search has expanded a progress state, it only expands states with lower heuristic values. We learn to identify progress states and present two methods to exploit this knowledge. Building on this, we extract the bench transition system of a task and generalize it in such a way that we can apply it to any task of the same domain. We can use this generalized bench transition system to split a task into a sequence of simpler searches. In all three research directions, we contribute new approaches and insights to the state of the art, and we indicate interesting topics for future work.
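
    As an illustration of the data-generation idea from Part I, the following hypothetical sketch labels the endpoint of each random walk with the number of steps taken, which serves as a noisy distance target for training a neural network heuristic; successors is an assumed task-specific function.

        import random

        def random_walk_samples(start, successors, num_walks, max_length,
                                rng=random.Random(0)):
            # Generate (state, distance-estimate) training pairs by random
            # walks from 'start'.
            samples = []
            for _ in range(num_walks):
                state, dist = start, 0
                for _ in range(rng.randint(1, max_length)):
                    succs = successors(state)
                    if not succs:
                        break
                    state = rng.choice(succs)
                    dist += 1
                samples.append((state, dist))  # walk length approximates goal distance
            return samples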
