96,902 research outputs found

    Theory and Algorithms for Partial Order Based Reduction in Planning

    Search is a major technique for planning: it amounts to exploring the state space of a planning domain, typically modeled as a directed graph. However, the prohibitively large size of the search space makes search expensive. Developing better heuristic functions has been the main technique for improving search efficiency; nevertheless, recent studies have shown that improving heuristics alone has fundamental limits. Partial order based reduction (POR), which has shown promise in speeding up search, has recently been proposed as an alternative direction. POR has been extensively studied in model checking, where it is a key enabling technique for scalability, but it has never been developed systematically for planning. In addition, the conditions for POR in the model checking theory are abstract and not directly applicable to planning, and previous work on POR algorithms for planning did not establish the connection between these algorithms and the existing theory in model checking. In this paper, we develop a theory for POR in planning that connects the stubborn set theory in model checking with POR methods in planning. We show that previous POR algorithms in planning can be explained by the new theory, and, based on it, we propose a new, stronger POR algorithm. Experimental results on various planning domains show further search cost reduction using the new algorithm.
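    To make the reduction idea concrete, the sketch below shows where a POR step plugs into an ordinary heuristic search: at every expansion only a reduced subset of the applicable actions is generated. The reduced_action_set hook is a hypothetical placeholder for a stubborn-set-style computation, not the algorithm proposed in the paper, and the search interface (applicable_actions, apply_action, h) is assumed purely for illustration.

        import heapq

        def reduced_action_set(state, applicable):
            # Placeholder for a POR computation (e.g. a stubborn set).
            # A sound reduction must keep at least one permutation of every
            # optimal plan; returning all applicable actions is the trivial,
            # always-correct choice.
            return applicable

        def astar_with_por(initial, is_goal, applicable_actions, apply_action, h):
            # Plain A*; the only POR-specific line is the call to
            # reduced_action_set in the expansion loop.
            frontier = [(h(initial), 0, initial)]
            best_g = {initial: 0}
            while frontier:
                f, g, state = heapq.heappop(frontier)
                if is_goal(state):
                    return g
                if g > best_g.get(state, float("inf")):
                    continue
                for action in reduced_action_set(state, applicable_actions(state)):
                    succ, cost = apply_action(state, action)
                    new_g = g + cost
                    if new_g < best_g.get(succ, float("inf")):
                        best_g[succ] = new_g
                        heapq.heappush(frontier, (new_g + h(succ), new_g, succ))
            return None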

    Partial Order Based Reduction in Planning: A Unifying Theory and New Algorithms

    Partial order based reduction (POR) has recently attracted interest in planning research. POR algorithms reduce the search space by recognizing interchangeable orders between actions and expanding only a subset of all possible orders during the search. POR has been extensively studied in model checking, where it has proved to be an enabling technique for reducing search space and cost. Recently, two POR algorithms for planning have been proposed: expansion core (EC) and stratified planning (SP). Being orthogonal to the development of accurate heuristic functions, these reduction methods show great potential to improve planning efficiency from a new perspective. However, it is unclear how these POR methods relate to each other and whether stronger reduction methods exist. We propose a unifying theory for POR. The theory gives a necessary and sufficient condition for two actions to be semi-commutative, a condition that enables POR. We interpret both EC and SP in this theoretical framework. Further, based on the new theory, we propose new, stronger POR algorithms. Experimental results on various planning domains show significant search cost reduction.
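    As a rough illustration of "interchangeable orders", the sketch below checks whether two STRIPS-style actions reach the same state when applied in either order from a given state. The Action layout is invented for the example, and full commutativity of this kind is stronger than the semi-commutativity condition developed in the paper; it only conveys the underlying intuition.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Action:
            # Illustrative STRIPS-style action over sets of propositions.
            name: str
            pre: frozenset = frozenset()
            add: frozenset = frozenset()
            delete: frozenset = frozenset()

        def applicable(state, a):
            return a.pre <= state

        def apply_action(state, a):
            return (state - a.delete) | a.add

        def interchangeable_from(state, a, b):
            # True if both orders a;b and b;a are executable from `state`
            # and lead to the same resulting state.
            if not (applicable(state, a) and applicable(apply_action(state, a), b)):
                return False
            if not (applicable(state, b) and applicable(apply_action(state, b), a)):
                return False
            return apply_action(apply_action(state, a), b) == \
                   apply_action(apply_action(state, b), a)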

    Accelerating Heuristic Search for AI Planning

    AI planning is an important research field, and heuristic search is the most commonly used method for solving planning problems. Despite recent advances in improving the quality of heuristics and devising better search strategies, the high computational cost of heuristic search remains a barrier that severely limits its application to real-world problems. In this dissertation, we propose theories, algorithms, and systems to accelerate heuristic search for AI planning. We make four major contributions. First, we propose a state-space reduction method called Stratified Planning to accelerate heuristic search. Stratified Planning can be combined with any heuristic search to prune redundant paths in the state space without sacrificing the optimality and completeness of the search algorithm. Second, we propose a general theory for partial order reduction in planning. The proposed theory unifies previous reduction algorithms for planning and ushers in new partial order reduction algorithms that can further accelerate heuristic search by pruning more nodes in the state space than previously proposed algorithms. Third, we study the local structure of the state space and propose using random walks to accelerate plateau exploration during heuristic search. We also implement two state-of-the-art planners that performed competitively in the Seventh International Planning Competition. Finally, we utilize cloud computing to further accelerate search for planning: we propose a portfolio stochastic search algorithm that takes advantage of the cloud, and we implement a cloud-based planning system to which users can submit planning tasks and make full use of the computational resources provided by the cloud. We push the state of the art in AI planning by developing theories and algorithms that accelerate heuristic search, and we implement state-of-the-art planning systems with strong speed and quality performance.
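    The third contribution, random-walk plateau exploration, can be pictured with a minimal sketch: when the heuristic stops decreasing, launch several short random walks from the current state and jump to the best state any walk reaches. The function below is a generic version of that idea with illustrative parameter defaults, not the dissertation's specific procedure.

        import random

        def random_walk_escape(state, h, applicable_actions, apply_action,
                               walk_length=10, num_walks=50, rng=random):
            # Run several short random walks from `state` and return the best
            # state found with a strictly lower heuristic value, or None if
            # every walk stays on the plateau. Defaults are illustrative.
            best_state, best_h = None, h(state)
            for _ in range(num_walks):
                current = state
                for _ in range(walk_length):
                    actions = applicable_actions(current)
                    if not actions:
                        break
                    current = apply_action(current, rng.choice(actions))
                    h_current = h(current)
                    if h_current < best_h:
                        best_state, best_h = current, h_current
            return best_state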

    On Different Strategies for Eliminating Redundant Actions from Plans

    Satisficing planning engines are often able to generate plans in reasonable time; however, these plans are often far from optimal and contain many redundant actions, that is, actions that can be removed without affecting plan validity. Existing approaches for determining and eliminating redundant actions run in polynomial time but do not guarantee eliminating the "best" set of redundant actions, since that problem is NP-complete. We introduce an approach that encodes the problem of determining the "best" set of redundant actions (i.e., the set with maximum total cost) as a weighted MaxSAT problem. Moreover, we adapt the existing polynomial technique, which greedily tries to eliminate an action and its dependents from the plan, in order to eliminate more expensive redundant actions. The proposed approaches are empirically compared to existing approaches on plans generated by state-of-the-art planning engines on standard planning benchmarks.
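    A minimal sketch of the greedy flavour of redundant-action elimination is given below: repeatedly try to drop the most expensive action whose removal still leaves a valid plan. The plan representation and validity check are simplified placeholders; the MaxSAT encoding and the exact dependency handling of the adapted polynomial technique are not shown.

        def is_valid_plan(initial_state, goal, plan, applicable, apply_action):
            # Check that `plan` is executable from `initial_state` and reaches the goal.
            state = initial_state
            for action in plan:
                if not applicable(state, action):
                    return False
                state = apply_action(state, action)
            return goal(state)

        def greedy_eliminate_redundant(initial_state, goal, plan, applicable,
                                       apply_action, cost=lambda a: 1):
            # Greedily remove single actions, preferring expensive ones, as long
            # as the remaining sequence stays a valid plan.
            plan = list(plan)
            improved = True
            while improved:
                improved = False
                order = sorted(range(len(plan)), key=lambda i: cost(plan[i]), reverse=True)
                for i in order:
                    candidate = plan[:i] + plan[i + 1:]
                    if is_valid_plan(initial_state, goal, candidate, applicable, apply_action):
                        plan, improved = candidate, True
                        break
            return plan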

    Taming Numbers and Durations in the Model Checking Integrated Planning System

    The Model Checking Integrated Planning System (MIPS) is a temporal least-commitment heuristic search planner based on a flexible object-oriented workbench architecture. Its design clearly separates explicit and symbolic directed exploration algorithms from the set of on-line and off-line computed estimates and associated data structures. MIPS has shown distinguished performance in the last two international planning competitions. In the most recent event, the description language was extended from pure propositional planning to include numerical state variables, action durations, and plan quality objective functions. Plans were no longer sequences of actions but time-stamped schedules. As a participant in the fully automated track of the competition, MIPS has proven to be a general system; in each track and every benchmark domain it efficiently computed plans of remarkable quality. This article introduces and analyzes the most important algorithmic novelties that were necessary to tackle the new layers of expressiveness in the benchmark problems and to achieve a high level of performance. The extensions include critical path analysis of sequentially generated plans to generate corresponding optimal parallel plans. The linear-time algorithm to compute the parallel plan bypasses known NP-hardness results for partial ordering by scheduling plans with respect to the set of actions and the imposed precedence relations. The efficiency of this algorithm also allows us to improve the exploration guidance: for each encountered planning state, the corresponding approximate sequential plan is scheduled. One major strength of MIPS is its static analysis phase, which grounds and simplifies parameterized predicates, functions, and operators, infers knowledge to minimize the state description length, and detects domain object symmetries. The latter aspect is analyzed in detail. MIPS has been developed to serve as a complete and optimal state space planner, with admissible estimates, exploration engines, and branching cuts. In the competition version, however, certain performance compromises had to be made, including floating-point arithmetic, weighted heuristic search exploration according to an inadmissible estimate, and parameterized optimization.
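    The step from a sequential plan to a parallel one can be pictured with the sketch below: each action starts as early as the precedence relations to earlier actions in the sequence allow, and the makespan falls out of the resulting start times. The must_precede test is a placeholder for the ordering constraints MIPS derives from preconditions and effects, and the quadratic pairwise scan is kept for clarity even though the article's algorithm runs in linear time.

        def schedule_parallel(plan, duration, must_precede):
            # Earliest-start scheduling of a sequential plan under precedence
            # constraints; returns the start time of each action and the makespan.
            start = []
            for j, b in enumerate(plan):
                earliest = 0.0
                for i in range(j):
                    if must_precede(plan[i], b):
                        earliest = max(earliest, start[i] + duration(plan[i]))
                start.append(earliest)
            makespan = max((start[i] + duration(plan[i]) for i in range(len(plan))),
                           default=0.0)
            return start, makespan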

    Symblicit algorithms for optimal strategy synthesis in monotonic Markov decision processes

    When treating Markov decision processes (MDPs) with large state spaces, using explicit representations quickly becomes infeasible. Recently, Wimmer et al. proposed a so-called symblicit algorithm for the synthesis of optimal strategies in MDPs in the quantitative setting of expected mean-payoff. This algorithm, based on the strategy iteration algorithm of Howard and Veinott, efficiently combines symbolic and explicit data structures and uses binary decision diagrams as the symbolic representation. The aim of this paper is to show that the new data structure of pseudo-antichains (an extension of antichains) provides another interesting alternative, especially for the class of monotonic MDPs. We design efficient pseudo-antichain based symblicit algorithms (with open source implementations) for two quantitative settings: the expected mean-payoff and the stochastic shortest path. For two practical applications coming from automated planning and LTL synthesis, we report promising experimental results with respect to both run time and memory consumption.
    Comment: In Proceedings SYNT 2014, arXiv:1407.493
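    For orientation, the sketch below is the textbook explicit strategy (policy) iteration for the stochastic shortest path on a small MDP, written with numpy. It only illustrates the evaluate-and-improve computation that the symblicit approach carries out partly symbolically, assumes every policy considered is proper, and uses an array layout for P and c that is invented for the example.

        import numpy as np

        def ssp_strategy_iteration(P, c, goal, n_states, n_actions):
            # P[a][s][t]: probability of moving from s to t under action a.
            # c[a][s]:    cost of taking action a in state s.
            # goal:       set of absorbing, zero-cost goal states.
            non_goal = [s for s in range(n_states) if s not in goal]
            idx = {s: k for k, s in enumerate(non_goal)}
            policy = np.zeros(n_states, dtype=int)
            while True:
                # Policy evaluation: solve (I - P_pi) v = c_pi on non-goal states.
                A = np.eye(len(non_goal))
                b = np.zeros(len(non_goal))
                for s in non_goal:
                    a = policy[s]
                    b[idx[s]] = c[a][s]
                    for t in non_goal:
                        A[idx[s], idx[t]] -= P[a][s][t]
                v = np.zeros(n_states)
                v[non_goal] = np.linalg.solve(A, b)
                # Policy improvement: greedy one-step lookahead.
                new_policy = policy.copy()
                for s in non_goal:
                    q = [c[a][s] + sum(P[a][s][t] * v[t] for t in range(n_states))
                         for a in range(n_actions)]
                    new_policy[s] = int(np.argmin(q))
                if np.array_equal(new_policy, policy):
                    return policy, v
                policy = new_policy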

    On the computation of π-flat outputs for differential-delay systems

    We introduce a new definition of π-flatness for linear differential-delay systems with time-varying coefficients. We characterize π- and π-0-flat outputs and provide an algorithm to efficiently compute such outputs. We present an academic example of motion planning to discuss the pertinence of the approach.
    Comment: Minor corrections to fit with the journal version