
    Taming Numbers and Durations in the Model Checking Integrated Planning System

    The Model Checking Integrated Planning System (MIPS) is a temporal least-commitment heuristic search planner based on a flexible object-oriented workbench architecture. Its design clearly separates explicit and symbolic directed exploration algorithms from the set of on-line and off-line computed estimates and associated data structures. MIPS has shown distinguished performance in the last two international planning competitions. In the last event the description language was extended from pure propositional planning to include numerical state variables, action durations, and plan quality objective functions. Plans were no longer sequences of actions but time-stamped schedules. As a participant in the fully automated track of the competition, MIPS has proven to be a general system; in each track and every benchmark domain it efficiently computed plans of remarkable quality. This article introduces and analyzes the most important algorithmic novelties that were necessary to tackle the new layers of expressiveness in the benchmark problems and to achieve a high level of performance. The extensions include critical path analysis of sequentially generated plans to generate corresponding optimal parallel plans. The linear-time algorithm to compute the parallel plan bypasses known NP-hardness results for partial ordering by scheduling plans with respect to the set of actions and the imposed precedence relations. The efficiency of this algorithm also allows us to improve the exploration guidance: for each encountered planning state the corresponding approximate sequential plan is scheduled. One major strength of MIPS is its static analysis phase that grounds and simplifies parameterized predicates, functions and operators, that infers knowledge to minimize the state description length, and that detects domain object symmetries. The latter aspect is analyzed in detail. MIPS has been developed to serve as a complete and optimal state space planner, with admissible estimates, exploration engines and branching cuts. In the competition version, however, certain performance compromises had to be made, including floating-point arithmetic, weighted heuristic search exploration according to an inadmissible estimate, and parameterized optimization.
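
    To make the scheduling idea concrete, here is a minimal Python sketch (not the MIPS implementation) of critical-path scheduling: each action of a sequential plan is assigned the earliest start time consistent with an assumed precedence relation, in a single pass that is linear in the number of actions and precedences. The actions, durations, and precedence relation below are invented for illustration.

```python
# Hypothetical sketch of critical-path scheduling: given a sequential plan
# and precedence constraints between interfering actions, assign each action
# the earliest start time consistent with its predecessors. Action names,
# durations, and the precedence relation are illustrative only.

def schedule_parallel(plan, duration, precedes):
    """plan: actions in sequential order; duration: action -> float;
    precedes: action -> set of earlier actions it must wait for."""
    start = {}
    for action in plan:  # one forward pass: linear in actions + precedences
        start[action] = max(
            (start[p] + duration[p] for p in precedes.get(action, ())),
            default=0.0,
        )
    makespan = max(start[a] + duration[a] for a in plan)
    return start, makespan

plan = ["load", "move", "unload"]
duration = {"load": 2.0, "move": 5.0, "unload": 2.0}
precedes = {"move": {"load"}, "unload": {"move"}}
print(schedule_parallel(plan, duration, precedes))
```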

    Towards a Reformulation Based Approach for Efficient Numeric Planning: Numeric Outer Entanglements

    Restricting the search space has been shown to be an effective approach for improving the performance of automated planning systems. A planner-independent technique for pruning the search space is domain and problem reformulation. Recently, Outer Entanglements, which are relations between planning operators and initial or goal predicates, have been introduced as a reformulation technique for eliminating potentially undesirable instances of planning operators, and thus restricting the search space. Reformulation techniques, however, have been mainly applied in classical planning, although many real-world planning applications require dealing with numerical information. In this paper, we investigate the usefulness of reformulation approaches in planning with numerical fluents. In particular, we propose an extension of the notion of outer entanglements for handling numeric fluents. An empirical evaluation, which involves 150 instances from 5 domains, shows promising results.
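
    As an illustration of how an entanglement can prune operator instances, the following hypothetical Python sketch keeps only those grounded instances whose designated precondition matches a fact in the initial state. The predicate and operator names are invented, and the actual reformulation in the paper is richer than this.

```python
# Illustrative sketch of pruning via an "entanglement by init": keep only
# those grounded instances of an operator whose entangled precondition is a
# fact of the initial state. All names below are hypothetical.

def prune_by_init_entanglement(instances, entangled_predicate, init_facts):
    """instances: list of (action_name, grounded_preconditions) pairs;
    entangled_predicate: predicate name tied to the initial state."""
    kept = []
    for name, preconds in instances:
        relevant = [f for f in preconds if f[0] == entangled_predicate]
        if all(f in init_facts for f in relevant):
            kept.append((name, preconds))
    return kept

init_facts = {("at", "truck1", "depot")}
instances = [
    ("drive-t1-depot", [("at", "truck1", "depot")]),    # kept
    ("drive-t1-market", [("at", "truck1", "market")]),  # pruned
]
print(prune_by_init_entanglement(instances, "at", init_facts))
```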

    Multi-objective optimisation of machine tool error mapping using automated planning

    Error mapping of machine tools is a multi-measurement task that is planned based on expert knowledge. There are no intelligent tools aiding the production of optimal measurement plans. In previous work, a method of intelligently constructing measurement plans demonstrated that it is feasible to optimise the plans either to reduce machine tool downtime or to reduce the estimated uncertainty of measurement due to the plan schedule. However, production scheduling and a continuously changing environment can impose conflicting constraints on downtime and the uncertainty of measurement. In this paper, the use of the produced measurement model to minimise machine tool downtime, the uncertainty of measurement, and the arithmetic mean of both is investigated and discussed through the use of twelve different error mapping instances. The multi-objective search plans have, on average, a 3% reduction in the time metric when compared to the downtime of the uncertainty-optimised plan, and a 23% improvement in the estimated uncertainty of measurement metric when compared to the uncertainty of the temporally optimised plan. Further experiments on a High Performance Computing (HPC) architecture demonstrated that there is on average a 3% improvement in optimality when compared with the experiments performed on the PC architecture. This demonstrates that even though a 4% improvement is beneficial, in most applications a standard PC architecture will result in a valid error mapping plan.
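
    The combined objective can be pictured with a small sketch: score a candidate measurement plan by normalised downtime, normalised uncertainty, or the arithmetic mean of both. The normalisation bounds below are assumed values for illustration, not figures from the paper.

```python
# Minimal sketch of the three objectives described above. Lower is better;
# a multi-objective search would minimise this score. Bounds are assumptions.

def plan_score(downtime_h, uncertainty_um, mode="mean",
               max_downtime_h=24.0, max_uncertainty_um=10.0):
    t = downtime_h / max_downtime_h           # normalise to [0, 1]
    u = uncertainty_um / max_uncertainty_um
    if mode == "time":
        return t
    if mode == "uncertainty":
        return u
    return 0.5 * (t + u)                      # arithmetic mean of both

print(plan_score(6.0, 4.0), plan_score(6.0, 4.0, mode="time"))
```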

    Planning through Automatic Portfolio Configuration: The PbP Approach

    In the field of domain-independent planning, several powerful planners implementing different techniques have been developed. However, none of these systems outperforms all the others in every known benchmark domain. In this work, we propose a multi-planner approach that automatically configures a portfolio of planning techniques for each given domain. The configuration process for a given domain uses a set of training instances to: (i) compute and analyze some alternative sets of macro-actions for each planner in the portfolio, identifying a (possibly empty) useful set, (ii) select a cluster of planners, each one with the identified useful set of macro-actions, that is expected to perform best, and (iii) derive some additional information for configuring the execution scheduling of the selected planners at planning time. The resulting planning system, called PbP (Portfolio-based Planner), has two variants focusing on speed and plan quality. Different versions of PbP entered and won the learning track of the sixth and seventh International Planning Competitions. In this paper, we experimentally analyze PbP in depth, considering planning speed and plan quality. We provide a collection of results that help to understand PbP's behavior, and demonstrate the effectiveness of our approach to configuring a portfolio of planners with macro-actions.
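
    The execution-scheduling step can be illustrated with a hedged sketch: run the configured cluster of planners round-robin with per-planner time slices until one returns a plan. The planner callables and slice lengths are placeholders; PbP learns its actual schedule from the training instances, and real planners would need to resume their search across slices.

```python
# Hypothetical sketch of time-sliced portfolio execution in the spirit of
# PbP. Planner callables, budgets, and the round-robin policy are
# placeholders, not PbP's actual learned schedule.

def run_portfolio(planners, slices, problem, timeout=60.0):
    """planners: list of callables planner(problem, budget) -> plan or None;
    slices: per-planner time budget (seconds) for each round."""
    elapsed = 0.0
    while elapsed < timeout:
        for planner, budget in zip(planners, slices):
            # assume each planner can resume its search on the next slice
            plan = planner(problem, budget)
            elapsed += budget
            if plan is not None:
                return plan
    return None
```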

    Neural Networks for Predicting Algorithm Runtime Distributions

    Many state-of-the-art algorithms for solving hard combinatorial problems in artificial intelligence (AI) include elements of stochasticity that lead to high variations in runtime, even for a fixed problem instance. Knowledge about the resulting runtime distributions (RTDs) of algorithms on given problem instances can be exploited in various meta-algorithmic procedures, such as algorithm selection, portfolios, and randomized restarts. Previous work has shown that machine learning can be used to individually predict the mean, median, and variance of RTDs. To establish a new state of the art in predicting RTDs, we demonstrate that the parameters of an RTD should be learned jointly and that neural networks can do this well by directly optimizing the likelihood of an RTD given runtime observations. In an empirical study involving five algorithms for SAT solving and AI planning, we show that neural networks predict the true RTDs of unseen instances better than previous methods, and can even do so when only a few runtime observations are available per training instance.
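
    A minimal PyTorch sketch of the joint-learning idea follows: a small network maps instance features to the parameters of a lognormal RTD and is trained by minimising the negative log-likelihood of observed runtimes. The lognormal choice, network sizes, and dummy data are assumptions made for illustration; the paper evaluates several distribution families.

```python
# Sketch: predict RTD parameters jointly by maximising the likelihood of
# observed runtimes. All sizes and the lognormal family are assumptions.

import torch
import torch.nn as nn

class RTDNet(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, 1)
        self.log_sigma = nn.Linear(hidden, 1)  # log-parameterised: sigma > 0

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_sigma(h).exp()

net = RTDNet(n_features=8)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.randn(64, 8)                    # instance features (dummy data)
runtimes = torch.rand(64, 1) * 10 + 0.1   # observed runtimes (dummy data)

# one training step: minimise the negative log-likelihood of the runtimes
mu, sigma = net(x)
nll = -torch.distributions.LogNormal(mu, sigma).log_prob(runtimes).mean()
opt.zero_grad()
nll.backward()
opt.step()
```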

    EmergencyGrid: Planning in Convergence Environments

    Government agencies are often responsible for event handling, planning, coordination, and status reporting during emergency response in natural disaster events such as floods, tsunamis, and earthquakes. Across such a range of emergency response scenarios, there is a common set of requirements that distributed intelligent computer systems generally address. To support the implementation of these requirements, some researchers are proposing the creation of grids, where final interface and processing nodes perform joint work supported by a network infrastructure. The aim of this project is to extend the concepts of emergency response grids, using a convergence scenario between the web and other computational platforms. Our initial work focuses on the Interactive Digital TV platform, where we intend to transform individual TV devices into active final nodes, using a hierarchical planning structure. We describe the architecture of this approach and an initial prototype specification that is being developed to validate some concepts and illustrate the advantages of this convergence planning environment.

    Progress in AI Planning Research and Applications

    Planning has made significant progress since its inception in the 1970s, both in terms of the efficiency and sophistication of its algorithms and representations and in terms of its potential for application to real problems. In this paper we sketch the foundations of planning as a sub-field of Artificial Intelligence and the history of its development over the past three decades. We then discuss some of the recent achievements within the field and provide some experimental data demonstrating the progress that has been made in the application of general planners to realistic and complex problems. The paper concludes by identifying some of the open issues that remain as important challenges for future research in planning.

    Short Term Unit Commitment as a Planning Problem

    ‘Unit Commitment’, setting online schedules for generating units in a power system so that supply meets demand, is integral to the secure, efficient, and economic daily operation of a power system. Conflicting desires for security of supply at minimum cost complicate this. Sustained research has produced methodologies within a guaranteed bound of optimality, given sufficient computing time. Regulatory requirements to reduce emissions in modern power systems have necessitated increased renewable generation, whose output cannot be directly controlled, introducing complex uncertainties. Traditional methods are thus less efficient, generating more costly schedules or requiring impractical increases in solution time. Meta-heuristic approaches are studied to identify why this large body of work has had little industrial impact despite continued academic interest over many years. A discussion of lessons learned is given, which should be of interest to researchers presenting new Unit Commitment approaches, such as a Planning implementation. Automated Planning is a sub-field of Artificial Intelligence, in which a time-stamped sequence of predefined actions manipulating a system towards a goal configuration is sought. This differs from previous Unit Commitment formulations found in the literature. There are fewer times when a unit’s online status switches, each switch representing a Planning action, than there are free variables in a traditional formulation. Efficient reasoning about these actions could reduce solution time, enabling Planning to tackle Unit Commitment problems with high levels of renewable generation. No existing Planning formulations for Unit Commitment were found. A successful formulation enumerating open challenges would constitute a good benchmark problem for the field. Thus, two models are presented. The first demonstrates the approach’s strength in temporal reasoning over numeric optimisation. The second balances this, but current algorithms cannot handle it. Extensions to an existing algorithm are proposed alongside a discussion of immediate challenges and possible solutions. This is intended to form a base from which a successful methodology can be developed.
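
    The action-based view can be sketched in Python: a schedule is a short list of switch actions (unit, period, new status) from which per-period online status is expanded and checked against demand. Three switch actions here stand in for what would be eight binary status variables in a traditional formulation; all units, capacities, and demand figures are invented for the example.

```python
# Illustrative sketch of Unit Commitment as a sequence of switch actions
# rather than one binary variable per unit per period. All data is invented.

def online_status(switches, units, horizon):
    """Expand switch actions (unit, period, on) into per-period status."""
    status = {u: [False] * horizon for u in units}
    state = {u: False for u in units}
    timeline = sorted(switches, key=lambda s: s[1])
    i = 0
    for t in range(horizon):
        while i < len(timeline) and timeline[i][1] == t:
            unit, _, on = timeline[i]
            state[unit] = on
            i += 1
        for u in units:
            status[u][t] = state[u]
    return status

units = {"coal": 300, "gas": 150}   # capacity in MW (invented)
demand = [100, 350, 400, 200]       # MW per period (invented)
switches = [("coal", 0, True), ("gas", 1, True), ("gas", 3, False)]

status = online_status(switches, units, horizon=len(demand))
feasible = all(
    sum(cap for u, cap in units.items() if status[u][t]) >= demand[t]
    for t in range(len(demand))
)
print(status, feasible)
```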

    MAGPIE: Machine Automated General Performance Improvement via Evolution of Software

    Performance is one of the most important qualities of software. Several techniques have thus been proposed to improve it, such as program transformations, optimisation of software parameters, or compiler flags. Many automated software improvement approaches use similar search strategies to explore the space of possible improvements, yet available tooling only focuses on one approach at a time. This makes comparisons and exploration of interactions between the various types of improvement impractical. We propose MAGPIE, a unified software improvement framework. It provides a common edit-sequence-based representation that isolates the search process from the specific improvement technique, enabling a much simplified synergistic workflow. We provide a case study using a basic local search to compare compiler optimisation, algorithm configuration, and genetic improvement. We chose running time as our efficiency measure and evaluated our approach on four real-world software projects, written in C, C++, and Java. Our results show that, used independently, all techniques find significant running time improvements: up to 25% for compiler optimisation, 97% for algorithm configuration, and 61% for evolving source code using genetic improvement. We also show that up to 10% further increase in performance can be obtained with partial combinations of the variants found by the different techniques. Furthermore, the common representation also enables simultaneous exploration of all techniques, providing a competitive alternative to using each technique individually.
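
    A hedged sketch of the edit-sequence idea: a candidate improvement is a list of edits (compiler flags, parameter settings, or source-level changes all fit the same shape), and a basic local search mutates that list, keeping mutations that reduce a simulated running time. The edit types and fitness function below are toy placeholders, not MAGPIE's actual interface.

```python
# Toy sketch of search over a common edit-sequence representation. The
# mutate and fitness functions are invented stand-ins for real edit types
# and real running-time measurements.

import random

def local_search(initial_edits, mutate, fitness, steps=100):
    """Accept a mutated edit sequence whenever it improves fitness
    (lower simulated running time); otherwise keep the current one."""
    best, best_fit = list(initial_edits), fitness(initial_edits)
    for _ in range(steps):
        candidate = mutate(list(best))
        cand_fit = fitness(candidate)
        if cand_fit < best_fit:
            best, best_fit = candidate, cand_fit
    return best, best_fit

def mutate(edits):  # toggle a hypothetical edit in or out of the sequence
    flag = random.choice(["-O2", "-funroll-loops", "cache_size=64"])
    if flag in edits:
        edits.remove(flag)
    else:
        edits.append(flag)
    return edits

def fitness(edits):  # pretend each distinct edit shaves off some runtime
    return 10.0 - 1.5 * len(set(edits)) + random.random() * 0.1

print(local_search([], mutate, fitness))
```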