
    IPC: A Benchmark Data Set for Learning with Graph-Structured Data

    Benchmark data sets are an indispensable ingredient of the evaluation of graph-based machine learning methods. We release a new data set, compiled from International Planning Competitions (IPC), for benchmarking graph classification, regression, and related tasks. Apart from the graph construction (based on AI planning problems), which is interesting in its own right, the data set possesses distinctly different characteristics from popularly used benchmarks. The data set, named IPC, consists of two self-contained versions, grounded and lifted, both including graphs whose sizes are large and skewedly distributed, posing substantial challenges for the computation of graph models such as graph kernels and graph neural networks. The graphs in this data set are directed and the lifted version is acyclic, offering the opportunity to benchmark specialized models for directed (acyclic) structures. Moreover, the graph generator and the labeling are computer programmed; thus, the data set may be extended easily if a larger scale is desired. The data set is accessible from https://github.com/IBM/IPC-graph-data. Comment: ICML 2019 Workshop on Learning and Reasoning with Graph-Structured Data.
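
    The directed acyclic structure of the lifted graphs is what specialized models can exploit. Below is a minimal, hypothetical sketch in plain Python (not the released data format or any model from the paper) of a single message-passing sweep over a DAG in topological order, the kind of computation such a benchmark would exercise.

        from collections import defaultdict, deque

        def topological_order(nodes, edges):
            # Kahn's algorithm; edges is a list of (src, dst) pairs.
            indegree = {n: 0 for n in nodes}
            successors = defaultdict(list)
            for src, dst in edges:
                successors[src].append(dst)
                indegree[dst] += 1
            queue = deque(n for n in nodes if indegree[n] == 0)
            order = []
            while queue:
                n = queue.popleft()
                order.append(n)
                for m in successors[n]:
                    indegree[m] -= 1
                    if indegree[m] == 0:
                        queue.append(m)
            return order, successors

        def dag_sweep(nodes, edges, features):
            # Propagate scalar node features along edges in topological order;
            # the sum aggregation stands in for a learned update.
            order, successors = topological_order(nodes, edges)
            value = dict(features)
            for n in order:
                for m in successors[n]:
                    value[m] += value[n]
            return value

        # Tiny example DAG with unit features.
        print(dag_sweep(["a", "b", "c", "d"],
                        [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")],
                        {n: 1.0 for n in ["a", "b", "c", "d"]}))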

    Vision-based deep execution monitoring

    Execution monitoring of high-level robot actions can be effectively improved by visually monitoring the state of the world in terms of the preconditions and postconditions that hold before and after the execution of an action. Furthermore, a policy for deciding where to look, either to verify the relations that specify the pre- and postconditions or to refocus in case of a failure, can tremendously improve robot execution in an uncharted environment. Thanks to the remarkable results of deep learning, it is now possible to rely strongly on visual perception and assume that the environment is observable. In this work we present visual execution monitoring for a robot executing tasks in an uncharted lab environment. The execution monitor interacts with the environment via a visual stream that uses two DCNNs to recognize the objects the robot has to deal with and manipulate, and non-parametric Bayes estimation to discover relations from the DCNN features. To recover from lack of focus and failures due to missed objects, we resort to visual search policies learned via deep reinforcement learning.
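
    As a rough illustration of the precondition/postcondition monitoring loop described above, here is a minimal sketch in which perceive_relations and visual_search are hypothetical placeholders for the DCNN-based perception and the learned search policy; it is not the paper's implementation.

        def perceive_relations(scene):
            # Placeholder: would return the set of relations detected in the camera stream.
            return set(scene)

        def visual_search(missing):
            # Placeholder: would drive the learned policy that refocuses the camera.
            print(f"searching for: {missing}")

        def monitor(plan, scene):
            for action in plan:
                missing_pre = action["pre"] - perceive_relations(scene)
                if missing_pre:
                    visual_search(missing_pre)          # refocus before executing
                    continue
                scene = action["execute"](scene)        # apply the action to the world state
                missing_post = action["post"] - perceive_relations(scene)
                if missing_post:
                    visual_search(missing_post)         # verify postconditions, recover on failure
            return scene

        pick_cup = {"pre": {"on(cup,table)"},
                    "post": {"holding(cup)"},
                    "execute": lambda s: (s - {"on(cup,table)"}) | {"holding(cup)"}}
        print(monitor([pick_cup], {"on(cup,table)"}))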

    Exploring lifted planning encodings in Essence Prime

    This work is supported by UK EPSRC EP/P015638/1 and EP/V027182/1, by the MICINN/FEDER, UE (RTI2018-095609-B-I00), by the French Agence Nationale de la Recherche, reference ANR-19-CHIA-0013-01, and by the Archimedes Institute, Aix-Marseille University.
    State-space planning is the de facto search method of the automated planning community. Planning problems are typically expressed in the Planning Domain Definition Language (PDDL), where action and variable templates describe the sets of actions and variables that occur in the problem. Typically, a planner begins by generating the full set of instantiations of these templates, which in turn are used to derive useful heuristics that guide the search. Owing to this success, there has been limited research in other directions. We explore a different approach that keeps the compact representation by directly reformulating the problem from PDDL into ESSENCE PRIME, a Constraint Programming language with support for distinct solving technologies including SAT and SMT. In particular, we explore two different encodings from PDDL to ESSENCE PRIME, how they represent action parameters, and their performance. The encodings are able to maintain the compactness of the PDDL representation, and while they differ only slightly, they perform quite differently on various instances from the International Planning Competition.
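
    To make the grounded-versus-lifted distinction concrete, the following sketch (hypothetical names, plain Python rather than PDDL or ESSENCE PRIME) grounds a single action template over typed objects; this instantiation step is the source of the combinatorial growth that lifted encodings avoid.

        from itertools import product

        def ground_action(name, params, objects):
            # params: list of (variable, type) pairs; objects: type -> list of object names.
            domains = [objects[typ] for _, typ in params]
            return [f"{name}({', '.join(binding)})" for binding in product(*domains)]

        objects = {"block": ["b1", "b2", "b3"], "gripper": ["left", "right"]}
        grounded = ground_action("pick", [("?b", "block"), ("?g", "gripper")], objects)
        print(len(grounded), "ground actions")   # 3 blocks x 2 grippers = 6
        print(grounded)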

    Reinforcement Learning for Planning Heuristics

    Informed heuristics are essential for the success of heuristic search algorithms, but it is difficult to develop a new heuristic that is informed on various tasks. Instead, we propose a framework that trains a neural network as a heuristic for the tasks it is supposed to solve. We present two reinforcement learning approaches to learn heuristics for fixed state spaces and fixed goals: our first approach uses approximate value iteration, while our second uses searches to generate training data. We show that in some domains our approaches outperform previous work, and we point out potential future improvements.
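
    A minimal sketch of the approximate value iteration idea, assuming a toy chain task and a linear regressor in place of the neural network; it illustrates fitting h(s) to the targets min over actions of (cost + h(s')), not the paper's actual setup.

        import numpy as np

        N = 10                                         # states 0..N-1, goal is N-1

        def successors(s):
            return [max(s - 1, 0), min(s + 1, N - 1)]  # unit-cost moves left/right

        def features(s):
            return np.array([1.0, s, s * s])           # tiny hand-picked feature map

        w = np.zeros(3)                                # heuristic parameters

        def h(s):
            return 0.0 if s == N - 1 else max(float(features(s) @ w), 0.0)

        for _ in range(50):                            # approximate value iteration
            X = np.array([features(s) for s in range(N - 1)])
            y = np.array([min(1.0 + h(t) for t in successors(s)) for s in range(N - 1)])
            w, *_ = np.linalg.lstsq(X, y, rcond=None)  # refit the heuristic to the targets

        print([round(h(s), 2) for s in range(N)])      # approximately the goal distances 9..0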

    Learning Generalized Reactive Policies using Deep Neural Networks

    We present a new approach to learning for planning, where knowledge acquired while solving a given set of planning problems is used to plan faster in related, but new, problem instances. We show that a deep neural network can be used to learn and represent a generalized reactive policy (GRP) that maps a problem instance and a state to an action, and that the learned GRPs efficiently solve large classes of challenging problem instances. In contrast to prior efforts in this direction, our approach significantly reduces the dependence of learning on handcrafted domain knowledge or feature selection. Instead, the GRP is trained from scratch using a set of successful execution traces. We show that our approach can also be used to automatically learn a heuristic function that can be used in directed search algorithms. We evaluate our approach using an extensive suite of experiments on two challenging planning problem domains and show that our approach facilitates learning complex decision-making policies and powerful heuristic functions with minimal human input. Videos of our results are available at goo.gl/Hpy4e3.
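
    As a toy illustration of learning a reactive policy from execution traces, the sketch below flattens traces into (features, action) pairs and imitates them with a nearest-neighbour lookup; the deep network of the approach above would take the place of this trivial policy, and all names here are hypothetical.

        import numpy as np

        def traces_to_dataset(traces):
            # Each trace is a list of (feature_vector, action) pairs from a solved instance.
            X = np.array([x for trace in traces for x, _ in trace], dtype=float)
            y = [a for trace in traces for _, a in trace]
            return X, y

        def reactive_policy(X, y):
            def act(state_features):
                # Pick the action taken in the most similar recorded situation.
                dists = np.linalg.norm(X - np.asarray(state_features, dtype=float), axis=1)
                return y[int(np.argmin(dists))]
            return act

        traces = [[([0.0, 1.0], "move-right"), ([1.0, 1.0], "pick")],
                  [([0.0, 2.0], "move-right")]]
        policy = reactive_policy(*traces_to_dataset(traces))
        print(policy([0.1, 1.1]))   # -> "move-right"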

    Answer Set Solving with Generalized Learned Constraints

    Conflict learning plays a key role in modern Boolean constraint solving. First advanced in satisfiability testing, it has meanwhile become a base technology in many neighboring fields, among them answer set programming (ASP). However, learned constraints are only valid for the problem instance currently being solved and do not carry over to similar instances. We address this issue in ASP and introduce a framework featuring an integrated feedback loop that allows for reusing conflict constraints. The idea is to extract (propositional) conflict constraints, generalize and validate them, and reuse them as integrity constraints. Although we explore our approach in the context of dynamic applications based on transition systems, it is driven by the ultimate objective of overcoming the issue that learned knowledge is bound to specific problem instances. We implemented this workflow in two systems: a variant of the ASP solver clasp that extracts integrity constraints, along with a downstream system for generalizing and validating them.
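
    A rough sketch of the extract-generalize-validate-reuse loop, with a deliberately simplified constraint syntax and a stand-in validation check rather than clasp's actual machinery; all names are hypothetical.

        import re

        def generalize(ground_constraint):
            # Replace every standalone integer constant (e.g. a time step) with a variable T.
            return re.sub(r"\b\d+\b", "T", ground_constraint)

        def validate(candidate, validation_instances, entails):
            # Keep the candidate only if every validation instance entails it.
            return all(entails(instance, candidate) for instance in validation_instances)

        def feedback_loop(conflicts, validation_instances, entails):
            reusable = []
            for c in conflicts:
                lifted = generalize(c)
                if validate(lifted, validation_instances, entails):
                    reusable.append(lifted)      # later reused as an integrity constraint
            return reusable

        conflicts = [":- holds(at(r1,l2),3), holds(at(r1,l3),3)."]
        entails = lambda instance, c: True       # stand-in for the validating solver call
        print(feedback_loop(conflicts, ["inst1", "inst2"], entails))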

    Exploring instance generation for automated planning

    Funding: This work is supported by EPSRC grant EP/P015638/1. Nguyen Dang is a Leverhulme Early Career Fellow.
    Many of the core disciplines of artificial intelligence have sets of standard benchmark problems that are well known and widely used by the community when developing new algorithms. Constraint programming and automated planning are examples of these areas, where the behaviour of a new algorithm is measured by how it performs on these instances. Typically, the efficiency of each solving method varies not only between problems, but also between instances of the same problem. Therefore, having a diverse set of instances is crucial to be able to effectively evaluate a new solving method. Current methods for the automatic generation of instances for Constraint Programming problems start with a declarative model and search for instances with some desired attributes, such as hardness or size. We first explore the difficulties of adapting this approach to generate instances starting from problem specifications written in PDDL, the de facto standard language of the automated planning community. We then propose a new approach in which the whole planning problem description is modelled using Essence, an abstract modelling language that allows expressing high-level structures without committing to a particular low-level representation in PDDL.
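
    The generate-and-test idea can be sketched as below, where an "instance" is just a parameter dictionary and measure_hardness stands in for running a planner and timing it; the actual system works on Essence specifications, so everything here is a hypothetical simplification.

        import random

        def random_instance(rng):
            return {"blocks": rng.randint(3, 30), "goal_stacks": rng.randint(1, 5)}

        def measure_hardness(instance):
            # Stand-in for solving the instance with a planner and recording the effort.
            return instance["blocks"] * instance["goal_stacks"]

        def generate_instances(n_wanted, hardness_range, seed=0):
            rng = random.Random(seed)
            lo, hi = hardness_range
            kept = []
            while len(kept) < n_wanted:
                inst = random_instance(rng)
                if lo <= measure_hardness(inst) <= hi:   # keep only instances with the desired attribute
                    kept.append(inst)
            return kept

        print(generate_instances(3, hardness_range=(20, 60)))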

    Visual search and recognition for robot task execution and monitoring

    Visual search for relevant targets in the environment is a crucial robot skill. We propose a preliminary framework for the execution monitoring of a robot task, which manages the robot's disposition to visually search the environment for the targets involved in the task. Visual search is also relevant for recovering from failures. The framework exploits deep reinforcement learning to acquire a "common sense" scene structure, and it takes advantage of a deep convolutional network to detect objects and the relevant relations holding between them. The framework builds on these methods to introduce a vision-based execution monitor, which uses classical planning as a backbone for task execution. Experiments show that with the proposed vision-based execution monitor the robot can complete simple tasks and recover from failures autonomously.
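
    A minimal sketch of the monitor-with-visual-search loop, assuming placeholder components for the detector and the learned search policy (none of these are the paper's implementations); the plan is given by a classical planner not shown here.

        def search_for(obj, scene):
            # Placeholder for the learned visual search policy.
            print(f"visual search for {obj}")
            return scene | {obj}                      # assume the object is eventually found

        def run_task(plan_steps, scene):
            # Each plan step names the object it manipulates; the monitor checks that
            # the object is currently detected and falls back to visual search otherwise.
            for action, obj in plan_steps:
                if obj not in scene:
                    scene = search_for(obj, scene)    # recover from a missed detection
                print(f"executing {action}({obj})")
            return scene

        run_task([("pick", "cup"), ("place", "shelf")], scene={"shelf"})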