
    Evolving macro-actions for planning

    Domain re-engineering through macro-actions (macros) is one avenue for research into learning for planning. Most existing work learns macros that are reusable plan fragments, observed either from planner behaviour online or from plan characteristics offline; other methods learn macros from domain analysis. Most of these methods, however, explore restricted macro spaces and exploit specific features of planners or domains. The learning examples used to capture previous experience may not cover many aspects of the system and do not always reflect the better choices made during search, and planner- or domain-specific properties are unlikely to carry over to other planners or domains. This paper presents an offline evolutionary method that learns macros for arbitrary planners and domains. Our method explores a wider macro space and can learn macros that are not observable from the examples. It also represents a generalised macro-learning framework, as it neither discovers nor exploits any specific structural properties of planners or domains.
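    A minimal sketch of the kind of offline evolutionary loop this abstract describes, assuming a toy setting in which a macro is a short sequence of action names and fitness would come from re-running a planner on training problems; all names (ACTIONS, fitness, mutate, evolve) are illustrative placeholders, not the authors' implementation.

```python
import random

# Hypothetical action vocabulary; a real fitness function would add the macro
# to the domain and re-run a planner on training problems.
ACTIONS = ["pick", "move", "place", "stack", "unstack"]

def random_macro(max_len=4):
    """A macro here is just a short sequence of actions."""
    return tuple(random.choices(ACTIONS, k=random.randint(2, max_len)))

def fitness(macro):
    # Placeholder score standing in for measured planner speed-up.
    return -abs(len(macro) - 3) + random.random()

def mutate(macro):
    """Replace one action in the macro with a random alternative."""
    i = random.randrange(len(macro))
    return macro[:i] + (random.choice(ACTIONS),) + macro[i + 1:]

def evolve(pop_size=20, generations=30, elite=5):
    population = [random_macro() for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[:elite]
        children = [mutate(random.choice(parents)) for _ in range(pop_size - elite)]
        population = parents + children
    return max(population, key=fitness)

print(evolve())
```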

    Knowledge and regularity in planning

    The field of planning has focused on several methods of using domain-specific knowledge. The three most common methods, the use of search control, macro-operators, and analogy, form a continuum of techniques differing in the amount of plan information reused. This paper describes TALUS, a planner that exploits this continuum and is used to compare the relative utility of these methods. We present results showing how search control, macro-operators, and analogy are affected by domain regularity and the amount of stored knowledge.

    DynoPlan: Combining Motion Planning and Deep Neural Network based Controllers for Safe HRL

    Many realistic robotics tasks are best solved compositionally, through control architectures that sequentially invoke primitives and achieve error correction through loops and conditionals that take the system back to alternative earlier states. Recent end-to-end approaches to task learning attempt to learn a single controller that solves an entire task directly, but this has proven difficult for complex control tasks that would otherwise require a diversity of local primitive moves, and the resulting solutions are not easy to inspect for plan-monitoring purposes. In this work, we aim to bridge the gap between hand-designed and learned controllers by representing each as an option in a hybrid hierarchical Reinforcement Learning framework, DynoPlan. We extend the options framework by adding a dynamics model and a nearness-to-goal heuristic derived from demonstrations. This turns the optimization of a hierarchical policy controller into a problem of planning with a model predictive controller. By unrolling the dynamics of each option and assessing the expected value of each future state, we can create a simple switching controller that chooses the optimal policy within a constrained time horizon, similar to hill-climbing heuristic search. The individual dynamics models allow each option to iterate and be activated independently of the specific underlying instantiation, allowing a mix of motion-planning and deep-neural-network-based primitives. We can assess the safety regions of the resulting hybrid controller by investigating the initiation sets of the different options, and by reasoning about the completeness and performance guarantees of the underpinning motion planners.
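    A hedged sketch of the switching idea described above, assuming each option exposes an initiation set, a learned one-step dynamics model, and a nearness-to-goal heuristic; the controller unrolls every applicable option and activates the one whose predicted end state scores best. The Option class and function names are illustrative, not the DynoPlan API.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

State = Sequence[float]

@dataclass
class Option:
    """An option with its own initiation set and (learned) dynamics model."""
    name: str
    can_start: Callable[[State], bool]       # initiation set
    predict_next: Callable[[State], State]   # one-step dynamics model

def unroll(option: Option, state: State, horizon: int) -> State:
    """Roll the option's dynamics model forward for a fixed horizon."""
    for _ in range(horizon):
        state = option.predict_next(state)
    return state

def choose_option(options: List[Option], state: State,
                  nearness_to_goal: Callable[[State], float],
                  horizon: int = 10) -> Option:
    """Pick the applicable option whose unrolled end state looks closest to the goal."""
    applicable = [o for o in options if o.can_start(state)]
    return max(applicable,
               key=lambda o: nearness_to_goal(unroll(o, state, horizon)))

# Toy 1-D example: two options drift the state left or right.
opts = [
    Option("left",  lambda s: True, lambda s: [s[0] - 0.1]),
    Option("right", lambda s: True, lambda s: [s[0] + 0.1]),
]
goal = 1.0
best = choose_option(opts, [0.0], nearness_to_goal=lambda s: -abs(s[0] - goal))
print(best.name)  # -> "right"
```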

    Offline Skill Graph (OSG): A Framework for Learning and Planning using Offline Reinforcement Learning Skills

    Reinforcement Learning has received wide interest due to its success in competitive games, yet its adoption in everyday applications (e.g. industrial, home, healthcare) remains limited. In this paper, we address this limitation by presenting a framework for planning over offline skills and solving complex tasks in real-world environments. Our framework comprises three modules that together enable the agent to learn from previously collected data and generalize over it to solve long-horizon tasks. We demonstrate our approach by testing it on a robotic arm that is required to solve complex tasks.
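    A minimal sketch of planning over a graph of offline-learned skills, assuming skills are edges between abstract conditions and planning reduces to a shortest-path search; the graph contents, skill names, and costs are made up for illustration and are not from the paper.

```python
import heapq
from typing import Dict, List, Tuple

# Illustrative skill graph: nodes are abstract conditions the arm can reach,
# edges are skills learned offline, weighted by an estimated execution cost.
skill_graph: Dict[str, List[Tuple[str, str, float]]] = {
    "start":    [("reach_shelf", "at_shelf", 1.0)],
    "at_shelf": [("grasp_object", "holding", 2.0)],
    "holding":  [("move_to_bin", "at_bin", 1.5)],
    "at_bin":   [("release", "done", 0.5)],
}

def plan_skills(start: str, goal: str) -> List[str]:
    """Dijkstra over the skill graph, returning the skill sequence to execute."""
    frontier = [(0.0, start, [])]
    visited = set()
    while frontier:
        cost, node, skills = heapq.heappop(frontier)
        if node == goal:
            return skills
        if node in visited:
            continue
        visited.add(node)
        for skill, succ, w in skill_graph.get(node, []):
            heapq.heappush(frontier, (cost + w, succ, skills + [skill]))
    return []

print(plan_skills("start", "done"))
# -> ['reach_shelf', 'grasp_object', 'move_to_bin', 'release']
```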

    Learning Useful Macro-actions for Planning with N-Grams

    Automated planning has achieved significant breakthroughs in recent years. Nonetheless, attempts to improve search algorithm efficiency remain the primary focus of most research. However, it is also possible to build on previous searches and learn from previously found solutions. Our approach consists of learning macro-actions and adding them to the planner's domain. A macro-action is an action sequence selected for application at search time and applied as a single indivisible action. Carefully chosen macros can drastically improve planning performance by reducing the search space depth; however, macros also increase the branching factor. Their use therefore entails a utility problem: a trade-off has to be struck between the benefit of adding macros to speed up the goal search and the overhead of increasing the branching factor in the search space. In this paper, we propose an online, domain- and planner-independent approach to learn 'useful' macros, i.e. macros that address the utility problem. These useful macros are obtained by statistical and heuristic filtering of a domain-specific macro library. The library is created from the most frequent action sequences derived from an n-gram analysis of successful plans previously computed by the planner. The relevance of this approach is demonstrated by experiments on International Planning Competition domains.
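    A small sketch of the statistical part of this idea, assuming plans are lists of action names and that frequent action n-grams become macro candidates before any heuristic filtering; the action names, n-gram length, and count threshold are illustrative, not the paper's settings.

```python
from collections import Counter
from typing import List, Tuple

def ngram_macro_candidates(plans: List[List[str]], n: int = 2,
                           min_count: int = 2) -> List[Tuple[str, ...]]:
    """Count action n-grams over successful plans and keep the frequent ones
    as macro candidates (the statistical filtering step)."""
    counts = Counter()
    for plan in plans:
        for i in range(len(plan) - n + 1):
            counts[tuple(plan[i:i + n])] += 1
    return [seq for seq, c in counts.most_common() if c >= min_count]

# Toy plans from a logistics-like domain (made-up action names).
plans = [
    ["load", "drive", "unload", "drive"],
    ["load", "drive", "unload", "load", "drive", "unload"],
    ["drive", "load", "drive", "unload"],
]
print(ngram_macro_candidates(plans, n=2))
# e.g. [('load', 'drive'), ('drive', 'unload'), ...]
```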

    Accelerating Reinforcement Learning through the Discovery of Useful Subgoals

    An ability to adjust to changing environments and unforeseen circumstances is likely to be an important component of a successful autonomous space robot. This paper shows how to augment reinforcement learning algorithms with a method for automatically discovering certain types of subgoals online. By creating useful new subgoals while learning, the agent is able to accelerate learning on the current task and to transfer its expertise to related tasks through the reuse of its ability to attain subgoals. Subgoals are created based on commonalities across multiple paths to a solution. We cast the task of finding these commonalities as a multiple-instance learning problem and use the concept of diverse density to find solutions. We introduced this approach in [10]; here we present additional results for a simulated mobile robot task.
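    A hedged sketch of the multiple-instance view described above, assuming successful trajectories form positive bags of states, failed ones form negative bags, and candidate subgoals are scored with a simplified diverse-density-like measure; the similarity kernel and the toy data are assumptions for illustration, not the paper's formulation.

```python
import math
from typing import List, Sequence, Tuple

State = Tuple[float, float]

def similarity(a: State, b: State, scale: float = 1.0) -> float:
    """Gaussian-style similarity between two states."""
    return math.exp(-scale * sum((x - y) ** 2 for x, y in zip(a, b)))

def diverse_density(candidate: State,
                    positive_bags: List[List[State]],
                    negative_bags: List[List[State]]) -> float:
    """Simplified score: the candidate should resemble some state in every
    successful trajectory and no state in the failed ones."""
    score = 1.0
    for bag in positive_bags:
        score *= max(similarity(s, candidate) for s in bag)
    for bag in negative_bags:
        score *= min(1.0 - similarity(s, candidate) for s in bag)
    return score

def best_subgoal(candidates: Sequence[State],
                 positive_bags: List[List[State]],
                 negative_bags: List[List[State]]) -> State:
    return max(candidates,
               key=lambda c: diverse_density(c, positive_bags, negative_bags))

# Toy example: both successful paths pass near (1, 1); the failed one does not.
pos = [[(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)],
       [(0.5, 0.5), (1.1, 0.9), (2.0, 1.0)]]
neg = [[(3.0, 0.0), (4.0, 0.0)]]
print(best_subgoal([(1.0, 1.0), (3.5, 0.0)], pos, neg))  # -> (1.0, 1.0)
```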