
    Evolving macro-actions for planning

    Domain re-engineering through macro-actions (i.e. macros) provides one potential avenue for research into learning for planning. However, most existing work learns macros that are reusable plan fragments and are therefore observable from planner behaviours online or from plan characteristics offline; other methods learn macros from domain analysis. Most of these methods explore restricted macro spaces and exploit specific features of planners or domains. The learning examples, especially those used to acquire previous experience, might not cover many aspects of the system and might not always reflect that better choices were made during the search. Moreover, planner- or domain-specific properties are unlikely to be common to many planners or domains. This paper presents an offline evolutionary method that learns macros for arbitrary planners and domains. Our method explores a wider macro space and learns macros that are not observable from the examples. It also represents a generalised macro learning framework, as it does not discover or utilise any specific structural properties of planners or domains.
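    To make the general idea concrete, the following is a minimal sketch of an offline evolutionary loop over macro-actions, assuming a hypothetical evaluate_with_planner fitness function; the macro encoding, operators, and action vocabulary here are illustrative stand-ins, not the paper's actual representation.

```python
import random

# Illustrative action vocabulary; a real planning domain would supply its own operators.
ACTIONS = ["pick-up", "put-down", "stack", "unstack", "move"]

def random_macro(max_len=4):
    """A candidate macro is encoded here as a short sequence of domain actions."""
    return [random.choice(ACTIONS) for _ in range(random.randint(2, max_len))]

def evaluate_with_planner(macro):
    """Hypothetical fitness: compile `macro` into the domain, run a planner on
    training problems, and score the resulting search effort. Stubbed out here
    with a placeholder score rather than a real planner call."""
    return -len(macro) + random.random()

def mutate(macro):
    m = macro[:]
    m[random.randrange(len(m))] = random.choice(ACTIONS)
    return m

def crossover(a, b):
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

def evolve(pop_size=20, generations=30):
    population = [random_macro() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate_with_planner, reverse=True)
        parents = scored[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=evaluate_with_planner)

if __name__ == "__main__":
    print("best macro:", evolve())
```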

    Integrating Evolutionary Computation with Neural Networks

    There is tremendous interest in the development of evolutionary computation techniques, as they are well suited to optimizing functions containing a large number of variables. This paper presents a brief review of evolutionary computing techniques. It also briefly discusses the hybridization of evolutionary computation with neural networks and presents a solution to a classical problem using neural computing and evolutionary computing techniques.
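    As an illustration of the kind of hybridization the paper surveys, the sketch below evolves the weights of a small feed-forward network with a simple evolution strategy; XOR is used purely as a stand-in "classical problem", since the abstract does not specify which problem is solved.

```python
import numpy as np

# XOR inputs and targets, used only as an illustrative benchmark.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0, 1, 1, 0], dtype=float)

N_WEIGHTS = 2 * 2 + 2 + 2 * 1 + 1  # 2-2-1 network: weights plus biases

def forward(w, x):
    """Evaluate a 2-2-1 sigmoid network whose parameters are a flat vector."""
    W1 = w[0:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8].reshape(2, 1); b2 = w[8]
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    return (1.0 / (1.0 + np.exp(-(h @ W2 + b2)))).ravel()

def fitness(w):
    return -np.mean((forward(w, X) - Y) ** 2)  # negative MSE, higher is better

def evolve(pop_size=50, generations=300, sigma=0.5):
    rng = np.random.default_rng(0)
    population = rng.normal(0, 1, size=(pop_size, N_WEIGHTS))
    for _ in range(generations):
        scores = np.array([fitness(w) for w in population])
        elite = population[np.argsort(scores)[-pop_size // 5:]]  # keep top 20%
        children = elite[rng.integers(0, len(elite), pop_size - len(elite))]
        population = np.vstack([elite, children + rng.normal(0, sigma, children.shape)])
    return max(population, key=fitness)

best = evolve()
print(np.round(forward(best, X), 2))  # ideally close to [0, 1, 1, 0]
```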

    Planning as Optimization: Dynamically Discovering Optimal Configurations for Runtime Situations

    The large number of possible configurations of modern software-based systems, combined with the large number of possible environmental situations of such systems, prohibits enumerating all adaptation options at design time and necessitates planning at runtime to dynamically identify an appropriate configuration for a situation. While numerous planning techniques exist, they typically assume a detailed state-based model of the system and that the situations that warrant adaptations are known. Both of these assumptions can be violated in complex, real-world systems. As a result, adaptation planning must rely on simple models that capture what can be changed in the system (input parameters) and what can be observed in the system and environment (output and context parameters). We therefore propose planning as optimization: the use of optimization strategies to discover optimal system configurations at runtime for each distinct situation, which is itself identified dynamically at runtime. We apply our approach to CrowdNav, an open-source traffic routing system with the characteristics of a real-world system. We identify situations via clustering and conduct an empirical study that compares Bayesian optimization and two types of evolutionary optimization (NSGA-II and novelty search) in CrowdNav.
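    A minimal sketch of the overall workflow described above, under strong simplifying assumptions: the context data and the evaluate objective are synthetic stand-ins (the real system would measure CrowdNav's output parameters), clustering uses k-means, and a plain evolutionary hill-climb replaces NSGA-II, novelty search, and Bayesian optimization.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Stand-in context observations (e.g. car count, traffic load); in the paper
# these would come from the system's context parameters observed at runtime.
contexts = rng.normal(size=(200, 2))

# Step 1: identify situations by clustering the observed context parameters.
situations = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(contexts)

def evaluate(config, situation):
    """Hypothetical objective: in CrowdNav this would be a measured quantity such
    as trip overhead for a routing configuration; here it is a synthetic function."""
    target = np.array([0.2 * situation, 0.5 + 0.1 * situation])
    return -np.sum((config - target) ** 2)

def evolutionary_search(situation, dim=2, pop_size=30, generations=50, sigma=0.1):
    """A simple elitist search standing in for NSGA-II / novelty search."""
    population = rng.uniform(0, 1, size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([evaluate(c, situation) for c in population])
        elite = population[np.argsort(scores)[-pop_size // 3:]]
        children = elite[rng.integers(0, len(elite), pop_size - len(elite))]
        population = np.vstack([elite, children + rng.normal(0, sigma, children.shape)])
    return max(population, key=lambda c: evaluate(c, situation))

# Step 2: search for an (approximately) optimal configuration per situation.
for s in np.unique(situations):
    print(f"situation {s}: config {np.round(evolutionary_search(s), 3)}")
```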