
    On the Benefit of SubOptimality within the Divide-and-Evolve Scheme

    Abstract. Divide-and-Evolve (DaE) is an original “memeticization” of Evolutionary Computation and Artificial Intelligence Planning. DaE optimizes either the number of actions, the total cost of actions, or the total makespan, by generating ordered sequences of intermediate goals via artificial evolution. The evolutionary part of DaE is based on the Evolving Objects (EO) library and can theoretically use any embedded planner. However, since the introduction of this approach, only one embedded planner has been used: the temporal optimal planner CPT. In this paper, we build a new version of DaE based on a time-based Atom Choice and embed another planner (the sub-optimal planner YAHSP) in order to test the technical robustness of the approach and to compare the impact of using an optimal versus a sub-optimal planner across all kinds of planning problems.
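The core DaE idea described above can be illustrated with a toy sketch: an individual is an ordered sequence of intermediate goals, an embedded planner solves each consecutive subproblem, and the individual's fitness is the summed cost of the subplans. Everything below is a hypothetical illustration, not the actual DaE/EO implementation; the "planner" is a stand-in for a real call to CPT or YAHSP, and states are reduced to integers.

```python
import random

def embedded_planner_cost(start, goal):
    # Stand-in for an embedded planner call (e.g. CPT or YAHSP):
    # here the "plan cost" is a squared distance between integer states,
    # so evenly spaced intermediate goals minimize the total cost.
    return (goal - start) ** 2

def fitness(individual, start=0, final_goal=100):
    # Total cost = sum of subplan costs between consecutive goals.
    states = [start] + individual + [final_goal]
    return sum(embedded_planner_cost(a, b) for a, b in zip(states, states[1:]))

def mutate(individual):
    # Perturb each intermediate goal, keeping the sequence ordered.
    return sorted(g + random.randint(-5, 5) for g in individual)

def evolve(pop_size=20, generations=50, n_goals=3, seed=1):
    random.seed(seed)
    pop = [sorted(random.sample(range(1, 100), n_goals)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]      # truncation selection
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=fitness)

best = evolve()
```

Swapping the cost function for a call to a real sub-optimal planner (as with YAHSP in the paper) changes only `embedded_planner_cost`; the evolutionary loop over goal sequences stays the same.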

    Learn-and-Optimize: a Parameter Tuning Framework for Evolutionary AI Planning

    Abstract. Learn-and-Optimize (LaO) is a generic surrogate-based method for parameter tuning that combines learning and optimization. In this paper, LaO is used to tune Divide-and-Evolve (DaE), an Evolutionary Algorithm for AI Planning. The LaO framework makes it possible to learn the relation between features describing a given instance and the optimal parameters for that instance, and thus enables extrapolating this relation to unknown instances in the same domain. Moreover, the learned knowledge is used as a surrogate model to accelerate the search for the optimal parameters. The proposed implementation of LaO uses an Artificial Neural Network to learn the mapping between features and optimal parameters, and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) for optimization. Results demonstrate that LaO is capable of improving the quality of the DaE results even with only a few iterations. The main limitation of the DaE case study is the small number of meaningful features available to describe the instances. However, the learned model reaches almost the same performance on the test instances, which means that it is capable of generalization.
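The LaO loop described in the abstract can be sketched in miniature: tune parameters on known instances, learn the feature-to-parameter mapping, then use the prediction to warm-start the optimizer on a new instance. This is a deliberately simplified, hypothetical sketch: the paper uses an Artificial Neural Network and CMA-ES, which are swapped here for a 1-nearest-neighbour lookup and a simple (1+1)-ES so the example is self-contained; the objective is a toy stand-in, not DaE.

```python
import random

def performance(param, feature):
    # Toy stand-in objective: the best parameter depends on the feature
    # (optimum at param = 2 * feature), mimicking instance-dependent tuning.
    return -(param - 2.0 * feature) ** 2

def one_plus_one_es(feature, start, steps=200, sigma=0.5, seed=0):
    # Minimal (1+1)-ES in place of CMA-ES: keep a candidate only if it
    # performs at least as well as the current best.
    rng = random.Random(seed)
    best = start
    for _ in range(steps):
        cand = best + rng.gauss(0, sigma)
        if performance(cand, feature) >= performance(best, feature):
            best = cand
    return best

def predict_params(archive, feature):
    # Surrogate "model" in place of the ANN: return the tuned parameters
    # of the closest known instance.
    return min(archive, key=lambda fp: abs(fp[0] - feature))[1]

# "Learning" phase: tune parameters on known instances, archive the pairs.
archive = [(f, one_plus_one_es(f, start=0.0, seed=f)) for f in (1, 3, 5)]

# On an unseen instance, the surrogate prediction warm-starts the optimizer,
# which is the acceleration effect the LaO framework aims for.
new_feature = 4
warm = predict_params(archive, new_feature)
tuned = one_plus_one_es(new_feature, start=warm, seed=42)
```

Because the optimizer starts from the surrogate's prediction rather than from scratch, far fewer evaluations are needed on the new instance, which mirrors the paper's claim that LaO improves results even with only a few iterations.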