
    Evolving Static Representations for Task Transfer

    An important goal for machine learning is to transfer knowledge between tasks. For example, learning to play RoboCup Keepaway should contribute to learning the full game of RoboCup soccer. Previous approaches to transfer in Keepaway have focused on transforming the original representation to fit the new task. In contrast, this paper explores the idea that transfer is most effective if the representation is designed to be the same even across different tasks. To demonstrate this point, a bird's eye view (BEV) representation is introduced that can represent different tasks on the same two-dimensional map. For example, both the 3 vs. 2 and 4 vs. 3 Keepaway tasks can be represented on the same BEV. Yet the problem is that a raw two-dimensional map is high-dimensional and unstructured. This paper shows how this problem is addressed naturally by an idea from evolutionary computation called indirect encoding, which compresses the representation by exploiting its geometry. The result is that the BEV learns a Keepaway policy that transfers without further learning or manipulation. It also facilitates transferring knowledge learned in a different domain, Knight Joust, into Keepaway. Finally, the indirect encoding of the BEV means that its geometry can be changed without altering the solution. Thus static representations facilitate several kinds of transfer.
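
The key property described above is that a BEV encodes state by position on a fixed-size map rather than per object, so tasks with different numbers of players share one representation. A minimal sketch of that idea follows; the grid resolution, channel layout, field size, and all coordinates are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def bev(keepers, takers, field=20.0, res=8):
    """Rasterise (x, y) player positions onto a res x res map.

    Channel 0 marks keepers, channel 1 marks takers, so the output
    shape is fixed regardless of how many players each list contains.
    """
    grid = np.zeros((2, res, res))
    for channel, players in enumerate((keepers, takers)):
        for x, y in players:
            i = min(int(x / field * res), res - 1)
            j = min(int(y / field * res), res - 1)
            grid[channel, i, j] = 1.0
    return grid

# The same representation handles both task variants without change:
three_v_two = bev([(2, 2), (10, 17), (18, 2)], [(9, 9), (11, 10)])
four_v_three = bev([(2, 2), (10, 17), (18, 2), (10, 5)],
                   [(9, 9), (11, 10), (5, 5)])
assert three_v_two.shape == four_v_three.shape == (2, 8, 8)
```

Because scaling from 3 vs. 2 to 4 vs. 3 only adds marks to an existing map instead of adding input dimensions, a policy reading the map needs no representational transformation.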

    Cyclic evolution: a new strategy for improving controllers obtained by layered evolution

    Complex control tasks may be solved by dividing them into a hierarchy of more specific, more easily handled subtasks. Several authors have demonstrated that the incremental layered-evolution paradigm yields controllers capable of solving such tasks. In this direction, different approaches combining incremental evolution with evolving neural networks have been developed to provide an adaptive mechanism that minimizes the prior knowledge needed to achieve good performance, giving rise to controllers made up of several networks. This paper presents a new mechanism, called Cyclic Evolution, for improving neural-network controllers obtained through layered evolution. It works by cyclically continuing the improvement of each of the networks that make up the controller over the whole problem domain. The proposed method has been used to solve the Keepaway game with successful results compared to other recently proposed solutions. Finally, some conclusions are included together with some future lines of work. (VI Workshop de Agentes y Sistemas Inteligentes (WASI); Red de Universidades con Carreras en Informática (RedUNCI); Facultad de Informática)
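
The improvement loop the abstract describes, revisiting each network of a layered controller in turn and re-evolving it in the context of the others, can be sketched as follows. This is an illustrative reading of the mechanism, not the paper's implementation; the toy controller and operators below are assumptions:

```python
def cyclic_evolution(controller, evaluate, evolve_one, cycles=3):
    """Cyclically re-optimise each network of a multi-network controller.

    controller: list of networks (one per subtask layer)
    evaluate:   fitness of the whole controller on the full task
    evolve_one: produces an improved candidate for network i in context
    """
    best = evaluate(controller)
    for _ in range(cycles):
        for i in range(len(controller)):
            candidate = list(controller)
            candidate[i] = evolve_one(candidate, i)
            score = evaluate(candidate)
            if score > best:          # keep the change only if it helps
                controller, best = candidate, score
    return controller, best

# Toy demonstration: "networks" are numbers, fitness is their sum,
# and "evolving" a network nudges it upward by a fixed step.
nets = [1.0, 2.0, 3.0]
ctrl, fit = cyclic_evolution(nets, sum, lambda c, i: c[i] + 0.5, cycles=2)
assert fit > sum(nets)   # cycling improved every component in turn
```

The point of cycling is that each network is improved against the current versions of the others, so gains in one layer can unlock further gains in the next pass.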

    Effective Task Transfer Through Indirect Encoding

    An important goal for machine learning is to transfer knowledge between tasks. For example, learning to play RoboCup Keepaway should contribute to learning the full game of RoboCup soccer. Often approaches to task transfer focus on transforming the original representation to fit the new task. Such representational transformations are necessary because the target task often requires new state information that was not included in the original representation. In RoboCup Keepaway, changing from the 3 vs. 2 variant of the task to 4 vs. 3 adds state information for each of the new players. In contrast, this dissertation explores the idea that transfer is most effective if the representation is designed to be the same even across different tasks. To this end, (1) the bird’s eye view (BEV) representation is introduced, which can represent different tasks on the same two-dimensional map. Because the BEV represents state information associated with positions instead of objects, it can be scaled to more objects without manipulation. In this way, both the 3 vs. 2 and 4 vs. 3 Keepaway tasks can be represented on the same BEV, which is (2) demonstrated in this dissertation. Yet a challenge for such a representation is that a raw two-dimensional map is high-dimensional and unstructured. This dissertation demonstrates how this problem is addressed naturally by the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) approach. HyperNEAT evolves an indirect encoding, which compresses the representation by exploiting its geometry. The dissertation then explores further exploiting the power of such encoding, beginning by (3) enhancing the configuration of the BEV with a focus on modularity. The need for further nonlinearity is then (4) investigated through the addition of hidden nodes. Furthermore, (5) the size of the BEV can be manipulated because it is indirectly encoded. Thus the resolution of the BEV, which is dictated by its size, is increased in precision and culminates in a HyperNEAT extension that is expressed at effectively infinite resolution. Additionally, scaling to higher resolutions through gradually increasing the size of the BEV is explored. Finally, (6) the ambitious problem of scaling from the Keepaway task to the Half-field Offense task is investigated with the BEV. Overall, this dissertation demonstrates that advanced representations in conjunction with indirect encoding can contribute to scaling learning techniques to more challenging tasks, such as the Half-field Offense RoboCup soccer domain.
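
Point (5) above, resizing an indirectly encoded substrate without changing the solution, follows from the fact that in HyperNEAT connection weights are generated by a function of geometric coordinates rather than stored individually. A hedged sketch of that idea follows; the `pattern` function is a hypothetical stand-in for an evolved CPPN, and the single-output substrate is an illustrative simplification:

```python
import math

def pattern(x1, y1, x2, y2):
    # Hypothetical stand-in for an evolved CPPN: maps the geometric
    # coordinates of a source and target node to a connection weight.
    return math.sin(3 * (x1 - x2)) * math.exp(-((y1 - y2) ** 2))

def substrate_weights(res):
    """Sample weights from a res x res input layer to one output node
    at the map centre, at whatever resolution is requested."""
    coords = [(i / (res - 1), j / (res - 1))
              for i in range(res) for j in range(res)]
    return [pattern(x, y, 0.5, 0.5) for x, y in coords]

# The same encoding yields a weight set at any chosen resolution:
assert len(substrate_weights(8)) == 64
assert len(substrate_weights(16)) == 256
```

Because the encoding is the function, not the weight list, increasing the BEV's resolution only means sampling the same pattern at more coordinates, which is what makes the "effectively infinite resolution" extension possible.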

    Neuroevolution in Games: State of the Art and Open Challenges

    This paper surveys research on applying neuroevolution (NE) to games. In neuroevolution, artificial neural networks are trained through evolutionary algorithms, taking inspiration from the way biological brains evolved. We analyse the application of NE in games along five different axes: the role NE is chosen to play in a game, the different types of neural networks used, the way these networks are evolved, how the fitness is determined, and what type of input the network receives. The article also highlights important open research challenges in the field.
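
As a concrete illustration of the definition above, a minimal neuroevolution loop might look like the following. The toy fitness function, population size, and mutation operator are illustrative choices, not drawn from the survey:

```python
import random

random.seed(0)

TARGET = [1.0, -1.0, 0.5]

def fitness(weights):
    # Toy task: negative squared error to a fixed target vector,
    # standing in for a network's performance in a game.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights, sigma=0.1):
    # Gaussian perturbation of every weight.
    return [w + random.gauss(0, sigma) for w in weights]

# Evolve a population of weight vectors with truncation selection.
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
initial_best = max(map(fitness, population))

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                      # keep the elite
    population = parents + [mutate(random.choice(parents))
                            for _ in range(15)]

best = max(population, key=fitness)
assert fitness(best) >= initial_best   # elitism guarantees no regression
```

This covers only the simplest case the survey discusses, evolving fixed-topology weights; methods such as NEAT additionally evolve the network structure itself.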
