15 research outputs found

    Learning basic navigation for personal satellite assistant using neuroevolution

    The Personal Satellite Assistant (PSA) is a small robot proposed by NASA to assist astronauts living and working aboard the space shuttle or space station. To help the astronaut, it must be able to move around safely. Navigation is made difficult by the arrangement of thrusters: only forward and leftward thrust is available, and rotation introduces translation. This paper shows how stable navigation can be achieved through neuroevolution in three basic navigation tasks: (1) stopping autorotation, (2) turning 90 degrees, and (3) moving forward to a target position. The results show that it is possible to learn to control the PSA stably and efficiently through neuroevolution.
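
    The neuroevolution approach the abstract describes can be sketched as a simple evolutionary loop over neural-network weights. The following is a minimal, hypothetical illustration (the toy fitness function, network sizes, and truncation-selection scheme are assumptions for the sketch, not the paper's actual setup):

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def make_net(n_in, n_hidden, n_out):
    # Random weights for a tiny one-hidden-layer network.
    rnd = lambda r, c: [[random.gauss(0, 1) for _ in range(c)] for _ in range(r)]
    return {"w1": rnd(n_hidden, n_in), "w2": rnd(n_out, n_hidden)}

def forward(net, x):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in net["w1"]]
    return [sum(w * hi for w, hi in zip(row, h)) for row in net["w2"]]

def mutate(net, sigma=0.1):
    # Gaussian perturbation of every weight.
    return {k: [[w + random.gauss(0, sigma) for w in row] for row in m]
            for k, m in net.items()}

def fitness(net):
    # Toy stand-in for a control rollout: reward driving the control
    # output toward zero from a few sample states (think "stop rotating").
    states = [[0.5, -0.2], [-1.0, 0.3], [0.1, 0.9]]
    return -sum(forward(net, s)[0] ** 2 for s in states)

def evolve(generations=50, pop_size=20):
    pop = [make_net(2, 4, 1) for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        history.append(fitness(scored[0]))
        elite = scored[: pop_size // 4]  # truncation selection with elitism
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness), history

best, history = evolve()
```

    In a real controller-learning setting, `fitness` would run the network as a closed-loop controller in a physics simulation of the task; because the elite is carried over unchanged, the best fitness per generation never decreases.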

    A Survey and Analysis of Multi-Robot Coordination

    In the field of mobile robotics, the study of multi-robot systems (MRSs) has grown significantly in size and importance in recent years. Having made great progress on the basic problems of single-robot control, many researchers have shifted their focus to the study of multi-robot coordination. This paper presents a systematic survey and analysis of the existing literature on coordination, especially in multiple mobile robot systems (MMRSs). A series of related problems is reviewed, including communication mechanisms, planning strategies, and decision-making structures. A brief conclusion and perspectives for further research are given at the end of the paper.

    Neuroevolutionary reinforcement learning for generalized control of simulated helicopters

    This article presents an extended case study in the application of neuroevolution to generalized simulated helicopter hovering, an important challenge problem for reinforcement learning. While neuroevolution is well suited to coping with the domain's complex transition dynamics and high-dimensional state and action spaces, the need to explore efficiently and learn on-line poses unusual challenges. We propose and evaluate several methods for three increasingly challenging variations of the task, including the method that won first place in the 2008 Reinforcement Learning Competition. The results demonstrate that (1) neuroevolution can be effective for complex on-line reinforcement learning tasks such as generalized helicopter hovering, (2) neuroevolution excels at finding effective helicopter hovering policies but not at learning helicopter models, (3) due to the difficulty of learning reliable models, model-based approaches to helicopter hovering are feasible only when domain expertise is available to aid the design of a suitable model representation and (4) recent advances in efficient resampling can enable neuroevolution to tackle more aggressively generalized reinforcement learning tasks.
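
    The "efficient resampling" idea in point (4) addresses a core difficulty of evolutionary search in stochastic domains: a single noisy rollout gives an unreliable fitness estimate. A common remedy is to average repeated evaluations, spending cheap screening samples on everyone and expensive confirmation samples only on the most promising candidates. Below is a hedged, self-contained sketch of that idea on a toy noisy objective (the objective, sample budgets, and two-stage scheme are illustrative assumptions, not the article's specific algorithm):

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def noisy_fitness(x):
    # Hypothetical noisy objective: true value -(x - 3)^2 plus Gaussian
    # noise, standing in for one stochastic helicopter rollout.
    return -(x - 3.0) ** 2 + random.gauss(0, 1.0)

def resampled_fitness(x, n_samples):
    # Average repeated rollouts to shrink the noise on the estimate.
    return sum(noisy_fitness(x) for _ in range(n_samples)) / n_samples

def select_best(candidates, screen_samples=3, confirm_samples=30, top_k=3):
    # Two-stage resampling: a cheap screen for every candidate, then an
    # expensive confirmation pass only for the finalists.
    screened = sorted(candidates,
                      key=lambda x: resampled_fitness(x, screen_samples),
                      reverse=True)
    finalists = screened[:top_k]
    return max(finalists, key=lambda x: resampled_fitness(x, confirm_samples))

candidates = [0.0, 1.5, 2.9, 3.1, 5.0]
best = select_best(candidates)
```

    The design trade-off is budget allocation: with a fixed total number of rollouts, giving every candidate many samples wastes evaluations on obviously poor policies, while a single sample per candidate risks promoting a lucky rollout.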

    Evolving Keepaway Soccer Players through Task Decomposition

    In some complex control tasks, learning a direct mapping from an agent's sensors to its actuators is very difficult. For such tasks, decomposing the problem into more manageable components can make learning feasible. In this paper, we provide a task decomposition, in the form of a decision tree, for one such task. We investigate two different methods of learning the resulting subtasks. The first approach, layered learning, trains each component sequentially in its own training environment, aggressively constraining the search. The second approach, coevolution, learns all the subtasks simultaneously from the same experiences and puts few restrictions on the learning algorithm. We empirically compare these two training methodologies using neuro-evolution, a machine learning algorithm that evolves neural networks. Our experiments, conducted in the domain of simulated robotic soccer keepaway, indicate that neuro-evolution can learn effective behaviors and that the less constrained coevolutionary approach outperforms the sequential approach.
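
    The coevolutionary approach the abstract contrasts with layered learning can be illustrated on a toy problem: two sub-policies are evolved simultaneously, each candidate evaluated jointly with the current best collaborator from the other population. Everything below (the scalar "policies", the joint objective, the hill-climbing scheme) is an assumed stand-in for the paper's keepaway subtask networks:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def joint_fitness(a, b):
    # Toy joint task: the two sub-policies must coordinate so that
    # a + b is near 1 while a stays near 0.5 (a stand-in for two
    # keepaway subtasks that only succeed together).
    return -((a + b - 1.0) ** 2) - (a - 0.5) ** 2

def coevolve(generations=60, pop_size=12, sigma=0.2):
    pop_a = [random.uniform(-2, 2) for _ in range(pop_size)]
    pop_b = [random.uniform(-2, 2) for _ in range(pop_size)]
    best_a, best_b = pop_a[0], pop_b[0]
    for _ in range(generations):
        # Evaluate each candidate with the current best collaborator
        # from the other population, then mutate around the winners.
        best_a = max(pop_a, key=lambda a: joint_fitness(a, best_b))
        best_b = max(pop_b, key=lambda b: joint_fitness(best_a, b))
        pop_a = [best_a] + [best_a + random.gauss(0, sigma)
                            for _ in range(pop_size - 1)]
        pop_b = [best_b] + [best_b + random.gauss(0, sigma)
                            for _ in range(pop_size - 1)]
    return best_a, best_b

a, b = coevolve()
```

    Because both populations adapt against each other from shared evaluations, neither subtask needs a hand-built training environment of its own, which is the source of the flexibility the paper attributes to the coevolutionary approach.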

    Evolving keepaway soccer players through task decomposition

    One of the goals of machine learning algorithms is to facilitate the discovery of novel solutions to problems, particularly those that might be unforeseen by human problem-solvers. As such, there is a certain appeal to "tabula rasa learning," in which the algorithms are turned loose on learning tasks with no (or minimal) guidance from humans. However, the complexity of tasks that can be successfully addressed with tabula rasa learning given current machine learning technology is limited.