4 research outputs found

    Distributed vs. Centralized Particle Swarm Optimization for Learning Flocking Behaviors

    In this paper we address the automatic synthesis of controllers for the coordinated movement of multiple mobile robots. We use a noise-resistant version of Particle Swarm Optimization to learn, in simulation, a set of 50 weights of a plastic artificial neural network. Two learning strategies are applied: homogeneous centralized learning, in which every robot runs the same controller and performance is evaluated externally with a global metric, and heterogeneous distributed learning, in which robots run different controllers and performance is evaluated independently on each robot with a local metric. Both sets of metrics enforce Reynolds' flocking rules, resulting in a good correspondence between the metrics and the flocking behaviors obtained. Results demonstrate that the collective task can be learned with both approaches. The solutions from centralized learning have higher fitness and lower standard deviation than those learned in a distributed manner. We test the learned controllers in real-robot experiments and also show in simulation the performance of the controllers with an increasing number of robots.
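The flocking metrics above are built on Reynolds' three classic rules: separation, alignment, and cohesion. A minimal sketch of one update step applying these rules is shown below; the neighborhood radii, rule weights, and integration step are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def flocking_step(positions, velocities, r_neighbor=2.0, r_separation=0.5,
                  w_sep=1.5, w_align=1.0, w_coh=1.0, dt=0.1):
    """One update of Reynolds' rules: separation, alignment, cohesion.
    All radii and weights are illustrative, not taken from the paper."""
    accelerations = np.zeros_like(positions)
    for i in range(len(positions)):
        offsets = positions - positions[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbors = (dists < r_neighbor) & (dists > 0)
        if not neighbors.any():
            continue
        # Separation: steer away from agents that are too close.
        close = neighbors & (dists < r_separation)
        if close.any():
            accelerations[i] -= w_sep * offsets[close].sum(axis=0)
        # Alignment: match the mean velocity of the neighborhood.
        accelerations[i] += w_align * (velocities[neighbors].mean(axis=0) - velocities[i])
        # Cohesion: steer toward the neighborhood's center of mass.
        accelerations[i] += w_coh * offsets[neighbors].mean(axis=0)
    velocities = velocities + dt * accelerations
    positions = positions + dt * velocities
    return positions, velocities
```

A local performance metric of the kind used in distributed learning can be derived from the same quantities each robot already senses: distance to its nearest neighbors, heading agreement, and offset from the local center of mass.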

    Noise-Resistant Particle Swarm Optimization for the Learning of Robust Obstacle Avoidance Controllers using a Depth Camera

    The Ranger robot was designed to interact with children in order to motivate them to tidy up their room. Its mechanical configuration, together with the limited field of view of its depth camera, makes the learning of obstacle avoidance behaviors a hard problem. In this article we introduce two new Particle Swarm Optimization (PSO) algorithms designed to address this noisy, high-dimensional optimization problem. Their aim is to increase the robustness of the generated robotic controllers compared to previous PSO algorithms. We show that this set of PSO algorithms can successfully learn the 166 parameters of a robotic controller for the obstacle avoidance task. We also study the impact that an increased evaluation budget has on the robustness and average performance of the optimized controllers. Finally, we validate the control solutions learned in simulation by testing the most robust controller in three different real arenas.
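A common ingredient of noise-resistant PSO variants is to re-evaluate each particle's personal best every iteration and aggregate the fitness estimates, so that a single lucky noisy evaluation cannot permanently dominate the swarm. The sketch below illustrates that idea with a running mean; it is an assumed minimal variant for illustration, not the exact algorithms introduced in the article, and all hyperparameters are placeholders.

```python
import numpy as np

def noise_resistant_pso(fitness, dim, n_particles=10, iterations=50,
                        w=0.73, c1=1.5, c2=1.5, seed=0):
    """Minimal noise-resistant PSO sketch (minimization).
    Personal bests are re-evaluated each iteration and their fitness
    estimates averaged to filter evaluation noise. Illustrative only."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest_x = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    pbest_n = np.ones(n_particles)  # evaluations aggregated per personal best
    for _ in range(iterations):
        # Re-evaluate personal bests; the running mean filters noise.
        new_f = np.array([fitness(p) for p in pbest_x])
        pbest_f = (pbest_f * pbest_n + new_f) / (pbest_n + 1)
        pbest_n += 1
        # Evaluate current positions and update personal bests.
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest_x[better], pbest_f[better], pbest_n[better] = x[better], f[better], 1
        # Standard velocity/position update toward personal and global bests.
        gbest = pbest_x[np.argmin(pbest_f)]
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest_x - x) + c2 * r2 * (gbest - x)
        x = x + v
    return pbest_x[np.argmin(pbest_f)]
```

The re-evaluation step is what consumes the extra evaluation budget discussed in the abstract: each iteration spends evaluations on refining fitness estimates rather than only on exploring new candidate controllers.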

    Distributed Particle Swarm Optimization - Particle Allocation and Neighborhood Topologies for the Learning of Cooperative Robotic Behaviors

    In this article we address the automatic synthesis of controllers for the coordinated movement of multiple mobile robots, as a canonical example of cooperative robotic behavior. We use five distributed noise-resistant variations of Particle Swarm Optimization (PSO) to learn, in simulation, a set of 50 weights of an artificial neural network. The variations differ in how particles are allocated and evaluated on the robots, and in how the PSO neighborhood is implemented. In addition, we use a centralized approach as a benchmark for the distributed versions. Regardless of the learning approach, each robot measures the performance of the group locally and individually, using exclusively on-board resources. Results show that four of the distributed variations achieve fitness values similar to the centralized version and always learn successfully. The fifth distributed variation fails to learn properly on some runs and yields lower fitness when it succeeds. We systematically test the controllers learned in simulation in real-robot experiments.
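One standard way to implement a PSO neighborhood, and one that maps naturally onto local robot-to-robot communication in a distributed setting, is the ring (lbest) topology: each particle consults only its index neighbors when choosing the attractor for its velocity update. The sketch below computes the neighborhood best under a ring topology; it is an illustrative construction, not necessarily one of the five variations studied in the article.

```python
import numpy as np

def ring_neighborhood_best(pbest_x, pbest_f, k=1):
    """For each particle, return the best personal best among itself and
    its k index neighbors on each side of a ring (lbest topology).
    In a distributed multi-robot setting the ring can map onto local
    communication links. Illustrative sketch (minimization)."""
    n = len(pbest_f)
    lbest = np.empty_like(pbest_x)
    for i in range(n):
        # Wrap indices around the ring to form the neighborhood.
        idx = [(i + d) % n for d in range(-k, k + 1)]
        lbest[i] = pbest_x[idx[int(np.argmin(pbest_f[idx]))]]
    return lbest
```

In the velocity update, `lbest[i]` then replaces the single global best, so information about good solutions propagates gradually around the ring instead of instantly to every particle, which trades convergence speed for robustness and lower communication requirements.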

    Distributed Multi-Robot Learning using Particle Swarm Optimization

    This thesis studies the automatic design and optimization of high-performing, robust controllers for mobile robots using exclusively on-board resources. Due to the often large parameter space and noisy performance metrics, this constitutes an expensive optimization problem. Population-based learning techniques have proven effective in dealing with noise and are thus promising tools for this problem. We focus this research on the Particle Swarm Optimization (PSO) algorithm, which, in addition to dealing with noise, allows a distributed implementation, speeding up the optimization process and adding robustness to failure of individual agents. In this thesis, we systematically analyze the different variables that affect the learning process for a multi-robot obstacle avoidance benchmark. These variables include algorithmic parameters, controller architecture, and learning and testing environments. The analysis is performed on experimental setups of increasing evaluation time and complexity: numerical benchmark functions, high-fidelity simulations, and experiments with real robots. Based on this analysis, we apply the distributed PSO framework to learn a more complex, collaborative task: flocking. This attempt to learn a collaborative task in a distributed manner on a large parameter space is, to our knowledge, the first of its kind. In addition, we address the problem of noisy performance evaluations encountered in these robotic tasks and present a new distributed PSO algorithm for dealing with noise, suitable for resource-constrained mobile robots due to its low requirements in terms of memory and limited local communication.