2,675 research outputs found

    Cost Adaptation for Robust Decentralized Swarm Behaviour

    Full text link
    Decentralized receding horizon control (D-RHC) provides a mechanism for coordination in multi-agent settings without a centralized command center. However, combining a set of different goals, costs, and constraints to form an efficient optimization objective for D-RHC can be difficult. To address this problem, we use a meta-learning process -- cost adaptation -- which generates the optimization objective for D-RHC to solve based on a set of human-generated priors (cost and constraint functions) and an auxiliary heuristic. We use this adaptive D-RHC method for control of mesh-networked swarm agents. This formulation allows a wide range of tasks to be encoded and can account for network delays, heterogeneous capabilities, and increasingly large swarms through the adaptation mechanism. We leverage the Unity3D game engine to build a simulator capable of introducing artificial networking failures and delays in the swarm. Using the simulator, we validate our method on an example coordinated exploration task. We demonstrate that cost adaptation allows for more efficient and safer task completion under varying environment conditions and increasingly large swarm sizes. We release our simulator and code to the community for future work. Comment: Accepted to IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 201
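    To make the cost-adaptation idea concrete, here is a minimal sketch (not the authors' released code) of a receding-horizon step whose objective is a weighted sum of human-provided cost priors, with the weights re-tuned by an auxiliary heuristic. All names (cost_priors, auxiliary_heuristic, drhc_step) and the specific cost terms are hypothetical.

```python
# Minimal cost-adaptation sketch: a weighted sum of human-generated cost
# priors forms the D-RHC objective, and an auxiliary heuristic re-tunes the
# weights between planning steps. Illustrative only; all names are hypothetical.
import numpy as np

def cost_priors(state, action):
    """Hypothetical human-generated cost/constraint terms for one agent."""
    goal_cost = np.linalg.norm(state["pos"] + action - state["goal"])
    collision_cost = max(0.0, 1.0 - float(np.min(state["neighbor_dists"])))
    effort_cost = np.linalg.norm(action)
    return np.array([goal_cost, collision_cost, effort_cost])

def auxiliary_heuristic(weights, outcome):
    """Nudge weights toward terms associated with constraint violations."""
    return np.clip(weights + 0.1 * outcome["violations"], 0.1, 10.0)

def drhc_step(state, weights, candidate_actions):
    """Pick the candidate action minimizing the adapted weighted objective."""
    costs = [weights @ cost_priors(state, a) for a in candidate_actions]
    return candidate_actions[int(np.argmin(costs))]
```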

    Adaptive intelligence: essential aspects

    Get PDF
    The article discusses essential aspects of Adaptive Intelligence. Experimental results on optimisation of global test functions by Free Search, Differential Evolution, and Particle Swarm Optimisation clarify how these methods can adapt to multi-modal landscapes and search spaces dominated by sub-optimal regions without supervisor control. The achieved results are compared and analysed.
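    As an illustration of one of the population-based methods compared above, the sketch below implements a basic Particle Swarm Optimisation loop on the multi-modal Rastrigin test function. It is a generic textbook variant, not the experimental setup used in the article, and the parameter values are assumptions.

```python
# Basic PSO minimising the multi-modal Rastrigin function (illustrative only).
import numpy as np

def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def pso(dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.12, 5.12, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([rastrigin(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity mixes inertia, attraction to personal best, and to global best.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([rastrigin(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, rastrigin(gbest)

# Example: pso() typically converges near the global optimum at the origin.
```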

    Embodied Evolution in Collective Robotics: A Review

    Full text link
    This paper provides an overview of evolutionary robotics techniques applied to on-line distributed evolution for robot collectives -- namely, embodied evolution. It provides a definition of embodied evolution as well as a thorough description of the underlying concepts and mechanisms. The paper also presents a comprehensive summary of research published in the field since its inception (1999-2017), providing various perspectives to identify the major trends. In particular, we identify a shift from considering embodied evolution as a parallel search method within small robot collectives (fewer than 10 robots) to embodied evolution as an on-line distributed learning method for designing collective behaviours in swarm-like collectives. The paper concludes with a discussion of applications and open questions, providing a milestone for past research and an inspiration for future work. Comment: 23 pages, 1 figure, 1 table
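    For readers unfamiliar with the term, the following sketch outlines the generic shape of an embodied-evolution loop as described above: each robot evaluates its controller genome online, broadcasts it to neighbours, and occasionally adopts and mutates a fitter received genome. It is an illustrative outline, not an algorithm taken from the review, and all class and method names are hypothetical.

```python
# Generic outline of an embodied-evolution loop (illustrative, hypothetical names).
import random

class Robot:
    def __init__(self, genome_len=8):
        self.genome = [random.uniform(-1, 1) for _ in range(genome_len)]
        self.fitness = 0.0
        self.inbox = []          # (fitness, genome) pairs received from neighbours

    def evaluate_step(self, reward):
        # Online fitness: accumulate task reward while behaving in the world.
        self.fitness += reward

    def broadcast(self, neighbors):
        # Share the current genome and its measured fitness with nearby robots.
        for n in neighbors:
            n.inbox.append((self.fitness, list(self.genome)))

    def maybe_adopt(self, mutation_rate=0.1):
        # Occasionally replace the local controller with a mutated copy of a
        # fitter genome received from a neighbour.
        if not self.inbox:
            return
        best_fit, best_genome = max(self.inbox, key=lambda fg: fg[0])
        if best_fit > self.fitness:
            self.genome = [g + random.gauss(0, mutation_rate) for g in best_genome]
            self.fitness = 0.0   # re-evaluate the new controller from scratch
        self.inbox.clear()
```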

    Bio-Inspired Obstacle Avoidance: from Animals to Intelligent Agents

    Get PDF
    A considerable amount of research in the field of modern robotics deals with mobile agents and their autonomous operation in unstructured, dynamic, and unpredictable environments. Designing robust controllers that map sensory input to action in order to avoid obstacles remains a challenging task. Several biological concepts lend themselves to autonomous navigation and reactive obstacle avoidance. We present an overview of the most noteworthy, elaborated, and interesting biologically-inspired approaches for solving the obstacle avoidance problem. We categorize these approaches into three groups: nature-inspired optimization, reinforcement learning, and biorobotics. We emphasize the advantages and highlight potential drawbacks of each approach. We also identify the benefits of using biological principles in artificial intelligence in various research areas.
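    As a concrete example of the simplest reactive, biologically-inspired scheme in this space, the sketch below shows a Braitenberg-style controller in which each proximity sensor excites the wheel on its own side, so the robot turns away from the closer obstacle. The function name and gains are hypothetical and not drawn from the surveyed works.

```python
# Braitenberg-style reactive avoidance for a differential-drive robot
# (illustrative sketch; name and gains are hypothetical).
def avoid_obstacles(left_prox, right_prox, base_speed=0.5, gain=1.0):
    """Proximity readings in [0, 1]; 1.0 means an obstacle is very close.

    Each sensor excites the wheel on its own side, so an obstacle on the
    right speeds up the right wheel and steers the robot to the left,
    away from the obstacle.
    """
    left_wheel = base_speed + gain * left_prox
    right_wheel = base_speed + gain * right_prox
    return left_wheel, right_wheel

# Example: avoid_obstacles(0.0, 0.8) -> right wheel faster, robot turns left.
```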

    Q-Learning Adjusted Bio-Inspired Multi-Robot Coordination

    Get PDF