673 research outputs found

    The impact of agent density on scalability in collective systems: noise-induced versus majority-based bistability

    In this paper, we show that non-uniform distributions of agents in a swarm affect the scalability of collective decision-making. In particular, we highlight the relevance of noise-induced bistability in very sparse swarm systems and the failure of these systems to scale. Our work is based on three decision models. In the first model, each agent can change its decision after being recruited by a nearby agent. The second model captures the dynamics of dense swarms controlled by the majority rule (i.e., agents switch their opinion to comply with that of the majority of their neighbors). The third model combines the first two, with the aim of studying the role of non-uniform swarm density in the performance of collective decision-making. Based on the three models, we formulate a set of requirements for convergence and scalability in collective decision-making.
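
    As an aside for readers, the following minimal sketch (not code from the paper) illustrates the kind of dynamics the three models describe: binary opinion dynamics in which an agent is either recruited by a single random neighbor or adopts the local majority, subject to a small spontaneous-switching noise term. All function names and parameter values are illustrative assumptions.

    import random

    def simulate(n_agents=100, steps=2000, k_neighbors=5,
                 p_noise=0.02, use_majority=True, seed=0):
        """Toy binary opinion dynamics: recruitment vs. local majority rule.

        Parameters are illustrative assumptions, not values from the paper.
        Returns the fraction of agents holding opinion 1 at each step.
        """
        rng = random.Random(seed)
        opinions = [rng.randint(0, 1) for _ in range(n_agents)]
        history = []
        for _ in range(steps):
            i = rng.randrange(n_agents)
            if rng.random() < p_noise:
                # Spontaneous switching: the noise source behind
                # noise-induced bistability in very sparse swarms.
                opinions[i] = 1 - opinions[i]
            elif use_majority:
                # Dense-swarm model: adopt the majority opinion of
                # k randomly sampled neighbors.
                neighbors = rng.sample(range(n_agents), k_neighbors)
                ones = sum(opinions[j] for j in neighbors)
                if ones * 2 != k_neighbors:  # ignore ties
                    opinions[i] = 1 if ones * 2 > k_neighbors else 0
            else:
                # Sparse-swarm model: recruitment by a single random agent.
                j = rng.randrange(n_agents)
                opinions[i] = opinions[j]
            history.append(sum(opinions) / n_agents)
        return history

    if __name__ == "__main__":
        trace = simulate()
        print("final fraction of opinion 1:", trace[-1])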

    Cooperation of Nature and Physiologically Inspired Mechanism in Visualisation

    A novel approach to integrating two swarm intelligence algorithms is considered: one simulating the flocking behaviour of birds (Particle Swarm Optimisation) and the other (Stochastic Diffusion Search) mimicking the recruitment behaviour of one species of ants, Leptothorax acervorum. This hybrid algorithm is assisted by a biological mechanism inspired by the behaviour of blood flow and cells in blood vessels, where the concept of high and low blood pressure is utilised. The performance of the nature-inspired algorithms and the biologically inspired mechanism in the hybrid algorithm is reflected in a cooperative attempt to make a drawing on the canvas. The scientific value of the marriage between the two swarm intelligence algorithms is being investigated thoroughly on many benchmarks, and the results reported so far suggest a promising prospect (al-Rifaie, Bishop & Blackwell, 2011). We also discuss whether the ‘art works’ generated by nature- and biologically inspired algorithms can be considered ‘computationally creative’.
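
    For readers unfamiliar with the ant-inspired recruitment mechanism at the heart of Stochastic Diffusion Search, a minimal generic sketch of its test and diffusion phases follows; the objective function, search space, and parameters are placeholders, and the full hybrid with Particle Swarm Optimisation and the blood-pressure mechanism described above is not reproduced here.

    import random

    def sds(objective, search_space, n_agents=50, iterations=200, seed=0):
        """Minimal Stochastic Diffusion Search: agents hold hypotheses and
        recruit idle agents to promising ones. A generic sketch, not the
        authors' hybrid algorithm; objective should map to [0, 1]."""
        rng = random.Random(seed)
        hypotheses = [rng.choice(search_space) for _ in range(n_agents)]
        active = [False] * n_agents
        for _ in range(iterations):
            # Test phase: an agent becomes active if a stochastic partial
            # evaluation of its hypothesis succeeds.
            for i, h in enumerate(hypotheses):
                active[i] = objective(h) > rng.random()
            # Diffusion phase: an inactive agent polls a random agent; if
            # that agent is active it copies its hypothesis (recruitment),
            # otherwise it picks a new random hypothesis.
            for i in range(n_agents):
                if not active[i]:
                    j = rng.randrange(n_agents)
                    hypotheses[i] = (hypotheses[j] if active[j]
                                     else rng.choice(search_space))
        # Return the most frequently held hypothesis as the converged cluster.
        return max(set(hypotheses), key=hypotheses.count)

    if __name__ == "__main__":
        space = list(range(100))
        # Placeholder objective peaking at 42, scaled to [0, 1].
        best = sds(lambda x: max(0.0, 1 - abs(x - 42) / 100), space)
        print("converged hypothesis:", best)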

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face several open challenges, including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly for swarm robots with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation control could be performed by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., BristleBot) for artistic creation. In particular, we combine bio-inspired techniques (i.e., flocking, foraging) with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We proceed by presenting a novel flocking control for UAV swarms using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy and each UAV performs actions based on the local information it collects. In addition, to avoid collisions among UAVs and to guarantee flocking and navigation, a reward function is designed that combines global flocking maintenance, a mutual reward, and a collision penalty. We adapt deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy using actor-critic networks and a global state space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walks to control the communication between a team of robots with swarming behavior for musical creation.
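
    To make the reward design described above concrete, the following is a hedged sketch of how a per-UAV reward combining flocking maintenance, a shared (mutual) leader-tracking term, and a collision penalty might be composed; the weights, distance thresholds, and function names are assumptions for illustration rather than the thesis's exact formulation.

    import numpy as np

    def flocking_reward(positions, leader_pos, agent_idx,
                        d_desired=2.0, d_collision=0.5,
                        w_flock=1.0, w_mutual=0.5, w_collision=10.0):
        """Illustrative per-agent reward for leader-follower flocking.

        positions : (N, 3) array of UAV positions; leader_pos : (3,) array.
        Weights and thresholds are assumed values, not taken from the thesis.
        """
        pos_i = positions[agent_idx]
        others = np.delete(positions, agent_idx, axis=0)
        dists = np.linalg.norm(others - pos_i, axis=1)

        # Flocking-maintenance term: penalize deviation from the desired
        # inter-agent spacing.
        r_flock = -w_flock * np.mean((dists - d_desired) ** 2)

        # Mutual (shared) term: every agent is rewarded for the swarm as a
        # whole staying close to the leader, encouraging a shared policy.
        r_mutual = -w_mutual * np.mean(
            np.linalg.norm(positions - leader_pos, axis=1))

        # Collision penalty: large negative reward if any neighbor is
        # closer than the safety distance.
        r_collision = -w_collision * float(np.any(dists < d_collision))

        return r_flock + r_mutual + r_collision

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pos = rng.uniform(-3, 3, size=(5, 3))
        leader = np.zeros(3)
        print("reward for UAV 0:", flocking_reward(pos, leader, 0))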