34 research outputs found

    Immune Inspired Cooperative Mechanism with Refined Low-level Behaviors for Multi-Robot Shepherding

    In this paper, biological immune systems and their relationship to the multi-robot shepherding problem are discussed. The proposed algorithm is based on immune network theories, which have many similarities with the multi-robot systems domain. The underlying immune-inspired cooperative mechanism of the algorithm is simulated and evaluated. The paper also describes a refinement of the memory-based immune network that enhances a robot's action-selection process. The refined model, which is based on the Immune Network T-cell-regulated - with Memory (INT-M) model, is applied to the dog-sheep scenario. The refinements involve the low-level behaviors of the robot dogs, namely Shepherds' Formation and Shepherds' Approach. These behaviors make the shepherds form a line behind the group of sheep and respect a safety zone around each flock, thus achieving better control of the flock and minimizing flock-separation occurrences. Simulation experiments are conducted on the Player/Stage robotics platform.
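The Shepherds' Formation behavior described above lends itself to a short geometric sketch: place the shepherds on a line behind the flock, on the side opposite the goal, outside a safety standoff. This is a minimal illustration assuming point robots in 2-D; the function name and the `standoff`/`spacing` parameters are ours, not from the paper.

```python
import math

def line_formation(sheep, goal, n_shepherds, standoff=3.0, spacing=1.5):
    """Place n_shepherds on a line behind the flock, opposite the goal.

    sheep: list of (x, y) positions. standoff keeps the shepherds outside
    the flock's safety zone. All parameter values are illustrative.
    """
    # Flock centroid.
    cx = sum(p[0] for p in sheep) / len(sheep)
    cy = sum(p[1] for p in sheep) / len(sheep)
    # Unit vector pointing from the goal toward the flock ("behind").
    dx, dy = cx - goal[0], cy - goal[1]
    d = math.hypot(dx, dy) or 1.0
    ux, uy = dx / d, dy / d
    # Anchor point behind the flock, then spread along the perpendicular.
    ax, ay = cx + ux * standoff, cy + uy * standoff
    px, py = -uy, ux
    mid = (n_shepherds - 1) / 2.0
    return [(ax + px * spacing * (i - mid), ay + py * spacing * (i - mid))
            for i in range(n_shepherds)]
```

For example, with sheep at (0, 0) and (2, 0) and the goal at (10, 0), three shepherds end up on the vertical line x = -2, spaced 1.5 apart, behind the flock relative to the goal.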

    Immune systems inspired multi-robot cooperative shepherding

    Certain tasks require multiple robots to cooperate in order to solve them. The main problem with multi-robot systems is that they are inherently complex and usually situated in a dynamic environment. Biological immune systems possess natural distributed control and exhibit real-time adaptivity, properties that are required to solve problems in multi-robot systems. In this thesis, biological immune systems and their response to external elements to maintain an organism's health state are researched. The objective of this research is to propose immune-inspired approaches to cooperation, to establish an adaptive cooperation algorithm, and to determine the refinements that can be applied in relation to cooperation. Two immune-inspired models based on immune network theory are proposed, namely the Immune Network T-cell-regulated - with Memory (INT-M) and the Immune Network T-cell-regulated - Cross-Reactive (INT-X) models. The INT-M model is studied further; the results suggest that the model is feasible and suitable for use, especially in the multi-robot cooperative shepherding domain. The Collecting task in the RoboShepherd scenario and the application of the INT-M algorithm to multi-robot cooperation are discussed. This scenario provides a highly dynamic and complex situation with wide applicability to real-world problems. The underlying 'mechanism of cooperation' in the immune-inspired model (INT-M) is verified to be adaptive in this chosen scenario. Several multi-robot cooperative shepherding factors are studied and refinements proposed, notably the methods used for Shepherds' Approach, Shepherds' Formation, and Steering Points' Distance. This study also recognises the importance of flock identification in relation to cooperative shepherding, and the Connected Components Labelling method is presented to overcome the related problem.
    Further work is suggested on the proposed INT-X model, which was not implemented in this study, since it builds on top of the INT-M algorithm and its refinements. This study can also be extended to include other shepherding behaviours, further investigation of other useful features of biological immune systems, and the application of the proposed models to other cooperative tasks.
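The flock-identification step mentioned above (Connected Components Labelling) can be sketched on point positions rather than an image grid: treat each sheep as a graph node, connect pairs closer than a sensing radius, and label the connected components with a breadth-first search. The `radius` threshold and the function name are illustrative assumptions, not the thesis's implementation.

```python
import math
from collections import deque

def identify_flocks(sheep, radius=2.0):
    """Label flocks as connected components: sheep closer than `radius`
    belong to the same flock. Returns one integer label per sheep."""
    n = len(sheep)
    labels = [-1] * n
    current = 0
    for start in range(n):
        if labels[start] != -1:
            continue                      # already assigned to a flock
        labels[start] = current
        queue = deque([start])
        while queue:                      # breadth-first flood fill
            i = queue.popleft()
            for j in range(n):
                if labels[j] == -1 and math.dist(sheep[i], sheep[j]) < radius:
                    labels[j] = current
                    queue.append(j)
        current += 1
    return labels
```

Two tight groups far apart then receive two different labels, which is exactly the signal a shepherding controller needs to detect flock separation.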

    A Refined Immune Systems Inspired Model for Multi-Robot Shepherding

    In this paper, basic biological immune systems and their responses to external elements to maintain an organism's health state are described. The relationship between immune systems and multi-robot systems is also discussed. The proposed algorithm is based on immune network theories, which have many similarities with the multi-robot systems domain. The paper describes a refinement of the memory-based immune network that enhances a robot's action-selection process. The refined model, which is based on the Immune Network T-cell-regulated - with Memory (INT-M) model, is applied to the dog-and-sheep scenario. The refinements involve the low-level behaviors of the robot dogs, namely Shepherds' Formation and Shepherds' Approach. The shepherds form a line behind the group of sheep and also respect a safe zone around each sheep, thus achieving better control of the flock. Simulation experiments are conducted on the Player/Stage platform.
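As a rough illustration of how an immune-network model can drive action selection, the sketch below uses idealized Farmer-style dynamics: each behavior (antibody) is stimulated by its match with the current situation (antigen) and by other behaviors, suppressed by competitors, and the highest-concentration behavior is selected. The actual INT-M equations differ in their details; every name and constant here is an assumption, not the paper's model.

```python
def select_behavior(affinity, stim, supp, conc, k=0.1):
    """One idealized immune-network update (Farmer-style dynamics).

    affinity[i]: match between behavior i and the current situation.
    stim[i][j] / supp[i][j]: stimulation / suppression that behavior j
    exerts on behavior i. conc[i]: current concentration of behavior i.
    Returns (new_concentrations, index_of_selected_behavior).
    """
    n = len(conc)
    new = []
    for i in range(n):
        net = sum(stim[i][j] * conc[j] for j in range(n)) \
            - sum(supp[i][j] * conc[j] for j in range(n))
        # -1.0 models a natural death (decay) term, as in Farmer's equation.
        dc = k * (affinity[i] + net - 1.0) * conc[i]
        new.append(max(conc[i] + dc, 0.0))
    winner = max(range(n), key=lambda i: new[i])
    return new, winner
```

With no network interactions, the behavior whose affinity best matches the situation simply grows fastest and wins; the network terms let past winners (memory) reinforce or inhibit each other.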

    Synthesis and Analysis of Minimalist Control Strategies for Swarm Robotic Systems

    The field of swarm robotics studies bio-inspired cooperative control strategies for large groups of relatively simple robots. The robots are limited in their individual capabilities; however, by inducing cooperation amongst them, the limitations can be overcome. Local sensing and interactions within the robotic swarm promote scalable, robust, and flexible behaviours. This thesis focuses on synthesising and analysing minimalist control strategies for swarm robotic systems. Using a computation-free swarming framework, multiple decentralised control strategies are synthesised and analysed. The control strategies enable the robots, equipped with only discrete-valued sensors, to reactively respond to their environment. We present the simplest control solutions to date to four multi-agent problems: finding consensus, gathering on a grid, shepherding, and spatial coverage. The control solutions, obtained by employing an offline evolutionary robotics approach, are tested either in computer simulation or by physical experiment. They are shown to be, up to a certain extent, scalable, robust against sensor noise, and flexible to changes in their environment. The investigated gathering problem is proven to be unsolvable using the deterministic framework. The extended framework, using stochastic reactive controllers, is applied to obtain provably correct solutions. Using no run-time memory and only limited sensing makes it possible to realise implementations that are arguably free of arithmetic computation. Due to the low computational demands, the control solutions may enable or inspire novel applications, for example, in nanomedicine.
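A computation-free controller of the kind analysed above can be illustrated as a pure lookup: the discrete sensor reading indexes a fixed table of wheel speeds, with no run-time memory and no arithmetic on the sensor value. The sensor states and table entries below are illustrative, not the evolved solutions from the thesis.

```python
def make_reactive_controller(action_table):
    """A computation-free reactive controller: the discrete sensor
    reading directly selects a (left_wheel, right_wheel) speed pair.
    No state is kept between calls and nothing is computed."""
    def controller(sensor_reading):
        return action_table[sensor_reading]
    return controller

# Sensor states (illustrative): 0 = nothing seen, 1 = sheep seen, 2 = goal seen.
table = {
    0: (1.0, -1.0),   # turn on the spot to search
    1: (1.0, 1.0),    # drive straight toward the sheep
    2: (0.5, 1.0),    # arc toward the goal
}
ctrl = make_reactive_controller(table)
```

Because the whole policy is a handful of constants, it can be realised without arithmetic hardware, which is the point the abstract makes about nanomedicine-scale applications.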

    Learning shepherding behavior

    Artificial shepherding strategies, i.e. using robots to move individuals to given locations, have many applications. For example, people can be guided by mobile robots away from dangerous places, or swimming robots may help to clean up oil spills. This thesis uses a multiagent system to model the robots and sheep. We analyze the complexity of the shepherding task and present a greedy algorithm that needs only linear time to compute a solution that is proven to be close to optimal. Additionally, we analyze to what extent such strategies can be learned, as learning usually provides powerful solutions. This thesis focuses on reinforcement learning (RL) as the learning method. To enable RL agents to use their knowledge efficiently in continuous or large state spaces (as, e.g., in the shepherding task), methods to transfer knowledge to unseen but similar situations are required. The approaches developed in this thesis, GNG-Q and I-GNG-Q, combine RL with adaptive neural algorithms and enable the agent to learn behavior in parallel with its representation. Both are based upon the growing neural gas, an unsupervised learning approach that learns a vector quantization by placing and adjusting units in the input space. GNG-Q groups states that are spatially close and share the same behavior, while I-GNG-Q combines the learned behavior from a larger area of the approximation, which results in smoother value functions. Thus, GNG-Q performs a state-space abstraction and I-GNG-Q approximates the value function. Both methods monitor the agent's policy during learning to find regions of the approximation that have to be refined. Among the core advantages of these approaches are that they do not need a model of the environment and that the resolution of the approximation is determined automatically. The experimental evaluation underlines that the behaviors learned using these approaches are highly efficient and effective.
    Michael Baumann. Date of defense: 22.01.2016. Faculty of Electrical Engineering, Computer Science and Mathematics, Universität Paderborn, Dissertation, 201
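The growing-neural-gas machinery underlying GNG-Q and I-GNG-Q can be hinted at with a single adaptation step: find the unit nearest the input sample and move it (and, optionally, its topological neighbours) toward that sample. Growth, edge aging, error accumulation, and the RL coupling of GNG-Q are all omitted here; the function name and learning rates are illustrative.

```python
import math

def gng_step(units, x, eps_winner=0.2, eps_neighbor=0.02, neighbors=None):
    """One adaptation step of a growing-neural-gas-style vector quantizer.

    units: mutable list of unit positions (tuples). x: input sample.
    neighbors: optional dict mapping a unit index to its topological
    neighbours. Moves the nearest unit (strongly) and its neighbours
    (weakly) toward x; returns the winner's index.
    """
    dists = [math.dist(u, x) for u in units]
    w = min(range(len(units)), key=lambda i: dists[i])
    units[w] = tuple(u + eps_winner * (xi - u) for u, xi in zip(units[w], x))
    for j in (neighbors or {}).get(w, ()):
        units[j] = tuple(u + eps_neighbor * (xi - u)
                         for u, xi in zip(units[j], x))
    return w
```

In GNG-Q the units double as abstract states for Q-learning, so refining the quantization where the policy is inconsistent directly refines the state-space abstraction.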

    Task Allocation in Foraging Robot Swarms: The Role of Information Sharing

    Autonomous task allocation is a desirable feature of robot swarms that collect and deliver items in scenarios where congestion, caused by accumulated items or robots, can temporarily interfere with swarm behaviour. In such settings, self-regulation of the workforce can prevent unnecessary energy consumption. We explore two types of self-regulation: non-social, where robots become idle upon experiencing congestion, and social, where robots broadcast information about congestion to their teammates in order to socially inhibit foraging. We show that while both types of self-regulation can lead to improved energy efficiency and increase the amount of resource collected, the speed with which information about congestion flows through a swarm affects the scalability of these algorithms.
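The two self-regulation schemes can be caricatured in a few lines: in the non-social variant only the robots that sense congestion go idle, while in the social variant a congestion broadcast inhibits the whole team. This sketch ignores broadcast propagation delay, which is exactly the factor the abstract identifies as limiting scalability; it is an illustration, not the paper's controller.

```python
def step(robots, congested, social=True):
    """One update of congestion-driven self-regulation (illustrative).

    robots: list of dicts with an 'idle' flag.
    congested: set of robot indices that experienced congestion this step.
    Returns the number of idle robots after the update.
    """
    if social and congested:
        for r in robots:
            r['idle'] = True          # broadcast inhibits the whole team
    else:
        for i in congested:
            robots[i]['idle'] = True  # only robots that sensed congestion
    return sum(r['idle'] for r in robots)
```

Under this caricature one congested robot idles the entire social swarm at once, whereas the non-social swarm sheds workers one at a time; the trade-off studied in the paper is how quickly that inhibition signal should spread.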