    Autonomous Recognition of Collective Motion Behaviours in Robot Swarms from Vision Data Using Deep Neural Networks

The study of natural swarms and the attempt to replicate their behaviours in artificial systems have been active areas of research for many years. The complexity of such systems, arising from simple interactions among many similar units, is fascinating and has inspired researchers from many disciplines to study the underlying mechanisms. In robotics, implementing swarm behaviours in embodied agents (robots) is challenging: simple rules for interaction between individual robots must be designed so that complex collective behaviours emerge, and every new behaviour must be manually tuned to function well on a given robotic platform.

While it is relatively easy to design rule-based systems that display structured collective behaviour (such as collective motion or grouping), it remains difficult for computers to recognise such behaviour when it occurs. Recognition of swarm behaviour is useful in at least two cases. In Case 1, it permits a party to recognise a swarm controlled by another party in an adversarial interaction. In Case 2, it permits a machine to develop collective behaviours autonomously by recognising when desirable behaviour emerges. Existing work has examined collective behaviour recognition using feature-based data describing a swarm. However, this may not be feasible in Case 1 if feature-based data is not available for an adversarial swarm.

This thesis proposes deep neural network approaches to recognising collective behaviour from video data. The work contributes four datasets comprising examples of both collective flocking behaviour and random behaviour in groups of Pioneer 3DX robots. The first dataset captures the behaviours in top-down video to address Case 1. The second and third datasets capture the behaviours from forward-facing cameras on each robot as an approach to Case 2. The fourth dataset captures the behaviours using spherical cameras, also contributing to Case 2. We additionally use feature-based data describing the same behaviours for comparative purposes.

This thesis also contributes the design of a deep neural network appropriate for learning to recognise collective behaviour from video data. We compare the performance of this network to that of a shallow network trained on feature-based data, both in distinguishing collective from random motion and in distinguishing various grouping parameters of collective behaviour. Results show that video data can support recognition as accurately as feature-based data when distinguishing flocking collective motion from random motion. We also present a case study showing that our approach to recognising collective motion transfers from simulated robots to real robots.
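The abstract does not specify which feature-based descriptors the shallow baseline uses; a common choice in the collective-motion literature is the polarization order parameter, which measures how aligned the agents' headings are. The sketch below (an illustrative assumption, not the thesis's actual feature set) shows how such a feature separates flocking from random motion:

```python
import numpy as np

def polarization(headings):
    """Order parameter in [0, 1]: the length of the mean unit heading vector.
    Close to 1 when agents move in a common direction (flocking),
    close to 0 when headings are uniformly random."""
    vecs = np.stack([np.cos(headings), np.sin(headings)], axis=1)
    return float(np.linalg.norm(vecs.mean(axis=0)))

rng = np.random.default_rng(0)
# Hypothetical heading samples for a 50-robot group:
flocking = rng.normal(loc=0.3, scale=0.1, size=50)  # tightly aligned headings
random_motion = rng.uniform(0.0, 2 * np.pi, size=50)  # uniformly random headings

print(polarization(flocking))       # near 1
print(polarization(random_motion))  # near 0
```

A simple threshold on this scalar already classifies the two toy behaviours, which is why feature-based recognition is attractive when agent states are observable; the thesis's point is that in Case 1 such state data may be unavailable, motivating recognition directly from video.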