
    Symbiotic interaction between humans and robot swarms

    Comprising a potentially large team of autonomous cooperative robots that locally interact and communicate with each other, robot swarms provide a natural diversity of parallel and distributed functionalities, high flexibility, potential for redundancy, and fault tolerance. The use of autonomous mobile robots is expected to increase in the future, and swarm robotic systems are envisioned to play important roles in tasks such as search and rescue (SAR) missions, transportation of objects, surveillance, and reconnaissance operations. To robustly deploy robot swarms in the field with humans, this research addresses the fundamental problems of the relatively new field of human-swarm interaction (HSI). Four core classes of problems have been addressed for proximal interaction between humans and robot swarms: interaction and communication; swarm-level sensing and classification; swarm coordination; and swarm-level learning. The primary contribution of this research is the development of a bidirectional human-swarm communication system for non-verbal interaction between humans and heterogeneous robot swarms. The guiding field of application is SAR missions. The core challenges and issues in HSI include: How can human operators interact and communicate with robot swarms? Which interaction modalities can be used by humans? How can human operators instruct and command robots from a swarm? Which mechanisms can be used by robot swarms to convey feedback to human operators? Which types of feedback can swarms convey to humans? In this research, to start answering these questions, hand gestures have been chosen as the interaction modality for humans, since gestures are simple to use, easily recognized, and possess spatial-addressing properties. To facilitate bidirectional interaction and communication, a dialogue-based interaction system is introduced which consists of: (i) a grammar-based gesture language with a vocabulary of non-verbal commands that allows humans to efficiently provide mission instructions to swarms, and (ii) a swarm-coordinated multi-modal feedback language that enables robot swarms to robustly convey swarm-level decisions, status, and intentions to humans using multiple individual and group modalities. The gesture language allows humans to select and address single and multiple robots from a swarm, provide commands to perform tasks, specify spatial directions and application-specific parameters, and build iconic grammar-based sentences by combining individual gesture commands. Swarms convey different types of multi-modal feedback to humans using on-board lights, sounds, and locally coordinated robot movements. The swarm-to-human feedback conveys to humans the swarm's understanding of the recognized commands, allows swarms to assess their decisions (i.e., to correct mistakes made by humans in providing instructions and errors made by swarms in recognizing commands), and guides humans through the interaction process. The second contribution of this research addresses swarm-level sensing and classification: How can robot swarms collectively sense and recognize hand gestures given as visual signals by humans? Distributed sensing, cooperative recognition, and decision-making mechanisms have been developed to allow robot swarms to collectively recognize visual instructions and commands given by humans in the form of gestures. These mechanisms rely on decentralized data fusion strategies and multi-hop message-passing algorithms to robustly build swarm-level consensus decisions. Measures have been introduced in the cooperative recognition protocol that provide a trade-off between the accuracy of swarm-level consensus decisions and the time taken to build them. The third contribution of this research addresses swarm-level cooperation: How can humans select spatially distributed robots from a swarm, and how can the robots understand that they have been selected? How can robot swarms be spatially deployed for proximal interaction with humans? With the introduction of spatially-addressed instructions (pointing gestures), humans can robustly address and select spatially-situated individuals and groups of robots from a swarm. A cascaded classification scheme is adopted in which the robot swarm first identifies the selection command (e.g., individual or group selection) and the robots then coordinate with each other to determine whether they have been selected. To obtain better views of gestures issued by humans, distributed mobility strategies have been introduced for the coordinated deployment of heterogeneous robot swarms (i.e., ground and flying robots) and for reshaping the spatial distribution of swarms. The fourth contribution of this research addresses the notion of collective learning in robot swarms. The questions answered include: How can robot swarms learn the hand gestures given by human operators? How can humans be included in the loop of swarm learning? How can robot swarms cooperatively learn as a team? Online incremental learning algorithms have been developed that allow robot swarms to learn individual gestures and grammar-based gesture sentences supervised by human instructors in real time. Humans provide different types of feedback (i.e., full or partial feedback) to swarms for improving swarm-level learning. To speed up the learning rate of robot swarms, cooperative learning strategies have been introduced which enable individual robots in a swarm to intelligently select locally sensed information and share (exchange) selected information with other robots in the swarm. The final contribution is a systemic one: it aims at building a complete HSI system for potential use in real-world applications by integrating the algorithms, techniques, mechanisms, and strategies discussed above. The effectiveness of the global HSI system is demonstrated in a number of interactive scenarios using emulation tests (i.e., simulations using gesture images acquired by a heterogeneous robotic swarm) and through experiments with real ground and flying robots.
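    As a rough illustration of the cooperative recognition step described above, the sketch below fuses per-robot gesture classifications into a swarm-level consensus by iteratively averaging belief vectors over the communication graph. The function name, topology, and confidence threshold are illustrative assumptions, not the thesis' actual protocol.

```python
# Hypothetical sketch of decentralized opinion fusion for swarm-level
# gesture recognition. All names and parameter values are assumptions.
import numpy as np

def fuse_beliefs(local_beliefs, adjacency, confidence_threshold=0.5,
                 max_rounds=50):
    """Iteratively average class-probability vectors over a communication
    graph until one class exceeds the confidence threshold on every robot.

    local_beliefs: (n_robots, n_classes) per-robot classifier outputs.
    adjacency:     (n_robots, n_robots) 0/1 communication graph.
    Returns the consensus class index and the number of rounds used.
    """
    beliefs = np.asarray(local_beliefs, dtype=float)
    # Row-normalized averaging weights: each robot mixes itself + neighbours.
    weights = adjacency + np.eye(adjacency.shape[0])
    weights = weights / weights.sum(axis=1, keepdims=True)
    for round_ in range(1, max_rounds + 1):
        beliefs = weights @ beliefs          # one hop of message passing
        if (beliefs.max(axis=1) > confidence_threshold).all():
            break
    return int(np.argmax(beliefs.mean(axis=0))), round_

# Toy usage: 4 robots, 3 gesture classes, line communication topology.
A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]])
obs = np.array([[0.7,0.2,0.1],[0.6,0.3,0.1],[0.2,0.5,0.3],[0.8,0.1,0.1]])
print(fuse_beliefs(obs, A))
```

    Raising the confidence threshold in such a scheme buys accuracy at the cost of more message-passing rounds, mirroring the accuracy versus decision-time trade-off mentioned above.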

    Brain Computer Interfaces for the Control of Robotic Swarms

    A robotic swarm can be defined as a large group of inexpensive, interchangeable robots with limited sensing and/or actuating capabilities that cooperate (explicitly or implicitly) based on local communication and sensing in order to complete a mission. Its inherent redundancy provides flexibility and robustness to failures and environmental disturbances, which guarantees the proper completion of the required task. At the same time, human intuition and cognition can prove very useful in extreme situations where a fast and reliable solution is needed. This idea led to the creation of the field of Human-Swarm Interfaces (HSI), which attempts to incorporate the human element into the control of robotic swarms for increased robustness and reliability. The aim of the present work is to extend the current state of the art in HSI by applying ideas and principles from the field of Brain-Computer Interfaces (BCI), which has proven to be very useful for people with motor disabilities. First, a preliminary investigation of the connection between brain activity and the observation of swarm collective behaviors is conducted. After showing that such a connection may exist, a hybrid BCI system is presented for the control of a swarm of quadrotors. The system is based on the combination of motor imagery and the input from a game controller, and its feasibility is demonstrated through an extensive experimental process. Finally, speech imagery is proposed as an alternative mental task for BCI applications, supported by a series of rigorous experiments and appropriate data analysis. This work suggests that the integration of BCI principles in HSI applications can be successful and can potentially lead to systems that are more intuitive for users than the current state of the art. At the same time, it motivates further research in the area and lays the stepping stones for the potential development of the field of Brain-Swarm Interfaces (BSI).
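    To make the motor-imagery component concrete, here is a minimal sketch of a classical pipeline (mu-band filtering, log-variance features, LDA) blended with a game-controller axis. The band, gains, and fusion rule are illustrative assumptions rather than the system actually evaluated in the thesis.

```python
# Minimal, hypothetical sketch of the motor-imagery half of a hybrid BCI.
# All parameter values are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def mu_band_features(trials, fs=256.0, band=(8.0, 12.0)):
    """trials: (n_trials, n_channels, n_samples) raw EEG epochs."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.log(filtered.var(axis=-1))       # (n_trials, n_channels)

# Train on synthetic data standing in for labelled calibration trials.
rng = np.random.default_rng(0)
X = mu_band_features(rng.standard_normal((40, 8, 512)))
y = rng.integers(0, 2, size=40)                # 0 = left, 1 = right imagery
clf = LinearDiscriminantAnalysis().fit(X, y)

def hybrid_command(eeg_epoch, controller_axis, w_bci=0.5):
    """Blend the BCI left/right decision with a game-controller axis in
    [-1, 1]; positive output would steer the quadrotor swarm right."""
    p_right = clf.predict_proba(mu_band_features(eeg_epoch[None]))[0, 1]
    bci_cmd = 2.0 * p_right - 1.0              # map [0, 1] -> [-1, 1]
    return w_bci * bci_cmd + (1.0 - w_bci) * controller_axis

print(hybrid_command(rng.standard_normal((8, 512)), controller_axis=0.3))
```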

    Autonomous Recognition of Collective Motion Behaviours in Robot Swarms from Vision Data Using Deep Neural Networks

    The study of natural swarms and the attempt to replicate their behaviours in artificial systems have been an active area of research for many years. The complexity of such systems, arising from simple interactions of many similar units, is fascinating and has inspired researchers from various disciplines to study and understand the underlying mechanisms. In robotics, implementing swarm behaviours in embodied agents (robots) is challenging due to the need to design simple rules for interaction between individual robots that can lead to complex collective behaviours. Every new behaviour designed needs to be manually tuned to function well on any given robotic platform. While it is relatively easy to design rule-based systems that can display structured collective behaviour (such as collective motion or grouping), computers still need to recognise such behaviour when it occurs. Recognition of swarm behaviour is useful in at least two cases. In Case 1, it permits a party to recognise a swarm controlled by another party in an adversarial interaction. In Case 2, it permits a machine to develop collective behaviours autonomously by recognising when desirable behaviour emerges. Existing work has examined collective behaviour recognition using feature-based data describing a swarm. However, this may not be feasible in Case 1 if feature-based data is not available for an adversarial swarm. This thesis proposes deep neural network approaches to recognising collective behaviour from video data. The work contributes four datasets comprising examples of both collective flocking behaviour and random behaviour in groups of Pioneer 3DX robots. The first dataset captures the behaviours from the perspective of a top-down video to address Case 1. The second and third datasets capture the behaviours from the perspective of forward-facing cameras on each robot as an approach to Case 2. The fourth dataset captures the behaviours using spherical cameras, also contributing to Case 2. We also make use of feature-based data describing the same behaviours for comparative purposes. This thesis contributes the design of a deep neural network appropriate for learning to recognise collective behaviour from video data. We compare the performance of this network to that of a shallow network trained on feature-based data in terms of distinguishing collective from random motion and distinguishing various grouping parameters of collective behaviour. Results show that video data can be as accurate as feature-based data for distinguishing flocking collective motion from random motion. We also present a case study showing that our approach to the recognition of collective motion can transfer from simulated robots to real robots.
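    As a sketch of what recognising collective behaviour from video can look like, the snippet below defines a small frame-level CNN whose per-frame embeddings are average-pooled over time to label a clip as flocking or random. The architecture, layer sizes, and shapes are illustrative assumptions, not the network designed in the thesis.

```python
# Illustrative clip classifier: frame CNN + temporal average pooling.
import torch
import torch.nn as nn

class ClipClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # (B*T, 32, 1, 1)
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, clips):
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        feats = self.frame_cnn(clips.reshape(b * t, c, h, w)).flatten(1)
        feats = feats.reshape(b, t, -1).mean(dim=1)   # temporal pooling
        return self.head(feats)

# Toy usage: 2 clips of 8 RGB frames at 64x64 pixels.
logits = ClipClassifier()(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)   # torch.Size([2, 2])
```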

    Study of artificial intelligence and computer vision methods for tracking transmission lines with the aid of UAVs

    Currently, Unmanned Aerial Vehicles (UAVs) are used in the most diverse applications in both the civil and military sectors. In the civil sector, aerial inspection services have been gaining a lot of attention, especially in the case of inspections of transmission lines of high-voltage electrical systems. This type of inspection involves a helicopter carrying three or more people (technicians, pilot, etc.) flying over the transmission line along its entire length, a dangerous service, especially due to the proximity of the transmission line and possible environmental conditions (wind gusts, for example). In this context, the use of UAVs has attracted considerable interest due to their low cost and the safety they afford transmission line inspection technicians. This work presents research results related to the application of UAVs for autonomous transmission line inspection, allowing the identification of intrusions into the transmission line area as well as possible defects in components (cables, insulators, connections, etc.) through the use of Convolutional Neural Networks (CNNs) for fault detection and identification. This thesis proposes the development of an autonomous system to track power transmission lines using UAVs efficiently and with low implementation and operation costs, based exclusively on real-time image processing that identifies the structure of the towers and transmission lines during the flight and controls the aircraft's movements, guiding it along the closest possible path. A summary of the work developed is presented in the next sections.
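    As a hedged sketch of the kind of real-time image-processing step described above, the snippet below extracts the dominant line with a Canny edge detector and a probabilistic Hough transform and converts its image offset and angle into steering corrections. The gains and thresholds are illustrative assumptions, not the controller developed in this work.

```python
# Hypothetical line-following step for a UAV camera frame.
import cv2
import numpy as np

def line_tracking_command(frame_bgr, k_offset=0.004, k_angle=1.0):
    """Return (yaw_cmd, lateral_cmd) that keep the strongest detected
    line centered and vertical in the image; None if no line is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    if lines is None:
        return None
    # Follow the longest detected segment.
    x1, y1, x2, y2 = max(lines[:, 0],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    angle = np.arctan2(x2 - x1, y2 - y1)        # 0 when line is vertical
    offset = (x1 + x2) / 2 - frame_bgr.shape[1] / 2
    return k_angle * angle, k_offset * offset
```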

    On the Increase of Background Seismicity Rate during the 1997-1998 Umbria-Marche, Central Italy, Sequence: Apparent Variation or Fluid-Driven Triggering?

    We investigate the temporal evolution of the background seismicity rate in the Umbria-Marche sector of the northern Apennines, which was struck by the 1997-98 Colfiorito seismic sequence. Specifically, we apply the ETAS model to separate the background seismicity rate from the coseismically triggered rate of earthquake production. The analyzed data are extracted from the CSI1.1 catalog of Italian seismicity (1981-2002), which contains 12,163 events with ML > 1.5 for the study area. The capability of the ETAS model to match the observed seismicity rate is tested by analyzing the model residuals and by applying two non-parametric statistical tests (the RUNS and the Kolmogorov-Smirnov tests) to verify the fit of the residuals to the Poisson hypothesis. We first apply the ETAS model to the seismicity that occurred in the study area during the whole period covered by the CSI1.1 catalog. Our results show that the ETAS model does not explain the temporal evolution of seismicity in a time interval defined by change points identified from the time evolution of residuals and encompassing the Colfiorito seismic sequence. We therefore restrict our analysis to this period and analyze only those events belonging to the 1997-1998 seismic sequence. We again find that a stationary ETAS model with constant background rate is inadequate to reproduce the temporal pattern of the observed seismicity. We verify that the failure of the ETAS model to fit the observed data is caused by the increase of the background seismicity rate associated with the repeated Colfiorito main shocks. We interpret the inferred increase of the background rate as a consequence of the perturbation of the coseismic stress field caused by fluid flow and/or pore pressure relaxation. In particular, we show that the transient perturbation caused by poroelastic relaxation can explain the temporal increase of the background rate, which therefore represents a fluid signal in the seismicity pattern.
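    For reference, the temporal ETAS model discussed above is usually written with a conditional intensity of the following Ogata-type form; the notation here is the standard one and may differ from the paper's exact parameterization:

```latex
% Standard temporal ETAS conditional intensity: a constant background
% rate \mu plus modified-Omori aftershock triggering from past events.
\lambda(t \mid \mathcal{H}_t) = \mu + \sum_{i:\, t_i < t}
    \frac{K \, e^{\alpha (M_i - M_c)}}{(t - t_i + c)^{p}}
```

    In these terms, the reported failure of the stationary fit corresponds to the constant background rate \mu having to be replaced by a time-varying \mu(t) during the sequence.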

    HEURISTIC OPTIMIZATION OF BAT ALGORITHM FOR HETEROGENEOUS SWARMS USING PERCEPTION

    In swarm robotics, a group of robots coordinate with each other to solve a problem. Swarm systems can be heterogeneous or homogeneous. Heterogeneous swarms consist of multiple types of robots, as opposed to homogeneous swarms, which are made up of identical robots. There are cases where a heterogeneous swarm system may consist of multiple homogeneous swarm systems. Swarm robots can be used for a variety of applications and are mainly used in applications involving the exploration of unknown environments. Swarm systems are dynamic and intelligent. Swarm intelligence is inspired by naturally occurring swarm systems such as ant colonies, bee hives, or bats. The Bat Algorithm is a population-based meta-heuristic algorithm for solving continuous optimization problems. In this paper, we study the advantages of fusing the meta-heuristic Bat Algorithm with heuristic optimization. We have implemented the meta-heuristic Bat Algorithm and tested it on a heterogeneous swarm. The same swarm has also been tested by segregating it into different homogeneous swarms by subjecting the heterogeneous swarm to a heuristic optimization.
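    For context, the snippet below is a minimal sketch of the standard meta-heuristic Bat Algorithm on a continuous objective (frequency-tuned velocity updates, a local random walk, and loudness/pulse-rate adaptation). The perception-based heuristic segregation layer studied in the paper is not reproduced, and all constants are illustrative assumptions.

```python
# Minimal sketch of the standard Bat Algorithm; constants are illustrative.
import numpy as np

def bat_algorithm(objective, dim=2, n_bats=20, iters=200,
                  f_range=(0.0, 2.0), alpha=0.9, gamma=0.9, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_bats, dim))      # bat positions
    v = np.zeros((n_bats, dim))                # bat velocities
    loudness = np.ones(n_bats)
    pulse_rate = np.zeros(n_bats)
    fitness = np.apply_along_axis(objective, 1, x)
    best = x[fitness.argmin()].copy()
    for t in range(1, iters + 1):
        # Frequency-tuned velocity update toward the current best.
        freq = f_range[0] + (f_range[1] - f_range[0]) * rng.random(n_bats)
        v += (x - best) * freq[:, None]
        candidates = x + v
        # Local random walk around the best solution for "loud" bats.
        walk = rng.random(n_bats) > pulse_rate
        candidates[walk] = best + 0.01 * loudness.mean() * \
            rng.standard_normal((walk.sum(), dim))
        cand_fit = np.apply_along_axis(objective, 1, candidates)
        # Accept improvements probabilistically, then adapt loudness/rate.
        accept = (cand_fit < fitness) & (rng.random(n_bats) < loudness)
        x[accept], fitness[accept] = candidates[accept], cand_fit[accept]
        loudness[accept] *= alpha
        pulse_rate[accept] = 1.0 - np.exp(-gamma * t)
        best = x[fitness.argmin()].copy()
    return best, fitness.min()

print(bat_algorithm(lambda p: float((p ** 2).sum())))   # sphere function
```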

    Cognitive Task Planning for Smart Industrial Robots

    This research work presents a novel Cognitive Task Planning framework for Smart Industrial Robots. The framework makes an industrial mobile manipulator robot cognitive by applying Semantic Web Technologies. It also introduces a novel Navigation Among Movable Obstacles (NAMO) algorithm for robots navigating and manipulating inside a firm. The objective of Industrie 4.0 is the creation of Smart Factories: modular firms provided with cyber-physical systems able to strongly customize products under conditions of highly flexible mass production. Such systems should communicate and cooperate with each other and with humans in real time via the Internet of Things. They should intelligently adapt to changing surroundings and autonomously navigate inside a firm while moving obstacles that occlude free paths, even obstacles seen for the first time. Finally, in order to accomplish all these tasks efficiently, they should learn from their own actions and from those of other agents. Most existing industrial mobile robots navigate along pre-generated trajectories. They follow electrified wires embedded in the ground or lines painted on the floor. When the environment is not expected to change and cycle times are critical, this kind of planning is functional. When workspaces and tasks change frequently, it is better to plan dynamically: robots should autonomously navigate without relying on modifications of their environments. Consider human behavior: humans reason about the environment and consider the possibility of moving obstacles if a certain goal cannot be reached or if moving objects may significantly shorten the path to it. This problem is named Navigation Among Movable Obstacles and is mostly known in rescue robotics. This work transposes the problem to an industrial scenario and deals with its two challenges: the high dimensionality of the state space and the treatment of uncertainty. The proposed NAMO algorithm aims to focus exploration on less explored areas. For this reason it extends the Kinodynamic Motion Planning by Interior-Exterior Cell Exploration (KPIECE) algorithm. The extension does not impose obstacle avoidance: it assigns an importance to each cell by combining the effort necessary to reach it and the effort needed to free it from obstacles, as sketched below. The obtained algorithm is scalable because it is independent of the size of the map and of the number, shape, and pose of obstacles. It does not impose restrictions on the actions to be performed: the robot can both push and grasp every object. Currently, the algorithm assumes full world knowledge, but the environment is reconfigurable and the algorithm can easily be extended to solve NAMO problems in unknown environments. The algorithm handles sensor feedback and corrects for uncertainty. Robotics usually separates Motion Planning and Manipulation problems. NAMO forces their combined processing by introducing the need to manipulate multiple, often unknown, objects while navigating. Adopting standard precomputed grasps is not sufficient to deal with the large variety of existing objects. A Semantic Knowledge Framework is proposed in support of the algorithm, giving robots the ability to learn to manipulate objects and disseminate the information gained during the fulfillment of tasks. The Framework is composed of an Ontology and an Engine. The Ontology extends the IEEE Standard Ontologies for Robotics and Automation and contains descriptions of learned manipulation tasks and detected objects. It is accessible from any robot connected to the Cloud. It can be considered a data store for the efficient and reliable execution of repetitive tasks and a Web-based repository for the exchange of information between robots and for speeding up the learning phase. No other manipulation ontology respecting the IEEE Standard exists and, standard aside, the proposed ontology differs from existing ones in the type of features saved and the efficient way in which they can be accessed: through a fast Cascade Hashing algorithm. The Engine computes and stores manipulation actions when they are not present in the Ontology. It is based on Reinforcement Learning techniques that avoid massive training on large-scale databases and favor human-robot interaction. The overall system is flexible and easily adaptable to different robots operating in different industrial environments. It is characterized by a modular structure where each software block is completely reusable. Every block is based on the open-source Robot Operating System (ROS). Not all industrial robot controllers are designed to be ROS-compliant; this thesis also presents the method adopted during this research to open industrial robot controllers and create a ROS-Industrial interface for them.
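    As a rough sketch of the cell-importance idea at the heart of the proposed NAMO algorithm, the snippet below scores grid cells by combining visit counts with the effort to reach a cell and the effort to clear it of movable obstacles. The fields and weighting are illustrative assumptions, not the thesis' exact KPIECE extension.

```python
# Hypothetical cell-importance scoring for exploration in NAMO planning.
from dataclasses import dataclass

@dataclass
class Cell:
    visits: int            # times the planner has expanded into this cell
    reach_effort: float    # accumulated motion cost to reach the cell
    clear_effort: float    # estimated cost to move obstacles out of it

def importance(cell, w_reach=1.0, w_clear=1.0):
    """Higher score = more promising to explore next: rarely visited,
    cheap-to-reach, cheap-to-clear cells are preferred."""
    effort = w_reach * cell.reach_effort + w_clear * cell.clear_effort
    return 1.0 / ((1.0 + cell.visits) * (1.0 + effort))

# Toy usage: pick the next cell to expand from a small frontier.
frontier = [Cell(0, 4.0, 2.0), Cell(3, 1.0, 0.0), Cell(1, 2.0, 5.0)]
print(max(frontier, key=importance))
```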