
    Symbiotic interaction between humans and robot swarms

    Comprising a potentially large team of autonomous, cooperative robots that interact and communicate locally with each other, robot swarms provide a natural diversity of parallel and distributed functionalities, high flexibility, potential for redundancy, and fault tolerance. The use of autonomous mobile robots is expected to increase in the future, and swarm robotic systems are envisioned to play important roles in tasks such as search and rescue (SAR) missions, transportation of objects, surveillance, and reconnaissance operations. To robustly deploy robot swarms in the field alongside humans, this research addresses fundamental problems in the relatively new field of human-swarm interaction (HSI). Four core classes of problems are addressed for proximal interaction between humans and robot swarms: interaction and communication; swarm-level sensing and classification; swarm coordination; and swarm-level learning. The primary contribution of this research is a bidirectional human-swarm communication system for non-verbal interaction between humans and heterogeneous robot swarms, with SAR missions as the guiding field of application. The core challenges and issues in HSI include: How can human operators interact and communicate with robot swarms? Which interaction modalities can humans use? How can human operators instruct and command robots from a swarm? Which mechanisms can robot swarms use to convey feedback to human operators? Which types of feedback can swarms convey to humans? To start answering these questions, hand gestures were chosen as the interaction modality for humans, since gestures are simple to use, easily recognized, and possess spatial-addressing properties. To facilitate bidirectional interaction and communication, a dialogue-based interaction system is introduced which consists of (i) a grammar-based gesture language with a vocabulary of non-verbal commands that allows humans to efficiently provide mission instructions to swarms, and (ii) a swarm-coordinated multi-modal feedback language that enables robot swarms to robustly convey swarm-level decisions, status, and intentions to humans using multiple individual and group modalities. The gesture language allows humans to select and address single and multiple robots from a swarm, provide commands to perform tasks, specify spatial directions and application-specific parameters, and build iconic grammar-based sentences by combining individual gesture commands. Swarms convey different types of multi-modal feedback to humans using on-board lights, sounds, and locally coordinated robot movements. The swarm-to-human feedback conveys the swarm's understanding of the recognized commands, allows swarms to assess their decisions (i.e., to correct mistakes made by humans in providing instructions and errors made by swarms in recognizing commands), and guides humans through the interaction process. The second contribution of this research addresses swarm-level sensing and classification: How can robot swarms collectively sense and recognize hand gestures given as visual signals by humans? Distributed sensing, cooperative recognition, and decision-making mechanisms have been developed to allow robot swarms to collectively recognize visual instructions and commands given by humans in the form of gestures. These mechanisms rely on decentralized data fusion strategies and multi-hop message-passing algorithms to robustly build swarm-level consensus decisions.
Measures introduced in the cooperative recognition protocol provide a trade-off between the accuracy of swarm-level consensus decisions and the time taken to reach them. The third contribution of this research addresses swarm coordination: How can humans select spatially distributed robots from a swarm, and how can the robots understand that they have been selected? How can robot swarms be spatially deployed for proximal interaction with humans? With the introduction of spatially-addressed instructions (pointing gestures), humans can robustly address and select spatially-situated individuals and groups of robots from a swarm. A cascaded classification scheme is adopted in which the robot swarm first identifies the selection command (e.g., individual or group selection), and the robots then coordinate with each other to determine whether they have been selected. To obtain better views of gestures issued by humans, distributed mobility strategies have been introduced for the coordinated deployment of heterogeneous robot swarms (i.e., ground and flying robots) and for reshaping the spatial distribution of swarms. The fourth contribution of this research addresses the notion of collective learning in robot swarms. The questions answered include: How can robot swarms learn the hand gestures given by human operators? How can humans be included in the loop of swarm learning? How can robot swarms cooperatively learn as a team? Online incremental learning algorithms have been developed which allow robot swarms to learn individual gestures and grammar-based gesture sentences in real time, supervised by human instructors. Humans provide different types of feedback (i.e., full or partial feedback) to swarms to improve swarm-level learning. To speed up the learning rate of robot swarms, cooperative learning strategies have been introduced which enable individual robots in a swarm to intelligently select locally sensed information and share (exchange) the selected information with other robots in the swarm. The final contribution is a systemic one: it builds a complete HSI system, towards potential use in real-world applications, by integrating the algorithms, techniques, mechanisms, and strategies discussed above. The effectiveness of the overall HSI system is demonstrated in a number of interactive scenarios using emulation tests (i.e., simulations using gesture images acquired by a heterogeneous robotic swarm) and experiments with real ground and flying robots.
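    The abstract does not detail the fusion protocol, but the general idea can be pictured as each robot classifying the gesture locally, relaying votes over multiple hops, and deciding by majority once enough votes have arrived. The sketch below is a minimal, hypothetical illustration of that pattern; the class, the plain majority vote, and the gossip-style exchange are assumptions for illustration, not the thesis implementation.

```python
# Hypothetical sketch of decentralized gesture-label fusion over multi-hop
# messaging; data structures and the plain majority vote are assumptions made
# for illustration, not the protocol developed in the thesis.
from collections import Counter

class SwarmRobot:
    def __init__(self, robot_id, neighbors=None):
        self.robot_id = robot_id
        self.neighbors = neighbors or []   # robots within communication range
        self.votes = {}                    # robot_id -> locally recognized gesture label

    def sense(self, local_label):
        """Record this robot's own classification of the observed gesture."""
        self.votes[self.robot_id] = local_label

    def exchange(self):
        """One gossip round: forward every known vote to direct neighbors,
        so labels propagate through the swarm hop by hop."""
        for neighbor in self.neighbors:
            neighbor.votes.update(self.votes)

    def consensus(self, min_votes):
        """Return a swarm-level decision once enough votes have been gathered,
        or None to keep waiting (accuracy vs. decision-time trade-off)."""
        if len(self.votes) < min_votes:
            return None
        label, _ = Counter(self.votes.values()).most_common(1)[0]
        return label
```

    Running more exchange rounds before calling consensus gathers votes from robots farther away, which is one simple way to trade decision time for accuracy, mirroring the trade-off mentioned above.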

    Personalized robot assistant for support in dressing

    Robot-assisted dressing is performed in close physical interaction with users who may have a wide range of physical characteristics and abilities. The design of user-adaptive and personalized robots in this context still shows limited or no consideration of specific user-related issues. This paper describes the development of a multi-modal robotic system for a specific dressing scenario - putting on a shoe - where users' personalized inputs contribute to a much improved task success rate. We have developed: 1) user tracking, gesture recognition, and posture recognition algorithms relying on images provided by a depth camera; 2) a shoe recognition algorithm using RGB and depth images; and 3) speech recognition and text-to-speech algorithms that allow verbal interaction between the robot and the user. The interaction is further enhanced by calibrated recognition of the users' pointing gestures and an adjusted shoe delivery position for the robot. A series of shoe-fitting experiments was performed on two groups of users, with and without prior robot personalization, to assess how personalization affects interaction performance. Our results show that the shoe-fitting task with the personalized robot is completed in a shorter time, with a smaller number of user commands and reduced workload.
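    Purely as an illustration of the kind of geometric step the calibrated pointing-gesture recognition could involve, one might intersect the user's shoulder-to-hand ray (from depth-camera skeleton data) with the floor plane to obtain a delivery target. The function below is an assumption made for illustration, not the authors' method.

```python
# Hypothetical geometric sketch: estimate a shoe delivery point by intersecting
# the user's pointing ray (shoulder-to-hand direction) with the floor plane
# z = 0. Illustrative only; not the paper's calibration procedure.
import numpy as np

def delivery_point(shoulder, hand, floor_z=0.0):
    """Intersect the pointing ray with the floor and return the target point."""
    shoulder, hand = np.asarray(shoulder, float), np.asarray(hand, float)
    direction = hand - shoulder
    if abs(direction[2]) < 1e-6:                 # ray parallel to the floor
        return None
    t = (floor_z - shoulder[2]) / direction[2]   # ray parameter at the floor plane
    return shoulder + t * direction if t > 0 else None

# Example: shoulder at 1.4 m height, hand at 1.0 m, pointing forward and down.
print(delivery_point([0.0, 0.0, 1.4], [0.3, 0.2, 1.0]))
```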

    Intelligence for Human-Assistant Planetary Surface Robots

    The central premise in developing effective human-assistant planetary surface robots is that robotic intelligence is needed. The exact type, method, form, and quantity of intelligence remains an open issue being explored on the ERA project, as well as on others. In addition to field testing, theoretical research in this area can help provide answers on how to design future planetary robots. Many fundamental intelligence issues are discussed by Murphy [2], including (a) learning, (b) planning, (c) reasoning, (d) problem solving, (e) knowledge representation, and (f) computer vision (stereo tracking, gestures). The new "social interaction/emotional" form of intelligence that some consider critical to Human-Robot Interaction (HRI) can also be addressed by human-assistant planetary surface robots, as human operators feel more comfortable working with a robot when the robot is verbally (or even physically) interacting with them. Arkin [3] and Murphy are both proponents of the hybrid deliberative-reasoning/reactive-execution architecture as the best general architecture for fully realizing robot potential, and the robots discussed herein implement a design continuously progressing toward this hybrid philosophy. The remainder of this chapter describes the challenges associated with robotic assistance to astronauts, our general research approach, the intelligence incorporated into our robots, and the results and lessons learned from over six years of testing human-assistant mobile robots in field settings relevant to planetary exploration. The chapter concludes with some key considerations for future work in this area.
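    To make the hybrid deliberative/reactive idea concrete, a toy control cycle might pair a deliberative waypoint planner with a reactive safety override. All names below are hypothetical illustrations and do not describe the ERA robots' actual software.

```python
# Minimal, hypothetical sketch of the hybrid deliberative/reactive idea: a slow
# deliberative layer follows a plan while a fast reactive layer can veto unsafe
# commands. Names are illustrative only.
class WaypointPlanner:
    """Deliberative layer: step through a precomputed list of waypoints."""
    def __init__(self, waypoints):
        self.waypoints = list(waypoints)

    def next_target(self, position):
        return self.waypoints[0] if self.waypoints else position

class StopNearObstacle:
    """Reactive layer: hold the current position when an obstacle is too close."""
    def __init__(self, min_clearance):
        self.min_clearance = min_clearance

    def filter(self, position, obstacle_distance, target):
        return position if obstacle_distance < self.min_clearance else target

def hybrid_step(planner, safety, position, obstacle_distance):
    """One control cycle: deliberate first, then let the reactive layer override."""
    target = planner.next_target(position)
    return safety.filter(position, obstacle_distance, target)

# Example: the planner wants (5, 0), but an obstacle 0.3 m away triggers a stop.
print(hybrid_step(WaypointPlanner([(5, 0)]), StopNearObstacle(0.5), (0, 0), 0.3))
```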

    Telescience Testbed Pilot Program

    The Telescience Testbed Pilot Program is developing initial recommendations for requirements and design approaches for the information systems of the Space Station era. During this quarter, drafting of the final reports of the various participants was initiated. Several drafts are included in this report as the university technical reports.

    Modeling and control of UAV bearing formations with bilateral high-level steering

    In this paper we address the problem of controlling the motion of a group of unmanned aerial vehicles (UAVs) bound to keep a formation defined in terms of only relative angles (i.e., a bearing formation). This problem naturally arises in several multi-robot applications such as exploration, coverage, and surveillance. First, we introduce and thoroughly analyze the concept and properties of bearing formations, and provide a class of minimally linear sets of bearings sufficient to uniquely define such formations. We then propose a bearing-only formation controller requiring only bearing measurements, converging almost globally, and maintaining bounded inter-agent distances despite the lack of direct metric information. The controller still leaves the possibility of imposing group motions tangent to the current bearing formation. These can be either autonomously chosen by the robots because of any additional task (e.g., exploration), or exploited by an assisting human co-operator. For this latter 'human-in-the-loop' case, we propose a multi-master/multi-slave bilateral shared-control system providing the co-operator with suitable force cues informative of the UAV performance. The proposed theoretical framework is extensively validated by means of simulations and experiments with quadrotor UAVs equipped with onboard cameras. Practical limitations, e.g., a limited field of view, are also considered. © The Author(s) 2012
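    For readers unfamiliar with the terminology, a bearing is simply the unit vector from one agent to another, and a common generic way to act on bearing errors is through the orthogonal projector I - bb^T. The sketch below only illustrates these two ingredients; it is not the controller proposed in the paper, which additionally guarantees bounded inter-agent distances and almost-global convergence.

```python
# Illustrative-only sketch of a bearing measurement and a projection-based
# velocity cue toward a desired bearing; NOT the paper's controller.
import numpy as np

def bearing(p_i, p_j):
    """Unit vector from agent i to agent j (the 'bearing' the UAVs measure)."""
    d = np.asarray(p_j, float) - np.asarray(p_i, float)
    return d / np.linalg.norm(d)

def bearing_error_velocity(p_i, p_j, desired_bearing):
    """Velocity cue for agent i that rotates the measured bearing toward the
    desired one using the orthogonal projector I - b b^T (scale-invariant)."""
    b = bearing(p_i, p_j)
    projector = np.eye(3) - np.outer(b, b)
    return -projector @ np.asarray(desired_bearing, float)

# Example: neighbor straight ahead on x, desired bearing rotated 45 deg in x-y.
print(bearing_error_velocity([0, 0, 0], [1, 0, 0], [0.707, 0.707, 0.0]))
```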

    Control of robot swarms through natural language dialogue: A case study on monitoring fires

    There are numerous environmental and non-environmental disasters happening throughout the world, representing a great danger to ordinary people, community helpers, and to fauna and flora. Developing a program capable of controlling swarms of robots using natural language processing (NLP) and, later on, a speech-to-text system will enable a more mobile solution, with no need for a keyboard and mouse or a mobile device to operate the robots. A well-developed NLP system will allow the program to understand natural language-based interactions, making the system usable in different contexts. In firefighting, the use of robots, more specifically drones, enables new ways to obtain reliable information that was previously based on guesses or on the knowledge of someone with long field experience. Using a swarm of robots to monitor a fire offers numerous advantages, from the creation of a dynamic fire map and climate information inside the fire to finding firefighters lost in the field through the generated map. This work uses firefighting as a case study, but other situations can be considered, such as searching for someone at sea or searching for toxins in an open environment.
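    As a toy illustration only, mapping a plain-English order to a swarm command could look like the keyword matcher below; the dissertation builds a full NLP pipeline (and later speech-to-text), so the action names and parsing here are assumptions, not its design.

```python
# Toy, keyword-based sketch of turning a natural-language instruction into a
# swarm command; illustrative assumption only, not the dissertation's NLP system.
import re

ACTIONS = {
    "monitor": "MONITOR_FIRE",
    "map": "BUILD_FIRE_MAP",
    "search": "SEARCH_AREA",
    "return": "RETURN_TO_BASE",
}

def parse_command(utterance):
    """Return (action code, number of drones) extracted from a plain-English order."""
    action = next((code for word, code in ACTIONS.items() if word in utterance.lower()), None)
    count = re.search(r"\b(\d+)\b", utterance)
    return action, int(count.group(1)) if count else None

# Example: ('MONITOR_FIRE', 4)
print(parse_command("Send 4 drones to monitor the fire front"))
```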

    Gaze-Based Control of Robot Arm in Three-Dimensional Space

    Eye tracking technology has opened up a new communication channel for people with very restricted body movements. These devices have already been successfully applied as human-computer interfaces, e.g., for writing text or for controlling devices such as a wheelchair. This thesis proposes a Human-Robot Interface (HRI) that enables the user to control a robot arm in three-dimensional space using only the two-dimensional gaze direction and the states of the eyes. The introduced interface provides all required commands to translate, rotate, and open or close the gripper through the definition of different control modes. In each mode, different commands are provided, and the user's gaze direction is applied directly to generate continuous robot commands. To distinguish between natural inspection eye movements and eye movements intended to control the robot arm, dynamic command areas are proposed. The dynamic command areas are defined around the robot gripper and are updated with its movements. To provide direct interaction for the user, gaze gestures and states of the eyes are used to switch between different control modes. For the purpose of this thesis, two versions of the above-introduced HRI were developed. In the first version of the HRI, only two simple gaze gestures and two states of the eyes (closed eyes and eye winking) are used for switching. In the second version, four complex gaze gestures were applied instead of the two simple gestures, and the positions of the dynamic command areas were optimized. The complex gaze gestures enable the user to switch directly from the initial mode to the desired control mode. These gestures are flexible and can be generated directly in the robot environment. For the recognition of complex gaze gestures, a novel algorithm based on Dynamic Time Warping (DTW) is proposed. The results of the studies conducted with both HRIs confirmed their feasibility and showed the high potential of the proposed interfaces as hands-free interfaces. Furthermore, the results of subjective and objective measurements showed that the usability of the interface with simple gaze gestures was improved by the integration of complex gaze gestures and the new positions of the dynamic command areas.
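    The thesis proposes a DTW-based recognizer for complex gaze gestures. As a minimal sketch of the underlying idea, assuming gaze gestures are recorded as sequences of 2-D gaze points, a generic DTW distance with nearest-template classification could look as follows (this is not the author's algorithm):

```python
# Generic Dynamic Time Warping (DTW) between two gaze-point sequences, plus
# nearest-template classification; illustrative assumption, not the thesis code.
import math

def dtw_distance(seq_a, seq_b):
    """Align two 2-D gaze-point sequences and return their cumulative cost."""
    n, m = len(seq_a), len(seq_b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(seq_a[i - 1], seq_b[j - 1])   # Euclidean point distance
            cost[i][j] = d + min(cost[i - 1][j],        # insertion
                                 cost[i][j - 1],        # deletion
                                 cost[i - 1][j - 1])    # match
    return cost[n][m]

def recognize_gesture(observed, templates):
    """Pick the template gesture whose DTW distance to the observation is smallest."""
    return min(templates, key=lambda name: dtw_distance(observed, templates[name]))

# Example: classify a short diagonal trace against two toy templates.
templates = {"right": [(0, 0), (1, 0), (2, 0)], "diag": [(0, 0), (1, 1), (2, 2)]}
print(recognize_gesture([(0, 0), (1, 1), (2, 1.8)], templates))
```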

    A survey on robotic technologies for forest firefighting: Applying drone swarms to improve firefighters’ efficiency and safety

    Forest firefighting missions encompass multiple tasks related to prevention, surveillance, and extinguishing. This work presents a complete survey of firefighters on the current problems in their work and on potential technological solutions. Additionally, it reviews the efforts by academia and industry to apply different types of robots in the context of firefighting missions. Finally, all this information is used to propose a concept of operations for the comprehensive application of drone swarms in firefighting. The proposed system is a fleet of quadcopters that individually are only able to visit waypoints and use payloads, but collectively can perform tasks such as surveillance, mapping, and monitoring. Three operator roles are defined, each with different access to information and functions in the mission: mission commander, team leaders, and team members. These operators take advantage of virtual and augmented reality interfaces to intuitively obtain information about the scenario and, in the case of the mission commander, to control the drone swarm. This research received no external funding.
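    A minimal, hypothetical data-model sketch of the per-drone tasking and the three operator roles described above is given below; the field and role names are illustrative assumptions, not the proposed system's design.

```python
# Hypothetical data model: drones that only visit waypoints and carry a payload,
# and operators whose role limits what they may do. Illustrative only.
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    MISSION_COMMANDER = "mission_commander"   # full picture, may re-task the swarm
    TEAM_LEADER = "team_leader"               # sees and requests support for the team
    TEAM_MEMBER = "team_member"               # receives alerts and local map updates

@dataclass
class Drone:
    drone_id: str
    waypoints: list = field(default_factory=list)   # individually: just visit waypoints
    payload: str = "camera"                          # e.g. camera, thermal sensor

@dataclass
class Operator:
    name: str
    role: Role

    def can_retask_swarm(self) -> bool:
        """Only the mission commander may re-plan the whole swarm."""
        return self.role is Role.MISSION_COMMANDER

# Example: a team leader cannot re-task the swarm.
print(Operator("Silva", Role.TEAM_LEADER).can_retask_swarm())
```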