
    Human robot interaction in a crowded environment

    Human-Robot Interaction (HRI) is the primary means of establishing natural and affective communication between humans and robots. HRI enables robots to act in a way similar to humans in order to assist in activities that are considered laborious, unsafe, or repetitive. Vision-based HRI is a major component of the field, in which visual information is used to interpret how human interaction takes place. Common tasks in HRI include finding pre-trained static or dynamic gestures in an image, which involves localising different key parts of the human body, such as the face and hands. This information is subsequently used to extract different gestures. After the initial detection process, the robot is required to comprehend the underlying meaning of these gestures [3]. Thus far, most gesture recognition systems can only detect gestures and identify a person in relatively static environments. This is not realistic for practical applications, as difficulties may arise from people's movements and changing illumination conditions. Another issue to consider is that of identifying the commanding person in a crowded scene, which is important for interpreting navigation commands. To this end, it is necessary to associate the gesture with the correct person, and automatic reasoning is required to extract the most probable location of the person who has initiated the gesture. In this thesis, we have proposed a practical framework for addressing the above issues. It attempts to achieve a coarse-level understanding of a given environment before engaging in active communication. This includes recognizing the intention for human-robot interaction, i.e., when a person intends to communicate with the robot. In this regard, it is necessary to differentiate whether the people present are engaged with each other or with their surrounding environment. The basic task is to detect and reason about the environmental context and the different interactions so as to respond accordingly. For example, if individuals are engaged in conversation, the robot should realize it is best not to disturb them; if an individual is receptive to the robot's interaction, it may approach the person. Finally, if the user is moving in the environment, it can analyse the scene further to understand whether any help can be offered in assisting this user. The method proposed in this thesis combines multiple visual cues in a Bayesian framework to identify people in a scene and determine their potential intentions. To improve system performance, contextual feedback is used, which allows the Bayesian network to evolve and adjust itself according to the surrounding environment. The results achieved demonstrate the effectiveness of the technique in dealing with human-robot interaction in a relatively crowded environment [7].
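
    As a rough illustration of the kind of cue combination described above, the following is a minimal sketch of naive Bayesian fusion of visual cues to pick the most probable commanding person. The cue set, likelihood values, and names are illustrative assumptions, not the thesis implementation.

```python
# Hypothetical sketch: naive Bayes fusion of visual cues to score which
# person in the scene most likely issued a gesture. The likelihoods are
# assumed values for illustration only.
from dataclasses import dataclass

@dataclass
class PersonObservation:
    face_detected: bool   # frontal face found near this person
    hand_raised: bool     # raised-hand gesture candidate detected
    moving: bool          # person is moving through the scene

# Assumed likelihoods: P(cue | commanding), P(cue | not commanding).
LIKELIHOODS = {
    "face_detected": (0.9, 0.4),
    "hand_raised":   (0.8, 0.1),
    "moving":        (0.3, 0.5),
}

def commanding_posterior(obs: PersonObservation, prior: float = 0.2) -> float:
    """Return P(commanding | cues) under a naive Bayes assumption."""
    p_cmd, p_not = prior, 1.0 - prior
    for cue, (p_if_cmd, p_if_not) in LIKELIHOODS.items():
        seen = getattr(obs, cue)
        p_cmd *= p_if_cmd if seen else 1.0 - p_if_cmd
        p_not *= p_if_not if seen else 1.0 - p_if_not
    return p_cmd / (p_cmd + p_not)

if __name__ == "__main__":
    people = {
        "person_A": PersonObservation(True, True, False),
        "person_B": PersonObservation(True, False, True),
    }
    scores = {name: commanding_posterior(obs) for name, obs in people.items()}
    print(max(scores, key=scores.get), scores)  # person_A scores highest
```

    Contextual feedback, as described in the abstract, would then amount to updating the priors and likelihood tables as the scene evolves rather than keeping them fixed.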

    Spacecraft Dormancy Autonomy Analysis for a Crewed Martian Mission

    Current concepts of operations for human exploration of Mars center on the staged deployment of spacecraft, logistics, and crew. Though most studies focus on the needs for human occupation of the spacecraft and habitats, these resources will spend most of their lifetime unoccupied. As such, it is important to identify the operational state of the unoccupied spacecraft or habitat, as well as to design the systems to enable the appropriate level of autonomy. Key goals for this study include providing a realistic assessment of what "dormancy" entails for human spacecraft, exploring gaps in the state of the art for autonomy in human spacecraft design, providing recommendations for investments in autonomous systems technology development, and developing architectural requirements for spacecraft that must be autonomous during dormant operations. The mission that was chosen is based on a crewed mission to Mars. In particular, this study focuses on the time that the spacecraft that carried humans to Mars spends dormant in Martian orbit while the crew carries out a surface mission. Communications constraints are assumed to be severe, with limited bandwidth and limited ability to send commands and receive telemetry. The assumptions made as part of this mission have close parallels with mission scenarios envisioned for dormant cis-lunar habitats that are stepping-stones to Mars missions. As such, the data in this report is expected to be broadly applicable to all dormant deep space human spacecraft.

    On Mixed-Initiative Planning and Control for Autonomous Underwater Vehicles

    Supervision and control of autonomous underwater vehicles (AUVs) has traditionally focused on an operator determining a priori the sequence of waypoints for a single vehicle's mission. As AUVs become more ubiquitous as a scientific tool, we envision the need to control multiple vehicles with a more abstract form of human-in-the-loop control that imposes less cognitive burden on the operator. Such mixed-initiative methods of goal-oriented commanding are new to the oceanographic domain, and we describe the motivations and preliminary experiments with multiple vehicles operating simultaneously in the water, using a shore-based automated planner.
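
    A minimal sketch of what goal-oriented, mixed-initiative commanding could look like in code: the operator posts abstract goals and a shore-side planner assigns them to idle vehicles, rather than scripting waypoints per vehicle. All class names and goals are hypothetical, not the system described above.

```python
# Hypothetical sketch: goal-oriented commanding of multiple AUVs. The
# operator queues abstract goals; a simple shore-side planner assigns
# them to idle vehicles. Names and goals are illustrative only.
from collections import deque

class AUV:
    def __init__(self, name: str):
        self.name = name
        self.current_goal = None

    def assign(self, goal: str) -> None:
        self.current_goal = goal
        print(f"{self.name} now pursuing: {goal}")

class ShorePlanner:
    def __init__(self, fleet):
        self.fleet = fleet
        self.goal_queue = deque()

    def post_goal(self, goal: str) -> None:
        """Operator posts an abstract goal (no per-vehicle waypoints)."""
        self.goal_queue.append(goal)

    def dispatch(self) -> None:
        """Assign queued goals to whichever vehicles are currently idle."""
        for auv in self.fleet:
            if auv.current_goal is None and self.goal_queue:
                auv.assign(self.goal_queue.popleft())

if __name__ == "__main__":
    planner = ShorePlanner([AUV("auv-1"), AUV("auv-2")])
    planner.post_goal("survey the upwelling front near the canyon edge")
    planner.post_goal("profile the water column at the mooring site")
    planner.dispatch()
```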

    Adjustably Autonomous Multi-agent Plan Execution with an Internal Spacecraft Free-Flying Robot Prototype

    We present a multi-agent, model-based autonomy architecture with monitoring, planning, diagnosis, and execution elements. We discuss an internal spacecraft free-flying robot prototype controlled by an implementation of this architecture and a ground test facility used for development. In addition, we discuss a simplified environmental control and life support system for the spacecraft domain, also controlled by an implementation of this architecture. We discuss adjustable autonomy and how it applies to this architecture. We describe an interface that provides the user with situation awareness of both autonomous systems and enables the user to dynamically edit the plans prior to and during execution, as well as to control these agents at various levels of autonomy. This interface also permits the agents to query the user or request that the user perform tasks to help achieve the commanded goals. We conclude by describing a scenario in which these two agents and a human interact to cooperatively detect, diagnose, and recover from a simulated spacecraft fault.
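
    The adjustable-autonomy idea can be sketched as an execution loop whose behaviour depends on the current autonomy level: at lower levels the executive asks the user to confirm steps instead of acting alone. This is a generic illustration under assumed names and levels, not the architecture described above.

```python
# Hypothetical sketch of adjustable-autonomy plan execution: at full
# autonomy every step runs automatically; at lower levels the executive
# asks the user to confirm steps flagged as critical, or every step.
from enum import Enum

class Autonomy(Enum):
    FULL = 3        # execute everything without asking
    SUPERVISED = 2  # ask before critical steps
    MANUAL = 1      # ask before every step

def execute_plan(plan, level: Autonomy, ask=input):
    for step, critical in plan:
        needs_confirmation = (
            level is Autonomy.MANUAL
            or (level is Autonomy.SUPERVISED and critical)
        )
        if needs_confirmation:
            answer = ask(f"Execute '{step}'? [y/n] ").strip().lower()
            if answer != "y":
                print(f"Skipping '{step}' at user's request")
                continue
        print(f"Executing '{step}'")

if __name__ == "__main__":
    sample_plan = [
        ("take air-quality reading", False),
        ("open valve to backup scrubber", True),
    ]
    # Canned response so the example runs non-interactively.
    execute_plan(sample_plan, Autonomy.SUPERVISED, ask=lambda prompt: "y")
```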

    Symbiotic interaction between humans and robot swarms

    Comprising a potentially large team of autonomous cooperative robots locally interacting and communicating with each other, robot swarms provide a natural diversity of parallel and distributed functionalities, high flexibility, potential for redundancy, and fault tolerance. The use of autonomous mobile robots is expected to increase in the future, and swarm robotic systems are envisioned to play important roles in tasks such as search and rescue (SAR) missions, transportation of objects, surveillance, and reconnaissance operations. To robustly deploy robot swarms in the field with humans, this research addresses the fundamental problems in the relatively new field of human-swarm interaction (HSI). Four core classes of problems have been addressed for proximal interaction between humans and robot swarms: interaction and communication; swarm-level sensing and classification; swarm coordination; and swarm-level learning. The primary contribution of this research is a bidirectional human-swarm communication system for non-verbal interaction between humans and heterogeneous robot swarms. The guiding field of application is SAR missions. The core challenges and issues in HSI include: How can human operators interact and communicate with robot swarms? Which interaction modalities can be used by humans? How can human operators instruct and command robots from a swarm? Which mechanisms can be used by robot swarms to convey feedback to human operators? Which types of feedback can swarms convey to humans? In this research, to start answering these questions, hand gestures have been chosen as the interaction modality for humans, since gestures are simple to use, easily recognized, and possess spatial-addressing properties. To facilitate bidirectional interaction and communication, a dialogue-based interaction system is introduced which consists of: (i) a grammar-based gesture language with a vocabulary of non-verbal commands that allows humans to efficiently provide mission instructions to swarms, and (ii) a swarm-coordinated multi-modal feedback language that enables robot swarms to robustly convey swarm-level decisions, status, and intentions to humans using multiple individual and group modalities. The gesture language allows humans to select and address single and multiple robots from a swarm, provide commands to perform tasks, specify spatial directions and application-specific parameters, and build iconic grammar-based sentences by combining individual gesture commands. Swarms convey different types of multi-modal feedback to humans using on-board lights, sounds, and locally coordinated robot movements. The swarm-to-human feedback conveys to humans the swarm's understanding of the recognized commands, allows swarms to assess their decisions (i.e., to correct mistakes made by humans in providing instructions and errors made by swarms in recognizing commands), and guides humans through the interaction process. The second contribution of this research addresses swarm-level sensing and classification: How can robot swarms collectively sense and recognize hand gestures given as visual signals by humans? Distributed sensing, cooperative recognition, and decision-making mechanisms have been developed to allow robot swarms to collectively recognize visual instructions and commands given by humans in the form of gestures. These mechanisms rely on decentralized data fusion strategies and multi-hop message-passing algorithms to robustly build swarm-level consensus decisions. Measures have been introduced in the cooperative recognition protocol which provide a trade-off between the accuracy of swarm-level consensus decisions and the time taken to build them. The third contribution of this research addresses swarm-level cooperation: How can humans select spatially distributed robots from a swarm, and how can the robots understand that they have been selected? How can robot swarms be spatially deployed for proximal interaction with humans? With the introduction of spatially-addressed instructions (pointing gestures), humans can robustly address and select spatially-situated individuals and groups of robots from a swarm. A cascaded classification scheme is adopted in which the robot swarm first identifies the selection command (e.g., individual or group selection), and the robots then coordinate with each other to identify whether they have been selected. To obtain better views of gestures issued by humans, distributed mobility strategies have been introduced for the coordinated deployment of heterogeneous robot swarms (i.e., ground and flying robots) and for reshaping the spatial distribution of swarms. The fourth contribution of this research addresses the notion of collective learning in robot swarms. The questions answered include: How can robot swarms learn the hand gestures given by human operators? How can humans be included in the loop of swarm learning? How can robot swarms cooperatively learn as a team? Online incremental learning algorithms have been developed that allow robot swarms to learn individual gestures and grammar-based gesture sentences supervised by human instructors in real time. Humans provide different types of feedback (i.e., full or partial feedback) to swarms to improve swarm-level learning. To speed up the learning rate of robot swarms, cooperative learning strategies have been introduced which enable individual robots in a swarm to intelligently select locally sensed information and share (exchange) the selected information with other robots in the swarm. The final contribution is a systemic one: it aims at building a complete HSI system for potential use in real-world applications by integrating the algorithms, techniques, mechanisms, and strategies discussed in the contributions above. The effectiveness of the overall HSI system is demonstrated in the context of a number of interactive scenarios using emulation tests (i.e., simulations using gesture images acquired by a heterogeneous robotic swarm) and by performing experiments with real ground and flying robots.
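
    As a toy illustration of how locally exchanged information can become a swarm-level decision, the sketch below floods each robot's local gesture classification over a few communication hops and takes a majority vote. The topology, hop count, and gesture labels are assumptions; the dissertation's actual fusion and consensus mechanisms are more elaborate.

```python
# Toy sketch: swarm-level consensus on a recognized gesture by flooding
# local votes over a few communication hops and taking a majority.
# Topology, hop count, and gesture labels are illustrative assumptions.
from collections import Counter

# Communication graph: robot id -> neighbours within radio range.
NEIGHBOURS = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

# Local classification of the observed gesture (robot 2 is mistaken).
LOCAL_VOTES = {0: "stop", 1: "stop", 2: "go", 3: "stop"}

def swarm_consensus(votes, neighbours, hops=3):
    # Each robot tracks the votes it knows, keyed by originating robot id,
    # so repeated messages are not double-counted.
    known = {r: {r: v} for r, v in votes.items()}
    for _ in range(hops):
        snapshot = {r: dict(k) for r, k in known.items()}
        for robot in known:
            for nb in neighbours[robot]:
                known[robot].update(snapshot[nb])
    # Every robot takes the majority of the votes it has heard; return
    # robot 0's decision (all robots agree once the votes have spread).
    return Counter(known[0].values()).most_common(1)[0][0]

if __name__ == "__main__":
    print(swarm_consensus(LOCAL_VOTES, NEIGHBOURS))  # expected: "stop"
```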

    Natural User Interface for Roombots

    Roombots (RB) are self-reconfigurable modular robots designed to study robotic reconfiguration on a structured grid and adaptive locomotion off grid. One of the main goals of this platform is to create adaptive furniture inside living spaces such as homes or offices. To ease the control of RB modules in these environments, we propose a novel and more natural way of interacting with RB modules on an RB grid, called the Natural Roombots User Interface. In our method, the user commands the RB modules using pointing gestures. The user's body is tracked using multiple Kinects, and the user is given real-time visual feedback on their physical actions and the state of the system via LED illumination electronics installed on both the RB modules and the grid. We demonstrate how our interface can be used to efficiently control RB modules in simple point-to-point grid locomotion and conclude by discussing future extensions.
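
    To make the pointing-based selection concrete, here is a small geometric sketch: the ray from the tracked shoulder joint through the hand joint is intersected with the floor plane and snapped to the nearest grid cell. The joint choice, coordinate frame, and cell size are assumptions for illustration, not the Roombots implementation.

```python
# Hypothetical sketch: map a tracked pointing ray onto a floor grid by
# intersecting the shoulder-to-hand ray with the plane z = 0 and snapping
# the hit point to the nearest grid cell. Coordinates are assumed values.
import numpy as np

CELL_SIZE = 0.5  # metres per grid cell (assumed)

def pointed_cell(shoulder: np.ndarray, hand: np.ndarray):
    """Return the (i, j) grid cell hit by the ray shoulder->hand, or None."""
    direction = hand - shoulder
    if direction[2] >= 0:            # ray does not descend towards the floor
        return None
    t = -shoulder[2] / direction[2]  # parameter where the ray meets z = 0
    hit = shoulder + t * direction
    return int(round(hit[0] / CELL_SIZE)), int(round(hit[1] / CELL_SIZE))

if __name__ == "__main__":
    shoulder = np.array([0.0, 0.0, 1.5])   # tracked shoulder joint (m)
    hand = np.array([0.3, 0.1, 1.2])       # tracked hand joint (m)
    print(pointed_cell(shoulder, hand))    # (3, 1) for these toy values
```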

    Simultaneous localization and map-building using active vision

    An active approach to sensing can provide the focused measurement capability over a wide field of view that allows correctly formulated Simultaneous Localization and Map-Building (SLAM) to be implemented with vision, permitting repeatable long-term localization using only naturally occurring, automatically detected features. In this paper, we present the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, addressing such issues as uncertainty-based measurement selection, automatic map maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment.
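
    A simplified sketch of uncertainty-based measurement selection in a generic EKF setting, not the paper's implementation: among the currently visible features, fixate the one whose predicted measurement has the largest innovation covariance, since measuring it is expected to be most informative. All matrices below are toy values.

```python
# Simplified sketch of uncertainty-based feature selection for active
# vision SLAM: measure the feature whose predicted measurement is most
# uncertain. The covariance, noise, and Jacobians are toy values.
import numpy as np

def innovation_covariance(P, H, R):
    """S = H P H^T + R for one candidate feature measurement."""
    return H @ P @ H.T + R

def select_feature(P, candidate_jacobians, R):
    """Return the index of the candidate with the largest |S|."""
    scores = [np.linalg.det(innovation_covariance(P, H, R))
              for H in candidate_jacobians]
    return int(np.argmax(scores))

if __name__ == "__main__":
    P = np.diag([0.2, 0.2, 0.05, 0.5, 0.5])  # toy state covariance (robot + feature)
    R = 0.01 * np.eye(2)                     # toy measurement noise
    H_near = np.array([[1, 0, 0, 0, 0],      # toy Jacobian, well-known feature
                       [0, 1, 0, 0, 0]], float)
    H_far = np.array([[1, 0, 0, 1, 0],       # toy Jacobian, uncertain feature
                      [0, 1, 0, 0, 1]], float)
    print(select_feature(P, [H_near, H_far], R))  # expected: 1
```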

    Control of robot swarms through natural language dialogue: A case study on monitoring fires

    There are numerous environmental and non-environmental disasters happening throughout the world, posing a serious danger to the public, to emergency responders, and to fauna and flora. Developing a program capable of controlling swarms of robots using natural language processing (NLP), and later a speech-to-text system, will enable a more mobile solution, with no need for a keyboard and mouse or a mobile device to operate the robots. A well-developed NLP system will allow the program to understand natural language-based interactions, making the system usable in different contexts. In firefighting, the use of robots, more specifically drones, enables new ways to obtain reliable information that was previously based on guesses or on the knowledge of someone with long field experience. Using a swarm of robots to monitor a fire offers numerous advantages, from the creation of a dynamic fire map and weather information from inside the fire to finding lost firefighters in the field through the generated map. This work uses firefighting as a case study, but other situations can be considered, such as searching for someone at sea or searching for toxins in an open environment.
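
    A toy sketch of how natural-language operator commands could be mapped to structured swarm actions with a hand-rolled keyword grammar; the intents, slots, and phrasings are illustrative assumptions standing in for the actual NLP pipeline.

```python
# Toy sketch: map natural-language operator commands to swarm actions
# with a keyword grammar. Intents, slots, and phrasing are assumptions,
# not the system described above.
import re

PATTERNS = [
    (re.compile(r"\bmonitor\b.*\bfire\b.*\bsector (\w+)", re.I),
     lambda m: {"intent": "monitor_fire", "sector": m.group(1)}),
    (re.compile(r"\b(find|search for)\b.*\bfirefighter", re.I),
     lambda m: {"intent": "search_firefighter"}),
    (re.compile(r"\breturn\b.*\bbase\b", re.I),
     lambda m: {"intent": "return_to_base"}),
]

def parse_command(utterance: str):
    """Return a structured swarm action, or None if not understood."""
    for pattern, build in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return build(match)
    return None

if __name__ == "__main__":
    print(parse_command("Please monitor the fire in sector B7"))
    print(parse_command("Search for the lost firefighters"))
    print(parse_command("All drones return to base"))
```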

    Development and Field Testing of the FootFall Planning System for the ATHLETE Robots

    The FootFall Planning System is a ground-based planning and decision support system designed to facilitate the control of walking activities for the ATHLETE (All-Terrain Hex-Limbed Extra-Terrestrial Explorer) family of robots. ATHLETE was developed at NASA's Jet Propulsion Laboratory (JPL) and is a large six-legged robot designed to serve multiple roles during manned and unmanned missions to the Moon; its roles include transportation, construction and exploration. Over the four years from 2006 through 2010 the FootFall Planning System was developed and adapted to two generations of the ATHLETE robots and tested at two analog field sites (the Human Robotic Systems Project's Integrated Field Test at Moses Lake, Washington, June 2008, and the Desert Research and Technology Studies (D-RATS), held at Black Point Lava Flow in Arizona, September 2010). Having 42 degrees of kinematic freedom, standing to a maximum height of just over 4 meters, and having a payload capacity of 450 kg in Earth gravity, the current version of the ATHLETE robot is a uniquely complex system. A central challenge to this work was the compliance of the high-DOF (Degree Of Freedom) robot, especially the compliance of the wheels, which affected many aspects of statically-stable walking. This paper will review the history of the development of the FootFall system, sharing design decisions, field test experiences, and the lessons learned concerning compliance and self-awareness.
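
    As a pointer to what statically-stable walking entails computationally, the snippet below shows a generic textbook stability check (using the shapely geometry library), not the FootFall planner: a stance is accepted if the ground projection of the centre of mass lies inside the, optionally shrunk, support polygon of the wheels in contact. The contact coordinates are made up for illustration.

```python
# Generic static-stability check (not the FootFall planner): a stance is
# statically stable if the ground projection of the centre of mass lies
# inside the convex support polygon of the contact points. Contact
# coordinates below are toy values.
from shapely.geometry import MultiPoint, Point

def statically_stable(contact_xy, com_xy, margin: float = 0.0) -> bool:
    """True if the CoM projection is inside the support polygon.

    margin > 0 shrinks the polygon to require a stability margin (metres).
    """
    support = MultiPoint(contact_xy).convex_hull
    if margin > 0:
        support = support.buffer(-margin)
    return support.contains(Point(com_xy))

if __name__ == "__main__":
    # Five wheels in contact while the sixth leg is lifted (toy numbers).
    contacts = [(2.0, 0.0), (1.0, 1.7), (-1.0, 1.7),
                (-2.0, 0.0), (-1.0, -1.7)]
    print(statically_stable(contacts, (0.0, 0.2), margin=0.3))   # True
    print(statically_stable(contacts, (0.5, -1.6), margin=0.3))  # False
```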
