
    Symbiotic interaction between humans and robot swarms

    Comprising a potentially large team of autonomous cooperative robots that interact and communicate locally with each other, robot swarms provide a natural diversity of parallel and distributed functionalities, high flexibility, potential for redundancy, and fault tolerance. The use of autonomous mobile robots is expected to increase in the future, and swarm robotic systems are envisioned to play important roles in tasks such as search and rescue (SAR) missions, transportation of objects, surveillance, and reconnaissance operations. To robustly deploy robot swarms in the field alongside humans, this research addresses fundamental problems in the relatively new field of human-swarm interaction (HSI). Four core classes of problems are addressed for proximal interaction between humans and robot swarms: interaction and communication; swarm-level sensing and classification; swarm coordination; and swarm-level learning. The primary contribution of this research is a bidirectional human-swarm communication system for non-verbal interaction between humans and heterogeneous robot swarms, with SAR missions as the guiding field of application. The core challenges and issues in HSI include: How can human operators interact and communicate with robot swarms? Which interaction modalities can be used by humans? How can human operators instruct and command robots from a swarm? Which mechanisms can be used by robot swarms to convey feedback to human operators? Which types of feedback can swarms convey to humans? To start answering these questions, hand gestures have been chosen as the interaction modality for humans, since gestures are simple to use, easily recognized, and possess spatial-addressing properties. To facilitate bidirectional interaction and communication, a dialogue-based interaction system is introduced, consisting of (i) a grammar-based gesture language with a vocabulary of non-verbal commands that allows humans to efficiently provide mission instructions to swarms, and (ii) a swarm-coordinated multi-modal feedback language that enables robot swarms to robustly convey swarm-level decisions, status, and intentions to humans using multiple individual and group modalities. The gesture language allows humans to select and address single and multiple robots from a swarm, provide commands to perform tasks, specify spatial directions and application-specific parameters, and build iconic grammar-based sentences by combining individual gesture commands. Swarms convey different types of multi-modal feedback to humans using on-board lights, sounds, and locally coordinated robot movements. The swarm-to-human feedback conveys the swarm's understanding of the recognized commands, allows swarms to assess their decisions (i.e., to correct mistakes made by humans in providing instructions and errors made by swarms in recognizing commands), and guides humans through the interaction process. The second contribution of this research addresses swarm-level sensing and classification: How can robot swarms collectively sense and recognize hand gestures given as visual signals by humans? Distributed sensing, cooperative recognition, and decision-making mechanisms have been developed that allow robot swarms to collectively recognize visual instructions and commands given by humans in the form of gestures. These mechanisms rely on decentralized data fusion strategies and multi-hop message-passing algorithms to robustly build swarm-level consensus decisions.
    Measures have been introduced in the cooperative recognition protocol that provide a trade-off between the accuracy of swarm-level consensus decisions and the time taken to reach them. The third contribution of this research addresses swarm-level cooperation: How can humans select spatially distributed robots from a swarm, and how do the robots understand that they have been selected? How can robot swarms be spatially deployed for proximal interaction with humans? With the introduction of spatially-addressed instructions (pointing gestures), humans can robustly address and select spatially-situated individuals and groups of robots from a swarm. A cascaded classification scheme is adopted in which the robot swarm first identifies the selection command (e.g., individual or group selection), and the robots then coordinate with each other to determine whether they have been selected. To obtain better views of gestures issued by humans, distributed mobility strategies have been introduced for the coordinated deployment of heterogeneous robot swarms (i.e., ground and flying robots) and for reshaping the spatial distribution of swarms. The fourth contribution of this research addresses the notion of collective learning in robot swarms. The questions answered include: How can robot swarms learn the hand gestures given by human operators? How can humans be included in the loop of swarm learning? How can robot swarms cooperatively learn as a team? Online incremental learning algorithms have been developed that allow robot swarms to learn individual gestures and grammar-based gesture sentences in real time, supervised by human instructors. Humans provide different types of feedback (i.e., full or partial feedback) to swarms to improve swarm-level learning. To speed up the learning rate of robot swarms, cooperative learning strategies have been introduced that enable individual robots in a swarm to intelligently select locally sensed information and share (exchange) the selected information with other robots in the swarm. The final contribution is a systemic one: it aims at building a complete HSI system for potential use in real-world applications by integrating the algorithms, techniques, mechanisms, and strategies discussed in the contributions above. The effectiveness of the global HSI system is demonstrated in a number of interactive scenarios using emulation tests (i.e., simulations using gesture images acquired by a heterogeneous robotic swarm) and experiments with real ground and flying robots.
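    To make the consensus-building step concrete, the following is a minimal sketch, not the thesis's actual protocol: it assumes each robot produces a local class-probability vector for the observed gesture, exchanges beliefs with its immediate neighbours over a bounded number of hops, and stops early once the fused distribution is confident enough, mirroring the accuracy/time trade-off described above. The fusion rule, topology, and thresholds are illustrative assumptions.

```python
import numpy as np

def swarm_consensus(local_probs, adjacency, hops=3, confidence=0.8):
    """Fuse per-robot gesture class probabilities into a swarm-level decision.

    local_probs : (n_robots, n_classes) local classifier outputs.
    adjacency   : (n_robots, n_robots) 0/1 matrix of communication links.
    hops        : message-passing rounds (more hops -> better consensus, more time).
    confidence  : stop early once the fused distribution is this peaked.
    """
    beliefs = np.asarray(local_probs, dtype=float)
    fused = beliefs.mean(axis=0)
    for _ in range(hops):
        new_beliefs = beliefs.copy()
        for i, row in enumerate(adjacency):
            neighbours = np.flatnonzero(row)
            if neighbours.size:
                # Each robot averages its belief with those of its direct neighbours.
                new_beliefs[i] = (beliefs[i] + beliefs[neighbours].sum(axis=0)) / (1 + neighbours.size)
        beliefs = new_beliefs
        fused = beliefs.mean(axis=0)
        if fused.max() >= confidence:  # early exit: trade decision time for accuracy
            break
    return int(fused.argmax()), float(fused.max())

# Example: 4 robots, 3 gesture classes, a line-topology communication graph.
probs = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.8, 0.1, 0.1]]
links = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
print(swarm_consensus(probs, links))
```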

    Support vector machines to detect physiological patterns for EEG and EMG-based human-computer interaction: a review

    Support vector machines (SVMs) are widely used classifiers for detecting physiological patterns in human-computer interaction (HCI). Their success is due to their versatility, robustness, and the wide availability of free dedicated toolboxes. Frequently in the literature, insufficient details about the SVM implementation and/or parameter selection are reported, making it impossible to reproduce the analysis and results of a study. In order to perform an optimized classification and report a proper description of the results, a comprehensive critical overview of SVM applications is necessary. The aim of this paper is to review the usage of SVM in the determination of brain and muscle patterns for HCI, focusing on electroencephalography (EEG) and electromyography (EMG) techniques. In particular, an overview of the basic principles of SVM theory is outlined, together with a description of several relevant implementations from the literature. Furthermore, details concerning the reviewed papers are listed in tables, and statistics on SVM use in the literature are presented. The suitability of SVM for HCI is discussed, and critical comparisons with other classifiers are reported.
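    The review surveys how SVMs are used rather than prescribing one implementation; as a point of reference, here is a minimal, hypothetical example of the kind of pipeline it covers, using scikit-learn with synthetic features standing in for real EEG/EMG feature vectors. Reporting the kernel and the C/gamma selection procedure is exactly the detail the review notes is often missing.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 200 trials x 32 features (e.g., per-channel band powers).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))
y = rng.integers(0, 2, size=200)  # two mental states / muscle activities

# Scale features, then search kernel hyperparameters with cross-validation.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(svm, {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.1]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, cross_val_score(grid.best_estimator_, X, y, cv=5).mean())
```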

    Gesture Recognition from Data Streams of Human Motion Sensor Using Accelerated PSO Swarm Search Feature Selection Algorithm

    Human motion sensing technology has gained tremendous popularity, with practical applications such as video surveillance for security, sign language recognition, smart homes, and gaming. These applications capture human motions in real time from video sensors, so the data patterns are nonstationary and ever-changing. While the hardware of such motion sensing devices and their data collection processes have become relatively mature, the computational challenge lies in the real-time analysis of these live feeds. In this paper we argue that traditional data mining methods fall short of accurately analyzing human activity patterns from the sensor data stream. The shortcoming is due to an algorithmic design that does not adapt to changes in dynamic gesture motions. The successor of these algorithms, known as data stream mining, is evaluated against traditional data mining through a case of gesture recognition over motion data using Microsoft Kinect sensors. Three different subjects were asked to read three comic strips and to tell the stories in front of the sensor. The data stream contains the coordinates of articulation points and various positions of the parts of the human body corresponding to the actions that the user performs. In particular, a novel feature selection technique using swarm search and accelerated PSO is proposed to enable fast preprocessing and to induce an improved classification model in real time. Superior results are shown in the experiments run on this empirical data stream. The contribution of this paper is a comparative study between traditional and data stream mining algorithms, and the incorporation of the novel improved feature selection technique in a scenario where different gesture patterns are to be recognized from streaming sensor data.
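    A compact sketch in the spirit of the swarm-search feature selection described above, assuming a binary particle encodes a feature subset and fitness is the cross-validated accuracy of a lightweight classifier; the velocity update and constants are simplified illustrations, not the paper's accelerated PSO.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def pso_feature_selection(X, y, n_particles=10, iters=20, seed=0):
    """Binary PSO: each particle is a feature mask; fitness = cross-validated accuracy."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pos = rng.random((n_particles, n_feat)) > 0.5  # boolean feature masks
    vel = rng.normal(scale=0.1, size=(n_particles, n_feat))
    clf = KNeighborsClassifier(n_neighbors=3)

    def fitness(mask):
        return cross_val_score(clf, X[:, mask], y, cv=3).mean() if mask.any() else 0.0

    best_pos = pos.copy()
    best_fit = np.array([fitness(p) for p in pos])
    g_best = best_pos[best_fit.argmax()].copy()
    for _ in range(iters):
        # Drift toward personal and global bests (simplified, accelerated-style update).
        vel = (0.7 * vel
               + rng.random((n_particles, n_feat)) * (best_pos.astype(float) - pos.astype(float))
               + rng.random((n_particles, n_feat)) * (g_best.astype(float) - pos.astype(float)))
        pos = rng.random((n_particles, n_feat)) < 1.0 / (1.0 + np.exp(-vel))  # sigmoid -> binary
        fits = np.array([fitness(p) for p in pos])
        improved = fits > best_fit
        best_pos[improved], best_fit[improved] = pos[improved], fits[improved]
        g_best = best_pos[best_fit.argmax()].copy()
    return g_best, best_fit.max()

# Usage with placeholder data standing in for Kinect skeleton features.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(120, 20)), rng.integers(0, 3, size=120)
mask, acc = pso_feature_selection(X, y)
print(mask.sum(), "features selected, CV accuracy", round(acc, 3))
```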

    Learning from human-robot interaction

    In recent years it has become increasingly common to see robots in the home. Robotics is ever more present in many aspects of our daily lives, in domestic assistance devices, autonomous cars, and personal assistants. The interaction between these assistant robots and their users is one of the key aspects of service robotics. This interaction needs to be comfortable and intuitive to be used effectively, and such interactions with users are necessary for the robot to learn and update, in a natural way, both its model of the world and its capabilities. Service robotic systems comprise many components that are necessary for them to work well; this thesis focuses on their visual perception system. For humans, visual perception is one of the most essential faculties, enabling tasks such as recognizing objects or other people, or estimating 3D information. The great advances achieved in recent years in automatic recognition tasks rely on machine learning approaches, in particular deep learning techniques. Most current work focuses on models trained a priori on very large datasets. However, these models, although trained on a large amount of data, cannot in general cope with the challenges that arise when dealing with real data in domestic environments. For example, it is common to encounter new objects that did not exist when the models were trained. Another challenge comes from the sparsity of objects: some objects appear very rarely, so there were very few, or no, examples in the training data available when the model was created. This thesis was developed within the context of the IGLU (Interactive Grounded Language Understanding) project. Within the project and its objectives, the main goal of this doctoral thesis is to investigate novel methods for a robot to learn incrementally through multimodal interaction with the user. In pursuit of this main goal, the principal lines of work developed during this thesis have been: (1) creating a benchmark better suited to the task of learning through natural user-robot interaction, since most object recognition datasets focus on photos of different scenes with multiple classes per photo, and a dataset combining user-robot interaction with object learning is needed; (2) improving existing object learning systems and adapting them to learning from multimodal human interaction, using the interaction to find the referenced object and learn it incrementally rather than detecting all learned objects in an image; (3) developing incremental learning methods that can be used in incremental scenarios, e.g., the appearance of a new object class or changes within a class over time, with the aim of designing a system that can learn classes from scratch and update them as new data arrive; and (4) building a complete prototype for incremental and multimodal learning through human-robot interaction, integrating the different methods developed as part of the other objectives and evaluating it.
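    As an illustration of the incremental-learning idea only (not the method developed in the thesis), a minimal nearest-class-mean learner is sketched below; it assumes feature vectors have already been extracted for the referenced object, and it can add new classes from scratch and refine existing ones as the user provides more examples.

```python
import numpy as np

class IncrementalObjectLearner:
    """Nearest-class-mean learner: new classes can be added at any time and
    existing classes are updated online as the user shows more examples."""

    def __init__(self):
        self.means, self.counts = {}, {}

    def learn(self, label, feature):
        """Update (or create) a class from one labelled example, e.g. the object
        the user pointed at during the interaction."""
        f = np.asarray(feature, dtype=float)
        if label not in self.means:
            self.means[label], self.counts[label] = f.copy(), 1
        else:
            self.counts[label] += 1
            self.means[label] += (f - self.means[label]) / self.counts[label]

    def predict(self, feature):
        f = np.asarray(feature, dtype=float)
        return min(self.means, key=lambda c: np.linalg.norm(f - self.means[c]))

# Usage: the robot learns "mug" from scratch, then refines it with a second view.
learner = IncrementalObjectLearner()
learner.learn("mug", [0.9, 0.1]); learner.learn("book", [0.1, 0.8])
learner.learn("mug", [0.8, 0.2])
print(learner.predict([0.85, 0.15]))  # -> "mug"
```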

    EEG-based brain-computer interfaces using motor-imagery: techniques and challenges.

    Electroencephalography (EEG)-based brain-computer interfaces (BCIs), particularly those using motor-imagery (MI) data, have the potential to become groundbreaking technologies in both clinical and entertainment settings. MI data is generated when a subject imagines the movement of a limb. This paper reviews state-of-the-art signal processing techniques for MI EEG-based BCIs, with a particular focus on the feature extraction, feature selection and classification techniques used. It also summarizes the main applications of EEG-based BCIs, particularly those based on MI data, and finally presents a detailed discussion of the most prevalent challenges impeding the development and commercialization of EEG-based BCIs.
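    As a minimal illustration of the feature extraction and classification stages reviewed here (not a recommended design), the sketch below computes per-channel mu/beta band-power features with SciPy and classifies them with LDA; the sampling rate, frequency bands, and synthetic data are assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def bandpower_features(trials, fs=250, bands=((8, 12), (13, 30))):
    """Per-channel mu/beta band power, a classic MI-EEG feature."""
    feats = []
    for trial in trials:  # trial shape: (n_channels, n_samples)
        freqs, psd = welch(trial, fs=fs, nperseg=fs)
        feats.append([np.log(psd[:, (freqs >= lo) & (freqs <= hi)].mean(axis=1))
                      for lo, hi in bands])
    return np.array(feats).reshape(len(trials), -1)

# Placeholder data: 80 trials, 8 channels, 2 s at 250 Hz, two imagined movements.
rng = np.random.default_rng(1)
X_raw = rng.normal(size=(80, 8, 500))
y = rng.integers(0, 2, size=80)

X = bandpower_features(X_raw)              # feature extraction
clf = LinearDiscriminantAnalysis()         # classification
print(cross_val_score(clf, X, y, cv=5).mean())
```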

    Music as complex emergent behaviour: an approach to interactive music systems

    This thesis suggests a new model of human-machine interaction in the domain of non-idiomatic musical improvisation. Musical results are viewed as emergent phenomena issuing from complex internal system behaviour in relation to input from a single human performer. We investigate the prospect of rewarding interaction whereby a system modifies itself in coherent though non-trivial ways as a result of exposure to a human interactor. In addition, we explore whether such interactions can be sustained over extended time spans. These objectives translate into four criteria for evaluation: maximisation of human influence; blending of human and machine influence in the creation of machine responses; the maintenance of independent machine motivations in order to support machine autonomy; and a combination of global emergent behaviour and variable behaviour in the long run. Our implementation is heavily inspired by ideas and engineering approaches from the discipline of Artificial Life. However, we also address a collection of representative existing systems from the field of interactive composing, some of which are implemented using techniques of conventional Artificial Intelligence. All of these systems serve as a contextual background and comparative framework to help assess the work reported here. This thesis advocates a networked model incorporating functionality for listening, playing, and the synthesis of machine motivations. The latter incorporate dynamic relationships instructing the machine either to integrate with a musical context suggested by the human performer or, in contrast, to perform as an individual musical character irrespective of context. Techniques of evolutionary computing are used to optimise system components over time. Evolution proceeds based on an implicit fitness measure: the melodic distance between consecutive musical statements made by human and machine in relation to the currently prevailing machine motivation. A substantial number of systematic experiments reveal complex emergent behaviour inside and between the various system modules. Music scores document how global system behaviour is rendered into actual musical output. The concluding chapter offers evidence of how the research criteria were met and proposes recommendations for future research.
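    A toy sketch of the evolutionary idea described above, under heavy simplification: a machine phrase is mutated and kept whenever its melodic distance to the human's phrase moves in the direction demanded by the current motivation (integrate or contrast). The phrase representation, mutation, and distance measure are illustrative assumptions, not the thesis's system.

```python
import random

def melodic_distance(a, b):
    """Mean absolute pitch difference between two equal-length phrases (MIDI numbers)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def evolve_response(human_phrase, motivation="integrate", generations=200):
    """Evolve a machine phrase; the motivation flips the direction of the fitness."""
    phrase = [random.randint(48, 72) for _ in human_phrase]
    for _ in range(generations):
        candidate = [p + random.choice((-2, -1, 0, 1, 2)) for p in phrase]
        d_new = melodic_distance(candidate, human_phrase)
        d_old = melodic_distance(phrase, human_phrase)
        better = d_new < d_old if motivation == "integrate" else d_new > d_old
        if better:
            phrase = candidate
    return phrase

human = [60, 62, 64, 65, 67]                 # a C-major fragment
print(evolve_response(human, "integrate"))   # converges toward the human phrase
print(evolve_response(human, "contrast"))    # drifts away from it
```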

    Human lower limb activity recognition techniques, databases, challenges and its applications using sEMG signal: an overview

    Human lower limb activity recognition (HLLAR) has grown in popularity over the last decade, mainly because of its applications in the identification and control of neuromuscular disorders, security, robotics, and prosthetics. Surface electromyography (sEMG) sensors provide various advantages over other wearable or visual sensors for HLLAR applications, including quick response, pervasiveness, no need for medical supervision, and negligible infection risk. Recognizing lower limb activity from sEMG signals is nevertheless challenging owing to the noise in the sEMG signal. Pre-processing of sEMG signals is therefore highly desirable before classification, because it allows a more consistent and precise evaluation in the above applications. This article provides a segment-by-segment overview of: (1) techniques for eliminating artifacts from lower limb sEMG signals; (2) a survey of existing lower limb sEMG datasets; and (3) a concise description of the various techniques for processing and classifying sEMG data for applications involving lower limb activity. Finally, an open discussion is presented, which may point to a variety of future research possibilities for human lower limb activity recognition. It can therefore be anticipated that the framework presented in this study will aid the advancement of sEMG-based recognition of human lower limb activity.
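    To ground the preprocessing discussion, here is a brief sketch of the typical sEMG cleanup and feature steps the overview surveys: band-pass and power-line notch filtering followed by sliding-window RMS features. The cutoff frequencies, notch frequency, and window lengths are common choices, not values prescribed by the article.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def preprocess_semg(signal, fs=1000):
    """Typical sEMG cleanup: 20-450 Hz band-pass plus 50 Hz power-line notch."""
    b, a = butter(4, [20, 450], btype="bandpass", fs=fs)
    clean = filtfilt(b, a, signal)
    b_n, a_n = iirnotch(50, Q=30, fs=fs)
    return filtfilt(b_n, a_n, clean)

def rms_features(signal, fs=1000, win_ms=200, step_ms=100):
    """Sliding-window RMS, a standard time-domain feature for activity recognition."""
    win, step = int(fs * win_ms / 1000), int(fs * step_ms / 1000)
    return np.array([np.sqrt(np.mean(signal[i:i + win] ** 2))
                     for i in range(0, len(signal) - win + 1, step)])

# Placeholder: 2 s of noisy sEMG-like data at 1 kHz.
rng = np.random.default_rng(2)
raw = rng.normal(size=2000) * np.sin(np.linspace(0, 6 * np.pi, 2000)) ** 2
feats = rms_features(preprocess_semg(raw))
print(feats.shape)
```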