
    Human-robot collaborative task planning using anticipatory brain responses

    Human-robot interaction (HRI) describes scenarios in which both human and robot work as partners, sharing the same environment or complementing each other on a joint task. HRI is characterized by the need for high adaptability and flexibility of robotic systems toward their human interaction partners. One of the major challenges in HRI is task planning with dynamic subtask assignment, which is particularly challenging when the human's subtask choices are not readily accessible to the robot. In the present work, we explore the feasibility of using electroencephalogram (EEG) based neuro-cognitive measures for online robot learning of dynamic subtask assignment. To this end, we demonstrate in an experimental human subject study, featuring a joint HRI task with a UR10 robotic manipulator, the presence of EEG measures indicative of a human partner anticipating a takeover situation from human to robot or vice versa. The present work further proposes a reinforcement learning based algorithm employing these measures as a neuronal feedback signal from the human to the robot for dynamic learning of subtask assignment. The efficacy of this algorithm is validated in a simulation-based study. The simulation results reveal that even with relatively low decoding accuracies, successful robot learning of subtask assignment is feasible, with around 80% choice accuracy among four subtasks within 17 minutes of collaboration. The simulation results further reveal that scaling to more subtasks is feasible and mainly accompanied by longer robot learning times. These findings demonstrate the usability of EEG-based neuro-cognitive measures to mediate the complex and largely unsolved problem of human-robot collaborative task planning.
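
    How such a feedback loop can converge despite low decoding accuracy is easy to illustrate. The sketch below is a minimal simplification under assumed numbers (decoder accuracy, learning rate, the human's preferred subtask), not the authors' algorithm: noisy binary EEG feedback still carries enough signal for a value-based learner to identify the preferred assignment among four subtasks.

        # Minimal sketch: subtask assignment as a bandit problem, learned from
        # binary EEG feedback that is only correct with probability DECODER_ACC.
        import random

        N_SUBTASKS = 4        # the paper's four-subtask scenario
        DECODER_ACC = 0.65    # assumed single-trial decoding accuracy
        EPSILON = 0.1         # exploration rate
        ALPHA = 0.2           # learning rate

        human_preference = 2            # hypothetical subtask the human expects the robot to take
        q = [0.0] * N_SUBTASKS          # value estimate per subtask

        def noisy_eeg_feedback(chosen):
            """Return +1/-1 feedback, flipped with probability 1 - DECODER_ACC."""
            true_reward = 1 if chosen == human_preference else -1
            return true_reward if random.random() < DECODER_ACC else -true_reward

        for trial in range(300):
            if random.random() < EPSILON:
                choice = random.randrange(N_SUBTASKS)
            else:
                choice = max(range(N_SUBTASKS), key=q.__getitem__)
            q[choice] += ALPHA * (noisy_eeg_feedback(choice) - q[choice])

        print("learned subtask values:", [round(v, 2) for v in q])

    Because the expected reward of the preferred subtask stays positive even at 65% decoding accuracy, the value estimates separate over trials, which is consistent with the paper's observation that learning succeeds despite low accuracies, only more slowly.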

    A Classification Model for Sensing Human Trust in Machines Using EEG and GSR

    Today, intelligent machines interact and collaborate with humans in a way that demands a greater level of trust between human and machine. A first step towards building intelligent machines that are capable of building and maintaining trust with humans is the design of a sensor that will enable machines to estimate human trust level in real time. In this paper, two approaches for developing classifier-based empirical trust sensor models are presented that specifically use electroencephalography (EEG) and galvanic skin response (GSR) measurements. Human subject data collected from 45 participants are used for feature extraction, feature selection, classifier training, and model validation. The first approach considers a general set of psychophysiological features across all participants as the input variables and trains a classifier-based model for each participant, resulting in a trust sensor model based on the general feature set (i.e., a "general trust sensor model"). The second approach considers a customized feature set for each individual and trains a classifier-based model using that feature set, resulting in improved mean accuracy but at the expense of an increase in training time. This work represents the first use of real-time psychophysiological measurements for the development of a human trust sensor. Implications of the work, in the context of trust management algorithm design for intelligent machines, are also discussed.
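
    The first approach can be illustrated with a short sketch. The synthetic data, feature dimensionality, and choice of an LDA classifier below are placeholder assumptions rather than the paper's exact pipeline; the point is the structure of a "general trust sensor model": one shared feature set, one classifier trained and validated per participant.

        # Sketch of the "general trust sensor model": per-participant classifiers
        # on a shared (general) set of EEG/GSR features. Data are synthetic.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)

        def synthetic_participant(n_trials=120, n_features=12):
            """Stand-in for per-trial psychophysiological features (e.g. EEG band
            power, GSR level) with binary trust labels."""
            y = rng.integers(0, 2, n_trials)          # 0 = distrust, 1 = trust
            X = rng.normal(size=(n_trials, n_features))
            X[y == 1, :3] += 0.8                      # weak label-dependent shift
            return X, y

        accs = []
        for participant in range(45):                 # paper reports 45 participants
            X, y = synthetic_participant()
            clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
            accs.append(cross_val_score(clf, X, y, cv=5).mean())

        print(f"mean accuracy across participants: {np.mean(accs):.2f}")

    The second approach would differ only in running a per-participant feature selection step before training, trading extra training time for accuracy, as the abstract notes.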

    Towards Multi-UAV and Human Interaction Driving System Exploiting Human Mental State Estimation

    This paper addresses the growing issue of human-multi-UAV interaction. Current active approaches towards a reliable multi-UAV system are reviewed. This brings us to the conclusion that the control paradigm for multiple Unmanned Aerial Vehicles (UAVs) is segmented into two main scopes: i) autonomous control and coordination within the group of UAVs, and ii) a human-centered approach with helping agents and overt behavior monitoring. Therefore, to advance the human-multi-UAV interaction problem, a new perspective is put forth. In the following sections, a brief understanding of the system is provided, followed by the current state of multi-UAV research and how taking the human pilot's physiology into account could improve the interaction. This idea is developed first by detailing what physiological computing is, including the mental states of interest and their associated physiological markers. Second, the article concludes with the proposed approach for human-multi-UAV interaction control and future plans.

    Robot Learning and Control Using Error-Related Cognitive Brain Signals

    Over the last years, the field of brain-machine interfaces (BMIs) has demonstrated how humans and animals can control neuroprosthetic devices directly through the voluntary modulation of their brain signals, in both invasive and non-invasive approaches. All these BMIs share a common paradigm in which the user transmits information related to the control of the neuroprosthesis. This information is extracted from the user's brain activity and then translated into control commands for the device. When the device receives and executes the command, the user receives feedback on the system's performance, thereby closing the loop between user and device. Most BMIs decode control parameters from cortical areas to generate the sequence of movements for the neuroprosthesis. This approach mimics typical motor control, since it links neural activity with behavior or motor execution. Motor execution, however, is the result of the combined activity of the cerebral cortex, subcortical areas, and the spinal cord. In fact, many complex movements, from manipulation to walking, are mainly handled at the level of the spinal cord, while cortical areas simply provide the target point in space and the onset time of the movement. This thesis proposes an alternative BMI paradigm that seeks to emulate the role of the subcortical levels during motor control. The paradigm relies on brain signals that carry cognitive information associated with decision-making processes in goal-oriented movements, whose low-level implementation is handled at subcortical levels. Throughout the thesis, the first step toward the development of this paradigm is presented, focusing on a specific cognitive signal related to human error processing: error-related potentials (ErrPs), measurable via electroencephalography (EEG). In the proposed paradigm, the neuroprosthesis actively executes a reaching task while the user simply monitors the device's performance by evaluating the quality of the actions it executes. These evaluations are translated (via ErrPs) into feedback for the device, which uses them in a reinforcement learning context to improve its behavior. This thesis demonstrates this teaching BMI paradigm for the first time with twelve subjects in three closed-loop experiments, concluding with the operation of a real robotic manipulator. Like most BMIs, the proposed paradigm requires a calibration stage specific to each subject and task. This phase, a time-consuming and exhausting process for the user, hinders the deployment of BMIs in applications outside the laboratory. In the particular case of the proposed paradigm, a calibration phase for each task is highly impractical, since the time required for it adds to the task learning time, substantially delaying final control of the device. It would therefore be desirable to train classifiers able to operate independently of the learning task being executed.
This thesis analyzes, from an electrophysiological point of view, how the potentials are affected by the different tasks executed by the device, showing changes mainly in the latency of the signal; and it studies how to transfer the classifier between tasks in two ways: first, by applying state-of-the-art adaptive classifiers, and second, by correcting the latency between the signals of two tasks in order to generalize across both. Another important challenge under this paradigm comes from the time needed to learn the task. Due to the BMI's low information transfer rate per minute, the system scales poorly: the learning time grows exponentially with the size of the learning space, so obtaining the optimal motor behavior through reinforcement learning becomes impractical. However, this problem can be solved by exploiting the structure of the learning task. For example, if the number of positions to reach is discrete, the optimal policy for each possible position can be precomputed. This thesis shows how the task structure can be used within the proposed paradigm to greatly reduce the task learning time (from ten minutes to barely half a minute), thereby greatly improving the scalability of the system. Finally, this thesis shows how, thanks to the lessons learned from the previous findings, it is possible to completely remove the calibration stage of the proposed paradigm through unsupervised learning of the classifier while the task is being executed. The key idea is to compute a set of classifiers that follow the task constraints used previously, and then select the best classifier from the set. In this way, this thesis presents a plug-and-play BMI that follows the proposed paradigm, learns the task and the classifier, and finally reaches the position in space desired by the user.
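
    Why exploiting a discrete task structure shrinks learning time can be illustrated with a sketch (our construction, under an assumed ErrP decoding accuracy, not the thesis code): instead of learning a policy from scratch, the device keeps a posterior over the discrete candidate targets and updates it with each noisy ErrP label, so a handful of probes suffices.

        # Sketch: Bayesian target inference from noisy single-trial ErrP labels
        # over a discrete set of candidate reaching targets.
        import random

        N_TARGETS = 5          # assumed number of discrete goal positions
        ERRP_ACC = 0.75        # assumed single-trial ErrP decoding accuracy
        true_target = 3        # the user's intended goal (unknown to the device)

        posterior = [1.0 / N_TARGETS] * N_TARGETS

        def errp_label(action_toward):
            """True if an error potential is decoded (action judged wrong)."""
            is_error = action_toward != true_target
            return is_error if random.random() < ERRP_ACC else not is_error

        for step in range(30):
            probe = max(range(N_TARGETS), key=posterior.__getitem__)  # greedy probe
            err = errp_label(probe)
            # Bayes update: the label likelihood depends on whether the probed
            # action would be an error under each target hypothesis.
            for t in range(N_TARGETS):
                would_err = probe != t
                posterior[t] *= ERRP_ACC if err == would_err else 1.0 - ERRP_ACC
            z = sum(posterior)
            posterior = [p / z for p in posterior]

        print("posterior over targets:", [round(p, 2) for p in posterior])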

    Workload-aware systems and interfaces for cognitive augmentation

    In today's society, our cognition is constantly influenced by information intake, attention switching, and task interruptions. This increases the difficulty of a given task, adding to the existing workload and leading to compromised cognitive performance. The human body expresses the use of cognitive resources through physiological responses when confronted with high cognitive workload. This temporarily mobilizes additional resources to deal with the workload, at the cost of accelerated mental exhaustion. We predict that recent developments in physiological sensing will increasingly create user interfaces that are aware of the user's cognitive capacities and hence able to intervene when high or low states of cognitive workload are detected. In this thesis, we initially focus on determining opportune moments for cognitive assistance. Subsequently, we investigate, in a user-centric design process, which feedback modalities are desirable for cognitive assistance. We present design requirements for how cognitive augmentation can be achieved using interfaces that sense cognitive workload. We then investigate different physiological sensing modalities to enable suitable real-time assessments of cognitive workload. We provide empirical evidence that the human brain is sensitive to fluctuations in cognitive resting states, hence making cognitive effort measurable. Firstly, we show that electroencephalography is a reliable modality for assessing the mental workload generated during user interface operation. Secondly, we use eye tracking to evaluate changes in eye movements and pupil dilation to quantify different workload states. The combination of machine learning and physiological sensing resulted in suitable real-time assessments of cognitive workload. The use of physiological sensing enables us to derive when cognitive augmentation is suitable. Based on our inquiries, we present applications that regulate cognitive workload in home and work settings. We deployed an assistive system in a field study to investigate the validity of our derived design requirements. Finding that workload is mitigated, we investigated how cognitive workload can be visualized to the user. We present an implementation of a biofeedback visualization that helps to improve the understanding of brain activity. A final study shows how cognitive workload measurements can be used to predict the efficiency of information intake through reading interfaces. Here, we conclude with use cases and applications which benefit from cognitive augmentation. This thesis investigates how assistive systems can be designed to implicitly sense and utilize cognitive workload for input and output. To do so, we measure cognitive workload in real time by collecting behavioral and physiological data from users and analyze this data to support users through assistive systems that adapt their interfaces according to the currently measured workload. Our overall goal is to extend new and existing context-aware applications with cognitive workload as an additional factor. We envision Workload-Aware Systems and Workload-Aware Interfaces as an extension of the context-aware paradigm. To this end, we conducted eight research inquiries during this thesis to investigate how to design and create workload-aware systems. Finally, we present our vision of future workload-aware systems and workload-aware interfaces.
Due to the scarce availability of open physiological data sets, reference implementations, and methods, previous context-aware systems were limited in their ability to utilize cognitive workload for user interaction. Together with the collected data sets, we expect this thesis to pave the way for methodical and technical tools that integrate workload-awareness as a factor for context-aware systems.
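
    One widely used EEG workload index of the kind this line of work builds on can be sketched in a few lines; the sampling rate, synthetic signal, and the specific theta/alpha ratio are our assumptions rather than the thesis pipeline.

        # Sketch: frontal theta power rises and alpha power falls under load,
        # so their ratio can serve as a simple real-time workload index.
        import numpy as np
        from scipy.signal import welch

        FS = 250                                   # assumed EEG sampling rate (Hz)
        rng = np.random.default_rng(1)
        eeg = rng.normal(size=4 * FS)              # stand-in for a 4 s frontal channel

        def band_power(x, fs, lo, hi):
            f, pxx = welch(x, fs=fs, nperseg=fs)   # 1 Hz spectral resolution
            return pxx[(f >= lo) & (f <= hi)].mean()

        theta = band_power(eeg, FS, 4, 7)
        alpha = band_power(eeg, FS, 8, 12)
        workload_index = theta / alpha             # higher ratio ~ higher load

        print(f"theta/alpha workload index: {workload_index:.2f}")

    A workload-aware interface would compute such an index over a sliding window and adapt (e.g., defer notifications) when it crosses a calibrated threshold.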

    How Can Physiological Computing Benefit Human-Robot Interaction?

    As systems grow more automated, the human operator is all too often overlooked. Although human-robot interaction (HRI) can be quite demanding in terms of cognitive resources, the mental states (MS) of the operators are not yet taken into account by existing systems. As human operators are not providential agents, this omission can lead to hazardous situations. The growing number of neurophysiology and machine learning tools now allows for efficient monitoring of operators' MS. Sending feedback on MS in a closed-loop solution is therefore at hand. Involving a consistent automated planning technique to handle such a process could be a significant asset. This perspective article is meant to provide the reader with a synthesis of the significant literature with a view to implementing systems that adapt to the operator's MS to improve the safety and performance of human-robot operations. First of all, the need for this approach is detailed with regard to remote operation, an example of HRI. Then, several MS identified as crucial for this type of HRI are defined, along with relevant electrophysiological markers. A focus is placed on prime degraded MS linked to time-on-task and task demands, as well as collateral MS linked to system outputs (i.e., feedback and alarms). Lastly, the principle of symbiotic HRI is detailed and one solution is proposed to include the operator state vector in the system, using a mixed-initiative decisional framework to drive such an interaction.
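
    The closed-loop idea can be caricatured in a few lines: a hypothetical operator state vector, estimated from such electrophysiological markers, drives the level of robot initiative. The state dimensions and thresholds below are illustrative assumptions, not the article's framework.

        # Toy sketch: mapping an operator state vector to a level of initiative.
        from dataclasses import dataclass

        @dataclass
        class OperatorState:          # hypothetical mental-state estimates in [0, 1]
            fatigue: float            # prime degraded MS, time-on-task related
            workload: float           # prime degraded MS, task-demand related
            alarm_inattention: float  # collateral MS, linked to system outputs

        def choose_initiative(ms: OperatorState) -> str:
            """Raise the robot's initiative as degraded states intensify."""
            degraded = max(ms.fatigue, ms.workload, ms.alarm_inattention)
            if degraded > 0.8:
                return "robot takes over the subtask"
            if degraded > 0.5:
                return "robot suggests an action, human confirms"
            return "human keeps full authority"

        print(choose_initiative(OperatorState(fatigue=0.6, workload=0.9,
                                              alarm_inattention=0.2)))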

    Towards Mixed-Initiative Human–Robot Interaction: Assessment of Discriminative Physiological and Behavioral Features for Performance Prediction

    The design of human–robot interactions is a key challenge for optimizing operational performance. A promising approach is to consider mixed-initiative interactions, in which the tasks and authority of the human and artificial agents are dynamically defined according to their current abilities. An important issue for the implementation of mixed-initiative systems is to monitor human performance in order to dynamically drive task allocation between human and artificial agents (i.e., robots). We therefore designed an experimental scenario involving missions whereby participants had to cooperate with a robot to fight fires while facing hazards. Two levels of robot automation (manual vs. autonomous) were randomly manipulated to assess their impact on the participants' performance across missions. Cardiac activity, eye-tracking, and participants' actions on the user interface were collected. The participants performed differently, to the extent that we could identify high- and low-score mission groups that also exhibited different behavioral, cardiac, and ocular patterns. More specifically, our findings indicated that the higher level of automation could be beneficial to low-scoring participants but detrimental to high-scoring ones, and vice versa. In addition, inter-subject single-trial classification results showed that the studied behavioral and physiological features were relevant for predicting mission performance. The highest average balanced accuracy (74%) was reached using the features extracted from all input devices. These results suggest that an adaptive HRI driving system aiming to maximize performance could analyze such physiological and behavioral markers online to change the level of automation when relevant for the mission purpose.
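
    The inter-subject, single-trial evaluation scheme reported above can be sketched as follows; the data are synthetic placeholders for the cardiac, ocular, and behavioral features, and the classifier choice is our assumption. The key ingredients are leave-one-subject-out splits and the balanced-accuracy score.

        # Sketch: inter-subject single-trial classification with balanced accuracy.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

        rng = np.random.default_rng(2)
        n_subjects, trials_per_subject, n_features = 10, 20, 8

        X = rng.normal(size=(n_subjects * trials_per_subject, n_features))
        y = rng.integers(0, 2, n_subjects * trials_per_subject)  # high vs low score
        X[y == 1, :2] += 0.7                                     # weak class separation
        groups = np.repeat(np.arange(n_subjects), trials_per_subject)

        scores = cross_val_score(
            LogisticRegression(max_iter=1000), X, y,
            groups=groups, cv=LeaveOneGroupOut(), scoring="balanced_accuracy")
        print(f"balanced accuracy, leave-one-subject-out: {scores.mean():.2f}")

    Holding out whole subjects, rather than random trials, is what makes the reported 74% meaningful for deployment on a new, unseen operator.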

    Neural correlates of performance monitoring during discrete and continuous tasks

    Monitoring our actions is a key function of the brain for adaptive and successful behavior. Actions can be discrete, such as pressing a button, or continuous, such as driving a car. Moreover, we evaluate our actions as correct or erroneous (performance monitoring), and this appraisal of performance comes with various levels of confidence (metacognition). However, studies of performance monitoring have focused on discrete actions and are mostly agnostic to metacognitive judgments. The objective of this thesis was to extend the study of performance monitoring to more ecological conditions, in which monitoring occurs during continuous motor tasks under various degrees of error and confidence. We first investigated the role of actions in performance monitoring together with metacognitive judgments, using simultaneous EEG and fMRI recordings. To dissociate the role of motor actions, we designed an experimental paradigm in which subjects had to rate their confidence level about an action that they had either performed themselves (a button press) based on a decision or passively observed (a virtual hand displayed). We found correlates of confidence in both conditions, in the EEG and in the supplementary motor area (SMA). Furthermore, we found that subjects showed better metacognitive performance when they were the agents of the action. This difference was further emphasized for subjects that showed higher activations of a network previously linked to motor inhibition and comprising the pre-SMA and inferior frontal gyrus (IFG). Our results imply that the SMA plays a primary role in the monitoring of performance, irrespective of a commitment to a decision and the resulting action. Our findings also suggest that the additional neural processes leading to decisions and actions can inform metacognitive judgments. In the following chapters, we ask whether electrophysiological correlates of performance monitoring can be found in less experimentally constrained paradigms in which motor output continuously unfolds and visual feedback is not segregated into discrete events. By decomposing the unfolding hand kinematics during a visuo-motor tracking task into periodic acceleration pulses (henceforth referred to as sub-movements), we found three electrophysiological markers that could be linked to performance monitoring. Firstly, we found an ERP in the SMA, time-locked to sub-movements, which encoded the deviation of the hand 110 ms earlier. Secondly, we found high-gamma activity in the ACC and SMA of epileptic patients that was phase-locked to sub-movements. Thirdly, we found a transient modulation of mu oscillations over the ipsilateral sensorimotor cortices that depended on sub-movement amplitude. Altogether, these results make a strong contribution to the understanding of the neurophysiological processes underlying performance monitoring. Our work proposes a methodological framework to study electrophysiological correlates of performance monitoring in less controlled paradigms during which continuous visual feedback has to be constantly integrated into motor corrections. In the concluding chapter, we propose a way of extending current models of performance monitoring and decision making to explain the findings of this thesis, by considering continuous motor tasks as a succession of decision-making processes under time pressure and uncertainty.
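
    A minimal sketch of the sub-movement decomposition described above (synthetic trajectory, assumed sampling rate, not the thesis code): detect periodic acceleration pulses as peaks in the second derivative of the tracked position; the resulting event times could then serve to epoch the EEG, as done for the sub-movement-locked ERP.

        # Sketch: extract sub-movement candidates from tracking kinematics.
        import numpy as np
        from scipy.signal import find_peaks

        FS = 100                                   # assumed kinematic sampling rate (Hz)
        t = np.arange(0, 10, 1 / FS)
        # Hypothetical hand position: slow tracking plus ~2.5 Hz corrective pulses.
        pos = np.sin(0.5 * t) + 0.05 * np.sin(2.5 * 2 * np.pi * t)

        acc = np.gradient(np.gradient(pos, 1 / FS), 1 / FS)   # second derivative
        peaks, _ = find_peaks(acc, distance=int(0.2 * FS))    # pulses >= 200 ms apart

        print(f"detected {len(peaks)} sub-movement candidates")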

    Performance monitoring during action observation and auditory lexical decisions

    How does the brain monitor performance? Does expertise modulate this process? How does an observer's error-related activity differ from a performer's own error-related activity? How does ambiguity change the markers of error monitoring? In this thesis, I present two EEG studies and a commentary that sought to answer these questions. Both empirical studies concern performance monitoring in two different contexts and from two different personal perspectives, i.e., investigating the effects of expertise on electroencephalographic (EEG) neuromarkers of performance monitoring, and monitoring one's own and others' errors during actions and language processing. My first study focused on characterizing the electrophysiological responses of experts and control individuals while they observed domain-specific actions in wheelchair basketball with correct and wrong outcomes (Chapter II). The aim of the commentary in the following chapter was to highlight the role of Virtual Reality approaches to error prediction during one's own actions (Chapter III). The fourth chapter hypothesised that error monitoring markers are present both during one's own performance errors in a lexical decision task and during the observation of others' performance errors (Chapter IV); however, the results suggested a further modulation by the uncertainty created by our task design. The final chapter presents a general discussion that provides an overview of the results of my PhD work (Chapter V). The opening chapter consists of a literature review of the leading frameworks of performance monitoring, action observation, visuo-motor expertise, and language processing.
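
    The standard analysis behind such error-monitoring neuromarkers can be sketched as follows (synthetic epochs, not the studies' data): average response-locked EEG epochs separately for error and correct trials and inspect the difference wave, where an ERN-like deflection would appear shortly after the response.

        # Sketch: error-minus-correct difference wave from response-locked epochs.
        import numpy as np

        FS = 250                                   # assumed sampling rate (Hz)
        n_samples = int(FS * 0.8)                  # 800 ms response-locked epochs
        rng = np.random.default_rng(3)
        t = np.arange(n_samples) / FS

        correct = rng.normal(size=(80, n_samples))
        error = rng.normal(size=(40, n_samples))
        error += 2.0 * np.exp(-((t - 0.1) ** 2) / 0.001)   # injected ERN-like bump

        diff_wave = error.mean(axis=0) - correct.mean(axis=0)
        peak_ms = 1000 * t[np.argmax(np.abs(diff_wave))]
        print(f"difference-wave peak at {peak_ms:.0f} ms post-response")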

    Soft, wireless periocular wearable electronics for real-time detection of eye vergence in a virtual reality toward mobile eye therapies

    Ocular disorders currently affect the developed world, causing loss of productivity in adults and children. While the cause of such disorders is not clear, neurological issues are often considered the most likely origin. Treatment of strabismus and vergence disorders requires invasive surgery or clinic-based vision therapy that has been used for decades, owing to the lack of alternatives such as portable therapeutic tools. Recent advancements in electronic packaging and image processing techniques have opened the possibility of optics-based portable eye tracking approaches, but several technical and safety hurdles limit the implementation of the technology in wearable applications. Here, we introduce a fully wearable, wireless soft electronic system that offers portable, highly sensitive tracking of eye movements (vergence) via the combination of skin-conformal sensors and a virtual reality system. Advancements in material processing and printing technologies based on aerosol jet printing enable reliable manufacturing of skin-like sensors, while a flexible electronic circuit is prepared by integrating chip components onto a soft elastomeric membrane. Analytical and computational study of a data classification algorithm provides a highly accurate tool for real-time detection and classification of ocular motions. An in vivo demonstration with 14 human subjects captures the potential of the wearable electronics as a portable therapy system that can be easily synchronized with a virtual reality headset.
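
    One simple way such a classifier can separate vergence from conjugate (saccadic) eye movements is sketched below, exploiting the fact that the two eyes rotate in opposite directions during vergence, so two periocular channels anti-correlate. The signals and decision rule are illustrative assumptions, not the paper's actual algorithm.

        # Sketch: vergence vs conjugate movement from two periocular channels.
        import numpy as np

        rng = np.random.default_rng(4)
        t = np.linspace(0, 1, 250)
        step = (t > 0.5).astype(float)

        saccade = np.vstack([step, step]) + 0.05 * rng.normal(size=(2, t.size))
        vergence = np.vstack([step, -step]) + 0.05 * rng.normal(size=(2, t.size))

        def classify(left, right):
            """Anti-correlated channel derivatives indicate a vergence movement."""
            r = np.corrcoef(np.diff(left), np.diff(right))[0, 1]
            return "vergence" if r < 0 else "conjugate movement"

        print(classify(*saccade))    # conjugate movement
        print(classify(*vergence))   # vergence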