
    Towards Mixed-Initiative Human–Robot Interaction: Assessment of Discriminative Physiological and Behavioral Features for Performance Prediction

    The design of human–robot interactions is a key challenge for optimizing operational performance. A promising approach is to consider mixed-initiative interactions, in which the tasks and authority of the human and artificial agents are dynamically defined according to their current abilities. An important issue for the implementation of mixed-initiative systems is to monitor human performance in order to dynamically drive task allocation between human and artificial agents (i.e., robots). We therefore designed an experimental scenario involving missions in which participants had to cooperate with a robot to fight fires while facing hazards. Two levels of robot automation (manual vs. autonomous) were randomly manipulated to assess their impact on the participants’ performance across missions. Cardiac activity, eye-tracking data, and participants’ actions on the user interface were collected. The participants performed differently enough that we could identify high- and low-score mission groups, which also exhibited different behavioral, cardiac, and ocular patterns. More specifically, our findings indicated that the higher level of automation could be beneficial to low-scoring participants but detrimental to high-scoring ones, and vice versa. In addition, inter-subject single-trial classification results showed that the studied behavioral and physiological features were relevant for predicting mission performance. The highest average balanced accuracy (74%) was reached using the features extracted from all input devices. These results suggest that an adaptive HRI driving system aiming to maximize performance could analyze such physiological and behavioral markers online to change the level of automation when it is relevant for the mission purpose.
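
    To make the evaluation scheme concrete, here is a minimal sketch (not the authors' code) of inter-subject single-trial classification scored with balanced accuracy, assuming a feature matrix built from cardiac, ocular, and interface features per mission, a binary high/low score label, and a subject identifier per trial; the synthetic data, feature count, and classifier choice are illustrative assumptions.

```python
# Hedged sketch: inter-subject (leave-one-subject-out) single-trial classification
# of mission performance from physiological/behavioral features.
# X, y and subjects are synthetic placeholders; feature set and classifier are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features, n_subjects = 60, 12, 15
X = rng.normal(size=(n_trials, n_features))           # e.g. HRV, fixation counts, UI actions
y = rng.integers(0, 2, size=n_trials)                 # 1 = high-score mission, 0 = low-score
subjects = rng.integers(0, n_subjects, size=n_trials) # subject id of each mission

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, groups=subjects,
                         cv=LeaveOneGroupOut(), scoring="balanced_accuracy")
print(f"balanced accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```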

    Mixed-initiative mission planning considering human operator state estimation based on physiological sensors

    Missions in which humans work with automated systems are becoming increasingly common and are at risk of failing due to human factors. Indeed, the mission workload may generate stress or mental fatigue, increasing the risk of accidents. The idea of our project is to refine human-robot supervision by using data from physiological sensors (eye-tracking and heart-rate monitoring devices) that provide information about the operator's state. The proof-of-concept mission consists of a ground robot, either autonomous or controlled by a human operator, which has to fight fires that break out randomly. We propose to use the planning framework of Partially Observable Markov Decision Processes (POMDP), along with machine learning techniques, to improve human-machine interaction by optimizing the choice of mode (autonomous or controlled robot) and the display of alarms in the form of visual stimuli. A dataset of demonstrations produced by remote volunteers through an online video game simulating the mission allows a POMDP that infers the human state to be learned and the associated strategy to be optimized. Cognitive availability, current task, type of behavior, situation awareness, and involvement in the mission are examples of the human operator states studied. Finally, the mission scores, consisting of the number of extinguished fires, will quantify the improvement obtained by using physiological data.
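
    To illustrate the POMDP idea described above, the sketch below performs a discrete Bayesian belief update over a hidden operator state driven by a physiological observation; the state space, the observation model, and all numbers are invented for illustration and are not the learned model described in the abstract.

```python
# Hedged sketch: POMDP belief update over a hidden operator state, using a
# discretized physiological observation (e.g. binned heart-rate variability).
# State/observation spaces and probabilities are illustrative placeholders.
import numpy as np

states = ["available", "overloaded"]          # hidden operator states (illustrative)

# T[a][s, s'] : probability of reaching operator state s' from s under action a
T = {
    "autonomous_mode": np.array([[0.9, 0.1],
                                 [0.4, 0.6]]),
    "manual_mode":     np.array([[0.7, 0.3],
                                 [0.2, 0.8]]),
}
# O[s, o] : probability of observing o (0 = high HRV, 1 = low HRV) in state s
O = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def belief_update(b, a, o):
    """One step of the POMDP belief update: predict with T, correct with O."""
    predicted = b @ T[a]
    corrected = predicted * O[:, o]
    return corrected / corrected.sum()

b = np.array([0.5, 0.5])                      # uniform prior over the operator state
for a, o in [("manual_mode", 1), ("manual_mode", 1), ("autonomous_mode", 0)]:
    b = belief_update(b, a, o)
    print(a, o, b.round(3))
```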

    MOMDP-based target search mission taking into account the human operator's cognitive state

    This study discusses the application of sequential decision making under uncertainty and mixed observability to a mixed-initiative robotic target search application. In such a robotic mission, two agents, a ground robot and a human operator, must collaborate to reach a common goal, each in turn using their recognized skills. The originality of the work lies in not treating the human operator as a providential agent when the robot fails. Using data from previous experiments, a Mixed Observability Markov Decision Process (MOMDP) model was designed, which makes it possible to account for random failure events and the partially observable state of the human operator while planning over a long-term horizon. Results show that the collaborative system was in general able to successfully complete or terminate the mission, even when many simultaneous sensor, device, and operator failures occurred. The mixed-initiative framework highlighted in this study thus shows the relevance of taking the operator's cognitive state into account, which makes it possible to compute a policy for the sequential decision problem that avoids re-planning when unexpected (but known) events occur.
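
    As a rough illustration of the mixed-observability idea (not the MOMDP learned in the study), the sketch below factors the state into a fully observable part (the robot's status) and a hidden part (the operator's cognitive state), and maintains a belief over the hidden part only; all model numbers are placeholders.

```python
# Hedged sketch: belief update in a MOMDP with state (x, y), where x is fully
# observable (robot status) and y is hidden (operator cognitive state).
# The belief is kept over y only; all probabilities are illustrative.
import numpy as np

Y = ["attentive", "distracted"]   # hidden operator states (illustrative)

# Ty[(x, a)][y, y'] : hidden-state transition given visible state x and action a
Ty = {
    ("robot_ok",     "ask_operator"): np.array([[0.9, 0.1], [0.5, 0.5]]),
    ("robot_failed", "ask_operator"): np.array([[0.8, 0.2], [0.3, 0.7]]),
}
# Oy[y, o] : probability of an eye-tracking observation o (0 = on task, 1 = off task)
Oy = np.array([[0.85, 0.15],
               [0.30, 0.70]])

def momdp_belief_update(b_y, x, a, o):
    """Update the belief over the hidden part y, conditioned on the observed x."""
    predicted = b_y @ Ty[(x, a)]
    corrected = predicted * Oy[:, o]
    return corrected / corrected.sum()

b_y = np.array([0.7, 0.3])
b_y = momdp_belief_update(b_y, "robot_failed", "ask_operator", o=1)
print("P(attentive), P(distracted):", b_y.round(3))
```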

    Physiological Assessment of Engagement during HRI: Impact of Manual vs Automatic Mode

    The use of physiological measurements to perform online assessment of operators' mental states is crucial in the field of human-robot interaction (HRI) research and is, to the best of our knowledge, still an open topic. In order to progress towards systems that dynamically adapt to operators' mental states, a first step is to determine an adequate protocol that elicits variations in engagement. To this purpose, this work focuses on analyzing an operator's physiological data streams recorded during a human-robot mission executed in an original virtual environment [Drougard17]. The mission consists of a firefighter robot and its human operator cooperating to extinguish fires. A high level of complexity is obtained through the number and the random nature of the events to be handled during the mission. For example, guiding the robot, managing its water tank level, and taking care of its electric charge are all tasks to be accomplished simultaneously, and each can be randomly assigned to autonomous or manual mode. In addition, an extra task consisting of keeping an external water tank at an adequate level to allow robot refills is assigned only to the human operator, in order to continuously solicit their attention. Anyone can experience this mission by visiting the website robot-isae.isae.fr, set up to collect a large amount of behavioral data by crowdsourcing. The mission is accomplished through a remote human-machine interface made of controllers and a screen displaying different areas corresponding to each task. Figure 1 shows the graphical user interface, with the 5 areas of interest (AOIs). The control station is equipped with sensing devices for human data collection: an eye-tracker (SMI), located on the bottom bar of the display, and a portable Bluetooth electrocardiograph (eMotion Faros 360). A specific experimental procedure was defined to guarantee the statistical significance of the recorded dataset: each human operator has to complete a ten-minute mission at least three times, aiming for the best score in terms of fires extinguished, and a complete rest period is imposed between missions to obtain a baseline for cardiac activity. Data were collected from 17 participants of mixed sex (9 female) with an average age of 28.5 (SD = 4.52). The number and duration of fixations per area of interest are extracted from the eye-tracker. The length of inter-beat intervals, the Heart Rate Variability (HRV), and the instantaneous Heart Rate Variability (IHRV) are computed from the ECG. Preliminary results show a lower HRV and IHRV during the mission than during the rest session: Student and Wilcoxon statistical tests confirm a difference of at least 6 (p<0.05). According to the literature, this evidence indicates that the created mission succeeded in engaging the participants. The impact of each mode of operation (manual/autonomous) on the human markers is also observed and analyzed. Contrary to expectations, the operator turns out to be more engaged (lower HRV) during autonomous than manual mode (p<0.05). This is consistent with task difficulty: when the autonomous mode takes over, the operator's priority becomes the only task they must accomplish themselves (external tank filling), which is also designed to be the hardest one. This is confirmed by the IHRV, which on average is greater during manual mode. Moreover, since HRV and IHRV behave in the same way, a real-time human-robot team supervision application can be foreseen. The effect of the current mode of operation is also observable in the number and duration of fixations on the two main AOIs: the video streamed from the robot and the external water tank level. The first attracts the operator's attention mainly in manual mode, the second in autonomous mode (p<0.05). Spearman correlations computed on per-second data samples confirm these results: markers on these AOIs are correlated with the robot mode (rho=0.22, p<0.05), as is the IHRV (rho=0.03, p<0.05). Several kinds of correlation have been identified in the recorded dataset, of which the most significant for describing the human operator's engagement are the following. The number and duration of fixations on the AOIs corresponding to the two main tasks are negatively correlated (rho=-0.6, p<0.05), reflecting the fact that the operator tends to switch attention mainly between these two tasks. The correlation of the IHRV marker with the markers on these AOIs (tank: rho=0.1, p<0.05; video: rho=-0.06, p<0.05) shows that the main tasks are indeed perceived as such by the human operator, while its correlation with the remaining mission time (rho=0.05, p<0.05) expresses a higher engagement as the mission progresses. Moreover, the IHRV is also negatively correlated with performance indices such as the number of extinguished fires (rho=-0.07, p<0.05) and the external tank level (rho=-0.14, p<0.05), meaning that the human operator shows a higher level of engagement when successfully accomplishing the mission. The outcomes of the proposed research confirm that a human-robot interaction mission induces mental state variations that correspond to levels of engagement. The demonstrated link between the per-sample correlations and the global statistical analyses, which return relevant information on the human operator's behavior, validates the possibility of using these markers for online applications. The effect of the alternation of manual and autonomous mode during the mission has been quantified on the markers and paves the way for automatic task allocation by a decisional system based on physiological data classification. Finally, the results obtained through these experiments demonstrate the validity of the overall approach and of the designed virtual environment. Further statistical analyses and the use of additional physiological measurements such as electroencephalography (EEG) are planned for the near future.
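
    For readers interested in how such markers are typically derived, here is a minimal sketch (with synthetic data, not the authors' processing pipeline) computing a common HRV index (SDNN) from inter-beat intervals and a Spearman correlation between a per-second ocular marker and the robot mode; all variable names and values are illustrative.

```python
# Hedged sketch: HRV from inter-beat (RR) intervals and a Spearman correlation
# between a per-second ocular marker and the robot mode. All data are synthetic.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Inter-beat intervals in milliseconds for rest and mission periods (synthetic)
rr_rest    = rng.normal(850, 60, size=300)
rr_mission = rng.normal(800, 45, size=300)

def sdnn(rr):
    """Standard deviation of NN (inter-beat) intervals, a common HRV index."""
    return np.std(rr, ddof=1)

print("HRV (SDNN) rest:   ", round(sdnn(rr_rest), 1), "ms")
print("HRV (SDNN) mission:", round(sdnn(rr_mission), 1), "ms")

# Per-second samples: robot mode (0 = manual, 1 = autonomous) and number of
# fixations on the external-tank AOI during that second (synthetic)
mode = rng.integers(0, 2, size=600)
tank_fixations = rng.poisson(1 + mode)   # more tank fixations in autonomous mode

rho, p = spearmanr(mode, tank_fixations)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```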

    A Loewner-based Approach for the Approximation of Engagement-related Neurophysiological Features

    Currently, in order to increase both the safety and the performance of human-machine systems, researchers from various domains are working together towards using estimates of the operator's mental state in the system's control loop. Mental state estimation is performed using neurophysiological data recorded, for instance, with electroencephalography (EEG). Features such as power spectral densities in specific frequency bands are extracted from these data and used as indices or metrics. Another interesting approach is to identify a dynamic model of such features. Hence, this article discusses the potential use of tools derived from the linear algebra and control communities to approximate a model of neurophysiological features that could be exploited to monitor an operator's engagement. The method provides a smooth interpolation of all the data points, allowing frequency-domain features to be extracted that reveal fluctuations in engagement with growing time-on-task.
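
    To give an idea of the kind of tool involved, the sketch below builds the Loewner and shifted Loewner matrices from frequency-response samples of a toy first-order system, which is the basic construction of the Loewner framework for data-driven interpolation; the toy transfer function and the partition of the data points are illustrative choices, not the neurophysiological features treated in the paper.

```python
# Hedged sketch: Loewner and shifted Loewner matrices from frequency-response
# samples of a toy transfer function H(s) = 1 / (s + 1).
import numpy as np

def H(s):
    return 1.0 / (s + 1.0)

# Interpolation points split into "left" (mu) and "right" (lam) sets
freqs = np.logspace(-1, 1, 8)             # rad/s, illustrative
mu  = 1j * freqs[0::2]                    # left points
lam = 1j * freqs[1::2]                    # right points
v = H(mu)                                 # left samples
w = H(lam)                                # right samples

# Loewner matrix L and shifted Loewner matrix Ls
L  = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])
Ls = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / (mu[:, None] - lam[None, :])

# The numerical rank of [L, Ls] indicates the order of the underlying model
rank = np.linalg.matrix_rank(np.hstack([L, Ls]), tol=1e-8)
print("estimated model order:", rank)     # expected: 1 for this first-order system
```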

    Pre-stimulus antero-posterior EEG connectivity predicts performance in a UAV monitoring task

    Long monitoring tasks without regular actions are becoming increasingly common, from aircraft pilots to train conductors, as these systems grow more automated. Such task contexts are challenging for the human operator because they require inputs at irregular and widely spaced moments, even though these actions are often critical. It has been shown that such conditions lead to divided and distracted attentional states, which in turn reduce the processing of external stimuli (e.g. alarms) and may lead to critical events being missed. In this study we explored to what extent it is possible to predict an operator’s behavioural performance in an Unmanned Aerial Vehicle (UAV) monitoring task using electroencephalographic (EEG) activity. More specifically, we investigated the relevance of large-scale EEG connectivity for performance prediction by correlating relative coherence with reaction times (RT). We show that long-range EEG relative coherence, i.e. between occipital and frontal electrodes, is significantly correlated with RT and that different frequency bands exhibit opposite effects. More specifically, we observed that coherence between occipital and frontal electrodes was negatively correlated with RT at 6 Hz (theta band), with more coherence leading to better performance, and positively correlated with RT at 8 Hz (lower alpha band), with more coherence leading to worse performance. Our results suggest that EEG connectivity measures could be useful in predicting an operator’s attentional state and performance in ecological settings. Hence, these features could potentially be used in a neuro-adaptive interface to improve operator-system interaction and safety in critical systems.
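
    As a rough illustration of this type of analysis (not the authors' pipeline), the sketch below estimates per-trial coherence between a frontal and an occipital channel in a pre-stimulus window and correlates the value near 6 Hz with reaction times across trials; the signals are synthetic, and ordinary magnitude-squared coherence is used here as a stand-in for the relative coherence measure described in the abstract.

```python
# Hedged sketch: per-trial pre-stimulus coherence between two EEG channels
# (e.g. a frontal and an occipital electrode) correlated with reaction time.
# All signals below are synthetic; this only illustrates the analysis idea.
import numpy as np
from scipy.signal import coherence
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
fs = 256                 # sampling rate (Hz)
n_trials = 40
win = 2 * fs             # 2-s pre-stimulus window (samples)

coh_6hz = np.empty(n_trials)
rt = rng.normal(0.6, 0.1, size=n_trials)                # reaction times in s (synthetic)

for k in range(n_trials):
    frontal   = rng.normal(size=win)
    occipital = 0.3 * frontal + rng.normal(size=win)    # partially coupled channels
    f, Cxy = coherence(frontal, occipital, fs=fs, nperseg=fs)
    coh_6hz[k] = Cxy[np.argmin(np.abs(f - 6))]          # coherence near 6 Hz (theta)

rho, p = spearmanr(coh_6hz, rt)
print(f"Spearman rho between 6 Hz coherence and RT: {rho:.2f} (p = {p:.2f})")
```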

    Vers l’application de l’apprentissage par renforcement inverse aux réseaux naturels d’attention

    In order to optimally allocate its limited attentional resources, the human brain suppresses or reinforces the activation of neural circuits: it implements heuristics. In a novel approach, we propose to use inverse reinforcement learning to characterize the activation dynamics of these networks. An experimental protocol is proposed, and the collected data should eventually make it possible to validate this approach.

    Towards a POMDP-based Control in Hybrid Brain-Computer Interfaces

    Brain-Computer Interfaces (BCIs) provide a unique communication channel between the brain and computer systems. After extensive research and implementation across a wide range of application fields, the numerous challenges involved in ensuring reliable and fast data processing have led to the hybrid BCI (hBCI) paradigm, which consists of combining two BCI systems. However, not all challenges have been properly addressed (e.g. re-calibration, idle-state modelling, adaptive thresholds, etc.) to allow hBCI implementation outside of the lab. In this paper, we review electroencephalography-based hBCI studies and state their potential limitations. We propose a sequential decision-making framework based on Partially Observable Markov Decision Processes (POMDPs) to design and control hBCI systems. The POMDP framework is an excellent candidate to deal with the limitations raised above. To illustrate our view, an example architecture using a POMDP-based hBCI control system is provided, and future directions are discussed. We believe this framework will encourage research efforts to provide relevant means of combining information from BCI systems and to push BCIs out of the laboratory.
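
    To illustrate what a belief-based control layer for an hBCI might look like, here is a minimal sketch in which the hidden state is the user's intended command, the observations are discretized outputs of two BCI classifiers, and the controller waits for more evidence until the belief is confident enough to commit; the likelihoods and the simple threshold policy are illustrative assumptions, not the architecture proposed in the paper (a POMDP solver would optimize this trade-off rather than use a fixed threshold).

```python
# Hedged sketch: a simple belief-based controller for a hybrid BCI.
# Hidden state = intended command; observations = discretized outputs of two
# BCI classifiers (e.g. one ERP-based, one motor-imagery-based).
# Probabilities and the threshold policy are illustrative, not the paper's design.
import numpy as np

commands = ["left", "right"]

# O1[s, o], O2[s, o]: likelihood of each classifier's binary output given the intent
O1 = np.array([[0.75, 0.25],
               [0.30, 0.70]])
O2 = np.array([[0.65, 0.35],
               [0.40, 0.60]])

def update(belief, o1, o2):
    """Fuse the two classifier outputs into the belief over the intended command."""
    belief = belief * O1[:, o1] * O2[:, o2]
    return belief / belief.sum()

def control(observations, threshold=0.9):
    """Wait for more evidence until the belief is confident, then commit."""
    belief = np.array([0.5, 0.5])
    for o1, o2 in observations:
        belief = update(belief, o1, o2)
        if belief.max() >= threshold:
            return commands[int(belief.argmax())], belief
    return "no_decision", belief

decision, belief = control([(0, 0), (0, 1), (0, 0)])
print(decision, belief.round(3))
```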

    Characterization with EEG and eye tracking of the impact of time­on­task on a UAV operator

    Boredom, mind wandering, and mental fatigue are common issues in operational UAV supervisory control. They can lead to a lack of situational awareness. We propose to characterize them using physiological sensors in order to perform real-time monitoring and to apply countermeasures in later experiments.

    Operator Engagement During Prolonged Simulated UAV Operation

    Unmanned aerial vehicle (UAV) operation is demanding in terms of attentional resource engagement. As systems grow more automated, operators spend most of their time in long monitoring phases. Although UAV operators' fatigue state has been extensively assessed at the behavioral and oculomotor levels, to our knowledge there is a lack of literature regarding potential cardiac and cerebral markers. Therefore, this study was designed to investigate which markers of operator engagement could be used for mental state estimation in the context of UAV operations. Five volunteers performed a UAV monitoring task for two hours without any break. The task included an alarm monitoring task and a target identification task using a joystick. Only ten alarms occurred during the session, of which only seven required an identification from the operator. The investigated markers were of oculomotor (eye-tracking), cardiac (ECG), and cerebral (EEG) origin. In addition to a significant modulation of alpha power, blink rate, and number of fixations with time-on-task, the main results are a significant correlation of response times with both the cardiac Low Frequency / High Frequency ratio and the number of ocular fixations.
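
    A minimal sketch (with synthetic data, not the study's recordings) of how a cardiac Low Frequency / High Frequency ratio is commonly estimated from inter-beat intervals: resample the RR series on a uniform grid, compute a Welch power spectrum, and integrate the low-frequency (0.04-0.15 Hz) and high-frequency (0.15-0.4 Hz) bands; the band limits and resampling rate follow common HRV practice and are assumptions here.

```python
# Hedged sketch: estimating the cardiac LF/HF ratio from RR intervals (synthetic data).
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d
from scipy.integrate import trapezoid

rng = np.random.default_rng(3)

# Synthetic RR intervals (s) with a slow oscillation, roughly ten minutes of data
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(700)) + rng.normal(0, 0.02, 700)
t = np.cumsum(rr)                         # beat times (s)

# Resample the RR series on a uniform 4 Hz grid before spectral analysis
fs = 4.0
t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
rr_uniform = interp1d(t, rr, kind="cubic")(t_uniform)

f, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)

lf_band = (f >= 0.04) & (f < 0.15)
hf_band = (f >= 0.15) & (f < 0.40)
lf = trapezoid(psd[lf_band], f[lf_band])
hf = trapezoid(psd[hf_band], f[hf_band])
print(f"LF/HF ratio: {lf / hf:.2f}")
```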