GHOST: experimenting countermeasures for conflicts in the pilot's activity
An approach for designing countermeasures to resolve conflicts in aircraft pilots' activities is presented, based on both Artificial Intelligence and Human Factors concepts. The first step is to track the pilot's activity, i.e. to reconstruct what the pilot has actually done, using the flight parameters and reference models describing the mission and procedures. The second step is to detect conflicts in the pilot's activity, which is linked to what really matters for the achievement of the mission. The third step is to design accurate countermeasures that are likely to do better than the existing onboard devices. The three steps are presented and supported by experimental results obtained from private and professional pilots.
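To make the first two steps concrete, here is a minimal Python sketch of tracking the pilot's activity against a reference procedure and flagging omitted steps as potential conflicts. The procedure steps, parameter names, and thresholds are invented for illustration; they are not GHOST's actual models.

```python
# Illustrative sketch only: a toy activity tracker in the spirit of GHOST.
# The procedure model, parameter names and thresholds are hypothetical.

# A reference procedure: ordered steps, each defined by a predicate
# over recorded flight parameters.
PROCEDURE = [
    ("gear_down", lambda p: p["gear"] == "down"),
    ("flaps_set", lambda p: p["flaps_deg"] >= 20),
    ("speed_ok",  lambda p: p["ias_kt"] <= 140),
]

def track_activity(samples):
    """Replay flight-parameter samples, reconstruct which steps the
    pilot has actually performed, and report omissions as conflicts."""
    done = set()
    for p in samples:
        for name, pred in PROCEDURE:
            if name not in done and pred(p):
                done.add(name)
    missing = [name for name, _ in PROCEDURE if name not in done]
    return done, missing  # missing steps matter to mission achievement

samples = [
    {"gear": "up",   "flaps_deg": 0,  "ias_kt": 180},
    {"gear": "down", "flaps_deg": 25, "ias_kt": 150},
]
done, missing = track_activity(samples)
print("performed:", sorted(done))   # performed: ['flaps_set', 'gear_down']
print("conflicts:", missing)        # conflicts: ['speed_ok']
```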
Authority Management and Conflict Solving in Human-Machine Systems
This paper focuses on vehicle-embedded decision autonomy and the human operator's role in so-called autonomous systems. Autonomy control and authority sharing are discussed, and the possible effects of authority conflicts on the human operator's cognition and situation awareness are highlighted. As an illustration, an experiment conducted at ISAE (the French Aeronautical and Space Institute) shows that the occurrence of a conflict leads to perseveration behavior and attentional tunneling in the operator. Formal methods to infer such attentional impairment from the monitoring of physiological and behavioral measures are discussed, and some results are given.
What the heck is it doing? Better understanding human-machine conflicts through models
This paper deals with human-machine conflicts, with a special focus on conflicts caused by an "automation surprise". Considering both the human operator and the machine (autopilot or decision functions) as agents, we propose Petri net based models of two real cases and we show how modelling each agent's possible actions is likely to highlight conflict states as deadlocks in the Petri net. A general conflict model is then proposed, paving the way for further on-line human-machine conflict forecast and detection.
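As an illustration of the deadlock idea (not the authors' actual model), the following Python sketch encodes a toy two-agent Petri net and searches its reachability graph for markings in which no transition is enabled. Place and transition names are invented.

```python
# Hypothetical sketch: a 1-safe Petri net whose markings are sets of
# marked places; a reachable marking enabling no transition is a deadlock,
# here standing for an unresolvable human-machine conflict.

# Transitions: name -> (consumed places, produced places)
TRANSITIONS = {
    "pilot_sets_descent":    ({"pilot_idle"}, {"pilot_expects_descent"}),
    "ap_hidden_mode_change": ({"ap_descent"}, {"ap_alt_hold"}),
    "descend_together":      ({"pilot_expects_descent", "ap_descent"},
                              {"pilot_idle", "ap_descent"}),
}

def successors(marking):
    for name, (consumed, produced) in TRANSITIONS.items():
        if consumed <= marking:  # transition enabled
            yield name, frozenset((marking - consumed) | produced)

def find_deadlocks(initial):
    """Exhaustively explore reachable markings; collect those
    enabling no transition."""
    seen, frontier, deadlocks = {initial}, [initial], []
    while frontier:
        m = frontier.pop()
        succs = list(successors(m))
        if not succs:
            deadlocks.append(m)
        for _, nxt in succs:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return deadlocks

m0 = frozenset({"pilot_idle", "ap_descent"})
for d in find_deadlocks(m0):
    print("deadlock:", sorted(d))
# deadlock: ['ap_alt_hold', 'pilot_expects_descent']
```

The reported deadlock captures the "automation surprise" pattern: the pilot expects a descent while the autopilot has silently switched to altitude hold, and no joint action remains possible.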
Authority management in human-robot systems
In the context of missions accomplished jointly by an artificial agent and a human agent, we focus on a controller of the authority dynamics based on a dependence graph of resources that can be controlled by both agents. The controller is designed to adapt the behaviour of the artificial agent or of the human agent in case of an authority conflict occurring on these resources. The relative authority of two agents regarding the control of a resource is defined, as is the notion of authority conflict, which appears relevant to trigger authority reallocation between agents, as shown by a first experiment. Finally, a second experiment shows that beyond modifying the artificial agent's behaviour, it is also possible to adapt the human operator's behaviour in order to solve such a conflict.
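A minimal sketch of the conflict-detection idea follows, assuming a simple representation of who controls and who is trying to control each shared resource; the resource names and data layout are illustrative, not the paper's controller.

```python
# Hypothetical sketch: detecting authority conflicts on shared resources.

# Which agent currently controls each resource.
control = {"camera": "human", "navigation": "robot"}

# Desired control, as inferred from each agent's ongoing actions.
desired = {
    "human": {"camera", "navigation"},
    "robot": {"navigation"},
}

def authority_conflicts(control, desired):
    """A conflict arises on a resource that one agent controls while
    the other agent is simultaneously trying to control it."""
    conflicts = []
    for resource, holder in control.items():
        for agent, wants in desired.items():
            if agent != holder and resource in wants:
                conflicts.append((resource, holder, agent))
    return conflicts

for resource, holder, challenger in authority_conflicts(control, desired):
    # On detection, the controller could reallocate authority, adapt the
    # artificial agent's behaviour, or prompt the human operator, as in
    # the paper's two experiments.
    print(f"authority conflict on {resource}: {holder} vs {challenger}")
```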
Détection et résolution de conflits d'autorité dans un système homme-robot
In the context of missions carried out jointly by an artificial agent and a human agent, we present a controller of the authority dynamics, based on a dependence graph of the resources that both agents can control, whose objective is to adapt the behaviour of the artificial agent or of the human agent in case of an authority conflict over these resources. We define the relative authority of two agents regarding the control of a resource, as well as the notion of authority conflict: a first experiment shows that such a conflict is a relevant trigger for redistributing authority between agents. A second experiment shows that beyond modifying the artificial agent's behaviour, it is indeed possible to adapt the human operator's behaviour in order to resolve such a conflict.
Premières pistes pour l'autonomie adaptative sans niveaux
In the context of the supervision of one or several artificial agents (robots, drones, etc.) by a human operator, the sharing of roles and authority is a well-established problem. A balance must be found between purely manual control of the vehicles, which generally yields high confidence in the system but subjects the human operator to a heavy workload, and full autonomy of the vehicles, which offers fewer guarantees in uncertain environments and poorer performance. Adjustable (or adaptive) autonomy based on autonomy levels seems to be an answer to this kind of problem. However, this type of approach is not free of flaws: the levels constitute rigid, predefined modes of authority sharing and task allocation, not to mention the lack of critical perspective on the operator's contributions, which are too often regarded as purely beneficial. We present the basic concepts of an approach intended to dynamically adapt an agent's autonomy relative to a human operator, based not on autonomy levels but on the management of resources and of conflicts over the use of these resources.
Authority sharing in human-robot systems
In the context of missions accomplished jointly by an artificial agent and a human agent, we focus on a controller of the authority dynamics based on a dependence graph of resources that can be controlled by both agents. The controller is designed to adapt the behaviour of the artificial agent or of the human agent in case of an authority conflict occurring on these resources. The relative authority of two agents regarding the control of a resource is defined, as is the notion of authority conflict, which appears relevant to trigger authority reallocation between agents, as shown by a first experiment. Finally, a second experiment shows that beyond modifying the artificial agent's behaviour, it is also possible to adapt the human operator's behaviour in order to solve such a conflict.
Petri net-based modelling of human–automation conflicts in aviation
Analyses of aviation safety reports reveal that human–machine conflicts induced by poor automation design are remarkable precursors of accidents. A review of different crew–automation conflicting scenarios shows that they have a common denominator: the autopilot behaviour interferes with the pilot's goal regarding the flight guidance via 'hidden' mode transitions. Considering both the human operator and the machine (i.e. the autopilot or the decision functions) as agents, we propose a Petri net model of those conflicting interactions, which allows them to be detected as deadlocks in the Petri net. In order to test our Petri net model, we designed an autoflight system that was formally analysed to detect conflicting situations. We identified three conflicting situations that were integrated into an experimental scenario in a flight simulator with 10 general aviation pilots. The results showed that the conflicts that we had a priori identified as critical impacted the pilots' performance. Indeed, the first conflict remained unnoticed by eight participants and led to a potential collision with another aircraft. The second conflict was detected by all the participants, but three of them did not manage the situation correctly. The last conflict was also detected by all the participants but provoked a typical automation surprise, as only one participant declared that he had understood the autopilot's behaviour. These behavioural results are discussed in terms of workload and the number of fired 'hidden' transitions. Ultimately, this study reveals that the formal and experimental approaches are complementary for identifying and assessing the criticality of human–automation conflicts.
Practitioner Summary: We propose a Petri net model of human–automation conflicts. An experiment was conducted with general aviation pilots performing a scenario involving three conflicting situations to test the soundness of our formal approach. This study reveals that the formal and experimental approaches are complementary for identifying and assessing the criticality of human–automation conflicts.
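The 'hidden' mode transition at the heart of these scenarios can be illustrated in a few lines of Python. The mode names and the trigger below are invented; the point is only that a system-initiated transition desynchronises the pilot's mental model from the actual autopilot state.

```python
# Illustrative only (invented mode names and trigger): a system-initiated,
# poorly annunciated mode transition leaves the pilot's mental model out of
# sync with the autopilot state -- the "automation surprise" pattern.

class Autopilot:
    def __init__(self):
        self.mode = "VS_DESCENT"

    def capture_altitude(self):
        # 'Hidden' transition: fired by the system, not by a pilot action,
        # and (in a poor design) without a salient annunciation.
        self.mode = "ALT_HOLD"

ap = Autopilot()
pilot_belief = "VS_DESCENT"   # last mode the pilot actively selected

ap.capture_altitude()          # the system fires the hidden transition
if pilot_belief != ap.mode:
    print(f"automation surprise: pilot assumes {pilot_belief}, "
          f"autopilot is in {ap.mode}")
```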
Towards human operator “state” assessment
This paper focuses on an approach to estimate the symbolic "state" of a human operator and to detect attentional tunneling in the context of a human-robot mission. The symbolic "state" results from a fuzzy aggregation of the operator's gaze position and heart rate.
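As a rough illustration of such a fuzzy aggregation, here is a sketch assuming triangular membership functions and a min-based fuzzy AND; the thresholds and the rule are invented, not the paper's actual design.

```python
# Hypothetical sketch of a fuzzy "state" estimate from two measures.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def attentional_tunneling(gaze_dispersion_deg, heart_rate_bpm):
    """Degree (0..1) to which the operator's symbolic 'state' matches
    tunneling: gaze narrowly fixated AND heart rate elevated."""
    gaze_narrow = tri(gaze_dispersion_deg, -1.0, 0.0, 5.0)  # near-zero spread
    hr_high = tri(heart_rate_bpm, 80.0, 110.0, 140.0)
    return min(gaze_narrow, hr_high)  # min as the fuzzy AND

print(attentional_tunneling(gaze_dispersion_deg=1.0, heart_rate_bpm=105))
# 0.8 -- a high degree of membership in the 'tunneling' state
```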
Networks with fourfold connectivity in two dimensions
The elastic properties of planar, C4-symmetric networks under stress and at nonzero temperature are determined by simulation and mean field approximations. Attached at fourfold coordinated junction vertices, the networks are self-avoiding in that their elements (or bonds) may not intersect each other. Two different models are considered for the potential energy of the elements: either Hooke's law springs or flexible tethers (square well potential). For certain ranges of stress and temperature, the properties of the networks are captured by one of several models: at large tensions, the networks behave like a uniform system of square plaquettes, while at large compressions or high temperatures, they display many characteristics of an ideal gas. Under less severe conditions, mean field models with more general shapes (parallelograms) reproduce many essential features of both networks. Lastly, the spring network expands without limit at a two-dimensional tension equal to the force constant of the spring; however, it does not appear to collapse under compression, except at zero temperature.
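For concreteness, plausible forms of the two element potentials can be written as below; the forms are assumptions rather than the paper's stated definitions, though a zero unstretched spring length is consistent with the reported unbounded expansion at a tension equal to the spring constant.

```latex
% Hedged sketch (not taken from the paper): a Hooke spring with zero
% unstretched length, and a flexible tether modelled as a hard-wall
% square well with maximum extension s_0.
\[
  V_{\mathrm{spring}}(r) = \tfrac{1}{2}\,k\,r^{2},
  \qquad
  V_{\mathrm{tether}}(r) =
  \begin{cases}
    0      & r \le s_{0},\\[2pt]
    \infty & r > s_{0}.
  \end{cases}
\]
```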