5 research outputs found
OWL-POLAR: A Framework for Semantic Policy Representation and Reasoning
Peer-reviewed preprint
Authority Management and Conflict Solving in Human-Machine Systems
This paper focuses on vehicle-embedded decision autonomy and the human operator's role in so-called autonomous systems. Autonomy control and authority sharing are discussed, and the possible effects of authority conflicts on the human operator's cognition and situation awareness are highlighted. As an illustration, an experiment conducted at ISAE (the French Aeronautics and Space Institute) shows that the occurrence of a conflict leads to perseveration behavior and attentional tunneling in the operator. Formal methods for inferring such attentional impairment from the monitoring of physiological and behavioral measures are discussed, and some results are given.
Detection and Resolution of Authority Conflicts in a Human-Robot System
In the context of missions carried out jointly by an artificial agent and a human agent, we present a controller of authority dynamics, based on a dependency graph between resources controllable by both agents, whose objective is to adapt the behavior of the artificial agent or of the human agent in case of an authority conflict over these resources. We define the relative authority of two agents with respect to the control of a resource, as well as the notion of authority conflict: a first experiment shows that conflict is indeed a relevant trigger for redistributing authority between agents. A second experiment shows that, beyond modifying the artificial agent's behavior, it is also possible to adapt the human operator's behavior in order to resolve such a conflict.
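The dependency-graph idea behind the authority controller can be sketched roughly as follows. All names, the `Resource` structure, and the conflict rule (two agents conflict when their claimed resources overlap, directly or through dependencies) are illustrative assumptions, not the paper's actual formalism:

```python
# Illustrative sketch (hypothetical API): an authority conflict is detected
# when the human and the artificial agent both claim control, directly or
# via resource dependencies, over the same resource.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    depends_on: list = field(default_factory=list)  # upstream resources

def reachable(resource):
    """All resource names whose control is implied by controlling `resource`."""
    seen, stack = set(), [resource]
    while stack:
        r = stack.pop()
        if r.name not in seen:
            seen.add(r.name)
            stack.extend(r.depends_on)
    return seen

def authority_conflict(claims_a, claims_b):
    """Return the resource names both agents effectively claim."""
    footprint_a = set().union(*(reachable(r) for r in claims_a)) if claims_a else set()
    footprint_b = set().union(*(reachable(r) for r in claims_b)) if claims_b else set()
    return footprint_a & footprint_b

# Example: the operator claims the heading, the robot claims the motor
# the heading depends on; the shared motor is the conflict.
motor = Resource("motor")
heading = Resource("heading", depends_on=[motor])
conflicts = authority_conflict([heading], [motor])  # -> {"motor"}
```

In the papers' terms, detecting a non-empty conflict set would be the trigger for redistributing authority between the two agents.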
An Approach to Operationalize Regulative Norms in Multiagent Systems
Prevention of Harmful Behaviors within Cognitive and Autonomous Agents
Being able to ensure that a multiagent system will not generate undesirable behaviors is essential in the context of critical applications (embedded or real-time systems). Behaviors emerging from the agents' interactions can produce situations incompatible with the expected system execution. Standard methods for validating a multiagent system do not prevent the occurrence of undesirable behaviors during execution under real conditions. We propose a complementary approach of dynamic self-monitoring and self-regulation that allows agents to control their own behavior. The paper then presents the automatic generation of self-controlled agents. We use the observer approach to verify that the agents' behavior respects a set of laws throughout the system execution.
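The observer approach described above can be sketched as a runtime monitor that checks every resulting agent state against a set of laws. The `make_observer` decorator, the law predicates, and the speed-limit example are all hypothetical illustrations under that reading, not the paper's implementation:

```python
# Hedged sketch: wrap an agent's step function so that each new state is
# checked against a set of laws (predicates); a violation is raised
# immediately instead of propagating through the system.

def make_observer(laws):
    """Return a decorator that verifies every post-step state against `laws`."""
    def observe(step):
        def checked(state, action):
            new_state = step(state, action)
            violated = [name for name, law in laws.items() if not law(new_state)]
            if violated:
                raise RuntimeError(f"laws violated: {violated}")
            return new_state
        return checked
    return observe

# Hypothetical law: the agent must never exceed a speed of 10.
laws = {"speed_limit": lambda s: s["speed"] <= 10}

@make_observer(laws)
def step(state, action):
    return {**state, "speed": state["speed"] + action}

state = step({"speed": 8}, 1)  # accepted: resulting speed is 9
```

A call such as `step({"speed": 8}, 5)` would raise, since the resulting state breaks the `speed_limit` law; this mirrors the idea of self-monitoring agents whose behavior is checked against laws throughout execution.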