
    Culture-Based Explainable Human-Agent Deconfliction

    Law codes and regulations have organised societies for centuries, and as AI systems gain autonomy, we ask how human-agent systems can operate as peers under the same norms, especially when resources are contested. We posit that agents must be accountable and explainable, able to refer to the rules that justify their decisions. The need for explanations is associated with user acceptance and trust. This paper's contribution is twofold: i) we propose an argumentation-based human-agent architecture that maps human regulations into a culture for artificial agents with explainable behaviour. Our architecture builds on the notion of argumentative dialogues and generates explanations from the history of such dialogues; and ii) we validate our architecture with a user study in the context of human-agent path deconfliction. Our results show that explanations yield a significantly greater improvement in human performance when systems are more complex. Consequently, we argue that the criteria defining the need for explanations should also consider the complexity of a system. Qualitative findings show that when rules are more complex, explanations significantly reduce humans' perception of challenge.
    Funded by L3Harris ASV and the Royal Commission for the Exhibition of 1851.
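
    As a hedged illustration of the core idea, the minimal Python sketch below generates an explanation from the history of an argumentative dialogue; all class and rule names are hypothetical and do not reflect the paper's actual architecture.

        # Illustrative only: rules act as arguments, and the explanation is the
        # readable trace of which rules each party cited during the dialogue.
        from dataclasses import dataclass, field

        @dataclass
        class Rule:
            rule_id: str
            text: str  # human-readable regulation

        @dataclass
        class Dialogue:
            moves: list = field(default_factory=list)  # ordered (speaker, rule) pairs

            def argue(self, speaker: str, rule: Rule) -> None:
                self.moves.append((speaker, rule))

            def explanation(self) -> str:
                # Explanation = trace of the rules that justified the outcome.
                return " -> ".join(f"{s} cites [{r.rule_id}] {r.text}"
                                   for s, r in self.moves)

        # Hypothetical path-deconfliction exchange between an agent and a human.
        keep_course = Rule("R2", "maintain planned path unless overruled")
        give_way = Rule("R1", "yield to agents with medical priority")

        d = Dialogue()
        d.argue("agent", keep_course)
        d.argue("human", give_way)
        print(d.explanation())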

    Explanation-Aware Experience Replay in Rule-Dense Environments

    Human environments are often regulated by explicit and complex rulesets. Integrating Reinforcement Learning (RL) agents into such environments motivates the development of learning mechanisms that perform well in rule-dense and exception-ridden settings, such as autonomous driving on regulated roads. In this letter, we propose a method for organising experience by partitioning the experience buffer into clusters labelled on a per-explanation basis. We present discrete and continuous navigation environments compatible with modular rulesets, along with 9 learning tasks. For environments with explainable rulesets, we convert rule-based explanations into case-based explanations by allocating state-transitions to clusters labelled with explanations. This allows us to sample experiences in a curricular and task-oriented manner, focusing on the rarity, importance, and meaning of events. We label this concept Explanation-Awareness (XA). We perform XA experience replay (XAER) with intra- and inter-cluster prioritisation, and introduce XA-compatible versions of DQN, TD3, and SAC. Performance is consistently superior with the XA versions of those algorithms compared to traditional Prioritised Experience Replay baselines, indicating that explanation engineering can be used in lieu of reward engineering for environments with explainable features.
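
    To make the mechanism concrete, here is a minimal sketch of explanation-aware replay under stated assumptions: transitions are bucketed by an explanation label, rarer labels are favoured across clusters, and priorities are used within a cluster. Names and the exact weighting scheme are illustrative, not the letter's implementation.

        # Illustrative only: a replay buffer partitioned by explanation label,
        # sampled with inter-cluster (label rarity) and intra-cluster (priority)
        # weighting.
        import random
        from collections import defaultdict

        class XAReplayBuffer:
            def __init__(self):
                # explanation label -> list of (transition, priority)
                self.clusters = defaultdict(list)

            def add(self, transition, explanation, priority=1.0):
                self.clusters[explanation].append((transition, priority))

            def sample(self):
                # Inter-cluster: rarer explanation labels get higher weight.
                labels = list(self.clusters)
                weights = [1.0 / len(self.clusters[l]) for l in labels]
                label = random.choices(labels, weights=weights, k=1)[0]
                # Intra-cluster: prioritised sampling within the chosen cluster.
                bucket = self.clusters[label]
                transition, _ = random.choices(
                    bucket, weights=[p for _, p in bucket], k=1)[0]
                return label, transition

        buf = XAReplayBuffer()
        buf.add(("s0", "a0", -1.0, "s1"), explanation="violated:no-overtaking")
        buf.add(("s1", "a1", 0.1, "s2"), explanation="compliant")
        print(buf.sample())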

    Agree to Disagree: Subjective Fairness in Privacy-Restricted Decentralised Conflict Resolution.

    Fairness is commonly seen as a property of the global outcome of a system and assumes centralisation and complete knowledge. However, in real decentralised applications, agents only have partial observation capabilities. Under limited information, agents rely on communication to divulge some of their private (and unobservable) information to others. When an agent deliberates to resolve conflicts, limited knowledge may cause its perspective of a correct outcome to differ from the actual outcome of the conflict resolution. This is subjective unfairness. As human systems and societies are organised by rules and norms, hybrid human-agent and multi-agent environments of the future will require agents to resolve conflicts in a decentralised and rule-aware way. Prior work achieves such decentralised, rule-aware conflict resolution through cultures: explainable architectures that embed human regulations and norms via argumentation frameworks with verification mechanisms. However, this prior work requires agents to have full state knowledge of each other, whereas many distributed applications in practice afford only partial observability, which may require agents to communicate and carefully opt to release information where privacy constraints apply. To enable decentralised, fairness-aware conflict resolution under privacy constraints, we make two contributions: 1) a novel interaction approach and 2) a formalism of the relationship between privacy and fairness. Our proposed interaction approach is an architecture for privacy-aware explainable conflict resolution in which agents engage in a dialogue of hypotheses and facts. To measure the privacy-fairness relationship, we define subjective and objective fairness at both the local and global scope and formalise the impact of partial observability due to privacy on these different notions of fairness. We first study our proposed architecture and the privacy-fairness relationship in the abstract, testing different argumentation strategies on a large number of randomised cultures. We empirically demonstrate the trade-off between privacy, objective fairness, and subjective fairness, and show that better strategies can mitigate the effects of privacy in distributed systems. In addition to this analysis across a broad set of randomised abstract cultures, we analyse a case study for a specific scenario: we instantiate our architecture in a multi-agent simulation of prioritised rule-aware collision avoidance with limited information disclosure.
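
    The distinction between subjective and objective fairness can be illustrated with a hedged toy example: the winner an agent expects from the facts disclosed to it may differ from the winner decided on the full private information. The priority model below is an assumption for illustration, not the paper's formalism.

        # Illustrative only: subjective unfairness arises when an agent's
        # partial view of a conflict predicts a different outcome than the
        # resolution computed over the full (partly private) information.

        def decide(priorities):
            """Resolve the conflict in favour of the highest-priority agent."""
            return max(priorities, key=priorities.get)

        true_priorities = {"A": 3, "B": 5}   # B holds a private high priority
        revealed_to_A = {"A": 3, "B": 2}     # B discloses only part of its case

        actual = decide(true_priorities)       # objective outcome: B wins
        expected_by_A = decide(revealed_to_A)  # A's subjective view: A wins

        # A perceives the outcome as unfair even though the resolution followed
        # the rules on the full information.
        print(f"actual={actual}, expected by A={expected_by_A}, "
              f"subjectively unfair for A: {actual != expected_by_A}")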

    Safety, Trust, and Ethics Considerations for Human-AI Teaming in Aerospace Control

    Designing a safe, trusted, and ethical AI may be practically impossible; however, designing AI with safe, trusted, and ethical use in mind is both possible and necessary in safety- and mission-critical domains like aerospace. The terms safe, trusted, and ethical use of AI are often treated interchangeably; however, a system can be safely used but not trusted or ethical, have a trusted use that is not safe or ethical, or have an ethical use that is not safe or trusted. This manuscript serves as a primer to illuminate the nuanced differences between these concepts, with a specific focus on applications of Human-AI teaming in aerospace system control, where humans may be in-, on-, or out-of-the-loop of decision-making.

    The European Union and NATO Beyond Berlin Plus: the institutionalisation of informal cooperation

    For a decade, the EU and NATO have both claimed to have a relationship purported to be a 'Strategic Partnership'. However, this relationship is widely understood by both academics and practitioners to be problematic. While not denying that the relationship is problematic, it is claimed here that the argument that the EU and NATO simply do not cooperate is of very limited value. In fact, it is argued that the two organisations cooperate far more, albeit less efficiently, outside of the formal Agreed Framework for cooperation. According to the formal rules of the Berlin Plus/Agreed Framework (BP/AF), the EU and NATO should not cooperate at all outside of the Bosnia and Herzegovina (ALTHEA) context. This is clearly not the case. The fundamental aim of this thesis is to investigate how this cooperation beyond the BP/AF has emerged. Above all, it asks: within a context where formal EU-NATO cooperation is ruled out, what type of cooperation is emerging? This thesis attempts to explain the creation and performance of the informal EU-NATO institutional relationship beyond Berlin Plus. Drawing on insights from historical institutionalist theory and investigating EU-NATO cooperation in counter-piracy, Kosovo, and Afghanistan, it puts forward three general arguments. First, in order for informal EU-NATO cooperation to take place outside of the BP/AF, cooperation is driven spatially away from the central political tools of Brussels towards the common operational areas, and hierarchically downwards to the international staffs and, in particular, towards the operational personnel. Second, although the key assumptions of historical institutionalism (path dependency, punctuated equilibrium, and critical junctures) help to explain the stasis of the EU-NATO relationship at the broad political and strategic level, a more complete understanding of the relationship is warranted. Including theoretical assumptions of incremental change helps to explain the informal cooperation that is now driving EU-NATO relations beyond Berlin Plus. Finally, this thesis makes the fundamental claim that the processes of incremental change through informal cooperation reinforce the current static formal political and strategic relationship. Events and operational necessity are driving incremental change far more than any theoretical debates about where the EU ends and NATO begins. Until events force a situation whereby both organisations must revisit the formal structures of cooperation, the static relationship will continue to exist, reinforced by sporadically releasing the political pressure valve through the processes of informal cooperation. If the EU and NATO are to truly achieve a 'Strategic Partnership', it will stem from an existential security critical juncture and not from internal evolutionary processes.

    The problematisation of autonomous weapon systems - a case study of the US Department of Defense

    Robotic systems play an increasingly important role in armed conflicts, and there are already weapons in service that replace a human being at the point of engagement. The United States (US) is the first country to have adopted a policy on autonomous weapon systems (AWS), in Directive 3000.09. The US policy on AWS is, however, poorly understood in academic and policy circles. This thesis addresses the question of how the US Department of Defense (DoD) problematises the concept of AWS. By applying Bacchi's poststructuralist approach to policy analysis, the thesis asks how the US DoD constructs the 'problem' of AWS, what assumptions underlie this representation of the 'problem', how it has come about, what effects it produces, what is left out of the problem representation, and how it could be questioned. The US DoD's problematisation of AWS not only clarifies the Department's approach but also illuminates the role of human involvement in the use of AWS. The US policy states that AWS shall be used with 'appropriate levels of human judgment'. This term is, however, open to different interpretations: some argue that it prohibits lethal use of AWS, while others disagree. The thesis focuses not only on the content of the US concept of human judgment, but primarily on how this concept relates to the wider US military understanding of 'control'. In doing so, it unpacks the concept of human judgment and distinguishes it from the concept of human control. I argue that both concepts are important in the debate on AWS as they represent alternative policy approaches to the use of such weapons. By making these concepts more explicit, my thesis contributes to the specific and emerging academic debate about the role of human involvement in the use of AWS.