    Motivated cooperation in autonomous agents

    Multi-agent systems are underpinned by the notion of cooperation: the process by which independent agents act together to achieve particular goals. Cooperation between autonomous agents requires appropriate motivations on the part of those agents, since an agent's behaviour is guided by its motivations. Interaction with others involves an inherent risk and, to manage this risk, an agent must consider its trust of others in conjunction with its motivations when entering into, and continuing in, cooperative activity. The aim of this thesis is to develop a framework for motivated cooperation, focusing in particular on the motivational reasons an agent might have for cooperating, and on how it can use the information it has about others (such as their capabilities and trustworthiness) to make informed judgements about the risk involved in cooperating. The main body of this thesis can be decomposed into four parts. First, we introduce the issues associated with motivated cooperation, identify the outstanding problems, and discuss the key related work that gives a context to the thesis. Second, we present the motivated agent architecture, SENARA, which forms the foundation of our framework. Third, we introduce the framework itself, drawing out the details related to motivation and risk, and describing how this framework can be instantiated in particular applications. Finally, we conclude the thesis by considering the contributions it has made and identifying potential areas for future work.
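    The thesis develops this judgement formally within SENARA; as a purely illustrative sketch (the expected-utility rule, names, and numbers below are our own assumptions, not the thesis's model), an agent weighing motivation against trust-adjusted risk might look like this:

```python
from dataclasses import dataclass

@dataclass
class Partner:
    """A candidate cooperation partner as this agent models it."""
    name: str
    capability: float   # estimated chance the partner can do its share, in [0, 1]
    trust: float        # estimated chance the partner will honour the agreement, in [0, 1]

def should_cooperate(motivation: float, goal_value: float,
                     cost_of_failure: float, partner: Partner) -> bool:
    """Enter cooperation only if the motivated expected gain outweighs the risk.

    An illustrative expected-utility rule, not SENARA's actual semantics:
    success requires the partner to be both capable and trustworthy.
    """
    p_success = partner.capability * partner.trust
    expected_gain = motivation * goal_value * p_success
    expected_loss = cost_of_failure * (1.0 - p_success)
    return expected_gain > expected_loss

# Example: a moderately motivated agent facing a reliable partner.
ally = Partner("agent-B", capability=0.9, trust=0.8)
print(should_cooperate(motivation=0.7, goal_value=10.0,
                       cost_of_failure=3.0, partner=ally))   # True
```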

    HAC-ER: a disaster response system based on human-agent collectives

    This paper proposes a novel disaster management system called HAC-ER that addresses some of the challenges faced by emergency responders by enabling humans and agents, using state-of-the-art algorithms, to collaboratively plan and carry out tasks in teams referred to as human-agent collectives. In particular, HAC-ER utilises crowdsourcing combined with machine learning to extract situational awareness information from large streams of reports posted by members of the public and trusted organisations. We then show how this information can inform human-agent teams in coordinating multi-UAV deployments as well as task planning for responders on the ground. Finally, HAC-ER incorporates a tool for tracking and analysing the provenance of information shared across the entire system. In summary, this paper describes a prototype system, validated by real-world emergency responders, that combines several state-of-the-art techniques for integrating humans and agents, and illustrates, for the first time, how such an approach can enable more effective disaster response operations.
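    The paper does not publish HAC-ER's implementation; as a loose illustration of the crowdsourcing-plus-machine-learning step (the classifier choice, labels, and report texts below are invented for the sketch), a toy report-triage pipeline might look like:

```python
# A toy pipeline that tags incoming public reports with situational-awareness
# labels, standing in for HAC-ER's far richer extraction step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: report text -> incident category.
reports = [
    "building collapsed on main street, people trapped",
    "flood water rising near the river bank",
    "fire spreading through the warehouse district",
    "road blocked by fallen trees after the storm",
]
labels = ["collapse", "flood", "fire", "blockage"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(reports, labels)

# New reports arriving from the crowd are triaged automatically; in a real
# human-agent collective, low-confidence cases would be routed to human verifiers.
incoming = ["smoke and flames reported at the school"]
print(model.predict(incoming))               # e.g. ['fire']
print(model.predict_proba(incoming).max())   # confidence, usable for routing
```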

    Supporting group plans in the BDI architecture using coordination middleware

    This is the full version of a paper published as the following extended abstract: Supporting Group Plans in the BDI Architecture using Coordination Middleware (Extended Abstract), Proceedings of the 15th International Conference on Autonomous Agents and Multiagent Systems, 1427-1428, International Foundation for Autonomous Agents and Multiagent Systems, 2016, http://trust.sce.ntu.edu.sg/aamas16/pdfs/p1427.pdf
    This paper investigates the use of group plans and goals as programming abstractions that encapsulate the communication needed to coordinate collaborative behaviour. It presents an extension of the BDI agent architecture to include explicit constructs for goals and plans that involve coordinated action by groups of agents. Formal operational semantics for group goals are provided, and an implementation of group plans and goals for the Jason agent platform is described, based on integration with the ZooKeeper coordination middleware.
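    The described implementation targets the Jason platform (Java); as a hypothetical Python analogue using the kazoo ZooKeeper client (the goal path, team size, and threaded demo are our own assumptions), a team could synchronise entry into a group plan with a double barrier so that no agent begins executing until every member has committed:

```python
# Sketch of agents coordinating entry into a group plan via ZooKeeper,
# loosely analogous to the paper's Jason/ZooKeeper integration.
# Assumes a ZooKeeper server on localhost:2181 and `pip install kazoo`.
import threading

from kazoo.client import KazooClient
from kazoo.recipe.barrier import DoubleBarrier

GROUP_GOAL = "/group_goals/clear_debris"   # hypothetical group goal node
TEAM_SIZE = 3

def run_agent(agent_id: str) -> None:
    client = KazooClient(hosts="127.0.0.1:2181")
    client.start()
    barrier = DoubleBarrier(client, GROUP_GOAL, num_clients=TEAM_SIZE)

    barrier.enter()   # blocks until all TEAM_SIZE agents have committed
    print(f"{agent_id}: all members committed, executing my part of the plan")
    barrier.leave()   # blocks until every member has finished its part

    client.stop()

# In practice each agent runs in its own process; threads suffice for a demo.
for i in range(TEAM_SIZE):
    threading.Thread(target=run_agent, args=(f"agent-{i}",)).start()
```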

    Building an Apparatus: Refractive, Reflective, and Diffractive Readings of Trace Data

    We propose a set of methodological principles and strategies for the use of trace data, i.e., data capturing performances carried out on or via information systems, often at a fine level of detail. Trace data come with a number of methodological and theoretical challenges associated with the inseparable nature of the social and material. Drawing on Haraway and Barad’s distinctions among refraction, reflection, and diffraction, we compare three approaches to trace data analysis. We argue that a diffractive methodology allows us to explore how trace data are not given but created through the construction of a research apparatus to study them. By focusing on the diffractive ways in which traces ripple through an apparatus, it is possible to explore some of the taken-for-granted, invisible dynamics of sociomateriality. Equally important, this approach allows us to describe what distinctions emerge and when, within entwined phenomena in the research process. Empirically, we illustrate the guiding methodological principles and strategies by analyzing trace data from Gravity Spy, a crowdsourced citizen science project on Zooniverse.org. We conclude by suggesting that a diffractive methodology helps us draw together quantitative and qualitative research practices in new and productive ways that allow us to study and design for the entwined and dynamic sociomaterial practices found in contemporary organizations.

    Owning the Law: Intellectual Property Rights in Primary Law


    Deep Learning, transparency and trust in Human Robot Teamwork

    For autonomous AI systems to be accepted and trusted, the users should be able to understand the reasoning process of the system (i.e., the system should be transparent). Robotics presents unique programming difficulties in that systems need to map from complicated sensor inputs such as camera feeds and laser scans to outputs such as joint angles and velocities. Advances in deep neural networks are now making it possible to replace laborious handcrafted features and control code by learning control policies directly from high-dimensional sensor inputs. Because Atari games, where these capabilities were first demonstrated, replicate the robotics problem, they are ideal for investigating how humans might come to understand and interact with agents who have not been explicitly programmed. We present computational and human results for making deep reinforcement learning networks (DRLN) more transparent using object saliency visualizations of internal states, and test the effectiveness of expressing saliency through teleological verbal explanations.
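    The paper's object saliency method is tied to its particular network and Atari setup; as a generic occlusion-style sketch (the Q-value function, frame, and object mask below are placeholders, not the paper's model), one can estimate an object's contribution by blanking its pixels and measuring the change in the chosen action's value:

```python
# Occlusion-based object saliency: mask an object's pixels and see how much
# the network's preferred action value drops. Model and inputs are dummies.
import numpy as np

def q_values(frame: np.ndarray) -> np.ndarray:
    """Placeholder for a trained deep RL network's Q-value head."""
    # A real implementation would run the frame through the network.
    return np.array([frame.mean(), frame.std(), frame.max()])

def object_saliency(frame: np.ndarray, obj_mask: np.ndarray) -> float:
    """Saliency of an object = drop in the chosen action's Q-value when
    the object's pixels are replaced with the background mean."""
    base_q = q_values(frame)
    action = int(np.argmax(base_q))

    occluded = frame.copy()
    occluded[obj_mask] = frame[~obj_mask].mean()   # blank out the object
    return float(base_q[action] - q_values(occluded)[action])

# Toy 84x84 grayscale frame with a bright "object" in one corner.
frame = np.zeros((84, 84)); frame[5:15, 5:15] = 1.0
mask = np.zeros((84, 84), dtype=bool); mask[5:15, 5:15] = True
print(object_saliency(frame, mask))   # positive => the object mattered
```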

    Making and Keeping Probabilistic Commitments for Trustworthy Multiagent Coordination

    In a large number of real world domains, such as the control of autonomous vehicles, team sports, medical diagnosis and treatment, and many others, multiple autonomous agents need to take actions based on local observations, and are interdependent in the sense that they rely on each other to accomplish tasks. Thus, achieving desired outcomes in these domains requires interagent coordination. The form of coordination this thesis focuses on is commitments, where an agent, referred to as the commitment provider, specifies guarantees about its behavior to another, referred to as the commitment recipient, so that the recipient can plan and execute accordingly without taking into account the details of the provider's behavior. This thesis grounds the concept of commitments into decision-theoretic settings where the provider's guarantees might have to be probabilistic when its actions have stochastic outcomes and it expects to reduce its uncertainty about the environment during execution. More concretely, this thesis presents a set of contributions that address three core issues for commitment-based coordination: probabilistic commitment adherence, interpretation, and formulation. The first contribution is a principled semantics for the provider to exercise maximal autonomy that responds to evolving knowledge about the environment without violating its probabilistic commitment, along with a family of algorithms for the provider to construct policies that provably respect the semantics and make explicit tradeoffs between computation cost and plan quality. The second contribution consists of theoretical analyses and empirical studies that improve our understanding of the recipient's interpretation of the partial information specified in a probabilistic commitment; the thesis shows that it is inherently easier for the recipient to robustly model a probabilistic commitment where the provider promises to enable preconditions that the recipient requires than where the provider instead promises to avoid changing already-enabled preconditions. The third contribution focuses on the problem of formulating probabilistic commitments for the fully cooperative provider and recipient; the thesis proves structural properties of the agents' values as functions of the parameters of the commitment specification that can be exploited to achieve orders of magnitude less computation for 1) formulating optimal commitments in a centralized manner, and 2) formulating (approximately) optimal queries that induce (approximately) optimal commitments for the decentralized setting in which information relevant to optimization is distributed among the agents.
    PhD, Computer Science & Engineering
    University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/162948/1/qizhg_1.pd
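    The thesis supplies formal semantics and algorithms for these operations; as a heavily simplified sketch (the encoding and the Monte Carlo check are our own illustration, not the thesis's methods), a probabilistic commitment can be read as "with probability at least p, the committed condition will hold by time T", which a provider might sanity-check against a candidate policy by sampling rollouts:

```python
# A probabilistic commitment as a (deadline, probability) guarantee on a
# condition, checked against a stochastic policy rollout. Purely illustrative.
import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProbabilisticCommitment:
    deadline: int        # time step T by which the condition should hold
    min_prob: float      # guaranteed probability p of achieving it

def respects_commitment(rollout: Callable[[], bool],
                        commitment: ProbabilisticCommitment,
                        n_samples: int = 10_000) -> bool:
    """Monte Carlo check: does the provider's policy achieve the committed
    condition by the deadline with at least the promised probability?"""
    successes = sum(rollout() for _ in range(n_samples))
    return successes / n_samples >= commitment.min_prob

def example_rollout(success_per_step: float = 0.3, deadline: int = 5) -> bool:
    """Toy provider: each step enables the condition with fixed probability."""
    return any(random.random() < success_per_step for _ in range(deadline))

c = ProbabilisticCommitment(deadline=5, min_prob=0.8)
# True in expectation: P(enabled within 5 steps) = 1 - 0.7**5 ~ 0.83 >= 0.8.
print(respects_commitment(lambda: example_rollout(deadline=c.deadline), c))
```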

    An Investigation of Human Annotators' AI Teammate Selection and Compliance Behaviours

    Human-artificial intelligence (AI) collaborative annotation has gained increasing prominence as a result of its enormous potential to combine complementary human and AI strengths, as well as recent developments in AI. However, it is not straightforward to form suitable human-AI teams and design human-AI interaction mechanisms for effective collaborative annotation. Through an exploratory study, this thesis investigated a diverse set of factors that may influence humans' AI teammate selection and compliance behaviours in a collaborative annotation context wherein AI agents serve as suggesters to humans. The study results indicate that multiple factors influenced which AI agents the participants chose to receive suggestions from, such as the AI agents' recent and overall accuracies as well as the participants' suggestion compliance records. We also discovered that the participants' AI compliance decisions were swayed by factors including whether the AI agents' suggestions aligned with the participants' top choices and whether such suggestions provided novel perspectives to the participants. Moreover, it was found that most of the participants constructed narratives to interpret the differences in various AI teammates' behaviours based on limited evidence. This thesis also contributes by presenting MIA, a versatile web platform for mixed-initiative annotation. Informed by the empirical results of the aforementioned exploratory study and of another human-AI collaborative annotation study, and motivated by the goals of improving MIA's scalability and adaptability, this thesis proposes design changes to MIA that also apply to other mixed-initiative annotation platforms.
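    The study's interface and metrics live in the MIA platform itself; as a small sketch of the two accuracy signals participants reportedly attended to (the window size and the selection rule below are invented), a scoreboard over candidate AI suggesters might look like:

```python
# Track overall vs. recent accuracy for candidate AI suggesters, the two
# signals participants reportedly used when picking a teammate.
from collections import deque

class TeammateRecord:
    def __init__(self, name: str, window: int = 10):
        self.name = name
        self.correct = 0
        self.total = 0
        self.recent = deque(maxlen=window)   # last `window` outcomes

    def observe(self, was_correct: bool) -> None:
        self.correct += was_correct
        self.total += 1
        self.recent.append(was_correct)

    @property
    def overall_accuracy(self) -> float:
        return self.correct / self.total if self.total else 0.0

    @property
    def recent_accuracy(self) -> float:
        return sum(self.recent) / len(self.recent) if self.recent else 0.0

def pick_teammate(records: list[TeammateRecord]) -> TeammateRecord:
    # Invented rule: favour recent form, break ties on overall accuracy.
    return max(records, key=lambda r: (r.recent_accuracy, r.overall_accuracy))

a, b = TeammateRecord("AI-1"), TeammateRecord("AI-2")
for outcome in [True, True, False, True]: a.observe(outcome)
for outcome in [True, False, False, False]: b.observe(outcome)
print(pick_teammate([a, b]).name)   # AI-1
```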