160 research outputs found

    Responsible Autonomy

    As intelligent systems increasingly make decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal, and legal values with technological developments in AI, both during the design process and in the deliberation algorithms employed by these systems. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical behavior by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of, and trust in, artificial autonomous systems.
    Comment: IJCAI 2017 (International Joint Conference on Artificial Intelligence)

    Querying Social Practices in Hospital Context

    Understanding the social contexts in which actions and interactions take place is of utmost importance for planning one’s goals and activities. People use social practices as a means to make sense of their environment, assessing how that context relates to past, common experiences, culture, and capabilities. Social practices can therefore simplify deliberation and planning in complex contexts. In the context of patient-centered planning, hospitals seek means to ensure that patients and their families are at the center of decisions and planning of the healthcare processes. This requires, on the one hand, that patients are aware of the practices in place at the hospital and, on the other, that hospitals have the means to evaluate and adapt current practices to the needs of the patients. In this paper we apply a framework for formalizing the social practices of an organization to an emergency department that carries out patient-centered planning. We indicate how such a formalization can be used to answer operational queries about the expected outcomes of actions.
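
    To make the querying idea concrete, the following is a minimal sketch (in Python, which the paper does not use) of how a social practice might be formalized and queried. The SocialPractice structure, the triage example, and all names are hypothetical simplifications assumed for illustration, not the paper's actual framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SocialPractice:
    """A drastically simplified social practice: the context in which it
    applies, the actions expected of each role, and its expected outcome."""
    name: str
    context: frozenset       # conditions that must hold for the practice to apply
    expected_actions: dict   # role -> ordered tuple of expected actions
    outcome: str

# Hypothetical emergency-department practice (illustrative only).
TRIAGE = SocialPractice(
    name="triage",
    context=frozenset({"emergency_department", "patient_arrived"}),
    expected_actions={
        "nurse": ("register_patient", "assess_urgency"),
        "patient": ("describe_symptoms",),
    },
    outcome="patient assigned an urgency category",
)

def applicable(practices, situation):
    """Practices whose context conditions all hold in the current situation."""
    return [p for p in practices if p.context <= situation]

def expected_outcome(practices, situation, role, action):
    """Operational query: what outcome does an applicable practice lead us
    to expect if `role` performs `action` in `situation`?"""
    for p in applicable(practices, situation):
        if action in p.expected_actions.get(role, ()):
            return p.outcome
    return None

situation = {"emergency_department", "patient_arrived"}
print(expected_outcome([TRIAGE], situation, "nurse", "assess_urgency"))
# -> patient assigned an urgency category
```

    A real formalization would also cover norms, timing, and social meaning; the point here is only that, once practices are explicit data, operational queries reduce to simple lookups over them.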

    Clash of the Explainers: Argumentation for Context-Appropriate Explanations

    Understanding when and why to apply any given eXplainable Artificial Intelligence (XAI) technique is not a straightforward task. There is no single approach that is best suited for a given context. This paper aims to address the challenge of selecting the most appropriate explainer given the context in which an explanation is required. For AI explainability to be effective, explanations and how they are presented need to be oriented towards the stakeholder receiving the explanation. If -- in general -- no single explanation technique surpasses the rest, then reasoning over the available methods is required in order to select one that is context-appropriate. Due to the transparency they afford, we propose employing argumentation techniques to reach an agreement over the most suitable explainers from a given set of possible explainers. In this paper, we propose a modular reasoning system consisting of a given mental model of the relevant stakeholder, a reasoner component that solves the argumentation problem generated by a multi-explainer component, and an AI model that is to be explained suitably to the stakeholder of interest. By formalising supporting premises -- and inferences -- we can map stakeholder characteristics to those of explanation techniques. This allows us to reason over the techniques and prioritise the best one for the given context, while also offering transparency into the selection decision.
    Comment: 17 pages, 3 figures, accepted at the XAI^3 Workshop at ECAI 202
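
    As a rough illustration of the argumentation step, the sketch below (in Python; the arguments, attacks, and explainer names are hypothetical, and this is not the paper's reasoner) computes the grounded extension of an abstract argumentation framework and reads the selected explainer off the accepted arguments. It assumes stakeholder characteristics have already been encoded as attacks between arguments.

```python
def grounded_extension(arguments, attacks):
    """Least fixed point of the characteristic function: repeatedly accept
    every argument all of whose attackers are attacked by the current set
    (unattacked arguments are accepted immediately)."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
    extension = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension) for b in attackers[a])
        }
        if defended == extension:
            return extension
        extension = defended

# Hypothetical arguments for candidate explainers, plus a premise drawn
# from the stakeholder's mental model.
arguments = {"use_shap", "use_counterfactual", "stakeholder_is_lay"}
# A lay stakeholder undermines the case for raw feature attributions.
attacks = {("stakeholder_is_lay", "use_shap")}

accepted = grounded_extension(arguments, attacks)
print(sorted(a for a in accepted if a.startswith("use_")))
# -> ['use_counterfactual']
```

    The attack relation itself doubles as the transparency artefact: the reason use_shap was rejected can be read directly off the graph.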

    Group Norms for Multi-Agent Organisations

    W. W. Vasconcelos acknowledges the support of the Engineering and Physical Sciences Research Council (EPSRC-UK) within the research project “Scrutable Autonomous Systems” (Grant No. EP/J012084/1). The authors thank the three anonymous reviewers for their comments, suggestions, and constructive criticisms. Thanks are due to Dr. Nir Oren, for comments on earlier versions of the article, and Mr. Seumas Simpson, for proofreading the manuscript. Any remaining mistakes are the sole responsibility of the authors.

    Using intentional analysis to model knowledge management requirements in communities of practice

    This working document presents a fictitious Knowledge Management (KM) scenario to be modeled using Intentional Analysis, in order to guide the choice of appropriate Information System support for the given situation. In this scenario, a newcomer in a knowledge organization decides to join an existing Community of Practice (CoP) in order to share knowledge and adjust to his new working environment. The preliminary idea is to use Tropos for the Intentional Analysis, allowing us to elicit the requirements for a KM system, followed by the use of the Agent-Object-Relationship Modeling Language (AORML) in the architectural and detailed design phases of software development. Aside from this primary goal, we also intend to point out where the expressiveness of the Intentional Analysis modeling language we are using needs to be extended, and to check where the methodology could be improved to make it more usable. This is the first version of this working document, which we aim to update continually with new findings as the analysis progresses.
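
    For a flavour of what an intentional model captures, here is a minimal sketch (in Python, which the document does not use) of Tropos/i*-style strategic dependencies between actors. The newcomer/CoP fragment and all names are hypothetical, not the scenario's actual model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dependency:
    """A Tropos-style strategic dependency: `depender` relies on
    `dependee` for a goal, task, or resource (the `dependum`)."""
    depender: str
    dependee: str
    dependum: str
    kind: str  # "goal", "task", or "resource"

# Hypothetical fragment of the newcomer/CoP scenario.
model = [
    Dependency("Newcomer", "CoP", "acquire domain knowledge", "goal"),
    Dependency("Newcomer", "CoP", "best-practice documents", "resource"),
    Dependency("CoP", "Newcomer", "share outside experience", "goal"),
]

def dependencies_of(model, actor):
    """Requirements-elicitation query: everything `actor` depends on others for."""
    return [d for d in model if d.depender == actor]

for d in dependencies_of(model, "Newcomer"):
    print(f"{d.depender} depends on {d.dependee} for {d.dependum!r} ({d.kind})")
```

    Dependencies of this kind are what a subsequent AORML design phase would refine into concrete agent interactions.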

    A Framework for Organization-Aware Agents
