21 research outputs found

    Explaining Simulations Through Self Explaining Agents

    Several strategies are used to explain emergent interaction patterns in agent-based simulations. A distinction can be made between simulations in which the agents behave in a purely reactive way, and simulations involving agents that also show pro-active (goal-directed) behavior. Pro-active behavior is more variable and harder to predict than reactive behavior, and therefore it might be harder to explain. However, the approach presented in this paper takes advantage of the agents' pro-activeness by using it to explain their behavior. The aggregation of the agents' explanations forms a basis for explaining the simulation as a whole. In this paper, an agent model that is able to generate (pro-active) behavior and explanations of that behavior is introduced, and the implementation of the model is discussed. Examples show how the link between behavior generation and explanation in the model can contribute to the explanation of a simulation.
    Keywords: Explanation, Agents, Goal-Based Behavior, Virtual Training
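    The abstract does not specify the agent model, but its core idea — agents that record the goal behind each action so that individual explanations can be aggregated into a simulation-level explanation — can be illustrated with a minimal sketch. All class, function, and goal names below are hypothetical, not taken from the paper:

```python
# Minimal sketch of self-explaining, goal-directed agents (hypothetical
# names; not the paper's actual model). Each agent records the goal
# behind every action it takes, and the simulation aggregates those
# records into an overall explanation.
from collections import Counter

class SelfExplainingAgent:
    def __init__(self, name, goal):
        self.name = name
        self.goal = goal          # pro-active (goal-directed) behavior
        self.log = []             # explanation trace: (action, reason)

    def act(self, world):
        # Choose an action that serves the current goal, and record why.
        action = "move_toward" if world.get(self.goal, 0) > 0 else "search"
        self.log.append((action, f"pursuing goal '{self.goal}'"))
        return action

def explain_simulation(agents):
    # Aggregate the agents' individual explanations into a
    # simulation-level summary, most frequent reasons first.
    reasons = Counter(why for a in agents for _, why in a.log)
    return reasons.most_common()

agents = [SelfExplainingAgent(f"a{i}", "food") for i in range(3)]
world = {"food": 5}
for _ in range(2):
    for a in agents:
        a.act(world)

print(explain_simulation(agents))
# → [("pursuing goal 'food'", 6)]
```

    The sketch shows only the link the abstract emphasizes: because behavior generation and explanation share the same goal representation, the explanation comes for free with each action.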

    A Conceptual Framework for Addressing IoT Threats: Challenges in Meeting Challenges

    The Internet of Things (IoT) is growing rapidly and offers many economic and societal benefits. Nevertheless, the IoT also introduces new threats to our Security, Privacy and Safety (SPS). Existing work on mitigating these SPS threats often fails to address the fundamental challenges behind the proposed mitigation measures, and fails to make the relations between different mitigation measures explicit. This paper therefore offers a conceptual framework for understanding and approaching the challenges and obstacles that arise in addressing the SPS threats of the IoT. This contribution aims to help policymakers adopt policies and strategies that stimulate others to develop, deploy and use IoT devices, applications and services in secure, privacy-friendly and safe ways.

    Context-Sensitive Sharedness Criteria for Teamwork (Extended Abstract)

    Teamwork between humans and intelligent systems gains importance as agent and robot technology matures. In the social sciences, sharedness of mental models is used to explain and understand teamwork. To use this concept for developing teams that include agents, we propose context-sensitive sharedness criteria. These criteria specify how much, what, and among whom knowledge in a team should be shared.

    Enhancing Human Understanding through Intelligent Explanations

    Ambient systems that explain their actions promote the user's understanding, as they give the user more insight into the effects of their behavior on the environment. To provide individualized intelligent explanations, we need not only to evaluate a user's observable behavior, but also to make sense of the underlying beliefs, intentions and strategies. In this paper we argue for the need for intelligent explanations, identify the requirements of such explanations, propose a method for generating intelligent explanations, and report on a prototype used in training naval situation assessment and decision making. We discuss the implications of intelligent explanations in training and set the agenda for future research.

    Values in Public Service Media Recommenders

    No full text