22,438 research outputs found

    Proceedings of the 11th European Agent Systems Summer School Student Session

    This volume contains the papers presented at the Student Session of the 11th European Agent Systems Summer School (EASSS), held on 2 September 2009 at the Educatorio della Providenza, Turin, Italy. The Student Session, organised by students, is designed to encourage student interaction and feedback from the tutors. By providing a conference-like setup, both in the presentation and in the review process, the Session gives students the opportunity to prepare their own submissions, go through the selection process, and present their work and interests to fellow students as well as to internationally leading experts in the agent field, from both theory and practice. Table of Contents:
    Andrew Koster, Jordi Sabater Mir and Marco Schorlemmer, Towards an inductive algorithm for learning trust alignment (p. 5);
    Angel Rolando Medellin, Katie Atkinson and Peter McBurney, A Preliminary Proposal for Model Checking Command Dialogues (p. 12);
    Declan Mungovan, Enda Howley and Jim Duggan, Norm Convergence in Populations of Dynamically Interacting Agents (p. 19);
    Akın Günay, Argumentation on Bayesian Networks for Distributed Decision Making (p. 25);
    Michael Burkhardt, Marco Luetzenberger and Nils Masuch, Towards Toolipse 2: Tool Support for the JIAC V Agent Framework (p. 30);
    Joseph El Gemayel, The Tenacity of Social Actors (p. 33);
    Cristian Gratie, The Impact of Routing on Traffic Congestion (p. 36);
    Andrei-Horia Mogos and Monica Cristina Voinescu, A Rule-Based Psychologist Agent for Improving the Performances of a Sportsman (p. 39).
    Keywords: Autonomous Agent, Agent, Artificial Intelligence

    Intrusiveness, Trust and Argumentation: Using Automated Negotiation to Inhibit the Transmission of Disruptive Information

    The question of how to promote the growth and diffusion of information has been extensively addressed by a wide research community. A common assumption underpinning most studies is that the information to be transmitted is useful and of high quality. In this paper, we endorse a complementary perspective: we investigate how the growth and diffusion of high-quality information can be managed and maximized by preventing, dampening and minimizing the diffusion of low-quality, unwanted information. To this end, we focus on the conflict between pervasive computing environments and the joint activities undertaken in parallel local social contexts. As technologies for distributed activities (e.g. mobile technology) develop, both the artifacts and the services that enable people to participate in non-local contexts are likely to intrude on local situations. As a mechanism for minimizing the intrusion of the technology, we develop a computational model of argumentation-based negotiation among autonomous agents. A key role in the model is played by trust: which arguments are used, and how they are evaluated, depends on how trustworthy the agents judge one another. To gain insight into the implications of the model, we conduct a number of virtual experiments. The results enable us to explore how intrusiveness is affected by trust, by the negotiation network and by the agents' ability to conduct argumentation.
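
    The paper's model is not reproduced here, but the core idea of trust-modulated argument evaluation can be sketched. The toy Python example below (the linear weighting rule, the parameter names and the values are all assumptions introduced for illustration, not the authors' model) shows how an agent might discount a sender's argument for interrupting a local activity by its trust in that sender.

```python
# Illustrative sketch only: a trust-weighted rule for deciding whether a
# remote notification may interrupt a local activity. The weighting scheme
# and all names are assumptions, not the model described in the paper.
from dataclasses import dataclass


@dataclass
class Argument:
    claim: str
    strength: float  # sender's claimed importance of the message, in [0, 1]


def accept_interruption(arg: Argument, trust_in_sender: float,
                        local_engagement: float) -> bool:
    """Let the message through only if the trust-discounted strength of the
    sender's argument outweighs the agent's engagement in the local context."""
    return trust_in_sender * arg.strength > local_engagement


urgent = Argument("meeting moved to right now", strength=0.9)
# A highly trusted sender gets through; a barely trusted one is held back.
print(accept_interruption(urgent, trust_in_sender=0.8, local_engagement=0.6))  # True
print(accept_interruption(urgent, trust_in_sender=0.3, local_engagement=0.6))  # False
```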

    The Jiminy Advisor: Moral Agreements Among Stakeholders Based on Norms and Argumentation

    An autonomous system is constructed by a manufacturer, operates in a society subject to norms and laws, and interacts with end users. All of these actors are stakeholders affected by the behavior of the autonomous system. We address the challenge of how the ethical views of such stakeholders can be integrated into the behavior of the autonomous system. We propose an ethical recommendation component, which we call Jiminy, that uses techniques from normative systems and formal argumentation to reach moral agreements among stakeholders. Jiminy represents the ethical views of each stakeholder using normative systems, and has three ways of resolving moral dilemmas that involve the stakeholders' opinions. First, Jiminy considers how the arguments of the stakeholders relate to one another, which may already resolve the dilemma. Second, Jiminy combines the normative systems of the stakeholders, so that their combined expertise may resolve the dilemma. Third, and only if the two other methods have failed, Jiminy uses context-sensitive rules to decide which of the stakeholders takes precedence. At the abstract level, these three methods are characterized by the addition of arguments, the addition of attacks among arguments, and the removal of attacks among arguments. We show how Jiminy can be used not only for ethical reasoning and collaborative decision making, but also for providing explanations about ethical behavior.
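
    To make the abstract-level characterisation concrete, here is a minimal Dung-style argumentation framework in Python exposing the three operations the abstract mentions: adding arguments, adding attacks and removing attacks. It is a generic sketch of abstract argumentation under grounded semantics, not Jiminy's actual implementation, and all argument names are illustrative.

```python
class ArgumentationFramework:
    """Minimal Dung-style abstract argumentation framework (illustrative only)."""

    def __init__(self, arguments=None, attacks=None):
        self.arguments = set(arguments or [])
        self.attacks = set(attacks or [])  # pairs (attacker, target)

    def add_argument(self, arg):
        self.arguments.add(arg)

    def add_attack(self, attacker, target):
        self.attacks.add((attacker, target))

    def remove_attack(self, attacker, target):
        self.attacks.discard((attacker, target))

    def grounded_extension(self):
        """Accepted arguments under grounded semantics: iteratively accept
        arguments all of whose attackers have already been defeated."""
        accepted, defeated = set(), set()
        changed = True
        while changed:
            changed = False
            for arg in self.arguments - accepted - defeated:
                attackers = {a for (a, t) in self.attacks if t == arg}
                if attackers <= defeated:        # every attacker is already out
                    accepted.add(arg)
                    changed = True
                elif attackers & accepted:       # attacked by an accepted argument
                    defeated.add(arg)
                    changed = True
        return accepted


# Two stakeholders' arguments attack each other: a moral dilemma, nothing accepted.
af = ArgumentationFramework({"A", "B"}, {("A", "B"), ("B", "A")})
print(af.grounded_extension())  # set()

# Removing one attack (e.g. a context-sensitive preference for A's stakeholder)
# resolves the dilemma in favour of A.
af.remove_attack("B", "A")
print(af.grounded_extension())  # {'A'}
```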

    Human-Agent Decision-making: Combining Theory and Practice

    Extensive work has been conducted in both game theory and logic to model strategic interaction. An important question is whether we can use these theories to design agents for interacting with people. On the one hand, they provide a formal design specification for agent strategies. On the other hand, people do not necessarily play in accordance with these strategies, and their behavior is affected by a multitude of social and psychological factors. In this paper we consider the question of whether strategies implied by theories of strategic behavior can be used by automated agents that interact proficiently with people. We focus on automated agents that we built to interact with people in two negotiation settings: bargaining and deliberation. For bargaining we study game-theory-based equilibrium agents, and for deliberation we discuss agents based on logic-based argumentation theory. We also consider security games and persuasion games, and discuss the benefits of using equilibrium-based agents.
    Comment: In Proceedings TARK 2015, arXiv:1606.0729
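
    As a small, self-contained illustration of the tension the abstract describes between equilibrium play and human behavior (not an example taken from the paper), consider the one-shot ultimatum game: the subgame-perfect equilibrium proposer offers the minimum positive amount, yet human responders routinely reject such offers. The threshold in the toy responder model below is an assumption chosen for illustration.

```python
# Toy illustration (not from the paper): equilibrium bargaining vs. human-like
# play in the one-shot ultimatum game over an integer pie.

def spe_offer(pie: int) -> int:
    """Subgame-perfect equilibrium offer: a purely self-interested responder
    accepts any positive amount, so the proposer offers the minimum unit."""
    return 1


def human_like_accepts(offer: int, pie: int, fairness_threshold: float = 0.3) -> bool:
    """Assumed toy model of a human responder: reject offers below a fixed
    fraction of the pie, even at a personal cost."""
    return offer >= fairness_threshold * pie


pie = 10
offer = spe_offer(pie)
print(offer)                             # 1
print(human_like_accepts(offer, pie))    # False: the equilibrium offer is rejected
print(human_like_accepts(3, pie))        # True: a fairer offer is accepted
```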

    SAsSy – Scrutable Autonomous Systems

    An autonomous system consists of physical or virtual systems that can perform tasks without continuous human guidance. Autonomous systems are becoming increasingly ubiquitous, ranging from unmanned vehicles, to robotic surgery devices, to virtual agents that collate and process information on the internet. Existing autonomous systems are opaque, which limits their usefulness in many situations, so realising their promise requires techniques for making them scrutable. We believe that the creation of such scrutable autonomous systems rests on four foundations: an appropriate planning representation; a human-understandable reasoning mechanism, such as argumentation theory; natural language generation tools to translate logical statements into natural language; and information presentation techniques that enable the user to cope with the deluge of information autonomous systems can provide. Each of these foundations has its own challenges, as does their integration into a single system.
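
    To show how the four foundations listed above might fit together, here is a deliberately simplified Python sketch of such a pipeline. Every function below is a hypothetical placeholder introduced for illustration; it is not the SAsSy system or its interfaces.

```python
# Hypothetical sketch of a four-stage "scrutable autonomy" pipeline:
# planning -> argumentation-based justification -> natural language
# generation -> information presentation. All names are placeholders.
from typing import List


def plan(goal: str) -> List[str]:
    """Stand-in for the planning representation: the chosen action steps."""
    return [f"survey the area for {goal}", f"report findings on {goal}"]


def justify(steps: List[str], goal: str) -> List[str]:
    """Stand-in for the argumentation layer: attach a reason to each step."""
    return [f"{step}, because it advances the goal '{goal}'" for step in steps]


def verbalise(justified_steps: List[str]) -> str:
    """Stand-in for natural language generation from logical justifications."""
    return " ".join(f"I will {s}." for s in justified_steps)


def present(text: str, max_chars: int = 160) -> str:
    """Stand-in for information presentation: keep the explanation digestible."""
    return text if len(text) <= max_chars else text[:max_chars - 3] + "..."


goal = "pipeline inspection"
print(present(verbalise(justify(plan(goal), goal))))
```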

    The Evidence Hub: harnessing the collective intelligence of communities to build evidence-based knowledge

    Conventional document and discussion websites provide users with no help in assessing the quality or quantity of evidence behind any given idea. Moreover, the very meaning of what evidence is may not be unequivocally defined within a community, and may require deep understanding, common ground and debate. An Evidence Hub is a tool for pooling a community's collective intelligence about what counts as evidence for an idea. It provides an infrastructure for debating and building evidence-based knowledge and practice. An Evidence Hub is best thought of as a filter onto other websites: a map that distills the most important issues, ideas and evidence from the noise by making clear why ideas and web resources may be worth further investigation. This paper describes the Evidence Hub concept and rationale, the breadth of user engagement, and the evolution of specific features, derived from our work with different community groups in the healthcare and educational sectors.

    Rethinking network governance: new forms of analysis and the implications for IGR/MLG

    Our position is that network governance can be understood as a communicative arena. Networks, then, are defined not by the frequency of interactions between actors but by the sharing of, and contest between, different clusters of ideas, theories and normative orientations (discourses) in relation to the specific context within which actors operate. A discourse comprises an ensemble of ideas, concepts and causal theories that give meaning to and reproduce ways of understanding the world (Chouliaraki and Fairclough 1999). Consequently, network governance can be understood as the inherently political process through which discourses are produced, reproduced and transformed. The study of democratic network governance thus becomes the study of how the core challenges of democratic practice are addressed: how legitimacy is awarded, by what mechanisms decisions are reached, and how accountability is enabled. Three approaches to the discursive analysis of democracy in network governance are considered (argumentation analysis, inter-subjectivity, and critical discourse analysis), and their implications for the study of intergovernmental relations and multi-level governance (IGR/MLG) are discussed. Case examples are provided. We conclude that the value for the study of MLG/IGR lies in complementing existing forms of analysis by opening up the communicative and ideational aspects of interactions between levels of government and other actors.

    Designing for interaction

    At present, the design of computer-supported group-based learning ((CS)GBL) is often based on subjective decisions regarding tasks, pedagogy and technology, or on concepts such as ‘cooperative learning’ and ‘collaborative learning’. Critical review reveals these concepts to be insufficiently substantial to serve as a basis for (CS)GBL design. Furthermore, the relationship between outcome and group interaction is rarely specified a priori. There is therefore a need for a more systematic approach to designing (CS)GBL that focuses on eliciting the expected interaction processes. A framework for such a process-oriented methodology is proposed. Critical elements that affect interaction are identified: learning objectives, task type, level of pre-structuring, group size and computer support. The proposed process-oriented method aims to stimulate designers to adopt a more systematic approach to (CS)GBL design according to the interaction expected, while paying attention to the critical elements that affect interaction. This approach may bridge the gap between the observed quality of interaction and learning outcomes, and foster (CS)GBL design that focuses on the heart of the matter: interaction.