24 research outputs found

    Normative systems in computer science. Ten guidelines for normative multiagent systems

    In this paper we introduce and discuss ten guidelines for the use of normative systems in computer science. We adopt a multiagent systems perspective, because norms are used to coordinate, organize, guide, regulate or control interaction among distributed autonomous systems. The first six guidelines are derived from the computer science literature. From the so-called ‘normchange’ definition of the first workshop on normative multiagent systems in 2005 we derive the guidelines to motivate which definition of normative multiagent system is used, to make explicit why norms are a kind of (soft) constraint deserving special analysis, and to explain why and how norms can be changed at runtime. From the so-called ‘mechanism design’ definition of the second workshop on normative multiagent systems in 2007 we derive the guidelines to discuss the use and role of norms as a mechanism in a game-theoretic setting, to clarify the role of norms in the multiagent system, and to relate the notion of “norm” to the legal, social, or moral literature. The remaining four guidelines follow from the philosophical literature: use norms also to resolve dilemmas, and in general to coordinate, organize, guide, regulate or control interaction among agents; distinguish norms from obligations, prohibitions and permissions; use the deontic paradoxes only to illustrate the normative multiagent system; and consider regulative norms in relation to other kinds of norms and other social-cognitive computer science concepts.

    Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support

    A framework and methodology---termed LogiKEy---for the design and engineering of ethical reasoners, normative theories and deontic logics is presented. The overall motivation is the development of suitable means for the control and governance of intelligent autonomous systems. LogiKEy's unifying formal framework is based on semantical embeddings of deontic logics, logic combinations and ethico-legal domain theories in expressive classical higher-order logic (HOL). This meta-logical approach enables powerful tool support in LogiKEy: off-the-shelf theorem provers and model finders for HOL assist the LogiKEy designer of ethical intelligent agents in flexibly experimenting with underlying logics and their combinations, with ethico-legal domain theories, and with concrete examples---all at the same time. Continuous improvements of these off-the-shelf provers directly carry over to the reasoning performance in LogiKEy. Case studies, in which the LogiKEy framework and methodology have been applied and tested, give evidence that HOL's undecidability often does not hinder efficient experimentation.
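    As a rough illustration of the semantical-embedding idea (not LogiKEy's actual encoding, which is carried out in higher-order logic provers), the sketch below evaluates an obligation operator over a small finite Kripke-style model: O(phi) holds at a world exactly when phi holds in all of its ideal alternatives. The worlds, the ideality relation and the example proposition are invented.

```python
# Toy evaluation of a deontic obligation operator over a finite Kripke-style model,
# mirroring how a shallow HOL embedding defines O(phi) as truth of phi in all ideal
# alternatives of the current world. Everything below is invented for illustration.

WORLDS = {"w0", "w1", "w2"}

# ideality relation: ideal[w] is the set of ideal alternatives to w
ideal = {"w0": {"w1", "w2"}, "w1": {"w1"}, "w2": {"w2"}}

# a proposition is the set of worlds in which it holds (i.e. a predicate on worlds)
pay_compensation = {"w1"}

def O(phi: set) -> set:
    """Obligation: O(phi) holds at w iff phi holds in every ideal alternative of w."""
    return {w for w in WORLDS if ideal[w] <= phi}

def valid(phi: set) -> bool:
    """A formula is valid iff it holds in every world of the model."""
    return phi == WORLDS

print(O(pay_compensation))         # worlds where paying compensation is obligatory
print(valid(O(pay_compensation)))  # False: the obligation is not valid in this model
```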

    The Norm Implementation Problem in Normative Multi-Agent Systems

    The norm implementation problem is the problem of how to see to it that the agents in a system comply with the norms specified for that system by the system designer. It is part of the more general problem of how to synthesize or create norms for multi-agent systems, for example by choosing between regimentation and enforcement, or by fixing the punishment associated with a norm violation. In this paper we discuss how various ways to implement norms in a multi-agent system can be distinguished in a formal game-theoretic framework. In particular, we show how different types of norm implementation can all be uniformly specified and verified as transformations of extensive games. We introduce the notion of retarded preconditions for implementing norms, and we illustrate the framework and the various ways to implement norms in the blocks world environment.
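    The contrast between regimentation and enforcement can be pictured with a toy game-tree transformation. The sketch below is an invented illustration, not the paper's formal framework (and it does not model retarded preconditions): regimentation deletes the violating move, while enforcement keeps it but attaches a sanction to the violator's payoff.

```python
# Toy transformation of an extensive game implementing a norm in two ways:
# regimentation removes the violating move; enforcement keeps it but sanctions the
# violator in the resulting outcomes. Game, payoffs and the norm are invented.

from dataclasses import dataclass, field
from typing import Iterator, Optional

@dataclass
class Node:
    player: Optional[str] = None                   # None at terminal nodes
    payoff: Optional[dict] = None                  # player -> utility at terminals
    children: dict = field(default_factory=dict)   # move label -> subgame

# agent "a" may either keep or break a promise made to agent "b"
game = Node(player="a", children={
    "keep":  Node(payoff={"a": 1.0, "b": 1.0}),
    "break": Node(payoff={"a": 2.0, "b": -1.0}),
})

FORBIDDEN = {"break"}   # the norm: breaking the promise counts as a violation

def leaves(node: Node) -> Iterator[Node]:
    if node.payoff is not None:
        yield node
    else:
        for child in node.children.values():
            yield from leaves(child)

def regiment(node: Node) -> Node:
    """Regimentation: forbidden moves are simply removed from the game."""
    if node.payoff is not None:
        return Node(payoff=dict(node.payoff))
    kept = {m: regiment(c) for m, c in node.children.items() if m not in FORBIDDEN}
    return Node(player=node.player, children=kept)

def enforce(node: Node, sanction: float = 3.0) -> Node:
    """Enforcement: forbidden moves stay possible, but the violator is punished."""
    if node.payoff is not None:
        return Node(payoff=dict(node.payoff))
    children = {}
    for move, child in node.children.items():
        new_child = enforce(child, sanction)
        if move in FORBIDDEN:
            for leaf in leaves(new_child):
                leaf.payoff[node.player] -= sanction
        children[move] = new_child
    return Node(player=node.player, children=children)

print(list(regiment(game).children))            # ['keep']: the violation is impossible
print(enforce(game).children["break"].payoff)   # {'a': -1.0, 'b': -1.0}: no longer profitable
```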

    Logic and Games of Norms: a Computational Perspective


    Legal linked data ecosystems and the rule of law

    This chapter introduces the notions of the meta-rule of law and socio-legal ecosystems to both foster and regulate linked democracy. It explores ways of stimulating innovative regulation and of building a regulatory quadrant for the rule of law. The chapter briefly summarises (i) the notions of responsive, better and smart regulation; (ii) requirements for legal interchange languages (legal interoperability); and (iii) cognitive ecology approaches. It shows how the protections of the substantive rule of law can be embedded into the semantic languages of the web of data, and reflects on the conditions that make their enactment and implementation as a socio-legal ecosystem possible. The chapter ends by suggesting a reusable multi-levelled meta-model and four notions of legal validity: positive, composite, formal, and ecological.
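    To make the idea of expressing legal content in web-of-data languages concrete, here is a toy linked-data fragment. The vocabulary, URIs and the validity property are invented stand-ins, not the chapter's actual model; the sketch assumes the rdflib Python library.

```python
# Toy sketch of publishing a legal provision as linked data, in the spirit of the
# legal-interchange-language requirement discussed above. The vocabulary and URIs
# are invented for illustration only.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/legal#")   # hypothetical vocabulary

g = Graph()
g.bind("ex", EX)

# a provision, its issuing authority, and a machine-readable validity claim
g.add((EX.Provision42, RDF.type, EX.LegalProvision))
g.add((EX.Provision42, RDFS.label, Literal("Data controllers must register processing activities")))
g.add((EX.Provision42, EX.issuedBy, EX.DataProtectionAuthority))
g.add((EX.Provision42, EX.hasValidity, EX.PositiveValidity))

print(g.serialize(format="turtle"))
```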

    Proceedings of The Multi-Agent Logics, Languages, and Organisations Federated Workshops (MALLOW 2010)

    MALLOW-2010 is the third edition of a series initiated in 2007 in Durham and pursued in 2009 in Turin. The objective, as initially stated, is to "provide a venue where: the cost of participation was minimum; participants were able to attend various workshops, so fostering collaboration and cross-fertilization; there was a friendly atmosphere and plenty of time for networking, by maximizing the time participants spent together". Full proceedings: http://ceur-ws.org/Vol-627/allproceedings.pdf

    COIN@AAMAS2015

    COIN@AAMAS2015 is the nineteenth edition of the series. The fourteen papers included in these proceedings demonstrate the vitality of the community and provide the grounds for a solid workshop program and what we expect will be a most enjoyable and enriching debate.

    A modal approach to model computational trust

    The concept of trust is a socio-cognitive concept that plays an important role in representing interactions within concurrent systems. When the complexity and unpredictability of a computational system make standard security solutions (commonly called hard security solutions) inapplicable, computational trust is one of the most useful concepts for designing protocols of interaction. In this work, our main objective is to present a prospective survey of the field of computational trust. We also present two trust models, based on logical formalisms, and show how they can be studied and used. While trying to stay general in our study, we use the service-oriented architecture paradigm as a context of study when examples are needed. Our work is subdivided into three chapters. The first chapter presents a general view of computational trust studies in three main steps: trust theories, as first attempts to grasp the notions linked to the concept of trust; fields of application, which make explicit the uses traditionally associated with computational trust; and trust models, as instantiations of a trust theory with respect to some formal framework. Our survey ends with a set of issues that we deem important to address as a priority in order to help the advancement of the field. The next two chapters present two models of trust. Our first model is an instantiation of Castelfranchi and Falcone's socio-cognitive trust theory, implemented using a Dynamic Epistemic Logic that we propose and prove decidable. The main originality of our solution is that our trust definition extends the original model to complex actions (programs, composed services, etc.) and uses authored assignments as a special kind of atomic action; from trust in atomic actions we derive trust in complex actions. The use of our model is then illustrated in a case study on service composition in service-oriented architectures. Our second model extends our socio-cognitive definition to an abductive framework that allows us to associate trust with explanations. This framework is an adaptation of Bochman's production relations to the epistemic case; since Bochman's approach was initially proposed to study causality, our definition of trust in this second model presents trust as a special case of causal reasoning, applied to a social context. Three variants of this model respectively study how trust is generated, how it in turn generates an agent's beliefs, and how it relates to its context of use. We end our manuscript with a conclusion that presents how we would like to extend our work.
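    The socio-cognitive reading of trust referred to above (trust grounded in beliefs about a trustee's capability and willingness with respect to a goal) can be sketched very roughly as follows. This is an invented propositional toy, not the dynamic epistemic logic developed in the thesis.

```python
# Propositional toy of socio-cognitive trust (in the spirit of Castelfranchi and
# Falcone): an agent trusts a trustee for a task, relative to a goal, when it
# believes the trustee is capable of the task, willing to perform it, and that the
# task achieves the goal. All names below are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Belief:
    kind: str                    # "capable", "willing" or "achieves"
    trustee: str
    task: str
    goal: Optional[str] = None   # only used for "achieves" beliefs

class Agent:
    def __init__(self, beliefs: set):
        self.beliefs = beliefs

    def trusts(self, trustee: str, task: str, goal: str) -> bool:
        """Trust = capability belief + willingness belief + goal-achievement belief."""
        return (Belief("capable", trustee, task) in self.beliefs
                and Belief("willing", trustee, task) in self.beliefs
                and Belief("achieves", trustee, task, goal) in self.beliefs)

truster = Agent({
    Belief("capable", "booking_service", "reserve_flight"),
    Belief("willing", "booking_service", "reserve_flight"),
    Belief("achieves", "booking_service", "reserve_flight", "attend_conference"),
})

print(truster.trusts("booking_service", "reserve_flight", "attend_conference"))  # True
print(truster.trusts("booking_service", "cancel_flight", "attend_conference"))   # False
```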

    A platform-independent domain-specific modeling language for multiagent systems

    Associated with the increasing acceptance of agent-based computing as a novel software engineering paradigm, a lot of recent research addresses the development of suitable techniques to support agent-oriented software development. The state of the art in agent-based software development is to (i) design the agent system based on an agent-oriented methodology and (ii) take the resulting design artifact as a basis for manually implementing the agent system using existing agent-oriented programming languages or general-purpose languages like Java. Apart from the errors made when manually transforming an abstract specification into a concrete implementation, the gap between design and implementation may also cause design and implementation to diverge. The framework discussed in this dissertation presents a platform-independent domain-specific modeling language for MASs called Dsml4MAS that allows agent systems to be modeled in a platform-independent and graphical manner. The modeling language comprises (i) an abstract syntax that defines the vocabulary of the language, (ii) a concrete syntax that specifies its graphical representation, and (iii) a formal semantics that gives the vocabulary a precise meaning. Beyond the abstract design, Dsml4MAS makes it possible to automatically (i) check the generated design artifacts against a formal semantic specification to guarantee the well-formedness of the design and (ii) translate the abstract specification into a concrete implementation. Dsml4MAS is embedded in a (semi-automatic) methodology that allows the abstract specification to be refined step by step down to a concrete implementation and that ensures interoperability with alternative software paradigms such as service-oriented architectures. Taken together, this means that for any well-formed design an associated implementation can be generated, closing the gap between design and code.
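    The two automated steps described above, well-formedness checking and code generation from a platform-independent model, can be pictured with a toy metamodel. The classes and constraints below are invented stand-ins, not Dsml4MAS's actual metamodel.

```python
# Toy sketch of checking a platform-independent agent model against simple
# well-formedness constraints and generating a code skeleton from it. The metamodel
# and constraints are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class AgentType:
    name: str
    roles: list = field(default_factory=list)

@dataclass
class MASModel:
    agents: list
    interactions: list   # pairs (sender role, receiver role)

def check_well_formed(model: MASModel) -> list:
    """Return a list of constraint violations (an empty list means well-formed)."""
    errors = []
    declared_roles = {r for a in model.agents for r in a.roles}
    for a in model.agents:
        if not a.roles:
            errors.append(f"agent type '{a.name}' plays no role")
    for sender, receiver in model.interactions:
        for role in (sender, receiver):
            if role not in declared_roles:
                errors.append(f"interaction uses undeclared role '{role}'")
    return errors

def generate_skeleton(model: MASModel) -> str:
    """Generate a purely illustrative Python class skeleton per agent type."""
    lines = []
    for a in model.agents:
        lines.append(f"class {a.name}:")
        lines.append(f"    roles = {a.roles!r}")
        lines.append("")
    return "\n".join(lines)

model = MASModel(
    agents=[AgentType("Buyer", ["initiator"]), AgentType("Seller", ["participant"])],
    interactions=[("initiator", "participant")],
)

assert not check_well_formed(model), check_well_formed(model)
print(generate_skeleton(model))
```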