Empirical-Rational Semantics of Agent Communication
The lack of an appropriate semantics for agent communication languages is one of the most challenging issues of contemporary AI. Although several approaches to this problem exist, none of them is really suitable for dealing with agent autonomy, which is a decisive property of artificial agents. This paper introduces an observation-based approach to the semantics of agent communication which combines the benefits of the two most influential traditional approaches to agent communication semantics, namely the mentalistic (agent-centric) and the objectivist (i.e., commitment- or protocol-oriented) approach. Our approach exploits the fact that the most general meaning of agent utterances lies in their expectable consequences in terms of agent actions, and that communications result from hidden but nevertheless rational and to some extent reliable agent intentions. In this work, we present a formal framework that enables the empirical derivation of communication meanings from the observation of rational agent utterances, and thereby introduce a probabilistic and utility-oriented perspective on social commitments.
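The core idea of deriving utterance meaning empirically from expectable consequences can be sketched very roughly as follows: estimate, for each utterance type, a probability distribution over the actions agents are observed to take afterwards. The flat (utterance, action) log and all names below are simplifying assumptions for illustration, not the paper's formal framework.

```python
from collections import Counter, defaultdict

def empirical_meaning(dialogues):
    """Estimate an observation-based 'meaning' of each utterance type as a
    probability distribution over the actions observed to follow it.
    `dialogues` is a list of (utterance, subsequent_action) pairs -- a toy
    representation, not the paper's formalism."""
    counts = defaultdict(Counter)
    for utterance, action in dialogues:
        counts[utterance][action] += 1
    return {
        u: {a: n / sum(c.values()) for a, n in c.items()}
        for u, c in counts.items()
    }

# Hypothetical log of observed utterance -> follow-up action pairs.
log = [("promise:deliver", "deliver"), ("promise:deliver", "deliver"),
       ("promise:deliver", "defect"), ("request:pay", "pay")]
meaning = empirical_meaning(log)
```

In this toy log, the commitment expressed by "promise:deliver" is only partially reliable, which is what a probabilistic, expectation-based view of social commitments captures.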
Translating Neuralese
Several approaches have recently been proposed for learning decentralized
deep multiagent policies that coordinate via a differentiable communication
channel. While these policies are effective for many tasks, interpretation of
their induced communication strategies has remained a challenge. Here we
propose to interpret agents' messages by translating them. Unlike in typical
machine translation problems, we have no parallel data to learn from. Instead
we develop a translation model based on the insight that agent messages and
natural language strings mean the same thing if they induce the same belief
about the world in a listener. We present theoretical guarantees and empirical
evidence that our approach preserves both the semantics and pragmatics of
messages by ensuring that players communicating through a translation layer do
not suffer a substantial loss in reward relative to players with a common
language.
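The matching criterion, that an agent message and a natural-language string mean the same thing if they induce the same listener belief, can be sketched as choosing the candidate string whose induced belief is closest to the agent message's. The uniform prior, the likelihood tables, and total variation distance are illustrative assumptions, not the paper's trained model.

```python
def belief(message, worlds, likelihood):
    """Listener's posterior over possible worlds after hearing `message`,
    assuming a uniform prior; `likelihood[(message, world)]` is a
    hypothetical lookup table."""
    scores = [likelihood[(message, w)] for w in worlds]
    z = sum(scores)
    return [s / z for s in scores]

def translate(agent_msg, candidate_strings, worlds, agent_lik, human_lik):
    """Pick the natural-language string whose induced listener belief is
    closest (in total variation distance) to the belief induced by the
    agent's message."""
    target = belief(agent_msg, worlds, agent_lik)

    def tv(p, q):
        return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

    return min(candidate_strings,
               key=lambda s: tv(target, belief(s, worlds, human_lik)))
```

With toy likelihood tables for two worlds, a message that concentrates belief on one world translates to whichever string concentrates the human listener's belief on the same world.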
On Automating the Doctrine of Double Effect
The doctrine of double effect (DDE) is a long-studied ethical principle that governs when actions that have both positive and negative effects are to be allowed. The goal in this paper is to automate DDE. We briefly present DDE, and use a first-order modal logic, the deontic cognitive event calculus, as our framework to formalize the doctrine. We present formalizations of increasingly stronger versions of the principle, including what is known as the doctrine of triple effect. We then use our framework to successfully simulate scenarios that have been used to test for the presence of the principle in human subjects. Our framework can be used in two different modes: one can use it to build DDE-compliant autonomous systems from scratch, or one can use it to verify that a given AI system is DDE-compliant, by applying a DDE layer on an existing system or model. For the latter mode, the underlying AI system can be built using any architecture (planners, deep neural networks, Bayesian networks, knowledge-representation systems, or a hybrid); as long as the system exposes a few parameters in its model, such verification is possible. The role of the DDE layer here is akin to a (dynamic or static) software verifier that examines existing software modules. Finally, we present initial work on how one can apply our DDE layer to the STRIPS-style planning model, and to a modified POMDP model. This is preliminary work to illustrate the feasibility of the second mode, and we hope that our initial sketches can be useful for other researchers in incorporating DDE in their own frameworks.
Comment: 26th International Joint Conference on Artificial Intelligence 2017; Special Track on AI & Autonom
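The verification-layer mode can be sketched as a predicate applied to the effects an underlying system exposes for a candidate action. The field names and the exact conditions below are illustrative assumptions inspired by standard statements of the doctrine, not the paper's deontic-cognitive-event-calculus formalization.

```python
def dde_permissible(action):
    """Crude necessary conditions inspired by the doctrine of double effect:
    the action pursues a good effect, the harm is foreseen but not intended,
    the harm is not the means to the good effect, and the good outweighs
    the harm. `action` is a hypothetical dict of exposed model parameters."""
    return (action["good_utility"] > 0
            and not action["harm_intended"]
            and not action["harm_is_means"]
            and action["good_utility"] >= abs(action["harm_utility"]))

# Classic trolley-style contrast in this toy encoding: diverting the
# trolley harms as a side effect; pushing uses the harm as the means.
switch = {"good_utility": 5, "harm_utility": -1,
          "harm_intended": False, "harm_is_means": False}
push = {"good_utility": 5, "harm_utility": -1,
        "harm_intended": False, "harm_is_means": True}
```

Under this encoding the layer permits `switch` and rejects `push`, mirroring the pattern the human-subject scenarios test for.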
Socionics: Sociological Concepts for Social Systems of Artificial (and Human) Agents
Socionics is an interdisciplinary approach with the objective of using sociological knowledge about the structures, mechanisms and processes of social interaction and social communication as a source of inspiration for the development of multi-agent systems, both for the purposes of engineering applications and of social theory construction and social simulation. The approach has been spelled out from 1998 on within the Socionics priority program funded by the German national research foundation. This special issue of the JASSS presents research results from five interdisciplinary projects of the Socionics program. The introduction gives an overview of the basic ideas of the Socionics approach and summarizes the work of these projects.
Keywords: Socionics, Sociology, Multi-Agent Systems, Artificial Social Systems, Hybrid Systems, Social Simulation
Epistemic Pluralism
The present paper aims to promote epistemic pluralism as an alternative view of non-classical logics. For this purpose, a bilateralist logic of acceptance and rejection is developed in order to mark an important difference between several concepts of epistemology, including information and justification. Moreover, the notion of disagreement corresponds to a set of epistemic oppositions between agents. The result is a non-standard theory of opposition for many-valued logics, rendering total and partial disagreement in terms of epistemic negation and semi-negations.
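The bilateralist idea, that acceptance and rejection are tracked independently rather than one being the negation of the other, can be sketched with a toy attitude type and a classifier for oppositions between agents. The labels "total" and "partial" and the exact conditions are illustrative assumptions, not the paper's definitions.

```python
from typing import NamedTuple

class Attitude(NamedTuple):
    """Bilateral attitude toward a proposition: acceptance and rejection
    are independent, so an agent may accept, reject, do both, or neither."""
    accepts: bool
    rejects: bool

def disagreement(a: Attitude, b: Attitude) -> str:
    """Classify the opposition between two agents' attitudes: 'total' when
    each accepts exactly what the other rejects, 'partial' when they differ
    on only one component -- a toy rendering of epistemic oppositions."""
    if a == b:
        return "agreement"
    if a.accepts == b.rejects and a.rejects == b.accepts:
        return "total"
    return "partial"
```

For example, `Attitude(True, False)` against `Attitude(False, True)` is total disagreement, while `Attitude(True, False)` against `Attitude(True, True)` is only partial.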
On the nature and role of intersubjectivity in communication
We outline a theory of human agency and communication and discuss the role that the capability to share (that is, intersubjectivity) plays in it. All the notions discussed are cast in a mentalistic and radically constructivist framework. We also introduce and discuss the relevant literature.