    Norms and accountability in multi-agent societies

    It is argued that norms are best understood as classes of constraints on practical reasoning, which an agent may consult either to select appropriate goals or commitments according to the circumstances, or to construct a discursive justification for a course of action after the event. We also discuss the question of how norm conformance can be enforced in an open agent society, arguing that some form of peer pressure is needed in open agent societies lacking universally recognised rules or any accepted authority structure. The paper includes formal specifications of some data structures that may be employed in reasoning about normative agents.
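The abstract mentions formal data structures for reasoning about normative agents but does not reproduce them here. The following is a minimal illustrative sketch, not the paper's actual formalism: the `Norm` class, its fields, and the traffic example are all hypothetical, chosen only to show how a norm can act as a consultable constraint on action selection.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Norm:
    """A norm modelled as a constraint on practical reasoning (hypothetical sketch)."""
    deontic: str                       # "obligation" or "prohibition"
    condition: Callable[[dict], bool]  # circumstances under which the norm applies
    action: str                        # the action the norm regulates

    def permits(self, state: dict, action: str) -> bool:
        """Consult the norm: does performing `action` in `state` conform to it?"""
        if not self.condition(state):
            return True   # norm is not applicable in these circumstances
        if self.deontic == "prohibition":
            return action != self.action
        return True       # obligations constrain goal selection, not a single action

# Usage: a prohibition on overtaking when visibility is poor
no_overtake = Norm("prohibition", lambda s: s["visibility"] == "poor", "overtake")
print(no_overtake.permits({"visibility": "poor"}, "overtake"))  # False
print(no_overtake.permits({"visibility": "good"}, "overtake"))  # True
```

An agent selecting among candidate actions could filter them through `permits`, or cite the violated norm afterwards as a discursive justification, matching the dual role the abstract describes.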

    Norm Monitoring under Partial Action Observability

    In the context of using norms for controlling multi-agent systems, a vitally important question that has not yet been addressed in the literature is the development of mechanisms for monitoring norm compliance under partial action observability. This paper proposes the reconstruction of unobserved actions to tackle this problem. In particular, we formalise the problem of reconstructing unobserved actions, and propose an information model and algorithms for monitoring norms under partial action observability using two different processes for reconstructing unobserved actions. Our evaluation shows that reconstructing unobserved actions significantly increases the number of norm violations and fulfilments detected. Comment: Accepted at IEEE Transactions on Cybernetics.
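The paper's own information model and algorithms are not given in the abstract. As a hedged illustration of the general idea, the sketch below reconstructs unobserved actions from consecutive observed states: any action whose effects explain the state change is a candidate, and a violation is reported when every candidate is prohibited. The action model, state keys, and prohibition set are all invented for the example.

```python
# Hypothetical action model: action name -> (affected state key, resulting value)
ACTIONS = {
    "enter_zone": ("location", "restricted"),
    "leave_zone": ("location", "outside"),
}

PROHIBITED = {"enter_zone"}  # norms forbidding these actions (assumed)

def reconstruct(prev_state: dict, next_state: dict) -> list:
    """Return the actions whose effects explain the observed state change."""
    candidates = []
    for name, (key, value) in ACTIONS.items():
        if prev_state.get(key) != value and next_state.get(key) == value:
            candidates.append(name)
    return candidates

def detect_violation(prev_state: dict, next_state: dict) -> bool:
    """A violation is certain when every reconstructed action is prohibited."""
    candidates = reconstruct(prev_state, next_state)
    return bool(candidates) and all(a in PROHIBITED for a in candidates)

# The monitor never observed the action, yet the state change betrays it:
print(detect_violation({"location": "outside"}, {"location": "restricted"}))  # True
```

The point of the sketch matches the abstract's finding: without reconstruction, an unobserved `enter_zone` would go unpunished; with it, the monitor detects the violation from the state transition alone.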

    07122 Abstracts Collection -- Normative Multi-agent Systems

    From 18.03.07 to 23.03.07, the Dagstuhl Seminar 07122 ``Normative Multi-agent Systems'' was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Machine Learning, Functions and Goals

    Machine learning researchers distinguish between reinforcement learning and supervised learning and refer to reinforcement learning systems as “agents”. This paper vindicates the claim that systems trained by reinforcement learning are agents while those trained by supervised learning are not. Systems of both kinds satisfy Dretske’s criteria for agency, because they both learn to produce outputs selectively in response to inputs. However, reinforcement learning is sensitive to the instrumental value of outputs, giving rise to systems which exploit the effects of outputs on subsequent inputs to achieve good performance over episodes of interaction with their environments. Supervised learning systems, in contrast, merely learn to produce better outputs in response to individual inputs.
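The contrast the abstract draws can be made concrete with a minimal sketch (the function names and numbers are illustrative, not from the paper): a supervised loss scores each output against a label for that input alone, whereas an episodic return credits an action with rewards that arrive only at later steps, i.e. with the action's instrumental value.

```python
def supervised_loss(outputs, labels):
    """Mean squared error: each output is judged per input, with no
    reference to what happens at later steps."""
    return sum((o - l) ** 2 for o, l in zip(outputs, labels)) / len(labels)

def episode_return(rewards, gamma=0.9):
    """Discounted return: the score of a step includes rewards from later
    steps, because the action changed which inputs came next."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

print(supervised_loss([1.0, 2.0], [1.0, 3.0]))  # 0.5
print(episode_return([0, 0, 1]))                # 0.81 (reward two steps later still counts)
```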

    Vigilance and control

    We sometimes fail unwittingly to do things that we ought to do. And we are, from time to time, culpable for these unwitting omissions. We provide an outline of a theory of responsibility for unwitting omissions. We emphasize two distinctive ideas: (i) many unwitting omissions can be understood as failures of appropriate vigilance, and (ii) the sort of self-control implicated in these failures of appropriate vigilance is valuable. We argue that the norms that govern vigilance and the value of self-control explain culpability for unwitting omissions.