
    C-IPS: Specifying decision interdependencies in negotiations

    Negotiation is an important mechanism of coordination in multiagent systems. Contrary to early conceptualizations of negotiating agents, we believe that decisions about the negotiation issue and the negotiation partner are just as important as the selection of negotiation steps. Our C-IPS approach treats these three aspects as separate decision processes and requires an explicit specification of the interdependencies between them. In this article we address the task of specifying these dynamic interdependencies by means of IPS dynamics, thereby introducing a new level of modeling negotiating agents that lies above negotiation mechanism and protocol design. IPS dynamics are presented using statecharts. We define some generally required states, predicates, and actions, and illustrate the dynamics with a simple example. The example is first specified for an idealized scenario and then extended to a more realistic model that captures some features of open multiagent systems. The well-structured reasoning process for negotiating agents enables more comprehensive and hence more flexible architectures, and the explicit modeling of all involved decisions and dependencies eases the understanding, evaluation, and comparison of different approaches to negotiating agents.
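    To make the separation of the three decision processes concrete, the following is a minimal Python sketch, not the paper's statechart formalism: the phase names and the single dependency rule (a partner may only be chosen once the issue is fixed) are illustrative assumptions.

        from enum import Enum, auto

        class Phase(Enum):
            UNDECIDED = auto()   # no choice made yet for this aspect
            DECIDED = auto()     # a choice is currently committed
            INVALID = auto()     # a previous choice has been invalidated

        class NegotiationState:
            """Illustrative only: one separate decision process per C-IPS aspect."""
            def __init__(self):
                self.issue = Phase.UNDECIDED
                self.partner = Phase.UNDECIDED
                self.step = Phase.UNDECIDED

            def may_select_partner(self) -> bool:
                # example interdependency: partner selection depends on a fixed issue
                return self.issue is Phase.DECIDED

            def invalidate_issue(self) -> None:
                # invalidating the issue cascades to the dependent decisions,
                # mirroring explicitly specified dynamic interdependencies
                self.issue = Phase.INVALID
                self.partner = Phase.INVALID
                self.step = Phase.INVALID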

    Rich Counter-Examples for Temporal-Epistemic Logic Model Checking

    Model checking verifies that a model of a system satisfies a given property, and otherwise produces a counter-example explaining the violation. The verified properties are formally expressed in temporal logics. Some temporal logics, such as CTL, are branching: they allow one to express facts about the whole computation tree of the model, rather than about each single linear computation. This branching aspect is even more critical when dealing with multi-modal logics, i.e., logics expressing facts about systems with several transition relations. A prominent example is CTLK, a logic that reasons about temporal and epistemic properties of multi-agent systems. In general, model checkers produce linear counter-examples for failed properties, composed of a single computation path of the model. But some branching properties are only poorly and partially explained by a linear counter-example. This paper proposes richer counter-example structures called tree-like annotated counter-examples (TLACEs), for properties in Action-Restricted CTL (ARCTL), an extension of CTL that quantifies over paths restricted in terms of the actions labeling transitions of the model. These counter-examples have a branching structure that supports a more complete description of property violations. Elements of these counter-examples are annotated with parts of the property to give a better understanding of their structure. Visualization and browsing of these richer counter-examples become a critical issue, as the number of branches and states can grow exponentially for deeply nested properties. This paper formally defines the structure of TLACEs, characterizes adequate counter-examples w.r.t. models and failed properties, and gives a generation algorithm for ARCTL properties. It also illustrates the approach with examples in CTLK, using a reduction of CTLK to ARCTL. The proposed approach has been implemented, first by extending the NuSMV model checker to generate and export branching counter-examples, and secondly by providing an interactive graphical interface to visualize and browse them. Comment: In Proceedings IWIGP 2012, arXiv:1202.422
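    The branching, annotated shape of a TLACE can be pictured with a small data-structure sketch; the field names below are illustrative assumptions, not the paper's formal definitions.

        from dataclasses import dataclass, field
        from typing import Dict, List

        @dataclass
        class TlaceBranch:
            annotation: str                # the sub-formula this branch is meant to witness or refute
            path: List["TlaceNode"]        # a finite path of the model explaining that sub-formula

        @dataclass
        class TlaceNode:
            state: Dict[str, bool]                                 # valuation of the model variables in this state
            annotations: List[str] = field(default_factory=list)   # parts of the property attached to this state
            branches: List[TlaceBranch] = field(default_factory=list)  # one sub-explanation per branching obligation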

    MCMAS: an open-source model checker for the verification of multi-agent systems

    We present MCMAS, a model checker for the verification of multi-agent systems. MCMAS supports efficient symbolic techniques for the verification of multi-agent systems against specifications representing temporal, epistemic and strategic properties. We present the underlying semantics of the specification language supported and the algorithms implemented in MCMAS, including its fairness and counterexample generation features. We provide a detailed description of the implementation. We illustrate its use by discussing a number of examples and evaluate its performance by comparing it against other model checkers for multi-agent systems on a common case study.
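    As a rough illustration of the epistemic part of such specifications (explicit-state Python, not MCMAS's symbolic OBDD-based implementation), a state satisfies "agent i knows p" exactly when every state the agent cannot distinguish from it satisfies p; the encoding below is an assumption for illustration only.

        def sat_knows(states, indist, sat_p):
            """states: set of global states; indist[s]: the states agent i cannot
            distinguish from s; sat_p: the states satisfying the inner property p.
            Returns the set of states satisfying K_i p."""
            return {s for s in states if indist[s] <= sat_p}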

    A weakness measure for GR(1) formulae

    In spite of the theoretical and algorithmic developments for system synthesis in recent years, little effort has been dedicated to quantifying the quality of the specifications used for synthesis. When dealing with unrealizable specifications, finding the weakest environment assumptions that would ensure realizability is typically a desirable property; in such a context the weakness of the assumptions is a major quality parameter. The question of whether one assumption is weaker than another is commonly interpreted using implication or, equivalently, language inclusion. However, this interpretation does not provide any further insight into the weakness of assumptions when implication does not hold. To our knowledge, the only measure that is capable of comparing two formulae in this case is entropy, but even it fails to provide a sufficiently refined notion of weakness in the case of GR(1) formulae, a subset of linear temporal logic formulae that is of particular interest in controller synthesis. In this paper we propose a more refined measure of weakness based on the Hausdorff dimension, a concept that captures the notion of size of the omega-language satisfying a linear temporal logic formula. We identify the conditions under which this measure is guaranteed to distinguish between weaker and stronger GR(1) formulae. We evaluate our proposed weakness measure in the context of computing GR(1) assumption refinements.
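    For reference, a standard textbook formulation of the two measures contrasted above (not the paper's own development), writing pref_n(L) for the set of length-n prefixes of words in the omega-language L over alphabet Sigma:

        % entropy of an omega-language
        \[ H(L) = \limsup_{n \to \infty} \frac{\log_{|\Sigma|} \lvert \mathrm{pref}_n(L) \rvert}{n} \]
        % Hausdorff dimension, with \mathcal{H}^s the s-dimensional Hausdorff
        % outer measure induced by the usual prefix metric on \Sigma^\omega
        \[ \dim_H(L) = \inf \{\, s \ge 0 : \mathcal{H}^s(L) = 0 \,\} \]

    In general the Hausdorff dimension of an omega-language never exceeds its entropy, and it can differ between languages whose entropies coincide, which is what makes it a candidate for a finer-grained weakness measure.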

    Using Agent JPF to Build Models for Other Model Checkers

    We describe an extension to the AJPF agent program model checker so that it may be used to generate models for input into other, non-agent, model checkers. We motivate this adaptation, arguing that it improves the efficiency of the model-checking process and provides access to richer property specification languages. We illustrate the approach by describing the export of AJPF program models to Spin and Prism. In the case of Spin we also investigate, experimentally, the effect the process has on the overall efficiency of model checking.

    Symbolic Model Checking for Dynamic Epistemic Logic

    Dynamic Epistemic Logic (DEL) can model complex information scenarios in a way that appeals to logicians. However, existing DEL implementations are ad hoc, so we do not know how the framework really performs. To address this, we hook up with the best available model-checking and SAT techniques in computational logic. We do this by first providing a bridge: a new, faithful representation of DEL models as so-called knowledge structures that allow for symbolic model checking. Next, we show that we can now solve well-known benchmark problems in epistemic scenarios much faster than with existing DEL methods. Finally, we show that our method is not just a matter of implementation, but that it raises significant issues about logical representation and update.
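    A minimal explicit-state sketch of the knowledge-structure idea (the paper's representation is symbolic, working on boolean formulas rather than enumerated states): a structure consists of a vocabulary V of atoms, a state law theta carving out the legal states, and a set O_i of observable atoms per agent. Names and the brute-force enumeration below are illustrative assumptions.

        from itertools import chain, combinations

        def all_states(V, theta):
            """Enumerate subsets of the vocabulary V (a state = set of true atoms)
            and keep those satisfying the state law theta (a predicate on sets)."""
            subsets = chain.from_iterable(combinations(V, r) for r in range(len(V) + 1))
            return [s for s in (frozenset(c) for c in subsets) if theta(s)]

        def knows(states, O_i, s, phi):
            """Agent i knows phi at state s iff phi holds in every legal state
            agreeing with s on the agent's observable atoms O_i."""
            return all(phi(t) for t in states if t & O_i == s & O_i)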

    Reports of the AAAI 2019 spring symposium series

    Applications of machine learning combined with AI algorithms have propelled unprecedented economic disruptions across diverse fields including industry, the military, medicine, and finance. The present economic impact of machine learning is estimated in the trillions of dollars, and even larger impacts are forecast. But as autonomous machines become ubiquitous, problems have surfaced. Early on, and again in 2018, Judea Pearl warned AI scientists that they must "build machines that make sense of what goes on in their environment," a warning still unheeded that may impede future development. For example, self-driving vehicles often rely on sparse data; self-driving cars have already been involved in fatalities, including that of a pedestrian; and yet machine learning is unable to explain the contexts within which it operates.