2,325 research outputs found

    Coordination under Limited Depth of Reasoning

    This paper provides the first unified explanation of behavior in coordinated attack games under both public and private information. It demonstrates that the main experimental results, such as threshold strategies, comparative statics, and the differences in behavior under public and private information, are robust predictions of limited-depth-of-reasoning models. This is in contrast to equilibrium, which mispredicts the coordinating roles of public and private information. The analysis has implications for understanding macroeconomic phenomena, like currency attacks and debt crises, which are commonly modeled using incomplete-information coordinated attack games.
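
    The mechanics of limited depth of reasoning are easy to see in a toy simulation. The sketch below is not the paper's model; the uniform prior, noise scale, attack cost, and random level-0 anchor are all illustrative assumptions. It computes level-k best responses by Monte Carlo in a two-player regime-change game with private signals, and shows that each level's best response takes the form of a threshold strategy in the private signal.

```python
# A minimal level-k sketch (not the paper's model): a two-player
# regime-change game with noisy private signals. All parameters are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 500_000          # Monte Carlo draws
COST = 0.5           # cost of attacking
SIGMA = 0.2          # noise on private signals

theta = rng.uniform(0.0, 2.0, N)            # regime strength
x1 = theta + rng.normal(0.0, SIGMA, N)      # player 1's private signal
x2 = theta + rng.normal(0.0, SIGMA, N)      # player 2's private signal

def attack_level0(x):
    # Level-0 anchor: attack uniformly at random, ignoring the signal.
    return rng.random(x.shape) < 0.5

def best_response_threshold(opponent_attacks):
    """Given the opponent's attack decisions, find the signal level
    below which attacking has positive expected payoff for player 1."""
    grid = np.linspace(-0.5, 2.5, 61)
    threshold = grid[0]
    for t in grid:
        mask = np.abs(x1 - t) < 0.05        # signals near the candidate point
        if mask.sum() < 100:
            continue
        # The regime falls if the number of attackers exceeds theta.
        attackers = 1 + opponent_attacks[mask].astype(int)  # player 1 attacks
        payoff = (attackers > theta[mask]).mean() - COST
        if payoff > 0:
            threshold = t                   # highest signal still worth attacking
    return threshold

opp = attack_level0(x2)
for k in range(1, 6):
    t_k = best_response_threshold(opp)
    print(f"level-{k} threshold: attack iff signal < {t_k:.2f}")
    opp = x2 < t_k                          # next level best-responds to this
```

    In this toy example the thresholds settle down within a few levels, which is the sense in which threshold play is a robust prediction of low-depth reasoning.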

    Explicit Evidence Systems with Common Knowledge

    Justification logics are epistemic logics that explicitly include justifications for the agents' knowledge. We develop a multi-agent justification logic with evidence terms for individual agents as well as for common knowledge. We define a Kripke-style semantics that is similar to Fitting's semantics for the Logic of Proofs (LP). We show the soundness, completeness, and finite model property of our multi-agent justification logic with respect to this Kripke-style semantics. We demonstrate that our logic is a conservative extension of Yavorskaya's minimal bimodal explicit evidence logic, which is a two-agent version of LP. We discuss the relationship of our logic to the multi-agent modal logic S4 with common knowledge. Finally, we give a brief analysis of the coordinated attack problem in the newly developed language of our logic.
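
    For orientation, these are the standard axioms of Artemov's Logic of Proofs LP, the single-agent system the paper extends; the paper's own evidence terms for individual agents and for common knowledge are not reproduced here.

```latex
% Core axioms of the Logic of Proofs LP (on top of classical
% propositional logic), which the multi-agent logic above extends.
\begin{align*}
  &\text{Application:}   && s{:}(A \to B) \to (t{:}A \to (s \cdot t){:}B) \\
  &\text{Sum:}           && s{:}A \to (s + t){:}A, \qquad t{:}A \to (s + t){:}A \\
  &\text{Proof checker:} && t{:}A \to\ !t{:}(t{:}A) \\
  &\text{Factivity:}     && t{:}A \to A
\end{align*}
```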

    A computer scientist looks at game theory

    I consider issues in distributed computation that should be of relevance to game theory. In particular, I focus on (a) representing knowledge and uncertainty, (b) dealing with failures, and (c) specification of mechanisms.

    Comment: To appear, Games and Economic Behavior. JEL classification numbers: D80, D8

    Tiresias: Predicting Security Events Through Deep Learning

    With the increased complexity of modern computer attacks, there is a need for defenders not only to detect malicious activity as it happens, but also to predict the specific steps that will be taken by an adversary when performing an attack. However, this is still an open research problem: previous work on predicting malicious events looked only at binary outcomes (e.g., whether an attack would happen or not), not at the specific steps an attacker would undertake. To fill this gap, we present Tiresias, a system that leverages Recurrent Neural Networks (RNNs) to predict future events on a machine, based on previous observations. We test Tiresias on a dataset of 3.4 billion security events collected from a commercial intrusion prevention system, and show that our approach is effective in predicting the next event that will occur on a machine with a precision of up to 0.93. We also show that the models learned by Tiresias are reasonably stable over time, and we provide a mechanism that can identify sudden drops in precision and trigger a retraining of the system. Finally, we show that the long-term memory typical of RNNs is key to performing event prediction, rendering simpler methods not up to the task.
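
    The core technique is a sequence model over discrete event IDs. Below is a minimal PyTorch sketch of that idea, an LSTM trained to predict the next event from a window of preceding ones; it is not the authors' implementation, and the vocabulary size, window length, and layer sizes are placeholders rather than the paper's settings.

```python
# A minimal sketch of next-event prediction with an LSTM (not the
# Tiresias code). Vocabulary size, window length, and layer sizes
# are illustrative placeholders.
import torch
import torch.nn as nn

VOCAB = 1000       # number of distinct event IDs (placeholder)
WINDOW = 20        # events of history fed to the model
EMBED, HIDDEN = 64, 128

class NextEventLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.lstm = nn.LSTM(EMBED, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB)

    def forward(self, x):                   # x: (batch, WINDOW) int64 event IDs
        h, _ = self.lstm(self.embed(x))     # h: (batch, WINDOW, HIDDEN)
        return self.head(h[:, -1])          # logits over the next event ID

model = NextEventLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy training step on random data, standing in for real event sequences.
x = torch.randint(0, VOCAB, (32, WINDOW))
y = torch.randint(0, VOCAB, (32,))
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(f"toy loss: {loss.item():.3f}")
```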

    Modeling Human Ad Hoc Coordination

    Whether in groups of humans or groups of computer agents, collaboration is most effective between individuals who have the ability to coordinate on a joint strategy for collective action. However, in general a rational actor will only intend to coordinate if that actor believes the other group members have the same intention. This circular dependence makes rational coordination difficult in uncertain environments if communication between actors is unreliable and no prior agreements have been made. An important normative question about coordination in these ad hoc settings is therefore how one can come to believe that other actors will coordinate, and for systems involving humans, an important empirical question is how humans arrive at these expectations. We introduce an exact algorithm for computing the infinitely recursive hierarchy of graded beliefs required for rational coordination in uncertain environments, along with a novel mechanism for multiagent coordination that uses it. Our algorithm is valid in any environment with a finite state space, and extensions to certain countably infinite state spaces are likely possible. We test our mechanism for multiagent coordination as a model for human decisions in a simple coordination game using existing experimental data. We then explore via simulations whether modeling humans in this way may improve human-agent collaboration.

    Comment: AAAI 201
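
    Why the belief hierarchy is infinitely recursive can be seen in a classic toy setting (Rubinstein's electronic mail game, not the paper's exact algorithm): when each confirmation message can be lost, depth-k mutual belief requires k successful deliveries, so its probability decays geometrically and common knowledge is never reached. The loss probability below is an illustrative assumption.

```python
# A toy illustration of the recursive belief hierarchy behind
# coordination (not the paper's exact algorithm): over an unreliable
# channel, k-th order mutual belief ("I believe you believe I believe
# ... we coordinate") needs k confirmations to arrive, so its
# probability shrinks geometrically with depth.
EPS = 0.05                       # assumed per-message loss probability

p = 1.0
for k in range(1, 11):
    p *= 1.0 - EPS               # one more confirmation must get through
    print(f"depth-{k} mutual belief holds with probability {p:.3f}")
```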