
    The role of information in multi-agent learning

    This paper aims to contribute to the study of auction design within the domain of agent-based computational economics. In particular, we investigate the efficiency of different auction mechanisms in a bounded-rationality setting where heterogeneous artificial agents learn to compete for the supply of a homogeneous good. Two auction mechanisms are compared: the uniform and the discriminatory pricing rules. Demand is considered constant and inelastic to price. Four learning algorithms, representing different models of bounded rationality, are considered for modeling agents' learning capabilities. Results are analyzed according to two game-theoretic solution concepts, i.e., Nash equilibria and Pareto optima, and three performance metrics. Computational experiments have been performed in different game settings, i.e., self-play and mixed-play competition with two, three and four market participants. This methodological approach makes it possible to highlight properties that are invariant across the different market settings considered. The main economic result is that, irrespective of the learning model considered, the discriminatory pricing rule is a more efficient market mechanism than the uniform one in the two- and three-player games, whereas identical outcomes are obtained in four-player competitions. Important insights are also given for the use of multi-agent learning as a framework for market design.
    Keywords: multi-agent learning; auction markets; design economics; agent-based computational economics
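
    As a hedged illustration of the two pricing rules compared in this abstract, the sketch below clears a single-round procurement auction with fixed, inelastic demand under either rule. The function name, bid format, and tie-breaking by bid order are assumptions made for this sketch, not the paper's implementation.

# Illustrative sketch (not the paper's code): clearing a single-round
# procurement auction with fixed, inelastic demand under the uniform
# or the discriminatory pricing rule.

def clear_auction(bids, demand, rule="uniform"):
    """bids: list of (seller_id, price, quantity); demand: total quantity bought.

    Returns a list of (seller_id, quantity_awarded, unit_price_paid).
    Sellers are ranked by ascending price; ties are broken by bid order
    (an assumption made for this sketch).
    """
    ranked = sorted(bids, key=lambda b: b[1])
    awards, remaining = [], demand
    for seller, price, qty in ranked:
        if remaining <= 0:
            break
        q = min(qty, remaining)
        awards.append((seller, q, price))
        remaining -= q
    if rule == "uniform":
        # Every accepted unit is paid the highest accepted bid (the clearing price).
        clearing_price = max(p for _, _, p in awards)
        awards = [(s, q, clearing_price) for s, q, _ in awards]
    # "discriminatory": each accepted seller is paid its own bid price (awards unchanged).
    return awards

# Example: three sellers, demand of 10 units.
bids = [("A", 5.0, 4), ("B", 6.0, 4), ("C", 7.0, 4)]
print(clear_auction(bids, 10, rule="uniform"))        # all accepted units paid 7.0
print(clear_auction(bids, 10, rule="discriminatory")) # each seller paid its own bid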

    A Spatial-Epistemic Logic for Reasoning about Security Protocols

    Reasoning about security properties involves reasoning about where the information of a system is located, and how it evolves over time. While most security analysis techniques need to cope with some notions of information locality and knowledge propagation, usually they do not provide a general language for expressing arbitrary properties involving local knowledge and knowledge transfer. Building on this observation, we introduce a framework for security protocol analysis based on dynamic spatial logic specifications. Our computational model is a variant of existing pi-calculi, while specifications are expressed in a dynamic spatial logic extended with an epistemic operator. We present the syntax and semantics of the model and logic, and discuss the expressiveness of the approach, showing it complete for passive attackers. We also prove that generic Dolev-Yao attackers may be mechanically determined for any deterministic finite protocol, and discuss how this result may be used to reason about security properties of open systems. We also present a model-checking algorithm for our logic, which has been implemented as an extension to the SLMC system.
    Comment: In Proceedings SecCo 2010, arXiv:1102.516
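
    To make the kind of specification described here concrete, the toy sketch below represents formulas of a spatial logic extended with an epistemic ("knows") operator as a small abstract syntax tree, with a naive satisfaction check. The constructor names, the flat set-of-propositions model, and the splitting semantics for spatial composition are assumptions chosen for illustration, not the paper's actual syntax or semantics.

# Toy sketch only: an AST for a spatial logic with an epistemic operator,
# loosely inspired by the abstract above.  Names and semantics are assumptions.
from dataclasses import dataclass

@dataclass
class Prop:          # atomic proposition, e.g. "key_at_A"
    name: str

@dataclass
class And:           # classical conjunction
    left: object
    right: object

@dataclass
class Compose:       # spatial composition: the system splits into two parts
    left: object
    right: object

@dataclass
class Knows:         # epistemic operator: agent `agent` knows `body`
    agent: str
    body: object

def holds(formula, world, knowledge):
    """world: set of atomic props; knowledge: dict agent -> set of props it knows."""
    if isinstance(formula, Prop):
        return formula.name in world
    if isinstance(formula, And):
        return holds(formula.left, world, knowledge) and holds(formula.right, world, knowledge)
    if isinstance(formula, Compose):
        # Naive reading: some split of the world satisfies both parts.
        props = list(world)
        for mask in range(2 ** len(props)):
            part = {p for i, p in enumerate(props) if mask & (1 << i)}
            if holds(formula.left, part, knowledge) and holds(formula.right, world - part, knowledge):
                return True
        return False
    if isinstance(formula, Knows):
        # Only atomic knowledge is handled in this toy version.
        return isinstance(formula.body, Prop) and formula.body.name in knowledge.get(formula.agent, set())
    raise TypeError(formula)

# Example: "the key is located at A, and B knows the nonce".
f = And(Prop("key_at_A"), Knows("B", Prop("nonce")))
print(holds(f, {"key_at_A", "nonce"}, {"B": {"nonce"}}))  # True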

    Distributed Dictionary Learning

    The paper studies distributed Dictionary Learning (DL) problems where the learning task is distributed over a multi-agent network with time-varying (nonsymmetric) connectivity. This formulation is relevant, for instance, in big-data scenarios where massive amounts of data are collected/stored in different spatial locations and it is infeasible to aggregate and/or process all the data in a fusion center, due to resource limitations, communication overhead or privacy considerations. We develop a general distributed algorithmic framework for the (nonconvex) DL problem and establish its asymptotic convergence. The new method hinges on Successive Convex Approximation (SCA) techniques coupled with i) a gradient tracking mechanism, instrumental to locally estimate the missing global information; and ii) a consensus step, as a mechanism to distribute the computations among the agents. To the best of our knowledge, this is the first distributed algorithm with provable convergence for the DL problem and, more generally, for bi-convex optimization problems over (time-varying) directed graphs.
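
    The sketch below shows only the two generic ingredients named in this abstract, consensus averaging and gradient tracking, applied to a simple smooth local cost. The actual SCA surrogate subproblem of the paper is replaced here by a plain gradient step, and the fully connected mixing matrix is an assumption, so this is an illustration of the mechanism rather than the paper's method.

# Minimal sketch of consensus + gradient tracking (not the paper's SCA method).
import numpy as np

def local_grad(D, Y):
    # Gradient of 0.5*||D - Y||_F^2, standing in for an agent's local DL objective.
    return D - Y

def distributed_gradient_tracking(local_data, W, steps=200, alpha=0.05):
    """local_data: list of agent matrices Y_i; W: row-stochastic mixing matrix."""
    n = len(local_data)
    D = [np.zeros_like(Y) for Y in local_data]                # local estimates
    g = [local_grad(D[i], local_data[i]) for i in range(n)]   # gradient trackers
    for _ in range(steps):
        # Consensus step: mix neighbors' estimates, then descend along the tracker.
        D_new = [sum(W[i, j] * D[j] for j in range(n)) - alpha * g[i] for i in range(n)]
        # Gradient tracking update: mix trackers, add the change in local gradients.
        g = [sum(W[i, j] * g[j] for j in range(n))
             + local_grad(D_new[i], local_data[i]) - local_grad(D[i], local_data[i])
             for i in range(n)]
        D = D_new
    return D

# Example: three agents, each holding a noisy copy of the same 4x3 matrix.
rng = np.random.default_rng(0)
target = rng.standard_normal((4, 3))
data = [target + 0.1 * rng.standard_normal((4, 3)) for _ in range(3)]
W = np.full((3, 3), 1.0 / 3.0)  # fully connected, doubly stochastic mixing (assumption)
est = distributed_gradient_tracking(data, W)
print(np.linalg.norm(est[0] - target))  # small: agents agree near the data average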

    Distributed Stochastic Optimization under Imperfect Information

    We consider a stochastic convex optimization problem that requires minimizing a sum of misspecified agent-specific expectation-valued convex functions over the intersection of a collection of agent-specific convex sets. This misspecification is manifested in a parametric sense and may be resolved through solving a distinct stochastic convex learning problem. Our interest lies in the development of distributed algorithms in which every agent makes decisions based on the knowledge of its objective and feasibility set, while learning the decisions of other agents by communicating with its local neighbors over a time-varying connectivity graph. While a significant body of research currently exists in the context of such problems, we believe that the misspecified generalization of this problem is important and has seen little, if any, study. Accordingly, our focus lies on the simultaneous resolution of both problems through a joint set of schemes that combine three distinct steps: (i) an alignment step in which every agent updates its current belief by averaging over the beliefs of its neighbors; (ii) a projected (stochastic) gradient step in which every agent further updates this averaged estimate; and (iii) a learning step in which agents update their belief of the misspecified parameter by utilizing a stochastic gradient step. Under an assumption of mere convexity on agent objectives and strong convexity of the learning problems, we show that the sequences generated by this collection of update rules converge almost surely to the solution of the correctly specified stochastic convex optimization problem and the stochastic learning problem, respectively.
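
    The sketch below walks through the three steps listed in this abstract (alignment, projected stochastic gradient, parameter learning) on a small synthetic instance. The quadratic objectives, the box constraint used as the feasible set, the scalar misspecified parameter, and the uniform mixing weights are all assumptions chosen to keep the example self-contained; they are not the paper's problem data.

# Illustrative sketch of the three-step scheme (assumed problem data, not the paper's).
import numpy as np

rng = np.random.default_rng(1)
n_agents, dim = 4, 3
theta_true = 2.0                                     # correct value of the misspecified parameter
A = [rng.standard_normal((dim, dim)) for _ in range(n_agents)]
W = np.full((n_agents, n_agents), 1.0 / n_agents)    # uniform mixing weights (assumption)

def grad_f(i, x, theta):
    # Gradient of agent i's objective f_i(x; theta) = 0.5*||A_i x - theta*1||^2.
    return A[i].T @ (A[i] @ x - theta * np.ones(dim))

def grad_learning(theta):
    # Stochastic gradient of the learning problem 0.5*(theta - theta_true)^2.
    return (theta - theta_true) + 0.01 * rng.standard_normal()

def project_box(x, lo=-5.0, hi=5.0):
    return np.clip(x, lo, hi)                        # projection onto the agent's feasible set

x = [np.zeros(dim) for _ in range(n_agents)]
theta = [0.0] * n_agents
for k in range(1, 500):
    step = 1.0 / k                                   # diminishing step size
    # (i) alignment: average the neighbors' decision estimates.
    v = [sum(W[i, j] * x[j] for j in range(n_agents)) for i in range(n_agents)]
    # (ii) projected stochastic gradient step on the averaged estimate.
    x = [project_box(v[i] - step * grad_f(i, v[i], theta[i])) for i in range(n_agents)]
    # (iii) learning step: update the belief about the misspecified parameter.
    theta = [theta[i] - step * grad_learning(theta[i]) for i in range(n_agents)]

print(theta[0])  # approaches theta_true as the learning problem is solved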