
    Finite-time Convergence Policies in State-dependent Social Networks

    This paper addresses the problem of finite-time convergence in a social network for a political party or an association, modeled as a distributed iterative system whose graph dynamics are chosen to mimic how people interact. It is first shown that, in this setting, finite-time convergence is achieved only when the nodes form a complete network, and that interacting with agents holding distinct opinions halves the required number of interconnections. Two novel strategies are then presented that enable finite-time convergence even when each node only contacts its two closest neighbors. These strategies are of prime importance, for instance, in a company environment where agents can be motivated to reach conclusions faster. The performance of the proposed policies is assessed through simulation, illustrating, in particular, the finite-time convergence property.
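
    As a minimal illustration of why a complete network yields finite-time convergence, consider plain averaging consensus with uniform weights: a single iteration already drives every node to the exact average. The sketch below is a toy under my own assumptions (uniform weights 1/n, five nodes); it is not the paper's proposed policies.

    import numpy as np

    # Plain averaging consensus x_{k+1} = W x_k. On a complete graph with
    # uniform weights 1/n, one iteration drives every node to the exact
    # average, which is why a complete network gives finite-time convergence.
    n = 5
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, n)          # initial opinions
    W = np.full((n, n), 1.0 / n)           # complete graph, uniform weights
    x_next = W @ x
    print(np.allclose(x_next, x.mean()))   # True: consensus after one step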

    Generalized Opinion Dynamics from Local Optimization Rules

    We study generalizations of the Hegselmann-Krause (HK) model for opinion dynamics, incorporating features and parameters that are natural components of observed social systems. The first generalization is one where the strength of influence depends on the distance between the agents' opinions. Under this setup, we identify conditions under which the opinions converge in finite time and provide a qualitative characterization of the equilibrium. We interpret the HK opinion update rule as a quadratic cost-minimization rule. This enables a second generalization: a family of update rules with different equilibrium properties. Subsequently, we investigate models in which an external force can behave strategically to modulate or influence user updates. We consider cases where this external force can introduce additional agents and cases where it can modify the cost structures of other agents. We describe and analyze strategies through which such modulation may be possible in an order-optimal manner. Our simulations demonstrate that the generalized dynamics differ qualitatively and quantitatively from traditional HK dynamics. Comment: 20 pages, under review.
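
    One way a distance-dependent influence can be realized is sketched below. This is a hedged illustration only: the triangular weight kernel, the confidence radius eps, and the population size are my own choices, not necessarily the paper's generalization.

    import numpy as np

    def hk_step(x, eps=0.3):
        """One synchronous HK-style update with a distance-dependent influence kernel."""
        x_new = np.empty_like(x)
        for i, xi in enumerate(x):
            d = np.abs(x - xi)
            w = np.where(d <= eps, 1.0 - d / eps, 0.0)  # influence decays with opinion distance
            x_new[i] = np.dot(w, x) / w.sum()           # self-weight is 1, so the sum is never zero
        return x_new

    x = np.random.default_rng(1).uniform(0.0, 1.0, 50)   # 50 agents, opinions in [0, 1]
    for _ in range(100):
        x = hk_step(x)
    print(np.unique(np.round(x, 3)))                     # surviving opinion clusters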

    Spatial interactions in agent-based modeling

    Agent-Based Modeling (ABM) has become a widespread approach to modeling complex interactions. In this chapter, after briefly summarizing some features of ABM, the different approaches to modeling spatial interactions are discussed. It is stressed that agents can interact either indirectly, through a shared environment, and/or directly with each other. In such an approach, higher-order variables such as commodity prices, population dynamics or even institutions are not exogenously specified but are instead seen as the result of interactions. The chapter highlights that understanding the patterns that emerge from such spatial interactions between agents is as central a problem as describing them through analytical or simulation means. The chapter reviews different approaches to modeling agents' behavior, taking into account either explicit spatial (lattice-based) structures or networks. Some emphasis is placed on recent ABM as applied to the description of the dynamics of the geographical distribution of economic activities out of equilibrium. The Eurace@Unibi Model, an agent-based macroeconomic model with spatial structure, is used to illustrate the potential of such an approach for spatial policy analysis. Comment: 26 pages, 5 figures, 105 references; a chapter prepared for the book "Complexity and Geographical Economics - Topics and Tools", P. Commendatore, S.S. Kayam and I. Kubin, Eds. (Springer, in press, 2014).
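
    To make the direct/indirect distinction concrete, here is a minimal toy sketch (my own construction, unrelated to the Eurace@Unibi Model): agents on a lattice update a state variable both by averaging with their four lattice neighbors (direct interaction) and by reading a shared environment field that they themselves deposit into (indirect interaction).

    import numpy as np

    rng = np.random.default_rng(0)
    L = 20
    state = rng.uniform(0.0, 1.0, (L, L))    # one agent per lattice site
    field = np.zeros((L, L))                 # shared environment (e.g., a local price signal)

    for _ in range(50):
        field = 0.9 * field + 0.1 * state                       # indirect: agents deposit into the field
        neighbor_avg = sum(np.roll(state, s, axis=ax)            # direct: average of 4 lattice neighbors
                           for s in (-1, 1) for ax in (0, 1)) / 4.0
        state = 0.5 * neighbor_avg + 0.5 * field                 # each agent combines both channels

    print(round(float(state.std()), 4))       # heterogeneity shrinks as spatial patterns smooth out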

    QD-Learning: A Collaborative Distributed Strategy for Multi-Agent Reinforcement Learning Through Consensus + Innovations

    The paper considers a class of multi-agent Markov decision processes (MDPs), in which the network agents respond differently (as manifested by the instantaneous one-stage random costs) to a global controlled state and the control actions of a remote controller. The paper investigates a distributed reinforcement learning setup with no prior information on the global state transition and local agent cost statistics. Specifically, with the agents' objective consisting of minimizing a network-averaged infinite-horizon discounted cost, the paper proposes a distributed version of Q-learning, QD-learning, in which the network agents collaborate by means of local processing and mutual information exchange over a sparse (possibly stochastic) communication network to achieve the network goal. Under the assumption that each agent is only aware of its local online cost data and that the inter-agent communication network is \emph{weakly} connected, the proposed distributed scheme is shown to yield, almost surely (a.s.), the desired value function and the optimal stationary control policy at each network agent asymptotically. The analytical techniques developed in the paper to address the mixed time-scale stochastic dynamics of the \emph{consensus + innovations} form, which arise as a result of the proposed interactive distributed scheme, are of independent interest. Comment: Submitted to the IEEE Transactions on Signal Processing, 33 pages.
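
    A per-agent update in the consensus + innovations spirit might look like the sketch below. This is a hedged illustration under my own assumptions (tabular Q-values, fixed step sizes alpha and beta, discount gamma = 0.95, a toy three-agent graph); the actual QD-learning recursion uses carefully designed decaying, mixed time-scale weight sequences.

    import numpy as np

    # Each agent i mixes a consensus term (disagreement with its neighbors' Q-tables)
    # with an innovation term built from its own locally observed one-stage cost.
    def qd_update(Q, neighbors_i, i, s, a, s_next, cost_i, alpha, beta, gamma=0.95):
        consensus = sum(Q[i][s, a] - Q[j][s, a] for j in neighbors_i)
        innovation = cost_i + gamma * Q[i][s_next].min() - Q[i][s, a]   # costs are minimized
        Q[i][s, a] += -beta * consensus + alpha * innovation

    # Toy setup: 3 agents, 4 states, 2 actions, fully connected communication graph.
    n_agents, n_states, n_actions = 3, 4, 2
    Q = [np.zeros((n_states, n_actions)) for _ in range(n_agents)]
    neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
    qd_update(Q, neighbors[0], 0, s=1, a=0, s_next=2, cost_i=0.5, alpha=0.1, beta=0.05)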

    Distributed Learning Policies for Power Allocation in Multiple Access Channels

    We analyze the problem of distributed power allocation for orthogonal multiple access channels by considering a continuous non-cooperative game whose strategy space represents the users' distribution of transmission power over the network's channels. When the channels are static, we find that this game admits an exact potential function, which allows us to show that it has a unique equilibrium almost surely. Furthermore, using the game's potential property, we derive a modified version of the replicator dynamics of evolutionary game theory that applies to this continuous game, and we show that if the network's users employ a distributed learning scheme based on these dynamics, then they converge to equilibrium exponentially quickly. On the other hand, a major challenge arises if the channels do not remain static but fluctuate stochastically over time, following a stationary ergodic process. In that case, the associated ergodic game still admits a unique equilibrium, but the learning analysis becomes much more complicated because the replicator dynamics are no longer deterministic. Nonetheless, by employing results from the theory of stochastic approximation, we show that users still converge to the game's unique equilibrium. Our analysis hinges on a game-theoretical result that is of independent interest: in finite-player games that admit a (possibly nonlinear) convex potential function, the replicator dynamics (suitably modified to account for nonlinear payoffs) converge to an eps-neighborhood of an equilibrium in time of order O(log(1/eps)). Comment: 11 pages, 8 figures. Revised manuscript structure and added more material and figures for the case of stochastically fluctuating channels. This version will appear in the IEEE Journal on Selected Areas in Communications, Special Issue on Game Theory in Wireless Communications.
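
    As a toy illustration of replicator-style learning on the power simplex, consider the sketch below. It is built on my own assumptions (a single user, fixed channel gains, a log(1 + SNR) rate whose derivative serves as the per-channel payoff, a fixed step size); it is not the paper's modified dynamics, multi-user game, or stochastic-channel analysis.

    import numpy as np

    def replicator_step(p, g, P=1.0, noise=0.1, eta=0.2):
        u = g / (noise + g * P * p)              # marginal rate of each channel (toy payoff)
        avg = float(np.dot(p, u))                # average payoff under the current split
        p_new = p * (1.0 + eta * (u - avg))      # discrete replicator update on the simplex
        p_new = np.clip(p_new, 1e-12, None)
        return p_new / p_new.sum()

    g = np.array([1.0, 0.5, 2.0])                # fixed channel gains
    p = np.ones(3) / 3                           # start from the uniform power split
    for _ in range(500):
        p = replicator_step(p, g)
    print(np.round(p, 3))                        # converges toward a water-filling-like split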