10 research outputs found

    Let Cognitive Radios Imitate: Imitation-based Spectrum Access for Cognitive Radio Networks

    Full text link
    In this paper, we tackle the problem of opportunistic spectrum access in large-scale cognitive radio networks, where unlicensed Secondary Users (SUs) access frequency channels partially occupied by licensed Primary Users (PUs). Each channel is characterized by an availability probability unknown to the SUs. We apply evolutionary game theory to model the spectrum access problem and develop distributed spectrum access policies based on imitation, a behavior rule widely applied in human societies that consists of copying successful behavior. We first develop two imitation-based spectrum access policies built on the basic Proportional Imitation (PI) rule and the more advanced Double Imitation (DI) rule, assuming that an SU can imitate any other SU. We then adapt the proposed policies to the more practical scenario in which an SU can only imitate the other SUs operating on the same channel. For both scenarios, we present a systematic theoretical analysis of the induced imitation dynamics and the convergence of the proposed policies to an imitation-stable equilibrium, which is also the ε-optimum of the system. Simple, natural and incentive-compatible, the proposed imitation-based spectrum access policies can be implemented in a fully distributed manner based solely on local interactions, and are thus especially suited to decentralized adaptive learning environments such as cognitive radio networks.
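
    The Proportional Imitation rule referenced above admits a compact description: each SU samples a random peer and, if the peer obtained a higher payoff (e.g., higher achieved throughput on its channel), switches to the peer's channel with probability proportional to the payoff difference. The following Python sketch illustrates this generic PI update; the function and variable names are illustrative assumptions rather than the paper's notation, and the channel-restricted variant would additionally constrain which peers an SU may sample.

    import random

    def proportional_imitation_step(channels, payoffs, max_payoff_gap):
        # One round of a generic Proportional Imitation (PI) update:
        # each SU samples a random peer and switches to the peer's channel
        # with probability proportional to the positive payoff difference.
        n = len(channels)
        new_channels = list(channels)
        for i in range(n):
            j = random.randrange(n)          # sample another SU uniformly at random
            gap = payoffs[j] - payoffs[i]
            if gap > 0:
                # scale the switching probability into [0, 1]
                if random.random() < gap / max_payoff_gap:
                    new_channels[i] = channels[j]
        return new_channels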

    The lower convergence tendency of imitators compared to best responders

    Get PDF
    Imitation is widely observed in nature and often used to model populations of decision-making agents, but it is not yet known under what conditions a network of imitators will reach a state where they are satisfied with their decisions. We show that every network in which agents imitate the best-performing strategy in their neighborhood will reach an equilibrium in finite time, provided that all agents are opponent coordinating, i.e., earn a higher payoff if their opponent plays the same strategy as they do. It follows that any non-convergence observed in imitative networks is not necessarily a result of population heterogeneity or special network topology, but rather must be caused by other factors such as the presence of non-opponent-coordinating agents. To strengthen this result, we show that large classes of imitative networks containing non-opponent-coordinating agents never equilibrate, even when the population is homogeneous. Compared to best-response dynamics, where equilibration is guaranteed for every network of homogeneous agents playing 2 × 2 matrix games, our results imply that networks of imitators have a lower equilibration tendency.
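
    As a concrete illustration of the imitate-the-best dynamics analyzed here, the sketch below runs one synchronous update on a graph in which every agent plays a 2 × 2 matrix game against each neighbor and then copies the strategy of the best performer in its closed neighborhood. The payoff matrix shown is opponent coordinating in the sense above (each strategy earns more against itself than against the other strategy); all names are illustrative assumptions rather than the paper's formal model.

    def imitate_best_neighbor(strategies, neighbors, payoff_matrix):
        # strategies    : list of 0/1 strategies in a 2 x 2 matrix game
        # neighbors     : adjacency list (list of lists of node indices)
        # payoff_matrix : payoff_matrix[s][t] = payoff of playing s against t
        payoffs = [
            sum(payoff_matrix[strategies[i]][strategies[j]] for j in neighbors[i])
            for i in range(len(strategies))
        ]
        new_strategies = list(strategies)
        for i in range(len(strategies)):
            # copy the best performer in the closed neighborhood (agent i included)
            best = max(neighbors[i] + [i], key=lambda k: payoffs[k])
            new_strategies[i] = strategies[best]
        return new_strategies

    # Opponent-coordinating payoffs: each strategy earns strictly more
    # against itself than against the other strategy.
    coordination_game = [[2, 0],
                         [1, 3]]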

    A survey on the analysis and control of evolutionary matrix games

    Get PDF
    In support of the growing interest in how to efficiently influence complex systems of interacting self-interested agents, we present this review of fundamental concepts, emerging research, and open problems related to the analysis and control of evolutionary matrix games, with particular emphasis on applications in social, economic, and biological networks.

    Concurrent imitation dynamics in congestion games

    No full text
    Imitating successful behavior is a natural and frequently applied approach when facing complex decision problems. In this paper, we design protocols for distributed latency minimization in atomic congestion games based on imitation. We propose to study the concurrent dynamics that emerge when each agent samples another agent and possibly imitates that agent's strategy if the anticipated latency gain is sufficiently large. Our focus is on convergence properties. We show monotonic convergence to stable states, in which no agent can improve its latency by imitating others. As our main result, we show rapid convergence to approximate equilibria, in which only a small fraction of agents sustains a latency significantly above or below average. Imitation dynamics behave like an FPTAS, and the convergence time depends only logarithmically on the number of agents. Imitation processes cannot discover unused strategies, and strategies may become extinct with non-zero probability; for singleton games we show that the probability of this event occurring is negligible. Additionally, we prove that the social cost of a stable state reached by our dynamics is not much worse than that of an optimal state in singleton games with linear latency functions. We concentrate on the case of symmetric network congestion games, but our results do not use the network structure and thus continue to hold for general symmetric games. They even apply to asymmetric games when agents sample within the set of agents having the same strategy space. Finally, we discuss how the protocol can be extended so that, in the long run, the dynamics converge to a pure Nash equilibrium.
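
    Below is a minimal sketch of one concurrent imitation round in a singleton congestion game, under the assumptions that each agent samples a single peer uniformly at random, estimates the latency it would experience after joining the peer's resource, and migrates with a fixed probability only when the anticipated gain exceeds a threshold. The paper's protocol tunes the migration probability more carefully; the names and parameters below are illustrative assumptions.

    import random

    def concurrent_imitation_round(strategy, latency_fns, min_gain=1e-6, migration_prob=0.5):
        # strategy[i] : resource currently used by agent i
        # latency_fns : dict mapping resource -> latency as a function of its load
        n = len(strategy)
        load = {r: 0 for r in latency_fns}
        for r in strategy:
            load[r] += 1
        new_strategy = list(strategy)
        for i in range(n):
            j = random.randrange(n)                  # sample a peer uniformly at random
            r_i, r_j = strategy[i], strategy[j]
            if r_i == r_j:
                continue
            current = latency_fns[r_i](load[r_i])
            anticipated = latency_fns[r_j](load[r_j] + 1)   # load after agent i joins
            # migrate only if the anticipated gain is large enough, and only with
            # some probability, so that not all agents move simultaneously
            if current - anticipated > min_gain and random.random() < migration_prob:
                new_strategy[i] = r_j
        return new_strategy

    # Example: two parallel links with linear latency functions
    links = {"a": lambda x: x, "b": lambda x: 2 * x}
    state = [random.choice(list(links)) for _ in range(10)]
    state = concurrent_imitation_round(state, links)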