
    Fixation times in evolutionary games under weak selection

    In evolutionary game dynamics, reproductive success increases with performance in an evolutionary game. If strategy A performs better than strategy B, strategy A will spread in the population. Under stochastic dynamics, a single mutant will sooner or later either take over the entire population or go extinct. We analyze the mean exit times (or average fixation times) associated with this process. We show analytically that under weak selection, i.e. strong stochasticity, these times depend on the payoff matrix of the game in a remarkably simple way: the payoff difference $\Delta \pi$ is a linear function of the number of A individuals $i$, $\Delta \pi = u i + v$. The unconditional mean exit time depends only on the constant term $v$. Given that a single A mutant takes over the population, the corresponding conditional mean exit time depends only on the density-dependent term $u$. We demonstrate this finding for two commonly applied microscopic evolutionary processes.
    Comment: Forthcoming in New Journal of Physics
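    As a rough illustration of the setup in this abstract, the sketch below simulates a frequency-dependent Moran process (one of the standard microscopic processes in this literature) and estimates the unconditional mean exit time of a single A mutant by Monte Carlo. The payoff matrix, population size N, and selection intensity w are free parameters; the exponential fitness mapping is an assumption for the sketch, not taken from the paper.

    ```python
    import math
    import random

    def moran_exit_time(payoffs, N, w, runs=500, seed=0):
        """Monte Carlo estimate of the mean (unconditional) exit time of a
        single A mutant in a frequency-dependent Moran process.

        payoffs = (a, b, c, d): payoff of A vs A, A vs B, B vs A, B vs B.
        w is the selection intensity; w -> 0 is the weak-selection limit.
        Time is counted in elementary birth-death events.
        """
        a, b, c, d = payoffs
        rng = random.Random(seed)
        total_steps = 0
        for _ in range(runs):
            i = 1  # start from a single A mutant
            steps = 0
            while 0 < i < N:
                # average payoffs, excluding self-interaction
                pi_A = (a * (i - 1) + b * (N - i)) / (N - 1)
                pi_B = (c * i + d * (N - i - 1)) / (N - 1)
                # exponential fitness mapping (an assumption of this sketch)
                f_A, f_B = math.exp(w * pi_A), math.exp(w * pi_B)
                # reproduction proportional to fitness, death uniformly at random
                birth_A = rng.random() < i * f_A / (i * f_A + (N - i) * f_B)
                death_A = rng.random() < i / N
                i += (1 if birth_A else 0) - (1 if death_A else 0)
                steps += 1
            total_steps += steps
        return total_steps / runs
    ```

    Running this for small w and varying only the constant term v of the payoff difference (while holding u fixed) is one way to probe the claimed dependence of the unconditional exit time numerically.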

    Simple and Near-Optimal Mechanisms For Market Intermediation

    A prevalent market structure in the Internet economy consists of buyers and sellers connected by a platform (such as Amazon or eBay) that acts as an intermediary and keeps a share of the revenue of each transaction. While the optimal mechanism that maximizes the intermediary's profit in such a setting may be quite complicated, the mechanisms observed in reality are generally much simpler, e.g., applying an affine function to the price of the transaction as the intermediary's fee. Loertscher and Niedermayer [2007] initiated the study of such fee-setting mechanisms in two-sided markets, and we continue this investigation by addressing the question of when an affine fee schedule is approximately optimal for worst-case seller distributions. On the one hand, our work supplies non-trivial sufficient conditions on the buyer side (i.e., linearity of the marginal revenue function, or the MHR property of the value and value-minus-cost distributions) under which an affine fee schedule can obtain a constant fraction of the intermediary's optimal profit for all seller distributions. On the other hand, we complement this result by showing that proper affine fee-setting mechanisms (e.g., those used in eBay and Amazon selling plans) are unable to extract a constant fraction of the optimal profit for worst-case seller distributions. As subsidiary results, we also show that there exists a constant gap between maximum surplus and maximum revenue under the aforementioned conditions. Most of the mechanisms that we propose are also prior-independent with respect to the seller, which underscores the practical relevance of our results.
    Comment: To appear in WINE'14, the 10th conference on Web and Internet Economics
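    To make the affine fee schedule concrete, here is a toy Monte Carlo sketch of intermediary profit. The uniform buyer/seller distributions and the seller's naive price rule are assumptions purely for illustration; they are not the paper's model, in which the seller sets its price strategically.

    ```python
    import random

    def affine_fee(price, alpha, beta):
        """Affine fee schedule: the intermediary keeps alpha * price + beta."""
        return alpha * price + beta

    def expected_profit(alpha, beta, trials=10000, seed=0):
        """Monte Carlo estimate of intermediary profit in a toy two-sided market.

        Simplifications (not from the paper): buyer values ~ U[0, 1],
        seller costs ~ U[0, 1], the seller naively posts its cost as the
        price, and a trade happens iff the buyer's value covers the price
        plus the fee; the intermediary then collects the fee.
        """
        rng = random.Random(seed)
        profit = 0.0
        for _ in range(trials):
            value = rng.random()   # buyer's private value
            cost = rng.random()    # seller's private cost
            price = cost           # naive price-setting, illustration only
            fee = affine_fee(price, alpha, beta)
            if value >= price + fee:
                profit += fee
        return profit / trials

    # e.g. a 10% commission plus a small fixed fee, roughly the shape of
    # real marketplace fee schedules:
    # expected_profit(0.10, 0.02)
    ```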

    Unconstrained Online Linear Learning in Hilbert Spaces: Minimax Algorithms and Normal Approximations

    We study algorithms for online linear optimization in Hilbert spaces, focusing on the case where the player is unconstrained. We develop a novel characterization of a large class of minimax algorithms, recovering, and even improving, several previous results as immediate corollaries. Moreover, using our tools, we develop an algorithm that provides a regret bound of $\mathcal{O}\big(U \sqrt{T \log(U \sqrt{T} \log^2 T + 1)}\big)$, where $U$ is the $L_2$ norm of an arbitrary comparator and both $T$ and $U$ are unknown to the player. This bound is optimal up to $\sqrt{\log \log T}$ terms. When $T$ is known, we derive an algorithm with an optimal regret bound (up to constant factors). For both the known and unknown $T$ cases, a Normal approximation to the conditional value of the game proves to be the key analysis tool.
    Comment: Proceedings of the 27th Annual Conference on Learning Theory (COLT 2014)
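    The protocol behind these bounds can be made concrete with a minimal sketch in R^d. The player below uses plain online gradient descent with a fixed step size, deliberately NOT the paper's minimax algorithm; the point is only to show the game (pick w_t, suffer linear loss <g_t, w_t>) and the regret against an arbitrary, unconstrained comparator u.

    ```python
    def play_online_linear(gradients, eta=0.1):
        """Unconstrained online linear optimization in R^d (baseline sketch).

        Plain online gradient descent with fixed step size eta, used only
        to illustrate the protocol: the player commits to w_t, then the
        linear loss gradient g_t is revealed.
        """
        d = len(gradients[0])
        w = [0.0] * d
        plays = []
        for g in gradients:
            plays.append(list(w))                             # commit to w_t
            w = [wi - eta * gi for wi, gi in zip(w, g)]       # gradient step
        return plays

    def regret(gradients, plays, u):
        """Regret against a fixed comparator u: sum_t <g_t, w_t - u>."""
        dot = lambda x, y: sum(xi * yi for xi, yi in zip(x, y))
        return sum(dot(g, w) - dot(g, u) for g, w in zip(gradients, plays))
    ```

    In the unconstrained setting the comparator's norm $U = \|u\|$ is not known in advance, which is exactly what makes the $U\sqrt{T \log(\cdot)}$-type bound above non-trivial.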

    It Takes (Only) Two: Adversarial Generator-Encoder Networks

    We present a new autoencoder-type architecture that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the prior distribution in the latent space. We show that the direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a quality comparable to some recently proposed, more complex architectures.
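    A minimal sketch of the divergence term in such a game objective, under simplifying assumptions not taken from the paper: fit a moment-matched diagonal Gaussian to a batch of latent codes and compute its closed-form KL divergence to the standard normal prior N(0, I). In a game of this kind, the encoder would push this divergence down for latents of real data and up for latents of generated data, with the generator playing the opposite direction on the generated batch.

    ```python
    import math

    def kl_to_standard_normal(latents):
        """KL divergence from a moment-matched diagonal Gaussian fit of a
        batch of latent codes to the prior N(0, I), summed over dimensions.

        Uses the closed form for Gaussians:
        KL(N(mu, var) || N(0, 1)) = 0.5 * (var + mu^2 - 1 - log var).
        Assumes each latent dimension has non-degenerate sample variance.
        """
        n = len(latents)
        d = len(latents[0])
        kl = 0.0
        for j in range(d):
            col = [z[j] for z in latents]
            mu = sum(col) / n
            var = sum((x - mu) ** 2 for x in col) / n
            kl += 0.5 * (var + mu * mu - 1.0 - math.log(var))
        return kl
    ```

    Latent batches that already look like draws from the prior yield a divergence near zero, while shifted or rescaled batches are penalized, which is the signal the two players compete over.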