Fixation times in evolutionary games under weak selection
In evolutionary game dynamics, reproductive success increases with the
performance in an evolutionary game. If strategy $A$ performs better than
strategy $B$, strategy $A$ will spread in the population. Under stochastic
dynamics, a single mutant will sooner or later take over the entire population
or go extinct. We analyze the mean exit times (or average fixation times)
associated with this process. We show analytically that these times depend on
the payoff matrix of the game in an amazingly simple way under weak selection,
i.e. strong stochasticity: the payoff difference $\Delta\pi$ is a linear
function of the number of $A$ individuals $i$, $\Delta\pi = u\,i + v$. The
unconditional mean exit time depends only on the constant term $v$. Given that
a single $A$ mutant takes over the population, the corresponding conditional
mean exit time depends only on the density-dependent term $u$. We demonstrate
this finding for two commonly applied microscopic evolutionary processes.
Comment: Forthcoming in New Journal of Physics
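The fixation process described above can be illustrated with a small simulation of the Moran process, one commonly applied microscopic evolutionary process: a single mutant either takes over or goes extinct, and we measure the unconditional mean exit time. This is only a sketch; the payoff matrix, population size, and selection intensity `w` below are illustrative choices, not values from the paper.

```python
import random

def moran_exit_time(N=50, a=2.0, b=1.0, c=1.5, d=1.0, w=0.01,
                    runs=2000, seed=0):
    """Mean exit time (in Moran steps) starting from a single A mutant.

    Payoff matrix [[a, b], [c, d]]; small selection intensity w
    corresponds to the weak-selection regime. Illustrative sketch,
    not the paper's analytical result.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        i = 1  # current number of A individuals
        t = 0
        while 0 < i < N:
            # average payoffs, excluding self-interaction
            pa = (a * (i - 1) + b * (N - i)) / (N - 1)
            pb = (c * i + d * (N - i - 1)) / (N - 1)
            fa, fb = 1 + w * pa, 1 + w * pb
            # reproduce proportional to fitness, die uniformly at random
            birth_a = rng.random() < i * fa / (i * fa + (N - i) * fb)
            death_a = rng.random() < i / N
            i += (birth_a and not death_a) - (death_a and not birth_a)
            t += 1
        total += t
    return total / runs
```

Averaging over many runs approximates the unconditional mean exit time; conditioning on runs that end at `i == N` would give the conditional one.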
Simple and Near-Optimal Mechanisms For Market Intermediation
A prevalent market structure in the Internet economy consists of buyers and
sellers connected by a platform (such as Amazon or eBay) that acts as an
intermediary and keeps a share of the revenue of each transaction. While the
optimal mechanism that maximizes the intermediary's profit in such a setting
may be quite complicated, the mechanisms observed in reality are generally much
simpler, e.g., applying an affine function to the price of the transaction as
the intermediary's fee. Loertscher and Niedermayer [2007] initiated the study
of such fee-setting mechanisms in two-sided markets, and we continue this
investigation by addressing the question of when an affine fee schedule is
approximately optimal for worst-case seller distributions. On the one hand, our work
supplies non-trivial sufficient conditions on the buyer side (i.e. linearity of
marginal revenue function, or MHR property of value and value minus cost
distributions) under which an affine fee schedule can obtain a constant
fraction of the intermediary's optimal profit for all seller distributions. On
the other hand, we complement our result by showing that proper affine
fee-setting mechanisms (e.g. those used in eBay and Amazon selling plans) are
unable to extract a constant fraction of optimal profit in the worst-case
seller distribution. As subsidiary results, we also show that there exists a constant
gap between maximum surplus and maximum revenue under the aforementioned
conditions. Most of the mechanisms that we propose are also prior-independent
with respect to the seller, which signifies the practical implications of our
result.
Comment: To appear in WINE'14, the 10th Conference on Web and Internet Economics
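An affine fee schedule of the kind studied above is simple to state in code: the intermediary's fee is an affine function of the transaction price. The parameters below (a 10% rate plus a fixed 0.30 charge) are hypothetical, merely in the spirit of common marketplace selling plans, not figures from the paper.

```python
def affine_fee(price, alpha=0.10, beta=0.30):
    """Intermediary's fee as an affine function of the transaction
    price: alpha * price + beta (hypothetical parameters)."""
    return alpha * price + beta

def split(price, alpha=0.10, beta=0.30):
    """Return (intermediary's cut, seller's net) for a given price."""
    fee = affine_fee(price, alpha, beta)
    return fee, price - fee
```

For a transaction at price 10.0, `split(10.0)` yields a fee of 1.30 for the intermediary and 8.70 for the seller.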
Unconstrained Online Linear Learning in Hilbert Spaces: Minimax Algorithms and Normal Approximations
We study algorithms for online linear optimization in Hilbert spaces,
focusing on the case where the player is unconstrained. We develop a novel
characterization of a large class of minimax algorithms, recovering, and even
improving, several previous results as immediate corollaries. Moreover, using
our tools, we develop an algorithm that provides a regret bound of
$O\big(U \sqrt{T \log(U \sqrt{T} \log^2 T + 1)}\big)$, where $U$ is
the norm of an arbitrary comparator and both $U$ and $T$ are unknown to
the player. This bound is optimal up to $\sqrt{\log \log T}$ terms. When $T$ is
known, we derive an algorithm with an optimal regret bound (up to constant
factors). For both the known and unknown $T$ case, a Normal approximation to
the conditional value of the game proves to be the key analysis tool.
Comment: Proceedings of the 27th Annual Conference on Learning Theory (COLT 2014)
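The online linear optimization protocol and the regret measure used above can be sketched in one dimension as follows. The simple gradient-sum learner is an illustrative placeholder of my own, not the paper's minimax-optimal algorithm; note that the comparator $u$ is unconstrained.

```python
def online_linear_game(grads, learner):
    """Online linear optimization protocol in R^1: at each round the
    learner sees the past gradients, plays w_t, and suffers loss g_t * w_t."""
    loss, past = 0.0, []
    for g in grads:
        w = learner(past)
        loss += g * w
        past.append(g)
    return loss

def gradient_sum_learner(past, eta=0.1):
    # Illustrative unconstrained play: move opposite the running
    # gradient sum. NOT the minimax algorithm from the paper.
    return -eta * sum(past)

def regret(grads, learner, u):
    """Regret against a fixed comparator u (any u, since the player
    is unconstrained)."""
    comparator_loss = sum(g * u for g in grads)
    return online_linear_game(grads, learner) - comparator_loss
```

The paper's guarantee bounds `regret` simultaneously for every comparator norm, without the learner knowing it in advance.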
It Takes (Only) Two: Adversarial Generator-Encoder Networks
We present a new autoencoder-type architecture that is trainable in an
unsupervised mode, sustains both generation and inference, and has the quality
of conditional and unconditional samples boosted by adversarial learning.
Unlike previous hybrids of autoencoders and adversarial networks, the
adversarial game in our approach is set up directly between the encoder and the
generator, and no external mappings are trained in the process of learning. The
game objective compares the divergences of each of the real and the generated
data distributions with the prior distribution in the latent space. We show
that a direct generator-vs-encoder game leads to a tight coupling of the two
components, resulting in samples and reconstructions of a quality comparable to
some recently proposed, more complex architectures.
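A toy version of the generator-vs-encoder game objective can be written down for one-dimensional data. The moment-based proxy for divergence from a standard normal prior below is an assumption for illustration only, not the divergence measure used in the paper.

```python
import statistics

def div_from_prior(codes):
    """Crude moment-based proxy for divergence of a batch of latent
    codes from a standard normal prior: penalize mean far from 0 and
    standard deviation far from 1. (Illustrative stand-in only.)"""
    m = statistics.fmean(codes)
    s = statistics.pstdev(codes)
    return m * m + (s - 1.0) ** 2

def age_game_value(encoder, generator, real, latents):
    """Toy generator-vs-encoder objective: compare the divergence of
    encoded real data and encoded generated data from the prior. The
    encoder tries to maximize this gap, the generator to minimize it."""
    real_div = div_from_prior([encoder(x) for x in real])
    fake_div = div_from_prior([encoder(generator(z)) for z in latents])
    return real_div - fake_div
```

With an identity encoder and a generator that doubles its input, the generated codes are further from the prior than the real ones, so the game value is negative and the generator has an incentive to adjust.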