Strong Nash Equilibria in Games with the Lexicographical Improvement Property
We introduce a class of finite strategic games with the property that every
deviation of a coalition of players that is profitable to each of its members
strictly decreases the lexicographical order of a certain function defined on
the set of strategy profiles. We call this property the Lexicographical
Improvement Property (LIP) and show that it implies the existence of a
generalized strong ordinal potential function. We use this characterization to
derive existence, efficiency, and fairness properties of strong Nash equilibria (SNE).
We then study a class of games that generalizes congestion games with
bottleneck objectives, which we call bottleneck congestion games. We show that
these games possess the LIP and thus the above-mentioned properties. For
bottleneck congestion games in networks, we identify cases in which the
potential function associated with the LIP leads to polynomial time algorithms
computing a strong Nash equilibrium. Finally, we investigate the LIP for
infinite games. We show that the LIP does not imply the existence of a
generalized strong ordinal potential; thus, the existence of SNE does not
follow. Assuming that the function associated with the LIP is continuous,
however, we prove existence of SNE. As a consequence, we prove that bottleneck
congestion games with infinite strategy spaces and continuous cost functions
possess a strong Nash equilibrium.
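The lexicographic comparison at the heart of the LIP can be sketched in a few lines (a toy illustration with hypothetical cost values, not taken from the paper): sort the players' costs in non-increasing order and check whether the deviation strictly decreases this sorted vector lexicographically.

```python
# Toy sketch of the LIP comparison (hypothetical cost profiles).

def sorted_lex_key(costs):
    """Sort player costs in non-increasing order."""
    return sorted(costs, reverse=True)

def lex_improves(old_costs, new_costs):
    """True if the sorted cost vector strictly decreases lexicographically."""
    return sorted_lex_key(new_costs) < sorted_lex_key(old_costs)

# A deviation that lowers the maximum cost is a lexicographic improvement:
print(lex_improves([5, 3, 2], [4, 4, 2]))  # True: (4,4,2) < (5,3,2)
```

Under the LIP, every profitable coalitional deviation produces such a strict lexicographic decrease, so no infinite sequence of improving deviations exists in a finite game.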
Frame Permutation Quantization
Frame permutation quantization (FPQ) is a new vector quantization technique
using finite frames. In FPQ, a vector is encoded using a permutation source
code to quantize its frame expansion. This means that the encoding is a partial
ordering of the frame expansion coefficients. Compared to ordinary permutation
source coding, FPQ produces a greater number of possible quantization rates and
a higher maximum rate. Various representations for the partitions induced by
FPQ are presented, and reconstruction algorithms based on linear programming,
quadratic programming, and recursive orthogonal projection are derived.
Implementations of the linear and quadratic programming algorithms for uniform
and Gaussian sources show performance improvements over entropy-constrained
scalar quantization for certain combinations of vector dimension and coding
rate. Monte Carlo evaluation of the recursive algorithm shows that mean-squared
error (MSE) decays as 1/M^4 for an M-element frame, which is consistent with
previous results on optimal decay of MSE. Reconstruction using the canonical
dual frame is also studied, and several results relate properties of the
analysis frame to whether linear reconstruction techniques provide consistent
reconstructions.
Comment: 29 pages, 5 figures; details added to the proof of Theorem 4.3 and a few minor corrections
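The permutation-source-coding ingredient that FPQ applies to frame expansion coefficients can be sketched as follows (a minimal illustration; the codebook values `mu` are hypothetical, and a real FPQ encoder would first compute the frame expansion):

```python
# Toy permutation source code: only the ordering of the entries is encoded.

def psc_encode(y):
    """Return the positions of y's entries in non-increasing order."""
    return sorted(range(len(y)), key=lambda i: -y[i])

def psc_decode(perm, mu):
    """Reconstruct by placing fixed codebook values mu (non-increasing)
    at the transmitted positions."""
    x = [0.0] * len(perm)
    for rank, i in enumerate(perm):
        x[i] = mu[rank]
    return x

y = [0.2, 1.5, -0.3]  # e.g. frame-expansion coefficients of a source vector
print(psc_decode(psc_encode(y), [1.0, 0.0, -1.0]))  # [0.0, 1.0, -1.0]
```

The decoder output preserves the ordering of the input, which is exactly the partial-ordering information the abstract says FPQ transmits; the reconstruction algorithms in the paper then exploit the frame's redundancy rather than using fixed codebook values.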
Two Combinatorial Models with Identical Statics yet Different Dynamics
Motivated by the problem of sorting, we introduce two simple combinatorial
models with distinct Hamiltonians yet identical spectra (and hence partition
function) and show that the local dynamics of these models are very different.
After a deep quench, one model slowly relaxes to the sorted state whereas the
other model becomes blocked by the presence of stable local minima.
Comment: 23 pages, 11 figures
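A zero-temperature quench of this kind can be sketched with a toy model (hypothetical energy and move set, not the paper's Hamiltonians): propose a random adjacent swap and accept it only if it lowers the energy, here the number of inversions. With adjacent swaps this energy landscape has no local minima, so the dynamics always relax to the sorted state; a different choice of allowed moves is what can produce the blocking the abstract describes.

```python
import random

def inversions(s):
    """Energy: number of out-of-order pairs (zero iff sorted)."""
    n = len(s)
    return sum(1 for i in range(n) for j in range(i + 1, n) if s[i] > s[j])

def quench(s, steps=10000, seed=0):
    """Zero-temperature dynamics: a random adjacent swap is accepted
    only if it strictly lowers the inversion count."""
    rng = random.Random(seed)
    s = list(s)
    for _ in range(steps):
        i = rng.randrange(len(s) - 1)
        if s[i] > s[i + 1]:  # swapping a descent removes exactly one inversion
            s[i], s[i + 1] = s[i + 1], s[i]
    return s

print(quench([3, 1, 2, 0], steps=500))  # relaxes to the sorted state
```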
Exact Markov Chain-based Runtime Analysis of a Discrete Particle Swarm Optimization Algorithm on Sorting and OneMax
Meta-heuristics are powerful tools for solving optimization problems whose
structural properties are unknown or cannot be exploited algorithmically. We
propose such a meta-heuristic for a large class of optimization problems over
discrete domains based on the particle swarm optimization (PSO) paradigm. We
provide a comprehensive formal analysis of the performance of this algorithm on
certain "easy" reference problems in a black-box setting, namely the sorting
problem and the problem OneMax. In our analysis we use a Markov model of the
proposed algorithm to obtain upper and lower bounds on its expected
optimization time. Our bounds are essentially tight with respect to the
Markov model. We show that for a suitable choice of algorithm parameters the
expected optimization time is comparable to that of known algorithms and,
furthermore, that for other parameter regimes the algorithm behaves less
greedily and more exploratively, which can be desirable in practice in order
to escape local optima. Our analysis provides precise insight into the
tradeoff between optimization time and exploration. To obtain our results we
introduce the notion of indistinguishability of states of a Markov chain and
provide bounds on the solution of a recurrence equation with non-constant
coefficients by integration.
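The black-box setting and the "count objective evaluations until the optimum" measure can be illustrated with a minimal baseline on OneMax — randomized local search here, not the paper's PSO:

```python
import random

def onemax(x):
    """Black-box objective: number of ones (maximum n)."""
    return sum(x)

def rls(n, seed=0):
    """Randomized local search on OneMax (a simple stand-in for the
    discrete PSO analyzed in the paper): flip one uniformly random bit,
    keep the offspring if it is at least as good. Returns the number of
    objective evaluations until the optimum is reached."""
    rng = random.Random(seed)
    x = [rng.randrange(2) for _ in range(n)]
    evals = 1
    while onemax(x) < n:
        y = list(x)
        y[rng.randrange(n)] ^= 1
        evals += 1
        if onemax(y) >= onemax(x):
            x = y
    return evals

print(rls(16))
```

For this baseline the expected optimization time is Theta(n log n) by a coupon-collector argument; the abstract's Markov-chain machinery delivers bounds of this kind, plus the greedy/explorative tradeoff, for the PSO variant.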
Convergence in Models with Bounded Expected Relative Hazard Rates
We provide a general framework to study stochastic sequences related to
individual learning in economics, learning automata in computer sciences,
social learning in marketing, and other applications. More precisely, we study
the asymptotic properties of a class of stochastic sequences that take values
in [0,1] and satisfy a property called "bounded expected relative hazard
rates." Sequences that satisfy this property and feature "small step-size" or
"shrinking step-size" converge to 1 with high probability or almost surely,
respectively. These convergence results yield conditions for the learning
models in B\"orgers, Morales, and Sarin (2004), Erev and Roth (1998), and
Schlag (1998) to choose expected payoff maximizing actions with probability one
in the long run.
Comment: After revision. Accepted for publication by the Journal of Economic Theory
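A minimal simulation in the spirit of the cited Cross-type reinforcement models (the payoffs, step size, and horizon below are illustrative assumptions, not the paper's exact condition): the choice probability stays in [0,1] and drifts toward the expected-payoff-maximizing action.

```python
import random

def cross_learning(payoffs=(1.0, 0.5), lam=0.1, steps=2000, seed=0):
    """Toy Cross-style reinforcement learning: p is the probability of
    playing action 0 (the higher-payoff action); payoffs lie in [0,1].
    The chosen action is reinforced in proportion to its payoff."""
    rng = random.Random(seed)
    p = 0.5
    for _ in range(steps):
        if rng.random() < p:                     # played action 0
            p = p + lam * payoffs[0] * (1 - p)   # reinforce action 0
        else:                                    # played action 1
            p = p * (1 - lam * payoffs[1])       # reinforcing 1 shrinks p
    return p
```

With a small constant step size `lam` such sequences approach 1 with high probability; with a shrinking step size (e.g. `lam` proportional to 1/t) the convergence results in the abstract give almost-sure convergence.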