Zero-sum stopping games with asymmetric information
We study a model of two-player, zero-sum, stopping games with asymmetric
information. We assume that the payoff depends on two continuous-time Markov
chains (X, Y), where X is only observed by player 1 and Y only by player 2,
implying that the players have access to stopping times with respect to
different filtrations. We show the existence of a value in mixed stopping times
and provide a variational characterization for the value as a function of the
initial distribution of the Markov chains. We also prove a verification theorem
for optimal stopping rules, which allows one to construct optimal stopping times.
Finally, we use our results to solve two generic examples explicitly.
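The verification machinery above is specific to the asymmetric-information setting, but the underlying object, an optimal stopping problem on a Markov chain, can be sketched with standard value iteration. The chain, stopping payoff, and discount factor below are illustrative choices, and the chain is fully observed (unlike the paper's model):

```python
import numpy as np

# Toy optimal-stopping problem on a 3-state Markov chain (fully observed,
# unlike the paper's asymmetric-information setting). The transition
# matrix P, stopping payoff g, and discount factor are illustrative.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])
g = np.array([1.0, 0.0, 2.0])   # payoff collected upon stopping
beta = 0.95                     # discount factor

# Value iteration on V(x) = max( g(x), beta * sum_y P(x,y) V(y) )
V = np.zeros(3)
for _ in range(1000):
    V_new = np.maximum(g, beta * P @ V)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

# States where immediate stopping is (weakly) better than continuing.
stop_region = g >= beta * P @ V
```

The optimal rule is then "stop the first time the chain enters `stop_region`", the usual threshold structure that a verification theorem certifies.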
Using HMM in Strategic Games
In this paper we describe an approach to solving strategic games in which
players can assume different types over the course of the game. Our goal is to
infer which type the opponent is adopting at each moment so that we can improve
the player's odds. To achieve this, we combine Markov games with a hidden
Markov model. We discuss a hypothetical example of a tennis game whose solution
can be applied to any game with similar characteristics.
Comment: In Proceedings DCM 2013, arXiv:1403.768
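The inference step the abstract describes, tracking the opponent's hidden type from observed actions, is ordinary HMM forward filtering. A minimal sketch, with illustrative transition and emission matrices that are not taken from the paper:

```python
import numpy as np

# Forward filtering to track an opponent's hidden "type" (say, aggressive
# vs. defensive in a tennis rally). All numbers are illustrative.
A = np.array([[0.9, 0.1],      # P(next type | current type)
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],      # P(observed action | type):
              [0.3, 0.7]])     # rows = type, columns = action
belief = np.array([0.5, 0.5])  # prior over types

def update(belief, action):
    """One HMM forward step: predict with A, then condition on the action."""
    predicted = belief @ A
    posterior = predicted * B[:, action]
    return posterior / posterior.sum()

for action in [0, 0, 1, 0]:    # observed opponent actions
    belief = update(belief, action)
```

At each moment `belief` is the posterior over types, which the player can feed into a Markov-game best response.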
Sequential Estimation of Dynamic Discrete Games
This paper studies the estimation of dynamic discrete games of incomplete information. Two main econometric issues appear in the estimation of these models: the indeterminacy problem associated with the existence of multiple equilibria, and the computational burden in the solution of the game. We propose a class of pseudo maximum likelihood (PML) estimators that deals with these problems and we study the asymptotic and finite sample properties of several estimators in this class. We first focus on two-step PML estimators which, though attractive for their computational simplicity, have some important limitations: they are seriously biased in small samples; they require consistent nonparametric estimators of players' choice probabilities in the first step, which are not always feasible for some models and data; and they are asymptotically inefficient. Second, we show that a recursive extension of the two-step PML, which we call nested pseudo likelihood (NPL), addresses those drawbacks at a relatively small additional computational cost. The NPL estimator is particularly useful in applications where consistent nonparametric estimates of choice probabilities are either not available or very imprecise, e.g., models with permanent unobserved heterogeneity. Finally, we illustrate these methods in Monte Carlo experiments and in an empirical application to a model of firm entry and exit in oligopoly markets using Chilean data from several retail industries.
Keywords: dynamic discrete games, multiple equilibria, pseudo maximum likelihood estimation, entry and exit in oligopoly markets.
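The NPL idea, alternating a pseudo-likelihood step with an update of the equilibrium choice probabilities, can be illustrated on a deliberately simple symmetric entry game. Everything below (the game, the parameter values, the grid search over one parameter) is an assumption for illustration, not the paper's estimator:

```python
import numpy as np

# Toy NPL loop for a symmetric binary entry game: each firm enters with
# probability p satisfying p = sigmoid(alpha - delta * p). The data-
# generating values and the estimation scheme are illustrative only.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

alpha_true, delta_true = 1.0, 2.0
# Solve the equilibrium entry probability by fixed point, then simulate.
p_star = 0.5
for _ in range(200):
    p_star = sigmoid(alpha_true - delta_true * p_star)
entries = rng.random(5000) < p_star

def pml_estimate(p_belief, entries, grid):
    """Pseudo-ML step: pick alpha maximizing the logit likelihood,
    holding delta fixed at its true value for simplicity."""
    best, best_ll = None, -np.inf
    for a in grid:
        q = sigmoid(a - delta_true * p_belief)
        ll = np.sum(np.where(entries, np.log(q), np.log(1 - q)))
        if ll > best_ll:
            best, best_ll = a, ll
    return best

# NPL: alternate the PML step with the equilibrium-probability update.
grid = np.linspace(0.0, 2.0, 201)
p_k = entries.mean()            # first-stage nonparametric estimate
for _ in range(20):
    alpha_k = pml_estimate(p_k, entries, grid)
    p_k = sigmoid(alpha_k - delta_true * p_k)   # policy update
```

Stopping after the first pass through the loop gives the two-step PML estimator; iterating to a fixed point is the NPL extension, which no longer relies on the first-stage nonparametric estimate being precise.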
Optimal Use of Communication Resources
We study a repeated game with asymmetric information about a dynamic state of nature. In the course of the game, the better-informed player can communicate some or all of his information to the other. Our model covers costly and/or bounded communication. We characterize the set of equilibrium payoffs, and contrast these with the communication equilibrium payoffs, which by definition entail no communication costs.
Keywords: repeated games, communication, entropy.
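Since entropy appears among the keywords, a small sketch may help: the entropy rate of a Markov chain is the standard information-theoretic lower bound on the average number of bits per period needed to communicate the state. The two-state chain below is illustrative, not from the paper:

```python
import numpy as np

# Entropy rate of a two-state Markov chain: the average bits per period
# needed to communicate the state to an observer who knows the dynamics.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

# H(X_{t+1} | X_t) = sum_x pi(x) * H(P(x, .)), in bits.
row_entropy = -np.sum(P * np.log2(P), axis=1)
entropy_rate = pi @ row_entropy
```

A persistent chain (rows concentrated near 0 or 1) has a low entropy rate, so little communication is needed on average; this is the sense in which entropy governs the cost of keeping the uninformed player up to date.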
Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey
Wireless sensor networks (WSNs) consist of autonomous and resource-limited
devices. The devices cooperate to monitor one or more physical phenomena within
an area of interest. WSNs operate as stochastic systems because of randomness
in the monitored environments. For long service time and low maintenance cost,
WSNs require adaptive and robust methods to address data exchange, topology
formulation, resource and power optimization, sensing coverage and object
detection, and security challenges. In these problems, sensor nodes must make
optimal decisions from a set of available strategies to achieve design goals.
This survey reviews numerous applications of the Markov decision process (MDP)
framework, a powerful decision-making tool for developing adaptive algorithms
and protocols for WSNs. Furthermore, various solution methods are discussed and
compared to serve as a guide for using MDPs in WSNs.
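As a concrete illustration of the MDP framework in a WSN setting, here is a toy battery-management model for a single sensor node, solved by value iteration; the states, rewards, and dynamics are invented for illustration and not drawn from the survey:

```python
import numpy as np

# Toy MDP for a sensor node deciding each slot whether to SLEEP or SENSE.
# States are battery levels 0..3; sensing earns reward but drains the
# battery, sleeping slowly recharges it (e.g. via energy harvesting).
n_states, gamma = 4, 0.9
actions = ("sleep", "sense")

def transition(s, a):
    """Return (next_state, reward), deterministic in this toy model."""
    if a == "sleep":
        return min(s + 1, n_states - 1), 0.0
    if s == 0:                       # battery empty: sensing fails
        return 0, -1.0
    return s - 1, 1.0                # sense: reward 1, battery -1

# Standard value iteration.
V = np.zeros(n_states)
for _ in range(500):
    V_new = np.empty(n_states)
    for s in range(n_states):
        V_new[s] = max(r + gamma * V[s2]
                       for s2, r in (transition(s, a) for a in actions))
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

# Greedy policy with respect to the converged value function.
policy = [max(actions, key=lambda a, s=s: transition(s, a)[1]
              + gamma * V[transition(s, a)[0]]) for s in range(n_states)]
```

The resulting policy sleeps when the battery is empty and senses otherwise, the kind of adaptive duty-cycling rule the surveyed WSN applications derive from larger MDP models.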