microPhantom: Playing microRTS under uncertainty and chaos
This competition paper presents microPhantom, a bot playing microRTS and
participating in the 2020 microRTS AI competition. microPhantom is based on our
previous bot POAdaptive which won the partially observable track of the 2018
and 2019 microRTS AI competitions. In this paper, we focus on decision-making
under uncertainty, by tackling the Unit Production Problem with a method based
on a combination of Constraint Programming and decision theory. We show that
using our method to decide which units to train significantly improves the win
rate against the second-best microRTS bot from the partially observable track.
We also show that our method is resilient in chaotic environments, with only a
very small loss of efficiency. To allow replicability and to facilitate further
research, the source code of microPhantom is available, as well as the
Constraint Programming toolkit it uses.
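The decision-theoretic flavor of the unit-production step can be illustrated with a toy expected-utility sketch. The unit names and the counter-matrix payoffs below are invented for illustration and are not microPhantom's actual combat model:

```python
# Toy sketch: choosing which unit type to train under uncertainty about the
# opponent's (hidden) production. Payoff values are invented for illustration;
# this is NOT microPhantom's actual model.
payoff = {
    "light":  {"light": 0.0, "ranged": 1.0, "heavy": -1.0},
    "ranged": {"light": -1.0, "ranged": 0.0, "heavy": 1.0},
    "heavy":  {"light": 1.0, "ranged": -1.0, "heavy": 0.0},
}

def best_unit(belief):
    """Pick the unit type maximizing expected payoff, given a probability
    distribution (belief) over which unit the unseen opponent is training."""
    def expected_payoff(unit):
        return sum(p * payoff[unit][opp] for opp, p in belief.items())
    return max(payoff, key=expected_payoff)

# If we believe the opponent is mostly training light units, counter accordingly.
choice = best_unit({"light": 0.6, "ranged": 0.3, "heavy": 0.1})
```

In the paper's setting, the maximization is carried out over constrained production plans via Constraint Programming rather than by this brute-force enumeration.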
Magnifying Lens Abstraction for Stochastic Games with Discounted and Long-run Average Objectives
Turn-based stochastic games and their important subclass, Markov decision
processes (MDPs), provide models for systems with both probabilistic and
nondeterministic behaviors. We consider turn-based stochastic games with two
classical quantitative objectives: discounted-sum and long-run average
objectives. The game models and the quantitative objectives are widely used in
probabilistic verification, planning, optimal inventory control, network
protocol and performance analysis. Games and MDPs that model realistic systems
often have very large state spaces, and probabilistic abstraction techniques
are necessary to handle the state-space explosion. The commonly used
full-abstraction techniques do not yield space savings for systems that have
many states with similar values but not necessarily similar transition
structures. A semi-abstraction technique, Magnifying-Lens Abstraction (MLA),
which clusters states based on value only, disregarding differences in their
transition relations, was proposed for qualitative objectives (reachability
and safety objectives). In this paper we extend the
MLA technique to solve stochastic games with discounted-sum and long-run
average objectives. We present an MLA-based abstraction-refinement
algorithm for stochastic games and MDPs with discounted-sum objectives. For
long-run average objectives, our solution works for all MDPs and a sub-class of
stochastic games where every state has the same value.
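The value-based clustering at the heart of MLA can be sketched on a toy discounted MDP. The MDP below is invented, and the sketch shows only the clustering criterion (group states by value, ignore transition structure), not the full magnify-and-refine loop of the algorithm:

```python
# Toy discounted MDP: solve it by value iteration, then cluster states by
# value alone, as MLA does. The MDP and numbers are invented for illustration.
GAMMA = 0.9

# transitions[s][a] = list of (probability, next_state)
transitions = {
    0: {"a": [(1.0, 1)], "b": [(0.5, 0), (0.5, 2)]},
    1: {"a": [(1.0, 3)]},
    2: {"a": [(1.0, 3)]},
    3: {"a": [(1.0, 3)]},
}
reward = {0: 0.0, 1: 1.0, 2: 1.0, 3: 0.0}

def value_iteration(eps=1e-9):
    """Standard discounted-sum value iteration to a fixed point."""
    V = {s: 0.0 for s in transitions}
    while True:
        new_V = {
            s: reward[s] + GAMMA * max(
                sum(p * V[t] for p, t in succ) for succ in acts.values()
            )
            for s, acts in transitions.items()
        }
        if max(abs(new_V[s] - V[s]) for s in V) < eps:
            return new_V
        V = new_V

def cluster_by_value(V, tol=0.05):
    """The MLA idea: group states whose values are within tol of each other,
    disregarding whether their transition structures differ."""
    clusters = []
    for s in sorted(V, key=V.get):
        if clusters and abs(V[s] - V[clusters[-1][0]]) <= tol:
            clusters[-1].append(s)
        else:
            clusters.append([s])
    return clusters
```

Here states 1 and 2 end up in the same region because their values coincide; a full MLA implementation would then solve each region concretely ("magnify") using value bounds at the region boundary, and refine regions whose bounds are too loose.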
An Extended Mean Field Game for Storage in Smart Grids
We consider a stylized model for a power network with distributed local power
generation and storage. This system is modeled as a network connecting a large
number of nodes, where each node has a local electricity consumption, a local
electricity production (e.g. photovoltaic panels), and a local storage device
that it manages. Depending on its instantaneous consumption and
production rates as well as its storage management decision, each node may
either buy or sell electricity, impacting the electricity spot price. The
objective at each node is to minimize energy and storage costs by optimally
controlling the storage device. In a non-cooperative game setting, we are led
to the analysis of a non-zero-sum stochastic game with N players where the
interaction takes place through the spot price mechanism. For an infinite
number of agents, our model corresponds to an Extended Mean-Field Game (EMFG).
In a linear-quadratic setting, we obtain an explicit solution to the EMFG, we
show that it provides an approximate Nash equilibrium for the N-player game, and
we compare this solution to the optimal strategy of a central planner.
Comment: 27 pages, 5 figures. arXiv admin note: text overlap with
arXiv:1607.02130 by other authors
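The price-impact coupling described in this abstract can be sketched minimally: each node's net demand is its consumption minus its production plus its storage charging rate, and the spot price moves linearly with the average net demand across nodes. The baseline price and impact coefficient below are invented, and the linear form is an illustrative assumption, not the paper's calibrated model:

```python
# Minimal sketch of a spot price driven by aggregate net demand.
# p0 (baseline price) and alpha (impact coefficient) are invented values.
def spot_price(net_demands, p0=50.0, alpha=2.0):
    # Linear price impact: the average net demand across all nodes shifts
    # the price away from the baseline p0.
    return p0 + alpha * sum(net_demands) / len(net_demands)

def node_cost(consumption, production, charge, price):
    # Net demand > 0 means the node buys at the spot price (positive cost);
    # net demand < 0 means it sells (negative cost, i.e. revenue).
    net = consumption - production + charge
    return price * net
```

In the mean-field limit, each node optimizes its storage control against the price induced by the population's average behavior, which is exactly the interaction channel of the EMFG.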