Feature Markov Decision Processes
General-purpose intelligent learning agents cycle through (complex, non-MDP)
sequences of observations, actions, and rewards. On the other hand,
reinforcement learning is well-developed for small finite state Markov Decision
Processes (MDPs). So far, extracting the right state representation from the
bare observations, i.e. reducing the agent setup to the MDP framework, has been
an art performed by human designers. Before we can think of mechanizing this
search for suitable MDPs, we need a formal objective criterion. The main
contribution of this article is to develop such a criterion. I also integrate
the various parts into one learning algorithm. Extensions to more realistic
dynamic Bayesian networks are developed in a companion article.
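As a loose illustration of the state-extraction problem described above (not the formal criterion the article develops), the sketch below maps observation histories to MDP states through a hand-written feature map and runs tabular Q-learning on the resulting states; the feature map phi, the toy environment, and all names are hypothetical.

import random
from collections import defaultdict

def phi(history):
    # Hypothetical feature map: reduce the observation history to an MDP state
    # by keeping only the last observation. The article's contribution is a
    # formal criterion for choosing such maps, which this sketch does not implement.
    return history[-1] if history else 0

def toy_env_step(obs, action):
    # Purely illustrative two-observation environment.
    next_obs = (obs + action) % 2
    reward = 1.0 if next_obs == 1 else 0.0
    done = random.random() < 0.05
    return next_obs, reward, done

def q_learning_on_reduced_mdp(episodes=200, alpha=0.1, gamma=0.9, eps=0.1):
    # Tabular Q-learning on the states produced by the feature map phi.
    actions = [0, 1]
    Q = defaultdict(float)
    for _ in range(episodes):
        history, obs, done = [0], 0, False
        while not done:
            s = phi(history)
            a = random.choice(actions) if random.random() < eps \
                else max(actions, key=lambda b: Q[(s, b)])
            obs, reward, done = toy_env_step(obs, a)
            history.append(obs)
            q_next = max(Q[(phi(history), b)] for b in actions)
            Q[(s, a)] += alpha * (reward + gamma * q_next - Q[(s, a)])
    return dict(Q)

print(q_learning_on_reduced_mdp())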
Robust Markov Decision Processes
Markov decision processes (MDPs) are powerful tools for decision making in uncertain dynamic environments. However, the solutions of MDPs are of limited practical use due to their sensitivity to distributional model parameters, which are typically unknown and have to be estimated by the decision maker. To counter the detrimental effects of estimation errors, we consider robust MDPs that offer probabilistic guarantees in view of the unknown parameters. To this end, we assume that an observation history of the MDP is available. Based on this history, we derive a confidence region that contains the unknown parameters with a pre-specified probability 1 - β. Afterwards, we determine a policy that attains the highest worst-case performance over this confidence region. By construction, this policy achieves or exceeds its worst-case performance with a confidence of at least 1 - β. Our method involves the solution of tractable conic programs of moderate size.
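As a rough illustration of the worst-case idea (not the conic-programming method of the paper), the sketch below runs robust value iteration when the uncertainty in each transition probability is described by a simple interval confidence set; the intervals, the toy MDP, and all names are assumptions made for the example.

import numpy as np

def worst_case_expectation(lo, hi, values):
    # Adversarial distribution within the box [lo, hi] (summing to 1) that
    # minimises the expected value: start from the lower bounds and push the
    # remaining probability mass onto the lowest-value successors first.
    p = lo.copy()
    budget = 1.0 - p.sum()
    for s in np.argsort(values):
        add = min(hi[s] - p[s], budget)
        p[s] += add
        budget -= add
        if budget <= 1e-12:
            break
    return p @ values

def robust_value_iteration(P_lo, P_hi, R, gamma=0.9, iters=500):
    # P_lo, P_hi: (S, A, S) interval bounds on transition probabilities;
    # R: (S, A) rewards. Returns worst-case optimal values and a greedy policy.
    S, A, _ = P_lo.shape
    V, Q = np.zeros(S), np.zeros((S, A))
    for _ in range(iters):
        Q = np.array([[R[s, a] + gamma * worst_case_expectation(P_lo[s, a], P_hi[s, a], V)
                       for a in range(A)] for s in range(S)])
        V = Q.max(axis=1)
    return V, Q.argmax(axis=1)

# Toy 2-state, 2-action example with +/-0.1 uncertainty around a nominal model.
P_nom = np.array([[[0.8, 0.2], [0.5, 0.5]],
                  [[0.3, 0.7], [0.9, 0.1]]])
P_lo, P_hi = np.clip(P_nom - 0.1, 0, 1), np.clip(P_nom + 0.1, 0, 1)
R = np.array([[1.0, 0.5], [0.0, 2.0]])
print(robust_value_iteration(P_lo, P_hi, R))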
Multiple-Environment Markov Decision Processes
We introduce Multiple-Environment Markov Decision Processes (MEMDPs), which are
MDPs with a set of probabilistic transition functions. The goal in an MEMDP is
to synthesize a single controller with guaranteed performance against all
environments, even though the environment is unknown a priori. While MEMDPs can
be seen as a special class of partially observable MDPs, we show that several
verification problems that are undecidable for partially observable MDPs are
decidable for MEMDPs, and sometimes even have efficient solutions.
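A minimal sketch of the evaluation side of this setting: representing an MEMDP, as an assumption, by a list of transition matrices over shared states and actions, the code below computes the expected discounted reward of a single memoryless randomized controller in every environment and reports the worst case. Synthesizing the controller itself, which is the hard part addressed in the paper, is not shown.

import numpy as np

def policy_value(P, R, pi, gamma=0.95):
    # Expected discounted value of memoryless policy pi in one environment.
    # P: (S, A, S) transitions, R: (S, A) rewards, pi: (S, A) action probabilities.
    S = P.shape[0]
    P_pi = np.einsum('sa,sat->st', pi, P)   # state-to-state chain under pi
    R_pi = np.einsum('sa,sa->s', pi, R)     # expected one-step reward under pi
    return np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)

def worst_case_value(envs, R, pi, init=0, gamma=0.95):
    # Worst value of one controller across all environments of the MEMDP.
    return min(policy_value(P, R, pi, gamma)[init] for P in envs)

# Two hypothetical environments sharing states and actions but differing in dynamics.
env1 = np.array([[[0.9, 0.1], [0.2, 0.8]], [[0.5, 0.5], [0.1, 0.9]]])
env2 = np.array([[[0.1, 0.9], [0.8, 0.2]], [[0.5, 0.5], [0.9, 0.1]]])
R = np.array([[0.0, 1.0], [2.0, 0.0]])
pi = np.array([[0.5, 0.5], [0.5, 0.5]])     # uniform randomized controller
print(worst_case_value([env1, env2], R, pi))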
Scalable Verification of Markov Decision Processes
Markov decision processes (MDPs) are useful for modelling concurrent process
optimisation problems, but verifying them with numerical methods is often
intractable. Existing approximate approaches do not scale well and are
limited to memoryless schedulers. Here we present the basis of scalable
verification for MDPs, using an O(1) memory representation of
history-dependent schedulers. We thus facilitate scalable learning techniques
and the use of massively parallel verification.
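One way an O(1)-memory representation of history-dependent schedulers can be realized, sketched here as an assumption rather than as the authors' construction, is to identify a scheduler with a single integer seed and hash the seed together with a running hash of the history to pick each action; the toy MDP, the reachability property, and all names below are illustrative only.

import random
import hashlib

def hash_step(h, state):
    # Fold the newly observed state into a running hash of the history.
    return hashlib.sha256(h + state.to_bytes(4, "big")).digest()

def scheduled_action(seed, h, n_actions):
    # Deterministic choice from (seed, history hash): the scheduler is
    # history-dependent but is identified by a single integer seed.
    digest = hashlib.sha256(seed.to_bytes(8, "big") + h).digest()
    return int.from_bytes(digest[:4], "big") % n_actions

def estimate_reach_probability(step, seed, target, n_actions=2, horizon=50, runs=1000):
    # Monte Carlo estimate of P(reach target within horizon) under one scheduler.
    hits = 0
    for _ in range(runs):
        state, h = 0, b""
        for _ in range(horizon):
            h = hash_step(h, state)
            state = step(state, scheduled_action(seed, h, n_actions))
            if state == target:
                hits += 1
                break
    return hits / runs

def toy_step(state, action):
    # Illustrative 3-state chain: action 1 advances with probability 0.4.
    if action == 1 and random.random() < 0.4:
        return min(state + 1, 2)
    return state

# Sample a handful of schedulers (seeds) and keep the best one found; a crude
# stand-in for the scalable learning loop referred to in the abstract.
best_seed = max(range(20), key=lambda s: estimate_reach_probability(toy_step, s, target=2))
print(best_seed, estimate_reach_probability(toy_step, best_seed, target=2))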
Limit Synchronization in Markov Decision Processes
Markov decision processes (MDPs) are finite-state systems with both strategic
and probabilistic choices. After fixing a strategy, an MDP produces a sequence
of probability distributions over states. The sequence is eventually
synchronizing if the probability mass accumulates in a single state, possibly
in the limit. Precisely, for 0 <= p <= 1 the sequence is p-synchronizing if a
probability distribution in the sequence assigns probability at least p to some
state, and we distinguish three synchronization modes: (i) sure winning if
there exists a strategy that produces a 1-synchronizing sequence; (ii)
almost-sure winning if there exists a strategy that produces a sequence that
is, for all epsilon > 0, a (1-epsilon)-synchronizing sequence; (iii) limit-sure
winning if for all epsilon > 0, there exists a strategy that produces a
(1-epsilon)-synchronizing sequence.
We consider the problem of deciding whether an MDP is sure, almost-sure, or
limit-sure winning, and we establish the decidability and optimal complexity
for all modes, as well as the memory requirements for winning strategies. Our
main contributions are as follows: (a) for each winning mode we present
characterizations that give a PSPACE complexity for the decision problems, and
we establish matching PSPACE lower bounds; (b) we show that for sure winning
strategies, exponential memory is sufficient and may be necessary, and that in
general infinite memory is necessary for almost-sure winning, and unbounded
memory is necessary for limit-sure winning; (c) along with our results, we
establish new complexity results for alternating finite automata over a
one-letter alphabet.
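To make the synchronization definitions concrete, the sketch below fixes a memoryless randomized strategy in a small MDP (both hypothetical), iterates the induced distribution over states, and reports the largest p for which the observed prefix is p-synchronizing.

import numpy as np

def distribution_sequence(P, pi, d0, steps):
    # Sequence of state distributions of an MDP under a fixed memoryless strategy.
    # P: (S, A, S) transitions, pi: (S, A) action probabilities, d0: initial distribution.
    P_pi = np.einsum('sa,sat->st', pi, P)   # Markov chain induced by pi
    d, seq = d0.copy(), [d0.copy()]
    for _ in range(steps):
        d = d @ P_pi
        seq.append(d.copy())
    return seq

def best_synchronization(seq):
    # Largest p such that the prefix is p-synchronizing: the maximal probability
    # mass placed on a single state by any distribution in the sequence.
    return max(d.max() for d in seq)

# Hypothetical 3-state, 2-action MDP in which action 1 slowly funnels mass into state 2.
P = np.array([[[0.5, 0.5, 0.0], [0.2, 0.0, 0.8]],
              [[0.5, 0.5, 0.0], [0.0, 0.2, 0.8]],
              [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])
pi = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])   # always play action 1
seq = distribution_sequence(P, pi, np.array([0.5, 0.5, 0.0]), steps=10)
print(best_synchronization(seq))   # mass accumulates in state 2, approaching 1 in the limit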