525 research outputs found
LTLf/LDLf Non-Markovian Rewards
In Markov Decision Processes (MDPs), the reward obtained in a state is Markovian, i.e., it depends only on the last state and action. This dependency makes it difficult to reward more interesting long-term behaviors, such as always closing a door after it has been opened, or providing coffee only following a request. Extending MDPs to handle non-Markovian reward functions was the subject of two previous lines of work. Both use LTL variants to specify the reward function and then compile the new model back into a Markovian model. Building on recent progress in temporal logics over finite traces, we adopt LDLf for specifying non-Markovian rewards and provide an elegant automata construction for building a Markovian model, which extends that of previous work and offers strong minimality and compositionality guarantees.
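To make the compilation concrete, here is a minimal sketch (not the authors' construction) of the product idea: a small deterministic automaton tracks the temporal property, and a memoryless reward is defined on the pair (MDP state, automaton state). The event names and transition table below are hypothetical.

```python
# Illustrative sketch: compiling a non-Markovian reward into a Markovian one
# by taking the product of the MDP state with a deterministic finite automaton
# (DFA) that tracks the temporal property. All names here are hypothetical.

# DFA for "reward when coffee is delivered after a pending request":
# state 0 = no pending request, state 1 = request pending.
DFA_TRANSITIONS = {
    (0, "request"): 1,
    (0, "deliver"): 0,   # unsolicited coffee: no state change, no reward
    (1, "request"): 1,
    (1, "deliver"): 0,   # pending request satisfied
}

def dfa_step(q, event):
    """Advance the automaton on one observed event (default: stay put)."""
    return DFA_TRANSITIONS.get((q, event), q)

def product_reward(q, event):
    """Markovian reward on the augmented state: pay 1 exactly when a
    pending request is satisfied by a delivery."""
    return 1.0 if (q, event) == (1, "deliver") else 0.0

# The augmented (Markovian) state is the pair (mdp_state, q); a trace of
# events now yields history-dependent reward through a memoryless function.
q, total = 0, 0.0
for event in ["deliver", "request", "wait", "deliver", "deliver"]:
    total += product_reward(q, event)
    q = dfa_step(q, event)
print(total)  # 1.0: only the delivery following the request is rewarded
```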
Decision-Theoretic Planning with non-Markovian Rewards
A decision process in which rewards depend on history rather than merely on
the current state is called a decision process with non-Markovian rewards
(NMRDP). In decision-theoretic planning, where many desirable behaviours are
more naturally expressed as properties of execution sequences rather than as
properties of states, NMRDPs form a more natural model than the commonly
adopted fully Markovian decision process (MDP) model. While the more tractable
solution methods developed for MDPs do not directly apply in the presence of
non-Markovian rewards, a number of solution methods for NMRDPs have been
proposed in the literature. These all exploit a compact specification of the
non-Markovian reward function in temporal logic, to automatically translate the
NMRDP into an equivalent MDP which is solved using efficient MDP solution
methods. This paper presents NMRDPP (Non-Markovian Reward Decision Process
Planner), a software platform for developing and experimenting with
methods for decision-theoretic planning with non-Markovian rewards. The current
version of NMRDPP implements, under a single interface, a family of methods
based on existing as well as new approaches, which we describe in detail. These
include dynamic programming, heuristic search, and structured methods. Using
NMRDPP, we compare the methods and identify certain problem features that
affect their performance. NMRDPP's treatment of non-Markovian rewards is
inspired by the treatment of domain-specific search control knowledge in the
TLPlan planner, which it incorporates as a special case. In the First
International Probabilistic Planning Competition, NMRDPP was able to compete
and perform well in both the domain-independent and hand-coded tracks, using
search control knowledge in the latter.
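Since all of these methods ultimately solve the translated MDP over expanded states, a plain dynamic-programming baseline is easy to picture. The sketch below is a generic value iteration over a hypothetical dictionary-based model, not NMRDPP's actual interface.

```python
# Generic value iteration over the expanded (now Markovian) model that a
# translation like the one described produces. The model layout is a
# hypothetical convention: transitions[(s, a)] -> list of
# (prob, next_state, reward) triples.
def value_iteration(states, actions, transitions, gamma=0.95, tol=1e-8):
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            q_values = [
                sum(p * (r + gamma * V[s2]) for p, s2, r in transitions[(s, a)])
                for a in actions if (s, a) in transitions
            ]
            if not q_values:
                continue  # terminal state / no applicable actions
            best = max(q_values)
            delta = max(delta, abs(best - V[s]))
            V[s] = best  # in-place (Gauss-Seidel) update
        if delta < tol:
            return V
```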
Learning Task Specifications from Demonstrations
Real world applications often naturally decompose into several sub-tasks. In
many settings (e.g., robotics) demonstrations provide a natural way to specify
the sub-tasks. However, most methods for learning from demonstrations either do
not provide guarantees that the artifacts learned for the sub-tasks can be
safely recombined or limit the types of composition available. Motivated by
this deficit, we consider the problem of inferring Boolean non-Markovian
rewards (also known as logical trace properties or specifications) from
demonstrations provided by an agent operating in an uncertain, stochastic
environment. Crucially, specifications admit well-defined composition rules
that are typically easy to interpret. In this paper, we formulate the
specification inference task as a maximum a posteriori (MAP) probability
inference problem, apply the principle of maximum entropy to derive an analytic
demonstration likelihood model and give an efficient approach to search for the
most likely specification in a large candidate pool of specifications. In our
experiments, we demonstrate how learning specifications can help avoid common
problems that often arise due to ad-hoc reward composition.
Comment: NIPS 201
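As a rough illustration of the inference scheme (with a toy scoring rule standing in for the paper's analytic maximum-entropy likelihood), one can score each candidate specification by how much more often the demonstrator satisfies it than a random policy would, and return the MAP candidate. All names and the likelihood form below are assumptions.

```python
import math

# Toy MAP inference over a candidate pool of Boolean specifications.
# Each spec is a predicate trace -> bool; the likelihood below is a simple
# maximum-entropy-flavoured stand-in, not the paper's exact analytic model.
def map_specification(candidates, demos, random_sat_rate, prior=None, beta=5.0):
    """candidates: dict name -> predicate over traces
    demos: list of demonstrated traces
    random_sat_rate: dict name -> probability that a *random* policy
        satisfies the spec (e.g. estimated by Monte Carlo rollouts);
        rarer specs explain intentional satisfaction better."""
    prior = prior or {name: 1.0 / len(candidates) for name in candidates}
    best_name, best_score = None, -math.inf
    for name, spec in candidates.items():
        sat = sum(spec(t) for t in demos) / len(demos)
        # Reward specs the demonstrator satisfies far more often than chance.
        log_lik = beta * len(demos) * (sat - random_sat_rate[name])
        score = log_lik + math.log(prior[name])
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Example: demonstrations are strings of events; which rule explains them?
demos = ["ac", "abc", "aac"]
candidates = {
    "eventually_c": lambda t: "c" in t,
    "a_before_c":   lambda t: "c" not in t or t.index("a") < t.index("c"),
}
print(map_specification(candidates, demos,
                        random_sat_rate={"eventually_c": 0.9, "a_before_c": 0.3}))
# -> "a_before_c": both specs fit the data, but the rarer one is more likely
```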
Anytime-Constrained Reinforcement Learning
We introduce and study constrained Markov Decision Processes (cMDPs) with
anytime constraints. An anytime constraint requires the agent to never violate
its budget at any point in time, almost surely. Although Markovian policies are
no longer sufficient, we show that there exist optimal deterministic policies
augmented with cumulative costs. In fact, we present a fixed-parameter
tractable reduction from anytime-constrained cMDPs to unconstrained MDPs. Our
reduction yields planning and learning algorithms that are time and
sample-efficient for tabular cMDPs so long as the precision of the costs is
logarithmic in the size of the cMDP. However, we also show that computing
non-trivial approximately optimal policies is NP-hard in general. To circumvent
this bottleneck, we design provable approximation algorithms that efficiently
compute or learn an arbitrarily accurate approximately feasible policy with
optimal value so long as the maximum supported cost is bounded by a polynomial
in the cMDP or the absolute budget. Given our hardness results, our
approximation guarantees are the best possible under worst-case analysis.
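The cost-augmentation idea can be sketched directly: track cumulative cost as part of the state and admit only actions whose every outcome stays within the budget. The model layout below is hypothetical, and the one-step check shown is only the surface of the reduction, which must also avoid states from which violation later becomes unavoidable.

```python
# Sketch of the cost-augmentation idea behind anytime constraints: the
# augmented state is (s, c), where c is the cumulative cost so far, and an
# action is admissible only if *every* outcome keeps c within the budget
# (the "never violate, almost surely" requirement). Model layout is a
# hypothetical convention:
#   outcomes[(s, a)] -> list of (prob, next_state, reward, cost) tuples.
def admissible_actions(s, c, actions, outcomes, budget):
    admissible = []
    for a in actions:
        if (s, a) not in outcomes:
            continue
        # Anytime feasibility: no single outcome may exceed the budget.
        # (The full reduction also prunes augmented states from which a
        # future violation is unavoidable; this is only the one-step check.)
        if all(c + cost <= budget for _, _, _, cost in outcomes[(s, a)]):
            admissible.append(a)
    return admissible

# On the augmented MDP over (state, cumulative cost), Markovian policies
# suffice again, so a standard unconstrained solver can be applied.
```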
Bayesian Reinforcement Learning via Deep, Sparse Sampling
We address the problem of Bayesian reinforcement learning using efficient
model-based online planning. We propose an optimism-free Bayes-adaptive
algorithm to induce deeper and sparser exploration with a theoretical bound on
its performance relative to the Bayes-optimal policy at lower
computational complexity. The main novelty is the use of a candidate policy
generator to generate long-term options in the planning tree (over beliefs),
which allows us to create much sparser and deeper trees. Experimental results
on different environments show that in comparison to the state-of-the-art, our
algorithm is both computationally more efficient, and obtains significantly
higher reward in discrete environments.
Comment: Published in AISTATS 202
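A rough sketch of the planning pattern described above: instead of branching over all primitive actions at every depth, branch over a few candidate policies treated as long-term options and roll each out in the sampled model, yielding sparser and deeper trees. The `simulate` and `generate_candidates` interfaces are assumptions, not the paper's API.

```python
# Sketch of sparse, deep lookahead with a candidate-policy generator.
# Assumed interfaces (hypothetical, supplied by the caller):
#   simulate(belief, policy, horizon)  -> (discounted_return, next_belief)
#   generate_candidates(belief, k)     -> list of k candidate policies
def sparse_lookahead(belief, simulate, generate_candidates,
                     depth=3, k=3, rollouts=5, option_len=10, gamma=0.95):
    """Estimate the value of `belief` by branching over k candidate policies
    (long-term options) rather than over all primitive actions."""
    if depth == 0:
        return 0.0
    values = []
    for policy in generate_candidates(belief, k):
        total = 0.0
        for _ in range(rollouts):  # sparse sampling: few samples per option
            ret, next_belief = simulate(belief, policy, option_len)
            total += ret + (gamma ** option_len) * sparse_lookahead(
                next_belief, simulate, generate_candidates,
                depth - 1, k, rollouts, option_len, gamma)
        values.append(total / rollouts)
    return max(values) if values else 0.0
```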
A Simulation-Based Approach for Optimizing Generalized Semi-Markov Decision Processes
Time is a crucial variable in planning and often requires special attention since it introduces a specific structure along with additional complexity, especially in the case of decision under uncertainty. In this paper, after reviewing and comparing MDP frameworks designed to deal with temporal problems, we focus on Generalized Semi-Markov Decision Processes (GSMDPs) with observable time. We highlight the inherent structure and complexity of these problems and present the differences with classical reinforcement learning problems. Finally, we introduce a new simulation-based reinforcement learning method for solving GSMDPs, bringing together results from simulation-based policy iteration, regression techniques, and simulation theory. We illustrate our approach on a subway network control example.
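The simulation-based method can be pictured as approximate policy iteration: estimate Q-values for the current policy by Monte Carlo rollouts in a discrete-event simulator, fit a regressor to the estimates, and improve greedily. The simulator and regressor interfaces below are assumptions for illustration.

```python
# Sketch of simulation-based (approximate) policy iteration for a GSMDP.
# Assumed interfaces (hypothetical):
#   simulator.rollout(state, first_action, policy, horizon) -> float return
#   regressor.fit(X, y); regressor.predict(X) -> list of floats
def simulation_based_policy_iteration(simulator, regressor, states, actions,
                                      iters=10, rollouts=20, horizon=50):
    policy = {s: actions[0] for s in states}      # arbitrary initial policy
    for _ in range(iters):
        # Policy evaluation: Monte Carlo estimates of Q(s, a) under `policy`.
        X, y = [], []
        for s in states:
            for a in actions:
                ret = sum(simulator.rollout(s, a, policy, horizon)
                          for _ in range(rollouts)) / rollouts
                X.append((s, a))
                y.append(ret)
        regressor.fit(X, y)                       # regression step
        # Policy improvement: act greedily against the fitted Q-function.
        for s in states:
            q = regressor.predict([(s, a) for a in actions])
            policy[s] = actions[max(range(len(actions)), key=q.__getitem__)]
    return policy
```

With observable time, each element of `states` would carry the current clock reading, so the regressor generalizes across time as well as across discrete configurations.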
- …