FLECS: Planning with a Flexible Commitment Strategy
There has been evidence that least-commitment planners can efficiently handle
planning problems that involve difficult goal interactions. This evidence has
led to the common belief that delayed-commitment is the "best" possible
planning strategy. However, we recently found evidence that eager-commitment
planners can handle a variety of planning problems more efficiently, in
particular those with difficult operator choices. Resigned to the futility of
trying to find a universally successful planning strategy, we devised a planner
that can be used to study which domains and problems are best for which
planning strategies. In this article we introduce this new planning algorithm,
FLECS, which uses a FLExible Commitment Strategy with respect to plan-step
orderings. It is able to use any strategy from delayed-commitment to
eager-commitment. Combining delayed and eager operator-ordering commitments
lets FLECS exploit both an explicit simulated execution state and reasoning
about planning constraints. FLECS can
vary its commitment strategy across different problems and domains, and also
during the course of a single planning problem. FLECS represents a novel
contribution to planning in that it explicitly provides the choice of which
commitment strategy to use while planning. FLECS provides a framework to
investigate the mapping from planning domains and problems to efficient
planning strategies.

Comment: See http://www.jair.org/ for an online appendix and other files accompanying this article.
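The core mechanism is a per-step choice: either commit an operator eagerly into the total order and simulate its execution, or delay and record only ordering constraints. The Python sketch below illustrates that choice point; it is not the FLECS implementation, and every name, data structure, and simplification in it (set-of-facts states, add-only effects) is a hypothetical assumption made for illustration.

```python
# A minimal sketch, not FLECS itself: the per-step commitment choice of a
# flexible-commitment planner. States are sets of facts; a step is a dict
# of preconditions and (add-only) effects. All names are hypothetical.

def eager_commit(step, state):
    """Fix the step next in the total order and simulate its execution."""
    if not step["preconds"] <= state:
        raise ValueError("eager commitment requires preconditions to hold now")
    return state | step["effects"]  # delete effects omitted for brevity

def delayed_commit(step, constraints):
    """Leave the step partially ordered; record only ordering constraints."""
    constraints.append((step["name"], "after producers of", step["preconds"]))
    return constraints

# One run can mix both strategies, step by step:
state, constraints = {"at-home"}, []
drive = {"name": "drive", "preconds": {"at-home"}, "effects": {"at-work"}}
email = {"name": "email", "preconds": {"at-work"}, "effects": {"mail-sent"}}
state = eager_commit(drive, state)                # eager: simulated execution
constraints = delayed_commit(email, constraints)  # delayed: constraints only
```

Varying that decision per step is what lets a planner slide anywhere between total-order (eager) and partial-order (delayed) planning within a single problem.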
Generation of Policy-Level Explanations for Reinforcement Learning
Though reinforcement learning has greatly benefited from the incorporation of
neural networks, the inability to verify the correctness of such systems limits
their use. Current work in explainable deep learning focuses on explaining only
a single decision in terms of input features, making it unsuitable for
explaining a sequence of decisions. To address this need, we introduce
Abstracted Policy Graphs, which are Markov chains of abstract states. This
representation concisely summarizes a policy so that individual decisions can
be explained in the context of expected future transitions. Additionally, we
propose a method to generate these Abstracted Policy Graphs for deterministic
policies, given a learned value function and a set of observed transitions
(which may be off-policy transitions collected during training). Since no restrictions
are placed on how the value function is generated, our method is compatible
with many existing reinforcement learning methods. We prove that the worst-case
time complexity of our method is quadratic in the number of features and linear
in the number of provided transitions, i.e. O(|F|^2 · |T|) for feature set F
and transition set T. By applying
our method to a family of domains, we show that our method scales well in
practice and produces Abstracted Policy Graphs which reliably capture
relationships within these domains.

Comment: Accepted to Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (2019).
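As a rough illustration of the object being built, the sketch below assembles a Markov chain over abstract states from observed transitions. The abstraction used here (bucketing states by their value estimates) and all names are assumptions for this example only; the paper's method derives its abstraction from the learned value function and feature importances rather than from such simple bucketing.

```python
# A hypothetical sketch of building an abstract-state Markov chain from
# observed transitions; not the paper's algorithm, whose abstraction is
# driven by the value function and per-feature importances.
from collections import defaultdict

def build_policy_graph(transitions, value_fn, n_buckets=10):
    """transitions: iterable of (state, next_state) pairs; states hashable.
    Returns {abstract: {abstract: probability}} edge probabilities."""
    values = [value_fn(s) for s, _ in transitions]
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_buckets or 1.0  # avoid zero bucket width

    def abstract(s):  # map a concrete state to a value-bucket id
        return max(0, min(int((value_fn(s) - lo) / width), n_buckets - 1))

    counts = defaultdict(lambda: defaultdict(int))
    for s, s_next in transitions:  # single pass over provided transitions
        counts[abstract(s)][abstract(s_next)] += 1

    # Normalize outgoing counts into Markov-chain transition probabilities.
    return {a: {b: c / sum(nbrs.values()) for b, c in nbrs.items()}
            for a, nbrs in counts.items()}
```

Each abstract node summarizes many concrete states, so an individual decision can be explained in the context of the expected transitions that follow it; note the single pass over the transitions, consistent with the stated linear dependence on their number.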
Isomorphism of Intransitive Linear Lie Equations
We show that a formal isomorphism of intransitive linear Lie equations along
transversals to the orbits can be extended to neighborhoods of these
transversals. In the analytic case, the word "formal" can be dropped from the
theorems. We also associate an intransitive Lie algebra with each intransitive
linear Lie equation, and from the intransitive Lie algebra we recover the
linear Lie equation up to formal isomorphism. The intransitive Lie algebra
gives the structure functions introduced by E. Cartan.
On Graph Refutation for Relational Inclusions
We introduce a graphical refutation calculus for relational inclusions: it
reduces establishing a relational inclusion to establishing that a graph
constructed from it has empty extension. This sound and complete calculus is
conceptually simpler and easier to use than the usual ones.

Comment: In Proceedings LSFA 2011, arXiv:1203.542
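The reduction stated in the abstract has a simple extensional counterpart on finite relations: R ⊆ S holds exactly when R ∩ (complement of S) has empty extension. The sketch below only illustrates that equivalence on concrete finite relations; the paper's calculus establishes it symbolically, by refutation on a graph constructed from the inclusion, and nothing here reflects that calculus itself.

```python
# A hypothetical finite, extensional illustration of the reduction:
# R ⊆ S  iff  R ∩ complement(S) is empty. Not the paper's graph calculus.

def has_empty_extension(rel: set) -> bool:
    return not rel

def inclusion_holds(r: set, s: set, universe: set) -> bool:
    """r, s: binary relations over `universe`, given as sets of pairs."""
    s_complement = {(x, y) for x in universe for y in universe} - s
    return has_empty_extension(r & s_complement)

U = {1, 2, 3}
R = {(1, 2), (2, 3)}
S = {(1, 2), (2, 3), (1, 3)}
assert inclusion_holds(R, S, U)       # R ⊆ S: the intersection is empty
assert not inclusion_holds(S, R, U)   # S ⊄ R: (1, 3) witnesses non-emptiness
```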