Evolutionary Algorithms for Reinforcement Learning
There are two distinct approaches to solving reinforcement learning problems,
namely, searching in value function space and searching in policy space.
Temporal difference methods and evolutionary algorithms are well-known examples
of these approaches. Kaelbling, Littman and Moore recently provided an
informative survey of temporal difference methods. This article focuses on the
application of evolutionary algorithms to the reinforcement learning problem,
emphasizing alternative policy representations, credit assignment methods, and
problem-specific genetic operators. Strengths and weaknesses of the
evolutionary approach to reinforcement learning are presented, along with a
survey of representative applications.
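The distinction the abstract draws can be illustrated with a toy example: instead of learning a value function, an evolutionary algorithm mutates and selects policy parameters directly. The sketch below is not from the paper; it is a minimal, hypothetical (mu + lambda)-style search over a one-parameter policy for a simple 1-D tracking task.

```python
import random

random.seed(0)

def evaluate(gain, episodes=20):
    """Fitness of a one-parameter policy on a toy 1-D tracking task:
    the agent repeatedly moves toward a randomly placed target."""
    total = 0.0
    for _ in range(episodes):
        pos, target = 0.0, random.uniform(-1, 1)
        for _ in range(10):
            # the whole "policy" is one gain mapping error to action
            pos += gain * (target - pos)
        total -= abs(target - pos)  # reward = negative final distance
    return total / episodes

def evolve(pop_size=20, generations=30, sigma=0.1):
    """Search directly in policy-parameter space: select the best
    quarter of the population, refill with mutated copies."""
    pop = [random.uniform(0, 2) for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=evaluate, reverse=True)[: pop_size // 4]
        pop = parents + [p + random.gauss(0, sigma)
                         for p in random.choices(parents, k=pop_size - len(parents))]
    return max(pop, key=evaluate)

best = evolve()
```

No value function is ever estimated: fitness is simply the policy's episodic return, which is the defining feature of the policy-space approach surveyed here.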
Generating Interpretable Fuzzy Controllers using Particle Swarm Optimization and Genetic Programming
Autonomously training interpretable control strategies, called policies,
using pre-existing plant trajectory data is of great interest in industrial
applications. Fuzzy controllers have been used in industry for decades as
interpretable and efficient system controllers. In this study, we introduce a
fuzzy genetic programming (GP) approach called fuzzy GP reinforcement learning
(FGPRL) that can select the relevant state features, determine the size of the
required fuzzy rule set, and automatically adjust all the controller parameters
simultaneously. Each GP individual's fitness is computed using model-based
batch reinforcement learning (RL), which first trains a model using available
system samples and subsequently performs Monte Carlo rollouts to predict each
policy candidate's performance. We compare FGPRL to an extended version of a
related method called fuzzy particle swarm reinforcement learning (FPSRL),
which uses swarm intelligence to tune the fuzzy policy parameters. Experiments
using an industrial benchmark show that FGPRL is able to autonomously learn
interpretable fuzzy policies with high control performance.
Comment: Accepted at Genetic and Evolutionary Computation Conference 2018 (GECCO '18).
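The fitness computation described above (train a model from batch data, then score each policy candidate with Monte Carlo rollouts through that model) can be sketched in miniature. This is not FGPRL itself: the plant, the linear model, and the two example policies below are all hypothetical stand-ins for the paper's fuzzy GP individuals and industrial benchmark.

```python
import random

random.seed(1)

# Hypothetical plant x' = 0.9*x + 0.5*u, unknown to the learner.
def plant(x, u):
    return 0.9 * x + 0.5 * u

# 1) Pre-existing batch of system samples (x, u, x_next).
batch = [(x, u, plant(x, u))
         for x, u in ((random.uniform(-2, 2), random.uniform(-1, 1))
                      for _ in range(200))]

# 2) Fit a linear model x' = a*x + b*u by least squares (normal equations).
sxx = sum(x * x for x, u, _ in batch); sxu = sum(x * u for x, u, _ in batch)
suu = sum(u * u for x, u, _ in batch)
sxy = sum(x * y for x, u, y in batch); suy = sum(u * y for x, u, y in batch)
det = sxx * suu - sxu * sxu
a = (sxy * suu - suy * sxu) / det
b = (suy * sxx - sxy * sxu) / det

# 3) Fitness of a policy candidate: Monte Carlo rollouts through the
#    learned model, never through the real plant.
def fitness(policy, rollouts=20, horizon=15):
    total = 0.0
    for _ in range(rollouts):
        x = random.uniform(-2, 2)
        for _ in range(horizon):
            x = a * x + b * policy(x)   # simulate with the learned model
            total -= x * x              # reward: drive the state to zero
    return total / rollouts

good = fitness(lambda x: -1.8 * x)  # strong stabilizing feedback
bad = fitness(lambda x: 0.0)        # do nothing
```

In FGPRL this `fitness` value would score each GP individual (a fuzzy rule set) during evolution; here two fixed feedback laws stand in for the evolved candidates.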
Constructing Parsimonious Analytic Models for Dynamic Systems via Symbolic Regression
Developing mathematical models of dynamic systems is central to many
disciplines of engineering and science. Models facilitate simulations, analysis
of the system's behavior, decision making and design of automatic control
algorithms. Even inherently model-free control techniques such as reinforcement
learning (RL) have been shown to benefit from the use of models, typically
learned online. Any model construction method must address the tradeoff between
the accuracy of the model and its complexity, which is difficult to strike. In
this paper, we propose to employ symbolic regression (SR) to construct
parsimonious process models described by analytic equations. We have equipped
our method with two different state-of-the-art SR algorithms which
automatically search for equations that fit the measured data: Single Node
Genetic Programming (SNGP) and Multi-Gene Genetic Programming (MGGP). In
addition to the standard problem formulation in the state-space domain, we show
how the method can also be applied to input-output models of the NARX
(nonlinear autoregressive with exogenous input) type. We present the approach
on three simulated examples with up to 14-dimensional state space: an inverted
pendulum, a mobile robot, and a bipedal walking robot. A comparison with deep
neural networks and local linear regression shows that SR in most cases
outperforms these commonly used alternative methods. We demonstrate on a real
pendulum system that the analytic model found enables an RL controller to
successfully perform the swing-up task, based on a model constructed from only
100 data samples.
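The core idea of symbolic regression for parsimonious models is that candidate analytic structures compete on fit to the measured data. The sketch below is a deliberately tiny stand-in for SNGP/MGGP: the three candidate structures are hand-enumerated rather than evolved as expression trees, the dynamics are hypothetical, and the constants are fitted by a coarse grid search instead of the papers' methods.

```python
import math

# Hypothetical true dynamics to recover: x' = x + 0.1*sin(x) + 0.2*u
def true_step(x, u):
    return x + 0.1 * math.sin(x) + 0.2 * u

data = [(i / 10, u, true_step(i / 10, u))
        for i in range(-20, 21) for u in (-1.0, 0.0, 1.0)]

# Candidate analytic structures (a real SR run would evolve these).
candidates = {
    "x + a*sin(x) + b*u": lambda x, u, a, b: x + a * math.sin(x) + b * u,
    "a*x + b*u":          lambda x, u, a, b: a * x + b * u,
    "x + a*x*u + b":      lambda x, u, a, b: x + a * x * u + b,
}

def fit(f):
    """Grid-search the two constants; return (mse, a, b)."""
    best = (float("inf"), 0.0, 0.0)
    for i in range(-20, 21):
        for j in range(-20, 21):
            a, b = i / 20, j / 20
            e = sum((f(x, u, a, b) - y) ** 2 for x, u, y in data) / len(data)
            if e < best[0]:
                best = (e, a, b)
    return best

results = {name: fit(f) for name, f in candidates.items()}
winner = min(results, key=lambda n: results[n][0])
```

The structure matching the true dynamics fits the data essentially perfectly, while the misspecified alternatives retain irreducible error; this accuracy-versus-structure comparison is the tradeoff the paper's SR approach navigates automatically.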
Discrete and fuzzy dynamical genetic programming in the XCSF learning classifier system
A number of representation schemes have been presented for use within
learning classifier systems, ranging from binary encodings to neural networks.
This paper presents results from an investigation into using discrete and fuzzy
dynamical system representations within the XCSF learning classifier system. In
particular, asynchronous random Boolean networks are used to represent the
traditional condition-action production system rules in the discrete case and
asynchronous fuzzy logic networks in the continuous-valued case. It is shown
possible to use self-adaptive, open-ended evolution to design an ensemble of
such dynamical systems within XCSF to solve a number of well-known test
problems.
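To make the representation concrete: an asynchronous random Boolean network updates one randomly chosen node at a time, each node recomputing its bit from a private truth table over its wired inputs. The sketch below is a generic toy network, not the paper's XCSF integration (where input nodes would be clamped to the classifier's condition and an output node read as its action); all sizes here are arbitrary.

```python
import random

random.seed(3)

N, K = 6, 2  # 6 nodes, each driven by K = 2 randomly wired inputs

# Random wiring and a random Boolean function (truth table) per node.
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def async_step(state):
    """Asynchronous update: pick ONE node at random and recompute
    only that node from its inputs' current values."""
    i = random.randrange(N)
    idx = sum(state[j] << p for p, j in enumerate(inputs[i]))
    new = list(state)
    new[i] = tables[i][idx]
    return new

state = [random.randint(0, 1) for _ in range(N)]
for _ in range(100):
    state = async_step(state)
```

Self-adaptive, open-ended evolution as described in the abstract would mutate the wiring (`inputs`) and truth tables (`tables`) of such networks while XCSF maintains the ensemble.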