Reinforcement Learning: A Survey
This paper surveys the field of reinforcement learning from a
computer-science perspective. It is written to be accessible to researchers
familiar with machine learning. Both the historical basis of the field and a
broad selection of current work are summarized. Reinforcement learning is the
problem faced by an agent that learns behavior through trial-and-error
interactions with a dynamic environment. The work described here has a
resemblance to work in psychology, but differs considerably in the details and
in the use of the word ``reinforcement.'' The paper discusses central issues of
reinforcement learning, including trading off exploration and exploitation,
establishing the foundations of the field via Markov decision theory, learning
from delayed reinforcement, constructing empirical models to accelerate
learning, making use of generalization and hierarchy, and coping with hidden
state. It concludes with a survey of some implemented systems and an assessment
of the practical utility of current methods for reinforcement learning.
Comment: See http://www.jair.org/ for any accompanying file
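Two of the central issues the survey names, trading off exploration against exploitation and learning from delayed reinforcement, can be illustrated with a minimal tabular Q-learning sketch. This is a standard textbook construction, not code from the paper; the `env_step` interface, the start-in-state-0 convention, and all parameter values are assumptions made for the example.

```python
import random

def q_learning(env_step, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1, max_steps=200):
    """Tabular Q-learning with epsilon-greedy exploration.

    env_step(state, action) -> (next_state, reward, done) must be
    supplied by the caller; every episode starts in state 0.
    """
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        state = 0
        for _ in range(max_steps):
            # Exploration/exploitation trade-off: with probability
            # epsilon act randomly, otherwise act greedily
            # (ties broken at random).
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                best = max(Q[state])
                action = random.choice(
                    [a for a in range(n_actions) if Q[state][a] == best])
            next_state, reward, done = env_step(state, action)
            # Temporal-difference update: credit for a delayed reward
            # propagates backward through the value estimates.
            target = reward + (0.0 if done else gamma * max(Q[next_state]))
            Q[state][action] += alpha * (target - Q[state][action])
            state = next_state
            if done:
                break
    return Q
```

On a small chain environment whose only reward sits at the far end, the delayed-reinforcement problem is visible directly: early states learn their values only after the reward has propagated back through the temporal-difference updates.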
Recent advances on filtering and control for nonlinear stochastic complex systems with incomplete information: A survey
This Article is provided by the Brunel Open Access Publishing Fund. Copyright © 2012 Hindawi Publishing.
Some recent advances on the filtering and control problems for nonlinear stochastic complex systems with incomplete information are surveyed. The incomplete information under consideration mainly includes missing measurements, randomly varying sensor delays, signal quantization, sensor saturations, and signal sampling. With such incomplete information, the developments on various filtering and control issues are reviewed in great detail. In particular, the addressed nonlinear stochastic complex systems are so comprehensive that they include conventional nonlinear stochastic systems, different kinds of complex networks, and a large class of sensor networks. The corresponding filtering and control technologies for such nonlinear stochastic complex systems are then discussed. Subsequently, some of the latest results on the filtering and control problems for complex systems with incomplete information are given. Finally, conclusions are drawn and several possible future research directions are pointed out.
This work was supported in part by the National Natural Science Foundation of China under Grant nos. 61134009, 61104125, 61028008, 61174136, 60974030, and 61074129, the Qing Lan Project of Jiangsu Province of China, the Project sponsored by SRF for ROCS of SEM of China, the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the Royal Society of the UK, and the Alexander von Humboldt Foundation of Germany.
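The simplest instance of filtering under incomplete information is a Kalman filter that tolerates missing measurements: when a sample is dropped, only the time update (prediction) runs. The sketch below is a textbook scheme offered for orientation, not any specific method from this survey; all names and the `None`-marks-a-dropout convention are assumptions of the example.

```python
import numpy as np

def kalman_missing(F, H, Q, R, x0, P0, measurements):
    """Kalman filter that tolerates missing measurements.

    `measurements` is a list in which an entry of None marks a dropped
    sample; on those steps only the prediction is performed.
    """
    x, P = x0, P0
    estimates = []
    for z in measurements:
        # Time update (prediction).
        x = F @ x
        P = F @ P @ F.T + Q
        if z is not None:
            # Measurement update, performed only when data arrived.
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(len(x0)) - K @ H) @ P
        estimates.append(x.copy())
    return estimates
```

Between dropouts the covariance `P` grows under the prediction step, so the filter automatically weights the next available measurement more heavily, which is the basic mechanism that the more general results surveyed here build on.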
Supervised Quantum Learning without Measurements
We propose a quantum machine learning algorithm for efficiently solving a
class of problems encoded in quantum controlled unitary operations. The central
physical mechanism of the protocol is the iteration of a quantum time-delayed
equation that introduces feedback in the dynamics and eliminates the necessity
of intermediate measurements. The performance of the quantum algorithm is
analyzed by comparing the results obtained in numerical simulations with the
outcome of classical machine learning methods for the same problem. The use of
time-delayed equations enhances the toolbox of the field of quantum machine
learning, which may enable unprecedented applications in quantum technologies.
Differential Dynamic Programming for time-delayed systems
Trajectory optimization considers the problem of deciding how to control a
dynamical system to move along a trajectory which minimizes some cost function.
Differential Dynamic Programming (DDP) is an optimal control method which
utilizes a second-order approximation of the problem to find the control. It is
fast enough to allow real-time control and has been shown to work well for
trajectory optimization in robotic systems. Here we extend classic DDP to
systems with multiple time-delays in the state. Being able to find optimal
trajectories for time-delayed systems with DDP opens up the possibility to use
richer models for system identification and control, including recurrent neural
networks with multiple timesteps in the state. We demonstrate the algorithm on
a two-tank continuous stirred tank reactor. We also demonstrate the algorithm
on a recurrent neural network trained to model an inverted pendulum with
position information only.
Comment: 7 pages, 6 figures, conference, Decision and Control (CDC), 2016 IEEE 55th Conference o
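One standard way to handle delays in the state, and a plausible reading of the extension described in this abstract, is to stack delayed copies of the state into an augmented vector so that a delay-free method such as DDP applies unchanged. The sketch below shows the augmentation for a linear system; it is an illustration of the general idea, and the paper's actual derivation for nonlinear systems may differ.

```python
import numpy as np

def augment_delayed(A, A_d, B, d):
    """Build an augmented, delay-free system for
        x[t+1] = A x[t] + A_d x[t-d] + B u[t]
    by stacking z[t] = [x[t]; x[t-1]; ...; x[t-d]].
    """
    n, m = A.shape[0], B.shape[1]
    N = n * (d + 1)
    A_aug = np.zeros((N, N))
    A_aug[:n, :n] = A           # dynamics of the current state
    A_aug[:n, n * d:] = A_d     # contribution of the d-step-delayed state
    # Shift register: each stored copy moves one slot down per step.
    A_aug[n:, :-n] = np.eye(n * d)
    B_aug = np.zeros((N, m))
    B_aug[:n, :] = B
    return A_aug, B_aug
```

The price of this trick is that the augmented state grows linearly with the delay, which is exactly why a method that exploits the delay structure directly, as the paper proposes, can be preferable for models like recurrent networks with many timesteps in the state.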
Learning Representations in Model-Free Hierarchical Reinforcement Learning
Common approaches to Reinforcement Learning (RL) are seriously challenged by
large-scale applications involving huge state spaces and sparse delayed reward
feedback. Hierarchical Reinforcement Learning (HRL) methods attempt to address
this scalability issue by learning action selection policies at multiple levels
of temporal abstraction. Abstraction can be achieved by identifying a relatively
small set of states that are likely to be useful as subgoals, in concert with
the learning of corresponding skill policies to achieve those subgoals. Many
approaches to subgoal discovery in HRL depend on the analysis of a model of the
environment, but the need to learn such a model introduces its own problems of
scale. Once subgoals are identified, skills may be learned through intrinsic
motivation, introducing an internal reward signal marking subgoal attainment.
In this paper, we present a novel model-free method for subgoal discovery using
incremental unsupervised learning over a small memory of the most recent
experiences (trajectories) of the agent. When combined with an intrinsic
motivation learning mechanism, this method learns both subgoals and skills,
based on experiences in the environment. Thus, we offer an original approach to
HRL that does not require the acquisition of a model of the environment,
suitable for large-scale applications. We demonstrate the efficiency of our
method on two RL problems with sparse delayed feedback: a variant of the rooms
environment and the first screen of the ATARI 2600 Montezuma's Revenge game.
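The core recipe in this abstract, incremental unsupervised learning over a small memory of recent experiences plus an intrinsic reward for reaching a discovered subgoal, can be sketched with online k-means over visited states. The class below is a simplified illustration under assumed names and an assumed distance-threshold reward rule, not the paper's exact formulation.

```python
import math

class SubgoalDiscovery:
    """Incremental k-means over recently visited states; the learned
    centroids serve as candidate subgoals.
    """
    def __init__(self, k, lr=0.05, radius=0.5):
        self.k, self.lr, self.radius = k, lr, radius
        self.centroids = []

    def observe(self, state):
        # Seed centroids from the first k observations.
        if len(self.centroids) < self.k:
            self.centroids.append(list(state))
            return
        # Incremental update: move the nearest centroid a small
        # step toward the newly visited state.
        i = min(range(self.k),
                key=lambda j: self._dist(self.centroids[j], state))
        c = self.centroids[i]
        for d in range(len(c)):
            c[d] += self.lr * (state[d] - c[d])

    def intrinsic_reward(self, state):
        # Intrinsic motivation: +1 when the agent comes within
        # `radius` of a candidate subgoal, else 0.
        return 1.0 if any(self._dist(c, state) < self.radius
                          for c in self.centroids) else 0.0

    @staticmethod
    def _dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

Because the centroids are estimated only from the agent's own recent trajectories, no model of the environment is ever built, which is the model-free property the paper emphasizes; the intrinsic reward can then drive a separate skill policy toward each discovered subgoal.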