9 research outputs found
Multiagent Deep Reinforcement Learning: Challenges and Directions Towards Human-Like Approaches
This paper surveys the field of multiagent deep reinforcement learning. The
combination of deep neural networks with reinforcement learning has gained
increased traction in recent years and is slowly shifting the focus from
single-agent to multiagent environments. Dealing with multiple agents is
inherently more complex as (a) the future rewards depend on the joint actions
of multiple players and (b) the computational complexity of functions
increases. We present the most common multiagent problem representations and
their main challenges, and identify five research areas that address one or
more of these challenges: centralised training and decentralised execution,
opponent modelling, communication, efficient coordination, and reward shaping.
We find that many computational studies rely on unrealistic assumptions or are
not generalisable to other settings; they struggle to overcome the curse of
dimensionality or nonstationarity. Approaches from psychology and sociology
capture promising relevant behaviours such as communication and coordination.
We suggest that, for multiagent reinforcement learning to be successful, future
research should address these challenges with an interdisciplinary approach to
open up new possibilities for more human-oriented solutions in multiagent
reinforcement learning.
Comment: 37 pages, 6 figures
Policy Gradient Methods for the Noisy Linear Quadratic Regulator over a Finite Horizon
We explore reinforcement learning methods for finding the optimal policy in
the linear quadratic regulator (LQR) problem. In particular, we consider the
convergence of policy gradient methods in the setting of known and unknown
parameters. We are able to produce a global linear convergence guarantee for
this approach in the setting of finite time horizon and stochastic state
dynamics under weak assumptions. The convergence of a projected policy gradient
method is also established in order to handle problems with constraints. We
illustrate the performance of the algorithm with two examples. The first
example is the optimal liquidation of a holding in an asset. We show results
for the case where we assume a model for the underlying dynamics and where we
apply the method to the data directly. The empirical evidence suggests that the
policy gradient method can learn the global optimal solution for a larger class
of stochastic systems containing the LQR framework and that it is more robust
with respect to model mis-specification when compared to a model-based
approach. The second example is an LQR system in a higher dimensional setting
with synthetic data.
Comment: 49 pages, 9 figures
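As a rough, self-contained illustration of the approach this abstract describes (a minimal sketch with made-up dynamics, costs, and hyperparameters, not the paper's algorithm), a two-point zeroth-order policy gradient can learn the feedback gain of a one-dimensional noisy LQR over a finite horizon:

```python
import numpy as np

# Hypothetical example (not the paper's method or parameters):
#   dynamics: x_{t+1} = a*x_t + b*u_t + w_t,  w_t ~ N(0, sigma^2)
#   cost:     sum_t (q*x_t^2 + r*u_t^2),  linear policy u_t = -k*x_t
rng = np.random.default_rng(0)
a, b, q, r, sigma, T = 0.9, 0.5, 1.0, 0.1, 0.1, 20

def rollout_cost(k, n_rollouts=200):
    """Monte Carlo estimate of the expected finite-horizon cost for gain k."""
    x = np.ones(n_rollouts)              # fixed initial state x0 = 1
    cost = np.zeros(n_rollouts)
    for _ in range(T):
        u = -k * x
        cost += q * x**2 + r * u**2
        x = a * x + b * u + sigma * rng.standard_normal(n_rollouts)
    return cost.mean()

def policy_gradient(k, steps=60, lr=0.05, delta=0.05):
    """Two-point zeroth-order gradient descent on the policy gain k."""
    for _ in range(steps):
        grad = (rollout_cost(k + delta) - rollout_cost(k - delta)) / (2 * delta)
        k -= lr * grad
    return k

k_learned = policy_gradient(0.0)
print(f"cost at k=0: {rollout_cost(0.0):.2f}, "
      f"cost at learned k: {rollout_cost(k_learned):.2f}")
```

The gradient is estimated purely from sampled costs (model-free), mirroring the setting with unknown parameters; with known parameters one would instead solve the finite-horizon Riccati recursion directly.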
Stock Market Prediction via Deep Learning Techniques: A Survey
Stock market prediction is a long-standing yet complex problem studied across
diverse research areas and application domains because of its non-linear,
highly volatile nature. Existing surveys on stock market prediction often focus
on traditional machine learning methods rather than deep learning methods, yet
deep learning has come to dominate many domains and has gained considerable
success and popularity in stock market prediction in recent years. This
motivates us to provide a structured and comprehensive overview of the research
on stock market prediction focusing on deep learning techniques. We present
four elaborated subtasks of stock market prediction and propose a novel
taxonomy to summarize the state-of-the-art models based on deep neural networks
from 2011 to 2022. In addition, we also provide detailed statistics on the
datasets and evaluation metrics commonly used in the stock market. Finally, we
highlight some open issues and point out several future directions by sharing
some new perspectives on stock market prediction.
Deep multiagent reinforcement learning: challenges and directions
This paper surveys the field of deep multiagent reinforcement learning (RL). The combination of deep neural networks with RL has gained increased traction in recent years and is slowly shifting the focus from single-agent to multiagent environments. Dealing with multiple agents is inherently more complex as (a) the future rewards depend on multiple players' joint actions and (b) the computational complexity increases. We present the most common multiagent problem representations and their main challenges, and identify five research areas that address one or more of these challenges: centralised training and decentralised execution, opponent modelling, communication, efficient coordination, and reward shaping. We find that many computational studies rely on unrealistic assumptions or are not generalisable to other settings; they struggle to overcome the curse of dimensionality or nonstationarity. Approaches from psychology and sociology capture promising relevant behaviours, such as communication and coordination, to help agents achieve better performance in multiagent settings. We suggest that, for multiagent RL to be successful, future research should address these challenges with an interdisciplinary approach to open up new possibilities in multiagent RL.