The role of reward signal in deep reinforcement learning

Abstract

The goal of this thesis is to study the role of the reward signal in deep reinforcement learning. The reward signal is a scalar quantity received by the agent, and it has a significant impact on both the training process of a reinforcement learning algorithm and the behaviour that results. Firstly, we study the behaviour of agents that learn with different reward signals in the same environment, using the same learning algorithm. We introduce and measure agents’ happiness, defined by relating the actual reward an agent obtains from the environment to the maximum and minimum rewards attainable in the given setting. The experiments show that reward signals designed to produce a given behaviour during training do not necessarily produce the same behaviour once agents interact with each other. Secondly, we use these observations to investigate the role of the reward signal further: we explore the space of possible reward signals in a given environment through an evolutionary algorithm. Through experiments, we demonstrate that complex behaviours of winning, losing, and cooperating can be learned through reward signal evolution. Some of the solutions found by the algorithm are surprising, in the sense that a person trying to hand-code a given behaviour through a specific reward signal would probably not have chosen them. The results presented in the thesis indicate that the role of the reward signal in reinforcement learning is likely greater than its current coverage in the literature suggests, and is worth investigating in more detail. Not only can such investigation lead to programmes that overfit less, but it can also improve our understanding of what reinforcement learning algorithms are really learning. This, in turn, will give us more robust, explainable, and overall safer systems.
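
A minimal sketch of how such a happiness measure might be computed, assuming it normalises the obtained reward against the attainable extremes; the exact definition used in the thesis may differ:

    def happiness(reward, reward_min, reward_max):
        """Hypothetical normalised happiness: 0 at the worst attainable
        reward, 1 at the best. Assumes reward_max > reward_min."""
        return (reward - reward_min) / (reward_max - reward_min)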
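As a rough illustration of reward signal evolution, the sketch below assumes a candidate reward signal is a vector of per-event reward values and that a hypothetical train_and_evaluate(candidate) function trains an agent with that signal and returns a fitness score for the target behaviour; the thesis's actual representation and algorithm may differ.

    import random

    def evolve_reward_signal(train_and_evaluate, n_events=4,
                             population_size=20, generations=50,
                             mutation_scale=0.1):
        """Hypothetical evolutionary search over reward signals."""
        population = [[random.uniform(-1, 1) for _ in range(n_events)]
                      for _ in range(population_size)]
        for _ in range(generations):
            # Rank candidates by the fitness of the behaviour they induce.
            scored = sorted(population, key=train_and_evaluate, reverse=True)
            parents = scored[:population_size // 2]          # truncation selection
            children = [[w + random.gauss(0, mutation_scale) for w in p]
                        for p in parents]                    # Gaussian mutation
            population = parents + children
        return max(population, key=train_and_evaluate)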
