How can an intelligent agent learn an effective representation of its world? This dissertation applies the psychological principle of cognitive economy to the problem of representation in reinforcement learning. Psychologists have shown that humans cope with difficult tasks by simplifying the task domain, focusing on relevant features and generalizing over states of the world that are “the same” with respect to the task. By applying these principles of cognitive economy to the agent's need to choose correct actions in its task, this dissertation defines a principled set of requirements for representations in reinforcement learning.

The dissertation formalizes the principle of cognitive economy into algorithmic criteria for feature extraction in reinforcement learning. To do this, it develops mathematical definitions of feature importance, sound decisions, state compatibility, and necessary distinctions, in terms of the rewards expected by the agent in the task. The analysis shows how the representation determines the apparent values of the agent's actions, and proves that the state compatibility criteria presented here result in representations that satisfy a criterion for task learnability.

The dissertation reports on experiments that illustrate one implementation of these ideas in a system which constructs its representation as it goes about learning the task. Results with the puck-on-a-hill task and the pole-balancing task show that the ideas are sound and can be of practical benefit. The principal contributions of this dissertation are a new framework for thinking about feature extraction in terms of cognitive economy, and a demonstration of the effectiveness of an algorithm based on this new framework.
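To make the state-compatibility idea concrete, the following is a minimal sketch, not the dissertation's algorithm: it treats two states as compatible when no action's expected return distinguishes them by more than a tolerance, and greedily merges compatible states into shared representations. The state names, Q-values, and tolerance `eps` are invented for illustration.

```python
def compatible(q_a, q_b, eps=0.1):
    """Two states are compatible if every action's expected
    return agrees to within eps (a hypothetical criterion)."""
    return all(abs(q_a[act] - q_b[act]) <= eps for act in q_a)

def aggregate(states, eps=0.1):
    """Greedily merge states into clusters of mutually
    compatible states, each cluster sharing one representation."""
    clusters = []
    for name, q in states.items():
        for cluster in clusters:
            if all(compatible(q, states[m], eps) for m in cluster):
                cluster.append(name)
                break
        else:
            clusters.append([name])  # a necessary distinction
    return clusters

# Invented example: s1 and s2 need no distinction; s3 does.
states = {
    "s1": {"left": 1.0, "right": 0.0},
    "s2": {"left": 0.95, "right": 0.05},
    "s3": {"left": 0.0, "right": 1.0},
}
print(aggregate(states))  # → [['s1', 's2'], ['s3']]
```

The sketch captures only the flavor of the criterion: a real agent would estimate these action values while learning, and merge or split states as its estimates change.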