
    Bayesian nonparametric methods for reinforcement learning in partially observable domains

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 149-163).

    Making intelligent decisions from incomplete information is critical in many applications: for example, medical decisions must often be made based on a few vital signs, without full knowledge of a patient's condition, and speech-based interfaces must infer a user's needs from noisy microphone inputs. What makes these tasks hard is that we do not even have a natural representation with which to model the task; we must learn the task's properties while simultaneously performing the task. Learning a representation for a task also involves a trade-off between modeling previously seen data and making accurate predictions about new data streams. In this thesis, we explore one approach to learning representations of stochastic systems using Bayesian nonparametric statistics. Bayesian nonparametric methods allow the sophistication of a representation to scale gracefully with the complexity of the data. We show how representations learned using Bayesian nonparametric methods yield better performance and interesting learned structure in three contexts related to reinforcement learning in partially observable domains: learning partially observable Markov decision processes (POMDPs), taking advantage of expert demonstrations, and learning complex hidden structure such as dynamic Bayesian networks. In each of these contexts, Bayesian nonparametric approaches provide advantages in prediction quality and often in computation time.

    by Finale Doshi-Velez. Ph.D.
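    The abstract's central idea, that a Bayesian nonparametric prior lets representation complexity grow gracefully with the data, can be illustrated with a Chinese restaurant process (CRP), a standard construction underlying Dirichlet process models. The sketch below is a minimal illustration of that scaling behavior, not the thesis's actual algorithms; the function name and concentration parameter `alpha` are choices made here for the example.

    ```python
    import random

    def crp_partition(n, alpha, seed=0):
        """Sample a partition of n items from a Chinese restaurant process.

        Item i joins an existing cluster k with probability proportional
        to the cluster's current size, or starts a new cluster with
        probability proportional to alpha. The expected number of
        clusters grows only logarithmically in n, so the learned
        representation expands gracefully as more data arrives.
        """
        rng = random.Random(seed)
        counts = []        # counts[k] = number of items in cluster k
        assignments = []   # assignments[i] = cluster index of item i
        for i in range(n):
            r = rng.uniform(0.0, i + alpha)
            acc = 0.0
            for k, c in enumerate(counts):
                acc += c
                if r < acc:          # join existing cluster k
                    counts[k] += 1
                    assignments.append(k)
                    break
            else:                    # open a new cluster
                counts.append(1)
                assignments.append(len(counts) - 1)
        return assignments, counts

    assignments, counts = crp_partition(n=100, alpha=1.0)
    ```

    In an infinite-POMDP setting, the clusters play the role of latent states: rather than fixing the number of states in advance, the prior lets the model introduce new states only when the observed data demand them.
    
    
    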