    An introduction to predictive processing models of perception and decision-making

    The predictive processing framework includes a broad set of ideas, which might be articulated and developed in a variety of ways, concerning how the brain may leverage predictive models when implementing perception, cognition, decision-making, and motor control. This article provides an up-to-date introduction to the two most influential theories within this framework: predictive coding and active inference. The first half of the paper (Sections 2–5) reviews the evolution of predictive coding, from early ideas about efficient coding in the visual system to a more general model encompassing perception, cognition, and motor control. The theory is characterized in terms of the claims it makes at Marr's computational, algorithmic, and implementation levels of description, and the conceptual and mathematical connections between predictive coding, Bayesian inference, and variational free energy (a quantity jointly evaluating model accuracy and complexity) are explored. The second half of the paper (Sections 6–8) turns to recent theories of active inference. Like predictive coding, active inference models assume that perceptual and learning processes minimize variational free energy as a means of approximating Bayesian inference in a biologically plausible manner. However, these models focus primarily on planning and decision-making processes that predictive coding models were not developed to address. Under active inference, an agent evaluates potential plans (action sequences) based on their expected free energy (a quantity that combines anticipated reward and information gain). The agent is assumed to represent the world as a partially observable Markov decision process with discrete time and discrete states. Current research applications of active inference models are described, including a range of simulation work, as well as studies fitting models to empirical data. The paper concludes by considering future research directions that will be important for further development of both models.
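    For orientation, the two quantities mentioned in the abstract are commonly written in the following generic forms in the active inference literature (the notation below is a standard sketch, not reproduced from this article): variational free energy trades off complexity against accuracy, and expected free energy of a plan combines expected information gain with expected reward, encoded as a prior preference over observations.

```latex
% Variational free energy: complexity minus accuracy (generic form)
F = \underbrace{D_{\mathrm{KL}}\big[\,q(s)\,\|\,p(s)\,\big]}_{\text{complexity}}
  \;-\; \underbrace{\mathbb{E}_{q(s)}\big[\ln p(o \mid s)\big]}_{\text{accuracy}}

% Expected free energy of a plan \pi: (negative) expected information gain
% plus (negative) expected reward, i.e. prior preference over outcomes
G(\pi) = -\,\underbrace{\mathbb{E}_{q(o,s \mid \pi)}\big[\ln q(s \mid o,\pi) - \ln q(s \mid \pi)\big]}_{\text{information gain}}
       \;-\; \underbrace{\mathbb{E}_{q(o \mid \pi)}\big[\ln p(o)\big]}_{\text{expected reward}}
```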

    Coordinated Multi-Agent Imitation Learning

    We study the problem of imitation learning from demonstrations of multiple coordinating agents. One key challenge in this setting is that learning a good model of coordination can be difficult, since coordination is often implicit in the demonstrations and must be inferred as a latent variable. We propose a joint approach that simultaneously learns a latent coordination model along with the individual policies. In particular, our method integrates unsupervised structure learning with conventional imitation learning. We illustrate the power of our approach on a difficult problem of learning multiple policies for fine-grained behavior modeling in team sports, where different players occupy different roles in the coordinated team strategy. We show that using a coordination model to infer player roles yields substantially lower imitation loss than conventional baselines.
    Comment: International Conference on Machine Learning 2017.
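    A minimal sketch of the alternating structure this kind of approach suggests, under loose assumptions: per-agent state/action trajectories as NumPy arrays, simple ridge-regression policies, and clustering as a crude stand-in for the paper's latent structure learning. Function and variable names are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def coordinated_imitation(states, actions, n_roles, n_iters=10):
    """Alternate between (a) fitting one imitation policy per latent role and
    (b) reassigning each agent trajectory to the role whose policy imitates it
    best. states[i]/actions[i] are (T, d_s)/(T, d_a) arrays for agent i.
    Illustrative stand-in for jointly learning a coordination model and
    individual policies, not the paper's actual algorithm."""
    feats = np.stack([s.mean(axis=0) for s in states])      # crude per-trajectory features
    roles = KMeans(n_clusters=n_roles, n_init=10).fit_predict(feats)
    policies = [Ridge() for _ in range(n_roles)]
    fitted = [False] * n_roles
    for _ in range(n_iters):
        for r in range(n_roles):                             # (a) fit a policy per role
            idx = [i for i in range(len(states)) if roles[i] == r]
            fitted[r] = bool(idx)
            if idx:
                policies[r].fit(np.concatenate([states[i] for i in idx]),
                                np.concatenate([actions[i] for i in idx]))
        for i, (s, a) in enumerate(zip(states, actions)):    # (b) reassign latent roles
            errs = [np.mean((p.predict(s) - a) ** 2) if fitted[r] else np.inf
                    for r, p in enumerate(policies)]
            roles[i] = int(np.argmin(errs))
    return roles, policies
```

    In the team-sports setting from the abstract, states[i] might be a player's tracked positions and actions[i] their next movements; the inferred role determines which per-role policy is used to imitate that player.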

    Approximate Decentralized Bayesian Inference

    This paper presents an approximate method for performing Bayesian inference in models with conditional independence over a decentralized network of learning agents. The method first employs variational inference on each individual learning agent to generate a local approximate posterior; the agents then transmit their local posteriors to the other agents in the network; finally, each agent combines its set of received local posteriors. The key insight in this work is that, for many Bayesian models, approximate inference schemes destroy symmetry and dependencies in the model that are crucial to the correct application of Bayes' rule when combining the local posteriors. The proposed method addresses this issue by including an additional optimization step in the combination procedure that accounts for these broken dependencies. Experiments on synthetic and real data demonstrate that the decentralized method provides advantages in computational performance and predictive test likelihood over previous batch and distributed methods.
    Comment: This paper was presented at UAI 2014. Please use the following BibTeX citation: @inproceedings{Campbell14_UAI, Author = {Trevor Campbell and Jonathan P. How}, Title = {Approximate Decentralized Bayesian Inference}, Booktitle = {Uncertainty in Artificial Intelligence (UAI)}, Year = {2014}}
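    To make the combination step concrete in the simplest case, here is a hedged sketch for Gaussian local posteriors on a shared scalar parameter: each agent's local posterior already contains one copy of the prior, so a naive product over-counts it, and the extra (K - 1) copies are divided out in natural (precision) parameters. The paper's additional optimization step for models whose symmetries are broken by approximate inference (e.g., mixtures) is not shown, and the names below are illustrative.

```python
def combine_gaussian_posteriors(local_means, local_vars, prior_mean, prior_var):
    """Combine K local approximate posteriors q_k = N(m_k, v_k), each computed
    from the shared prior N(m0, v0) and a disjoint data shard, into a single
    Gaussian approximation of the full posterior. In natural parameters the
    combined precision is the sum of local precisions minus (K - 1) copies of
    the prior precision, and likewise for the precision-weighted mean."""
    K = len(local_means)
    prec = sum(1.0 / v for v in local_vars) - (K - 1) / prior_var
    eta = sum(m / v for m, v in zip(local_means, local_vars)) - (K - 1) * prior_mean / prior_var
    return eta / prec, 1.0 / prec  # combined posterior mean and variance

# Example: two agents, each with a local posterior from its own shard of the data.
mean, var = combine_gaussian_posteriors([1.8, 2.1], [0.02, 0.03],
                                        prior_mean=0.0, prior_var=10.0)
print(mean, var)
```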