Few-Shot Bayesian Imitation Learning with Logical Program Policies
Humans can learn many novel tasks from a very small number (1--5) of
demonstrations, in stark contrast to the data requirements of nearly tabula
rasa deep learning methods. We propose an expressive class of policies, a
strong but general prior, and a learning algorithm that, together, can learn
interesting policies from very few examples. We represent policies as logical
combinations of programs drawn from a domain-specific language (DSL), define a
prior over policies with a probabilistic grammar, and derive an approximate
Bayesian inference algorithm to learn policies from demonstrations. In
experiments, we study five strategy games played on a 2D grid with one shared
DSL. After a few demonstrations of each game, the inferred policies generalize
to new game instances that differ substantially from the demonstrations. Our
policy learning is 20--1,000x more data efficient than convolutional and fully
convolutional policy learning and many orders of magnitude more computationally
efficient than vanilla program induction. We argue that the proposed method is
an apt choice for tasks that have scarce training data and feature significant,
structured variation between task instances.
Comment: AAAI 2020
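To make the abstract's prior-times-likelihood recipe concrete, below is a minimal Python sketch assuming a toy DSL of three hand-picked state predicates, a description-length prior, a noisy-demonstration likelihood, and exhaustive MAP search. Everything named here (PREDICATES, fires, eps) is an illustrative assumption; the paper's actual DSL, probabilistic-grammar prior, and approximate inference algorithm are richer than this.

```python
import math
from itertools import combinations, product

# Toy "DSL": named boolean predicates over a grid-world state. These
# three predicates are illustrative assumptions, not the paper's DSL.
PREDICATES = {
    "wall_ahead": lambda s: s["wall_ahead"],
    "goal_left":  lambda s: s["goal_dx"] < 0,
    "goal_above": lambda s: s["goal_dy"] < 0,
}

def enumerate_policies(max_conjuncts=2):
    """Yield all conjunctions of up to max_conjuncts (predicate, sign) literals."""
    names = list(PREDICATES)
    for k in range(1, max_conjuncts + 1):
        for combo in combinations(names, k):
            for signs in product([True, False], repeat=k):
                yield list(zip(combo, signs))

def fires(policy, state):
    """A policy fires when every (possibly negated) literal holds."""
    return all(PREDICATES[name](state) == sign for name, sign in policy)

def log_prior(policy):
    # Description-length prior standing in for a probabilistic grammar:
    # longer programs are exponentially less likely a priori.
    return -2.0 * len(policy)

def log_likelihood(policy, demos, eps=0.05):
    # demos: (state, took_action) pairs; eps tolerates noisy demonstrations.
    ll = 0.0
    for state, took_action in demos:
        agrees = fires(policy, state) == took_action
        ll += math.log(1.0 - eps if agrees else eps)
    return ll

def map_policy(demos):
    """Exhaustive MAP search over the toy space: prior + likelihood."""
    return max(enumerate_policies(),
               key=lambda p: log_prior(p) + log_likelihood(p, demos))

# Two demonstrations of a single action ("move left") already suffice
# to prefer the one-literal rule goal_left=True over its rivals.
demos = [({"wall_ahead": False, "goal_dx": -2, "goal_dy": 0}, True),
         ({"wall_ahead": False, "goal_dx": 3, "goal_dy": 0}, False)]
print(map_policy(demos))  # -> [('goal_left', True)]
```

Exhaustive enumeration is feasible only because this toy DSL has a handful of literals; the point of the paper's approximate Bayesian inference is precisely to avoid brute-force search over a realistic program space.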
Analysing symbolic music with probabilistic grammars
Recent developments in computational linguistics offer ways to approach the analysis of musical structure by inducing probabilistic models (in the form of grammars) over a corpus of music. Such grammars can produce idiomatic sentences from a probabilistic model of the musical language and thus offer explanations of the musical structures they model. This chapter surveys historical and current work in musical analysis using grammars, based on computational linguistic approaches. We outline the theory of probabilistic grammars and illustrate their implementation in Prolog using PRISM. We summarize our experiments on learning the probabilities for simple grammars from pitch sequences in two kinds of symbolic musical corpora. The results support our claim that probabilistic grammars are a promising framework for computational music analysis, but also indicate that further work is required to establish their superiority over Markov models.
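As a concrete illustration of the parameter-learning step, here is a minimal Python sketch of relative-frequency (maximum-likelihood) estimation of grammar rule probabilities, assuming fully observed derivations. The toy Phrase/Step/Leap grammar and data are invented stand-ins, not the chapter's PRISM/Prolog model.

```python
from collections import Counter

def estimate_rule_probs(derivations):
    """Relative-frequency estimate: P(lhs -> rhs) = count(lhs -> rhs) / count(lhs)."""
    counts, lhs_totals = Counter(), Counter()
    for deriv in derivations:
        for lhs, rhs in deriv:
            counts[(lhs, rhs)] += 1
            lhs_totals[lhs] += 1
    return {rule: c / lhs_totals[rule[0]] for rule, c in counts.items()}

# Toy derivations over pitch motion: a Phrase rewrites into steps and leaps.
demo_derivations = [
    [("Phrase", ("Step", "Phrase")), ("Phrase", ("Step",))],
    [("Phrase", ("Leap", "Phrase")), ("Phrase", ("Step",))],
    [("Phrase", ("Step",))],
]

for rule, p in sorted(estimate_rule_probs(demo_derivations).items()):
    print(rule, round(p, 3))
# ('Phrase', ('Leap', 'Phrase')) 0.2
# ('Phrase', ('Step',)) 0.6
# ('Phrase', ('Step', 'Phrase')) 0.2
```

When derivations are latent and only pitch sequences are observed, relative-frequency counting gives way to EM-style estimation over possible parses, which is the kind of parameter learning that systems such as PRISM provide.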
Natural Language Syntax Complies with the Free-Energy Principle
Natural language syntax yields an unbounded array of hierarchically
structured expressions. We claim that these are used in the service of active
inference in accord with the free-energy principle (FEP). While conceptual
advances alongside modelling and simulation work have attempted to connect
speech segmentation and linguistic communication with the FEP, we extend this
program to the underlying computations responsible for generating syntactic
objects. We argue that recently proposed principles of economy in language
design - such as "minimal search" criteria from theoretical syntax - adhere to
the FEP. This affords a greater degree of explanatory power to the FEP - with
respect to higher language functions - and offers linguistics a grounding in
first principles with respect to computability. We show how both tree-geometric
depth and a Kolmogorov complexity estimate (recruiting a Lempel-Ziv compression
algorithm) can be used to accurately predict legal operations on syntactic
workspaces, directly in line with formulations of variational free energy
minimization. This is used to motivate a general principle of language design
that we term Turing-Chomsky Compression (TCC). We use TCC to align concerns of
linguists with the normative account of self-organization furnished by the FEP,
by marshalling evidence from theoretical linguistics and psycholinguistics to
ground core principles of efficient syntactic computation within active
inference.
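As an illustration of the compression idea, the following minimal Python sketch ranks candidate syntactic objects by compressed description length, using zlib (a DEFLATE codec built on LZ77, hence a Lempel-Ziv-family compressor) as a crude Kolmogorov-complexity proxy. The bracketed trees and the "prefer the smaller description" rule are illustrative assumptions, not the paper's exact TCC procedure.

```python
import zlib

def lz_complexity(tree: str) -> int:
    """Compressed size in bytes of a bracketed tree serialization,
    used as a crude proxy for Kolmogorov complexity."""
    return len(zlib.compress(tree.encode("utf-8")))

def preferred_operation(candidates):
    """Among candidate workspace states, prefer the one whose description
    compresses best, i.e. the minimal-complexity outcome."""
    return min(candidates, key=lz_complexity)

# Two hypothetical outcomes of one structure-building step over the same
# words: a compact analysis versus a needlessly deep, redundant one.
compact = "[VP [V read] [NP [D the] [N book]]]"
redundant = "[XP [X ] [XP [V read] [XP [X ] [XP [D the] [XP [N book]]]]]]"

best = preferred_operation([compact, redundant])
print(best, lz_complexity(best))
```

The design choice mirrors the abstract's claim: treating structure building as minimization of description length is one concrete way to read efficient syntactic computation as variational free-energy minimization.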