Implicit stochastic models, where the data-generating distribution is
intractable but sampling is possible, are ubiquitous in the natural sciences.
The models typically have free parameters that need to be inferred from data
collected in scientific experiments. A fundamental question is how to design
the experiments so that the collected data are most useful. The field of
Bayesian experimental design advocates that, ideally, we should choose designs
that maximise the mutual information (MI) between the data and the parameters.
For implicit models, however, this approach is severely hampered by the high
computational cost of computing posteriors and maximising MI, in particular
when we have more than a handful of design variables to optimise. In this
paper, we propose a new approach to Bayesian experimental design for implicit
models that leverages recent advances in neural MI estimation to deal with
these issues. We show that training a neural network to maximise a lower bound
on MI allows us to jointly determine the optimal design and the posterior.
Simulation studies illustrate that this gracefully extends Bayesian
experimental design for implicit models to higher design dimensions.

Comment: Accepted at the thirty-seventh International Conference on Machine Learning (ICML) 2020. Camera-ready version.
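The kind of MI lower bound the abstract refers to can be illustrated with a minimal Donsker-Varadhan (MINE-style) estimate on a toy correlated Gaussian pair. This is only a sketch: a one-parameter critic tuned by grid search stands in for the neural network the paper trains, and the design-optimisation loop is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.8, 100_000

# Correlated Gaussian pair; the true MI is known in closed form:
# MI(X, Y) = -0.5 * log(1 - rho^2)
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
true_mi = -0.5 * np.log(1.0 - rho**2)

def dv_bound(t_joint, t_marginal):
    """Donsker-Varadhan lower bound: E_p[T] - log E_{p_x p_y}[exp(T)]."""
    return t_joint.mean() - np.log(np.mean(np.exp(t_marginal)))

# Tiny critic family T(x, y) = a * x * y; grid search over `a` replaces
# gradient-based training of a neural critic (illustration only).
y_shuffled = rng.permutation(y)  # approximate samples from the product of marginals
best = max(dv_bound(a * x * y, a * x * y_shuffled)
           for a in np.linspace(0.05, 0.95, 19))

print(f"true MI = {true_mi:.3f} nats, DV lower bound = {best:.3f} nats")
```

Because the critic family is deliberately restrictive, the bound comes out clearly positive but below the true MI; a flexible neural critic tightens it, which is what makes joint design optimisation and posterior estimation feasible in the paper's setting.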