
Neural network based approximations to posterior densities: a class of flexible sampling methods with applications to reduced rank models

Abstract

Likelihoods and posteriors of econometric models with strong endogeneity and weak instruments may exhibit rather non-elliptical contours in the parameter space. This feature also holds for cointegration models when near non-stationarity occurs and determining the number of cointegrating relations is a nontrivial issue, and in mixture processes where the modes are relatively far apart. The performance of Monte Carlo integration methods like importance sampling or Markov chain Monte Carlo procedures greatly depends in all these cases on the choice of the importance or candidate density. Such a density has to be `close' to the target density in order to yield numerically accurate results with efficient sampling. Neural networks seem to be natural importance or candidate densities, as they have a universal approximation property and are easy to sample from. That is, conditionally upon the specification of the neural network, sampling can be done either directly or using a Gibbs sampling technique, possibly using auxiliary variables. A key step in the proposed class of methods is the construction of a neural network that approximates the target density accurately. The methods are tested on a set of illustrative models which include a mixture of normal distributions, a Bayesian instrumental variable regression problem with weak instruments and near non-identification, a cointegration model with near non-stationarity and a two-regime growth model for US recessions and expansions. These examples involve experiments with non-standard, non-elliptical posterior distributions. The results indicate the feasibility of the neural network approach.

Keywords: Markov chain Monte Carlo; Bayesian inference; neural networks; importance sampling
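The role of a candidate density that is `close' to the target can be illustrated with a minimal importance-sampling sketch. This is not the paper's neural network construction: the bimodal target and the two-component normal-mixture candidate below are hypothetical stand-ins, chosen only to show how importance weights (target over candidate) deliver posterior moments and how a well-matched candidate keeps the effective sample size high.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unnormalized bimodal target, standing in for a
# non-elliptical posterior with modes relatively far apart.
def target(x):
    return np.exp(-0.5 * (x + 2.0) ** 2) + np.exp(-0.5 * (x - 2.0) ** 2)

# Candidate: two-component normal mixture (a crude stand-in for a
# fitted mixture/neural-network approximation to the target).
means = np.array([-2.0, 2.0])
sds = np.array([1.2, 1.2])
weights = np.array([0.5, 0.5])

def candidate_pdf(x):
    comps = np.exp(-0.5 * ((x[:, None] - means) / sds) ** 2) \
            / (sds * np.sqrt(2.0 * np.pi))
    return comps @ weights

# Direct sampling from the mixture: pick a component, then draw a normal.
n = 100_000
comp = rng.choice(2, size=n, p=weights)
draws = rng.normal(means[comp], sds[comp])

# Self-normalized importance weights: target / candidate.
w = target(draws) / candidate_pdf(draws)
w /= w.sum()

posterior_mean = np.sum(w * draws)   # near 0 by symmetry of the target
ess = 1.0 / np.sum(w ** 2)           # effective sample size out of n
```

With a candidate that tracks both modes, the weights stay nearly uniform and the effective sample size remains close to n; a single-mode (elliptical) candidate on the same target would concentrate the weight on a few draws, which is the numerical-accuracy problem the paper's approximation step is designed to avoid.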
