
    Functional approximations to posterior densities: a neural network approach to efficient sampling

    The performance of Monte Carlo integration methods like importance sampling or Markov chain Monte Carlo procedures greatly depends on the choice of the importance or candidate density. Usually, such a density has to be "close" to the target density in order to yield numerically accurate results with efficient sampling. Neural networks seem to be natural importance or candidate densities, as they have a universal approximation property and are easy to sample from. That is, conditional upon the specification of the neural network, sampling can be done either directly or using a Gibbs sampling technique, possibly with auxiliary variables. A key step in the proposed class of methods is the construction of a neural network that approximates the target density accurately. The methods are tested on a set of illustrative models which include a mixture of normal distributions, a Bayesian instrumental variable regression problem with weak instruments and near-identification, and a two-regime growth model for US recessions and expansions. These examples involve experiments with non-standard, non-elliptical posterior distributions. The results indicate the feasibility of the neural network approach.
    Keywords: Markov chain Monte Carlo; Bayesian inference; importance sampling; neural networks
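    Below is a minimal, self-contained sketch of the importance-sampling step this abstract describes, with a two-component normal mixture standing in for the fitted neural network (both are easy to evaluate pointwise and to sample from); the target density, mixture parameters, and all names are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unnormalized, bimodal target density, standing in for a
# non-elliptical posterior like those in the paper's test models.
def target(x):
    return 0.6 * np.exp(-0.5 * (x + 2.0) ** 2) + \
           0.4 * np.exp(-0.5 * (x - 3.0) ** 2 / 0.25)

# Candidate density: a two-component normal mixture. The paper fits a neural
# network for this role; a mixture is used here only because it is likewise
# easy to evaluate and to sample from.
means = np.array([-2.0, 3.0])
sds = np.array([1.2, 0.7])
probs = np.array([0.6, 0.4])

def candidate_pdf(x):
    comps = np.exp(-0.5 * ((x[:, None] - means) / sds) ** 2) / (sds * np.sqrt(2 * np.pi))
    return comps @ probs

def candidate_sample(n):
    k = rng.choice(2, size=n, p=probs)
    return rng.normal(means[k], sds[k])

# Importance sampling: draw from the candidate, weight by target/candidate.
n = 100_000
x = candidate_sample(n)
w = target(x) / candidate_pdf(x)
post_mean = np.sum(w * x) / np.sum(w)   # self-normalized estimate of E[x]
ess = np.sum(w) ** 2 / np.sum(w ** 2)   # effective sample size
print(f"posterior mean ~ {post_mean:.3f}, ESS = {ess:.0f} of {n}")
```

    The effective sample size reflects how "close" the candidate is to the target: low-variance weights give an ESS near n, while a poor candidate collapses it.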

    Functional Approximations to Likelihoods/Posterior Densities: A Neural Network Approach to Efficient Sampling

    The performance of Monte Carlo integration methods like importance sampling or Markov chain Monte Carlo procedures depends greatly on the choice of the importance or candidate density. Such a density must typically be "close" to the target density to yield numerically accurate results with efficient sampling. Neural networks are natural importance or candidate densities since they have a universal approximation property and are easy to sample from. That is, conditional upon the specified neural network, sampling can be done either directly or using a Gibbs sampling technique, possibly with auxiliary variables. We propose such a class of methods, a key step for which is the construction of a neural network that approximates the target density accurately. The methods are tested on a set of illustrative models that includes a mixture of normal distributions, a Bayesian instrumental-variable regression problem with weak instruments and near-identification, and a two-regime growth model for US recessions and expansions. These examples involve experiments with non-standard, non-elliptical posterior distributions. The results indicate the feasibility of the neural network approach.
    Keywords: Markov chain Monte Carlo; importance sampling; neural networks; Bayesian inference
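    The same kind of candidate density can also drive a Markov chain Monte Carlo sampler instead of importance sampling. The sketch below, again with an assumed toy target and a plain wide normal in place of the fitted network, shows the acceptance ratio an independence Metropolis-Hastings chain uses with such a candidate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy target (unnormalized, bimodal) and a wide normal candidate;
# in the paper the candidate is a fitted neural network.
def target(x):
    return 0.6 * np.exp(-0.5 * (x + 2.0) ** 2) + \
           0.4 * np.exp(-0.5 * (x - 3.0) ** 2 / 0.25)

mu, sd = 0.5, 3.0
def cand_pdf(x):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Independence Metropolis-Hastings: the proposal ignores the current state,
# so the acceptance probability is min(1, p(x')q(x) / (p(x)q(x'))).
n_draws = 50_000
chain = np.empty(n_draws)
x = mu
for i in range(n_draws):
    x_new = rng.normal(mu, sd)
    ratio = target(x_new) * cand_pdf(x) / (target(x) * cand_pdf(x_new))
    if rng.uniform() < ratio:
        x = x_new
    chain[i] = x

print("posterior mean ~", chain[5_000:].mean())  # discard burn-in
```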

    MadNIS -- Neural Multi-Channel Importance Sampling

    Theory predictions for the LHC require precise numerical phase-space integration and generation of unweighted events. We combine machine-learned multi-channel weights with a normalizing flow for importance sampling to improve classical methods for numerical integration. We develop an efficient bi-directional setup based on an invertible network, combining online and buffered training for potentially expensive integrands. We illustrate our method for the Drell-Yan process with an additional narrow resonance.
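    As a sketch of the classical multi-channel baseline that MadNIS augments (the paper learns the channel weights and replaces the fixed channel mappings with a normalizing flow), the following example integrates a made-up narrow-resonance integrand with two hand-chosen channels; all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up integrand on [0, 1]: a narrow Breit-Wigner peak over a flat
# background, loosely mimicking a resonance in an LHC phase-space integral.
m, g = 0.7, 0.01   # peak position and width (illustrative)
def f(x):
    return 1.0 + 0.05 / ((x - m) ** 2 + g ** 2)

# Channel 1: uniform density on [0, 1].
def q1(x):
    return np.ones_like(x)

# Channel 2: Cauchy density matched to the peak, truncated to [0, 1],
# sampled by inverting its CDF.
a, b = np.arctan(-m / g), np.arctan((1.0 - m) / g)
def q2(x):
    return 1.0 / (g * (b - a) * (1.0 + ((x - m) / g) ** 2))
def sample_q2(n):
    return m + g * np.tan(a + (b - a) * rng.uniform(size=n))

# Multi-channel importance sampling: pick a channel with probability alpha_i,
# then weight by the full mixture density actually sampled. MadNIS learns the
# alphas and replaces the fixed channel mappings with a normalizing flow.
alpha = np.array([0.5, 0.5])
n = 200_000
k = rng.choice(2, size=n, p=alpha)
x = np.where(k == 0, rng.uniform(size=n), sample_q2(n))
q = alpha[0] * q1(x) + alpha[1] * q2(x)
w = f(x) / q
print(f"integral ~ {w.mean():.3f} +/- {w.std(ddof=1) / np.sqrt(n):.3f}")
```

    Weighting by the mixture density rather than by the density of the channel that produced each point is what keeps the estimator unbiased for any choice of the alphas.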

    Neural BRDF Representation and Importance Sampling

    Controlled capture of real-world material appearance yields tabulated sets of highly realistic reflectance data. In practice, however, its high memory footprint requires compressing it into a representation that can be used efficiently in rendering while remaining faithful to the original. Previous works in appearance encoding often prioritized one of these requirements at the expense of the other, either by applying high-fidelity array compression strategies not suited for efficient queries during rendering, or by fitting a compact analytic model that lacks expressiveness. We present a compact neural network-based representation of BRDF data that combines high-accuracy reconstruction with efficient practical rendering via built-in interpolation of reflectance. We encode BRDFs as lightweight networks and propose a training scheme with adaptive angular sampling, critical for the accurate reconstruction of specular highlights. Additionally, we propose a novel approach to make our representation amenable to importance sampling: rather than inverting the trained networks, we learn to encode them in a more compact embedding that can be mapped to the parameters of an analytic BRDF for which importance sampling is known. We evaluate encoding results on isotropic and anisotropic BRDFs from multiple real-world datasets, and importance sampling performance for isotropic BRDFs mapped to two different analytic models.
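    A minimal sketch of the two ingredients this abstract combines, with entirely made-up network weights in place of a trained model: a small MLP evaluated as a BRDF, and an analytic proxy used for importance sampling (here a simple cosine-weighted lobe rather than the paper's learned mapping to an analytic BRDF).

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in "neural" BRDF over (cos_theta_i, cos_theta_o): a tiny MLP with
# random, untrained weights. The paper trains such networks on measured
# reflectance data; everything here is illustrative.
W1, b1 = rng.normal(size=(2, 16)), rng.normal(size=16)
W2 = rng.normal(size=(16, 1))

def neural_brdf(cos_i, cos_o):
    h = np.maximum(np.stack([cos_i, cos_o], axis=-1) @ W1 + b1, 0.0)  # ReLU
    return 0.1 * np.log1p(np.exp(h @ W2))[..., 0]                     # softplus > 0

# Importance sampling with an analytic proxy. The paper maps a learned
# embedding to an analytic BRDF with known sampling; here a cosine-weighted
# hemisphere lobe, pdf = cos_theta / pi, plays that role.
n = 50_000
cos_theta = np.sqrt(1.0 - rng.uniform(size=n))   # cosine-weighted sampling
f = neural_brdf(cos_theta, np.full(n, 0.8))      # fixed outgoing direction
pdf = cos_theta / np.pi
estimate = np.mean(f * cos_theta / pdf)          # ~ integral of f * cos over hemisphere
print("reflectance integral ~", estimate)
```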

    On the shape of posterior densities and credible sets in instrumental variable regression models with reduced rank: an application of flexible sampling methods using neural networks

    Likelihoods and posteriors of instrumental variable regression models with strong endogeneity and/or weak instruments may exhibit rather non-elliptical contours in the parameter space. This may seriously affect inference based on Bayesian credible sets. When approximating such contours using Monte Carlo integration methods like importance sampling or Markov chain Monte Carlo procedures, the speed of the algorithm and the quality of the results greatly depend on the choice of the importance or candidate density. Such a density has to be "close" to the target density in order to yield accurate results with numerically efficient sampling. For this purpose we introduce neural networks, which seem to be natural importance or candidate densities, as they have a universal approximation property and are easy to sample from. A key step in the proposed class of methods is the construction of a neural network that approximates the target density accurately. The methods are tested on a set of illustrative models. The results indicate the feasibility of the neural network approach.
    Keywords: Markov chain Monte Carlo; Bayesian inference; credible sets; importance sampling; instrumental variables; neural networks; reduced rank
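    Since this paper's concern is credible sets computed from such sampling output, here is a small sketch, under assumed toy densities, of an equal-tailed 95% credible interval obtained from importance-weighted draws via weighted quantiles; the target and candidate are illustrative stand-ins, not the paper's IV-regression posterior.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed importance-sampling output: candidate draws from a scaled Student-t
# with weights target/candidate; the target is an illustrative bimodal
# stand-in for a non-elliptical posterior.
def target(x):
    return 0.6 * np.exp(-0.5 * (x + 2.0) ** 2) + \
           0.4 * np.exp(-0.5 * (x - 3.0) ** 2 / 0.25)

x = 2.0 * rng.standard_t(df=3, size=200_000)
cand = (1.0 + x ** 2 / 12.0) ** -2 / (np.pi * np.sqrt(3.0))   # t(3) pdf, scale 2
w = target(x) / cand

# Equal-tailed 95% credible set from the weighted draws: sort, build the
# weighted empirical CDF, and read off the 2.5% and 97.5% quantiles.
order = np.argsort(x)
xs = x[order]
cdf = np.cumsum(w[order]) / np.sum(w)
lo, hi = np.interp([0.025, 0.975], cdf, xs)
print(f"95% credible set ~ [{lo:.2f}, {hi:.2f}]")
```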