
    Bayesian Credibility for GLMs

    We revisit the classical credibility results of Jewell and Bühlmann to obtain credibility premiums for a GLM using a modern Bayesian approach. Here the prior distribution can be chosen freely, without the restriction of being conjugate to the response distribution; it can even be built from out-of-sample information if the actuary prefers. We then use the relative entropy between the "true" and the estimated models as a loss function, without restricting credibility premiums to be linear. A numerical illustration on real data shows the feasibility of the approach, now that computing power is cheap and simulation software is readily available.
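
    As a minimal numerical sketch of this idea (a toy Poisson model with a hypothetical lognormal prior and made-up claim counts, not the paper's data or GLM), one can approximate the posterior on a grid and select the premium that minimizes the expected Kullback-Leibler divergence between the "true" and estimated Poisson models:

        import numpy as np
        from scipy import stats

        # Hypothetical claim counts for one risk class and a lognormal (non-conjugate) prior.
        claims = np.array([3, 1, 4, 2, 0, 2])
        lam = np.linspace(0.01, 10.0, 2000)                 # grid over the Poisson rate
        prior = stats.lognorm.pdf(lam, s=0.8, scale=2.0)    # prior density on the grid
        like = np.prod(stats.poisson.pmf(claims[:, None], lam), axis=0)
        post = prior * like
        post /= post.sum()                                  # posterior weights on the grid

        # KL divergence between Poisson(lam) ("true" model) and Poisson(prem) (estimated model).
        def kl_poisson(lam, prem):
            return lam * np.log(lam / prem) - lam + prem

        # Expected posterior KL loss for each candidate premium; the minimizer is the
        # credibility premium under relative-entropy loss, with no linearity restriction.
        candidates = np.linspace(0.5, 5.0, 500)
        exp_loss = [np.sum(kl_poisson(lam, c) * post) for c in candidates]
        premium = candidates[np.argmin(exp_loss)]
        print("credibility premium:", premium)
        print("posterior mean     :", np.sum(lam * post))   # the KL-optimal premium equals the posterior mean here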

    Entropic effects in large-scale Monte Carlo simulations

    The efficiency of Monte Carlo samplers is dictated not only by energetic effects, such as large barriers, but also by entropic effects that are due to the sheer volume that is sampled. The latter effects appear in the form of an entropic mismatch or divergence between the direct and reverse trial moves. We provide lower and upper bounds for the average acceptance probability in terms of the Rényi divergence of order 1/2. We show that the asymptotic finiteness of the entropic divergence is the necessary and sufficient condition for non-vanishing acceptance probabilities in the limit of large dimensions. Furthermore, we demonstrate that the upper bound is reasonably tight by showing that the exponent is asymptotically exact for systems made up of a large number of independent and identically distributed subsystems. For the last statement, we provide an alternative proof that relies on the reformulation of the acceptance probability as a large deviation problem. The reformulation also leads to a class of low-variance estimators for strongly asymmetric distributions. We show that the entropic divergence causes a decay in the average displacements with the number of dimensions n that are simultaneously updated. For systems that have a well-defined thermodynamic limit, the decay is demonstrated to be n^{-1/2} for random-walk Monte Carlo and n^{-1/6} for Smart Monte Carlo (SMC). Numerical simulations of the LJ_38 cluster show that SMC is virtually as efficient as the Markov chain implementation of the Gibbs sampler, which is normally utilized for Lennard-Jones clusters. An application of the entropic inequalities to the parallel tempering method demonstrates that the number of replicas increases as the square root of the heat capacity of the system.
    Comment: minor corrections; the best compromise for the value of the epsilon parameter in Eq. A9 is now shown to be log(2); 13 pages, 4 figures, to appear in PR
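
    A toy illustration of this entropic effect (assuming an n-dimensional standard Gaussian target and random-walk Metropolis proposals, not the paper's Lennard-Jones systems or Rényi bounds): at a fixed step size the average acceptance probability collapses as n grows, while shrinking the step as n^{-1/2} keeps it of order one, consistent with the displacement scaling quoted above.

        import numpy as np

        rng = np.random.default_rng(0)

        def avg_acceptance(n, sigma, draws=5000):
            # Average Metropolis acceptance for a random walk of step sigma on N(0, I_n),
            # with the current state drawn from the target (i.e. at equilibrium).
            x = rng.standard_normal((draws, n))
            y = x + sigma * rng.standard_normal((draws, n))
            log_ratio = 0.5 * (np.sum(x**2, axis=1) - np.sum(y**2, axis=1))
            return np.mean(np.exp(np.minimum(0.0, log_ratio)))

        for n in (1, 10, 100, 1000):
            fixed = avg_acceptance(n, sigma=1.0)                 # fixed step: acceptance collapses
            scaled = avg_acceptance(n, sigma=2.4 / np.sqrt(n))   # step ~ n^{-1/2}: stays O(1)
            print(f"n={n:5d}  fixed-step acc={fixed:.3f}  scaled-step acc={scaled:.3f}")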

    Bayesian interpretation of Generalized empirical likelihood by maximum entropy

    We study a parametric estimation problem related to moment condition models. As an alternative to the generalized empirical likelihood (GEL) and the generalized method of moments (GMM), a Bayesian approach to the problem can be adopted, extending the MEM procedure to parametric moment conditions. We show in particular that a large number of GEL estimators can be interpreted as maximum entropy solutions. Moreover, we provide a more general field of applications by proving the method to be robust to approximate moment conditions.
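
    A minimal sketch of the maximum-entropy side of this connection, for one exponential-tilting member of the GEL family rather than the paper's general construction: given the scalar moment condition E[X - theta] = 0, the weights closest to uniform in the Kullback-Leibler sense are an exponential tilt of the empirical distribution, with the tilting parameter obtained from a one-dimensional dual condition. The sample and the candidate theta below are hypothetical.

        import numpy as np
        from scipy.optimize import brentq

        rng = np.random.default_rng(1)
        x = rng.exponential(scale=1.0, size=200)   # hypothetical sample
        theta = 1.2                                # candidate value of the mean parameter

        g = x - theta                              # moment function g(x, theta) = x - theta

        # Dual first-order condition for the KL-minimizing (maximum-entropy) tilt:
        # sum_i exp(lam * g_i) * g_i = 0 determines the tilting parameter lam.
        def dual_foc(lam):
            return np.sum(np.exp(lam * g) * g)

        lam = brentq(dual_foc, -10.0, 10.0)
        w = np.exp(lam * g)
        w /= w.sum()                               # tilted (maximum-entropy) weights

        print("tilting parameter:", lam)
        print("reweighted moment:", np.sum(w * g))            # ~0 by construction
        print("KL to uniform    :", np.sum(w * np.log(len(x) * w)))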

    Bayesian reconstruction of the cosmological large-scale structure: methodology, inverse algorithms and numerical optimization

    We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood, and numerical inverse extra-regularization schemes, are derived and classified. The Bayesian methodology presented in this paper tries to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy, and inverse regularization techniques. The inverse techniques considered here are asymptotic regularization, the Jacobi, steepest-descent, Newton-Raphson, and Landweber-Fridman methods, and both linear and non-linear Krylov methods based on Fletcher-Reeves, Polak-Ribière, and Hestenes-Stiefel conjugate gradients. The structures of the best-performing algorithms to date are presented, based on an operator scheme that permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel ARGO software package, the different numerical schemes are benchmarked with 1-, 2-, and 3-dimensional problems including structured white and Poissonian noise, data windowing and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods ultimately will enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark-matter density field, the peculiar velocity field, and the power spectrum can jointly be investigated by a Gibbs-sampling process. Such a method can be applied to correct for the redshift distortions of the observed galaxies and for time-reversal reconstructions of the initial density field.
    Comment: 40 pages, 11 figures
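
    A minimal 1-D sketch of the operator-based approach (with a hypothetical power spectrum and noise level, not the ARGO implementation): the Wiener filter s_WF = (S^{-1} + N^{-1})^{-1} N^{-1} d is computed with a conjugate-gradient solver in which S^{-1} is applied diagonally in Fourier space via FFTs, so no covariance matrix is ever formed explicitly.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        rng = np.random.default_rng(2)
        n = 512
        k = np.fft.fftfreq(n) * n
        P = 1.0 / (1.0 + (np.abs(k) / 10.0) ** 2)      # hypothetical signal power spectrum
        sigma2 = 0.5                                   # white-noise variance

        # Draw a signal with circulant covariance S = F^{-1} diag(P) F and add white noise.
        signal = np.fft.ifft(np.sqrt(P) * np.fft.fft(rng.standard_normal(n))).real
        data = signal + np.sqrt(sigma2) * rng.standard_normal(n)

        # Apply A = S^{-1} + N^{-1} matrix-free: S^{-1} is diagonal in Fourier space,
        # N^{-1} is diagonal in pixel space.
        def apply_A(x):
            return np.fft.ifft(np.fft.fft(x) / P).real + x / sigma2

        A = LinearOperator((n, n), matvec=apply_A, dtype=np.float64)
        s_wf, info = cg(A, data / sigma2)              # Wiener-filter reconstruction
        print("CG converged:", info == 0,
              " reconstruction RMS error:", np.sqrt(np.mean((signal - s_wf) ** 2)))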

    Efficient computation of the first passage time distribution of the generalized master equation by steady-state relaxation

    The generalized master equation or the equivalent continuous-time random walk equations can be used to compute the macroscopic first passage time distribution (FPTD) of a complex stochastic system from short-term microscopic simulation data. The computation of the mean first passage time and additional low-order FPTD moments can be simplified by directly relating the FPTD moment-generating function to the moments of the local FPTD matrix. This relationship can be physically interpreted in terms of steady-state relaxation, an extension of steady-state flow. Moreover, it is amenable to a statistical error analysis that can be used to significantly increase computational efficiency. The efficiency improvement can be extended to the FPTD itself by modelling it using a Gamma distribution or a rational-function approximation to its Laplace transform.
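
    A minimal sketch on a small, hypothetical ordinary master equation rather than the generalized one: with the rate matrix restricted to the transient states, the FPTD moments follow from repeated linear solves, E[T^k] = k! (-Q_T)^{-k} 1, and matching the first two moments yields the Gamma-distribution model of the FPTD mentioned above.

        import numpy as np

        # Hypothetical 3-state master equation with an extra absorbing state; Q_T is the
        # rate matrix restricted to the transient states {0, 1, 2}, so each row sums to
        # minus the rate of escape to the absorbing state.
        Q_T = np.array([[-1.2,  1.0,  0.1],
                        [ 0.3, -0.8,  0.4],
                        [ 0.0,  0.5, -1.5]])
        ones = np.ones(3)

        # FPTD moments from linear solves: E[T^k] = k! * (-Q_T)^{-k} 1, per start state.
        m1 = np.linalg.solve(-Q_T, ones)           # mean first passage times
        m2 = 2.0 * np.linalg.solve(-Q_T, m1)       # second moments E[T^2]

        start = 0
        mean, var = m1[start], m2[start] - m1[start] ** 2

        # Moment-matched Gamma model of the FPTD: shape and scale from mean and variance.
        shape = mean ** 2 / var
        scale = var / mean
        print(f"MFPT={mean:.3f}  Var={var:.3f}  Gamma(shape={shape:.3f}, scale={scale:.3f})")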