
    Marginal Likelihood Estimation with the Cross-Entropy Method

    We consider an adaptive importance sampling approach to estimating the marginal likelihood, a quantity that is fundamental in Bayesian model comparison and Bayesian model averaging. This approach is motivated by the difficulty of obtaining an accurate estimate through existing algorithms that use Markov chain Monte Carlo (MCMC) draws, where the draws are typically costly to obtain and highly correlated in high-dimensional settings. In contrast, we use the cross-entropy (CE) method, a versatile adaptive Monte Carlo algorithm originally developed for rare-event simulation. The main advantage of the importance sampling approach is that random samples can be obtained from some convenient density at little additional cost. As we generate independent draws instead of correlated MCMC draws, the increase in simulation effort is much smaller should one wish to reduce the numerical standard error of the estimator. Moreover, the importance density derived via the CE method is optimal in a well-defined sense. We demonstrate the utility of the proposed approach with two empirical applications involving women's labor market participation and U.S. macroeconomic time series. In both applications the proposed CE method compares favorably to existing estimators.
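
    As a minimal illustration of the importance sampling estimator described above, the sketch below estimates the marginal likelihood of a toy conjugate normal model with a Gaussian importance density fitted to preliminary posterior draws, which is what the CE step reduces to for a Gaussian family. The model, priors and sample sizes are illustrative assumptions, not the specifications used in the paper.

        # Sketch: marginal likelihood via importance sampling with a
        # CE-fitted Gaussian importance density (illustrative toy model).
        import numpy as np
        from scipy import stats
        from scipy.special import logsumexp

        rng = np.random.default_rng(0)
        y = rng.normal(1.0, 1.0, size=50)      # data; observation sd = 1 known
        mu0, tau0 = 0.0, 10.0                  # N(mu0, tau0^2) prior on the mean

        def log_joint(theta):                  # log p(y | theta) + log p(theta)
            ll = stats.norm.logpdf(y[:, None], loc=theta, scale=1.0).sum(axis=0)
            return ll + stats.norm.logpdf(theta, mu0, tau0)

        # Step 1: preliminary posterior draws (exact here; MCMC in general).
        post_var = 1.0 / (1.0 / tau0**2 + len(y))
        post_mean = post_var * (mu0 / tau0**2 + y.sum())
        draws = rng.normal(post_mean, np.sqrt(post_var), size=2000)

        # Step 2: CE step -- minimising the cross-entropy to the posterior
        # within the Gaussian family reduces to moment matching.
        q_mean, q_sd = draws.mean(), draws.std(ddof=1)

        # Step 3: importance sampling with independent draws from the
        # fitted candidate density.
        theta = rng.normal(q_mean, q_sd, size=20000)
        log_w = log_joint(theta) - stats.norm.logpdf(theta, q_mean, q_sd)
        log_ml = logsumexp(log_w) - np.log(theta.size)
        print("log marginal likelihood (IS):", log_ml)

        # Exact benchmark for this conjugate model: y ~ N(mu0*1, I + tau0^2 * J).
        n = y.size
        cov = np.eye(n) + tau0**2 * np.ones((n, n))
        print("log marginal likelihood (exact):",
              stats.multivariate_normal.logpdf(y, mean=np.full(n, mu0), cov=cov))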

    A Comparative Study of Monte Carlo Methods for Efficient Evaluation of Marginal Likelihoods

    Strategic choices for efficient and accurate evaluation of marginal likelihoods by means of Monte Carlo simulation methods are studied for the case of highly non-elliptical posterior distributions. A comparative analysis is presented of possible advantages and limitations of different simulation techniques; of possible choices of candidate distributions and choices of target or warped target distributions; and finally of numerical standard errors. The importance of a robust and flexible estimation strategy is demonstrated where the complete posterior distribution is explored. Given an appropriately yet quickly tuned adaptive candidate, straightforward importance sampling provides a computationally efficient estimator of the marginal likelihood (and a reliable and easily computed corresponding numerical standard error) in the cases investigated in this paper, which include a non-linear regression model and a mixture GARCH model. Warping the posterior density can lead to a further gain in efficiency, but it is more important that the posterior kernel is appropriately wrapped by the candidate distribution than that it is warped.
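
    Since importance sampling draws are independent, the numerical standard error mentioned above can be computed directly from the weights. Below is a minimal sketch; the helper function and the toy log-weights are hypothetical, and in practice the log-weights would come from an importance sampling run such as the one sketched earlier.

        # Sketch: numerical standard error (NSE) of an importance sampling
        # estimate of the marginal likelihood, computed from i.i.d. log-weights.
        import numpy as np

        def is_estimate_with_nse(log_w):
            """Return (log marginal likelihood, relative NSE) from log-weights."""
            log_w = np.asarray(log_w, dtype=float)
            n = log_w.size
            m = log_w.max()                       # stabilise the exponentiation
            w = np.exp(log_w - m)
            log_ml = m + np.log(w.mean())
            # NSE of the estimate divided by the estimate itself; by the delta
            # method this approximates the NSE of the log marginal likelihood.
            rel_nse = w.std(ddof=1) / (np.sqrt(n) * w.mean())
            return log_ml, rel_nse

        toy_log_w = np.random.default_rng(1).normal(-2.0, 0.5, size=10_000)
        print(is_estimate_with_nse(toy_log_w))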

    To Bridge, to Warp or to Wrap?

    Important choices for efficient and accurate evaluation of marginal likelihoods by means of Monte Carlo simulation methods are studied for the case of highly non-elliptical posterior distributions. We focus on the situation where one makes use of importance sampling or the independence chain Metropolis-Hastings algorithm for posterior analysis. A comparative analysis is presented of possible advantages and limitations of different simulation techniques; of possible choices of candidate distributions and choices of target or warped target distributions; and finally of numerical standard errors. The importance of a robust and flexible estimation strategy is demonstrated where the complete posterior distribution is explored. In this respect, the adaptive mixture of Student-t distributions of Hoogerheide et al. (2007) works particularly well. Given an appropriately yet quickly tuned candidate, straightforward importance sampling provides the most efficient estimator of the marginal likelihood in the cases investigated in this paper, which include a non-linear regression model of Ritter and Tanner (1992) and a conditional normal distribution of Gelman and Meng (1991). A poor choice of candidate density may lead to a huge loss of efficiency, in which case the numerical standard error may be highly unreliable.
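
    The sketch below illustrates importance sampling with a mixture of Student-t candidate densities of the kind referred to above, applied to a bimodal toy target. The mixture components are fixed by hand; the adaptive fitting step of Hoogerheide et al. (2007) is not reproduced, and all numbers are illustrative assumptions.

        # Sketch: a fixed two-component Student-t mixture used as an
        # importance sampling candidate for a bimodal target kernel.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        mix_w = np.array([0.5, 0.5])                    # mixture weights
        comps = [stats.t(df=5, loc=-2.0, scale=1.0),    # heavy-tailed components
                 stats.t(df=5, loc=3.0, scale=1.5)]

        def sample_candidate(n):
            counts = rng.multinomial(n, mix_w)
            x = np.concatenate([c.rvs(size=k, random_state=rng)
                                for c, k in zip(comps, counts)])
            rng.shuffle(x)
            return x

        def log_candidate(x):
            return np.log(sum(w * c.pdf(x) for w, c in zip(mix_w, comps)))

        def log_target_kernel(x):                       # bimodal, unnormalised
            return np.logaddexp(stats.norm.logpdf(x, -2.0, 0.7),
                                stats.norm.logpdf(x, 3.0, 1.0))

        x = sample_candidate(20_000)
        log_w = log_target_kernel(x) - log_candidate(x)
        log_c = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
        print("estimated normalising constant:", np.exp(log_c))   # close to 2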

    Modeling operational risk data reported above a time-varying threshold

    Typically, operational risk losses are reported above a threshold. Fitting data reported above a constant threshold is a well-known and studied problem. However, in practice, the losses are scaled for business and other factors before the fitting, and thus the threshold varies across the scaled data sample. A reporting level may also change when a bank changes its reporting policy. We present both the maximum likelihood and Bayesian Markov chain Monte Carlo approaches to fitting the frequency and severity loss distributions using data reported above a time-varying threshold. Estimation of the annual loss distribution accounting for parameter uncertainty is also presented.
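
    To make the time-varying threshold concrete, the sketch below fits a severity distribution by maximum likelihood when each loss is reported only above its own threshold, so that each observation contributes f(x_i) / (1 - F(u_i)) to the likelihood. The lognormal severity, the two reporting levels and the simulated data are illustrative assumptions rather than the paper's specification, and the frequency distribution and the Bayesian MCMC treatment are omitted.

        # Sketch: MLE of a lognormal severity distribution for losses reported
        # above observation-specific (time-varying) thresholds.
        import numpy as np
        from scipy import stats, optimize

        rng = np.random.default_rng(3)
        full = rng.lognormal(mean=10.0, sigma=2.0, size=5000)    # latent losses
        u = np.where(np.arange(full.size) < 2500, 5e3, 2e4)     # reporting level changes
        x, u = full[full > u], u[full > u]                      # reported losses only

        def neg_loglik(params):
            mu, sigma = params[0], np.exp(params[1])            # keep sigma > 0
            dist = stats.lognorm(s=sigma, scale=np.exp(mu))
            # log-density of each loss minus the log-probability of exceeding
            # its threshold (the truncation correction).
            return -(dist.logpdf(x) - dist.logsf(u)).sum()

        start = np.array([np.log(np.median(x)), 0.0])
        res = optimize.minimize(neg_loglik, start, method="Nelder-Mead")
        mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
        print("fitted mu, sigma:", mu_hat, sigma_hat)           # near 10 and 2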

    Generalized Factor Models: A Bayesian Approach

    There is recent interest in the generalization of classical factor models, in which the idiosyncratic factors are assumed to be orthogonal and identification restrictions are imposed on the cross-sectional and time dimensions. In this study, we describe and implement a Bayesian approach to generalized factor models. A flexible framework is developed to determine the variations attributed to common and idiosyncratic factors. We also propose a unique methodology to select the (generalized) factor model that best fits a given set of data. Applying the proposed methodology to simulated data and foreign exchange rate data, we provide a comparative analysis between the classical and generalized factor models. We find that moving from the classical to the generalized model produces significant changes in the estimates of the covariance and correlation structures, while the changes in the estimates of the factor loadings and the variation attributed to common factors are less dramatic.
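
    As a small illustration of attributing variation to common and idiosyncratic factors, the sketch below decomposes the implied variance of each series in a static factor model x_t = Lambda f_t + e_t. The loadings, factor covariance and idiosyncratic variances are made-up numbers, not estimates from the paper.

        # Sketch: variance decomposition in a static factor model.
        import numpy as np

        Lambda = np.array([[0.9, 0.1],       # factor loadings (3 series, 2 factors)
                           [0.4, 0.7],
                           [0.2, 0.3]])
        Phi = np.array([[1.0, 0.3],          # factor covariance (need not be the
                        [0.3, 1.0]])         # identity in a generalized model)
        Psi = np.diag([0.3, 0.5, 0.9])       # idiosyncratic variances

        common = Lambda @ Phi @ Lambda.T     # covariance of the common component
        total = common + Psi                 # implied covariance of the data
        share_common = np.diag(common) / np.diag(total)
        print("share of variance due to common factors:", share_common.round(3))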

    Posterior analysis of stochastic frontier models using Gibbs sampling

    In this paper we describe the use of Gibbs sampling methods for making posterior inferences in stochastic frontier models with composed error. We show how the Gibbs sampler can greatly reduce the computational difficulties involved in analyzing such models. Our findings are illustrated in an empirical example.
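
    A minimal sketch of such a Gibbs sampler is given below for a cross-sectional frontier y_i = x_i'beta + v_i - u_i with normal noise and exponential inefficiency, assuming a flat prior on beta, the improper prior p(sigma^2) proportional to 1/sigma^2 and a gamma prior on the inefficiency parameter; these assumptions are for illustration and need not match the paper's composed-error specification.

        # Sketch: Gibbs sampling for a stochastic frontier model with
        # composed error (normal noise, exponential inefficiency).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        n, k = 300, 2
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        beta_true, sigma_true, lam_true = np.array([1.0, 0.5]), 0.3, 2.0
        y = (X @ beta_true + rng.normal(0.0, sigma_true, n)
             - rng.exponential(1.0 / lam_true, n))

        a0, b0 = 2.0, 1.0                          # Gamma(a0, b0) prior on lam
        beta, sigma2, lam, u = np.zeros(k), 1.0, 1.0, np.ones(n)
        XtX_inv = np.linalg.inv(X.T @ X)
        kept = []

        for it in range(3000):
            # beta | rest: regression of (y + u) on X under a flat prior
            beta_hat = XtX_inv @ (X.T @ (y + u))
            beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
            # sigma^2 | rest: inverse gamma
            resid = y + u - X @ beta
            sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / (resid @ resid))
            # u_i | rest: normal truncated to [0, inf)
            mu_u = X @ beta - y - lam * sigma2
            sd = np.sqrt(sigma2)
            u = stats.truncnorm.rvs(a=-mu_u / sd, b=np.inf,
                                    loc=mu_u, scale=sd, random_state=rng)
            # lam | rest: gamma
            lam = rng.gamma(a0 + n, 1.0 / (b0 + u.sum()))
            if it >= 1000:                          # discard burn-in
                kept.append(np.concatenate([beta, [np.sqrt(sigma2), lam]]))

        print("posterior means:", np.mean(kept, axis=0))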

    Joint Tests for Long Memory and Non-linearity: The Case of Purchasing Power Parity

    A pervasive finding of unit roots in macroeconomic data often runs counter to intuition regarding the stochastic nature of the process under consideration. Two econometric techniques have been utilized in an attempt to resolve the finding of unit roots, namely long memory and models that depart from linearity. While long memory and stochastic regime-switching models have developed almost independently of each other, it is now clear that the two modeling techniques can be intimately linked. In particular, both modeling techniques have been used in isolation to study the dynamics of the real exchange rate. To determine the importance of each technique in this context, I employ a testing and estimation procedure that allows one to jointly test for long memory and non-linearity (regime-switching behavior) of the STAR variety. I find substantial evidence of non-linear behavior in the real exchange rate for many developing and European countries, with little evidence of ESTAR non-linearity for countries outside the European continent, including Japan and Canada. In cases where non-linearity is found, I also find significant evidence of long memory for the majority of the countries in my sample. Thus, long memory and non-linearity can also be viewed as complements rather than substitutes, and a combination of the two may be a promising research avenue for pursuing an answer to the paradox.
    Keywords: real exchange rates, long memory, ESTAR non-linearity
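
    For readers unfamiliar with the ESTAR specification tested above, the sketch below simulates a series whose speed of mean reversion depends smoothly on the size of past deviations: near equilibrium it behaves like a unit root, while large deviations revert. The parameter values are made up for illustration and the joint long-memory test itself is not reproduced.

        # Sketch: the exponential STAR (ESTAR) transition mechanism.
        import numpy as np

        def estar_transition(s, gamma, c=0.0):
            """Exponential transition function G(s) in [0, 1)."""
            return 1.0 - np.exp(-gamma * (s - c) ** 2)

        def simulate_estar(T, gamma=2.0, rho_outer=0.5, sigma=0.1, seed=5):
            """q_t = q_{t-1} + G(q_{t-1}) * (rho_outer - 1) * q_{t-1} + e_t."""
            rng = np.random.default_rng(seed)
            q = np.zeros(T)
            for t in range(1, T):
                G = estar_transition(q[t - 1], gamma)
                q[t] = (q[t - 1] + G * (rho_outer - 1.0) * q[t - 1]
                        + rng.normal(0.0, sigma))
            return q

        q = simulate_estar(500)
        print("lag-1 autocorrelation:", np.corrcoef(q[:-1], q[1:])[0, 1])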