
    A review of R-packages for random-intercept probit regression in small clusters

    Generalized Linear Mixed Models (GLMMs) are widely used to model clustered categorical outcomes. To tackle the intractable integration over the random effects distributions, several approximation approaches have been developed for likelihood-based inference. As these seldom yield satisfactory results when analyzing binary outcomes from small clusters, estimation within the Structural Equation Modeling (SEM) framework is proposed as an alternative. We compare the performance of R packages for random-intercept probit regression that rely on the Laplace approximation, adaptive Gaussian quadrature (AGQ), penalized quasi-likelihood (PQL), an MCMC implementation, or integrated nested Laplace approximation within the GLMM framework, and on robust diagonally weighted least squares estimation within the SEM framework. In terms of bias of the fixed and random effect estimators, SEM usually performs best for cluster size two, while AGQ prevails in terms of precision (mainly because of SEM's robust standard errors). As the cluster size increases, however, AGQ becomes the best choice for both bias and precision.
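The quadrature idea behind several of the compared estimators can be conveyed in a few lines. The Python sketch below (illustrative, not taken from any of the reviewed R packages; the function name and arguments are hypothetical) evaluates one cluster's marginal log-likelihood under a random-intercept probit model with ordinary Gauss-Hermite quadrature; AGQ additionally recenters and rescales the nodes around the mode of each cluster's integrand.

```python
import numpy as np
from scipy.stats import norm

def cluster_loglik(y, x, beta, sigma_u, n_nodes=20):
    """Marginal log-likelihood of one cluster's binary outcomes under a
    random-intercept probit model, integrating the random intercept
    u ~ N(0, sigma_u^2) out with Gauss-Hermite quadrature."""
    # Nodes/weights for integrals of the form \int f(t) exp(-t^2) dt.
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    u = np.sqrt(2.0) * sigma_u * nodes            # change of variables t -> u
    eta = x @ beta                                # (n_obs,) fixed-effect part
    p = norm.cdf(eta[:, None] + u[None, :])       # success prob at each node
    lik_given_u = np.prod(np.where(y[:, None] == 1, p, 1.0 - p), axis=0)
    return np.log(np.sum(weights * lik_given_u) / np.sqrt(np.pi))
```

As `sigma_u` approaches zero the result collapses to the sum of independent probit log-likelihoods, which gives a quick sanity check of the quadrature.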

    Structural reliability prediction of a steel bridge element using dynamic object oriented Bayesian Network (DOOBN)

    In contrast to conventional methods for structural reliability evaluation, such as first/second-order reliability methods (FORM/SORM) or Monte Carlo simulation based on the corresponding limit state functions, this paper proposes a novel approach based on a dynamic object-oriented Bayesian network (DOOBN) for predicting the structural reliability of a steel bridge element. The DOOBN approach can effectively model the deterioration processes of a steel bridge element and predict its structural reliability over time. The approach can also perform Bayesian updating with information observed through measurement, monitoring, and visual inspection. Moreover, the computational capacity embedded in the approach can be used to facilitate integrated management and maintenance optimization in a bridge system. A steel bridge girder is used to validate the proposed approach, and the predicted results are compared with those evaluated by the FORM method.
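The predict-then-update cycle that such a network performs over time can be illustrated with a deliberately small discrete-state sketch. The states, transition matrix, and inspection model below are invented for illustration and are not from the paper:

```python
import numpy as np

# Toy deterioration model: states 0 = good, 1 = corroded, 2 = failed.
# Hypothetical yearly transition matrix and an inspection model giving
# P(defect flagged | state).
T = np.array([[0.90, 0.09, 0.01],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])
p_obs_given_state = np.array([0.05, 0.80, 0.95])

def predict(belief):
    """Propagate the state distribution one time step (temporal arc)."""
    return belief @ T

def update(belief, defect_flagged):
    """Bayesian update with one inspection outcome (evidence node)."""
    like = p_obs_given_state if defect_flagged else 1.0 - p_obs_given_state
    post = belief * like
    return post / post.sum()

belief = np.array([1.0, 0.0, 0.0])             # new element, known good
belief = predict(belief)                       # one year of deterioration
belief = update(belief, defect_flagged=False)  # clean inspection result
reliability = 1.0 - belief[2]                  # P(not failed)
```

A real DOOBN factors the state into many interacting nodes and repeats this cycle over the planning horizon, but the updating mechanics are the same.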

    Bayesian Analysis of Structural Credit Risk Models with Microstructure Noises

    In this paper a Markov chain Monte Carlo (MCMC) technique is developed for the Bayesian analysis of structural credit risk models with microstructure noises. The technique is based on the general Bayesian approach, with posterior computations performed by Gibbs sampling. Simulations from the Markov chain, whose stationary distribution converges to the posterior distribution, enable exact finite-sample inference for model parameters. The exact inference extends easily to latent state variables and to any nonlinear transformation of state variables and parameters, facilitating practical credit risk applications. In addition, the comparison of alternative models can be based on the deviance information criterion (DIC), which is straightforwardly obtained from the MCMC output. The method is implemented on the basic structural credit risk model with pure microstructure noises and on some more general specifications, using daily equity data from US and emerging markets. We find empirical evidence that microstructure noises are positively correlated with firm values in emerging markets.
    Keywords: MCMC, credit risk, microstructure noise, deviance information criterion
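Computing the DIC from MCMC output is mechanical once posterior draws are available. A minimal Python sketch on a toy normal-mean model (not the paper's credit risk model; all names are hypothetical) follows; since only the mean is sampled, the effective number of parameters p_D should come out close to 1.

```python
import numpy as np

def dic(deviance_draws, deviance_at_mean):
    """DIC from MCMC output: DIC = D(theta_bar) + 2 * p_D,
    where p_D = mean(D(theta)) - D(theta_bar)."""
    p_d = deviance_draws.mean() - deviance_at_mean
    return deviance_at_mean + 2.0 * p_d, p_d

# Toy model: y_i ~ N(mu, 1) with prior mu ~ N(0, 100).  The posterior for
# mu is available in closed form, so we draw from it directly in place of
# running a full Gibbs sampler.
rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=50)
var_post = 1.0 / (1.0 / 100.0 + len(y))
mu_post = var_post * y.sum()
draws = rng.normal(mu_post, np.sqrt(var_post), size=5000)

def deviance(mu):
    """-2 times the log-likelihood of the data at mean mu."""
    return -2.0 * np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (y - mu) ** 2)

d_draws = np.array([deviance(m) for m in draws])
dic_value, p_d = dic(d_draws, deviance(draws.mean()))
```

The same two-line computation applies unchanged when the draws come from a Gibbs sampler over parameters and latent firm values, which is what makes DIC convenient as an MCMC by-product.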

    Marginal Likelihood Estimation with the Cross-Entropy Method

    We consider an adaptive importance sampling approach to estimating the marginal likelihood, a quantity that is fundamental in Bayesian model comparison and Bayesian model averaging. This approach is motivated by the difficulty of obtaining an accurate estimate through existing algorithms that use Markov chain Monte Carlo (MCMC) draws, where the draws are typically costly to obtain and highly correlated in high-dimensional settings. In contrast, we use the cross-entropy (CE) method, a versatile adaptive Monte Carlo algorithm originally developed for rare-event simulation. The main advantage of the importance sampling approach is that random samples can be obtained from some convenient density at little additional cost. As we generate independent draws instead of correlated MCMC draws, the increase in simulation effort needed to reduce the numerical standard error of the estimator is much smaller. Moreover, the importance density derived via the CE method is optimal in a well-defined sense. We demonstrate the utility of the proposed approach in two empirical applications involving women's labor market participation and U.S. macroeconomic time series. In both applications the proposed CE method compares favorably to existing estimators.

    Bayesian comparison of latent variable models: Conditional vs marginal likelihoods

    Typical Bayesian methods for models with latent variables (or random effects) involve directly sampling the latent variables along with the model parameters. In high-level software code for model definitions (using, e.g., BUGS, JAGS, Stan), the likelihood is therefore specified as conditional on the latent variables. This can lead researchers to perform model comparisons via conditional likelihoods, where the latent variables are considered model parameters. In other settings, however, typical model comparisons involve marginal likelihoods where the latent variables are integrated out. This distinction is often overlooked despite the fact that it can have a large impact on the comparisons of interest. In this paper, we clarify and illustrate these issues, focusing on the comparison of conditional and marginal Deviance Information Criteria (DICs) and Watanabe-Akaike Information Criteria (WAICs) in psychometric modeling. The conditional/marginal distinction corresponds to whether the model should be predictive for the clusters that are in the data or for new clusters (where "clusters" typically correspond to higher-level units like people or schools). Correspondingly, we show that marginal WAIC corresponds to leave-one-cluster-out (LOcO) cross-validation, whereas conditional WAIC corresponds to leave-one-unit-out (LOuO). These results lead to recommendations on the general application of the criteria to models with latent variables.
    Comment: Manuscript in press at Psychometrika; 31 pages, 8 figures
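The conditional/marginal distinction can be made concrete with a small random-intercept normal model. In the sketch below (illustrative, not from the paper), the conditional pointwise log-likelihood has one term per unit given the sampled intercept, while the marginal version integrates the intercept out analytically and has one term per cluster. For simplicity tau and sigma are held fixed, so the marginal term shows no posterior spread here; in a full analysis they would be sampled as well.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def waic(loglik):
    """WAIC on the deviance scale, -2 * (lppd - p_waic), from an
    (n_draws, n_units) matrix of pointwise log-likelihood values."""
    lppd = np.log(np.exp(loglik).mean(axis=0)).sum()
    p_waic = loglik.var(axis=0, ddof=1).sum()
    return -2.0 * (lppd - p_waic)

rng = np.random.default_rng(1)
tau, sigma, n_clusters, m = 1.0, 0.5, 20, 4
u_true = rng.normal(0.0, tau, n_clusters)
y = u_true[:, None] + rng.normal(0.0, sigma, (n_clusters, m))

# Stand-in for MCMC: draw each latent intercept u_j from its exact
# posterior given y_j (tau and sigma treated as known).
prec = 1.0 / tau**2 + m / sigma**2
u_mean = (y.sum(axis=1) / sigma**2) / prec
draws = rng.normal(u_mean, 1.0 / np.sqrt(prec), size=(2000, n_clusters))

# Conditional: one log-likelihood term per *unit*, given the sampled intercept.
cond_ll = norm.logpdf(y.ravel()[None, :], np.repeat(draws, m, axis=1), sigma)
cond_waic = waic(cond_ll)            # corresponds to leave-one-unit-out

# Marginal: one term per *cluster*, with the intercept integrated out.
cov = sigma**2 * np.eye(m) + tau**2 * np.ones((m, m))
marg_ll = np.array([multivariate_normal.logpdf(yj, np.zeros(m), cov)
                    for yj in y])
marg_dev = -2.0 * marg_ll.sum()      # corresponds to leave-one-cluster-out
```

The two quantities target different prediction tasks (units within observed clusters versus entirely new clusters), which is exactly why they can rank models differently.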

    Extracting the Italian output gap: a Bayesian approach

    During the last decades, particular effort has been directed towards understanding and predicting the relevant state of the business cycle, with the objective of separating permanent shocks from those having only a transitory impact on real output. This trend-cycle decomposition has a relevant impact on several economic and fiscal variables and is by itself an important indicator for policy purposes. This paper deals with trend-cycle decomposition for the Italian economy, which has some interesting peculiarities that make it attractive to analyse from both a statistical and a historical perspective. We propose a univariate model for quarterly real GDP, subsequently extended to include price dynamics through a Phillips curve. The study considers a series of Italian quarterly real GDP recently released by the OECD which includes both the 1960s and the global financial crisis of 2007-2008. Parameter estimation as well as signal extraction is performed within the Bayesian paradigm, which effectively handles complex models in which the parameters enter the log-likelihood function in a strongly nonlinear way. A new adaptive independent Metropolis-within-Gibbs sampler is developed to efficiently simulate the parameters of the unobserved cycle. Our results suggest that inflation influences the Output Gap estimate, making the extracted Italian Output Gap an important indicator of inflationary pressure on the real side of the economy, as stated by the Phillips theory. Moreover, our estimated sequence of peaks and troughs of the Output Gap is in line with the OECD official dating of the Italian business cycle.
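The flavor of an adaptive independent Metropolis-within-Gibbs step can be conveyed with a toy bivariate normal target (an illustrative sketch, not the sampler developed in the paper): one coordinate is drawn exactly from its full conditional, while the other is updated by an independence Metropolis step whose Gaussian proposal is periodically refitted to the chain's own history.

```python
import numpy as np
from scipy.stats import norm

def adaptive_imwg(n_iter=20_000, rho=0.8, seed=0):
    """Toy adaptive independent Metropolis-within-Gibbs sampler for a
    bivariate standard normal target with correlation rho."""
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    prop_mu, prop_sd = 0.0, 2.0          # initial independence proposal
    cond_sd = np.sqrt(1.0 - rho**2)      # sd of each full conditional
    ys = []
    for t in range(n_iter):
        x = rng.normal(rho * y, cond_sd)          # exact Gibbs draw of x | y
        y_prop = rng.normal(prop_mu, prop_sd)     # independence proposal for y
        log_alpha = (norm.logpdf(y_prop, rho * x, cond_sd)
                     - norm.logpdf(y, rho * x, cond_sd)
                     + norm.logpdf(y, prop_mu, prop_sd)
                     - norm.logpdf(y_prop, prop_mu, prop_sd))
        if np.log(rng.uniform()) < log_alpha:     # Metropolis-Hastings accept
            y = y_prop
        ys.append(y)
        # Periodic refit of the proposal to the chain's history.  (A careful
        # implementation enforces diminishing adaptation to preserve ergodicity.)
        if t >= 500 and t % 500 == 0:
            prop_mu = np.mean(ys)
            prop_sd = np.std(ys) + 0.05
    return np.array(ys)
```

Since the marginal distribution of y is standard normal, the chain's sample mean and standard deviation provide a direct check that the adapted sampler targets the right distribution.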