22 research outputs found

    Random effects diagonal metric multidimensional scaling models

    By assuming a distribution for the subject weights in a diagonal metric (INDSCAL) multidimensional scaling model, the subject weights become random effects. Including random effects in multidimensional scaling models offers several advantages over traditional diagonal metric models such as those fitted by the INDSCAL, ALSCAL, and other multidimensional scaling programs. Unlike traditional models, the number of parameters does not increase with the number of subjects, and, because the distribution of the subject weights is modeled, the construction of linear models of the subject weights and the testing of those models is immediate. Here we define a random effects diagonal metric multidimensional scaling model, give computational algorithms, describe our experiences with these algorithms, and provide an example illustrating the use of the model and algorithms.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/45758/1/11336_2005_Article_BF02295730.pd
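    The core idea can be illustrated with a minimal sketch: subject-specific distances follow the diagonal-metric (INDSCAL) form, but the subject weights are drawn from a distribution rather than estimated individually. The lognormal assumption, dimensions, and names (X, W, weighted_distances) below are hypothetical illustrations of the model structure, not the paper's estimation algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: J stimuli in R latent dimensions, N subjects.
    J, R, N = 6, 2, 20
    X = rng.normal(size=(J, R))            # common stimulus configuration

    # Random-effects subject weights: rather than one free weight vector per
    # subject, assume (for illustration) a lognormal distribution, so only its
    # mean and covariance parameters enter the model.
    mu_w, sigma_w = np.zeros(R), 0.3 * np.eye(R)
    W = np.exp(rng.multivariate_normal(mu_w, sigma_w, size=N))   # N x R weights

    def weighted_distances(X, w):
        """Diagonal-metric (INDSCAL-type) distances for one subject's weight vector."""
        diff = X[:, None, :] - X[None, :, :]          # J x J x R coordinate differences
        return np.sqrt((w * diff ** 2).sum(axis=-1))  # J x J distance matrix

    D = np.stack([weighted_distances(X, w) for w in W])  # N x J x J subject distances
    ```

    Because the N weight vectors in W are draws from a parametric distribution, the number of model parameters stays fixed as subjects are added, which is the advantage the abstract highlights.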

    High-dimensional maximum marginal likelihood item factor analysis by adaptive quadrature

    Although the Bock–Aitkin likelihood-based estimation method for factor analysis of dichotomous item response data has important advantages over classical analysis of item tetrachoric correlations, a serious limitation of the method is its reliance on fixed-point Gauss-Hermite (G-H) quadrature in the solution of the likelihood equations and likelihood-ratio tests. When the number of latent dimensions is large, computational considerations require that the number of quadrature points per dimension be few. But with large numbers of items, the dispersion of the likelihood, given the response pattern, becomes so small that the likelihood cannot be accurately evaluated with the sparse fixed points in the latent space. In this paper, we demonstrate that substantial improvement in accuracy can be obtained by adapting the quadrature points to the location and dispersion of the likelihood surfaces corresponding to each distinct pattern in the data. In particular, we show that adaptive G-H quadrature, combined with mean and covariance adjustments at each iteration of an EM algorithm, produces an accurate, fast-converging solution with as few as two points per dimension. Evaluations of this method with simulated data are shown to yield accurate recovery of the generating factor loadings for models of up to eight dimensions. Unlike an earlier application of adaptive Gibbs sampling to this problem by Meng and Schilling, the simulations also confirm the validity of the present method in calculating likelihood-ratio chi-square statistics for determining the number of factors required in the model. Finally, we apply the method to a sample of real data from a test of teacher qualifications.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/43596/1/11336_2003_Article_1141.pd
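    The adaptation step amounts to centring and scaling the standard G-H grid to the posterior mode and curvature for each response pattern. The following one-dimensional sketch is only an illustration: the 2PL item parameters a, b, the single response pattern u, and the finite-difference curvature are hypothetical stand-ins, not the paper's multidimensional EM implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    # Hypothetical 2PL item parameters (slopes a, intercepts b) and one response pattern u.
    a = np.array([1.2, 0.8, 1.5, 1.0])
    b = np.array([-0.5, 0.3, 0.0, 0.8])
    u = np.array([1, 0, 1, 1])

    def log_integrand(theta):
        """log of P(u | theta) * prior(theta) for a single latent dimension."""
        p = 1.0 / (1.0 + np.exp(-(a * theta + b)))
        return np.sum(u * np.log(p) + (1 - u) * np.log1p(-p)) + norm.logpdf(theta)

    # Adapt to this pattern: locate the posterior mode and curvature (Laplace-type step).
    mode = minimize_scalar(lambda t: -log_integrand(t)).x
    eps = 1e-4
    hess = (log_integrand(mode + eps) - 2 * log_integrand(mode) + log_integrand(mode - eps)) / eps**2
    sd = 1.0 / np.sqrt(-hess)

    # Adaptive Gauss-Hermite: rescale the standard nodes to the mode/spread of this pattern.
    nodes, weights = np.polynomial.hermite.hermgauss(2)      # as few as two points
    theta_q = mode + np.sqrt(2.0) * sd * nodes
    marginal = np.sqrt(2.0) * sd * np.sum(
        weights * np.exp(nodes**2) * np.exp([log_integrand(t) for t in theta_q])
    )
    print(marginal)   # approximate marginal probability of the response pattern
    ```

    With fixed-point quadrature the same two nodes would sit at the prior's scale regardless of the pattern; adapting them per pattern is what keeps the approximation accurate as the per-pattern likelihood becomes sharply concentrated.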

    Application of a predictive distribution formula to Bayesian computation for incomplete data models

    We consider exact and approximate Bayesian computation in the presence of latent variables or missing data. Specifically, we explore the application of a posterior predictive distribution formula derived in Sweeting and Kharroubi (2003), which is a particular form of Laplace approximation, both as an importance function and as a proposal distribution. We show that this formula provides a stable importance function for use within poor man's data augmentation schemes and that it can also be used as a proposal distribution within a Metropolis-Hastings algorithm for models that are not analytically tractable. We illustrate both uses in the case of a censored regression model and a normal hierarchical model, with both normal and Student t distributed random effects. Although the predictive distribution formula is motivated by regular asymptotic theory, it is not necessary that the likelihood have a closed form or possess a local maximum.
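    The two roles described here (importance function and Metropolis-Hastings proposal) can be illustrated with a minimal sketch. The target log_post below is a hypothetical one-parameter toy, and the proposal is a generic Laplace-style normal approximation, not the Sweeting and Kharroubi predictive formula itself.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)

    # Hypothetical log-posterior that can be evaluated pointwise but not sampled directly.
    def log_post(theta):
        return -0.5 * (theta - 1.0) ** 2 - 0.1 * theta ** 4

    # Laplace-style approximation: normal centred at the mode with curvature-based scale.
    grid = np.linspace(-5, 5, 2001)
    mode = grid[np.argmax([log_post(t) for t in grid])]
    eps = 1e-4
    hess = (log_post(mode + eps) - 2 * log_post(mode) + log_post(mode - eps)) / eps**2
    scale = 1.0 / np.sqrt(-hess)

    # Use 1: independence Metropolis-Hastings with the approximation as proposal distribution.
    theta, draws = mode, []
    for _ in range(5000):
        prop = rng.normal(mode, scale)
        log_alpha = (log_post(prop) - log_post(theta)
                     + norm.logpdf(theta, mode, scale) - norm.logpdf(prop, mode, scale))
        if np.log(rng.uniform()) < log_alpha:
            theta = prop
        draws.append(theta)

    # Use 2: the same approximation as an importance function for posterior expectations.
    samples = rng.normal(mode, scale, size=5000)
    w = np.exp([log_post(s) - norm.logpdf(s, mode, scale) for s in samples])
    posterior_mean = np.sum(w * samples) / np.sum(w)
    ```

    The stability the abstract refers to depends on how well the approximating distribution tracks the tails of the true posterior; the sketch above only shows where the approximation plugs into each scheme.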