
    Multilevel IRT Modeling in Practice with the Package mlirt

    Variance component models are generally accepted for the analysis of hierarchically structured data. A shortcoming is that outcome variables are still treated as measured without error. Unreliable variables bias the estimates of the other model parameters. The variability of the relationships across groups and the group effects on individuals' outcomes differ substantially when the measurement error in the dependent variable is taken into account. The multilevel model can be extended to handle measurement error using an item response theory (IRT) model, leading to a multilevel IRT model. This extended multilevel model is particularly suitable for the analysis of educational response data where students are nested in schools and schools are nested within cities/countries.
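The extension described above can be sketched as a data-generating process: a two-level latent outcome (school effect plus student residual) that is observed only through Rasch-type item responses rather than an error-free score. This is a minimal numpy illustration of the model structure, not the mlirt package's actual interface; all sample sizes, variance components, and item parameters are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools, n_students, n_items = 20, 30, 10   # illustrative sizes

# Level-2 (school) random effects u_j and level-1 (student) abilities theta_ij;
# the variance components are assumptions for this sketch.
school_effect = rng.normal(0.0, 0.5, size=n_schools)
theta = school_effect[:, None] + rng.normal(0.0, 1.0, size=(n_schools, n_students))

# Rasch measurement model: the latent outcome is observed only through
# dichotomous item responses, replacing an error-free dependent variable.
difficulty = rng.normal(0.0, 1.0, size=n_items)
logits = theta.reshape(-1, 1) - difficulty[None, :]
prob = 1.0 / (1.0 + np.exp(-logits))
responses = (rng.random(prob.shape) < prob).astype(int)

print(responses.shape)  # (600, 10): students x items
```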

    On the Returns to Occupational Qualification in Terms of Subjective and Objective Variables: A GEE-type Approach to the Estimation of Two-Equation Panel Models

    This article proposes an estimation approach for panel models with mixed continuous and ordered categorical outcomes, based on generalized estimating equations for the mean and pseudo-score equations for the covariance parameters. A numerical study suggests that efficiency of the mean parameter estimators can be gained by using individual covariance matrices in the estimating equations for the mean parameters. The approach is applied to estimate the returns to occupational qualification in terms of income and perceived job security over a nine-year period, based on the German Socio-Economic Panel (SOEP). To compensate for missing data, a combined multiple imputation/weighting approach is adopted.
    Keywords: generalized estimating equations, mean and covariance model, multiple imputation, pseudo-score equations, status inconsistency, weighting
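The mean-parameter estimating equations can be sketched for the simplest case: an identity link with a fixed exchangeable working correlation. This is a hedged numpy illustration of the GEE idea only, not the article's mixed continuous/ordinal estimator with pseudo-score covariance equations; all dimensions and parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
n, t, p = 50, 4, 2   # subjects, time points, covariates (illustrative sizes)

X = rng.normal(size=(n, t, p))
beta_true = np.array([1.0, -0.5])
# Correlated within-subject errors with an exchangeable structure (rho = 0.3).
R = 0.3 * np.ones((t, t)) + 0.7 * np.eye(t)
y = X @ beta_true + rng.multivariate_normal(np.zeros(t), R, size=n)

def gee_mean(X, y, R_work, iters=10):
    """Solve the GEE for the mean parameters under an identity link,
    holding the working correlation fixed."""
    p = X.shape[2]
    beta = np.zeros(p)
    Vinv = np.linalg.inv(R_work)
    for _ in range(iters):
        A = np.zeros((p, p))
        b = np.zeros(p)
        for Xi, yi in zip(X, y):
            resid = yi - Xi @ beta
            A += Xi.T @ Vinv @ Xi        # sum_i X_i' V_i^{-1} X_i
            b += Xi.T @ Vinv @ resid     # sum_i X_i' V_i^{-1} (y_i - X_i beta)
        beta = beta + np.linalg.solve(A, b)
    return beta

beta_hat = gee_mean(X, y, R)
```

Using each subject's own covariance matrix in `Vinv`, as the abstract suggests, is a one-line change to this loop.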

    ltm: An R Package for Latent Variable Modeling and Item Response Analysis

    The R package ltm has been developed for the analysis of multivariate dichotomous and polytomous data using latent variable models, under the Item Response Theory approach. For dichotomous data the Rasch, the Two-Parameter Logistic, and Birnbaum's Three-Parameter models have been implemented, whereas for polytomous data Samejima's Graded Response model is available. Parameter estimates are obtained under marginal maximum likelihood using the Gauss-Hermite quadrature rule. The capabilities and features of the package are illustrated using two real data examples.
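The graded response model that ltm implements for polytomous data can be written out directly: each category probability is a difference of cumulative logistic curves. This is a minimal numpy sketch of the model's mathematics, not ltm's R interface, and the parameter values are hypothetical.

```python
import numpy as np

def grm_probs(theta, a, thresholds):
    """Samejima graded-response model: P(Y = k) for k = 0..K as
    differences of cumulative logits. `thresholds` must be increasing."""
    thresholds = np.asarray(thresholds, dtype=float)
    # P(Y >= k) for k = 1..K, padded with the boundary terms 1 and 0.
    cum = 1.0 / (1.0 + np.exp(-a * (theta - thresholds)))
    cum = np.concatenate(([1.0], cum, [0.0]))
    return cum[:-1] - cum[1:]

# Hypothetical item: discrimination 1.5, three thresholds, person at theta = 0.
p = grm_probs(theta=0.0, a=1.5, thresholds=[-1.0, 0.0, 1.0])
```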

    Bayesian analysis of stochastic constraints in structural equation model with polytomous variables in several groups.

    by Tung-lok Ng. Thesis (M.Phil.)--Chinese University of Hong Kong, 1990. Bibliography: leaves 57-59.
    Chapter 1 --- Introduction --- p.1
    Chapter 2 --- Full Maximum Likelihood Estimation of the General Model --- p.4
    2.1 --- Introduction --- p.4
    2.2 --- Model --- p.4
    2.3 --- Identification of the Model --- p.5
    2.4 --- Maximum Likelihood Estimation --- p.7
    2.5 --- Computational Procedure --- p.12
    2.6 --- Tests of Hypothesis --- p.13
    2.7 --- Example --- p.14
    Chapter 3 --- Bayesian Analysis of Stochastic Prior Information --- p.17
    3.1 --- Introduction --- p.17
    3.2 --- Bayesian Analysis of the General Model --- p.18
    3.3 --- Computational Procedure --- p.22
    3.4 --- Test the Compatibility of the Prior Information --- p.24
    3.5 --- Example --- p.25
    Chapter 4 --- Simulation Study --- p.27
    4.1 --- Introduction --- p.27
    4.2 --- Simulation 1 --- p.27
    4.3 --- Simulation 2 --- p.30
    4.4 --- Summary and Discussion --- p.31
    Chapter 5 --- Concluding Remarks --- p.33
    Tables
    References --- p.5

    Logistic Regression and Item Response Theory: Estimating Item and Ability Parameters by Using Logistic Regression in IRT.

    The purpose of this study was to investigate the utility of logistic regression procedures as a means of estimating item and ability parameters in unidimensional and multidimensional item response theory (IRT) models for dichotomous and polytomous data. Unlike the IRT models, a single logistic regression model can be easily extended from unidimensional to multidimensional models and from dichotomous to polytomous response data, and assumptions such as equal slopes or a zero intercept are unnecessary. Based on the findings of this study, the following preliminary conclusions can be drawn. Item and ability parameters in IRT can be estimated by using logistic regression models instead of the IRT models currently used. The item characteristic curve, the probability of a correct answer, and related concepts can be interpreted the same way in the framework of logistic regression as in the framework of IRT. Correlation coefficients between item and ability parameter estimates obtained from the logistic regression models and those obtained from the IRT models are almost perfect, which means the parameters can be equivalently estimated with logistic regression models. Item and ability parameter estimates of the Rasch model can be equivalently estimated by the logistic regression model with all slopes (β) fixed at 1, i.e., by an intercept-only model. Item difficulty in IRT is equal to the median effect level in the logistic regression model. Sample size effects on the logistic regression parameter estimates can be investigated in the same way as for the IRT models: when sample size increases, the invariance properties of the logistic regression models increase and the goodness-of-fit statistics become consistent. Test length effects can be investigated likewise: when test length increases, the invariance properties of the logistic regression models increase and the goodness-of-fit statistics become consistent. The logistic regression models are more flexible than IRT models, and they can be easily extended from dichotomous to polytomous data.
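The claimed equivalence, including the identity between IRT item difficulty and the logistic-regression median effect level (ED50), can be checked algebraically: matching the 2PL form against the logistic-regression form gives slope c1 = a, intercept c0 = -a·b, and hence b = -c0/c1. A small numpy sketch with hypothetical parameter values:

```python
import numpy as np

# 2PL IRT:               P = 1 / (1 + exp(-a * (theta - b)))
# Logistic regression:   P = 1 / (1 + exp(-(c0 + c1 * theta)))
# Matching coefficients: c1 = a, c0 = -a * b, so b = -c0 / c1 (the ED50).
a, b = 1.2, 0.5                      # hypothetical discrimination and difficulty
c1, c0 = a, -a * b                   # equivalent logistic-regression coefficients

theta = np.linspace(-3, 3, 601)
p_irt = 1 / (1 + np.exp(-a * (theta - b)))
p_logreg = 1 / (1 + np.exp(-(c0 + c1 * theta)))

ed50 = -c0 / c1                      # ability level where P = 0.5
```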

    Apples and Oranges? The Problem of Equivalence in Comparative Research

    Researchers in comparative research are increasingly relying on individual level data to test theories involving unobservable constructs like attitudes and preferences. Estimation is carried out using large-scale cross-national survey data providing responses from individuals living in widely varying contexts. This strategy rests on the assumption of equivalence, that is, no systematic distortion in response behavior of individuals from different countries exists. However, this assumption is frequently violated with rather grave consequences for comparability and interpretation. I present a multilevel mixture ordinal item response model with item bias effects that is able to establish equivalence. It corrects for systematic measurement error induced by unobserved country heterogeneity, and it allows for the simultaneous estimation of structural parameters of interest.
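The item bias this model corrects for can be illustrated as a country-specific shift in an item's location: two respondents with identical latent ability get different success probabilities, which is exactly the distortion that breaks cross-national comparability. A minimal numpy sketch with hypothetical parameters, not the paper's model specification:

```python
import numpy as np

def biased_item_prob(theta, a, b, bias):
    """Response probability when an item's location b is shifted by a
    country-specific bias term; bias = 0 recovers the unbiased item.
    All parameter values here are hypothetical."""
    return 1.0 / (1.0 + np.exp(-a * (theta - (b + bias))))

# Same ability (theta = 0), same item, different country-level bias.
p_home = biased_item_prob(theta=0.0, a=1.0, b=0.0, bias=0.0)
p_biased = biased_item_prob(theta=0.0, a=1.0, b=0.0, bias=0.5)
```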

    Bayesian approach to structural equation models for ordered categorical and dichotomous data

    Structural equation modeling (SEM) is a statistical methodology commonly used to study the relationships between manifest variables and latent variables. In analysing ordered categorical and dichotomous data, the basic SEM assumption that the variables come from a continuous normal distribution is clearly violated, so a rigorous analysis that takes the discrete nature of the variables into account is necessary. A better approach to such discrete data is to treat them as observations arising from a hidden continuous normal distribution with a threshold specification. A censored normal distribution and a truncated normal distribution, each in interval, right, and left forms (the latter with known parameters), are used to handle ordered categorical and dichotomous data in Bayesian non-linear SEMs. The truncated normal distribution is used to handle non-normal (ordered categorical and dichotomous) covariates in the structural model. Two types of thresholds, with equal and with unequal spacing, are used in this research. A Bayesian approach (the Gibbs sampling method) is applied to estimate the parameters: the SEM treats the latent variables as missing data and imputes them within the Markov chain Monte Carlo (MCMC) simulation, yielding the full posterior distribution through data augmentation. Examples using simulated data, a case study, and a bootstrapping method are presented to illustrate these methods. In addition to Bayesian estimation, this research provides standard error (SE) estimates, highest posterior density (HPD) intervals, and a goodness-of-fit comparison using the Deviance Information Criterion (DIC). In terms of parameter estimation and goodness-of-fit statistics, the results with a censored normal distribution are better than those with a truncated normal distribution, for both equal and unequal threshold spacings. Furthermore, unequal threshold spacing gives lower values than equal spacing for the interval and left censored and truncated normal distributions, whereas equal spacing gives lower values than unequal spacing for the right censored and truncated normal distributions. The bootstrapping results are better than the real-data results in terms of SE and DIC. The convergence results showed that dichotomous data need more iterations to converge than ordered categorical data.
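The threshold specification described above can be illustrated directly: a hidden continuous normal variable is cut into ordered categories, with dichotomous data as the single-threshold special case. A minimal numpy sketch with assumed threshold values (not those used in the thesis):

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.normal(size=1000)               # hidden continuous normal variable

# Equal-spaced and unequal-spaced thresholds (illustrative values only).
equal_cuts = np.array([-1.0, 0.0, 1.0])
unequal_cuts = np.array([-1.5, -0.5, 1.2])

# Observed ordered categories 0..3 from the latent variable.
y_equal = np.digitize(z, equal_cuts)
y_unequal = np.digitize(z, unequal_cuts)

# Dichotomous data is the single-threshold special case.
y_binary = (z > 0.0).astype(int)
```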

    THE EFFECTS OF MISSING DATA TREATMENT ON PERSON ABILITY ESTIMATES USING IRT MODELS

    Unplanned missing responses are common in surveys and tests, including large-scale assessments. There has been an ongoing debate on how missing responses should be handled, and some approaches are preferred over others, especially in the context of item response theory (IRT) models, where examinees’ abilities are normally estimated with the missing responses either ignored or treated as incorrect. Most studies that have explored the performance of missing data handling approaches have used simulated data. This study uses the SERCE (UNESCO, 2006) dataset and its missingness pattern to evaluate the performance of three approaches: treating omitted responses as incorrect, midpoint imputation, and multiple imputation with and without auxiliary variables. Using the Rasch and 2PL models, the results showed that treating omitted responses as incorrect had a reduced average error in the estimation of ability but tended to underestimate the examinee’s ability. Multiple imputation with and without auxiliary variables performed similarly, so the use of auxiliary variables may not harm the estimation, but it can become an unnecessary burden during the imputation process. Midpoint imputation did not differ much from multiple imputation in its performance and thus should be preferred over the latter for practical reasons. The main implication is that SERCE might have underestimated students’ ability. Limitations and further directions are discussed. Adviser: R. J. De Ayal
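The two simplest treatments compared in the study can be sketched on a toy response matrix; the inequality at the end shows why omitted-as-incorrect can only lower a raw score, consistent with the underestimation reported above. All data here are simulated for illustration, not drawn from SERCE.

```python
import numpy as np

rng = np.random.default_rng(3)
resp = rng.integers(0, 2, size=(5, 8)).astype(float)
resp[rng.random(resp.shape) < 0.2] = np.nan   # unplanned omissions

# Treating omitted as incorrect: every missing response becomes 0.
as_incorrect = np.where(np.isnan(resp), 0.0, resp)

# Midpoint imputation: every missing response becomes 0.5
# (the midpoint between incorrect and correct).
midpoint = np.where(np.isnan(resp), 0.5, resp)

# Raw scores under the two treatments: omitted-as-incorrect can never
# exceed midpoint imputation for the same examinee.
score_incorrect = as_incorrect.sum(axis=1)
score_midpoint = midpoint.sum(axis=1)
```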