
    Modelling Ordinal Responses with Uncertainty: A Hierarchical Marginal Model with Latent Uncertainty Components

    In responding to rating questions, an individual may answer either according to his/her knowledge/awareness or according to his/her level of indecision/uncertainty, the latter typically driven by a response style. As ignoring this dual behaviour may lead to misleading results, we define a multivariate model for ordinal rating responses by introducing, for every item, a binary latent variable that discriminates aware from uncertain responses. Some independence assumptions among latent and observable variables characterize the uncertain behaviour and make the model easier to interpret. Uncertain responses are modelled by specifying probability distributions that can depict the different response styles characterizing uncertain raters. A marginal parametrization allows a simple and direct interpretation of the parameters in terms of association among aware responses and their dependence on explanatory factors. The effectiveness of the proposed model is demonstrated through an application to real data and supported by a Monte Carlo study.
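    As a minimal numerical sketch of the mixture idea described above, the marginal category probabilities combine an aware-response distribution with an uncertainty distribution such as the uniform. All values below (the mixing weight and the aware probabilities) are made up for illustration, not taken from the paper:

```python
# Two-component mixture for a single ordinal item (hypothetical values):
# with probability pi_aware the rater answers according to a substantive
# distribution; otherwise the answer follows a response-style
# distribution, here uniform over the m categories.
m = 5                                      # number of rating categories
pi_aware = 0.8                             # probability of an aware response
p_aware = [0.05, 0.10, 0.20, 0.40, 0.25]   # substantive category probabilities
p_uncertain = [1.0 / m] * m                # uniform style for uncertain raters

p_marginal = [pi_aware * a + (1 - pi_aware) * u
              for a, u in zip(p_aware, p_uncertain)]
print([round(p, 3) for p in p_marginal])
```

    Other response styles (e.g., extreme or midpoint responding) would simply replace the uniform with a suitably peaked distribution over the categories.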

    ltm: An R Package for Latent Variable Modeling and Item Response Analysis

    The R package ltm has been developed for the analysis of multivariate dichotomous and polytomous data using latent variable models, under the Item Response Theory approach. For dichotomous data the Rasch, the Two-Parameter Logistic, and Birnbaum's Three-Parameter models have been implemented, whereas for polytomous data Samejima's Graded Response model is available. Parameter estimates are obtained under marginal maximum likelihood using the Gauss-Hermite quadrature rule. The capabilities and features of the package are illustrated using two real data examples.
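    The marginal maximum likelihood machinery the package relies on can be sketched in a few lines. The Python snippet below (illustrative item parameters, not ltm code or output) approximates the marginal likelihood of one Two-Parameter Logistic response pattern by integrating the latent trait out with Gauss-Hermite quadrature:

```python
import numpy as np

# Hypothetical 2PL item parameters and one observed response pattern.
a = np.array([1.2, 0.8, 1.5])      # discriminations
b = np.array([-0.5, 0.0, 1.0])     # difficulties
y = np.array([1, 1, 0])            # 1 = endorsed/correct, 0 = not

# Gauss-Hermite nodes/weights, rescaled for a standard normal trait:
# integral f(t) N(t; 0, 1) dt ~ sum_k w_k/sqrt(pi) * f(sqrt(2) * x_k).
nodes, weights = np.polynomial.hermite.hermgauss(21)
theta = nodes * np.sqrt(2.0)
w = weights / np.sqrt(np.pi)

p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))       # P(y=1 | theta)
lik_given_theta = np.prod(np.where(y == 1, p, 1.0 - p), axis=1)
marginal_lik = float(np.sum(w * lik_given_theta))
print(marginal_lik)
```

    Summing the log of this quantity over all observed response patterns gives the marginal log-likelihood that is maximized over the item parameters.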

    Getting caught-up in the process: Does it really matter?

    Likert items are the most commonly used item type for measuring attitudes and beliefs. However, responses from Likert items are often plagued with construct-irrelevant variance due to response style behavior. In other words, variability in Likert-item scores can be parsed into: 1) variance pertinent to the construct or trait of interest, and 2) variance irrelevant to the construct or trait of interest. Multidimensional Item Response Theory (MIRT) is an increasingly common modeling approach for parsing out information regarding the response style traits and the trait of interest. These MIRT approaches are categorized into threshold-based approaches and response process approaches. An increasingly common response process approach is the IRTree family of models. Often, researchers describe IRTree models as superior to other MIRT methods (e.g., threshold-based approaches). However, IRTree models assume a particular response process. I investigated the effects of assuming an incorrect response process on person trait recovery, specifically recovery of the trait of interest, via a four-factor simulation study in which the factors were the assumed response process, the true response process, the correlation between traits, and scale length. The results indicated that assuming an incorrect response process does impact person trait recovery. In some conditions, the effect of assuming an incorrect response process on trait recovery depends on other factors such as scale length. Furthermore, the results indicate that the response process models had better person trait recovery than a threshold-based model, even when the response process model was incorrectly specified.
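    To make the "particular response process" concrete: a common IRTree decomposition routes each 5-point Likert response through three binary pseudo-items (midpoint, direction, extremity), each governed by its own latent trait. The mapping below is one illustrative tree, not necessarily the one assumed in this dissertation:

```python
# One common IRTree mapping of a 5-point Likert response to three binary
# pseudo-items: (midpoint?, agree-direction?, extreme?). None marks a
# node that is never reached for that response path.
def irtree_pseudo_items(response):
    mapping = {
        1: (0, 0, 1),        # disagree branch, extreme category
        2: (0, 0, 0),        # disagree branch, moderate category
        3: (1, None, None),  # midpoint chosen; later nodes unreached
        4: (0, 1, 0),        # agree branch, moderate category
        5: (0, 1, 1),        # agree branch, extreme category
    }
    return mapping[response]

print([irtree_pseudo_items(r) for r in (1, 3, 5)])
```

    Fitting the IRTree model then amounts to fitting binary IRT models to these pseudo-items; assuming the wrong tree mis-specifies which trait drives which node.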

    The Mixture Distribution Polytomous Rasch Model Used to Account for Response Styles on Rating Scales: A Simulation Study of Parameter Recovery and Classification Accuracy

    Response styles exhibited in rating scale use have been recognized as an important source of systematic measurement bias in self-report assessment. People with the same amount of a latent trait may receive biased test scores due to the construct-irrelevant effect of response styles. The mixture polytomous Rasch model has been proposed as a tool to deal with response style problems. This model can be used to classify respondents with different response styles into different latent classes and provides person trait estimates that have been corrected for the effect of a response style. This study investigated how well the mixture partial credit model (MPCM) recovered model parameters under various testing conditions. Item responses that characterized extreme response style (ERS), middle-category response style (MRS), and acquiescent response style (ARS) on a 5-category Likert scale, as well as ordinary response style (ORS), which does not involve distorted rating scale use, were generated. The study results suggested that ARS respondents could be classified almost perfectly apart from other response-style respondents, while the distinction between MRS and ORS respondents was most difficult, followed by the distinction between ERS and ORS respondents. Classification was more difficult when the distorted response styles were present in small proportions within the sample. Ten items and a sample size of 3,000 appeared to warrant reasonable threshold and person parameter estimation under the simulated conditions in this study. As the structure of the mixture of response styles became more complex, increased sample size, test length, and balanced mixing proportions were needed in order to achieve the same level of recovery accuracy. Misclassification impacted the overall accuracy of person trait estimation. BIC was found to be the most effective data-model fit statistic for identifying the correct number of latent classes under this modeling approach. The model-based correction of score bias was explored with up to four different response-style latent classes. Problems with the estimation of the model, including non-convergence, boundary threshold estimates, and label switching, were discussed.
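    Within one latent class, the MPCM reduces to an ordinary partial credit model, whose category probabilities are ratios of exponentiated cumulative sums of (theta minus threshold). The sketch below uses hypothetical threshold values for a 5-category item; in the MPCM each class would carry its own threshold set:

```python
import math

# Partial credit model category probabilities for one item and one class.
# P(X = x | theta) is proportional to exp(sum_{j <= x} (theta - delta_j)),
# with the empty sum for category 0 contributing exp(0) = 1.
def pcm_probs(theta, thresholds):
    numerators = [1.0]
    s = 0.0
    for d in thresholds:
        s += theta - d
        numerators.append(math.exp(s))
    total = sum(numerators)
    return [n / total for n in numerators]

# Illustrative thresholds (not estimates from the study).
probs = pcm_probs(theta=0.5, thresholds=[-1.0, -0.3, 0.4, 1.2])
print([round(p, 3) for p in probs])
```

    An ERS class, for example, would be represented by thresholds that push mass toward the two extreme categories.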

    Bayesian Estimation of Mixture IRT Models using NUTS

    The No-U-Turn Sampler (NUTS) is a relatively new Markov chain Monte Carlo (MCMC) algorithm that avoids the random walk behavior that common MCMC algorithms such as Gibbs sampling or Metropolis-Hastings usually exhibit. Because NUTS can efficiently explore the entire space of the target distribution, the sampler converges to high-dimensional target distributions more quickly than other MCMC algorithms and is hence less computationally expensive. The focus of this study is on applying NUTS to one of the more complex IRT models, specifically the two-parameter mixture IRT (Mix2PL) model, and further on examining its performance in estimating model parameters when sample size, test length, and number of latent classes are manipulated. The results indicate that, overall, NUTS performs well in recovering model parameters. However, the recovery of the class membership of individual persons is not satisfactory for the three-class conditions. Also, the results indicate that WAIC performs better than LOO in recovering the number of latent classes, in terms of the proportion of the time the correct model was selected as the best-fitting model. However, when the effective number of parameters was also considered in selecting the best-fitting model, both fully Bayesian fit indices performed equally well. In addition, the results suggest that when multiple latent classes exist, using either of the fully Bayesian fit indices (WAIC or LOO) would not select the conventional IRT model. On the other hand, when all examinees came from a single unified population, fitting MixIRT models using NUTS causes problems in convergence.
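    The observed-data likelihood that a Mix2PL model assigns to a response pattern marginalizes over latent class membership, with class-specific item parameters. A toy two-class, two-item evaluation at a fixed trait value (every number invented for illustration, not from the study) looks like:

```python
import math

# Two-parameter logistic item response probability.
def p2pl(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

mix = [0.6, 0.4]                    # hypothetical class proportions
a = [[1.0, 1.4], [0.7, 1.1]]        # a[class][item]: discriminations
b = [[-0.2, 0.5], [0.3, -0.4]]      # b[class][item]: difficulties
y = [1, 0]                          # observed response pattern
theta = 0.0                         # fixed trait value for the sketch

# Sum over classes of (class weight) * (within-class pattern likelihood).
lik = 0.0
for g, pi_g in enumerate(mix):
    lg = 1.0
    for j, yj in enumerate(y):
        p = p2pl(theta, a[g][j], b[g][j])
        lg *= p if yj == 1 else 1.0 - p
    lik += pi_g * lg
print(lik)
```

    In the fully Bayesian treatment of the study, NUTS samples the trait values, item parameters, and class proportions jointly rather than fixing them as here.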

    A flexible approach to modelling over‐, under‐ and equidispersed count data in IRT: the Two‐Parameter Conway–Maxwell–Poisson model

    Several psychometric tests and self-reports generate count data (e.g., divergent thinking tasks). The most prominent count data item response theory model, the Rasch Poisson Counts Model (RPCM), is limited in applicability by two restrictive assumptions: equal item discriminations and equidispersion (conditional mean equal to conditional variance). Violations of these assumptions lead to impaired reliability and standard error estimates. Previous work generalized the RPCM but maintained some limitations. The two-parameter Poisson counts model allows for varying discriminations but retains the equidispersion assumption. The Conway–Maxwell–Poisson Counts Model allows for modelling over- and underdispersion (conditional mean less than and greater than conditional variance, respectively) but still assumes constant discriminations. The present work introduces the Two-Parameter Conway–Maxwell–Poisson (2PCMP) model which generalizes these three models to allow for varying discriminations and dispersions within one model, helping to better accommodate data from count data tests and self-reports. A marginal maximum likelihood method based on the EM algorithm is derived. An implementation of the 2PCMP model in R and C++ is provided. Two simulation studies examine the model's statistical properties and compare the 2PCMP model to established models. Data from divergent thinking tasks are reanalysed with the 2PCMP model to illustrate the model's flexibility and ability to test assumptions of special cases. Correction for this article: https://doi.org/10.1111/bmsp.1231
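    The dispersion flexibility comes from the Conway–Maxwell–Poisson probability mass function, whose normalizing constant is an infinite series that must be truncated in practice. A small sketch (illustrative only, not the paper's R/C++ implementation), computed on the log scale for numerical stability:

```python
import math

# CMP pmf: P(Y = y) = lam^y / (y!)^nu / Z(lam, nu), with
# Z(lam, nu) = sum_{j >= 0} lam^j / (j!)^nu truncated at `truncation`.
# nu < 1 gives overdispersion, nu = 1 the Poisson, nu > 1 underdispersion.
def cmp_pmf(y, lam, nu, truncation=200):
    log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1)
                 for j in range(truncation)]
    hi = max(log_terms)  # log-sum-exp to avoid overflow
    log_z = hi + math.log(sum(math.exp(t - hi) for t in log_terms))
    return math.exp(y * math.log(lam) - nu * math.lgamma(y + 1) - log_z)

# With nu = 1 the CMP reduces to an ordinary Poisson distribution.
print(cmp_pmf(2, lam=3.0, nu=1.0))
```

    The 2PCMP model then lets both the rate and the dispersion parameter vary by item, alongside item discriminations.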

    Extending an IRT mixture model to detect random responders on non-cognitive polytomously scored assessments

    This study represents an attempt to distinguish two classes of examinees – random responders and valid responders – on non-cognitive assessments in low-stakes testing. The majority of existing literature regarding the detection of random responders in low-stakes settings exists in regard to cognitive tests that are dichotomously scored. However, evidence suggests that random responding occurs on non-cognitive assessments, and as with cognitive measures, the data derived from such measures are used to inform practice. Thus, a threat to test score validity exists if examinees’ response selections do not accurately reflect their underlying level on the construct being assessed. As with cognitive tests, using data from measures in which students did not give their best effort could have negative implications for future decisions. Thus, there is a need for a method of detecting random responders on non-cognitive assessments that are polytomously scored. This dissertation provides an overview of existing techniques for identifying low-motivated or amotivated examinees within low-stakes cognitive testing contexts including motivation filtering, response time effort, and item response theory mixture modeling, with particular attention paid to an IRT mixture model referred to in this dissertation as the Random Responders model – Graded Response model (RRM-GRM). Two studies, a simulation and an applied study, were conducted to explore the utility of the RRM-GRM for detecting and accounting for random responders on non-cognitive instruments in low-stakes testing settings. The findings from the simulation study show considerable bias and RMSE in parameter estimates and bias in theta estimates when the proportion of random responders is greater than 5%. Use of the RRM-GRM with the same data sets provides parameter estimates with minimal to no bias and RMSE and theta estimates that are essentially bias free. 
    The applied study demonstrated that when fitting the RRM-GRM to authentic data, 5.6% of the responders were identified as random responders. Respondents classified as random responders were found to have higher odds of being male and of having lower scores on importance of the test, as well as lower average total scores on the UMUM-15 measure used in the study. Limitations of the RRM-GRM technique are discussed.
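    Classification in this kind of mixture model rests on the posterior probability that a response pattern came from the random-responder class. The sketch below reuses the 5.6% class proportion reported in the applied study but pairs it with an invented valid-class likelihood value, purely for illustration; in the RRM-GRM the valid-class likelihood would come from a graded response model:

```python
# Posterior probability of random responding for one response pattern,
# via Bayes' rule over the two latent classes.
pi_random = 0.056                  # class proportion, as in the applied study
m, n_items = 5, 10                 # 5-category scale, 10 items (assumed)
lik_random = (1.0 / m) ** n_items  # uniform responding over the categories
lik_valid = 3.2e-6                 # hypothetical GRM likelihood of the pattern

post_random = (pi_random * lik_random) / (
    pi_random * lik_random + (1 - pi_random) * lik_valid)
print(round(post_random, 4))
```

    Patterns whose posterior exceeds a chosen cut-off would be flagged as random responders and down-weighted or removed before estimating the trait.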

    Expectation Propagation for Poisson Data

    The Poisson distribution arises naturally when dealing with data involving counts, and it has found many applications in inverse problems and imaging. In this work, we develop an approximate Bayesian inference technique based on expectation propagation for approximating the posterior distribution formed from the Poisson likelihood function and a Laplace-type prior distribution, e.g., the anisotropic total variation prior. The approach iteratively yields a Gaussian approximation, and at each iteration, it updates the Gaussian approximation to one factor of the posterior distribution by moment matching. We derive explicit update formulas in terms of one-dimensional integrals, and also discuss stable and efficient quadrature rules for evaluating these integrals. The method is showcased on two-dimensional PET images. (Comment: 25 pages; to be published in Inverse Problems.)
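    The moment-matching step at the heart of expectation propagation can be illustrated in one dimension: approximate the tilted distribution (Gaussian cavity times a Poisson likelihood factor) by a Gaussian with the same mean and variance. The sketch below uses a log-link intensity and brute-force grid quadrature rather than the paper's tailored one-dimensional rules, and all inputs are illustrative:

```python
import math

# Match the mean and variance of the tilted distribution
#   q(x) ~ N(x; cavity_mean, cavity_var) * Poisson(y | exp(x))
# on a uniform grid spanning +/- `width` cavity standard deviations.
def moment_match(y, cavity_mean, cavity_var, grid=2000, width=8.0):
    s = math.sqrt(cavity_var)
    xs = [cavity_mean + s * (2 * width * k / grid - width)
          for k in range(grid + 1)]

    def tilted(x):
        gauss = math.exp(-0.5 * (x - cavity_mean) ** 2 / cavity_var)
        rate = math.exp(x)  # log-link intensity
        poisson = math.exp(y * x - rate - math.lgamma(y + 1))
        return gauss * poisson

    ws = [tilted(x) for x in xs]
    z = sum(ws)  # uniform spacing cancels in the normalized moments
    mean = sum(w * x for w, x in zip(ws, xs)) / z
    var = sum(w * (x - mean) ** 2 for w, x in zip(ws, xs)) / z
    return mean, var

mean, var = moment_match(y=3, cavity_mean=0.0, cavity_var=1.0)
print(round(mean, 3), round(var, 3))
```

    The observation pulls the mean toward log(3) and shrinks the variance below the cavity's, exactly the behaviour the EP update propagates back into the global Gaussian approximation.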