14 research outputs found

    Advancing Human Assessment: The Methodological, Psychological and Policy Contributions of ETS

    This book describes the extensive contributions made toward the advancement of human assessment by scientists from one of the world’s leading research institutions, Educational Testing Service. The book’s four major sections detail research and development in measurement and statistics, education policy analysis and evaluation, scientific psychology, and validity. Many of the developments presented have become de facto standards in educational and psychological measurement, including item response theory (IRT), linking and equating, differential item functioning (DIF), and educational surveys such as the National Assessment of Educational Progress (NAEP), the Programme for International Student Assessment (PISA), the Progress in International Reading Literacy Study (PIRLS), and the Trends in International Mathematics and Science Study (TIMSS). In addition to its comprehensive coverage of contributions to the theory and methodology of educational and psychological measurement and statistics, the book gives significant attention to ETS work in cognitive, personality, developmental, and social psychology, and to education policy analysis and program evaluation. The chapter authors are long-standing experts who provide broad coverage and thoughtful insights built upon decades of experience in research and best practices for measurement, evaluation, scientific psychology, and education policy analysis. Opening with a chapter on the genesis of ETS and closing with a synthesis of the enormously diverse set of contributions made over its 70-year history, the book is a useful resource for all interested in the improvement of human assessment.

    Keywords: Educational Testing Service (ETS); large-scale assessment; policy research; psychometrics; admissions test
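
    Of the methods the book covers, item response theory in particular lends itself to a compact illustration. The sketch below shows the standard two-parameter logistic (2PL) item response function; it is a generic textbook formulation rather than anything taken from the book, and the function name and example values are illustrative assumptions.

        import numpy as np

        def irt_2pl(theta, a, b):
            """Two-parameter logistic (2PL) item response function.

            theta : examinee ability (scalar or array)
            a     : item discrimination
            b     : item difficulty
            Returns the probability of a correct response, 1 / (1 + exp(-a * (theta - b))).
            """
            return 1.0 / (1.0 + np.exp(-a * (theta - b)))

        # Example: an item with discrimination 1.2 and difficulty 0.5,
        # evaluated at abilities -1, 0, and 1.
        print(irt_2pl(np.array([-1.0, 0.0, 1.0]), a=1.2, b=0.5))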

    Validity of the overclaiming technique as a method to account for response bias in self-assessment questions: analysis on the basis of the PISA 2012 data

    The presented work studies the validity of the overclaiming technique (OCT) as a measure of response (positivity) bias. The analyses had three main aims: a) to assess the method's utility for enhancing the predictive validity of self-reports by accounting for response biases, b) to investigate proposed mechanisms of overclaiming, and c) to expand the nomological network of the method by presenting a wide set of both individual-level and cluster-level (school) correlates. The results indicated that the OCT can be used to account for response biases in self-report data. Important differences regarding the use and interpretation of the different OCT scoring systems were found and discussed. Two systems, one based on signal detection theory (SDT) and the other on an item response theory (IRT) model, were proposed as viable scorings of the OCT. The choice between them is not trivial, as it influences the interpretation of results and the model specification. Three possible mechanisms of overclaiming were tested: a) motivated response bias (self-favouring bias, socially desirable responding), b) memory bias (overgeneralised knowledge or faulty memory control), and c) response styles and careless responding. The results indicated that all three mechanisms are plausible and that overclaiming is most probably a heterogeneous phenomenon with multiple causes. However, the analyses showed that one of the memory bias hypotheses, the overgeneralised knowledge account, does not hold and that there is much more evidence for the competing metacognitive account; that is, overclaiming is at least partially attributable to insufficient monitoring of one's own knowledge. Evidence for a relation between careless responding and overclaiming was also obtained, indicating that at least some overclaimed responses can be attributed to inattentive responding. The results on the relations between response styles and overclaiming were complicated; they warrant further studies, as they probably depend greatly on the technical details of the analysis, e.g. the response style definition and coding adopted. The analysed cluster-level covariates demonstrated that only a very limited portion of OCT variance can be ascribed to the school level of analysis. Among the individual-level correlates assessed, gender, socio-economic status, and locus of control proved to be significantly related to overclaiming: boys showed a higher overclaiming bias than girls, and students with an external locus of control were more biased in their self-reports than students with an internal locus of control. The work also comprises an analysis of the latent structure of the PISA OCT scale. The results evidenced a bifactor structure, with the general factor interpreted as math ability, while the two specific factors were given a tentative explanation centred on item difficulty (one specific factor emerged for easy items, one for hard items). These findings point to a multi-dimensional character of the OCT and a large role played by domain ability in OCT responding. Moreover, a latent class analysis (LCA) identified an "overclaiming" group among the participants, characterised by high overclaiming and an unwarrantedly high self-report profile regarding math-related abilities and social life; however, this group comprised only around 9% of the total sample. Implications of these findings are discussed, along with a theoretical integration and ideas for future studies using the OCT.
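
    The abstract contrasts SDT- and IRT-based scorings of the OCT. Below is a minimal sketch of the classical signal-detection scoring, in which claimed familiarity with real items counts as a hit and claimed familiarity with non-existent foils as a false alarm; the function name, the example data, and the 0.5 correction for extreme rates are illustrative assumptions, not the scoring used in the thesis or in PISA.

        import numpy as np
        from scipy.stats import norm

        def oct_sdt_scores(real_claims, foil_claims, correction=0.5):
            """Classical SDT scoring of an overclaiming questionnaire.

            real_claims : 0/1 array, 1 = claims familiarity with a real concept
            foil_claims : 0/1 array, 1 = claims familiarity with a non-existent foil
            Returns (accuracy d', bias c); a small correction keeps rates away from 0 and 1.
            """
            hit_rate = (real_claims.sum() + correction) / (real_claims.size + 2 * correction)
            fa_rate = (foil_claims.sum() + correction) / (foil_claims.size + 2 * correction)
            d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)           # knowledge accuracy
            bias_c = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))   # more negative = more overclaiming
            return d_prime, bias_c

        # Hypothetical respondent: claims 10 of 12 real math concepts and 2 of 3 foils.
        print(oct_sdt_scores(np.array([1] * 10 + [0] * 2), np.array([1, 1, 0])))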

    A further proposal to perform multiple imputation on a bunch of polytomous items based on latent class analysis

    This work advances an imputation procedure for categorical scales which relies on the results of Latent Class Analysis and Multiple Imputation. The procedure allows us to use the information stored in the joint multivariate structure of the data set and to take into account the uncertainty related to the true unobserved values. The accuracy of the results is validated in the Item Response Models framework by assessing the accuracy of the estimates of key parameters in a data set in which observations are simulated as Missing at Random. The sensitivity of the multiple imputation methods is assessed with respect to the following factors: the number of latent classes set up in the Latent Class Model and the rate of missing observations in each variable. The relative accuracy in estimation is assessed against the Multiple Imputation by Chained Equations (MICE) missing data handling method for categorical variables.
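
    The abstract does not spell out the procedure, but its core idea, drawing imputations for missing polytomous responses from the class-conditional distributions of a fitted latent class model, can be sketched as follows. The parameter names and toy numbers are illustrative assumptions; in practice the class weights and item-category probabilities would come from a fitted LCA, and the m completed data sets would each be analysed and pooled in the usual multiple-imputation way.

        import numpy as np

        rng = np.random.default_rng(0)

        def lca_impute(y, pi, p, m=5):
            """Multiple imputation of one polytomous response vector via a fitted LCA.

            y  : int array of length J; categories 0..C-1, -1 marks a missing response
            pi : latent class probabilities, shape (K,)
            p  : class-conditional item-category probabilities, shape (K, J, C)
            m  : number of imputed copies to draw
            Returns a list of m completed copies of y.
            """
            observed = y >= 0
            # Posterior class membership given the observed entries only.
            post = pi.copy()
            for j in np.where(observed)[0]:
                post *= p[:, j, y[j]]
            post /= post.sum()

            completed = []
            for _ in range(m):
                k = rng.choice(len(pi), p=post)                       # draw a latent class
                y_imp = y.copy()
                for j in np.where(~observed)[0]:
                    y_imp[j] = rng.choice(p.shape[2], p=p[k, j])      # draw the missing category
                completed.append(y_imp)
            return completed

        # Toy example: 2 classes, 3 items with 3 categories each; the second item is missing.
        pi = np.array([0.6, 0.4])
        p = np.array([[[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.5, 0.3, 0.2]],
                      [[0.1, 0.3, 0.6], [0.2, 0.3, 0.5], [0.2, 0.2, 0.6]]])
        y = np.array([0, -1, 2])
        print(lca_impute(y, pi, p, m=3))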

    Essentials of Business Analytics

    Subjective well-being in online and mixed educational settings
