
    Gender Fairness within the Force Concept Inventory

    Research on the test structure of the Force Concept Inventory (FCI) has largely ignored gender, and research on FCI gender effects (often reported as "gender gaps") has seldom interrogated the structure of the test. These rarely crossed streams of research leave open the possibility that the FCI may not be structurally valid across genders, particularly since many reported results come from calculus-based courses where 75% or more of the students are men. We examine the FCI considering both psychometrics and gender disaggregation (while acknowledging this as a binary simplification), and find several problematic questions whose removal decreases the apparent gender gap. We analyze three samples (total $N_{pre} = 5,391$, $N_{post} = 5,769$) looking for gender asymmetries using Classical Test Theory, Item Response Theory, and Differential Item Functioning. The combination of these methods highlights six items that appear substantially unfair to women and two items biased in favor of women. No single physical concept or prior experience unifies these questions, but they are broadly consistent with problematic items identified in previous research. Removing all significantly gender-unfair items halves the gender gap in the main sample in this study. We recommend that instructors using the FCI report the reduced-instrument score as well as the 30-item score, and that credit or other benefits to students not be assigned using the biased items.
    Comment: 18 pages, 3 figures, 5 tables; submitted to Phys. Rev. PE
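
    A common way to flag items like these is the Mantel-Haenszel procedure, which compares the odds of a correct answer for two groups after matching students on total score. The sketch below is a generic illustration of that procedure, not the exact analysis pipeline of the study above; the array names (item, total, group) and the group labels are hypothetical.

        import numpy as np

        def mantel_haenszel_dif(item, total, group, ref="M", focal="W"):
            """Mantel-Haenszel DIF check for one 0/1-scored item.

            item  : array of 0/1 responses to the item
            total : array of total test scores (the matching variable)
            group : array of group labels
            """
            num, den = 0.0, 0.0
            for k in np.unique(total):                          # score strata
                s = total == k
                A = np.sum(s & (group == ref) & (item == 1))    # reference, correct
                B = np.sum(s & (group == ref) & (item == 0))    # reference, incorrect
                C = np.sum(s & (group == focal) & (item == 1))  # focal, correct
                D = np.sum(s & (group == focal) & (item == 0))  # focal, incorrect
                N = A + B + C + D
                if N > 0:
                    num += A * D / N
                    den += B * C / N
            alpha_mh = num / den                # >1 means the item favors the reference group
            delta = -2.35 * np.log(alpha_mh)    # ETS delta scale
            return alpha_mh, delta

    On the ETS delta scale, items with an absolute delta of roughly 1.5 or more are conventionally treated as showing large DIF, which is the kind of flag that would mark an item as a removal candidate.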

    A new method for detecting differential item functioning in the Rasch model

    Differential item functioning (DIF) can lead to an unfair advantage or disadvantage for certain subgroups in educational and psychological testing. Therefore, a variety of statistical methods have been suggested for detecting DIF in the Rasch model. Most of these methods are designed for the comparison of pre-specified focal and reference groups, such as males and females. Latent class approaches, on the other hand, make it possible to detect previously unknown groups exhibiting DIF. However, this approach provides no straightforward interpretation of the groups with respect to person characteristics. Here we propose a new method for DIF detection based on model-based recursive partitioning that can be considered a compromise between those two extremes. With this approach it is possible to detect groups of subjects exhibiting DIF that are not pre-specified, but result from combinations of observed covariates. These groups are directly interpretable and can thus help to understand the psychological sources of DIF. The statistical background and construction of the new method are first introduced by means of an instructive example, and then applied to data from a general knowledge quiz and a teaching evaluation.
    Keywords: item response theory, IRT, Rasch model, differential item functioning, DIF, structural change, multidimensionality
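
    For readers less familiar with the notation, the Rasch model that the method builds on can be written as follows (this is the standard textbook formulation, not anything specific to this paper):

        P(X_{ij} = 1 \mid \theta_i, \beta_j) = \frac{\exp(\theta_i - \beta_j)}{1 + \exp(\theta_i - \beta_j)}

    Here \theta_i is the ability of person i and \beta_j the difficulty of item j; DIF is present when an item's difficulty differs between groups, i.e. \beta_j^{(g)} \neq \beta_j^{(g')}. The recursive-partitioning idea described in the abstract roughly amounts to fitting this model, testing the item parameters for instability along each observed covariate, splitting the sample where the instability is strongest, and repeating the procedure within each resulting subgroup.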

    Assessing probabilistic reasoning in verbal-numerical and graphical-pictorial formats: An evaluation of the psychometric properties of an instrument

    Research on the graphical facilitation of probabilistic reasoning has been characterised by the effort expended to identify valid assessment tools. The authors developed an assessment instrument to compare reasoning performance when problems were presented in verbal-numerical and graphical-pictorial formats. A sample of undergraduate psychology students (n=676) who had not yet developed statistical skills solved problems requiring probabilistic reasoning. They attended universities in Spain (n=127; 71.7% female) and Italy (n=549; 72.9% female). In Italy, 173 undergraduates solved these problems under time pressure; the remaining students solved them without time limits. Classical Test Theory (CTT) and Item Response Theory (IRT) were applied to assess the effect of the two formats and to evaluate criterion and discriminant validity. The instrument showed acceptable psychometric properties, providing preliminary evidence of validity.
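
    As a rough illustration of the CTT side of such an evaluation, the sketch below computes the usual item statistics (difficulty as proportion correct, discrimination as corrected item-total correlation) and Cronbach's alpha from a scored response matrix. The variable names are hypothetical and the code is not tied to the instrument described above.

        import numpy as np

        def ctt_item_stats(X):
            """X: (n_persons, n_items) matrix of 0/1 scored responses."""
            n_items = X.shape[1]
            total = X.sum(axis=1)
            difficulty = X.mean(axis=0)                      # proportion correct per item
            discrimination = np.array([
                np.corrcoef(X[:, j], total - X[:, j])[0, 1]  # corrected item-total correlation
                for j in range(n_items)
            ])
            # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)
            alpha = (n_items / (n_items - 1)) * (1 - X.var(axis=0, ddof=1).sum() / total.var(ddof=1))
            return difficulty, discrimination, alpha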

    Comparing Elo, Glicko, IRT, and Bayesian IRT Statistical Models for Educational and Gaming Data

    Statistical models used for estimating skill or ability levels often vary by field; however, their underlying mathematical models can be very similar. Differences in the underlying models can be due to the need to accommodate data with different formats and structures. As models from different fields grow in complexity, they can become applicable to a wider range of data types. Models applied to educational or psychological data have advanced to accommodate a wide range of data formats, including increased estimation accuracy with sparsely populated data matrices. Conversely, the field of online gaming has expanded over the last two decades to include more complex statistical models that provide real-time game matching based on ability estimates. It is useful to see how statistical models from the educational and gaming fields compare, as different datasets may benefit from different ability estimation procedures. This study compared statistical models typically used in game matchmaking systems (Elo, Glicko) to models used in psychometric modeling (item response theory and Bayesian item response theory) using both simulated and real data under a variety of conditions. Results indicated that conditions with small numbers of items or matches produced the most accurate skill estimates under the Bayesian IRT (item response theory) one-parameter logistic (1PL) model, regardless of whether educational or gaming data were used. This held true for all sample sizes with small numbers of items. However, the Elo and non-Bayesian IRT 1PL models came close to the Bayesian IRT 1PL model's estimates for both gaming and educational data. While the 2PL models were not accurate for the gaming study conditions, the IRT 2PL and Bayesian IRT 2PL models outperformed the 1PL models when 2PL educational data were generated with the larger sample size and item condition. Overall, the Bayesian IRT 1PL model appeared to be the best choice across the smaller sample and match size conditions.
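
    The two families of models compared here are closely related: both express the chance of success as a logistic function of the gap between a learner's ability and an item's (or opponent's) difficulty, but Elo updates its ratings online after every match, while IRT models are typically fit to the full response matrix. The sketch below shows both pieces side by side; the constants (K = 32, the 400-point Elo scale) are conventional defaults, not values taken from the study.

        import math

        def elo_expected(r_student, r_item):
            """Expected score of the student in a student-vs-item 'match'."""
            return 1.0 / (1.0 + 10 ** ((r_item - r_student) / 400.0))

        def elo_update(r_student, r_item, correct, K=32.0):
            """Online Elo update after one response (correct is 0 or 1)."""
            e = elo_expected(r_student, r_item)
            r_student += K * (correct - e)   # student rating rises after a correct answer
            r_item    -= K * (correct - e)   # item rating moves symmetrically
            return r_student, r_item

        def irt_1pl_prob(theta, b):
            """Rasch/1PL probability of a correct response."""
            return 1.0 / (1.0 + math.exp(-(theta - b)))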

    Integrating knowledge tracing and item response theory: A tale of two frameworks

    Traditionally, the assessment and learning science communities rely on different paradigms to model student performance. The assessment community uses Item Response Theory (IRT), which allows modeling different student abilities and problem difficulties, while the learning science community uses Knowledge Tracing, which captures skill acquisition. These two paradigms are complementary: IRT cannot be used to model student learning, while Knowledge Tracing assumes all students and problems are the same. Recently, two highly related models based on a principled synthesis of IRT and Knowledge Tracing were introduced. However, these two models were evaluated on different data sets, using different evaluation metrics and different ways of splitting the data into training and testing sets. In this paper we reconcile the models' results by presenting a unified view of the two models and by evaluating them under a common evaluation metric. We find that the two models are equivalent and differ only in their training procedure. Our results show that the combined IRT and Knowledge Tracing models offer the best of the assessment and learning sciences: high prediction accuracy like the IRT model, and the ability to model student learning like Knowledge Tracing.
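
    To make the Knowledge Tracing half of the synthesis concrete for readers coming from the IRT side, the sketch below shows the standard Bayesian Knowledge Tracing update for a single skill. This is the textbook formulation, not the specific combined IRT-plus-Knowledge-Tracing model evaluated in the paper, and the default parameter values are arbitrary.

        def bkt_update(p_mastery, correct, p_learn=0.1, p_guess=0.2, p_slip=0.1):
            """Update P(skill mastered) after observing one 0/1 response."""
            if correct:
                # posterior probability the skill was already mastered, given a correct answer
                post = p_mastery * (1 - p_slip) / (
                    p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess)
            else:
                post = p_mastery * p_slip / (
                    p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess))
            # chance of learning the skill between practice opportunities
            return post + (1 - post) * p_learn

    What the IRT side adds in the combined models is per-student ability and per-item difficulty, which is precisely the flexibility the abstract notes plain Knowledge Tracing lacks.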

    Technology and Testing

    From early answer sheets filled in with number 2 pencils, to tests administered by mainframe computers, to assessments wholly constructed by computers, it is clear that technology is changing the field of educational and psychological measurement. The numerous and rapid advances have immediate impact on test creators, assessment professionals, and those who implement and analyze assessments. This comprehensive new volume brings together leading experts on the issues posed by technological applications in testing, with chapters on game-based assessment, testing with simulations, video assessment, computerized test development, large-scale test delivery, model choice, validity, and error issues. Including an overview of existing literature and ground-breaking research, each chapter considers the technological, practical, and ethical considerations of this rapidly-changing area. Ideal for researchers and professionals in testing and assessment, Technology and Testing provides a critical and in-depth look at one of the most pressing topics in educational testing today

    New measurement paradigms

    This collection of New Measurement Paradigms papers represents a snapshot of the variety of measurement methods in use at the time of writing across several projects funded by the National Science Foundation (US) through its REESE and DR K–12 programs. All of the projects are developing and testing intelligent learning environments that seek to carefully measure and promote student learning, and the purpose of this collection of papers is to describe and illustrate the use of several measurement methods employed to achieve this. The papers are deliberately short because they are designed to introduce the methods in use, not to serve as a textbook chapter on each method. The New Measurement Paradigms collection is designed to serve as a reference point for researchers who are working on projects that create e-learning environments in which judgments must be made about students’ levels of knowledge and skills, or for those who are interested in doing so but have not yet delved into these methods.