Expected Classification Accuracy using the Latent Distribution
Rudner (2001, 2005) proposed a method for evaluating classification accuracy in tests based on item response theory (IRT). In this paper, a latent distribution method is developed. For comparison, both methods are applied to a set of real data from a state test. While the latent distribution method relaxes several of the assumptions needed to apply Rudner’s method, both approaches yield highly comparable results. A simplified approach for applying Rudner’s method and a short SPSS routine are presented. Accessed 13,137 times on https://pareonline.net from October 17, 2006 to December 31, 2019. For downloads from January 1, 2020 forward, please see the PlumX Metrics.
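The core of Rudner’s approach can be sketched as follows: assuming each examinee’s trait estimate is normally distributed around the true value with its reported standard error, the expected classification accuracy is the average probability that the true trait falls in the same category as the point estimate. The cut scores and examinee data below are hypothetical, and this is a minimal illustration rather than the paper’s SPSS routine.

```python
# Minimal sketch of Rudner-style expected classification accuracy,
# assuming theta_hat ~ N(theta, se^2). All data here are hypothetical.
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def expected_accuracy(estimates, cuts):
    """Mean probability that each examinee's true theta falls in the
    same category as the point estimate.

    estimates: list of (theta_hat, se) pairs
    cuts:      ascending cut scores separating the categories
    """
    bounds = [float("-inf")] + list(cuts) + [float("inf")]
    total = 0.0
    for theta_hat, se in estimates:
        # Category implied by the point estimate
        k = sum(c <= theta_hat for c in cuts)
        lo, hi = bounds[k], bounds[k + 1]
        # P(lo < theta <= hi) under theta ~ N(theta_hat, se^2)
        total += norm_cdf((hi - theta_hat) / se) - norm_cdf((lo - theta_hat) / se)
    return total / len(estimates)

# Hypothetical examinees: (theta_hat, standard error), two cut scores
examinees = [(-1.2, 0.30), (0.1, 0.25), (0.9, 0.35), (2.0, 0.40)]
print(expected_accuracy(examinees, cuts=[0.0, 1.5]))
```

Estimates far from a cut score, or with small standard errors, contribute probabilities near 1; accuracy drops as estimates crowd the cut scores.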
Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing (CAT), the unselected items (i.e., responses that are not observed) remain missing at random even though the selected items (i.e., responses that are observed) are associated with the test taker’s latent trait being measured. In multidimensional adaptive testing (MAT), however, the missingness in the response data depends in part on the unobserved data, because items are selected using various types of information, including the covariance among the latent traits. This may ultimately lead to violations of MAR. This study evaluated the potential impact such a violation of MAR in MAT could have on the performance of FIML estimation. The results showed an increase in item parameter estimation errors when the MAT response data were used, and the magnitude of the impact differed depending on how items loaded on the multiple latent traits. Accessed 4,728 times on https://pareonline.net from May 19, 2014 to December 31, 2019.
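The mechanism FIML relies on can be sketched concretely: under MAR, each case contributes the likelihood of only its observed variables, so nothing about the missing entries needs to be modeled. The bivariate-normal example below is a hypothetical illustration of that casewise likelihood, not the study’s estimation code; the parameter values and response data are invented.

```python
# Minimal sketch of the casewise (full-information) log-likelihood that
# FIML maximizes under MAR, for a bivariate normal model. Each case
# contributes the density of its *observed* variables only. All values
# below are hypothetical.
from math import log, pi

def loglik_case(y1, y2, mu, cov):
    """Log-likelihood of one case; None marks a missing value."""
    m1, m2 = mu
    (v1, c), (_, v2) = cov
    if y1 is not None and y2 is not None:
        det = v1 * v2 - c * c
        d1, d2 = y1 - m1, y2 - m2
        quad = (v2 * d1 * d1 - 2 * c * d1 * d2 + v1 * d2 * d2) / det
        return -log(2 * pi) - 0.5 * log(det) - 0.5 * quad
    if y1 is not None:                      # only y1 observed
        return -0.5 * (log(2 * pi * v1) + (y1 - m1) ** 2 / v1)
    if y2 is not None:                      # only y2 observed
        return -0.5 * (log(2 * pi * v2) + (y2 - m2) ** 2 / v2)
    return 0.0                              # nothing observed

data = [(1.0, 0.8), (0.2, None), (None, -0.5)]   # hypothetical responses
mu, cov = (0.0, 0.0), ((1.0, 0.5), (0.5, 1.0))
print(sum(loglik_case(y1, y2, mu, cov) for y1, y2 in data))
```

When the probability of a value being missing depends on the unobserved responses themselves, as the abstract argues can happen in MAT, this casewise likelihood is no longer the correct objective, which is what drives the estimation errors the study reports.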
Bayesian Procedures for Identifying Aberrant Response-Time Patterns in Adaptive Testing
To identify aberrant response-time patterns on educational and psychological tests, it is important to be able to separate the speed at which the test taker operates from the time the items require. A lognormal model for response times with this feature was used to derive a Bayesian procedure for detecting aberrant response times. In addition, a combination of the response-time model with a regular response model in a hierarchical framework was used in an alternative procedure for detecting aberrant response times, in which collateral information on the test takers’ speed is derived from their response vectors. The procedures are illustrated using a data set for the Graduate Management Admission Test® (GMAT®). Finally, a power study was conducted using simulated cheating behavior on an adaptive test.
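The separation of person speed from item time demands can be sketched with the lognormal response-time model: the log time of person i on item j is normal with mean beta_j − tau_i (item time intensity minus person speed) and standard deviation 1/alpha_j. A simple residual check under known parameters is shown below; the item parameters, times, and cutoff are hypothetical, and the full Bayesian procedure would additionally account for parameter uncertainty.

```python
# Minimal sketch of flagging aberrant response times under a lognormal
# model: log t_ij ~ N(beta_j - tau_i, 1/alpha_j^2). All parameter values
# and times are hypothetical.
from math import log

def rt_residuals(times, tau, alphas, betas):
    """Standardized log-time residuals for one test taker."""
    return [a * (log(t) - (b - tau))
            for t, a, b in zip(times, alphas, betas)]

def flag_aberrant(times, tau, alphas, betas, z_crit=2.58):
    """Indices of items whose residual exceeds the critical value."""
    return [j for j, z in enumerate(rt_residuals(times, tau, alphas, betas))
            if abs(z) > z_crit]

# Hypothetical 4-item test: the last time is far too fast for its item,
# the kind of pattern preknowledge or cheating can produce.
times  = [45.0, 60.0, 30.0, 2.0]          # seconds
alphas = [2.0, 2.0, 2.0, 2.0]             # time-discrimination parameters
betas  = [4.0, 4.2, 3.5, 4.0]             # time-intensity parameters
print(flag_aberrant(times, tau=0.2, alphas=alphas, betas=betas))  # → [3]
```

Large negative residuals mark suspiciously fast responses for that item given the person’s speed; large positive residuals mark suspiciously slow ones.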