
    Classification Performance of Answer-Copying Indices Under Different Types of IRT Models

    Test fraud has recently received increased attention in the field of educational testing, and comprehensive integrity analyses after test administration are recommended for investigating different types of potential test fraud. One type of test fraud involves answer copying between two examinees, and numerous statistical methods have been proposed in the literature to screen for unusual response similarity or irregular response patterns on multiple-choice tests. The current study examined the classification performance of answer-copying indices, measured by the area under the receiver operating characteristic (ROC) curve, under different item response theory (IRT) models (the one-parameter [1PL], two-parameter [2PL], and three-parameter [3PL] models and the nominal response model [NRM]) using both simulated and real response vectors. The results indicated that although nominal response outcomes yielded a slight increase in performance under the low-copying condition (20%), the indices performed similarly with dichotomous response outcomes under the 40% and 60% copying conditions. The results also indicated that the performance observed with simulated response vectors was almost identically reproduced with real response vectors.
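    The AUC criterion the abstract refers to can be illustrated with a minimal sketch: given index scores for copier pairs and for independent pairs, the area under the ROC curve equals the probability that a randomly chosen copier pair scores higher than a randomly chosen independent pair (the Mann-Whitney formulation). The score distributions below are purely illustrative assumptions, not the study's actual indices or IRT models.

    ```python
    import random

    def auc(scores_pos, scores_neg):
        """AUC via the Mann-Whitney U statistic: the probability that a
        randomly chosen positive (copier-pair) score exceeds a randomly
        chosen negative (independent-pair) score, with ties counted as 1/2."""
        wins = 0.0
        for p in scores_pos:
            for n in scores_neg:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5
        return wins / (len(scores_pos) * len(scores_neg))

    random.seed(0)
    # Hypothetical similarity-index values: copier pairs are assumed to
    # score higher on average than independent pairs (illustrative only).
    copiers = [random.gauss(2.0, 1.0) for _ in range(200)]
    independent = [random.gauss(0.0, 1.0) for _ in range(200)]

    print(round(auc(copiers, independent), 3))
    ```

    An AUC near 0.5 would indicate an index no better than chance at flagging copying, while values approaching 1.0 indicate near-perfect separation; the study compares such AUC values across IRT models and copying amounts.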