An investigation of the cross-mode comparability of a paper and computer-based multiple-choice cloze reading assessment for ESL learners

Abstract

This study was designed to determine whether a computer-based version of a standardized cloze reading test for second language learners is comparable to its traditional paper-based counterpart, and to identify how test takers’ computer familiarity and perceptions of paper- and computer-based tests relate to their performance across testing modes. Previous comparability research with second language speakers has been mixed: some studies found the two forms comparable while others did not. Findings on the connection between computer attitudes and computer-based test performance were likewise mixed. One hundred and twenty high school ELL students were recruited for the study. The research instruments included both paper- and computer-based versions of a locally developed reading assessment; the two tests were identical in content, questions, pagination, and layout. A Latin-square design was used: two groups of learners took the tests in opposite orders, and their scores were compared. Participants also completed questionnaires about their familiarity with computers and their perceptions of each testing mode. Results indicate that the paper- and computer-based versions of the test are comparable. A regression analysis showed a relationship between computer familiarity and computer-based LOMERA performance. Mode-preference survey data pointed to differences in preferences depending on each test feature. These results help validate the cross-mode comparability of assessments beyond the traditional discrete-point multiple-choice tests that predominate in current research.