4,004 research outputs found

    Paper Based Testing vs. Mobile Device Based Testing in an EFL Environment: What’s the Difference?

    Mobile devices are becoming increasingly ubiquitous, especially among young people. An instructor’s job is to serve students as well as possible; if alternative testing means are available, it is the responsibility of instructors to know whether these mobile devices are as capable of administering assessments as traditional paper-and-pencil tests. The purpose of this research is to evaluate whether there is a difference in actual performance between Mobile Device Testing (MDT) and Paper Based Testing (PBT), and whether there are any perceived differences. Participants (N = 150), university EFL learners in South Korea, were divided into groups and given two different EFL tests; the majority received the PBT first followed by the MDT, and the remainder took the tests in the reverse order. Upon completion of both tests, the participants completed a survey evaluating both testing mediums. Analysis of Variance (ANOVA), F-tests and t-tests were used to validate the comparability of the two EFL tests, check for overall correlation and directly compare one group with another. The results showed that the two tests were comparable in participant performance, no group showed variance attributable to the testing medium, students perceived no difference in difficulty based on testing medium, and students actually preferred MDT over PBT. These results indicate that MDT is a viable alternative to PBT, given the comparability in performance and student motivational factors.
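    The comparability checks mentioned in this abstract rest on standard variance analysis. As a rough illustration only (not the authors' actual analysis, and with invented scores), a one-way ANOVA F statistic, the ratio of between-group to within-group variance, can be computed by hand:

    ```python
    def anova_f(*groups):
        """One-way ANOVA F statistic for two or more groups of scores."""
        all_scores = [x for g in groups for x in g]
        grand_mean = sum(all_scores) / len(all_scores)
        k = len(groups)           # number of groups
        n = len(all_scores)       # total observations
        # Between-group sum of squares: how far each group mean sits from the grand mean
        ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
        # Within-group sum of squares: spread of scores around their own group mean
        ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
        ms_between = ss_between / (k - 1)
        ms_within = ss_within / (n - k)
        return ms_between / ms_within

    # Hypothetical test scores for two groups (made up for the example)
    f_stat = anova_f([1, 2, 3], [2, 3, 4])
    ```

    An F statistic near 1 suggests the group factor (here, testing medium) explains little of the score variance; a large F, compared against an F distribution with (k−1, n−k) degrees of freedom, would indicate a medium effect.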

    Introducing computer-marked tests in an online Financial Accounting course: patterns in academic performance and approaches to assessment design

    In the last two decades, online computer-marked assignments (CMAs) have been widely used in accounting education. Although there is a growing body of research on this form of online assessment, most previous studies relied on small samples of respondents or focused on student self-report using survey methods. This exploratory mixed-method study combines a quantitative analysis of learners’ academic performance on an online Financial Accounting course with a more in-depth exploration of learner experiences using qualitative methods. The quantitative findings suggest that students’ previous educational qualifications, age and experience of studying a similar subject are strongly associated with CMA completion, which is in turn linked to scores on other pieces of assessed work. The qualitative results show that, from the learners’ perspective, diversifying assessment methods, introducing low-stakes assessment activities and creating opportunities for situational interest are viewed as key aspects of online CMA design. The paper concludes by discussing the implications of the study for designing and delivering online courses in accounting, particularly in light of the growing popularity of massive open online courses (MOOCs).

    Introducing Dynamic Testing to Teachers in Malaysia: An Experimental Investigation of Its Effects on Teachers’ Beliefs and Practices about Assessment

    Assessment is central to the effectiveness of teaching and the improvement of learning. In the context of Malaysia, however, it appears that assessment has not fulfilled its promise, as the number of low-performing schools continues to grow and the urban-rural performance gap remains prevalent. The concern is: why does assessment “fail” to bring the intended positive impact on instructional improvement and thus foster progress in students’ learning? The research presented in this thesis argues that this is potentially caused by two factors: (i) teachers’ lack of understanding of assessment; and (ii) the unsuitability of the currently used assessment tools. Specifically, this study investigates the attitude of teachers towards the implementation of the existing assessment tool, the Form 1 Diagnostic Test (F1DT), looking in particular at their assessment beliefs and practices. In addition, critically reflecting upon the limitations of F1DT, this study introduces an alternative assessment approach, dynamic testing (DT). Deploying an intervention-control group, pre-test-post-test experimental design, the answers to the eight formulated research questions were obtained through a self-developed questionnaire, the Survey of Educational Assessment (SEA), and teachers’ written feedback. Due to the nested structure of the data, sampled from 862 teachers across six educational zones, the analysis of the questionnaire responses was largely conducted using Hierarchical Linear Modelling (HLM). A thematic analysis was used to analyse the data collected from teachers’ written comments. The findings revealed that teachers still viewed F1DT as a useful diagnostic tool, particularly for measuring students’ prior attainment and identifying learning problems. The relationship between teachers’ beliefs and practices regarding the implementation and utilisation of F1DT was found to be strong.
However, after attending the educational workshop on DT, teachers indicated a lower level of agreement regarding their beliefs and practices about the purposes and uses of F1DT. This positive appraisal of DT implies that teachers became more aware of the limitations of the information provided by F1DT, especially for the purposes intended (e.g., identifying causes of unsatisfactory academic performance resulting from potentially ineffective instruction). As a pioneering study, in terms of its large sample and its employment of an experimental design, it offers novel insights into the field of assessment beliefs and practices and the application of DT. It therefore has the potential to make a significant contribution to improving professional practices in assessment-related activities and, ultimately, to addressing the developmental challenges of the education system in Malaysia.

    Transitioning to an Alternative Assessment: Computer-Based Testing and Key Factors Related to Testing Mode

    Computer-Based Testing (CBT) is becoming widespread due to its many recognized merits, including efficient item development, flexible test delivery, self-selection options for test takers, immediate feedback, results management, standard setting and so on. Transitioning to CBT has raised concern over the effect of test administration mode on test takers’ scores compared with Paper-and-Pencil-Based Testing (PPT). In this comparability study, we compared the effects of the two media (CBT vs. PPT) by investigating the score comparability of a General English test taken by Iranian graduate students at Chabahar Maritime University, to see whether test scores obtained from the two testing modes differed. To this end, two versions of the same test were administered to 100 intermediate-level test takers, organized in a single testing group, on two separate testing occasions. Using a paired-sample t-test to compare the means, the findings revealed the superiority of CBT over PPT, with a difference of .01, significant at p < .05. Using ANOVA, the results indicated that two external moderator factors, prior computer familiarity and attitudes, had no significant effect on test takers’ CBT scores. Furthermore, according to the results, the greatest percentage of test takers preferred the test features presented in the computerized version of the test.
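    The paired-sample t-test this abstract relies on compares the same test takers' scores under the two modes. A minimal sketch of the statistic, with made-up scores rather than the study's data, looks like this:

    ```python
    import math

    def paired_t(before, after):
        """Paired-sample t statistic for two equal-length score lists."""
        assert len(before) == len(after)
        diffs = [a - b for a, b in zip(after, before)]
        n = len(diffs)
        mean_d = sum(diffs) / n
        # Sample variance of the per-person differences
        var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
        se = math.sqrt(var_d / n)  # standard error of the mean difference
        return mean_d / se         # t with n - 1 degrees of freedom

    # Hypothetical scores for five test takers on the two modes
    ppt = [62, 70, 55, 81, 66]
    cbt = [65, 74, 58, 80, 70]
    t = paired_t(ppt, cbt)
    ```

    The resulting t value is compared against a t distribution with n − 1 degrees of freedom; pairing the scores removes between-person variability, which is why a single group tested on both occasions is a sensible design here.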

    ICT and its assessment at 16: an enquiry into the perceptions of year 11 students

    This study, conducted between 2006 and 2011, enquired into student perceptions of Information and Communications Technology (ICT) and its assessment at age 16. The prevailing orthodoxy amongst writers, commentators and educationalists is that the subject does not reflect young people’s learning and use of technology. The voice of the learner, so often lauded in aspects of school democracy and in formative assessment, has not been heard in respect of the high-stakes assessment at the end of Key Stage (KS) 4 in schools in England. This research was a step towards filling that void. Taking an interpretive phenomenological approach, three phases of empirical data collection were used, each building on the previous one. To bring student perceptions and voice to the fore, a repertory grid analysis was first used to elicit constructs of learning and assessment directly from the students. This was followed by a questionnaire and semi-structured interviews across a sample of state-funded schools in England. The multiple-phase data collection allowed phenomena to be distilled in successively greater depth at each phase.