5 research outputs found

    An IRT–Multiple Indicators Multiple Causes (MIMIC) Approach as a Method of Examining Item Response Latency

    The analysis of response time has received increasing attention in recent decades, as evidence from several studies supports a direct relationship between item response time and test performance. The aim of this study was to investigate whether item response latency affects a person's ability parameters, that is, whether it represents an adaptive or a maladaptive practice. To examine this research question, data from 8,475 individuals completing the computerized version of the Postgraduate General Aptitude Test (PAGAT) were analyzed. To determine the extent to which response latency affects a person's ability, we used a Multiple Indicators Multiple Causes (MIMIC) model in which every item in a scale was linked to its corresponding covariate (i.e., the item's response latency). We ran the MIMIC model within the Item Response Theory (IRT) framework, using the two-parameter logistic (2PL) model. The results supported the hypothesis that item response latency can provide valuable information for obtaining more accurate estimates of persons' ability levels. Individuals who invest more time on easy items do not improve their likelihood of success, most likely because slow and fast responders differ significantly in ability (fast responders are of higher ability than slow responders); consequently, investing more time is not adaptive for low-ability individuals. The opposite was found for difficult items: individuals who spend more time on difficult items increase their likelihood of success, most likely because they are high achievers (on difficult items, individuals who spent more time were of significantly higher ability than fast responders). Thus, there appears to be an interaction between item difficulty and person ability that explains the effect of response time on the likelihood of success. We concluded that accommodating item response latency in a computerized assessment model can inform test quality and test takers' behavior and, in that way, enhance the accuracy of score measurement.
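    The abstract does not give the model's functional form, but a minimal simulation sketch of the kind of model it describes (a 2PL IRT model with each item's standardized log response time entering as a direct, item-level covariate) can make the setup concrete. All names and parameter values below are illustrative assumptions, not the PAGAT estimates:

        import numpy as np

        rng = np.random.default_rng(42)
        n_persons, n_items = 500, 20

        theta = rng.normal(size=n_persons)           # latent abilities
        a = rng.uniform(0.8, 2.0, size=n_items)      # item discriminations
        b = rng.normal(size=n_items)                 # item difficulties
        gamma = rng.normal(0.0, 0.3, size=n_items)   # hypothetical item-level latency effects
        rt = rng.normal(size=(n_persons, n_items))   # standardized log response times

        # 2PL with a MIMIC-style direct latency effect on each item:
        # logit P(X_ij = 1) = a_j * (theta_i - b_j) + gamma_j * rt_ij
        logit = a * (theta[:, None] - b) + gamma * rt
        p_correct = 1.0 / (1.0 + np.exp(-logit))
        responses = rng.binomial(1, p_correct)

    Under this parameterization, a positive gamma_j for difficult items together with a near-zero or negative gamma_j for easy items would reproduce the reported interaction between item difficulty and the payoff of investing more time.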

    Saudi National Assessment of Educational Progress (SNAEP)

    To provide universal basic education, Saudi Arabia initially employed a rapid quantitative educational strategy and later developed a qualitative focus to improve standards of education delivery and the quality of student outcomes. Despite the generous resources provided for education, however, there is no national assessment system to provide statistical evidence on students' learning outcomes. Educators are questioning the curricula and the quality of delivery in Saudi education, especially following low student performances on the Trends in International Mathematics and Science Study (TIMSS) in 2003 and 2007. There is a growing demand for national assessment standards in all key subject areas to monitor students' learning progress. This study acknowledges extant research on this important topic and offers a national assessment strategy to guide educational reform.

    An investigation of performance-based assessment in science in Saudi primary schools

    This study was undertaken to develop a performance-based assessment approach to science learning and to investigate its effects on students' achievement and attitudes toward science, as well as the readiness of Saudi primary schools to implement it. The approach links the assessment methods to cognitive and social constructivist learning theories and to science curriculum reforms. Twelve science classes, comprising 289 primary school students and six teachers in the city of Riyadh, formed the sample for the study. Six classes were randomly selected and instructed using the performance-based assessment approach; the remaining six classes were instructed traditionally as control groups. The same teachers taught both the experimental and control groups for nine weeks. Data were collected with several instruments, including tests, interviews, and questionnaires. Science tests and the Students' Attitudes toward Science Survey were administered as pre- and post-tests to evaluate the control and experimental groups. The Teacher Performance Assessment Questionnaire (TPAQ) was administered as a pre- and post-test to capture the science teachers' responses to the program. Interviews with all six teachers and 12 randomly selected students were conducted at the end of the nine-week period. Both quantitative and qualitative analyses were applied to the collected data: quantitative analysis involved descriptive and inferential methods using means, standard deviations, and parametric tests, whilst QSR NVivo was used to code the qualitative data. The quantitative results showed that students in the experimental group scored significantly higher on the science post-test than students in the control groups. There was also a significant difference in attitudes toward science between the experimental and control groups, in favour of the experimental group. The performance-based assessment procedures predicted approximately 23 per cent of the variation in students' final science test scores. Qualitative analysis of the teachers' data indicated that they rated the performance-based assessment approach highly: it gave students the opportunity to be active and interactive and to take greater responsibility for their learning. The teachers also responded well to the experimental program and reported professional development gains in formulating open-ended questions, managing groups, designing experiments, and using formative assessment. They considered changing their classroom practices to incorporate these elements of performance-based assessment and to give students more control over their learning. The paired-sample t-test showed no significant improvement in teachers' assessment standards as measured by the TPAQ, whereas the effect size indicated a large change in teachers' performance. Teachers also reported disadvantages: performance-based assessment was time-consuming, required extra work, was difficult to assess, and did not fit the current Saudi school environment. Qualitative analysis of the students' data showed that students in the experimental groups found the performance-based assessment approach gave them greater control over learning processes and an opportunity to participate actively in the science class; importantly, group work encouraged them to cooperate. Students reported that performance-based assessment was useful, and the study's results confirmed that the processes undertaken supported the development of self-efficacy.
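    One detail worth unpacking: with only six teachers, a paired t-test can easily fail to reach significance even when the standardized effect is large, which is the pattern reported for the TPAQ. A minimal sketch with hypothetical questionnaire scores (not the study's data) illustrates this low-power situation:

        import numpy as np
        from scipy import stats

        # Hypothetical pre/post TPAQ-style scores for six teachers (illustrative only)
        pre  = np.array([3.0, 3.4, 2.8, 3.6, 3.1, 3.2])
        post = np.array([3.6, 3.4, 3.5, 3.5, 3.9, 3.3])

        t_stat, p_value = stats.ttest_rel(post, pre)   # paired-sample t-test

        diff = post - pre
        cohens_d = diff.mean() / diff.std(ddof=1)      # effect size for paired data

        # Roughly t = 2.18, p = 0.08, d = 0.89 here: a large effect by Cohen's
        # convention, yet non-significant at the .05 level with n = 6.
        print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")

    This is why the abstract can coherently report both no significant t-test result and a large change in teachers' performance.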