A concept-based learning progression for rational numbers
Rational number understanding is viewed as fundamental and critical to developing future knowledge and skills, and is therefore essential for success in the 21st-century world. This report describes a provisional learning progression for rational numbers, specifically as embodied in fractions and decimals, that was designed to be useful for the development of formative assessment.
Understanding Test Takers' Choices in a Self-Adapted Test: A Hidden Markov Modeling of Process Data
With the rise of more interactive assessments, such as simulation- and game-based assessments, process data are available to learn about students' cognitive processes as well as motivational aspects. Since process data can be complicated due to interdependencies in time, our traditional psychometric models may not necessarily fit, and we need to look for additional ways to analyze such data. In this study, we draw process data from a study on self-adapted testing under different goal conditions (Arieli-Attali, 2016) and use hidden Markov models (HMMs) to learn about test takers' choice-making behavior. A self-adapted test is designed to allow test takers to choose the difficulty level of the items they receive. The data include test results from two goal-orientation conditions (performance goal and learning goal), as well as confidence ratings on each question. We show that using HMMs we can learn about transition probabilities from one state to another as dependent on the goal orientation, the accumulated score, the accumulated confidence, and the interactions therein. The implications of such insights are discussed.
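The core of the analysis described above, estimating transition probabilities between choice states, can be sketched as a maximum-likelihood count over observed difficulty-choice sequences. This is a minimal illustration with hypothetical data; the actual study fitted hidden Markov models with goal-condition covariates, which this simple observable-state count does not capture.

```python
def estimate_transitions(sequences, n_states):
    """Maximum-likelihood transition matrix from observed state sequences.

    Each sequence is a list of state indices, e.g. difficulty levels
    chosen by one test taker on successive items (hypothetical data).
    """
    counts = [[0] * n_states for _ in range(n_states)]
    for seq in sequences:
        # Count each adjacent pair (state at item i -> state at item i+1).
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    # Normalize each row into a probability distribution.
    matrix = []
    for row in counts:
        total = sum(row)
        matrix.append([c / total if total else 0.0 for c in row])
    return matrix

# Two hypothetical test takers choosing between easy (0) and hard (1) items.
choices = [[0, 0, 1, 1], [0, 1, 1, 0]]
P = estimate_transitions(choices, n_states=2)
```

Comparing such matrices across goal conditions (performance vs. learning) is one way to quantify the choice-behavior differences the abstract reports.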
The Expanded Evidence-Centered Design (e-ECD) for Learning and Assessment Systems: A Framework for Incorporating Learning Goals and Processes Within Assessment Design
Evidence-centered design (ECD) is a framework for the design and development of assessments that ensures the consideration and collection of validity evidence from the onset of test design. Blending learning and assessment requires integrating aspects of learning at the same level of rigor as aspects of testing. In this paper, we describe an expansion of the ECD framework (termed e-ECD) that includes specifications of the relevant aspects of learning at each of the three core models in the ECD, as well as making room for specifying the relationship between learning and assessment within the system. The framework proposed here does not assume a specific learning theory or particular learning goals; rather, it allows for their inclusion within an assessment framework, such that they can be articulated by researchers or assessment developers who wish to focus on learning.
The Many Faces of Cognitive Labs in Educational Measurement
Cognitive labs have become increasingly popular over the past decades as methods for gathering detailed data on the processes by which test-takers understand and solve assessment items and tasks. Yet there are still misunderstandings and misconceptions about this method, some skepticism about its benefits, and a lack of best practices for using it. This study’s purpose was to clear up some of the misconceptions about cognitive labs and, specifically, to show through theory and examples of use the concrete benefits and best practices of cognitive labs at different stages of assessment development, ranging from early stages of conceptualizing and designing the task or item to later stages of gathering validity evidence for it. A previous literature review on the topic revealed that even the term “cognitive labs” describes different techniques, originating in three different fields of study (Arieli-Attali, King, & Zaromb, 2011): 1) cognitive psychology and artificial intelligence research (“think aloud” studies, e.g., Ericsson & Simon, 1993); 2) survey development studies (“cognitive interviews”, e.g., Willis, 2005); and 3) software development studies (“usability tests”, e.g., Nielsen & Mack, 1994). While the latter two fields draw from the first, original method, the differing terminology and practices may have caused skepticism about, and avoidance of, the method in educational measurement. This study maps the various ways of applying the method, shedding light on which variation can be used in which context of assessment development, in order to answer the research questions. We conclude that while uninterrupted think-aloud is clearly needed for collecting response-process validity evidence, more flexible techniques may be used in usability contexts or for assessment fairness or accessibility purposes.
Gamification as transformative assessment in Higher Education
Gamification in education is still a very new concept in South Africa. Being a 21st-century invention, it has already established itself worldwide within the environs of the corporate market, marketing, training, and the social world. This article first discusses gamification (and all its other designations) and its applications in general; thereafter, the focus is on the application of gamification within the environment of education, and more specifically with an emphasis on assessment. The burning question for South Africa is whether gamification can enhance a module or course at the level of higher education so much that an educational institution can no longer do without it, knowing that we are working with students belonging to the ‘Digital Wisdom generation’. This article aims to open the way for the implementation of gamification as a transformative online assessment tool in higher education.
Book Review: <i>Innovative assessment of collaboration (Methodology of educational measurement and assessment)</i> by von Davier, A. A., Zhu, M., & Kyllonen, P. C. (Eds.).
Self adapted testing as formative assessment: Effects of feedback and scoring on engagement and performance
This dissertation investigated the feasibility of self-adapted testing (SAT) as a formative assessment tool, with a focus on learning. Under two different goal orientations, to excel on a test (performance goal) or to learn from the test (learning goal), I examined the effect of different scoring rules, provided as interactive feedback, on test takers' (TTs') choice behavior, performance, and engagement. Results indicated that choice behavior differed under the two goal orientations: score-maximization behavior was observed under the performance goal, and more exploration behavior was observed under the learning goal. Engagement did not differ under the goal manipulation, which suggests that SAT with a learning goal can be as engaging as when there are external incentives to succeed. The scoring rules applied in this study differed in the weight they award to a correct answer (equal or unequal weight), crossed with a framing of the score as either monotonically increasing or not. An unweighted scoring rule, the number-right rule (NR), is monotonically increasing with each additional correct answer, but it can be framed as a percent-correct rule (PC), which shows a reduction in the score after an incorrect answer. Similarly, of the two weighted scores used in the study, one was monotonically increasing (the reward-points rule, RW), while the other was not (the “ability estimate” rule, AE). The results on TTs' behavior indicated that the weighted scores encouraged TTs to challenge themselves by selecting more difficult items; however, risk aversion was observed for the non-monotonically increasing scores. By and large, TTs' choices were guided by their perceived ability, affected by the study's factors, both goal orientation and score feedback.
Interactions were not found for the most part, except for an effect on strategy (the dependency on the previous item's feedback), which was not affected in the learning-oriented condition but was affected in the performance-oriented group, indicating more attention to score feedback in the latter context. Cognitive involvement was also affected differently: score feedback (any of the four rules tested) increased involvement under a performance goal but decreased it under a learning goal. Implications of the results are discussed.
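The contrast between the scoring-rule framings described above can be made concrete with a small sketch. The weights and values here are illustrative assumptions, not the dissertation's exact rules: number-right (NR) only ever grows with another correct answer, percent-correct (PC) reframes the same information so it can drop after an error, and a weighted reward-points (RW) rule pays more for harder items.

```python
def number_right(responses):
    """NR: count of correct answers (1 = correct); monotonically increasing."""
    return sum(responses)

def percent_correct(responses):
    """PC: same information as NR reframed as a rate; drops after an incorrect answer."""
    return 100.0 * sum(responses) / len(responses)

def reward_points(responses, difficulties):
    """RW: weighted and monotonically increasing; harder items earn more.
    The difficulty weights are hypothetical."""
    return sum(d for r, d in zip(responses, difficulties) if r)

# One hypothetical test taker: correct/incorrect (1/0) on items of difficulty 1-3.
responses = [1, 1, 0, 1]
difficulties = [1, 2, 3, 2]
nr = number_right(responses)                  # 3
pc = percent_correct(responses)               # 75.0
rw = reward_points(responses, difficulties)   # 5
```

Note how PC falls from 100.0 after the first two items to 66.7 after the third, even though NR never decreases; this framing difference is the kind of feedback contrast the dissertation manipulated.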
