
    A concept-based learning progression for rational numbers

    Rational number understanding is viewed as fundamental and critical to developing future knowledge and skills, and is therefore essential for success in the 21st-century world. This report describes a provisional learning progression for rational numbers, specifically as embodied in fractions and decimals, designed to be useful for the development of formative assessments.

    Understanding Test Takers' Choices in a Self-Adapted Test: A Hidden Markov Modeling of Process Data

    With the rise of more interactive assessments, such as simulation- and game-based assessments, process data are available to learn about students' cognitive processes as well as motivational aspects. Because process data can be complicated by interdependencies over time, traditional psychometric models may not fit, and additional ways to analyze such data are needed. In this study, we draw process data from a study on a self-adapted test administered under different goal conditions (Arieli-Attali, 2016) and use hidden Markov models (HMMs) to learn about test takers' choice-making behavior. A self-adapted test is designed to allow test takers to choose the difficulty level of the items they receive. The data include test results from two goal-orientation conditions (performance goal and learning goal), as well as confidence ratings on each question. We show that, using HMMs, we can estimate transition probabilities from one state to another as they depend on goal orientation, accumulated score, accumulated confidence, and the interactions therein. The implications of such insights are discussed.
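    A minimal sketch of this kind of analysis, assuming the process data reduce to per-test-taker sequences of difficulty choices and that the hmmlearn library is available; the two-state structure, the state labels, and all settings below are illustrative assumptions, not the authors' implementation:

```python
# Fit a hidden Markov model to difficulty-choice sequences from a
# self-adapted test. Hypothetical data; assumes hmmlearn >= 0.3.
import numpy as np
from hmmlearn import hmm

# Each row: one test taker's chosen difficulty levels
# (0 = easy, 1 = medium, 2 = hard).
sequences = [
    [0, 0, 1, 1, 2, 2, 2],   # gradually challenge-seeking
    [1, 0, 0, 0, 1, 0, 0],   # mostly risk-averse
    [2, 2, 1, 2, 2, 2, 1],   # mostly challenge-seeking
]

# hmmlearn expects one concatenated column vector plus sequence lengths.
X = np.concatenate(sequences).reshape(-1, 1)
lengths = [len(s) for s in sequences]

# Two hidden states, e.g. "exploring" vs. "score-maximizing"; the number
# of states is itself a modeling choice to be checked against fit.
model = hmm.CategoricalHMM(n_components=2, n_iter=200, random_state=0)
model.fit(X, lengths)

print("State-to-state transition probabilities:\n", model.transmat_)
print("Difficulty-choice probabilities per state:\n", model.emissionprob_)
```

    In the study itself, transitions further depend on goal orientation, accumulated score, and accumulated confidence; one way to approximate that comparison is to fit separate models per goal condition and contrast the estimated transition matrices.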

    The Expanded Evidence-Centered Design (e-ECD) for Learning and Assessment Systems: A Framework for Incorporating Learning Goals and Processes Within Assessment Design

    Evidence-centered design (ECD) is a framework for the design and development of assessments that ensures consideration and collection of validity evidence from the onset of test design. Blending learning and assessment requires integrating aspects of learning at the same level of rigor as aspects of testing. In this paper, we describe an expansion of the ECD framework (termed e-ECD) that includes specifications of the relevant aspects of learning at each of the three core models in ECD, and that makes room for specifying the relationship between learning and assessment within the system. The proposed framework does not assume a specific learning theory or particular learning goals; rather, it allows for their inclusion within an assessment framework, such that they can be articulated by researchers or assessment developers who wish to focus on learning.
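    One way to picture the expansion is to pair each of ECD's three core models (student, evidence, and task) with a learning-side counterpart. The sketch below encodes that structure as plain Python dataclasses; the field names and the specific pairings are an illustrative reading of the abstract, not the authors' specification:

```python
# Illustrative encoding of the e-ECD idea: each core ECD model gains a
# learning-side specification. All field names are assumptions.
from dataclasses import dataclass

@dataclass
class StudentModel:
    assessed_constructs: list[str]   # what the assessment measures (ECD)
    learning_goals: list[str]        # what learners should gain (e-ECD)

@dataclass
class EvidenceModel:
    scoring_rules: list[str]         # evidence of proficiency (ECD)
    learning_indicators: list[str]   # evidence of learning progress (e-ECD)

@dataclass
class TaskModel:
    task_features: list[str]         # item/task design features (ECD)
    learning_supports: list[str]     # hints, feedback, scaffolds (e-ECD)

@dataclass
class EECDSpec:
    student: StudentModel
    evidence: EvidenceModel
    task: TaskModel
    # The learning-assessment relationship is left free-form so that any
    # learning theory or set of goals can be articulated, as the
    # framework intends.
    learning_assessment_link: str = "unspecified"
```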

    Gamification as transformative assessment in Higher Education

    Gamification in education is still a very new concept in South Africa. A 21st-century invention, it has already established itself in the corporate, marketing, training, and social worlds. This article first discusses gamification (and all its other designations) and its applications in general; thereafter, the focus is on the application of gamification within education, with a particular emphasis on assessment. The burning question for South Africa is whether gamification can enhance a module or course at the higher-education level so much that an educational institution can no longer do without it, knowing that we are working with students belonging to the 'Digital Wisdom' generation. This article aims to open the way for the implementation of gamification as a transformative online assessment tool in higher education.

    Self-adapted testing as formative assessment: Effects of feedback and scoring on engagement and performance

    This dissertation investigated the feasibility of self-adapted testing (SAT) as a formative assessment tool, with a focus on learning. Under two goal orientations, to excel on the test (performance goal) or to learn from the test (learning goal), I examined the effect of different scoring rules, provided as interactive feedback, on test takers' (TTs') choice behavior, performance, and engagement. Results indicated that choice behavior differed under the two goal orientations: score-maximization behavior was observed under the performance goal, and more exploration behavior under the learning goal. Engagement did not differ under the goal manipulation, which suggests that SAT with a learning goal can be as engaging as SAT with external incentives to succeed.

    The scoring rules applied in this study differed in the weight they award to a correct answer (equal or unequal weight), crossed with a framing of the score as either monotonically increasing or not. An unweighted scoring rule, the number-right (NR) rule, increases monotonically with each additional correct answer, but can be reframed as a percent-correct (PC) rule, which shows a drop in score after an incorrect answer. Similarly, of the two weighted scores used in the study, one was monotonically increasing (the reward-points rule, RW) and the other was not (the "ability estimate" rule, AE). The results on TTs' behavior indicated that the weighted scores encouraged TTs to challenge themselves by selecting more difficult items; however, risk aversion was observed under the non-monotonic scores. By and large, TTs' choices were guided by their perceived ability and were affected by both of the study's factors, goal orientation and score feedback. For the most part no interactions were found, except for an effect on strategy (the dependency on previous-item feedback), which was absent in the learning-oriented condition but present in the performance-oriented group, indicating more attention to score feedback in the latter context. Cognitive involvement was also affected differently: score feedback (any of the four rules tested) increased involvement under a performance goal but decreased it under a learning goal. Implications of the results are discussed.
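    The four score-feedback rules contrasted above lend themselves to a small sketch. The point values and difficulty weighting below are illustrative assumptions; the dissertation defines the precise rules:

```python
# Illustrative sketch of the four scoring-rule families: (un)weighted
# crossed with (non-)monotonic framing. Concrete formulas are assumptions.

def number_right(responses):
    """NR: unweighted, increases monotonically with each correct answer."""
    return sum(1 for correct, _difficulty in responses if correct)

def percent_correct(responses):
    """PC: the same information as NR, framed as a share of items so far,
    so the displayed score drops after an incorrect answer."""
    return 100.0 * number_right(responses) / len(responses) if responses else 0.0

def reward_points(responses):
    """RW: weighted and monotonically increasing; harder items earn more,
    and incorrect answers simply add nothing."""
    return sum(difficulty for correct, difficulty in responses if correct)

def ability_estimate(responses):
    """AE: weighted and non-monotonic; a crude running proxy in which an
    incorrect answer pulls the estimate down."""
    return sum(difficulty if correct else -(4 - difficulty)
               for correct, difficulty in responses)

# (correct?, difficulty on a 1-3 scale) for a short hypothetical session
session = [(True, 1), (True, 2), (False, 3), (True, 3)]
for rule in (number_right, percent_correct, reward_points, ability_estimate):
    print(f"{rule.__name__}: {rule(session):.1f}")
```

    The monotonic/non-monotonic contrast is visible directly: after the miss on the third item, NR and RW are unchanged while PC and AE fall, which is the framing difference the dissertation links to risk aversion.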

    Measurement of ability in adaptive learning and assessment systems when learners can use on-demand hints

    Adaptive learning and assessment systems support learners in acquiring knowledge and skills in a particular domain. Learners' progress is monitored as they solve items that match their level and target specific learning goals. Scaffolding and hints are powerful tools for supporting the learning process. One way of introducing hints is to make hint use the student's choice: when learners are certain of their response, they answer without hints, but if they are not certain or do not know how to approach the item, they can request a hint. We develop measurement models for applications where such on-demand hints are available. These models take into account that hint use may be informative of ability but may also be influenced by other individual characteristics. Two modeling strategies are considered: (1) the measurement model is based on a scoring rule for ability that includes both response accuracy and hint use; (2) the choice to use hints and response accuracy conditional on that choice are modeled jointly using item response tree (IRTree) models. The properties of the different models and their implications are discussed. An application to data from Duolingo, an adaptive language-learning system, is presented. Here, the best-fitting model is the scoring-rule-based model with full credit for correct responses without hints, partial credit for correct responses with hints, and no credit for incorrect responses. A second dimension in the model accounts for individual differences in the tendency to use hints.
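    The best-fitting scoring rule described above, and the IRTree alternative, can be sketched compactly. The category coding follows the abstract; the binary pseudo-item recoding is a common IRTree convention and an assumption here, not necessarily the paper's exact specification:

```python
# Sketch of the two modeling strategies for on-demand hints.

def polytomous_score(correct: bool, used_hint: bool) -> int:
    """Strategy 1: scoring rule for ability (the best model in the
    Duolingo application):
    2 = correct without a hint (full credit),
    1 = correct with a hint (partial credit),
    0 = incorrect, with or without a hint (no credit)."""
    if not correct:
        return 0
    return 1 if used_hint else 2

def irtree_pseudo_items(correct: bool, used_hint: bool) -> dict:
    """Strategy 2: IRTree-style recoding into binary pseudo-items.
    The hint-request node loads on a hint-tendency dimension; accuracy
    is observed conditionally on the chosen branch (None = not observed)."""
    return {
        "hint_requested": int(used_hint),
        "correct_no_hint": int(correct) if not used_hint else None,
        "correct_with_hint": int(correct) if used_hint else None,
    }

print(polytomous_score(correct=True, used_hint=False))    # 2
print(polytomous_score(correct=True, used_hint=True))     # 1
print(irtree_pseudo_items(correct=False, used_hint=True))
```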
