Adaptive formative assessment system based on computerized adaptive testing and the learning memory cycle for personalized learning
Computerized adaptive testing (CAT) can effectively facilitate student assessment by dynamically selecting questions on the basis of learner knowledge and item difficulty. However, most CAT models are designed for one-time evaluation rather than improving learning through formative assessment. Since students cannot remember everything, encouraging them to repeatedly evaluate their knowledge state and identify their weaknesses is critical when developing an adaptive formative assessment system in real educational contexts. This study aims to achieve this goal by proposing an adaptive formative assessment system based on CAT and the learning memory cycle to enable the repeated evaluation of students' knowledge. The CAT model measures student knowledge and item difficulty, and the learning memory cycle component of the system accounts for students' retention of information learned from each item. The proposed system was compared with an adaptive assessment system based on CAT only and a traditional nonadaptive assessment system. A 7-week experiment was conducted among students in a university programming course. The experimental results indicated that the students who used the proposed assessment system outperformed the students who used the other two systems in terms of learning performance and engagement in practice tests and reading materials. The present study provides insights for researchers who wish to develop formative assessment systems that can adaptively generate practice tests.
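The item-selection step that CAT relies on can be sketched with a simple Rasch (1PL) model, picking the unadministered item with maximum Fisher information at the current ability estimate. This is a minimal illustration under standard IRT assumptions, not the system proposed in the abstract; the item bank, difficulty values, and function names are invented for the example.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta: p * (1 - p)."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

def select_next_item(theta, item_bank, administered):
    """Pick the unadministered item most informative at the current theta."""
    candidates = [i for i in item_bank if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, item_bank[i]))

# Hypothetical item bank: item id -> difficulty b
item_bank = {"q1": -1.0, "q2": 0.0, "q3": 1.0}
next_item = select_next_item(0.1, item_bank, administered={"q1"})
# The item whose difficulty is closest to theta carries the most information.
```

Information peaks where difficulty matches ability, which is why CAT converges on items near the learner's level rather than items that are far too easy or too hard.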
Urnings: A new method for tracking dynamically changing parameters in paired comparison systems
We introduce a new rating system for tracking the development of parameters based on a stream of observations that can be viewed as paired comparisons. Rating systems are applied in competitive games, adaptive learning systems and platforms for product and service reviews. We model each observation as an outcome of a game of chance that depends on the parameters of interest (e.g. the outcome of a chess game depends on the abilities of the two players). Determining the probabilities of the different game outcomes is conceptualized as an urn problem, where a rating is represented by a probability (i.e. proportion of balls in the urn). This setup allows for evaluating the standard errors of the ratings and performing statistical inferences about the development of, and relations between, parameters. Theoretical properties of the system in terms of the invariant distributions of the ratings and their convergence are derived. The properties of the rating system are illustrated with simulated examples and its potential for answering research questions is illustrated using data from competitive chess, a movie review system, and an adaptive learning system for math.
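The urn conceptualization can be sketched as follows: each party's rating is the proportion of "success" balls in an urn of fixed size, a game outcome is simulated by drawing from both urns, and a ball moves when the observed and simulated outcomes disagree. This is a simplified sketch of the urn-update idea only; it omits the corrections the full Urnings algorithm applies to preserve the invariant distribution, and the urn size, variable names, and clipping shown are assumptions for the example.

```python
import random

def simulate_outcome(u_p, u_i, n):
    """Simulate a game by drawing one ball from each urn (with
    replacement) until the draws differ; return 1 for a player win."""
    while True:
        p = 1 if random.random() < u_p / n else 0
        q = 1 if random.random() < u_i / n else 0
        if p != q:
            return p

def urnings_update(u_p, u_i, n, result):
    """One simplified update: if the observed result beats the simulated
    one, move a ball toward the player urn (and vice versa), clipped to [0, n]."""
    sim = simulate_outcome(u_p, u_i, n)
    u_p_new = min(max(u_p + result - sim, 0), n)
    u_i_new = min(max(u_i + sim - result, 0), n)
    return u_p_new, u_i_new
```

Because ratings are integer ball counts in a finite urn, each rating has a known binomial-type standard error, which is what makes the statistical inferences mentioned in the abstract tractable.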
I've (Urn)ed This: An Application and Criterion-based Evaluation of the Urnings Algorithm
There is increased interest in personalized learning and making e-learning environments more adaptable. Some e-learning systems may use an Item Response Theory (IRT)-based assessment system. An important distinction between assessment and learning contexts is that learner proficiency is expected to remain constant across an assessment, while it is expected to change over time in a learning context. Constant learner proficiency during an assessment enables conventional approaches to estimating person and item parameters using IRT. These IRT-based systems could be abandoned for alternative approaches to modeling learners and system learning content, but assessments may provide more functions than adapting learning material to students. This raises the question: how can e-learning systems with IRT-based assessment components adapt their learning content more dynamically? Is there a solution that leverages IRT for adapting the learning content of the system? A promising solution is the Urnings algorithm. Like other candidate algorithms, it is computationally light, but it also has mechanisms for preventing variance inflation and is suitable for e-learning contexts. It also provides a measure of uncertainty around estimates. It has been studied both through simulations and applications to e-learning systems. Results are promising; however, there has not been an application of the Urnings algorithm to an e-learning context with conventionally estimated person parameters against which the algorithm's estimates can be compared. This study addresses this gap by applying the Urnings algorithm to a K–8 reading and mathematics learning platform. In data from this platform, we have person parameter estimates across academic years from an in-system diagnostic assessment. Results from this study will help industry researchers understand the feasibility of the Urnings algorithm for large e-learning systems with IRT-based assessment components.
Modeling language learning using specialized Elo ratings
Automatic assessment of the proficiency levels of the learner is a critical part of Intelligent Tutoring Systems. We present methods for assessment in the context of language learning. We use a specialized Elo formula in conjunction with educational data mining. We simultaneously obtain ratings for the proficiency of the learners and for the difficulty of the linguistic concepts that the learners are trying to master. From the same data we also learn a graph structure representing a domain model capturing the relations among the concepts. This application of Elo provides ratings for learners and concepts which correlate well with subjective proficiency levels of the learners and difficulty levels of the concepts.
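The two-sided rating idea above can be illustrated with the standard Elo update, treating a learner and a linguistic concept as the two "players": a correct answer raises the learner's rating and lowers the concept's difficulty rating by the same amount. The K-factor and logistic scale below are conventional chess defaults, not the paper's specialized formula.

```python
def expected_score(r_learner, r_concept, scale=400.0):
    """Expected probability of a correct answer under the Elo logistic model."""
    return 1.0 / (1.0 + 10 ** ((r_concept - r_learner) / scale))

def elo_update(r_learner, r_concept, correct, k=32.0):
    """Update both ratings after one response.
    `correct` is 1 for a correct answer, 0 otherwise."""
    e = expected_score(r_learner, r_concept)
    delta = k * (correct - e)
    return r_learner + delta, r_concept - delta

# A correct answer at equal ratings moves each rating by k/2.
r_learner, r_concept = elo_update(1500.0, 1500.0, correct=1)
```

Surprising outcomes (a correct answer on a concept rated far above the learner) produce large updates, while expected outcomes barely move the ratings, which is what lets learner and concept ratings be estimated simultaneously from the same response stream.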