8,256 research outputs found

    Transforming a competency model to assessment items

    The problem of comparing and matching different learners’ knowledge arises when assessment systems use a one-dimensional numerical value to represent “knowledge level”. Such assessment systems may measure inconsistently because they estimate this level differently and inadequately. The multi-dimensional competency model called COMpetence-Based learner knowledge for personalized Assessment (COMBA) is being developed to represent a learner’s knowledge in a multi-dimensional vector space. The heart of this model is to treat knowledge not as a possession, but as a contextualized space of capability, either actual or potential. The paper discusses the automatic generation of an assessment from the COMBA competency model as a “guide-on-the-side”.
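    The contrast with one-dimensional scores can be made concrete with a small sketch. The abstract only states that knowledge is represented in a multi-dimensional vector space, so the competency dimensions, numbers, and the use of cosine similarity below are illustrative assumptions, not the COMBA model itself: two learners with identical average scores are indistinguishable on a scalar scale, but separate cleanly once their knowledge is compared as vectors against a target profile.

```python
import math

# Hypothetical competency dimensions; the abstract does not name the axes
# of the COMBA vector space, so these labels and values are illustrative.
DIMENSIONS = ["remember", "understand", "apply", "create"]

def cosine(u, v):
    """Cosine similarity between two competency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

learner_a = [0.9, 0.8, 0.2, 0.1]  # strong recall, weak higher-order skills
learner_b = [0.2, 0.1, 0.9, 0.8]  # the reverse profile
course    = [0.3, 0.3, 0.9, 0.9]  # profile a course of study aims for

# A single scalar (here, the mean) cannot distinguish the two learners:
mean_a = sum(learner_a) / len(learner_a)
mean_b = sum(learner_b) / len(learner_b)

# ...but comparing whole vectors against the course profile can:
sim_a = cosine(learner_a, course)
sim_b = cosine(learner_b, course)
```

    On these numbers both learners average 0.5, while learner_b's vector aligns far more closely with the course profile than learner_a's, which is the kind of distinction a one-dimensional "knowledge level" erases.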

    Using evaluation to inform the development of a user-focused assessment engine

    This paper reports on the evaluation of a new assessment system, Technologies for Online Interoperability (TOIA). TOIA was built from a user-focused specification of an assessment system. The formative evaluation of the project complemented this initial specification by ensuring that user feedback on the development and use of the system was iteratively fed back into the development process. The paper begins by summarising some of the key barriers to, and enablers of, the use of assessment systems and the uptake of Computer-Assisted Assessment (CAA). It goes on to provide a critique of the impact of technology on assessment and considers whether innovative uses of information and communication technology (ICT) might result in new e-pedagogies and practices in assessment. The paper then reports on the findings of the TOIA evaluation and discusses how these were used to inform the development of the system.

    Application of Particle Swarm Optimization to Formative E-Assessment in Project Management

    The current paper describes the application of the Particle Swarm Optimization algorithm to the formative e-assessment problem in project management. The proposed approach addresses personalization by taking into account, when selecting test items for an e-assessment, the learner's ability level, the targeted difficulty of the test, and the learning objectives, represented by project management concepts which have to be checked. The e-assessment tool into which the Particle Swarm Optimization algorithm is integrated is also presented. Experimental results and comparisons with other algorithms used in test item selection demonstrate the suitability of the proposed approach to the formative e-assessment domain. The study is presented in the framework of other evolutionary and genetic algorithms applied in e-education.
    Keywords: Particle Swarm Optimization; Genetic Algorithms; Evolutionary Algorithms; Formative E-assessment; E-education
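    The selection problem described above can be sketched as a small PSO run. The paper does not publish its item bank, fitness function, or parameter settings, so everything below is an assumed minimal formulation: particles move in a continuous space, a coordinate above zero means "include item i", and fitness combines the gap to the target difficulty with a penalty for each uncovered project-management concept.

```python
import random

# Hypothetical item bank: (difficulty in [0, 1], project-management concept).
ITEMS = [
    (0.2, "scope"), (0.5, "scope"), (0.8, "risk"), (0.4, "risk"),
    (0.6, "schedule"), (0.3, "schedule"), (0.9, "cost"), (0.5, "cost"),
]

def fitness(selection, items, target, objectives):
    """Lower is better: gap between the selected items' mean difficulty and
    the target difficulty, plus one penalty unit per uncovered objective."""
    if not selection:
        return float("inf")
    mean_diff = sum(items[i][0] for i in selection) / len(selection)
    covered = {items[i][1] for i in selection}
    return abs(mean_diff - target) + len(set(objectives) - covered)

def pso_select(items, target, objectives, n_particles=15, iters=50, seed=1):
    """Continuous PSO with a deterministic binary decoding: item i is
    included in the test whenever the particle's i-th coordinate is > 0."""
    rng = random.Random(seed)
    n = len(items)
    decode = lambda x: [i for i in range(n) if x[i] > 0.0]

    pos = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(decode(p), items, target, objectives) for p in pos]
    g = min(range(n_particles), key=lambda k: pbest_fit[k])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]

    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        for k in range(n_particles):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                vel[k][d] = (w * vel[k][d]
                             + c1 * r1 * (pbest[k][d] - pos[k][d])
                             + c2 * r2 * (gbest[d] - pos[k][d]))
                pos[k][d] += vel[k][d]
            f = fitness(decode(pos[k]), items, target, objectives)
            if f < pbest_fit[k]:
                pbest[k], pbest_fit[k] = pos[k][:], f
                if f < gbest_fit:
                    gbest, gbest_fit = pos[k][:], f
    return decode(gbest), gbest_fit

selection, score = pso_select(ITEMS, target=0.5,
                              objectives={"scope", "risk", "schedule", "cost"})
```

    Ability level is folded into the target difficulty here for brevity; the paper treats it as a separate input. Swapping the fitness function is all that is needed to weight the three criteria differently.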