4,467 research outputs found

    Automatic generation of students' conceptual models underpinned by free-text adaptive computer assisted assessment

    D. Perez-Marin, E. Alfonseca, M. Freire, and P. Rodriguez, "Automatic generation of students' conceptual models underpinned by free-text adaptive computer assisted assessment", in Sixth International Conference on Advanced Learning Technologies, Kerkrade, 2006, pp. 280-284.
    In this paper, we present an automatic procedure to generate students' knowledge conceptual models from their answers to an automatic free-text scoring system. The conceptual model is defined as a simplified representation of the concepts, and the relationships among them, that each student holds in mind about an area of knowledge. Each area of knowledge comprises several topics, and each topic several concepts. Each concept can be identified by a term that the students are expected to use, and a concept can belong to one topic or to several. The conceptual model is graphically displayed to teachers as a concept map so that they can instantly see which concepts have already been assimilated and which should still be reviewed because they have been misunderstood.
    This work has been sponsored by the Spanish Ministry of Science and Technology, project number TIN2004-031.
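    As a rough illustration of the kind of data structure the paper describes (not the authors' implementation), the Python sketch below models a student's conceptual model as topics mapping to concepts, each carrying a confidence score updated from scored free-text answers. The class names, smoothing rule, and review threshold are assumptions made for the example.

        # Minimal sketch, assuming a per-concept confidence in [0, 1] derived
        # from scored free-text answers; names and thresholds are illustrative.
        from dataclasses import dataclass, field

        @dataclass
        class Concept:
            term: str                 # term the student is expected to use
            confidence: float = 0.0   # estimated degree of assimilation, 0..1

        @dataclass
        class ConceptualModel:
            # a concept may appear under several topics, as in the paper
            topics: dict[str, list[Concept]] = field(default_factory=dict)

            def update(self, topic: str, term: str, score: float) -> None:
                """Record new evidence (e.g. a score from a free-text answer)."""
                concepts = self.topics.setdefault(topic, [])
                for c in concepts:
                    if c.term == term:
                        c.confidence = 0.5 * c.confidence + 0.5 * score  # simple smoothing
                        return
                concepts.append(Concept(term, score))

            def needs_review(self, threshold: float = 0.5) -> list[str]:
                """Concepts a teacher would see flagged for review on the concept map."""
                return sorted({c.term
                               for cs in self.topics.values()
                               for c in cs if c.confidence < threshold})

        model = ConceptualModel()
        model.update("operating systems", "deadlock", 0.3)
        model.update("operating systems", "semaphore", 0.9)
        print(model.needs_review())   # ['deadlock']

    In a real system the scores would come from the free-text scorer's output rather than from hand-set values.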

    Automatic identification of terms for the generation of students’ concept maps

    Proceedings of the 4th International Conference on Multimedia and Information and Communication Technologies in Education (m-ICTE 2006), held in Seville (Spain) in November 2006.
    Willow, an adaptive multilingual free-text Computer-Assisted Assessment system, automatically evaluates students' free-text answers given a set of correct ones. This paper presents an extension of the system to generate students' concept maps while they are being assessed. To that aim, a new module for the automatic identification of the terms of a particular knowledge field has been created. It identifies and keeps track of the terms used in the students' answers, and calculates a confidence score for the student's knowledge of each term. An empirical evaluation using students' real answers shows that the module is robust enough to generate a good set of terms from a very small set of answers.
    This work has been sponsored by the Spanish Ministry of Science and Technology, project number TIN2004-0314.
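    The sketch below illustrates, under simplified assumptions, the kind of bookkeeping such a term-identification module performs: it detects which domain terms appear in each answer and turns the counts into a per-term confidence value. The term list, tokenisation, and scoring rule here are illustrative, not Willow's actual method.

        # Illustrative sketch only: confidence = fraction of a student's answers
        # in which a known domain term appears. Term list is a made-up example.
        import re
        from collections import Counter

        DOMAIN_TERMS = {"process", "thread", "scheduler", "mutual exclusion"}  # hypothetical

        def extract_terms(answer: str, terms: set[str] = DOMAIN_TERMS) -> Counter:
            """Count occurrences of known domain terms in one free-text answer."""
            text = answer.lower()
            return Counter(t for t in terms if re.search(r"\b" + re.escape(t) + r"\b", text))

        def term_confidence(answers: list[str]) -> dict[str, float]:
            """Fraction of answers in which the student used each term (0..1)."""
            counts = Counter()
            for a in answers:
                counts.update(set(extract_terms(a)))
            return {t: counts[t] / len(answers) for t in DOMAIN_TERMS}

        answers = ["A process is scheduled by the scheduler.",
                   "Threads inside the same process share memory."]
        print(term_confidence(answers))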

    Technology-supported assessment


    Service-oriented flexible and interoperable assessment: towards a standardised e-assessment system

    The assessment of free-text answers has been a field of interest for the last 50 years, and several free-text assessment tools underpinned by different techniques have been developed. In most cases, the complexity of the underlying techniques has caused these tools to be designed and developed as stand-alone applications. The rationale for using computers to assist learning assessment is mainly to save time and cost and to reduce staff workload. However, using free-text assessment tools separately from the learning environment may increase staff workload and the complexity of the assessment process. Therefore, free-text scorers need a flexible design so that they can be integrated into e-assessment system architectures, taking advantage of a software-as-a-service approach. Moreover, a flexible and interoperable e-assessment architecture is required to facilitate this integration. This paper discusses the importance of flexible and interoperable e-assessment, proposes a service-oriented, flexible and interoperable architecture for future e-assessment systems, and shows how such an architecture can support the e-assessment process in general and free-text answer assessment in particular.
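    One way to picture the proposed decoupling is the minimal interface sketched below: the scorer sits behind a small JSON-in/JSON-out contract, so any scoring technique can plug in and any learning environment can call it as a service. The class and field names are hypothetical; this is a sketch of the idea, not the architecture proposed in the paper.

        # Hedged sketch: a pluggable free-text scorer exposed behind a transport-
        # agnostic JSON contract (could sit behind REST, a message queue, etc.).
        from abc import ABC, abstractmethod
        import json

        class FreeTextScorer(ABC):
            """Any scoring technique (LSA, overlap, ML model) plugs in here."""
            @abstractmethod
            def score(self, question_id: str, answer: str) -> float: ...

        class WordOverlapScorer(FreeTextScorer):
            """Toy baseline: word overlap with a stored reference answer."""
            def __init__(self, references: dict[str, str]):
                self.references = references

            def score(self, question_id: str, answer: str) -> float:
                ref = set(self.references[question_id].lower().split())
                ans = set(answer.lower().split())
                return len(ref & ans) / len(ref) if ref else 0.0

        def handle_request(scorer: FreeTextScorer, payload: str) -> str:
            """JSON in / JSON out, so the scorer is callable from any platform."""
            req = json.loads(payload)
            result = scorer.score(req["question_id"], req["answer"])
            return json.dumps({"question_id": req["question_id"], "score": round(result, 2)})

        scorer = WordOverlapScorer({"q1": "a semaphore enforces mutual exclusion"})
        print(handle_request(scorer, json.dumps(
            {"question_id": "q1", "answer": "Semaphores are used to enforce mutual exclusion."})))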

    Principles and practice of on-demand testing


    DeepEval: An Integrated Framework for the Evaluation of Student Responses in Dialogue Based Intelligent Tutoring Systems

    The automatic assessment of student answers is one of the critical components of an Intelligent Tutoring System (ITS), because accurate assessment of student input is needed in order to provide effective feedback that leads to learning. It is also a very challenging task because it requires natural language understanding capabilities: the process involves various components such as concept identification, co-reference resolution, and ellipsis handling. As part of this thesis, we thoroughly analyzed a set of student responses obtained from an experiment with the intelligent tutoring system DeepTutor, in which college students interacted with the tutor to solve conceptual physics problems; designed an automatic answer assessment framework (DeepEval); and evaluated the framework after implementing several important components. To evaluate our system, we annotated 618 responses from 41 students for correctness. Our system performs better than the typical similarity calculation method. We also discuss various issues in automatic answer evaluation.
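    For context, a "typical similarity calculation" baseline in this line of work is often a bag-of-words cosine similarity between the student response and a reference answer, thresholded into correct/incorrect; a minimal sketch of one such baseline follows. The threshold and tokenisation are assumptions, not DeepEval's actual configuration.

        # Minimal sketch of a generic similarity baseline, not DeepEval itself.
        import math
        import re
        from collections import Counter

        def bow(text: str) -> Counter:
            """Bag-of-words vector as a word -> count mapping."""
            return Counter(re.findall(r"[a-z]+", text.lower()))

        def cosine(a: Counter, b: Counter) -> float:
            dot = sum(a[w] * b[w] for w in a)
            na = math.sqrt(sum(v * v for v in a.values()))
            nb = math.sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        def assess(student: str, reference: str, threshold: float = 0.6) -> bool:
            """True if the student response is judged correct by the baseline."""
            return cosine(bow(student), bow(reference)) >= threshold

        reference = "the net force on the object is zero so its velocity stays constant"
        print(assess("since the net force is zero the velocity is constant", reference))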

    The Use of ICT for the Assessment of Key Competences

    This report assesses current trends in the area of ICT for learning and assessment in view of their value for supporting the assessment of Key Competences. Based on an extensive review of the literature, it provides an overview of current ICT-enabled assessment practices, with a particular focus on more recent developments that support the holistic assessment of Key Competences for Lifelong Learning in Europe. The report presents a number of relevant cases, discusses the potential of emerging technologies, and addresses innovation and policy issues for eAssessment. It covers both summative and formative assessment, examines how ICT can leverage the potential of more innovative assessment formats, such as peer assessment and portfolio assessment, and considers how more recent technological developments, such as Learning Analytics, could in the future foster assessment for learning. Reflecting on the use of different ICT tools and services for each of the eight Key Competences for Lifelong Learning, it derives policy options for further exploiting the potential of ICT for competence-based assessment.