
    Towards an automatic real-time assessment of online discussions in computer-supported collaborative learning practices

    The discussion process plays an important social role in Computer-Supported Collaborative Learning (CSCL): participants discuss the activity being performed, collaborate through the exchange of ideas, propose new resolution mechanisms, justify and refine their own contributions, and as a result acquire new knowledge. Learning by discussion, when applied to collaborative learning scenarios, can provide significant benefits for students and for education in general. Consequently, current educational organizations incorporate online discussions into web-based courses as part of the very rationale of their pedagogical models. However, online discussions as collaborative learning activities typically attract large numbers of participants and contributions, which makes monitoring and assessment time-consuming, tedious, and error-prone. In particular, it is hard, if not impossible, for humans to manually process the sequences of hundreds of contributions that make up the discussion threads, along with the relations between those contributions. As a result, current assessment of online discussions is restricted to evaluating the content quality of contributions after the collaborative learning task has finished, neglecting the essential issue of continuously assessing knowledge building as a whole while it is still being generated. In this paper, we propose a multidimensional model based on the analysis of online collaborative discussion interaction data that provides a first step towards automatic assessment in (almost) real time. The context of this study is a real online discussion experience that took place at the Open University of Catalonia.
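
    A minimal Python sketch of what such near-real-time, multidimensional assessment could look like is given below. The paper does not publish its dimensions or formulas, so the three dimensions shown (activity, connectivity, content volume), the normalization, and all class and field names are illustrative assumptions rather than the authors' model; the point is only that indicators can be updated incrementally as each contribution arrives instead of after the discussion ends.

        from dataclasses import dataclass
        from collections import defaultdict
        from typing import Optional

        @dataclass
        class Contribution:
            author: str
            reply_to: Optional[str]   # post_id of the parent contribution; None starts a thread
            text: str
            post_id: str

        class DiscussionAssessor:
            # Updates per-student indicators as each contribution arrives, so the
            # assessment runs alongside the discussion instead of after it.
            def __init__(self):
                self.posts = {}                        # post_id -> author
                self.activity = defaultdict(int)       # contributions per student
                self.replies_received = defaultdict(int)
                self.chars = defaultdict(int)          # crude proxy for content volume

            def ingest(self, c):
                self.posts[c.post_id] = c.author
                self.activity[c.author] += 1
                self.chars[c.author] += len(c.text)
                if c.reply_to in self.posts:           # credit the author being replied to
                    self.replies_received[self.posts[c.reply_to]] += 1
                return self.snapshot(c.author)

            def snapshot(self, student):
                # Each dimension is normalized against the current class maximum so
                # scores stay comparable while the discussion is still growing.
                def norm(table):
                    top = max(table.values(), default=0)
                    return table[student] / top if top else 0.0
                return {"activity": norm(self.activity),
                        "connectivity": norm(self.replies_received),
                        "content_volume": norm(self.chars)}

        assessor = DiscussionAssessor()
        assessor.ingest(Contribution("alice", None, "Opening question about the task...", "p1"))
        print(assessor.ingest(Contribution("bob", "p1", "I would add that...", "p2")))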

    A framework for assessing online discussion using quantitative log file and rubric

    Online discussions have been found to be a powerful platform for collaborative learning: students interact online, and this interaction contributes to each student's individual learning process. However, two issues remain to be addressed in online discussions: assessing students' participation and their level of activity across numerous discussion threads. Currently, online discussions are assessed on the basis of either content or interaction, and neither criterion has standardized, detailed descriptions or rubrics for determining the level of participation among online interactants. To address this problem, this research investigated and verified the use of content combined with interaction as assessment criteria. The proposed framework uses a quantitative log file (QLF) together with rubrics to gauge the level of students' online participation. The QLF criteria for content comprise novelty and key knowledge, whereas those for interaction comprise pair response, final response, and interaction rate. The framework was applied in a prototype based on the MOODLE environment, called the Rubric Assessment Participation System (RAPS). Questionnaires were distributed to fifty respondents to validate the assessment criteria for online participation, and six users were selected to test the prototype, which combines content and interaction as assessment criteria in the rubrics. The results showed that RAPS can be used as an assessment tool for online discussions.
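
    As a rough illustration of how QLF indicators might feed a rubric, consider the Python sketch below. The criterion names (pair response, interaction rate) come from the abstract, but every formula, threshold, and rubric descriptor is an assumption made for the example; the actual RAPS computations are not specified here.

        def interaction_metrics(log, student):
            # log is a list of (author, replied_to_author) tuples taken from the
            # forum log; replied_to_author is None for a thread-starting post.
            posts = [row for row in log if row[0] == student]
            replies = [row for row in posts if row[1] is not None]
            # Assumed reading of "pair response": someone replied back to this student.
            paired = any(replied_to == student for _, replied_to in log)
            return {"posts": len(posts),
                    "interaction_rate": len(replies) / len(log) if log else 0.0,
                    "pair_response": paired}

        def rubric_level(m):
            # Assumed four-level rubric; the actual RAPS descriptors differ.
            if m["posts"] == 0:
                return 1, "no participation"
            if m["pair_response"] and m["interaction_rate"] >= 0.2:
                return 4, "sustained two-way interaction"
            if m["pair_response"]:
                return 3, "some two-way interaction"
            return 2, "posts without interaction"

        log = [("ali", None), ("siti", "ali"), ("ali", "siti"), ("mei", "ali")]
        print(rubric_level(interaction_metrics(log, "ali")))  # -> (4, 'sustained two-way interaction')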