Towards an automatic real-time assessment of online discussions in computer-supported collaborative learning practices
The discussion process plays an important social role in Computer-Supported Collaborative Learning (CSCL): participants can discuss the activity being performed, collaborate with each other through the exchange of ideas, propose new resolution mechanisms, and justify and refine their own contributions, and as a result acquire new knowledge. Indeed, learning by discussion, when applied to collaborative learning scenarios, can provide significant benefits for students in collaborative learning and in education in general. As a result, current educational organizations incorporate in-class online discussions into web-based courses as part of the very rationale of their pedagogical models. However, online discussions as collaborative learning activities usually attract a great deal of participation and many contributions, which makes the monitoring and assessment tasks time-consuming, tedious and error-prone. It is especially hard, if not impossible, for humans to manually deal with the sequences of hundreds of contributions that make up the discussion threads and the relations between these contributions. Consequently, current assessment of online discussions is restricted to evaluating the content quality of contributions after the collaborative learning task is completed, and it neglects the essential issue of continuously assessing the knowledge building as a whole while it is still being generated. In this paper, we propose a multidimensional model based on the analysis of online collaborative discussion interaction data that provides a first step towards automatic assessment in (almost) real time. The context of this study is a real online discussion experience that took place at the Open University of Catalonia.
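The core idea of the abstract, updating assessment indicators after every contribution rather than after the activity ends, can be illustrated with a minimal sketch. The class, the dimension names, and the statistics below are illustrative assumptions, not the paper's actual multidimensional model.

```python
# Hypothetical sketch of incremental (near real-time) discussion
# monitoring: statistics are refreshed per contribution, not computed
# once after the activity ends. Names here are assumptions, not the
# paper's model.
from collections import defaultdict


class DiscussionMonitor:
    """Keeps simple per-student activity indicators up to date."""

    def __init__(self):
        self.posts = defaultdict(int)    # contributions per student
        self.replies = defaultdict(int)  # replies per student

    def on_contribution(self, author: str, is_reply: bool) -> None:
        # Called once per incoming contribution as it arrives.
        self.posts[author] += 1
        if is_reply:
            self.replies[author] += 1

    def snapshot(self, author: str) -> dict:
        # Current indicators, available at any point in the discussion.
        n = self.posts[author]
        return {
            "posts": n,
            "reply_ratio": self.replies[author] / n if n else 0.0,
        }


monitor = DiscussionMonitor()
monitor.on_contribution("ana", is_reply=False)
monitor.on_contribution("ana", is_reply=True)
print(monitor.snapshot("ana"))  # {'posts': 2, 'reply_ratio': 0.5}
```

The point of the design is that each update is O(1), so a snapshot can be offered to tutors at any moment while the discussion is still in progress.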
A framework for assessing online discussion using quantitative log file and rubric
Online discussions have been found to be a powerful platform for collaborative learning. Students interact online, and this interaction contributes to each individual student’s learning process. However, the issues that need to be addressed in online discussions are the assessment of students’ participation and their level of activity across numerous discussion threads. Currently, the assessment of online discussion is based on either content or interaction, and neither has standardized detailed descriptions or rubrics to determine the level of participation among the online interactants. To address this problem, this research investigated and verified the use of content combined with interaction as significant assessment criteria. The proposed framework used a Quantitative Log File (QLF) and rubrics to gauge the level of students’ online participation. The QLF content criteria included novelty and key knowledge, whereas the interaction criteria included pair response, final response, and interaction rate. The framework was applied in a prototype based on the MOODLE environment, called the Rubric Assessment Participation System (RAPS). Questionnaires were distributed to fifty respondents in order to justify the assessment criteria of online participation. Six users were selected to test the prototype, which combined content and interaction as assessment criteria in the rubrics, and the results showed that RAPS can be used as an assessment tool for online discussions.
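Of the QLF interaction criteria named above, the interaction rate is the most mechanical to derive from a log file. The sketch below shows one plausible reading of it, the share of a student's posts that respond to another participant. The data fields and the exact formula are assumptions for illustration, not the RAPS definition.

```python
# Hypothetical interaction-rate computation over a discussion log.
# Post fields and the rate formula are illustrative assumptions,
# not the QLF/RAPS specification.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Post:
    author: str
    thread_id: int
    reply_to: Optional[str]  # author replied to; None starts a thread


def interaction_rate(posts: list, student: str) -> float:
    """Fraction of a student's posts that reply to another participant."""
    mine = [p for p in posts if p.author == student]
    if not mine:
        return 0.0
    replies = [p for p in mine if p.reply_to is not None]
    return len(replies) / len(mine)


log = [
    Post("alice", 1, None),      # alice opens a thread
    Post("bob", 1, "alice"),     # bob replies to alice
    Post("alice", 1, "bob"),     # alice replies back
]
print(interaction_rate(log, "alice"))  # 0.5
```

A real log-file analysis would parse these fields out of the LMS event log (e.g. Moodle's forum tables) rather than build them by hand.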
Student-Summarized Videos in an Adaptive and Collaborative E-learning Environment (ACES)
The purpose of this research was to develop a collaborative e-Learning framework using summarized videos as learning media to provide a more efficient learning experience in which participants’ engagement and motivation are enhanced. The research aims to increase participants’ overall learning level, understanding, motivation, and communication skills.
For this research, a collaborative environment was built in which students participate in a video-sharing system that allows them to create their own summarized videos from existing course video material. Students can then share these videos with other system participants, who are able to view, rate and comment on them. Instructors upload the core video footage, which the students are able to edit and summarize.
Two experiments were run with live modules within the Department of Informatics: a pilot study and a full experiment. Feedback from the pilot study was used to develop the framework for the full study. The experiments involved pre- and post-participation surveys to measure satisfaction and awareness effects. In addition, system participation data was used to analyse engagement and other factors defining the outcomes of the experiment.
The findings showed a considerable increase in student satisfaction regarding their understanding and motivation with the video summarization tool used in the experiments. The results for the collaboration aspect of the experiment showed a slight increase in satisfaction with their learning level; however, it had minimal effect on students’ motivation and engagement, as no significant difference was noted after using the system.