    Towards an automatic real-time assessment of online discussions in computer-supported collaborative learning practices

    The discussion process plays an important social role in Computer-Supported Collaborative Learning (CSCL): participants can discuss the activity being performed, collaborate with each other through the exchange of ideas that may arise, propose new resolution mechanisms, and justify and refine their own contributions, acquiring new knowledge as a result. Indeed, learning by discussion, when applied to collaborative learning scenarios, can provide significant benefits for students in collaborative learning and in education in general. As a result, current educational organizations incorporate in-class online discussions into web-based courses as part of the very rationale of their pedagogical models. However, online discussions as collaborative learning activities usually attract a great deal of participation and many contributions, which makes the monitoring and assessment tasks time-consuming, tedious and error-prone. It is especially hard, if not impossible, for humans to manually deal with the sequences of hundreds of contributions making up the discussion threads and with the relations between these contributions. As a result, current assessment of online discussions is restricted to evaluating the content quality of contributions after the completion of the collaborative learning task, and it neglects the essential issue of continuously assessing the knowledge building as a whole while it is still being generated. In this paper, we propose a multidimensional model, based on the analysis of data from online collaborative discussion interaction, that provides a first step towards automatic assessment in (almost) real time. The context of this study is a real online discussion experience that took place at the Open University of Catalonia. Peer reviewed. Postprint (published version).

    Penilaian Esai Jawaban Bahasa Indonesia Menggunakan Metode SVM-LSA dengan Fitur Generik (Scoring Indonesian-Language Essay Answers Using an SVM-LSA Method with Generic Features)

    This paper examines a solution to the problem of automatically assessing essay answers by combining a support vector machine (SVM), as an automatic text classification technique, with LSA, as an attempt to deal with synonymy and polysemy among index terms. Unlike the usual essay scoring systems, which use index terms as features, the essay-answer assessment process uses generic features, which allow an essay scoring model to be tested on a variety of different questions. With these generic features, one does not need to retrain the model in order to assess essay answers to new questions. The features include the percentage of keyword occurrences, the similarity of the essay answer to the reference answer, the percentage of key ideas, the percentage of wrong ideas, and the percentage of keyword synonyms. The test results also show that the proposed method achieves higher scoring accuracy than other methods, such as SVM or LSA using index terms as machine learning features.
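
    As an illustration of the generic-feature idea described above, the following sketch computes two question-independent features in plain Python: keyword coverage and similarity to a reference answer. Plain term-frequency cosine similarity stands in for the paper's LSA step, and the example texts and keyword list are hypothetical.

    ```python
    import math
    import re

    def tokenize(text):
        """Lowercase a text and split it into word tokens."""
        return re.findall(r"[a-z0-9]+", text.lower())

    def keyword_coverage(answer, keywords):
        """Fraction of the expected keywords that occur in the answer."""
        tokens = set(tokenize(answer))
        return sum(1 for k in keywords if k.lower() in tokens) / len(keywords)

    def similarity(a, b):
        """Cosine similarity between the term-frequency vectors of two texts."""
        ta, tb = tokenize(a), tokenize(b)
        fa = {t: ta.count(t) for t in set(ta)}
        fb = {t: tb.count(t) for t in set(tb)}
        dot = sum(fa[t] * fb.get(t, 0) for t in fa)
        na = math.sqrt(sum(v * v for v in fa.values()))
        nb = math.sqrt(sum(v * v for v in fb.values()))
        return dot / (na * nb) if na and nb else 0.0

    def generic_features(answer, reference, keywords):
        """Question-independent feature values for one essay answer."""
        return {
            "keyword_coverage": keyword_coverage(answer, keywords),
            "reference_similarity": similarity(answer, reference),
        }

    # Hypothetical answer, reference answer, and keyword list.
    features = generic_features(
        "Photosynthesis converts light energy into chemical energy in plants.",
        "Plants use photosynthesis to turn light energy into chemical energy.",
        ["photosynthesis", "light", "energy"],
    )
    print(features)
    ```

    Because the feature names, not the vocabulary, define the input space, the same trained model can in principle score answers to different questions, which is the point of the generic-feature design.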

    TrustyTweet: An Indicator-based Browser-Plugin to Assist Users in Dealing with Fake News on Twitter

    The importance of dealing with fake news on social media has increased in both political and social contexts. While existing studies focus mainly on how to detect and label fake news, approaches that assist users in making their own assessments are largely missing. This article presents a study on how Twitter users' assessments can be supported by an indicator-based white-box approach. First, we gathered potential indicators of fake news that have proven promising in previous studies and that fit our idea of a white-box approach. Based on those indicators, we then designed and implemented the browser plugin TrustyTweet, which assists Twitter users in assessing tweets by showing politically neutral and intuitive warnings without creating reactance. Finally, we present the findings of our evaluations with a total of 27 participants, which lead to further design implications for approaches that assist users in dealing with fake news.
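
    The indicator-based, white-box idea can be sketched minimally: each tweet is checked against transparent heuristic indicators and the user sees how many were triggered. The three indicators below (excessive capitalisation, repeated punctuation, many hashtags) are hypothetical stand-ins, not the actual indicator set evaluated in the paper.

    ```python
    import re

    def tweet_indicators(text):
        """Check a tweet against three heuristic warning indicators."""
        words = text.split()
        caps_words = [w for w in words if len(w) > 2 and w.isupper()]
        return {
            "excessive_caps": len(caps_words) >= 3,
            "repeated_punctuation": bool(re.search(r"[!?]{3,}", text)),
            "many_hashtags": text.count("#") >= 5,
        }

    def warning_count(text):
        """Number of triggered indicators, surfaced to the user as a neutral hint."""
        return sum(tweet_indicators(text).values())

    print(warning_count("BREAKING!!! SHOCKING TRUTH they HIDE from you!!!"))
    print(warning_count("Just had a nice coffee with an old friend."))
    ```

    Because every indicator is individually inspectable, the user can see why a warning appeared, which is what distinguishes this white-box style from an opaque fake-news classifier.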

    A Knowledge Adoption Model Based Framework for Finding Helpful User-Generated Contents in Online Communities

    Many online communities allow their members to provide helpfulness judgments that can guide other users to useful content quickly. However, soliciting enough user participation in providing such feedback is a serious challenge in online communities. Existing studies on assessing the helpfulness of user-generated content are mainly based on heuristics and lack a unifying theoretical framework. In this article we propose a text classification framework for finding helpful user-generated content in online knowledge-sharing communities. The objective of our framework is to help a knowledge seeker find helpful information that can potentially be adopted. The framework is built on the Knowledge Adoption Model, which considers both content-based argument quality and information source credibility. We identify six argument-quality dimensions and three source-credibility dimensions based on information quality and psychological theories. Using data extracted from a popular online community, our empirical evaluations show that all the dimensions improve performance over a traditional text classification technique that considers word-based lexical features only.

    Enabling automatic just-in-time evaluation of in-class discussions in on-line collaborative learning practices

    Learning by discussion, when applied to online collaborative learning settings, can provide significant benefits for students in education in general. Indeed, the discussion process plays an important social role in collaborative learning practices. Participants can discuss the activity being performed, collaborate with each other through the exchange of ideas that may arise, propose new resolution mechanisms, justify and refine their own contributions and, as a result, acquire new knowledge. Considering these benefits, current educational organizations incorporate online discussions into web-based courses as part of the very rationale of their pedagogical models. However, in-class collaborative assignments usually attract a great deal of participation and many contributions, which makes the monitoring and assessment tasks of tutors and moderators time-consuming, tedious and error-prone. It is especially hard, if not impossible, for human tutors to manually deal with the sequences of hundreds of contributions making up the discussion threads and with the relations between these contributions. Consequently, tutoring during online discussions is usually restricted to evaluating the contributing effort and its quality after the collaborative learning activity has taken place, and thus neglects the essential issue of continuously considering the process of knowledge building while it is still being performed. In this paper, we propose a multidimensional model, based on the analysis of data from online collaborative discussion interactions, that provides a first step towards automatic evaluation in a just-in-time fashion. The context of this study is a real online discussion experience that took place at the Open University of Catalonia. Peer reviewed. Postprint (published version).
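
    As a rough illustration of deriving per-participant indicators from discussion data as it arrives, the sketch below computes two simple indicators (contributions posted, replies received) from an ordered log. The indicators and data format are hypothetical and are not the dimensions of the paper's actual model.

    ```python
    from collections import defaultdict

    def discussion_indicators(contributions):
        """contributions: (author, replied_to_author_or_None) pairs in
        posting order, as they would arrive in (almost) real time."""
        posts = defaultdict(int)     # participation: contributions per author
        replies = defaultdict(int)   # interaction: replies received per author
        for author, replied_to in contributions:
            posts[author] += 1
            if replied_to is not None:
                replies[replied_to] += 1
        return {a: {"posts": posts[a], "replies_received": replies[a]}
                for a in posts}

    # Hypothetical discussion log: ben replies to ana, ana to ben, carl to ana.
    log = [("ana", None), ("ben", "ana"), ("ana", "ben"), ("carl", "ana")]
    ind = discussion_indicators(log)
    print(ind)
    ```

    Because each contribution updates the counters incrementally, indicators like these can be recomputed after every post, which is what makes just-in-time (rather than post-hoc) evaluation feasible.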

    Assessing Order Effects in Online Community-based Health Forums

    Measuring the quality of health content in online health forums is a challenging task. The majority of existing measures are based on evaluations by forum users and may not be reliable. We employed machine learning techniques, text mining methods and Big Data platforms to construct four measures of textual quality that automatically determine the similarity of a given answer to professional answers. We then used them to assess the quality of 66,888 answers posted in the Yahoo! Answers Health section. All four measures of textual quality revealed higher quality for asker-selected best answers, indicating that askers, to some extent, have proper judgment in selecting best answers. We also studied the presence of order effects in online health forums. Our results suggest that the textual quality of the first answer positively influences the mean textual quality of subsequent answers and negatively influences the quantity of subsequent answers.
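
    One plausible way to realise "similarity of a given answer to professional answers" is a best-match cosine similarity over term-frequency vectors. The sketch below uses that assumption; the example answers are invented, and the paper's four actual measures are not reproduced here.

    ```python
    import math
    import re

    def tf_vector(text):
        """Term-frequency vector of a text."""
        vec = {}
        for token in re.findall(r"[a-z0-9]+", text.lower()):
            vec[token] = vec.get(token, 0) + 1
        return vec

    def cosine(u, v):
        """Cosine similarity between two sparse term-frequency vectors."""
        dot = sum(u[t] * v.get(t, 0) for t in u)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    def textual_quality(answer, professional_answers):
        """Score an answer by its closest match to any professional answer."""
        a = tf_vector(answer)
        return max((cosine(a, tf_vector(p)) for p in professional_answers),
                   default=0.0)

    # Hypothetical professional reference and two forum answers.
    pro = ["Drink fluids and rest; see a doctor if the fever lasts over three days."]
    good = "Rest and drink plenty of fluids, and see a doctor if the fever persists."
    weak = "No idea, sorry."
    print(round(textual_quality(good, pro), 3), textual_quality(weak, pro))
    ```

    A measure of this shape needs no per-answer human labels, which is what makes scoring tens of thousands of answers, and then testing for order effects across answer positions, tractable.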

    Social Data Mining for Crime Intelligence

    With the advancement of the Internet and related technologies, many traditional crimes have made the leap to digital environments. The successes of data mining in a wide variety of disciplines have given birth to crime analysis. Traditional crime analysis is mainly focused on understanding crime patterns; however, it is unsuitable for identifying and monitoring emerging crimes. The true nature of crime remains buried in unstructured content that represents the hidden story behind the data. User feedback leaves valuable traces that can be utilised to measure the quality of various aspects of products or services, and can also be used to detect, infer or predict crimes. As in any application of data mining, the data must meet a high quality standard in order to avoid erroneous conclusions. This thesis presents a methodology and practical experiments towards discovering whether (i) user feedback can be harnessed and processed for crime intelligence, (ii) criminal associations, structures and roles can be inferred among entities involved in a crime, and (iii) methods and standards can be developed for measuring, predicting and comparing the quality of social data instances and samples. It contributes to the theory, design and development of a novel framework for crime intelligence and of an algorithm for estimating social data quality by innovatively adapting methods for monitoring water contaminants. Several experiments were conducted, and the results obtained reveal the significance of this study in mining social data for crime intelligence and in developing social data quality filters and decision support systems.