
    Website evaluation measures, website credibility and user engagement for municipal website

This paper explores website evaluation measures for information-driven websites, specifically municipal electronic-government websites, with respect to website credibility and user engagement. Despite the overwhelming number of information sources in the online environment, government websites have become less preferred as providers of government information. Even with the rapid development and continuous assessment undertaken by government bodies to enhance their websites and encourage use, issues such as usability problems, low popularity rankings, and weak user engagement are still reported. The first part of this article therefore reviews existing website assessment measures developed by scholars and practitioners. The second part presents findings from a self-evaluation of ten municipal websites around the Klang Valley, Malaysia, in terms of popularity ranking and user-engagement measures (bounce rate, daily pageviews per visitor, and daily time on site). The literature review shows that few previous studies have included overall or multiple measures for evaluating information-driven websites. The estimated popularity rankings and user-engagement percentages of the municipal websites also show that improvement is still needed to make the gateway to Malaysian electronic government more favorable and engaging
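The three engagement measures named above can be computed from basic session logs. A minimal illustrative sketch follows; the session records and field names are assumptions for illustration, not data or code from the paper:

```python
# Illustrative computation of the three user-engagement measures named
# above: bounce rate, daily pageviews per visitor, and daily time on site.
# The session records and field names are hypothetical, not from the paper.

def engagement_metrics(sessions):
    """Each session is a dict with 'pageviews' (int) and 'seconds' (int)."""
    n = len(sessions)
    bounces = sum(1 for s in sessions if s["pageviews"] == 1)
    return {
        "bounce_rate": bounces / n,  # share of single-page visits
        "pageviews_per_visitor": sum(s["pageviews"] for s in sessions) / n,
        "time_on_site": sum(s["seconds"] for s in sessions) / n,  # mean seconds
    }

sessions = [
    {"pageviews": 1, "seconds": 10},
    {"pageviews": 4, "seconds": 180},
    {"pageviews": 3, "seconds": 110},
    {"pageviews": 1, "seconds": 5},
]
m = engagement_metrics(sessions)
# → bounce_rate 0.5, pageviews_per_visitor 2.25, time_on_site 76.25
```

A high bounce rate combined with low time on site is what the paper's self-evaluation flags as weak engagement.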

    When Automated Assessment Meets Automated Content Generation: Examining Text Quality in the Era of GPTs

The use of machine learning (ML) models to assess and score textual data has become increasingly pervasive in an array of contexts, including natural language processing, information retrieval, search and recommendation, and credibility assessment of online content. A significant disruption at the intersection of ML and text is the emergence of text-generating large language models such as generative pre-trained transformers (GPTs). We empirically assess the differences in how ML-based scoring models trained on human content assess the quality of content generated by humans versus GPTs. To do so, we propose an analysis framework that encompasses essay-scoring ML models, human- and ML-generated essays, and a statistical model that parsimoniously considers the impact of respondent type, prompt genre, and the ML model used for assessment. A rich testbed of 18,460 human-generated and GPT-based essays is utilized. Results of our benchmark analysis reveal that transformer pretrained language models (PLMs) score human essay quality more accurately than CNN/RNN and feature-based ML methods. Interestingly, we find that the transformer PLMs tend to score GPT-generated text 10-15% higher on average, relative to human-authored documents. Conversely, traditional deep learning and feature-based ML models score human text considerably higher. Further analysis reveals that although the transformer PLMs are fine-tuned exclusively on human text, they attend more prominently to certain tokens appearing only in GPT-generated text, possibly due to familiarity/overlap in pre-training. Our framework and results have implications for text classification settings where automated scoring of text is likely to be disrupted by generative AI.
Comment: Data available at: https://github.com/nd-hal/automated-ML-scoring-versus-generatio
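The core comparison in this benchmark, i.e. how a trained scorer's outputs differ by respondent type, can be sketched roughly as follows. The scoring function below is a trivial stand-in, not one of the paper's models:

```python
# Sketch of comparing automated scores for human- vs GPT-authored essays.
# The scorer here is a length-based stand-in, NOT a real quality model.

def score(essay: str) -> float:
    # Toy proxy: longer essays score higher, capped at 1.0.
    return min(len(essay.split()) / 50.0, 1.0)

def mean_score_gap(human_essays, gpt_essays):
    """Mean score of GPT essays minus mean score of human essays.

    A positive gap corresponds to the paper's finding for transformer
    PLMs (GPT text scored higher); a negative gap corresponds to the
    finding for traditional deep learning and feature-based models.
    """
    mean_human = sum(score(e) for e in human_essays) / len(human_essays)
    mean_gpt = sum(score(e) for e in gpt_essays) / len(gpt_essays)
    return mean_gpt - mean_human

human = ["a clear thesis with ample supporting detail " * 10]
gpt = ["a short generated reply"]
gap = mean_score_gap(human, gpt)
```

In the paper's own statistical model, this raw gap is additionally conditioned on prompt genre and on which ML model produced the score.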

    Increasing the credibility of scientific dissemination using crowdsourcing

Abstract. This thesis introduces Article Enhancer, a semi-automated web application that uses crowdsourcing services, specifically Amazon's Mechanical Turk platform, to augment articles on demand with referencing content gathered from crowd workers. The main goal of Article Enhancer is to address the question of how scientific articles can be made more credible before dissemination to the public. The application serves as a tool that helps users find suitable supporting content for their articles in a novel way, removing the manual work of doing it themselves. Media literacy, social media, fake news, and crowdsourcing are discussed as part of the related work, and tools that offer similar functionality are reviewed. Furthermore, the system design and implementation of Article Enhancer are presented. It is important to note that the referencing content provided through Article Enhancer comes from already existing online content. Although Article Enhancer is a semi-automated system, its strongest point compared to other systems is that it does not require extra human effort to enrich articles, especially with visualization content; by providing content that already exists on the web, it avoids the process of creating new content, making it a fresh approach in this line of software service. To evaluate Article Enhancer, we deployed the web app in a real-life setting: Tellus, a space oriented towards students at the University of Oulu. This testing helped determine that the system appears alluring and attractive to new users, and many users found Article Enhancer unique and engaging after a first encounter. Feedback also shows that adding and embedding content is an innovative way to make articles more credible in the eye of the reader

    Deception Detection and Rumor Debunking for Social Media

Abstract. The main premise of this chapter is that the time is ripe for more extensive research and development of social media tools that filter out intentionally deceptive information such as deceptive memes, rumors and hoaxes, fake news, and other fake posts, tweets, and fraudulent profiles. Social media users' awareness of intentional manipulation of online content appears to be relatively low, while reliance on unverified information (often obtained from strangers) is at an all-time high. I argue there is a need for content verification, systematic fact-checking, and filtering of social media streams. This literature survey provides a background for understanding current automated deception detection research, rumor debunking, and broader content verification methodologies, suggests a path towards hybrid technologies, and explains why the development and adoption of such tools might still be a significant challenge

    CSI: A Hybrid Deep Model for Fake News Detection

The topic of fake news has drawn attention from both the public and the academic communities. Such misinformation has the potential to affect public opinion, providing an opportunity for malicious parties to manipulate the outcomes of public events such as elections. Because such high stakes are at play, automatically detecting fake news is an important yet challenging problem that is not yet well understood. Nevertheless, there are three generally agreed-upon characteristics of fake news: the text of an article, the user response it receives, and the source users promoting it. Existing work has largely focused on tailoring solutions to one particular characteristic, which has limited their success and generality. In this work, we propose a model that combines all three characteristics for more accurate and automated prediction. Specifically, we incorporate the behavior of both parties, users and articles, and the group behavior of users who propagate fake news. Motivated by the three characteristics, we propose a model called CSI, which is composed of three modules: Capture, Score, and Integrate. The first module is based on the response and text; it uses a Recurrent Neural Network to capture the temporal pattern of user activity on a given article. The second module learns the source characteristic based on the behavior of users, and the two are integrated with the third module to classify an article as fake or not. Experimental analysis on real-world data demonstrates that CSI achieves higher accuracy than existing models and extracts meaningful latent representations of both users and articles.
Comment: In Proceedings of the 26th ACM International Conference on Information and Knowledge Management (CIKM) 201
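The Capture-then-Integrate idea described above, i.e. an RNN summarizing the temporal pattern of user activity on an article, combined with a per-source score, can be sketched in miniature. The weights, features, and threshold below are toy values chosen for illustration, not the CSI model's actual parameters or architecture:

```python
import math

# Toy sketch of the Capture/Integrate idea: a recurrent unit consumes the
# per-time-step user-activity signal for an article and yields a hidden
# state summarizing its temporal pattern; that summary is then combined
# with a per-source suspicion score. All values here are illustrative.

def rnn_capture(activity, w_in=0.8, w_rec=0.5):
    """Single-unit tanh RNN over a sequence of activity features (floats)."""
    h = 0.0
    for x in activity:
        h = math.tanh(w_in * x + w_rec * h)
    return h

def classify(article_activity, source_score, threshold=0.9):
    """Integrate the temporal summary with the source score; True = flagged."""
    return rnn_capture(article_activity) * source_score > threshold

# A sharp burst of activity from a suspicious source is flagged,
# while steady low activity is not.
flagged = classify([2.0, 2.0], source_score=1.0)
calm = classify([0.1, 0.1], source_score=1.0)
```

In the real model, the Capture module is a multi-dimensional RNN over learned response/text features and the Score module learns source behavior from data; the sketch only shows how the two signals are integrated into a single decision.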