
    The Turn to Artificial Intelligence in Governing Communication Online

    Presently, we are witnessing an intense debate about technological advancements in artificial intelligence (AI) research and their deployment in various societal domains. Media and communications is one of the most prominent and contested of these fields: bots, voice assistants, automated (fake) news generation, content moderation and filtering are all examples of how AI and machine learning are transforming the dynamics and order of digital communication. On 20 March 2018 the Alexander von Humboldt Institute for Internet and Society, together with the non-governmental organisation Access Now, hosted the one-day expert workshop “The turn to AI in governing communication online”. International experts from academia, politics, civil society and business gathered in Berlin to discuss the complex socio-technical questions raised by artificial intelligence technologies and machine learning systems, the extent of their deployment in content moderation, and the range of approaches to understanding the status and future impact of AI systems for governing social communication on the internet. This workshop report summarises and documents the authors’ main takeaways from the discussions; the comments and questions raised, and the experts’ responses, also fed into the report. The report has been distributed among workshop participants and is intended to contribute current perspectives to the discourse on AI and the governance of communication.

    Situational Awareness, Driver’s Trust in Automated Driving Systems and Secondary Task Performance

    Driver assistance systems, also called automated driving systems, allow drivers to immerse themselves in non-driving-related tasks. Unfortunately, drivers may not trust the automated driving system, which prevents them from either handing over the driving task or fully focusing on the secondary task. We assert that enhancing situational awareness can increase a driver's trust in automation: situational awareness should increase a driver's trust and lead to better secondary task performance. This study manipulated drivers' situational awareness by providing them with different types of information: the control condition provided no information to the driver, the low condition provided a status update, while the high condition provided a status update and a suggested course of action. Data collected included measures of trust, trusting behavior, and task performance through surveys, eye-tracking, and heart rate data. Results show that situational awareness both promoted and moderated the impact of trust in the automated vehicle, leading to better secondary task performance. This result was evident in measures of self-reported trust and trusting behavior.

    This research was supported in part by the Automotive Research Center (ARC) at the University of Michigan, with funding from government contract Department of the Army W56HZV-14-2-0001 through the U.S. Army Tank Automotive Research, Development, and Engineering Center (TARDEC). The authors acknowledge and greatly appreciate the guidance of Victor Paul (TARDEC), Ben Haynes (TARDEC), and Jason Metcalfe (ARL) in helping design the study. The authors would also like to thank Quantum Signal, LLC, for providing its ANVEL software and invaluable development support.

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/148141/1/SA Trust - SAE- Public.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/148141/4/Petersen et al. 2019.pdf
    Description of Petersen et al. 2019.pdf: Final Publication Version

    SoK: Content Moderation in Social Media, from Guidelines to Enforcement, and Research to Practice

    To counter online abuse and misinformation, social media platforms have been establishing content moderation guidelines and employing various moderation policies. The goal of this paper is to study these community guidelines and moderation practices, as well as the relevant research publications, to identify the research gaps, differences in moderation techniques, and challenges that should be tackled by the social media platforms and the research community at large. In this regard, we study, analyze, and consolidate the content moderation guidelines and practices of the fourteen most popular social media platforms within the US jurisdiction. We then introduce three taxonomies drawn from this analysis as well as from over one hundred interdisciplinary research papers on moderation strategies. We identify the differences between the content moderation employed in mainstream social media platforms and that of fringe platforms. We also highlight the implications of Section 230, the need for transparency and opacity in content moderation, why platforms should shift from a one-size-fits-all model to a more inclusive model, and lastly why there is a need for a collaborative human-AI system.

    How Crowd Worker Factors Influence Subjective Annotations: A Study of Tagging Misogynistic Hate Speech in Tweets

    Crowdsourced annotation is vital both to collecting labelled data for training and testing automated content moderation systems and to supporting human-in-the-loop review of system decisions. However, annotation tasks such as judging hate speech are subjective and thus highly sensitive to biases stemming from annotator beliefs, characteristics and demographics. We conduct two crowdsourcing studies on Mechanical Turk to examine annotator bias in labelling sexist and misogynistic hate speech. Results from 109 annotators show that annotator political inclination, moral integrity, personality traits, and sexist attitudes significantly impact annotation accuracy and the tendency to tag content as hate speech. In addition, semi-structured interviews with nine crowd workers provide further insights into the influence of subjectivity on annotations. In exploring how workers interpret a task - shaped by complex negotiations between platform structures, task instructions, subjective motivations, and external contextual factors - we see annotations not only impacted by worker factors but also simultaneously shaped by the structures under which they labour.

    Comment: Accepted to the 11th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2023).
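    The abstract does not describe the statistical models behind these findings, but the kind of association it reports (worker attributes shifting the odds of tagging a tweet as hate speech) can be probed with a simple logistic regression. The sketch below is illustrative only: the attribute names and the synthetic data are placeholders, and this is not the authors' analysis.

```python
"""Illustrative sketch (not the paper's analysis): relating crowd-worker
attribute scores to the tendency to tag a tweet as misogynistic hate speech.
Attribute names and data below are synthetic placeholders."""
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic per-annotation rows: worker attribute scores and the tag decision.
n = 500
X = np.column_stack([
    rng.normal(size=n),   # e.g. political-inclination score (placeholder)
    rng.normal(size=n),   # e.g. hostile-sexism scale score (placeholder)
    rng.normal(size=n),   # e.g. agreeableness score (placeholder)
])
# Synthetic outcome with a weak dependence on the second attribute.
p = 1 / (1 + np.exp(-(0.2 + 0.8 * X[:, 1])))
y = rng.binomial(1, p)

# Logistic regression: does each attribute shift the odds of tagging?
model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(model.summary())
```

    In a real replication each row would pair one worker's survey scores with one of their annotation decisions, and a mixed-effects variant would account for repeated annotations by the same worker.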

    Reliable Decision from Multiple Subtasks through Threshold Optimization: Content Moderation in the Wild

    Social media platforms struggle to protect users from harmful content through content moderation. These platforms have recently leveraged machine learning models to cope with the vast amount of user-generated content created daily. Since moderation policies vary depending on countries and types of products, it is common to train and deploy the models per policy. However, this approach is highly inefficient, especially when the policies change, requiring dataset re-labeling and model re-training on the shifted data distribution. To alleviate this cost inefficiency, social media platforms often employ third-party content moderation services that provide prediction scores for multiple subtasks, such as predicting the presence of underage personnel, rude gestures, or weapons, instead of directly providing final moderation decisions. However, making a reliable automated moderation decision for a specific target policy from the prediction scores of the multiple subtasks has not been widely explored yet. In this study, we formulate real-world scenarios of content moderation and introduce a simple yet effective threshold optimization method that searches for the optimal thresholds of the multiple subtasks to make a reliable moderation decision in a cost-effective way. Extensive experiments demonstrate that our approach outperforms existing threshold optimization methods and heuristics on content moderation.

    Comment: WSDM 2023 (Oral Presentation).
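    The abstract does not detail the search procedure itself, but the core idea (choosing per-subtask thresholds against a labelled validation set for the target policy) can be sketched in a few lines. The subtask names, the OR-style decision rule, the coarse grid search, and the F1 objective below are assumptions made for illustration, not the method from the paper.

```python
"""Minimal sketch of per-subtask threshold search for a moderation policy.
Assumptions (not from the paper): three subtasks, an OR decision rule,
a coarse grid search, and F1 on a labelled validation set as the objective."""
from itertools import product

import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Synthetic validation data: subtask scores in [0, 1) and policy labels.
SUBTASKS = ["underage", "rude_gesture", "weapon"]        # assumed subtask names
scores = {name: rng.random(1000) for name in SUBTASKS}   # third-party API scores
labels = rng.integers(0, 2, size=1000)                   # 1 = violates target policy

def decide(subtask_scores, thresholds):
    """Flag an item if any subtask score exceeds its threshold (OR rule)."""
    flags = np.zeros(len(labels), dtype=bool)
    for name, thr in thresholds.items():
        flags |= subtask_scores[name] >= thr
    return flags.astype(int)

# Coarse grid search over per-subtask thresholds on the validation set.
grid = np.linspace(0.1, 0.9, 9)
best_f1, best_thresholds = -1.0, None
for combo in product(grid, repeat=len(SUBTASKS)):
    thresholds = dict(zip(SUBTASKS, combo))
    f1 = f1_score(labels, decide(scores, thresholds))
    if f1 > best_f1:
        best_f1, best_thresholds = f1, thresholds

print(f"best F1={best_f1:.3f} at thresholds={best_thresholds}")
```

    Under this framing, a policy change only requires re-labelling a validation set and re-running the threshold search; the underlying subtask models stay untouched, which is the cost advantage the abstract points to.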

    Scaling Culture in Blockchain Gaming: Generative AI and Pseudonymous Engagement

    Managing rapidly growing decentralized gaming communities brings unique challenges at the nexus of cultural economics and technology. This paper introduces a streamlined analytical framework that utilizes Large Language Models (LLMs), in this instance open-access generative pre-trained transformer (GPT) models, offering an efficient solution with deeper insights into community dynamics. The framework aids moderators in identifying pseudonymous actor intent, moderating toxic behavior, rewarding desired actions to avoid unintended consequences of blockchain-based gaming, and gauging community sentiment as communities venture into metaverse platforms and plan for hypergrowth. This framework strengthens community controls, eases onboarding, and promotes a common moral mission across communities while reducing agency costs by 95 percent. Highlighting the transformative role of generative AI, the paper emphasizes its potential to redefine the cost of cultural production, and it showcases the utility of GPTs in digital community management, expanding their implications in cultural economics and transmedia storytelling.
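    The paper does not publish its prompts or model choices, so the sketch below only illustrates the general pattern: sending a pseudonymous community message to an open-access generative model through the Hugging Face transformers pipeline and reading back a tag. The model name, prompt wording, and label set are placeholders, not the framework described in the paper.

```python
"""Illustrative sketch: tagging a pseudonymous community message with an
open-access generative model via the Hugging Face `transformers` pipeline.
Model name, prompt, and labels are placeholders chosen for this example."""
from transformers import pipeline

# Any open-access causal LM can be substituted; an instruct-tuned model works better.
MODEL_NAME = "gpt2"  # placeholder model
generator = pipeline("text-generation", model=MODEL_NAME)

PROMPT_TEMPLATE = (
    "Classify the following game-community message.\n"
    "Message: {message}\n"
    "Answer with one word.\n"
    "Toxicity (toxic/ok): "
)

def tag_message(message: str) -> str:
    """Return the model's raw continuation for a single message."""
    prompt = PROMPT_TEMPLATE.format(message=message)
    out = generator(prompt, max_new_tokens=5, do_sample=False)
    # The pipeline returns the prompt plus the continuation; keep only the tail.
    return out[0]["generated_text"][len(prompt):].strip()

if __name__ == "__main__":
    print(tag_message("gg everyone, see you in the next raid!"))
```

    In practice such tags would be aggregated over channels and time to surface sentiment shifts and toxic actors to moderators, which is the community-management use the abstract describes.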

    From Scalability to Subsidiarity in Addressing Online Harm

    Large social media platforms are generally designed for scalability—the ambition to increase in size without a fundamental change in form. This means that to address harm among users, they favor automated moderation wherever possible and typically apply a uniform set of rules. This article contrasts scalability with restorative and transformative justice approaches to harm, which are usually context-sensitive, relational, and individualized. We argue that subsidiarity—the principle that local social units should have meaningful autonomy within larger systems—might foster the balance between context and scale that is needed for improving responses to harm.