3,620 research outputs found

    More Specificity, More Attention to Social Context: Reframing How We Address "Bad Actors"

    Get PDF
    To address "bad actors" online, I argue for more specific definitions of acceptable and unacceptable behaviors and explicit attention to the social structures in which behaviors occur. Comment: Paper submitted to CHI 2018 Workshop: Understanding "Bad Actors" Online.

    Sticks and Stones May Break My Bones but Words Will Never Hurt Me...Until I See Them: A Qualitative Content Analysis of Trolls in Relation to the Gricean Maxims and (IM)Polite Virtual Speech Acts

    Get PDF
    The troll is one of the most obtrusive and disruptive bad actors on the internet. Unlike other bad actors, the troll interacts on a more personal and intimate level with other internet users. Social media platforms, online communities, comment boards, and chatroom forums provide them with this opportunity. What distinguishes these social provocateurs from other bad actors are their virtual speech acts and online behaviors. These acts aim to incite anger, shame, or frustration in others through the weaponization of words, phrases, and other rhetoric. Online trolls come in all forms and use various speech tactics to insult and demean their target audiences. The goal of this research is to investigate trolls' virtual speech acts and the impact of troll-like behaviors on online communities. Using Gricean maxims and politeness theory, this study seeks to identify common vernacular, word usage, and other language behaviors that trolls use to divert the conversation, insult others, and possibly affect fellow internet users' mental health and well-being.

    Bad Actors: Authenticity, Inauthenticity, Speech, and Capitalism

    Get PDF
    “Authenticity” has evolved into an important value that guides social media companies’ regulation of online speech. It is enforced through rules and practices that include real-name policies, Terms of Service requiring users to present only accurate information about themselves, community guidelines that prohibit “coordinated inauthentic behavior,” verification practices, product features, and more. This Article critically examines authenticity regulation by the social media industry, including companies’ claims that authenticity is a moral virtue, an expressive value, and a pragmatic necessity for online communication. It explains how authenticity regulation provides economic value to companies engaged in “information capitalism,” “data capitalism,” and “surveillance capitalism.” It also explores how companies’ self-regulatory focus on authenticity shapes users’ views about objectionable speech, upends traditional commitments to pseudonymous political expression, and encourages collaboration between the State and private companies. The Article concludes that “authenticity,” as conceptualized by the industry, is not an important value for users on par with privacy or dignity, but that it offers business value to companies. Authenticity regulation also provides many of the same opportunities for viewpoint discrimination as does garden-variety content moderation.

    Exclude the Bad Actors or Learn About The Group

    Get PDF
    In public goods environments, the threat to punish non-contributors may increase contributions. However, this threat may make players' contributions less informative about their true social preferences. This lack of information may lead to lower contributions after the threat disappears, as we show in a two-stage model with selfish and conditionally cooperative types. Under specified conditions, welfare may be improved by committing not to punish or exclude. Our laboratory evidence supports this. Contributions under the threat of targeted punishment were less informative of subjects' later choices than contributions made anonymously. Subjects also realised that these were less informative, and their incentivized predictions reflected this understanding. We find evidence of conditional cooperation driven by beliefs about others' contributions. Overall, our anonymous treatment led to lower first-stage contributions but significantly higher second-stage contributions than our revealed treatment. Our model and evidence may help explain why anonymous contributions are often encouraged in the real world.
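
    The two-stage logic described in this abstract lends itself to a quick simulation. The sketch below is a minimal illustration of the mechanism, not the paper's actual model: the group size, the prior share of conditionally cooperative types, the belief threshold, and the belief-updating rule are all assumed values chosen for illustration.

```python
# Minimal sketch of the informativeness mechanism: under a punishment
# threat, stage-1 contributions reveal nothing about types; anonymous
# stage-1 contributions reveal the share of conditional cooperators.
# All parameters are illustrative assumptions, not the paper's estimates.
import random

N_GROUPS = 10_000
GROUP_SIZE = 4
P_COND_COOP = 0.6        # assumed prior share of conditional cooperators
BELIEF_THRESHOLD = 0.7   # a conditional cooperator contributes in stage 2
                         # only if she believes at least this share cooperates

def simulate(treatment: str) -> tuple[float, float]:
    """Return mean stage-1 and stage-2 contribution rates."""
    s1_total = s2_total = 0
    for _ in range(N_GROUPS):
        types = ["cond" if random.random() < P_COND_COOP else "selfish"
                 for _ in range(GROUP_SIZE)]
        # Stage 1: the punishment threat forces everyone to contribute;
        # anonymously, only conditional cooperators do.
        if treatment == "threat":
            s1 = [1] * GROUP_SIZE
        else:
            s1 = [1 if t == "cond" else 0 for t in types]
        # Threat-induced contributions carry no information, so beliefs
        # stay at the prior; anonymous contributions reveal the true share.
        belief = P_COND_COOP if treatment == "threat" else sum(s1) / GROUP_SIZE
        # Stage 2: no threat. Selfish types free-ride; conditional
        # cooperators contribute only if their belief clears the threshold.
        s2 = [1 if t == "cond" and belief >= BELIEF_THRESHOLD else 0
              for t in types]
        s1_total += sum(s1)
        s2_total += sum(s2)
    n = N_GROUPS * GROUP_SIZE
    return s1_total / n, s2_total / n

for treatment in ("threat", "anonymous"):
    s1, s2 = simulate(treatment)
    print(f"{treatment:9s}  stage 1: {s1:.2f}  stage 2: {s2:.2f}")
```

    Under these assumed parameters the revealed treatment yields full first-stage contributions but no second-stage cooperation, while the anonymous treatment yields lower first-stage but positive second-stage contributions, matching the direction of the reported result.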

    An Outlook on Whether Competition in High-Voltage Transmission Line Development is Necessary?

    Get PDF
    Various concerns, such as climate change, supply issues, and bad actors with vast energy resources, have heightened global interest in improving power grid security and efficiency. One method that has gained popularity is the use of high-voltage power lines: cables that transport energy over long distances with minimal power losses along the route. The People’s Republic of China has been at the forefront of implementing high-voltage power lines within its borders. For example, the Changji-to-Guquan project, which began in 2019, consists of a 1,100-kV direct current line spanning 2,046 miles, “roughly the distance between Los Angeles and Cleveland.” This power line can transmit 12,000 MW from China’s rural western territories to the more populated east, enough electricity for 50 million households. This post was originally published in the Cardozo International & Comparative Law Review on April 27, 2023.
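
    As a quick sanity check on the quoted figures, the short calculation below is arithmetic only; the capacity and household numbers are taken from the post itself.

```python
# Back-of-the-envelope check of the Changji-to-Guquan figures quoted above:
# 12,000 MW of transmission capacity shared across 50 million households.
capacity_w = 12_000 * 1_000_000      # 12,000 MW expressed in watts
households = 50_000_000
per_household_w = capacity_w / households
monthly_kwh = per_household_w * 24 * 30 / 1_000  # continuous draw, 30 days
print(f"{per_household_w:.0f} W per household, ~{monthly_kwh:.0f} kWh/month")
# -> 240 W per household, ~173 kWh/month, a plausible average residential load
```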

    Cleaning Up the Streets: Understanding Motivations, Mental Models, and Concerns of Users Flagging Social Media Posts

    Full text link
    Social media platforms offer flagging, a technical feature that empowers users to report inappropriate posts or bad actors, to reduce online harms. While flags are often presented as flimsy icons, their simple interface disguises complex underlying interactions among users, algorithms, and moderators. Through semi-structured interviews with 22 active social media users who had recently flagged, we examine their understanding of flagging procedures, explore the factors that motivate and demotivate their engagement in flagging, and surface their emotional, cognitive, and privacy concerns. Our findings show that a belief in generalized reciprocity motivates flag submissions, but deficiencies in procedural transparency create gaps in users' mental models of how platforms process flags. We highlight how flags raise questions about the distribution of labor and responsibility between platforms and users for addressing online harm. We recommend innovations in the flagging design space that assist user comprehension and facilitate granular status checks while aligning with users' privacy and security expectations. Comment: Under review at ACM CSCW.
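
    One of the recommendations above, granular status checks, can be made concrete with a small sketch. The lifecycle below is a hypothetical illustration of what a platform could expose to a flagging user; the states and transitions are assumptions, not any platform's actual moderation pipeline.

```python
# Hypothetical flag lifecycle a platform could surface for granular status
# checks. States and transitions are illustrative assumptions only.
from enum import Enum, auto

class FlagStatus(Enum):
    SUBMITTED = auto()      # user filed the flag
    QUEUED = auto()         # awaiting algorithmic triage
    UNDER_REVIEW = auto()   # routed to a human moderator
    ACTION_TAKEN = auto()   # post removed or account sanctioned
    DISMISSED = auto()      # reviewed and found non-violating

# Transitions a status-check UI could show without revealing who acted,
# keeping the reporter informed while preserving moderator privacy.
TRANSITIONS = {
    FlagStatus.SUBMITTED: {FlagStatus.QUEUED},
    FlagStatus.QUEUED: {FlagStatus.UNDER_REVIEW, FlagStatus.DISMISSED},
    FlagStatus.UNDER_REVIEW: {FlagStatus.ACTION_TAKEN, FlagStatus.DISMISSED},
}
```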

    Detecting The Corruption Of Online Questionnaires By Artificial Intelligence

    Full text link
    Online questionnaires that use crowd-sourcing platforms to recruit participants have become commonplace due to their ease of use and low costs. Artificial Intelligence (AI) based Large Language Models (LLMs) have made it easy for bad actors to automatically fill in online forms, including generating meaningful text for open-ended tasks. These technological advances threaten the data quality of studies that use online questionnaires. This study tested whether text generated by an AI for the purpose of an online study can be detected by both humans and automatic AI detection systems. While humans were able to correctly identify authorship of text above chance level (76 percent accuracy), their performance was still below what would be required to ensure satisfactory data quality. Researchers currently have to rely on the disinterest of bad actors for open-ended responses to remain a useful tool for ensuring data quality. Automatic AI detection systems are currently completely unusable. If AIs become too prevalent in submitting responses, then the costs associated with detecting fraudulent submissions will outweigh the benefits of online questionnaires. Individual attention checks will no longer be a sufficient tool to ensure good data quality. This problem can only be systematically addressed by crowd-sourcing platforms. They cannot rely on automatic AI detection systems, and it is unclear how they can ensure data quality for their paying clients.
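
    Whether 76 percent accuracy is reliably above the 50 percent chance level depends on the number of judgments, which this listing does not report. The sketch below shows the standard significance check under a hypothetical sample size.

```python
# Hypothetical significance check of 76% human accuracy against the 50%
# chance level. The number of authorship judgments is not reported in the
# abstract, so n here is an assumed, illustrative value.
from scipy.stats import binomtest

n = 200                    # assumed number of judgments (hypothetical)
correct = round(0.76 * n)  # 76 percent accuracy as reported
result = binomtest(correct, n, p=0.5, alternative="greater")
print(f"{correct}/{n} correct, one-sided p = {result.pvalue:.2e}")
```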

    Understanding the voluntary moderation practices in live streaming communities

    Get PDF
    Harmful content, such as hate speech, online abuse, harassment, and cyberbullying, proliferates across online communities. Live streaming, a novel form of online community, lets thousands of users (viewers) be entertained by and engage with a broadcaster (streamer) in real time in the chatroom. While the streamer has the camera on and the screen shared, tens of thousands of viewers are watching and messaging in real time, raising concerns about harassment and cyberbullying. To regulate harmful content (toxic messages in the chatroom), streamers rely on a combination of automated tools and volunteer human moderators (mods) to block users or remove content, a practice termed content moderation. Live streaming, as a mixed medium, has unique attributes such as synchronicity and authenticity that make real-time content moderation challenging. Given the high interactivity and ephemerality of live text-based communication in the chatroom, mods have to make decisions under time constraints and with little instruction, suffering cognitive overload and an emotional toll. While much previous work has focused on moderation in asynchronous online communities and social media platforms, very little is known about human moderation in synchronous online communities, where users interact live and decisions must be made in a timely manner. It is necessary to understand mods' moderation practices in live streaming communities, given their role in supporting community growth. This dissertation centers on volunteer mods in live streaming communities to explore their moderation practices and relationships with streamers and viewers. Through quantitative and qualitative methods, it focuses on three aspects: the strategies and tools used by moderators, the mental model and decision-making process applied toward violators, and the conflict management present in the moderation team. The dissertation uses various socio-technical theories to explain mods' individual and collaborative practices and suggests several design interventions to facilitate the moderation process in live streaming communities.