
    Does Algorithmic Awareness Inculcate Mindful News Consumption in Social Media?

    Social media curation algorithms can distort reality and encourage mindless news consumption behaviors. This paper proposes algorithmic awareness as a plausible tool for instilling mindful news consumption on social media platforms. Specifically, the paper investigates the effects of an Algorithmic Awareness (AA) intervention on 1) users’ perceived awareness of algorithmic curation and the filter bubble effect and 2) news consumption behaviors on social media platforms. Based on concepts from information processing and mindfulness, we propose that imparting algorithmic awareness can prompt social media users to make more mindful decisions about whether to believe news posts and whether to perform activities that contribute to their spread (e.g., reading, sharing, fact-checking, customizing their feed). To this end, we design an explanation-based intervention and propose to conduct a between-subjects online experiment.
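The feedback loop this abstract alludes to, in which engagement-driven ranking progressively narrows what a user sees, can be sketched as a toy simulation. Everything below (the topic set, the scoring rule, the reinforcement step) is an illustrative assumption for intuition, not a method from the paper:

```python
import random

random.seed(0)

TOPICS = ["politics", "sports", "science", "culture"]

def curate(posts, interest, k=3):
    """Rank posts by predicted engagement (here: the user's learned
    per-topic interest) and keep only the top k for the feed."""
    return sorted(posts, key=lambda p: interest[p["topic"]], reverse=True)[:k]

def simulate(rounds=20):
    # Start with uniform interest across all topics.
    interest = {t: 1.0 for t in TOPICS}
    for _ in range(rounds):
        posts = [{"topic": random.choice(TOPICS)} for _ in range(10)]
        for post in curate(posts, interest):
            # Each shown post slightly reinforces its topic: the
            # feedback loop behind the "filter bubble" effect.
            interest[post["topic"]] += 0.1
    return interest

print(simulate())
```

After a few rounds the greedy ranking locks onto whichever topics happened to be reinforced early, so the final interest profile is skewed rather than uniform, which is the narrowing the intervention in the paper aims to make users aware of.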

    “The Same Information Is Given to Everyone”: Algorithmic Awareness of Online Platforms

    After years of discourse surrounding the concept of “filter bubbles,” information seekers still find themselves in echo chambers of their own thoughts and ideas. This study is an exploratory, mixed-methods analysis of platform privacy/data policies and of user awareness of the personal and usage data platforms collect and of how platforms use that data to moderate and serve online content. Utilizing Bucher’s (2018) framework for researching algorithms through the black box heuristic, this project examines how users inform themselves about data collection and use policies, and their awareness of algorithmic curation. The algorithmic systems that return search results or populate newsfeeds are opaque, black-boxed systems. In an attempt to open the black box, this dissertation analyzes the privacy and data policies of the top three platforms by traffic in the United States – Google, YouTube, and Facebook – first to learn how they describe their data collection practices and how they explain data usage. A cross-sectional survey then provides user perception data, grounded in the policy analysis, about what personal data is collected about users and how that data is used. The findings of this dissertation identify a need for algorithmic literacy and develop a new frame for the ACRL’s Information Literacy Framework to address algorithmic systems in information retrieval. Additionally, the findings draw attention to two subgroups of internet users – those who believe they do not use search engines and those who use only privacy-focused search engines. Both groups warrant additional research and demonstrate how online information retrieval is complicated by multiple points of access and unclear methods of information curation.

    Blame It on the Algorithm? Russian Government-Sponsored Media and Algorithmic Curation of Political Information on Facebook

    Previous research has highlighted how algorithms on social media platforms can be abused to disseminate disinformation. However, less work has been devoted to understanding the interplay between Facebook’s news curation mechanisms and propaganda content. To address this gap, we analyze the activities of RT (formerly Russia Today) on Facebook during the 2020 U.S. presidential election. We use agent-based algorithmic auditing and frame analysis to examine what content RT published on Facebook and how it was algorithmically curated in Facebook News Feeds and Search Results. We find that RT’s strategic framing included the promotion of anti-Biden-leaning content, with an emphasis on anti-establishment narratives. However, due to algorithmic factors on Facebook, individual agents were exposed to eclectic RT content without an overarching narrative. Our findings contribute to the debate on computational propaganda by highlighting the ambiguous relationship between government-sponsored media and Facebook’s algorithmic curation, which may decrease users’ exposure to propaganda while at the same time increasing confusion.

    Platforms, the First Amendment and Online Speech: Regulating the Filters

    In recent years, online platforms have given rise to multiple discussions about what their role is, what their role should be, and whether they should be regulated. The complex nature of these private entities makes it very challenging to place them in a single descriptive category with existing rules. In today’s information environment, social media platforms have become a platform press by providing hosting as well as navigation and delivery of public expression, much of which is done through machine learning algorithms. This article argues that there is a subset of algorithms that social media platforms use to filter public expression, which can be regulated without constitutional objections. A distinction is drawn between algorithms that curate speech for hosting purposes and those that curate for navigation purposes, and it is argued that content navigation algorithms, because of their function, deserve separate constitutional treatment. By analyzing the platforms’ functions independently from one another, this paper constructs a doctrinal and normative framework that can be used to navigate some of the complexity. The First Amendment makes it problematic to interfere with how platforms decide what to host because algorithms that implement content moderation policies perform functions analogous to an editorial role when deciding whether content should be censored or allowed on the platform. Content navigation algorithms, on the other hand, do not face the same doctrinal challenges; they operate outside of the public discourse as mere information conduits and are thus not subject to core First Amendment doctrine. Their function is to facilitate the flow of information to an audience, which in turn participates in public discourse; if they have any constitutional status, it is derived from the value they provide to their audience as a delivery mechanism of information. This article asserts that we should regulate content navigation algorithms to an extent. 
They undermine autonomous choice in the selection and consumption of content, and their role in today’s information environment is not aligned with a functioning marketplace of ideas or with the prerequisites for citizens in a democratic society to perform their civic duties. The paper concludes that any regulation directed at content navigation algorithms should be subject to a lower standard of scrutiny, similar to the standard applied to commercial speech.

    News recommender systems: a programmatic research review

    News recommender systems (NRS) are becoming a ubiquitous part of the digital media landscape. Particularly in the realm of political news, the adoption of NRS can significantly impact journalistic distribution, in turn affecting journalistic work practices and news consumption. Thus, NRS touch both the supply and demand sides of political news. In recent years, research on NRS has increased sharply, yet the field remains dispersed across supply- and demand-side research perspectives. The contribution of this programmatic research review is therefore threefold. First, we conduct a scoping study to review scholarly work on the journalistic supply and user demand sides. Second, we identify underexplored areas. Finally, we advance five recommendations for future research from a political communication perspective.
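As a deliberately minimal illustration of what such systems do, the following is a hypothetical content-based NRS that ranks candidate articles by word overlap (Jaccard similarity) with a user’s reading history. It is a sketch for intuition only; the review itself does not prescribe any particular technique, and all names and data here are invented:

```python
def jaccard(a, b):
    """Word-overlap (Jaccard) similarity between two headlines."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def recommend(history, candidates, k=2):
    """Score each candidate by its best similarity to any article the
    user already read, and return the top k candidates."""
    scored = [(max(jaccard(c, h) for h in history), c) for c in candidates]
    return [c for _, c in sorted(scored, reverse=True)[:k]]

history = ["election polls tighten ahead of vote"]
candidates = [
    "new election polls released ahead of debate",
    "local team wins championship final",
    "polls suggest turnout ahead of vote may rise",
]
print(recommend(history, candidates))
# → the two election-related headlines; the sports item is filtered out
```

Even this toy version exhibits the supply/demand entanglement the review discusses: the scoring rule (supply-side design choice) directly determines which stories reach the reader (demand side).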

    Algorithmic Distortion of Informational Landscapes

    The possible impact of algorithmic recommendation on the autonomy and free choice of Internet users is increasingly discussed, especially in terms of how information is rendered and how interactions are structured. This paper aims to review and frame this issue along a double dichotomy. The first addresses the discrepancy between users' intentions and actions (1) under algorithmic influence and (2) without it. The second distinguishes algorithmic biases in (1) prior information rearrangement and (2) posterior information arrangement. In all cases, we focus on and differentiate situations where algorithms empirically appear to expand the cognitive and social horizon of users from those where they seem to limit that horizon. We additionally suggest that these biases cannot be properly appraised without taking into account the underlying social processes on which algorithms build.
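The second dichotomy, bias applied before versus after ranking, can be made concrete with a small hypothetical sketch. The source-boosting rule and all data below are invented for illustration and are not taken from the paper:

```python
def prior_rearrangement(pool, boosted_source):
    """Bias applied BEFORE ranking: the candidate pool itself is
    reshaped, here by duplicating items from one favored source so
    they are over-represented in whatever ranking follows."""
    return pool + [item for item in pool if item["source"] == boosted_source]

def posterior_arrangement(results, boosted_source):
    """Bias applied AFTER ranking: the already-selected results are
    reordered so the favored source appears first (False sorts
    before True, and Python's sort is stable)."""
    return sorted(results, key=lambda item: item["source"] != boosted_source)

pool = [
    {"title": "A", "source": "wire"},
    {"title": "B", "source": "blog"},
    {"title": "C", "source": "wire"},
]

print(len(prior_rearrangement(pool, "wire")))                     # 5
print([i["title"] for i in posterior_arrangement(pool, "blog")])  # ['B', 'A', 'C']
```

The distinction matters for auditing: prior rearrangement changes what can be shown at all, while posterior arrangement only changes the order in which the same items are seen.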