
    Vulnerability in Social Epistemic Networks

    Social epistemologists should be well-equipped to explain and evaluate the growing vulnerabilities associated with filter bubbles, echo chambers, and group polarization in social media. However, almost all social epistemology has been built for social contexts that involve merely a speaker-hearer dyad. Filter bubbles, echo chambers, and group polarization all presuppose much larger and more complex network structures. In this paper, we lay the groundwork for a properly social epistemology that gives the role and structure of networks their due. In particular, we formally define epistemic constructs that quantify the structural epistemic position of each node within an interconnected network. We argue for the epistemic value of a structure that we call the (m,k)-observer. We then present empirical evidence that (m,k)-observers are rare in social media discussions of controversial topics, which suggests that people suffer from serious problems of epistemic vulnerability. We conclude by arguing that social epistemologists and computer scientists should work together to develop minimal interventions that improve the structure of epistemic networks.
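
    The paper's formal definition of the (m,k)-observer is not reproduced in this abstract, so the sketch below rests on an assumed reading: a node counts as an (m,k)-observer when it can reach at least m designated information sources within k hops. The function `is_mk_observer`, the source set, and the example graph are all illustrative.

```python
# Hypothetical sketch: assumes an (m, k)-observer is a node that can reach
# at least m distinct information sources within k hops. This definition is
# our guess; the paper's formal construct may differ.
import networkx as nx

def is_mk_observer(graph: nx.Graph, node, sources: set, m: int, k: int) -> bool:
    """Return True if `node` reaches at least m of `sources` within k hops."""
    # Distances from `node` to every vertex no farther than k hops away.
    reachable = nx.single_source_shortest_path_length(graph, node, cutoff=k)
    return sum(1 for s in sources if s in reachable and s != node) >= m

# Toy usage on a classic small social network.
G = nx.karate_club_graph()
sources = {0, 5, 11, 33}  # hypothetical "information source" accounts
print(is_mk_observer(G, 16, sources, m=2, k=2))
```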

    Quantifying Biases in Online Information Exposure

    Our consumption of online information is mediated by filtering, ranking, and recommendation algorithms that introduce unintentional biases as they attempt to deliver relevant and engaging content. It has been suggested that our reliance on online technologies such as search engines and social media may limit exposure to diverse points of view and make us vulnerable to manipulation by disinformation. In this paper, we mine a massive dataset of Web traffic to quantify two kinds of bias: (i) homogeneity bias, which is the tendency to consume content from a narrow set of information sources, and (ii) popularity bias, which is the selective exposure to content from top sites. Our analysis reveals different bias levels across several widely used Web platforms. Search exposes users to a diverse set of sources, while social media traffic tends to exhibit high popularity and homogeneity bias. When we focus our analysis on traffic to news sites, we find higher levels of popularity bias, with smaller differences across applications. Overall, our results quantify the extent to which our choices of online systems confine us inside "social bubbles." Comment: 25 pages, 10 figures; to appear in the Journal of the Association for Information Science and Technology (JASIST).
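
    As a rough illustration of the two bias notions named above, the sketch below operationalizes homogeneity bias as one minus the normalized Shannon entropy of a user's source distribution, and popularity bias as the share of visits landing on the globally top-ranked sites. These operationalizations, and the function names, are assumptions for illustration rather than the paper's exact measures.

```python
# Assumed operationalizations, for illustration only: homogeneity bias as
# 1 - normalized Shannon entropy of a user's source distribution, and
# popularity bias as the share of visits going to the top-ranked sites.
from collections import Counter
from math import log

def homogeneity_bias(visits: list[str]) -> float:
    """0 = maximally diverse source diet, 1 = a single source."""
    counts = Counter(visits)
    if len(counts) < 2:
        return 1.0
    n = sum(counts.values())
    entropy = -sum((c / n) * log(c / n) for c in counts.values())
    return 1.0 - entropy / log(len(counts))  # normalize by max entropy

def popularity_bias(visits: list[str], ranking: list[str], top_n: int = 10) -> float:
    """Fraction of visits landing on the top_n globally ranked sites."""
    top = set(ranking[:top_n])
    return sum(1 for v in visits if v in top) / len(visits)

user_visits = ["nytimes.com", "nytimes.com", "bbc.com", "nytimes.com"]
print(homogeneity_bias(user_visits))                                      # ~0.19
print(popularity_bias(user_visits, ["nytimes.com", "cnn.com"], top_n=2))  # 0.75
```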

    Measuring Online Social Bubbles

    Social media have quickly become a prevalent channel to access information, spread ideas, and influence opinions. However, it has been suggested that social and algorithmic filtering may cause exposure to less diverse points of view, and even foster polarization and misinformation. Here we explore and validate this hypothesis quantitatively for the first time, at the collective and individual levels, by mining three massive datasets of web traffic, search logs, and Twitter posts. Our analysis shows that collectively, people access information from a significantly narrower spectrum of sources through social media and email, compared to search. The significance of this finding for individual exposure is revealed by investigating the relationship between the diversity of information sources experienced by users at the collective and individual level. There is a strong correlation between collective and individual diversity, supporting the notion that when we use social media we find ourselves inside "social bubbles". Our results could lead to a deeper understanding of how technology biases our exposure to new information.
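
    The collective-versus-individual comparison described above can be sketched with Shannon entropy as a stand-in diversity measure: pool all visits on a platform for the collective score, average per-user entropies for the individual score, and correlate the two across platforms. The toy logs below are fabricated for illustration; the paper's actual datasets and diversity measure may differ.

```python
# Toy reconstruction of the collective-vs-individual diversity comparison.
# Shannon entropy stands in for the paper's diversity measure; the visit
# logs are fabricated. Requires Python 3.10+ for statistics.correlation.
from collections import Counter
from math import log
from statistics import correlation, mean

def entropy(items) -> float:
    counts = Counter(items)
    n = sum(counts.values())
    return -sum((c / n) * log(c / n) for c in counts.values())

# platform -> per-user lists of visited sources (toy data)
platforms = {
    "search": [["a", "b", "c"], ["b", "d", "e"], ["a", "c", "f"]],
    "social": [["a", "a", "a"], ["a", "b", "a"], ["b", "b", "a"]],
    "email":  [["a", "a", "b"], ["c", "a", "a"], ["a", "b", "a"]],
}

# Collective diversity: entropy of the pooled platform-wide distribution.
collective = [entropy([v for user in logs for v in user]) for logs in platforms.values()]
# Individual diversity: mean entropy of each user's own distribution.
individual = [mean(entropy(user) for user in logs) for logs in platforms.values()]
print(correlation(collective, individual))  # strongly positive on this toy data
```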

    Polarization in Online Social Networks: A Review of Mechanisms and Dimensions

    The extensive use of social media has sparked a controversial debate regarding interactions on social media platforms and their influence on network polarization. Still, this complex phenomenon lacks conceptual clarity. Conducting a systematic literature review of existing empirical findings, we take first steps toward a more systematic conceptualization of polarization by identifying (a) the dimensions on which polarization manifests and (b) relevant influence factors associated with the emergence of polarization phenomena in online social networks. Further, we derive an integrated, theory-driven framework offering a comprehensive set of mechanisms associated with polarization on social media and its concrete manifestations. We identified Attitude Extremity, Topic Diversity, Social Fragmentation, and Language Usage as the four dimensions along which polarization manifests. The framework is a relevant starting point for attaining coherence in future research on polarization phenomena in IS research and contributes to a more systematic discussion of the unintended consequences of ICT usage.

    Information consumption on social media : efficiency, divisiveness, and trust

    Over the last decade, the advent of social media has profoundly changed the way people produce and consume information online. On these platforms, users themselves play a role in selecting the sources from which they consume information, overthrowing traditional journalistic gatekeeping. Moreover, advertisers can target users with news stories using users’ personal data. This new model has many advantages: the propagation of news is faster, the number of news sources is large, and the topics covered are diverse. However, in this new model, users are often overloaded with redundant information, and they can get trapped in filter bubbles by consuming divisive and potentially false information. To tackle these concerns, in my thesis, I address the following important questions: (i) How efficient are users at selecting their information sources? We have defined three intuitive notions of users’ efficiency in social media: link, in-flow, and delay efficiency. We use these three measures to assess how good users are at selecting whom to follow within the social media system in order to acquire information most efficiently. (ii) How can we break the filter bubbles that users get trapped in? Users on social media sites such as Twitter often get trapped in filter bubbles by being exposed to radical, highly partisan, or divisive information. To prevent users from getting trapped in filter bubbles, we propose an approach to inject diversity into users’ information consumption by identifying non-divisive, yet informative, information. (iii) How can we design an efficient framework for fact-checking? The proliferation of false information is a major problem in social media. To counter it, social media platforms typically rely on expert fact-checkers to detect false news. However, human fact-checkers can realistically cover only a tiny fraction of all stories, so it is important to automatically prioritize and select a small number of stories for humans to fact-check. However, the goals for prioritizing stories for fact-checking are unclear. We identify three desired objectives for prioritizing news for fact-checking, based on users’ perception of the truthfulness of stories. Our key finding is that these three objectives are incompatible in practice.
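
    The thesis’ three prioritization objectives are not spelled out in this abstract, so the sketch below substitutes three invented stand-ins built from user-perceived truthfulness: most believed, most contested, and most exposed. The point is structural rather than faithful: distinct objectives can select different stories under a fixed fact-checking budget.

```python
# Invented stand-ins for the three (unspecified) prioritization objectives,
# each scoring stories from user truthfulness ratings (0 = false, 1 = true).
from statistics import mean, pstdev

ratings = {  # story -> toy truthfulness ratings collected from users
    "story_a": [0.9, 0.8, 0.9, 1.0],            # widely believed
    "story_b": [0.1, 0.9, 0.2, 0.8],            # contested
    "story_c": [0.1, 0.2, 0.1, 0.0, 0.1, 0.2],  # widely seen, disbelieved
}

by_belief   = max(ratings, key=lambda s: mean(ratings[s]))    # most believed
by_dispute  = max(ratings, key=lambda s: pstdev(ratings[s]))  # most contested
by_exposure = max(ratings, key=lambda s: len(ratings[s]))     # most rated

# With a budget of one story, each objective picks a different candidate.
print(by_belief, by_dispute, by_exposure)  # story_a story_b story_c
```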

    Unraveling Information-Limiting Environments: An Empirical Review of Individual, Social, and Technological Filters in Social Media

    Social media platforms offer a convenient way for people to interact and exchange information. However, there are sustained concerns that filter bubbles and echo chambers create information-limiting environments (ILEs) for their users. Despite a well-developed conceptual understanding, the empirical evidence regarding the causes and supporting conditions of these ILEs remains inconclusive. This paper addresses this gap by applying the triple-filter-bubble model developed by Geschke et al. (2019) to analyze the empirical literature on the individual, social, and technological causes of ILEs. While we identify some factors that increase the probability of ILEs under certain conditions, our findings do not suffice to thoroughly validate conceptual models that explain why ILEs emerge. Therefore, we call for future research to investigate the causes of ILEs with higher external validity to develop a more comprehensive understanding of this phenomenon.

    Network polarization, filter bubbles, and echo chambers: An annotated review of measures and reduction methods

    Polarization arises when the underlying network connecting the members of a community or society becomes characterized by highly connected groups with weak inter-group connectivity. Increasing polarization, the strengthening of echo chambers, and the isolation caused by information filters in social networks are attracting growing attention from researchers in fields such as computer science, economics, and the social and political sciences. This work presents an annotated review of network polarization measures and of models used to handle polarization. Several approaches for measuring polarization in graphs and networks were identified, including those based on homophily, modularity, random walks, and balance theory. The strategies used for reducing polarization include methods that propose edge or node edits (including insertions or deletions, as well as edge weight modifications), changes in social network design, or changes in the recommendation systems embedded in these networks. Comment: Corrected a typo in Section 3.2; the rest remains unchanged.
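
    As a concrete instance of one measure family listed above, the sketch below computes a modularity-based polarization signal with networkx: detect communities by greedy modularity maximization and report the modularity of the resulting partition, where high values indicate dense intra-group and sparse inter-group connectivity. The choice of detection algorithm and example graph are our assumptions, not prescriptions from the review.

```python
# Modularity-based polarization signal on a classic example graph. The
# detection algorithm (greedy modularity maximization) is our choice; the
# review surveys several alternatives.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()
communities = greedy_modularity_communities(G)
score = modularity(G, communities)
# High modularity = dense within-group, sparse between-group ties: one
# structural ingredient of a polarized network.
print(f"{len(communities)} communities, modularity = {score:.3f}")
```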