
    Opinion dynamics with backfire effect and biased assimilation

    The democratization of AI tools for content generation, combined with unrestricted access to mass media for all (e.g. through microblogging and social media), makes it increasingly hard for people to distinguish fact from fiction. This raises the question of how individual opinions evolve in such a networked environment without grounding in a known reality. The dominant approach to studying this problem uses simple models from the social sciences on how individuals change their opinions when exposed to their social neighborhood, and applies them to large social networks. We propose a novel model that incorporates two known social phenomena: (i) Biased Assimilation: the tendency of individuals to adopt other opinions if they are similar to their own; (ii) Backfire Effect: the fact that an opposing opinion may further entrench someone in their stance, making their opinion more extreme instead of moderating it. To the best of our knowledge, this is the first DeGroot-type opinion formation model that captures the Backfire Effect. A thorough theoretical and empirical analysis of the proposed model reveals intuitive conditions for polarization and consensus to exist, as well as the properties of the resulting opinions.
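    To make the model class concrete, here is a minimal Python sketch of one possible DeGroot-type update exhibiting both effects. The functional form (a neighbour weight 1 + beta*x_i*x_j that turns negative for strongly opposing neighbours) is an assumption for illustration, not the paper's exact model.

```python
import numpy as np

def opinion_step(x, A, beta=1.0, eta=0.1):
    """One synchronous DeGroot-style update with biased assimilation
    and a backfire effect (illustrative form, not the paper's exact
    equations).

    x    : (n,) opinions in [-1, 1]
    A    : (n, n) symmetric 0/1 adjacency matrix
    beta : entrenchment strength
    eta  : step size
    """
    x_new = x.copy()
    for i in range(len(x)):
        pull = 0.0
        for j in np.nonzero(A[i])[0]:
            # Weight grows when i and j agree (biased assimilation) and
            # turns negative when they strongly disagree, so the term
            # below pushes x_i *away* from x_j (backfire effect).
            weight = 1.0 + beta * x[i] * x[j]
            pull += weight * (x[j] - x[i])
        x_new[i] = np.clip(x[i] + eta * pull / max(A[i].sum(), 1), -1.0, 1.0)
    return x_new

x = np.array([0.9, 0.8, -0.7, -0.9])
A = np.ones((4, 4)) - np.eye(4)   # complete network on four agents
for _ in range(50):
    x = opinion_step(x, A, beta=2.0)
print(x)  # with a large beta the two camps entrench at the extremes
```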

    An empirical examination of echo chambers in US climate policy networks

    Diverse methods have been applied to understand why science continues to be debated within the climate policy domain. A number of studies have presented the notion of the ‘echo chamber’ to model and explain information flows across an array of social settings, finding disproportionate connections among ideologically similar political communicators. This paper builds on these findings to provide a more formal operationalization of the components of echo chambers. We then empirically test their utility using survey data collected from the community of political elites engaged in the contentious issue of climate politics in the United States. Our survey period coincides with the most active and contentious period in the history of US climate policy, when legislation regulating carbon dioxide emissions had passed through the House of Representatives and was being considered in the Senate. We use exponential random graph (ERG) modelling to demonstrate that both the homogeneity of information (the echo) and multi-path information transmission (the chamber) play significant roles in policy communication. We demonstrate that the intersection of these components creates echo chambers in the climate policy network. These results lead to some important conclusions about climate politics, as well as the relationship between science communication and policymaking at the elite level more generally. Funding: US National Science Foundation; National Socio-Environmental Synthesis Center (SESYNC).
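    To make the two components concrete, they can be roughly proxied at the network level: attribute homophily for the echo and triadic closure for the chamber. The sketch below uses networkx on a toy graph with an invented 'ideology' attribute; it is an illustrative stand-in, not the paper's ERG specification or survey data.

```python
import networkx as nx

G = nx.karate_club_graph()          # stand-in for a policy network
for v in G:                         # hypothetical ideology attribute
    G.nodes[v]["ideology"] = G.nodes[v]["club"]

# The "echo": do ties disproportionately connect like-minded actors?
echo = nx.attribute_assortativity_coefficient(G, "ideology")

# The "chamber": how much information travels along multiple redundant
# paths (proxied here by triadic closure)?
chamber = nx.transitivity(G)

print(f"echo (homophily): {echo:.2f}, chamber (closure): {chamber:.2f}")
```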

    Rightfully-Placed Blame: How social media algorithms facilitate post-truth politics

    Social media algorithms have facilitated post-truth politics in online spaces such as Facebook and Twitter. The personalisation mechanisms of recommender algorithms construct and trap people in filter bubbles, facilitating the virality of misinformation, lowering scrutiny towards non-dominant information sources, and becoming a means by which malicious actors proliferate deliberately misinformative or biased knowledge. Eli Pariser’s (2011) publication provides the structural argument in which this paper’s claim is grounded. Post-truth scholarship that agrees with this claim is engaged to demonstrate how personalisation mechanisms have facilitated post-truth politics through the echo chamber phenomenon, which imbues cognitive biases that inhibit our ability to engage impartially with counter-attitudinal opinion. Echo chambers have been observed on Facebook and Twitter, particularly amongst conspiracists and during the 2016 US presidential election, and have been found to facilitate post-truth politics by enabling misinformation to run rampant and reducing trust in government authority, resulting in political polarisation. An agency-based criticism of the causal relationship between post-truth conditions and social media algorithms is explored and scrutinised, as embodied by Guess et al.’s (2018) publication addressing weaknesses of the ‘blame’ we place on social media algorithms for political problems within society. We conclude that social media algorithms facilitate post-truth politics regardless of the agency that politically engaged and aware users may exercise, owing to the difficulty of ‘leaving’ filter bubbles. Excessive expectation is further placed on the non-politically-engaged majority, who are moreover often unaware of the filter bubbles they may be trapped in.
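    The core mechanism here, personalisation reinforcing itself until exposure narrows, can be illustrated with a deliberately minimal feedback-loop simulation. The source labels, click probabilities, and reinforcement rule below are invented for illustration and make no claim about any real platform's algorithm.

```python
import random

random.seed(0)
sources = ["left", "centre", "right"]
weights = {s: 1.0 for s in sources}   # recommender's learned "taste"
user_preference = "left"              # the user slightly favours one side

for _ in range(200):
    # Recommend a source in proportion to the learned weights.
    shown = random.choices(sources, weights=[weights[s] for s in sources])[0]
    # The user clicks attitude-consistent content more often.
    if random.random() < (0.8 if shown == user_preference else 0.3):
        weights[shown] += 1.0         # personalisation reinforces itself

total = sum(weights.values())
print({s: round(w / total, 2) for s, w in weights.items()})
# Exposure ends up heavily skewed toward the user's prior: a minimal
# "filter bubble" emerging from the feedback loop alone.
```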

    Probabilistic Social Learning Improves the Public's Detection of Misinformation

    The digital spread of misinformation is one of the leading threats to democracy, public health, and the global economy. Popular strategies for mitigating misinformation include crowdsourcing, machine learning, and media literacy programs that require social media users to classify news in binary terms as either true or false. However, research on peer influence suggests that framing decisions in binary terms can amplify judgment errors and limit social learning, whereas framing decisions in probabilistic terms can reliably improve judgments. In this preregistered experiment, we compare online peer networks that collaboratively evaluate the veracity of news by communicating either binary or probabilistic judgments. Exchanging probabilistic estimates of news veracity substantially improved individual and group judgments, with the effect of eliminating polarization in news evaluation. By contrast, exchanging binary classifications reduced social learning and entrenched polarization. The benefits of probabilistic social learning are robust to participants’ education, gender, race, income, religion, and partisanship. Comment: 11 pages, 4 figures.
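    To see why probabilities can beat binary labels, consider this toy contrast (not the preregistered design, which involved networked rounds of revision): averaging probabilistic estimates preserves confidence information that majority voting over binary labels discards. The ground-truth value and noise level below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
truth = 0.7     # assumed probability that the article is true
n = 50          # peers in the network
est = np.clip(truth + rng.normal(0, 0.25, n), 0, 1)  # noisy private estimates

# Probabilistic exchange: peers share and average probability estimates
# (a DeGroot-style pooling over a complete network).
prob_consensus = est.mean()

# Binary exchange: peers collapse estimates to true/false first, then
# only the majority share is visible -- confidence is thrown away.
binary_share = (est > 0.5).mean()

print(f"probabilistic pooling: {prob_consensus:.2f} (truth {truth})")
print(f"binary majority share: {binary_share:.2f}")
```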

    Quantifying and minimizing risk of conflict in social networks

    Controversy, disagreement, conflict, polarization and opinion divergence in social networks have been the subject of much recent research. In particular, researchers have addressed the question of how such concepts can be quantified given people’s prior opinions, and how they can be optimized by influencing the opinion of a small number of people or by editing the network’s connectivity. Here, rather than optimizing such concepts given a specific set of prior opinions, we study whether they can be optimized in the average case and in the worst case over all sets of prior opinions. In particular, we derive the worst-case and average-case conflict risk of networks, and we propose algorithms for optimizing these. For some measures of conflict, these are non-convex optimization problems with many local minima. We provide a theoretical and empirical analysis of the nature of some of these local minima, and show how they are related to existing organizational structures. Empirical results show how a small number of edits quickly decreases a network’s conflict risk, both average-case and worst-case. Furthermore, they show that minimizing average-case conflict risk often does not reduce worst-case conflict risk. Minimizing worst-case conflict risk, on the other hand, while computationally more challenging, is generally effective at minimizing both worst-case and average-case conflict risk.
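    A minimal sketch of the average-case/worst-case framing, assuming Friedkin-Johnsen dynamics and taking disagreement at equilibrium as the conflict measure (the paper's exact measures and normalisations may differ): writing the measure as a quadratic form s^T M s in the prior opinions s, the average-case risk over random priors reduces to a trace and the worst-case risk to a largest eigenvalue.

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()                 # toy stand-in network
L = nx.laplacian_matrix(G).toarray().astype(float)
n = L.shape[0]

# Friedkin-Johnsen equilibrium: z = (I + L)^{-1} s, so disagreement at
# equilibrium is z^T L z = s^T M s with M as below.
inv = np.linalg.inv(np.eye(n) + L)
M = inv @ L @ inv

# Average-case risk for priors s uniform on the unit sphere:
# E[s^T M s] = trace(M) / n.  Worst-case risk over unit-norm priors:
# the largest eigenvalue of M.
avg_case = np.trace(M) / n
worst_case = np.linalg.eigvalsh(M)[-1]
print(f"average-case: {avg_case:.4f}, worst-case: {worst_case:.4f}")
```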

    Minimizing Polarization and Disagreement in Social Networks

    The rise of social media and online social networks has been a disruptive force in society. Opinions are increasingly shaped by interactions on online social media, and social phenomena including disagreement and polarization are now tightly woven into everyday life. In this work we initiate the study of the following question: given $n$ agents, each with its own initial opinion that reflects its core value on a topic, and an opinion dynamics model, what is the structure of a social network that minimizes polarization and disagreement simultaneously? This question is central to recommender systems: should a recommender system prefer a link suggestion between two online users with similar mindsets in order to keep disagreement low, or between two users with different opinions in order to expose each to the other’s viewpoint of the world, and decrease overall levels of polarization? Our contributions include a mathematical formalization of this question as an optimization problem and an exact, time-efficient algorithm. We also prove that there always exists a network with $O(n/\epsilon^2)$ edges that is a $(1+\epsilon)$ approximation to the optimum. For a fixed graph, we additionally show how to optimize our objective function over the agents’ innate opinions in polynomial time. We perform an empirical study of our proposed methods on synthetic and real-world data that verifies their value as mining tools to better understand the trade-off between disagreement and polarization. We find that there is a lot of room to reduce both polarization and disagreement in real-world networks; for instance, on a Reddit network where users exchange comments on politics, our methods achieve a ~60,000-fold reduction in polarization and disagreement. Comment: 19 pages (accepted, WWW 2018).
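    For context, the objective can be computed in closed form under Friedkin-Johnsen dynamics: with mean-centred innate opinions $s$ and equilibrium $z = (I + L)^{-1} s$, polarization is $z^T z$, disagreement is $z^T L z$, and their sum simplifies to $s^T (I + L)^{-1} s$. The sketch below checks this identity numerically; the toy graph and opinions are assumptions for illustration.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(30, 0.2, seed=0)   # toy social network
L = nx.laplacian_matrix(G).toarray().astype(float)
n = L.shape[0]

s = rng.uniform(-1, 1, n)
s -= s.mean()                                # mean-centre innate opinions

z = np.linalg.solve(np.eye(n) + L, s)        # equilibrium opinions
polarization = z @ z
disagreement = z @ L @ z
index = s @ z                                # s^T (I + L)^{-1} s

print(f"P = {polarization:.4f}, D = {disagreement:.4f}")
print(f"P + D = {polarization + disagreement:.4f}  vs  index = {index:.4f}")
```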