7 research outputs found

    Growing a Bayesian Conspiracy Theorist: An Agent-Based Model

    Conspiracy theories cover topics from politicians to world events. Frequently, proponents of conspiracies hold these beliefs strongly despite available evidence that may challenge or disprove them, so conspiratorial reasoning has often been described as illegitimate or flawed. Here, we explore the possibility of growing a rational (Bayesian) conspiracy theorist through an agent-based model. The agent has reasonable constraints on its access to the total information as well as its access to the global population. The model shows that network structures are central to maintaining objectively mistaken beliefs. Increasing the size of the available network yielded increased confidence in mistaken beliefs and subsequent network pruning, allowing for belief purism. Rather than ameliorating and correcting mistaken beliefs (where agents move toward the correct mean), large networks appear to maintain and strengthen them. As such, large networks may increase the potential for belief polarization, extreme beliefs, and conspiratorial thinking, even amongst Bayesian agents.
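The dynamics this abstract describes can be illustrated with a skeleton simulation. This is my own simplification, not the authors' model: every parameter (signal accuracy, pooling weight, pruning threshold) is an assumption. Each agent holds a credence P(H) in a false hypothesis, updates it by Bayes' rule on a limited stream of noisy signals, pools with its network neighbours, and prunes neighbours who disagree too strongly.

```python
import random

def bayes_update(prior, signal, accuracy=0.7):
    """Posterior P(H) after one binary signal of known accuracy."""
    like_h = accuracy if signal else 1 - accuracy
    like_not = (1 - accuracy) if signal else accuracy
    return prior * like_h / (prior * like_h + (1 - prior) * like_not)

def step(beliefs, neighbours, i, signal, prune_threshold=0.25):
    """One update for agent i: private evidence, then pooling, then pruning."""
    b = bayes_update(beliefs[i], signal)
    if neighbours[i]:
        pooled = sum(beliefs[j] for j in neighbours[i]) / len(neighbours[i])
        b = 0.5 * b + 0.5 * pooled          # equal weight on own vs social belief
    beliefs[i] = b
    # prune neighbours whose beliefs are too far from the agent's own
    neighbours[i] = {j for j in neighbours[i]
                     if abs(beliefs[j] - b) < prune_threshold}

rng = random.Random(0)
n = 30
beliefs = [rng.uniform(0.2, 0.95) for _ in range(n)]
neighbours = [set(range(n)) - {i} for i in range(n)]   # start fully connected
for _ in range(20):
    for i in range(n):
        # H is false, so a 0.7-accurate signal reads True only 30% of the time
        step(beliefs, neighbours, i, signal=rng.random() < 0.3)
```

The pruning step is the mechanism the abstract calls "belief purism": disagreeing neighbours are dropped, so later pooling only reinforces whatever the agent already believes.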

    Influence and seepage: An evidence-resistant minority can affect public opinion and scientific belief formation

    Some well-established scientific findings may be rejected by vocal minorities because the evidence conflicts with political views or economic interests. For example, the tobacco industry denied the medical consensus on the harms of smoking for decades, and the clear evidence of human-caused climate change is currently rejected by many politicians and think tanks that oppose regulatory action. We present an agent-based model of the processes by which denial of climate change can occur, how opinions that run counter to the evidence can affect the scientific community, and how denial can alter the public discourse. The model involves an ensemble of Bayesian agents, representing the scientific community, that are presented with the emerging historical evidence of climate change and that also communicate the evidence to each other. Over time, the scientific community comes to agreement that the climate is changing. When a minority of agents is introduced that resists the evidence but enters into the scientific discussion, the simulated scientific community still acquires firm knowledge, but consensus formation is delayed. When both types of agents communicate with the general public, the public remains ambivalent about the reality of climate change. The model captures essential aspects of the actual evolution of scientific and public opinion over the last four decades.
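The delayed-consensus effect can be sketched in a few lines. This is a deliberately simplified, deterministic caricature, not the authors' model: the evidence rate, pooling weight, and the resistant agents' pinned credence are all my assumptions. Bayesian agents repeatedly update P(H) on supporting evidence and are mildly pulled toward the population mean, which includes any evidence-resistant agents fixed at a low credence.

```python
def run(n_agents=20, n_resistant=0, steps=50):
    """Return the Bayesian majority's mean belief at each step."""
    beliefs = [0.05 if i < n_resistant else 0.5 for i in range(n_agents)]
    history = []
    for _ in range(steps):
        mean = sum(beliefs) / n_agents               # everyone communicates
        for i in range(n_resistant, n_agents):       # resistant agents never update
            p = beliefs[i]
            p = p * 0.8 / (p * 0.8 + (1 - p) * 0.2)  # Bayes on one piece of evidence
            beliefs[i] = 0.9 * p + 0.1 * mean        # weak pull toward the discourse
        history.append(sum(beliefs[n_resistant:]) / (n_agents - n_resistant))
    return history
```

With a resistant minority present, the majority crosses any high consensus threshold later than without one, yet still ends up with firm belief, mirroring the delayed-but-not-prevented consensus the abstract reports.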

    Technologically scaffolded atypical cognition: the case of YouTube’s recommender system

    YouTube has been implicated in the transformation of users into extremists and conspiracy theorists. The alleged mechanism for this radicalizing process is YouTube’s recommender system, which is optimized to amplify and promote clips that users are likely to watch through to the end. YouTube optimizes for watch-through for economic reasons: people who watch a video through to the end are likely to then watch the next recommended video as well, which means that more advertisements can be served to them. This is a seemingly innocuous design choice, but it has a troubling side-effect. Critics of YouTube have alleged that the recommender system tends to recommend extremist content and conspiracy theories, as such videos are especially likely to capture and keep users’ attention. To date, the problem of radicalization via the YouTube recommender system has been a matter of speculation. The current study represents the first systematic, pre-registered attempt to establish whether and to what extent the recommender system tends to promote such content. We begin by contextualizing our study in the framework of technological seduction. Next, we explain our methodology. After that, we present our results, which are consistent with the radicalization hypothesis. Finally, we discuss our findings, as well as directions for future research and recommendations for users, industry, and policy-makers.

    Rational Factionalization for Agents with Probabilistically Related Beliefs

    General epistemic polarization arises when the beliefs of a population grow further apart, in particular when all agents update on the same evidence. Epistemic factionalization arises when the beliefs not only grow further apart but different beliefs also become correlated across the population. I present a model of how factionalization can emerge in a population of ideally rational agents. This kind of factionalization is driven by probabilistic relations between beliefs, with background beliefs shaping how the agents' beliefs evolve in the light of new evidence. Moreover, I show that in such a model, the only possible outcomes of updating on identical evidence are general convergence or factionalization. Beliefs cannot spread out in all directions: if the beliefs overall polarize, the result must be factionalization.
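One way a background belief can probabilistically link two credences is via trust in an evidence source. The sketch below is my own construction, not the paper's model, and all numbers are illustrative: each agent's trust in the source shapes the likelihoods it assigns, so agents who see identical reports end up with credences in H that are correlated with their prior trust, factions rather than an even spread.

```python
def update(p_h, p_trust, report=True):
    """Bayes update of P(H) on one report, marginalising over source reliability.

    Assumes a reliable source reports correctly with probability 0.9,
    while an unreliable one is an uninformative coin flip (0.5).
    """
    like_h = p_trust * 0.9 + (1 - p_trust) * 0.5      # P(report | H)
    like_not = p_trust * 0.1 + (1 - p_trust) * 0.5    # P(report | not-H)
    if not report:
        like_h, like_not = 1 - like_h, 1 - like_not
    return p_h * like_h / (p_h * like_h + (1 - p_h) * like_not)

# Four agents with identical priors on H, differing only in background trust.
trust_levels = [0.1, 0.3, 0.7, 0.9]
beliefs = [0.5] * 4
for _ in range(10):                       # ten identical positive reports
    beliefs = [update(p, r) for p, r in zip(beliefs, trust_levels)]
```

After identical evidence, the final credences in H are ordered by background trust: belief in H has become correlated with the trust belief across the population, which is the signature of factionalization rather than a uniform spread.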

    Computational modelling of social cognition and behaviour

    Philosophers have always been interested in asking moral questions, but social scientists have generally been more occupied with asking questions about morality. How do people differ with regard to their morality? How frequently are moral values inconsistent, resulting in internal conflicts? How likely are people to revise their moral beliefs? The aim of these questions is to explore moral reasoning and identify patterns of moral behaviour between people. Simultaneously, social scientists have moved beyond the exploration of small-scale, static snapshots of networks to nuanced, data-driven analyses of the structure, content, and dynamics of large-scale social processes. This drives researchers to use far more elaborate tools, such as automated text analysis, online field experiments, mass collaboration, machine learning, and, more generally, computational modelling, to formulate and test theories (e.g., Evans & Aceves, 2016; Molina & Garip, 2019; Nelson, 2020; Salganik, 2019). It is fair to argue that the social sciences are on the verge of a new era, one in which computational methods and large-scale data are the primary tools and sources for gaining information and knowledge. In this dissertation, I focus on developing formal models of the cognitive dissonance involved in conflicts between moral values within individuals, and of how this dissonance might be reduced. I also attempt to extend this work to connect with research linking moral and political psychology. I then try to explain echo chamber development, as a socio-cognitive phenomenon, arising from the dynamics described in chapters 2 and 3. Finally, I focus on moral belief updating as an alternate (class of) response(s) in chapter 6. I try to explain these phenomena by bringing together cognitive and social theories. The three principal theories we build upon are Festinger’s Cognitive Dissonance, Bandura’s Moral Disengagement, and Haidt’s Moral Foundations Theory.
As detailed in the forthcoming paragraphs, the union of these theories, alongside computational modelling, sparks off some interesting hypotheses. We first discuss why computational modelling is a powerful tool in the social sciences, and then present a historical background for each of the aforementioned theories.