
    Opinion dynamics with backfire effect and biased assimilation

    The democratization of AI tools for content generation, combined with unrestricted access to mass media for all (e.g. through microblogging and social media), makes it increasingly hard for people to distinguish fact from fiction. This raises the question of how individual opinions evolve in such a networked environment without grounding in a known reality. The dominant approach to studying this problem uses simple models from the social sciences on how individuals change their opinions when exposed to their social neighborhood, and applies them to large social networks. We propose a novel model that incorporates two known social phenomena: (i) Biased Assimilation: the tendency of individuals to adopt other opinions if they are similar to their own; (ii) Backfire Effect: the fact that an opposite opinion may further entrench someone in their stance, making their opinion more extreme instead of moderating it. To the best of our knowledge, this is the first DeGroot-type opinion formation model that captures the Backfire Effect. A thorough theoretical and empirical analysis of the proposed model reveals intuitive conditions for polarization and consensus to exist, as well as the properties of the resulting opinions.
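
    The interplay of the two mechanisms can be sketched in a few lines of code. The update rule below is an illustrative assumption, not the paper's actual equations: opinions live in [-1, 1], sufficiently similar neighbors attract with a similarity-weighted force (biased assimilation), and sufficiently opposed neighbors repel (backfire), with a hypothetical entrenchment exponent `beta`.

    ```python
    def update_opinions(x, neighbors, beta=1.0):
        """One synchronous update of opinions x[i] in [-1, 1].

        Hypothetical rule (a sketch, not the paper's model):
        - biased assimilation: a like-minded neighbor's pull is
          weighted by similarity, so close opinions count more;
        - backfire: a neighbor on the far side of the spectrum
          pushes the opinion *away*, entrenching it further.
        """
        new = list(x)
        for i, nbrs in enumerate(neighbors):
            shift, wsum = 0.0, 0.0
            for j in nbrs:
                d = x[j] - x[i]
                agree = 1.0 - abs(d) / 2.0   # 1 = identical, 0 = maximally opposed
                if agree > 0.5:              # similar enough: assimilate
                    w = agree ** beta
                    shift += w * d           # move toward the neighbor
                else:                        # too opposed: backfire
                    w = (1.0 - agree) ** beta
                    shift -= w * d           # move away from the neighbor
                wsum += w
            if wsum:
                # damped, weight-normalized step, clipped to the opinion range
                new[i] = max(-1.0, min(1.0, x[i] + shift / (2.0 * wsum)))
        return new
    ```

    Under this toy rule, two connected agents at -0.8 and 0.8 repel each other to the extremes (polarization), while two agents at 0.1 and 0.3 meet at 0.2 (consensus), mirroring the conditions the abstract describes.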

    Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes

    Previous work suggests that people's preference for different kinds of information depends on more than just accuracy. This could happen because the messages contained within different pieces of information may either be well-liked or repulsive. Whereas factual information must often convey uncomfortable truths, misinformation can have little regard for veracity and leverage psychological processes that increase its attractiveness and proliferation on social media. In this review, we argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation by reducing, rather than increasing, the psychological cost of doing so. We cover how attention may often be shifted away from accuracy and towards other goals, how social and individual cognition is affected by misinformation and the cases under which debunking it is most effective, and how the formation of online groups affects information consumption patterns, often leading to more polarization and radicalization. Throughout, we make the case that polarization and misinformation adherence are closely tied. We identify ways in which the psychological cost of adhering to misinformation can be increased when designing anti-misinformation interventions or resilient affordances, and we outline open research questions that the CSCW community can take up in further understanding this cost.

    Learning Opinion Dynamics From Social Traces

    Opinion dynamics - the research field dealing with how people's opinions form and evolve in a social context - traditionally uses agent-based models to validate the implications of sociological theories. These models encode the causal mechanism that drives the opinion formation process, and have the advantage of being easy to interpret. However, as they do not exploit the availability of data, their predictive power is limited. Moreover, parameter calibration and model selection are manual and difficult tasks. In this work we propose an inference mechanism for fitting a generative, agent-like model of opinion dynamics to real-world social traces. Given a set of observables (e.g., actions and interactions between agents), our model can recover the most-likely latent opinion trajectories that are compatible with the assumptions about the process dynamics. This type of model retains the benefits of agent-based ones (i.e., causal interpretation), while adding the ability to perform model selection and hypothesis testing on real data. We showcase our proposal by translating a classical agent-based model of opinion dynamics into its generative counterpart. We then design an inference algorithm based on online expectation maximization to learn the latent parameters of the model. This algorithm can recover the latent opinion trajectories from traces generated by the classical agent-based model. In addition, it can identify the most likely set of macro parameters used to generate a data trace, thus allowing testing of sociological hypotheses. Finally, we apply our model to real-world data from Reddit to explore the long-standing question about the impact of the backfire effect. Our results suggest a low prominence of the effect in Reddit's political conversation. Comment: Published at KDD202
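
    The core idea, fitting the parameters of a latent opinion process to an observed trace, can be illustrated with a deliberately simplified stand-in. The sketch below is not the paper's online expectation-maximization: it assumes a one-agent toy model in which a latent opinion drifts toward a source at an unknown rate `mu`, emits noisy observations, and `mu` is recovered by grid-search maximum likelihood (least squares under Gaussian noise). All names and dynamics here are illustrative assumptions.

    ```python
    import random

    def simulate(mu, x0=0.0, s=1.0, steps=30, noise=0.05, seed=7):
        """Generate a noisy trace from a toy latent-opinion model:
        the latent opinion drifts toward a source opinion s at rate mu,
        and each step we observe it with Gaussian noise (a stand-in
        for the observable actions in a real social trace)."""
        rng = random.Random(seed)
        x, ys = x0, []
        for _ in range(steps):
            x += mu * (s - x)            # latent dynamics
            ys.append(x + rng.gauss(0.0, noise))  # noisy observation
        return ys

    def fit_mu(ys, x0=0.0, s=1.0, grid=None):
        """Recover the drift rate by grid-search maximum likelihood;
        with Gaussian observation noise this reduces to least squares.
        A toy stand-in for the paper's online EM procedure."""
        grid = grid or [k / 100 for k in range(1, 100)]

        def sse(mu):
            x, total = x0, 0.0
            for y in ys:
                x += mu * (s - x)        # replay the assumed dynamics
                total += (y - x) ** 2    # squared residual vs. observation
            return total

        return min(grid, key=sse)
    ```

    Running `fit_mu(simulate(0.3))` recovers a rate close to the true 0.3, which is the same recoverability property the paper demonstrates, at much larger scale, for traces generated by the classical agent-based model.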

    Unbiasing Information Search and Processing through Personal and Social Identity Mechanisms

    Group commitments such as partisanship and religion can bias the way individuals seek information and weigh evidence. This psychological process can lead to distorted views of reality and polarization between opposing social groups. Substantial research confirms the existence and persistence of numerous identity-driven divides in society, but means of attenuating them remain elusive. However, because identity-protective cognition is driven by a need to maintain global and not domain-specific integrity, researchers have found that affirming an unrelated core aspect of the self can eliminate the need for ego defense and result in more evenhanded evaluation. This study proposes a competing intervention. Individuals possess numerous social identities that contextually vary in relative prominence; therefore, a different means to unbiased cognition may be to make many social identities salient simultaneously, reducing the influence of any potentially threatened identity. This may also reduce selective exposure to congenial information, which has not been found with affirmation. This study also advances research on the phenomenon of selective exposure by considering individuals' interpersonal networks in information search. Because networks are not static, and are instead contextually activated, inducing a more complex representational structure of the self may broaden the set of contacts from whom individuals seek information. The bias-mitigative potential of self-affirmation and social identity complexity is examined here in a series of dispute contexts, two partisan and one religious, over a mining spill, an advanced biofuels mandate, and gene editing technology. Results from the three experiments (total N = 1,257) show modest support for social identity complexity reducing group-alignment of beliefs, behavior, and information search, while affirmation failed to reduce, and in some cases increased, group alignment.

    Who Fears the HPV Vaccine, Who Doesn't, and Why? An Experimental Study of the Mechanisms of Cultural Cognition

    The cultural cognition hypothesis holds that individuals are disposed to form risk perceptions that reflect and reinforce their commitments to contested views of the good society. This paper reports the results of a study that used the controversy over mandatory HPV vaccination to test the cultural cognition hypothesis. Although public health officials have recommended that all girls aged 11 or 12 be vaccinated for HPV - a virus that causes cervical cancer and that is transmitted by sexual contact - political controversy has blocked adoption of mandatory school-enrollment vaccination programs in all but one state. A multi-stage experimental study of a large and diverse sample of American adults (N = 1,500) found evidence that cultural cognition generates disagreement about the risks and benefits of the HPV vaccine. It does so, the experiment determined, through two mechanisms: biased assimilation and the credibility heuristic. In addition to describing the study, the paper discusses the theoretical and practical implications of these findings.