Opinion dynamics with backfire effect and biased assimilation
The democratization of AI tools for content generation, combined with unrestricted access to mass media for all (e.g. through microblogging and social media), makes it increasingly hard for people to distinguish fact from fiction. This raises the question of how individual opinions evolve in such a networked environment without grounding in a known reality. The dominant approach to studying this problem uses simple models from the social sciences on how individuals change their opinions when exposed to their social neighborhood, and applies them on large social networks.
We propose a novel model that incorporates two known social phenomena: (i) Biased Assimilation: the tendency of individuals to adopt other opinions if they are similar to their own; (ii) Backfire Effect: the fact that an opposing opinion may further entrench someone in their stance, making their opinion more extreme instead of moderating it. To the best of our knowledge, this is the first DeGroot-type opinion formation model that captures the Backfire Effect. A thorough theoretical and empirical analysis of the proposed model reveals intuitive conditions for polarization and consensus to exist, as well as the properties of the resulting opinions.
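The two mechanisms can be illustrated with a minimal pairwise update rule. This is a hypothetical sketch, not the paper's exact model: the function name, step size `mu`, and distance threshold are assumptions made for illustration. Opinions live in [-1, 1]; similar opinions attract (biased assimilation), while sufficiently distant opinions repel and push toward the extremes (backfire).

```python
def update_opinion(x_i, x_j, mu=0.2, threshold=0.5):
    """One pairwise opinion update combining biased assimilation and backfire.

    Illustrative sketch only: `mu` and `threshold` are assumed parameters.
    Opinions are kept in [-1, 1].
    """
    diff = x_j - x_i
    if abs(diff) <= threshold:
        x_i = x_i + mu * diff      # biased assimilation: move toward x_j
    else:
        x_i = x_i - mu * diff      # backfire: move away, becoming more extreme
    return max(-1.0, min(1.0, x_i))
```

With these assumed parameters, a nearby opinion pulls the agent closer, while a distant one pushes the agent further toward the boundary of the opinion space.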
The Evolution of Beliefs over Signed Social Networks
We study the evolution of opinions (or beliefs) over a social network modeled as a signed graph. The sign attached to an edge in this graph characterizes whether the corresponding individuals, or end nodes, are friends (positive links) or enemies (negative links). Pairs of nodes are randomly selected to interact over time; when two nodes interact, each updates its opinion based on the opinion of the other node and the sign of the corresponding link. This model generalizes the DeGroot model to account for negative links: when two enemies interact, their opinions move in opposite directions. We provide conditions for convergence and divergence in expectation, in mean square, and in the almost-sure sense, and exhibit phase-transition phenomena for these notions of convergence depending on the parameters of the opinion update model and on the structure of the underlying graph. We establish a "no-survivor" theorem, stating that the difference in opinions of any two nodes diverges whenever opinions in the network diverge as a whole. We also prove a "live-or-die" lemma, indicating that almost surely the opinions either converge to an agreement or diverge. Finally, we extend our analysis to cases where opinions have hard lower and upper limits. In these cases, we study when and how opinions may become asymptotically clustered at the belief boundaries, and highlight the crucial influence of (strong or weak) structural balance of the underlying network on this clustering phenomenon.
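A signed pairwise interaction of this kind can be sketched in a few lines. This is an illustrative sketch only: the symmetric update with a single gain `alpha` is an assumption, not the paper's exact specification. Friends (positive links) attract each other's opinions; enemies (negative links) repel, which is what allows opinions to diverge.

```python
def interact(x, i, j, sign, alpha=0.3):
    """Pairwise update on a signed graph (illustrative sketch).

    sign=+1: friends, each moves toward the other.
    sign=-1: enemies, each moves away from the other.
    Opinions are left unbounded, matching the divergence analysis.
    """
    xi, xj = x[i], x[j]
    step = alpha * (xj - xi)
    if sign > 0:
        x[i], x[j] = xi + step, xj - step   # friends attract
    else:
        x[i], x[j] = xi - step, xj + step   # enemies repel
```

Repeated negative interactions drive the two opinions apart without bound, which is the intuition behind the divergence results described above.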
Opinion Polarization by Learning from Social Feedback
We explore a new mechanism to explain polarization phenomena in opinion dynamics, in which agents evaluate alternative views on the basis of the social feedback obtained when expressing them. High support for the favored opinion in the social environment is treated as positive feedback that reinforces the value associated with this opinion. In connected networks of sufficiently high modularity, different groups of agents can form strong convictions of competing opinions. Linking the social feedback process to standard equilibrium concepts, we analytically characterize sufficient conditions for the stability of bi-polarization. While previous models have emphasized the polarization effects of deliberative, argument-based communication, our model highlights an affective, experience-based route to polarization, without assumptions about negative influence or bounded confidence.
Comment: Presented at the Social Simulation Conference (Dublin 2017)
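The feedback mechanism can be sketched as a simple reinforcement-style value update. This is a hypothetical illustration; the function name, the learning rate, and the exact form of the update are assumptions, not the model's published equations. The value of the opinion an agent expresses moves toward the reward (agreement or disagreement) received from the social environment, so sustained in-group support entrenches that opinion.

```python
def feedback_update(q, expressed, reward, lr=0.1):
    """Reinforce the value of the expressed opinion by social feedback.

    Illustrative sketch: `q` maps each opinion to its learned value,
    `reward` in [-1, 1] encodes agreement (+) or disagreement (-),
    and `lr` is an assumed learning rate.
    """
    q = dict(q)                                   # leave the input unchanged
    q[expressed] += lr * (reward - q[expressed])  # move toward the feedback
    return q
```

In a highly modular network, each community mostly rewards its own favored opinion, so the two groups' values drift toward opposite convictions.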
Opinion dynamics: models, extensions and external effects
Recently, social phenomena have received a lot of attention not only from social scientists, but also from physicists, mathematicians and computer scientists, in the emerging interdisciplinary field of complex systems science. Opinion dynamics is one of the processes studied, since opinions are the drivers of human behaviour and play a crucial role in many global challenges that our complex world and societies are facing: global financial crises, global pandemics, growth of cities, urbanisation and migration patterns, and, last but not least, climate change and environmental sustainability and protection. Opinion formation is a complex process affected by the interplay of different elements, including individual predisposition, the influence of positive and negative peer interaction (with social networks playing a crucial role in this respect), the information each individual is exposed to, and many others. Several models inspired by those in use in physics have been developed to encompass many of these elements, to allow for the identification of the mechanisms involved in the opinion formation process and an understanding of their role, with the practical aim of simulating opinion formation and spreading under various conditions. These modelling schemes range from simple binary models, such as the voter model, to multi-dimensional continuous approaches. Here, we provide a review of recent methods, focusing on models employing both peer interaction and external information, and emphasising the role that less-studied mechanisms, such as disagreement, have in driving the opinion dynamics. [...]
Comment: 42 pages, 6 figures
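As a concrete example of the simplest modelling scheme mentioned above, a step of the binary voter model takes only a few lines. This is a generic sketch of the standard voter model, not code from any paper in this listing: a randomly chosen agent copies the opinion of a randomly chosen neighbour.

```python
import random

def voter_step(opinions, neighbors, rng):
    """One step of the binary voter model: a uniformly random agent
    adopts the opinion of one of its uniformly random neighbours."""
    i = rng.randrange(len(opinions))
    opinions[i] = opinions[rng.choice(neighbors[i])]
```

On any finite connected graph, repeating this step almost surely drives the population to consensus; opinions can only ever take values already present in the initial condition.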
Quantifying and minimizing risk of conflict in social networks
Controversy, disagreement, conflict, polarization and opinion divergence in social networks have been the subject of much recent research. In particular, researchers have addressed the question of how such concepts can be quantified given people’s prior opinions, and how they can be optimized by influencing the opinion of a small number of people or by editing the network’s connectivity.
Here, rather than optimizing such concepts given a specific set of prior opinions, we study whether they can be optimized in the average case and in the worst case over all sets of prior opinions. In particular, we derive the worst-case and average-case conflict risk of networks, and we propose algorithms for optimizing these.
For some measures of conflict, these are non-convex optimization problems with many local minima. We provide a theoretical and empirical analysis of the nature of some of these local minima, and show how they are related to existing organizational structures.
Empirical results show how a small number of edits quickly decreases a network's conflict risk, both average-case and worst-case. Furthermore, they show that minimizing average-case conflict risk often does not reduce worst-case conflict risk. Minimizing worst-case conflict risk, on the other hand, while computationally more challenging, is generally effective at minimizing both worst-case and average-case conflict risk.
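One common way to quantify disagreement of the kind discussed above is the Laplacian quadratic form: the sum of squared opinion differences across edges. This is a generic sketch for illustration; the cited work's exact conflict-risk measures may differ.

```python
def disagreement(opinions, edges):
    """Network disagreement as the Laplacian quadratic form x^T L x:
    the sum of squared opinion differences over all edges.
    Generic illustration, not the paper's specific risk measure."""
    return sum((opinions[i] - opinions[j]) ** 2 for i, j in edges)
```

Under such a measure, editing the network's connectivity (adding or removing edges between like- or unlike-minded nodes) directly changes which opinion differences contribute to the total, which is why a few well-chosen edits can reduce conflict quickly.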