Truth seekers in opinion dynamics models
We modify the model of Deffuant et al. to distinguish the true opinion among the others, in the fashion of Hegselmann and Krause. The basic features of both models, modified to account for truth seekers, are qualitatively the same.
Comment: RevTeX4, 2 pages, 1 figure in 6 EPS files
Monte Carlo simulations of the Ising and the Sznajd model on growing Barabasi-Albert networks
The Ising model on growing Barabasi-Albert networks shows the same ferromagnetic behavior as on static Barabasi-Albert networks. Sznajd models on growing Barabasi-Albert networks show a hysteresis-like behavior: nearly a full consensus builds up, and the winning opinion depends on history. On slowly growing Barabasi-Albert networks a full consensus builds up. With five opinions in the Sznajd model with limited persuasion on growing Barabasi-Albert networks, all odd opinions win and all even opinions lose supporters.
Comment: 6 pages including 3 figures, for IJMP
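For readers who want to experiment, a rough sketch of one ingredient, Glauber-dynamics Ising spins simulated while a Barabasi-Albert network is still growing; the growth schedule, temperature, and sweep counts are illustrative guesses, not the paper's protocol:

```python
import math
import random

def ising_on_growing_ba(n_final=500, m=3, temperature=2.0, seed=0):
    """Glauber-dynamics Ising model run while a Barabasi-Albert network is
    still growing (an illustrative sketch, not the paper's exact setup)."""
    rng = random.Random(seed)
    nodes = list(range(m + 1))
    adj = {i: set(nodes) - {i} for i in nodes}       # small complete seed graph
    targets = [v for v in nodes for _ in adj[v]]     # degree-weighted attachment list
    spin = {i: rng.choice((-1, 1)) for i in nodes}

    for new in range(m + 1, n_final):
        chosen = set()
        while len(chosen) < m:                       # preferential attachment to m nodes
            chosen.add(rng.choice(targets))
        adj[new], spin[new] = set(), rng.choice((-1, 1))
        nodes.append(new)
        for v in chosen:
            adj[new].add(v); adj[v].add(new)
            targets.extend((new, v))
        for _ in range(len(nodes)):                  # one Glauber sweep between growth steps
            i = rng.choice(nodes)
            dE = 2 * spin[i] * sum(spin[j] for j in adj[i])
            if dE <= 0 or rng.random() < math.exp(-dE / temperature):
                spin[i] = -spin[i]
    return sum(spin.values()) / len(nodes)           # magnetization per spin

if __name__ == "__main__":
    print("magnetization:", round(ising_on_growing_ba(), 3))
```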
Bounded confidence, radical groups, and charismatic leaders
With a few simple extensions it is possible to model radical groups, charismatic leaders, and processes of radicalization within the bounded confidence framework. The resulting model exhibits many surprising (non-)monotonicities: in certain regions of the parameter space, more radicals or more 'charismaticity' may lead to less radicalization.
Advertising, consensus, and ageing in multilayer Sznajd model
In the Sznajd consensus model on the square lattice, two people who agree in their opinions convince their neighbours of this opinion. We generalize the model to many layers representing many age levels and check whether a consensus among all layers is still possible. Advertising sometimes, but not always, produces a consensus on the advertised opinion.
Comment: 6 pages including 4 figures, for Int. J. Mod. Phys.
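A minimal single-layer sketch of the rule described above (an agreeing horizontal pair convinces its six lattice neighbours), with advertising modelled as a small probability of adopting the advertised opinion; the multilayer age structure is omitted and all parameters are illustrative:

```python
import numpy as np

def sznajd_square_lattice(L=50, steps=200_000, p_advertising=0.0,
                          advertised=+1, seed=0):
    """Sznajd rule on an L x L lattice with periodic boundaries:
    a horizontally adjacent pair that agrees convinces its six neighbours.
    With probability p_advertising a random agent adopts the advertised
    opinion instead (illustrative single-layer sketch)."""
    rng = np.random.default_rng(seed)
    s = rng.choice((-1, 1), size=(L, L))
    for _ in range(steps):
        if rng.random() < p_advertising:
            i, j = rng.integers(L, size=2)
            s[i, j] = advertised                  # advertising event
            continue
        i, j = rng.integers(L, size=2)
        j2 = (j + 1) % L                          # right-hand partner of (i, j)
        if s[i, j] == s[i, j2]:                   # the pair agrees ...
            for (a, b) in ((i - 1, j), (i + 1, j), (i, j - 1),
                           (i - 1, j2), (i + 1, j2), (i, (j2 + 1) % L)):
                s[a % L, b % L] = s[i, j]         # ... and convinces its neighbours
    return s

if __name__ == "__main__":
    lattice = sznajd_square_lattice(p_advertising=0.001)
    print("fraction of +1 opinions:", (lattice == 1).mean())
```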
The Importance of Disagreeing: Contrarians and Extremism in the CODA model
In this paper, we study the effects of introducing contrarians into the CODA model of opinion dynamics, in which agents hold internal continuous opinions but exchange information only about a binary choice that is a function of their continuous opinion. We observe that the hung-election scenario still exists here, but it is weaker and should not be expected in every election. Finally, we also show that the introduction of contrarians makes the tendency towards extremism of the original model weaker, indicating that the existence of agents who prefer to disagree might be an important aspect and may help society to diminish extremist opinions.
Comment: 14 pages, 9 figures
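A rough sketch of a CODA-style update with contrarians, assuming the usual log-odds formulation in which each agent shows only the sign of its continuous opinion; the ring topology, step size, and contrarian fraction are illustrative choices, not the paper's:

```python
import numpy as np

def coda_with_contrarians(n=400, steps=200_000, step=1.0,
                          frac_contrarian=0.1, seed=0):
    """CODA-style dynamics on a ring (illustrative sketch): each agent keeps a
    continuous log-odds nu for choice +1 but only shows the sign of nu.
    Normal agents move their log-odds towards an observed neighbour's choice;
    contrarians move away from it."""
    rng = np.random.default_rng(seed)
    nu = rng.normal(0.0, 1.0, n)                 # internal continuous opinions (log-odds)
    contrarian = rng.random(n) < frac_contrarian

    for _ in range(steps):
        i = rng.integers(n)
        j = (i + rng.choice((-1, 1))) % n        # observe a ring neighbour's binary choice
        observed = 1 if nu[j] > 0 else -1
        sign = -1 if contrarian[i] else 1        # contrarians update in the opposite direction
        nu[i] += sign * step * observed
    return nu

if __name__ == "__main__":
    final = coda_with_contrarians()
    print("fraction choosing +1:", (final > 0).mean())
    print("mean |log-odds| (extremism proxy):", np.abs(final).mean().round(2))
```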
Truth and Cognitive Division of Labour: First Steps Towards a Computer Aided Social Epistemology
The paper analyzes the chances for the truth to be found and broadly accepted under conditions of cognitive division of labour combined with a social exchange process. Cognitive division of labour means that only some individuals are active truth seekers, possibly with different capacities. The social exchange process consists in an exchange of opinions between all individuals, whether truth seekers or not. We develop a model which is investigated by both mathematical tools and computer simulations. As an analytical result, the Funnel theorem states that under rather weak conditions on the social process a consensus on the truth will be reached if all individuals possess an arbitrarily small inclination for truth seeking. The Leading the pack theorem states that under certain conditions even a single truth seeker may lead all individuals to the truth. Systematic simulations analyze how close and how fast groups can get to the truth depending on the frequency of truth seekers, their capacities as truth seekers, the position of the truth (more to the extreme or more in the centre of an opinion space), and the willingness to take into account the opinions of others when exchanging and updating opinions. A tricky movie visualizes simulation results in a parameter space of higher dimensions.
Keywords: Opinion Dynamics, Consensus/dissent, Bounded Confidence, Truth, Social Epistemology
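The social process behind these results is a bounded-confidence dynamics with an added attraction towards the truth; a hedged reconstruction of the standard Hegselmann-Krause update with truth seekers, in notation that may differ from the paper's, is:

```latex
% Hegselmann-Krause dynamics with truth seekers (reconstruction; notation mine).
% x_i(t): opinion of agent i, \tau: the true value, \alpha_i \in [0,1]: strength
% of agent i's inclination for truth seeking (\alpha_i = 0 for non-seekers),
% \epsilon_i: confidence bound of agent i.
\begin{align*}
  I(i, x(t)) &= \{\, j : |x_i(t) - x_j(t)| \le \epsilon_i \,\}, \\
  x_i(t+1)   &= \alpha_i \,\tau
              + (1 - \alpha_i)\, \frac{1}{|I(i, x(t))|} \sum_{j \in I(i, x(t))} x_j(t).
\end{align*}
```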
Opinion Dynamics and Bounded Confidence Models, Analysis and Simulation
When does opinion formation within an interacting group lead to consensus, polarization or fragmentation? The article investigates various models for the dynamics of continuous opinions by analytical methods as well as by computer simulations. Section 2 develops within a unified framework the classical model of consensus formation, the variant of this model due to Friedkin and Johnsen, a time-dependent version, and a nonlinear version with bounded confidence of the agents. Section 3 presents major analytical results for all these models. Section 4 gives an extensive exploration of the nonlinear model with bounded confidence by a series of computer simulations. An appendix supplies needed mathematical definitions, tools, and theorems.
Keywords: opinion dynamics, consensus/dissent, bounded confidence, nonlinear dynamical systems
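Of the models surveyed in Section 2, the Friedkin-Johnsen variant has a compact closed form; the following is the standard formulation in notation that may differ from the article's:

```latex
% Friedkin-Johnsen dynamics (standard formulation; notation may differ from the article).
% x(t): vector of opinions, W: row-stochastic influence matrix,
% A = diag(a_1, ..., a_n): susceptibilities to social influence, x(0): initial opinions.
\[
  x(t+1) = A\, W\, x(t) + (I - A)\, x(0),
\]
% a_i = 1 recovers the classical consensus update for agent i,
% while a_i < 1 anchors agent i to its initial opinion.
```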
Optimal Opinion Control: The Campaign Problem
Opinion dynamics is nowadays a very common field of research. In this article we formulate and then study a novel, namely strategic, perspective on such dynamics: there are the usual normal agents that update their opinions, for instance according to the well-known bounded confidence mechanism, but, additionally, there is at least one strategic agent. That agent uses opinions as freely selectable strategies to gain control over the dynamics: the strategic agent of our benchmark problem tries, during a campaign of a certain length, to influence the ongoing dynamics among normal agents with strategically placed opinions (one per period) in such a way that, by the end of the campaign, as many normal agents as possible end up with opinions in a certain interval of the opinion space. Structurally, such a problem is an optimal control problem, and that type of problem is ubiquitous. Resorting to advanced and partly non-standard methods for computing optimal controls, we solve some instances of the campaign problem. But even for a very small number of normal agents, just one strategic agent, and a ten-period campaign length, the problem turns out to be extremely difficult. We explicitly discuss the moral and political concerns that immediately arise if someone starts to analyze the possibilities of optimal opinion control.
Comment: 47 pages, 12 figures, and 11 tables
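To make the setting concrete, here is a toy greedy heuristic (emphatically not the paper's optimal control method): normal agents follow Hegselmann-Krause bounded confidence dynamics, and in each period the strategic opinion is chosen by grid search to maximize the one-step count of agents inside a target interval; all names and parameters are illustrative:

```python
import numpy as np

def hk_step(x, eps, control=None):
    """One Hegselmann-Krause update; an optional control opinion is visible
    to the normal agents but is not itself updated."""
    pool = x if control is None else np.append(x, control)
    new = np.empty_like(x)
    for i, xi in enumerate(x):
        near = pool[np.abs(pool - xi) <= eps]    # opinions within the confidence bound
        new[i] = near.mean()
    return new

def greedy_campaign(x0, periods=10, eps=0.2, target=(0.75, 1.0), grid=101):
    """Toy greedy heuristic for the campaign problem (not the paper's method):
    each period, pick the control opinion that maximizes the number of normal
    agents inside the target interval after that single step."""
    x = np.array(x0, dtype=float)
    candidates = np.linspace(0.0, 1.0, grid)
    controls = []
    for _ in range(periods):
        def score(c):
            y = hk_step(x, eps, c)
            return np.sum((y >= target[0]) & (y <= target[1]))
        best = max(candidates, key=score)
        controls.append(best)
        x = hk_step(x, eps, best)
    in_target = int(np.sum((x >= target[0]) & (x <= target[1])))
    return controls, x, in_target

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    controls, final, hits = greedy_campaign(rng.uniform(0, 1, 20))
    print("controls:", np.round(controls, 2))
    print("agents in target interval:", hits)
```

Such one-period lookahead generally falls short of genuine multi-period optimal control, which is part of why even small instances of the campaign problem are hard.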
Counting in Team Semantics
We explore several counting constructs for logics with team semantics. Counting is an important task in numerous applications, but it has a somewhat delicate relationship to logic. Team semantics, on the other hand, is the mathematical basis of modern logics of dependence and independence, in which formulae are evaluated not for a single assignment of values to variables, but for a set of such assignments. It is therefore interesting to ask what kind of counting constructs are adequate in this context, and how such constructs influence the expressive power, and the model-theoretic and algorithmic properties, of logics with team semantics. Due to the second-order features of team semantics there is a rich variety of potential counting constructs. Here we study variations of two main ideas: forking atoms and counting quantifiers.
Forking counts how many different values for a tuple w̄ occur in assignments with coinciding values for v̄. We call this the forking degree of v̄ with respect to w̄. Forking is powerful enough to capture many of the previously studied atomic dependency properties. In particular we exhibit logics with forking atoms that have, respectively, precisely the power of dependence logic and independence logic.
Our second approach uses counting quantifiers ∃^{≥μ} of a similar kind as used in logics with Tarski semantics. The difference is that these quantifiers are now applied to teams of assignments that may give different values to μ. We show that, on finite structures, there is an intimate connection between inclusion logic with counting quantifiers and FPC, fixed-point logic with counting, which is a logic of fundamental importance for descriptive complexity theory. For sentences, the two logics have the same expressive power. Our analysis is based on a new variant of model-checking games, called threshold safety games, on a trap condition for such games, and on game interpretations.
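As an illustration of the forking degree described above, a small helper under my reading of the definition (variable names and the dict-based team representation are mine):

```python
from collections import defaultdict

def forking_degrees(team, v, w):
    """For a team (a collection of assignments, each a dict from variables to
    values), count how many distinct values the tuple w takes among assignments
    that agree on the tuple v. Returns a mapping: value of v -> forking degree.
    Illustrative reading of the forking degree of v with respect to w."""
    groups = defaultdict(set)
    for s in team:
        v_val = tuple(s[x] for x in v)
        w_val = tuple(s[x] for x in w)
        groups[v_val].add(w_val)
    return {v_val: len(w_vals) for v_val, w_vals in groups.items()}

if __name__ == "__main__":
    team = [
        {"x": 0, "y": 1}, {"x": 0, "y": 2},
        {"x": 1, "y": 1}, {"x": 1, "y": 1},
    ]
    # x = 0 forks into two y-values, x = 1 into one
    print(forking_degrees(team, v=("x",), w=("y",)))
```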
