259 research outputs found

    The Importance of Disagreeing: Contrarians and Extremism in the CODA model

    Full text link
    In this paper, we study the effects of introducing contrarians into a model of opinion dynamics, the CODA model, in which agents hold internal continuous opinions but exchange information only about a binary choice that is a function of their continuous opinion. We observe that the hung-election scenario still exists here, but it is weaker and should not be expected in every election. Finally, we show that the introduction of contrarians weakens the tendency towards extremism of the original model, indicating that the existence of agents who prefer to disagree might be an important factor in helping society diminish extremist opinions. Comment: 14 pages, 9 figures
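    As an illustration of the mechanism this abstract describes, here is a minimal sketch of a CODA-style update with contrarians. The log-odds form of the update, the step size, and the contrarian fraction are assumptions for illustration; the paper's exact rule may differ.

```python
import random

N = 400              # number of agents (assumed)
STEP = 1.0           # log-odds shift per observed choice (assumed)
P_CONTRARIAN = 0.1   # fraction of contrarians (assumed)

# nu[i] is the internal continuous opinion (log-odds that choice A is better);
# only its sign is visible to other agents.
nu = [random.uniform(-1.0, 1.0) for _ in range(N)]
contrarian = [random.random() < P_CONTRARIAN for _ in range(N)]

def choice(i):
    """Binary action shown to others: +1 (A) or -1 (B)."""
    return 1 if nu[i] >= 0 else -1

for _ in range(100 * N):                  # random sequential updates
    i, j = random.sample(range(N), 2)     # agent i observes agent j's choice
    if contrarian[i]:
        nu[i] -= STEP * choice(j)         # contrarians move away from what they observe
    else:
        nu[i] += STEP * choice(j)         # regular agents move toward it

# Extremism can be gauged by how far |nu| drifts from zero.
print(max(abs(v) for v in nu))
```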

    Formation of Languages; Equality, Hierarchy and Teachers

    Get PDF
    A quantitative method is suggested in which the meanings of words in a vocabulary, and the grammatical rules governing them, are represented by real numbers. People meet randomly and average their vocabularies if they are of equal hierarchy; otherwise they either copy from the higher hierarchy or stay idle. The presence of teachers broadcasting the same (but arbitrarily chosen) vocabulary makes language formation converge more quickly. Comment: 10 pages, 3 (8 in total) figures
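    A toy sketch of the averaging/copying rule described above follows. The vocabulary length, the number of hierarchy levels, and the teacher-meeting probability are illustrative assumptions not taken from the abstract.

```python
import random

N, WORDS = 200, 10
vocab = [[random.random() for _ in range(WORDS)] for _ in range(N)]
rank = [random.randint(0, 2) for _ in range(N)]          # hierarchy level (3 levels assumed)
teacher_vocab = [random.random() for _ in range(WORDS)]  # fixed, arbitrarily chosen vocabulary
P_TEACHER = 0.05                                         # chance a meeting is with a teacher (assumed)

for _ in range(50 * N):
    i = random.randrange(N)
    if random.random() < P_TEACHER:
        vocab[i] = list(teacher_vocab)                   # a teacher broadcasts its vocabulary
        continue
    j = random.randrange(N)
    if i == j:
        continue
    if rank[i] == rank[j]:                               # equals average their vocabularies
        avg = [(a + b) / 2 for a, b in zip(vocab[i], vocab[j])]
        vocab[i], vocab[j] = avg, avg[:]
    elif rank[i] < rank[j]:                              # the lower rank copies from the higher
        vocab[i] = list(vocab[j])
    # otherwise i outranks j and stays idle
```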

    Truth seekers in opinion dynamics models

    Full text link
    We modify the model of Deffuant et al. to distinguish the true opinion among the others, in the fashion of Hegselmann and Krause. The basic features of both models, modified to account for truth seekers, are qualitatively the same. Comment: RevTeX4, 2 pages, 1 figure in 6 eps files

    About the Power to Enforce and Prevent Consensus by Manipulating Communication Rules

    Full text link
    We explore the possibilities of enforcing and preventing consensus in continuous opinion dynamics by modifying the communication rules. We refer to the model of Weisbuch and Deffuant, where n agents adjust their continuous opinions as a result of random pairwise encounters whenever their opinions differ by not more than a given bound of confidence ε. A high ε leads to consensus, while a lower ε leads to fragmentation into several opinion clusters. We drop the random-encounter assumption and ask: how small may ε be such that consensus is still possible with a certain communication plan for the entire group? Mathematical analysis shows that ε may be significantly smaller than in the random pairwise case. On the other hand, we ask: how large may ε be such that preventing consensus is still possible? In answering this question we prove Fortunato's simulation result that consensus cannot be prevented for ε > 0.5 for large groups. Next we consider opinion dynamics under different individual strategies and examine their power to increase the chances of consensus. One result is that balancing agents increase the chances of consensus, especially if the agents are cautious in adapting their opinions. However, curious agents increase the chances of consensus only if they are not cautious in adapting their opinions. Comment: 21 pages, 6 figures
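    For reference, a minimal sketch of the Weisbuch-Deffuant bounded-confidence rule this abstract builds on: opinions in a random pair move toward each other only when they lie within ε of one another. The convergence parameter MU and the population size are assumptions for illustration.

```python
import random

N, EPS, MU = 500, 0.3, 0.5
x = [random.random() for _ in range(N)]     # continuous opinions in [0, 1]

for _ in range(200 * N):
    i, j = random.sample(range(N), 2)       # random pairwise encounter
    if abs(x[i] - x[j]) <= EPS:             # interact only within the bound of confidence
        shift = MU * (x[j] - x[i])
        x[i] += shift                       # both agents compromise symmetrically
        x[j] -= shift

# With EPS large enough a single cluster (consensus) remains;
# smaller EPS leaves several opinion clusters.
```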

    Monte Carlo Simulation of Deffuant opinion dynamics with quality differences

    Full text link
    In this work the consequences of different opinion qualities in the Deffuant model were examined. If these qualities were randomly distributed, no change in behavior was observed. In contrast, systematically assigned qualities had strong effects on the final opinion distribution. There was a high probability that the strongest opinion was one with a high quality. Furthermore, under the same conditions, this major opinion was much stronger than in the models without systematic differences. Finally, a society with systematic quality differences needed more tolerance to reach a complete consensus than one without quality differences or with unsystematically assigned ones. Comment: 8 pages including 5 space-consuming figures, for Int. J. Mod. Phys. C 15/1
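    One plausible way to attach qualities to opinions in a Deffuant-type step is to bias the compromise toward the higher-quality opinion, as sketched below. Both the quality assignment and the weighting scheme are illustrative assumptions; the paper's exact update rule may differ.

```python
import random

N, EPS, MU = 500, 0.3, 0.5
x = [random.random() for _ in range(N)]   # continuous opinions in [0, 1]
q = [0.5 + xi for xi in x]                # systematically assigned quality (assumed form)

for _ in range(200 * N):
    i, j = random.sample(range(N), 2)
    xi, xj = x[i], x[j]
    if abs(xi - xj) <= EPS:               # usual bound-of-confidence check
        w = q[j] / (q[i] + q[j])          # weight of j's opinion, set by its quality
        x[i] = xi + MU * w * (xj - xi)    # the lower-quality side moves further
        x[j] = xj + MU * (1.0 - w) * (xi - xj)
```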

    Continuous Opinions and Discrete Actions in Opinion Dynamics Problems

    Full text link
    A model is presented in which agents show discrete behavior in their actions but hold continuous opinions that are updated by interacting with other agents. This new updating rule is applied to both the voter and Sznajd models for interaction between neighbors, and its consequences are discussed. The appearance of extremists is naturally observed and seems to be a characteristic of this model. Comment: 10 pages, 4 figures, minor changes for improved clarity
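    The following sketch illustrates the continuous-opinion / discrete-action idea on a ring of neighbors, in the spirit of the voter-like local interaction mentioned above. The ring topology and the step size are assumptions for illustration.

```python
import random

N, STEP = 200, 1.0
nu = [random.uniform(-1.0, 1.0) for _ in range(N)]   # internal continuous opinion

def action(i):
    return 1 if nu[i] >= 0 else -1                   # only the sign is visible to neighbors

for _ in range(100 * N):
    i = random.randrange(N)
    j = (i + random.choice([-1, 1])) % N             # a random neighbor on the ring
    nu[i] += STEP * action(j)                        # move toward the observed discrete action

# Repeated reinforcement drives |nu| to large values: extremists appear
# even though the visible actions stay binary.
```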

    Multidimensional Consensus model on a Barabasi-Albert network

    Full text link
    A consensus model in the sense of Deffuant was simulated on a directed Barabasi-Albert network. Agents hold opinions on different subjects, represented by a multi-component subject vector with discrete components. The analysis concerns the distribution and clustering of agents who agree on the opinions of the subjects. The main results are, on the one hand, that an absolute consensus mostly does not exist; whether the communication ends in consensus or in pluralism depends on the ratio of the number of agents to the number of subjects. Mostly a second robust cluster remains, whose size depends on the number of subjects. Two agents agree either on (nearly) all subjects or on (nearly) none. The operative parameter of the consensus-formation process is the tolerance of the group members towards changing their views. Comment: 14 pages including all 10 figures, for IJMPC 16, issue
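    A rough sketch of a discretized, multi-subject Deffuant-style step is given below. The random-graph stand-in for the directed Barabasi-Albert network, the agreement tolerance, and the one-step adjustment rule are assumptions; the paper's exact dynamics may differ.

```python
import random

N, SUBJECTS, Q = 300, 5, 10    # agents, subjects, discrete opinion values 1..Q (assumed)
TOL = 8                        # total disagreement tolerated before communication stops (assumed)

opinion = [[random.randint(1, Q) for _ in range(SUBJECTS)] for _ in range(N)]
neighbours = [random.sample(range(N), 5) for _ in range(N)]   # stand-in network topology

for _ in range(200 * N):
    i = random.randrange(N)
    j = random.choice(neighbours[i])
    diff = sum(abs(a - b) for a, b in zip(opinion[i], opinion[j]))
    if 0 < diff <= TOL:                    # close enough to communicate
        s = random.randrange(SUBJECTS)     # pick one subject and move one step closer
        if opinion[i][s] < opinion[j][s]:
            opinion[i][s] += 1
        elif opinion[i][s] > opinion[j][s]:
            opinion[i][s] -= 1
```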

    Sélection de points en apprentissage actif. Discrépance et dispersion : des critères optimaux ?

    Get PDF
    We aim to generate training sets suited to classification problems. We first show that the theoretical results favoring low-discrepancy sequences for regression problems are not suited to classification problems. We then give theoretical arguments and simulation results showing that the dispersion of the training points is the relevant criterion to minimize in order to optimize classification performance.
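    To make the criterion concrete, the sketch below estimates the dispersion of a candidate training set in [0, 1]^d: the largest distance from any point of the domain to its nearest training point, approximated on a random probe sample. Lower dispersion means no large region is left unexplored. The designs compared here are illustrative, not those studied in the paper.

```python
import math
import random

def dispersion(points, dim, n_probe=5000):
    """Approximate dispersion: worst-case distance from a domain point to the design."""
    probes = [[random.random() for _ in range(dim)] for _ in range(n_probe)]
    worst = 0.0
    for p in probes:
        nearest = min(math.dist(p, x) for x in points)
        worst = max(worst, nearest)
    return worst

dim = 2
design_a = [[random.random() for _ in range(dim)] for _ in range(50)]              # random design
design_b = [[(i + 0.5) / 7, (j + 0.5) / 7] for i in range(7) for j in range(7)]    # regular grid

print(dispersion(design_a, dim), dispersion(design_b, dim))
```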