
    Opinion Dynamics with Random Actions and a Stubborn Agent

    We study opinion dynamics in a social network with stubborn agents who influence their neighbors but themselves always stick to their initial opinions. We first consider the well-known DeGroot model. While it is known in the literature that this model can lead to consensus even in the presence of a stubborn agent, we show that the same result holds under weaker assumptions than previously reported. We then consider a recent extension of the DeGroot model in which the opinion of each agent is a Bernoulli-distributed random variable, and by leveraging the first result we establish that this model also leads to consensus, in the sense of convergence in probability, in the presence of a stubborn agent. Moreover, all agents' opinions converge to that of the stubborn agent.
    Comment: 5 pages; this work was presented at the Asilomar Conference on Signals, Systems, and Computers 201
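
    The averaging dynamics described above can be sketched as follows. This is a minimal illustration of a DeGroot-style update with one stubborn agent; the weight matrix and initial opinions are illustrative choices of mine, not the paper's assumptions:

```python
import numpy as np

n = 6
# Each regular agent weighs every agent equally (1/n each) -- a toy choice.
W = np.full((n, n), 1.0 / n)
W[0] = 0.0
W[0, 0] = 1.0                   # agent 0 is stubborn: it only listens to itself

x = np.linspace(0.0, 1.0, n)    # spread-out initial opinions; x[0] = 0.0
for _ in range(200):
    x = W @ x                   # DeGroot update: weighted average of neighbors

print(np.round(x, 6))           # all opinions converge to the stubborn agent's 0.0
```

    Because the stubborn agent is the chain's only absorbing state and every other agent places positive weight on it, repeated averaging pulls all opinions to the stubborn agent's value.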

    Biased Opinion Dynamics: When the Devil Is in the Details

    We investigate opinion dynamics in multi-agent networks when a bias toward one of two possible opinions exists, for example reflecting a status quo vs. a superior alternative. Starting with all agents sharing an initial opinion representing the status quo, the system evolves in steps. In each step, one agent selected uniformly at random adopts the superior opinion with some probability α, and with probability 1 − α it follows an underlying update rule to revise its opinion on the basis of those held by its neighbors. We analyze convergence of the resulting process under two well-known update rules, namely majority and voter. The framework we propose exhibits a rich structure, with a non-obvious interplay between topology and underlying update rule. For example, for the voter rule we show that the speed of convergence bears no significant dependence on the underlying topology, whereas the picture changes completely under the majority rule, where network density negatively affects convergence. We believe that the model we propose is at the same time simple, rich, and modular, affording mathematical characterization of the interplay between bias, underlying opinion dynamics, and social structure in a unified setting.
    Comment: The paper has appeared in the Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence. The SOLE copyright holder is IJCAI (International Joint Conferences on Artificial Intelligence), all rights reserved. Link to the proceedings: https://www.ijcai.org/Proceedings/2020/
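
    The step dynamics described above can be sketched as follows, here with the voter update rule on a complete graph; the function and parameter names are illustrative, not the paper's:

```python
import random

def biased_voter(n=50, alpha=0.1, seed=1):
    """Run the biased process until all agents hold the superior opinion;
    return the number of steps taken.  A sketch, not the paper's code."""
    random.seed(seed)
    opinions = [0] * n            # 0 = status quo, 1 = superior alternative
    steps = 0
    while sum(opinions) < n:      # all-ones is the absorbing state
        i = random.randrange(n)
        if random.random() < alpha:
            opinions[i] = 1       # biased step: adopt the superior opinion
        else:
            j = random.randrange(n)   # voter rule: copy a random peer
            opinions[i] = opinions[j]
        steps += 1
    return steps

print(biased_voter())
```

    On the complete graph every agent is a neighbor of every other, so the voter step simply copies a uniformly random peer; with α > 0 the all-superior configuration is the unique absorbing state.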

    Robot swarm democracy: the importance of informed individuals against zealots

    Abstract: In this paper we study a generalized case of the best-of-n model, which considers three kinds of agents: zealots, individuals who remain stubborn and do not change their opinion; informed agents, individuals who can change their opinion and are able to assess the quality of the different options; and uninformed agents, individuals who can change their opinion but are not able to assess the quality of the different options. We study consensus in different regimes: we vary the quality of the options, the percentage of zealots, and the percentage of informed versus uninformed agents. We also consider two decision mechanisms: the voter rule and the majority rule. We study this problem using numerical simulations and mathematical models, and we validate our findings in physical Kilobot experiments. We find that (1) if the number of zealots for the lowest-quality option is not too high, the decision-making process is driven toward the highest-quality option; (2) this effect can be improved by increasing the number of informed agents, who can counteract the effect of adverse zealots; and (3) when the two options have very similar qualities, higher proportions of informed agents are necessary to maintain high consensus on the best-quality option.
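
    A rough simulation in the spirit of this model might look as follows. The exact mechanism by which informed agents use option quality is my assumption here (acceptance probability proportional to quality), not the paper's model, and all parameter values are illustrative:

```python
import random

def best_of_n(n=100, n_zealots=10, frac_informed=0.5,
              quality=(1.0, 2.0), steps=20000, seed=3):
    """Best-of-n sketch with two options, zealots for option 0, and a mix of
    informed/uninformed agents under a voter-style rule."""
    random.seed(seed)
    # Zealots all push option 0 (the lower-quality one) and never update.
    opinion = [0] * n_zealots + [random.randrange(2) for _ in range(n - n_zealots)]
    informed = [False] * n_zealots + \
               [random.random() < frac_informed for _ in range(n - n_zealots)]
    for _ in range(steps):
        i = random.randrange(n_zealots, n)   # pick a non-zealot to update
        j = random.randrange(n)              # sample a random peer
        if informed[i]:
            # Informed agents accept a peer's opinion with probability
            # proportional to that option's quality (an assumed mechanism).
            if random.random() < quality[opinion[j]] / max(quality):
                opinion[i] = opinion[j]
        else:
            opinion[i] = opinion[j]          # uninformed: plain voter copy
    # Share of non-zealots favoring the higher-quality option 1.
    return sum(opinion[n_zealots:]) / (n - n_zealots)

print(best_of_n())
```

    Even with zealots injecting the low-quality option, the informed agents' quality-weighted acceptance biases the population toward option 1, consistent with finding (1) above.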

    On Occupancy Based Randomized Load Balancing for Large Systems with General Distributions

    Multi-server architectures are ubiquitous in today's information infrastructure, whether for supporting cloud services, web servers, or distributed storage. The performance of multi-server systems depends heavily on how load is distributed, which in turn is shaped by the load balancing strategy. Since both latency and blocking are important, it is most reasonable to route an incoming job to a lightly loaded server; hence a good load balancing policy should depend on the states of the servers. Since obtaining the remaining workload of every server for every arrival is very hard, it is preferable to design load balancing policies that depend on occupancy, i.e., the number of jobs in progress at each server. Furthermore, if the system has a large number of servers, it is not practical to use the occupancy information of all the servers to dispatch or route an arrival, due to high communication cost. In large-scale systems with tens of thousands of servers, policies that use the occupancy information of only a finite number of randomly selected servers to dispatch an arrival incur lower implementation cost than policies that use the occupancy information of all the servers. Such policies are referred to as occupancy based randomized load balancing policies. Motivated by cloud computing systems and web-server farms, we study two types of models. In the first model, each server is an Erlang loss server; this model is an abstraction of Infrastructure-as-a-Service (IaaS) clouds. The second model has processor sharing servers and is an abstraction of web-server farms, which serve requests in a round-robin manner with small time granularity. The performance criterion for web servers is the response time, or latency, for a request to be processed.
In most prior works, the analysis of these models was restricted to the case of exponential job length distributions and in this dissertation we study the case of general job length distributions. To analyze the impact of a load balancing policy, we need to develop models for the system's dynamics. In this dissertation, we show that one can construct useful Markovian models. For occupancy based randomized routing policies, due to complex inter-dependencies between servers, an exact analysis is mostly intractable. However, we show that the multi-server systems that have an occupancy based randomized load balancing policy are examples of weakly interacting particle systems. In these systems, servers are interacting particles whose states lie in an uncountable state space. We develop a mean-field analysis to understand a server's behavior as the number of servers becomes large. We show that under certain assumptions, as the number of servers increases, the sequence of empirical measure-valued Markov processes which model the systems' dynamics converges to a deterministic measure-valued process referred to as the mean-field limit. We observe that the mean-field equations correspond to the dynamics of the distribution of a non-linear Markov process. A consequence of having the mean-field limit is that under minor and natural assumptions on the initial states of servers, any finite set of servers can be shown to be independent of each other as the number of servers goes to infinity. Furthermore, the mean-field limit approximates each server's distribution in the transient regime when the number of servers is large. A salient feature of loss and processor sharing systems in the setting where their time evolution can be modeled by reversible Markov processes is that their stationary occupancy distribution is insensitive to the type of job length distribution; it depends only on the average job length but not on the type of the distribution. 
This property does not hold when the number of servers is finite in our context, due to lack of reversibility. We show, however, that the fixed-point of the mean-field is insensitive to the job length distribution for all occupancy based randomized load balancing policies, provided the fixed-point is unique for exponentially distributed job lengths. We also provide deeper insights into the relationship between the mean-field and the distributions of servers and the empirical measure in the stationary regime. Finally, we address the accuracy of mean-field approximations in the case of loss models. To do so, we establish a functional central limit theorem under the assumption that job lengths have exponential distributions. We show that a suitably scaled fluctuation of the stochastic empirical process around the mean-field converges to an Ornstein-Uhlenbeck process. Our analysis is also valid in the Halfin-Whitt regime, in which servers are critically loaded. We then exploit the functional central limit theorem to quantify the error between the actual blocking probability of a system with a large number of servers and the blocking probability obtained from the fixed-point of the mean-field. In the Halfin-Whitt regime, the error is of the order of the inverse square root of the number of servers, while in a light load regime the error is smaller than the inverse square root of the number of servers.
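
    The occupancy based randomized routing studied in this dissertation (sample d servers at random, join the least occupied, block if all sampled servers are full) can be sketched for the loss-server model as follows; all parameter values are illustrative, and exponential job lengths are assumed for simplicity:

```python
import heapq
import random

def simulate(n=100, capacity=5, d=2, arrival_rate=80.0,
             mean_job=1.0, n_jobs=50000, seed=7):
    """Estimate the blocking probability of n Erlang loss servers under
    power-of-d-choices occupancy-based routing.  A sketch, not the
    dissertation's code."""
    random.seed(seed)
    occupancy = [0] * n
    departures = []                  # min-heap of (finish_time, server)
    t, blocked = 0.0, 0
    for _ in range(n_jobs):
        t += random.expovariate(arrival_rate)        # next Poisson arrival
        while departures and departures[0][0] <= t:  # release finished jobs
            _, s = heapq.heappop(departures)
            occupancy[s] -= 1
        sampled = random.sample(range(n), d)         # query d random servers
        best = min(sampled, key=lambda s: occupancy[s])
        if occupancy[best] < capacity:
            occupancy[best] += 1
            heapq.heappush(departures,
                           (t + random.expovariate(1.0 / mean_job), best))
        else:
            blocked += 1                             # all sampled servers full
    return blocked / n_jobs

print(simulate())   # empirical blocking probability
```

    Only the occupancies of the d sampled servers are consulted per arrival, which is what keeps the communication cost low relative to querying all n servers.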