
    Simple Dynamics for Plurality Consensus

    We study a \emph{Plurality-Consensus} process in which each of $n$ anonymous agents of a communication network initially supports an opinion (a color chosen from a finite set $[k]$). Then, in every (synchronous) round, each agent can revise its color according to the opinions currently held by a random sample of its neighbors. It is assumed that the initial color configuration exhibits a sufficiently large \emph{bias} $s$ towards a fixed plurality color; that is, the number of nodes supporting the plurality color exceeds the number of nodes supporting any other color by $s$ additional nodes. The goal is to have the process converge to the \emph{stable} configuration in which all nodes support the initial plurality. We consider a basic model in which the network is a clique and the update rule of the process (called here the \emph{3-majority dynamics}) is the following: each agent looks at the colors of three random neighbors and then applies the majority rule (breaking ties uniformly). We prove that the process converges in time $\mathcal{O}(\min\{k, (n/\log n)^{1/3}\}\,\log n)$ with high probability, provided that $s \geqslant c\sqrt{\min\{2k, (n/\log n)^{1/3}\}\, n \log n}$. We then prove that our upper bound is tight as long as $k \leqslant (n/\log n)^{1/4}$. This fact implies an exponential time-gap between the plurality-consensus process and the \emph{median} process studied by Doerr et al. in [ACM SPAA'11]. A natural question is whether looking at more (than three) random neighbors can significantly speed up the process. We provide a negative answer to this question: in particular, we show that samples of polylogarithmic size can speed up the process by a polylogarithmic factor only.
    Comment: Preprint of journal version
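The 3-majority update rule described above is simple to simulate. A minimal sketch (not the paper's analysis code; for simplicity an agent's three samples are drawn uniformly from all agents, so they may include the agent itself, which is negligible on a clique):

```python
import random
from collections import Counter

def three_majority_step(colors):
    """One synchronous round of the 3-majority dynamics on a clique:
    each agent samples three agents uniformly at random and adopts the
    majority color among the samples, breaking ties uniformly."""
    n = len(colors)
    new_colors = []
    for _ in range(n):
        sample = [colors[random.randrange(n)] for _ in range(3)]
        counts = Counter(sample)
        top = max(counts.values())
        new_colors.append(random.choice(
            [c for c, cnt in counts.items() if cnt == top]))
    return new_colors

def run_until_consensus(colors, max_rounds=10_000):
    """Iterate until all agents agree; returns (winner, rounds)."""
    for t in range(max_rounds):
        if len(set(colors)) == 1:
            return colors[0], t
        colors = three_majority_step(colors)
    return None, max_rounds

# Example: n = 300 agents, k = 3 colors, with a large bias s = 75
# toward color 0 (well above the sqrt(n log n)-scale threshold).
random.seed(1)
colors = [0] * 150 + [1] * 75 + [2] * 75
winner, rounds = run_until_consensus(colors)
```

With a bias this strong, the plurality color wins after only a handful of rounds, matching the logarithmic convergence the theorem describes.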

    Local Access to Huge Random Objects Through Partial Sampling

    © Amartya Shankha Biswas, Ronitt Rubinfeld, and Anak Yodpinyanee. Consider an algorithm performing a computation on a huge random object (for example, a random graph or a “long” random walk). Is it necessary to generate the entire object prior to the computation, or is it possible to provide query access to the object and sample it incrementally “on-the-fly” (as requested by the algorithm)? Such an implementation should emulate the random object by answering queries in a manner consistent with an instance of the random object sampled from the true distribution (or close to it). This paradigm is useful when the algorithm is sub-linear, and thus sampling the entire object up front would ruin its efficiency. Our first set of results focuses on undirected graphs with independent edge probabilities, i.e., each edge is chosen as an independent Bernoulli random variable. We provide a general implementation for this model under certain assumptions. Then, we use this to obtain the first efficient local implementations for the Erdős–Rényi G(n, p) model for all values of p, and for the Stochastic Block model. As in previous local-access implementations for random graphs, we support Vertex-Pair and Next-Neighbor queries. In addition, we introduce a new Random-Neighbor query. Next, we give the first local-access implementation for All-Neighbors queries in the (sparse and directed) Kleinberg’s Small-World model. Our implementations require no pre-processing time and answer each query using O(poly(log n)) time, random bits, and additional space. Next, we show how to implement random Catalan objects, specifically focusing on Dyck paths (balanced random walks on the integer line that are always non-negative). Here, we support Height queries to find the location of the walk, and First-Return queries to find the time when the walk returns to a specified location.
This in turn can be used to implement Next-Neighbor queries on random rooted ordered trees, and Matching-Bracket queries on random well-bracketed expressions (the Dyck language). Finally, we introduce two features to define a new model that: (1) allows multiple independent (and even simultaneous) instantiations of the same implementation to be consistent with each other without the need for communication, and (2) allows us to generate a richer class of random objects that do not have a succinct description. Specifically, we study uniformly random valid q-colorings of an input graph G with maximum degree ∆. This is in contrast to prior work in the area, where the relevant random objects are defined as a distribution with O(1) parameters (for example, n and p in the G(n, p) model). The distribution over valid colorings is instead specified via a “huge” input (the underlying graph G) that is far too large to be read by a sub-linear time algorithm. Instead, our implementation accesses G through local neighborhood probes, and is able to answer queries to the color of any given vertex in sub-linear time for q ≥ 9∆, in a manner that is consistent with a specific random valid coloring of G. Furthermore, the implementation is memory-less, and can maintain consistency with non-communicating copies of itself.
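The core consistency requirement — repeated queries must agree with a single instance drawn from the true distribution — can be illustrated for Vertex-Pair queries on G(n, p) with lazy, memoized Bernoulli sampling. A minimal sketch only (the paper's actual implementations are far more involved and also support Next-Neighbor and Random-Neighbor queries with polylogarithmic resources; `LazyGnp` is a hypothetical name):

```python
import random

class LazyGnp:
    """Local access to G(n, p) via Vertex-Pair queries: each potential
    edge is an independent Bernoulli(p) variable, sampled lazily on
    first query and memoized so repeated queries stay consistent.
    Space grows only with the number of distinct queries made."""

    def __init__(self, n, p, seed=None):
        self.n, self.p = n, p
        self.rng = random.Random(seed)
        self.cache = {}  # (u, v) with u < v  ->  bool

    def vertex_pair(self, u, v):
        """Is the edge {u, v} present? Order-independent and
        consistent across repeated calls."""
        if u == v:
            return False  # simple graph: no self-loops
        key = (min(u, v), max(u, v))
        if key not in self.cache:
            self.cache[key] = self.rng.random() < self.p
        return self.cache[key]

# A graph on a billion vertices is "created" in O(1) time; only the
# queried edges are ever materialized.
g = LazyGnp(n=10**9, p=0.5, seed=42)
ans = g.vertex_pair(3, 7)
```

The memoization is exactly what makes the emulation consistent: the sub-linear algorithm never notices that the rest of the graph was never generated.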

    The impact of agent density on scalability in collective systems: noise-induced versus majority-based bistability

    In this paper, we show that non-uniform distributions in swarms of agents have an impact on the scalability of collective decision-making. In particular, we highlight the relevance of noise-induced bistability in very sparse swarm systems and the failure of these systems to scale. Our work is based on three decision models. In the first model, each agent can change its decision after being recruited by a nearby agent. The second model captures the dynamics of dense swarms controlled by the majority rule (i.e., agents switch their opinion to comply with that of the majority of their neighbors). The third model combines the first two, with the aim of studying the role of non-uniform swarm density in the performance of collective decision-making. Based on the three models, we formulate a set of requirements for convergence and scalability in collective decision-making.
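A toy version of the majority-rule model (the second model above) makes the density effect concrete: agents adopt the majority opinion among neighbors within a fixed radius, while isolated agents and ties fall back on random flips — the noise source that dominates in sparse swarms. This is an illustrative sketch under our own assumptions, not the paper's exact dynamics:

```python
import random

def majority_step(pos, opinions, radius, rng):
    """One synchronous majority-rule round in a spatial swarm. Each agent
    adopts the majority opinion (+1 / -1) among neighbors within `radius`;
    with no neighbors, or a tie, it keeps or flips its opinion uniformly
    at random (noise-driven behavior of sparse swarms)."""
    new = []
    for i, (x, y) in enumerate(pos):
        votes = [opinions[j] for j, (u, v) in enumerate(pos)
                 if j != i and (x - u) ** 2 + (y - v) ** 2 <= radius ** 2]
        ones = votes.count(1)
        if not votes or ones * 2 == len(votes):        # isolated or tied
            new.append(opinions[i] if rng.random() < 0.5 else -opinions[i])
        elif ones * 2 > len(votes):
            new.append(1)
        else:
            new.append(-1)
    return new

# Dense cluster: five co-located agents, one dissenter; a single round
# of the majority rule restores unanimity.
pos = [(0.0, 0.0)] * 5
opinions = [1, 1, 1, 1, -1]
result = majority_step(pos, opinions, radius=0.1, rng=random.Random(0))
```

Shrinking the radius (or spreading the agents out) empties the neighbor lists, so the random-flip branch takes over and the system exhibits the noise-induced bistability discussed above.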

    Conspiratorial beliefs observed through entropy principles

    We propose a novel approach, framed in terms of information theory and entropy, to tackle the issue of conspiracy-theory propagation. We start with the report of an event (such as the 9/11 terrorist attacks) represented as a series of individual strings of information, each denoted by a two-state variable E_i = ±1, i = 1, ..., N. Assigning a value to every string E_i determines the initial order parameter and entropy. Conspiracy theorists comment on the report, focusing repeatedly on several strings E_k and changing their meaning (from -1 to +1). The reading of the event turns fuzzy, with an increased entropy value. Beyond some threshold value of the entropy, chosen for simplicity to be its maximum value, meaning N/2 variables with E_i = +1, doubt prevails in the reading of the event, and the chance is created that an alternative theory might prevail. Therefore, the evolution of the associated entropy is a way to measure the degree of penetration of a conspiracy theory. Our general framework relies on online content made voluntarily available by crowds of people in response to news or blog articles published by official news agencies. We apply different aggregation levels (comment, person, discussion thread) and discuss the associated patterns of entropy change.
    Comment: 21 pages, 14 figures
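Under one natural reading of this setup (our assumption, not necessarily the paper's exact definitions: the order parameter as the mean of the E_i, and the entropy as the Shannon entropy of the fraction of +1 strings), the two quantities can be computed as:

```python
import math

def order_and_entropy(E):
    """Order parameter m = mean of the E_i = ±1 strings; entropy H =
    binary Shannon entropy of the fraction p of +1 strings, which is
    maximal (H = 1) when exactly N/2 strings have been flipped to +1."""
    N = len(E)
    p = E.count(1) / N
    m = sum(E) / N
    if p in (0.0, 1.0):
        H = 0.0  # fully ordered reading: zero entropy
    else:
        H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return m, H

# Initially every string carries the official reading E_i = -1; as
# conspiracy theorists flip strings to +1, entropy rises toward the
# doubt threshold at p = 1/2.
m0, H0 = order_and_entropy([-1] * 10)         # ordered: m = -1, H = 0
m1, H1 = order_and_entropy([-1] * 5 + [1] * 5)  # maximal doubt
```

Tracking (m, H) as comments accumulate is exactly the kind of entropy trajectory the abstract proposes as a penetration measure.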

    On the invisibility and impact of Robert Hooke’s theory of gravitation

    Robert Hooke’s theory of gravitation is a promising case study for probing the fruitfulness of Menachem Fisch’s insistence on the centrality of trading zone mediators for rational change in the history of science and mathematics. In 1679, Hooke proposed an innovative explanation of planetary motions to Newton’s attention. Until the correspondence with Hooke, Newton had embraced planetary models whereby planets move around the Sun because of the action of an ether filling the interplanetary space. Hooke’s model, instead, consisted in the idea that planets move in void space under the influence of a gravitational attraction directed toward the Sun. There is no doubt that the correspondence with Hooke allowed Newton to conceive a new explanation for planetary motions. This explanation was proposed by Hooke as a hypothesis that needed mathematical development and experimental confirmation. Hooke formulated his new model in a mathematical language which overlapped with, but did not coincide with, Newton’s; Newton developed Hooke’s hypothetical model into the theory of universal gravitation published in the Mathematical Principles of Natural Philosophy (1687). The nature of Hooke’s contributions to mathematized natural philosophy, however, was contested during his own lifetime and gave rise to negative evaluations until the last century. Hooke has often been contrasted to Newton as a practitioner rather than as a “scientist” and unfavorably compared to the eminent Lucasian Professor. Hooke’s correspondence with Newton seems to me an example of the phenomenon, discussed by Fisch in his philosophical works, of the invisibility in official historiography of “trading zone mediators,” namely, of those actors that play a crucial but not easily recognized role in promoting rational scientific framework change.

    On the Hierarchy of Distributed Majority Protocols


    Parallel Global Edge Switching for the Uniform Sampling of Simple Graphs with Prescribed Degrees

    The uniform sampling of simple graphs matching a prescribed degree sequence is an important tool in network science, e.g., to construct graph generators or null models. Here, the Edge Switching Markov Chain (ES-MC) is a common choice. Given an arbitrary simple graph with the required degree sequence, ES-MC carries out a large number of small changes, called edge switches, to eventually obtain a uniform sample. In practice, reasonably short runs efficiently yield approximately uniform samples. In this work, we study the problem of executing edge switches in parallel. We discuss parallelizations of ES-MC, but find that this approach suffers from complex dependencies between edge switches. For this reason, we propose the Global Edge Switching Markov Chain (G-ES-MC), an ES-MC variant with simpler dependencies. We show that G-ES-MC converges to the uniform distribution and design shared-memory parallel algorithms for both ES-MC and G-ES-MC. In an empirical evaluation, we provide evidence that G-ES-MC requires no more switches than ES-MC (and often fewer), and we demonstrate the efficiency and scalability of our parallel G-ES-MC implementation.
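A single edge switch in the classical ES-MC can be sketched as follows (a standard formulation; the paper's parallel G-ES-MC variant differs in how switches are proposed and how their dependencies are resolved):

```python
import random

def edge_switch(edges, adj, rng):
    """One ES-MC step: pick two edges {a,b} and {c,d} uniformly at
    random, propose the rewiring {a,c}, {b,d}, and accept only if the
    graph stays simple (no self-loops, no multi-edges). Every accepted
    switch preserves all vertex degrees by construction."""
    i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
    if i == j:
        return False
    (a, b), (c, d) = edges[i], edges[j]
    # Reject switches that would create a self-loop or a multi-edge.
    if a == c or b == d or c in adj[a] or d in adj[b]:
        return False
    adj[a].remove(b); adj[b].remove(a)
    adj[c].remove(d); adj[d].remove(c)
    adj[a].add(c); adj[c].add(a)
    adj[b].add(d); adj[d].add(b)
    edges[i], edges[j] = (a, c), (b, d)
    return True

# Demo: start from a 4-cycle; after many switches the degree sequence
# is unchanged and the graph is still simple.
edges = [(0, 1), (1, 3), (3, 2), (2, 0)]
adj = {0: {1, 2}, 1: {0, 3}, 2: {3, 0}, 3: {1, 2}}
rng = random.Random(0)
degrees = {v: len(nbrs) for v, nbrs in adj.items()}
for _ in range(200):
    edge_switch(edges, adj, rng)
```

The rejection branch is the source of the dependency problem the abstract mentions: whether a switch is legal depends on edges other threads may be rewiring concurrently, which is what motivates the simpler-dependency G-ES-MC design.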