
    Minority Becomes Majority in Social Networks

    It is often observed that agents tend to imitate the behavior of their neighbors in a social network. This imitating behavior might lead to the strategic decision of adopting a public behavior that differs from what the agent believes is the right one, and this can subvert the behavior of the population as a whole. In this paper, we consider the case in which agents express preferences over two alternatives and model social pressure with the majority dynamics: at each step an agent is selected and her preference is replaced by the majority of the preferences of her neighbors. In case of a tie, the agent does not change her current preference. A profile of the agents' preferences is stable if the preference of each agent coincides with the preference of at least half of her neighbors (thus, the system is in equilibrium). We ask whether there are network topologies that are robust to social pressure. That is, we ask if there are graphs in which the majority of preferences in an initial profile always coincides with the majority of the preferences in all stable profiles reachable from that profile. We completely characterize the graphs with this robustness property by showing that this is possible only if the graph has no edges, is a clique, or is very close to a clique. In other words, except for this handful of graphs, every graph admits at least one initial profile of preferences in which the majority dynamics can subvert the initial majority. We also show that deciding whether a graph admits a minority that becomes a majority is NP-hard when the minority size is at most 1/4 of the social network size. Comment: To appear in WINE 201
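
    A minimal sketch (not from the paper) of the asynchronous majority dynamics described above; the adjacency-list representation, the stability check, and the toy star-graph example are illustrative assumptions:

```python
import random

def majority_step(adj, prefs, agent):
    """Replace the agent's preference with the majority preference of her
    neighbors; on a tie the agent keeps her current preference."""
    ones = sum(prefs[v] for v in adj[agent])
    zeros = len(adj[agent]) - ones
    if ones > zeros:
        return 1
    if zeros > ones:
        return 0
    return prefs[agent]                     # tie: keep the current preference

def is_stable(adj, prefs):
    """Stable profile: every agent agrees with at least half of her neighbors."""
    return all(2 * sum(prefs[v] == prefs[u] for v in neigh) >= len(neigh)
               for u, neigh in adj.items())

def run_until_stable(adj, prefs, max_steps=10_000, seed=0):
    """Asynchronous majority dynamics: update one randomly chosen agent per step."""
    rng, prefs, agents = random.Random(seed), dict(prefs), list(adj)
    for _ in range(max_steps):
        if is_stable(adj, prefs):
            break
        u = rng.choice(agents)
        prefs[u] = majority_step(adj, prefs, u)
    return prefs

# Toy star graph: depending on the update order, the centre's minority
# preference (1) may spread to the leaves and subvert the initial majority (0).
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
prefs = {0: 1, 1: 0, 2: 0, 3: 0, 4: 0}
print(run_until_stable(adj, prefs))
```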

    Majority Dynamics and Aggregation of Information in Social Networks

    Consider n individuals who, by popular vote, choose among q >= 2 alternatives, one of which is "better" than the others. Assume that each individual votes independently at random, and that the probability of voting for the better alternative is larger than the probability of voting for any other. It follows from the law of large numbers that a plurality vote among the n individuals would result in the correct outcome, with probability approaching one exponentially quickly as n tends to infinity. Our interest in this paper is in a variant of the process above where, after forming their initial opinions, the voters update their decisions based on some interaction with their neighbors in a social network. Our main example is "majority dynamics", in which each voter adopts the most popular opinion among its friends. The interaction repeats for some number of rounds and is then followed by a population-wide plurality vote. The question we tackle is that of "efficient aggregation of information": in which cases is the better alternative chosen with probability approaching one as n tends to infinity? Conversely, for which sequences of growing graphs does aggregation fail, so that the wrong alternative gets chosen with probability bounded away from zero? We construct a family of examples in which interaction prevents efficient aggregation of information, and give a condition on the social network which ensures that aggregation occurs. For the case of majority dynamics we also investigate the question of unanimity in the limit. In particular, if the voters' social network is an expander graph, we show that if the initial population is sufficiently biased towards a particular alternative then that alternative will eventually become the unanimous preference of the entire population.Comment: 22 page
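
    A hedged sketch of the voting-plus-majority-dynamics pipeline in this abstract; the synchronous rounds, the tie-breaking rule that keeps the current opinion, and the cycle-graph example are assumptions made here, not details from the paper:

```python
import random
from collections import Counter

def initial_votes(n, q, p_better, rng):
    """Each voter independently picks alternative 0 (the 'better' one) with
    probability p_better, otherwise one of the other q-1 alternatives uniformly."""
    return [0 if rng.random() < p_better else rng.randrange(1, q) for _ in range(n)]

def majority_round(adj, votes):
    """Synchronous round of majority dynamics: each voter adopts the most popular
    opinion among its friends (keeping its current opinion on a tie)."""
    new = list(votes)
    for u, neigh in enumerate(adj):
        if not neigh:
            continue
        counts = Counter(votes[v] for v in neigh)
        best = max(counts.values())
        winners = {alt for alt, c in counts.items() if c == best}
        new[u] = votes[u] if votes[u] in winners else min(winners)
    return new

def plurality_after_dynamics(adj, q=2, p_better=0.6, rounds=5, seed=1):
    """Run a few rounds of majority dynamics, then take a population-wide plurality vote."""
    rng = random.Random(seed)
    votes = initial_votes(len(adj), q, p_better, rng)
    for _ in range(rounds):
        votes = majority_round(adj, votes)
    return Counter(votes).most_common(1)[0][0]

# Cycle graph on 10 voters; aggregation succeeds if the winner is alternative 0.
adj = [[(i - 1) % 10, (i + 1) % 10] for i in range(10)]
print(plurality_after_dynamics(adj))
```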

    Estimation of a probability in inverse binomial sampling under normalized linear-linear and inverse-linear loss

    Sequential estimation of the success probability p in inverse binomial sampling is considered in this paper. For any estimator \hat{p}, its quality is measured by the risk associated with normalized loss functions of linear-linear or inverse-linear form. These functions are possibly asymmetric, with arbitrary slope parameters a and b for \hat{p} < p and \hat{p} > p respectively. Interest in these functions is motivated by their significance and potential uses, which are briefly discussed. Estimators are given for which the risk has an asymptotic value as p tends to 0, and which guarantee that, for any p in (0,1), the risk is lower than its asymptotic value. This allows selecting the required number of successes, r, to meet a prescribed quality irrespective of the unknown p. In addition, the proposed estimators are shown to be approximately minimax when a/b does not deviate too much from 1, and asymptotically minimax as r tends to infinity when a=b. Comment: 4 figure
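
    Since the abstract does not spell out the proposed estimators or the exact normalization, the sketch below only illustrates the sampling scheme and a Monte Carlo risk estimate, using Haldane's classical estimator (r-1)/(n-1) and a p-normalized linear-linear loss purely as stand-ins:

```python
import random

def inverse_binomial_sample(p, r, rng):
    """Draw Bernoulli(p) trials until r successes are observed; return the
    total number of trials n (the negative-binomial stopping time)."""
    trials, successes = 0, 0
    while successes < r:
        trials += 1
        if rng.random() < p:
            successes += 1
    return trials

def linear_linear_loss(p_hat, p, a, b):
    """Linear-linear loss: slope a for underestimation (p_hat < p), slope b for
    overestimation; dividing by p is an illustrative normalization, not
    necessarily the one used in the paper."""
    return (a * (p - p_hat) if p_hat < p else b * (p_hat - p)) / p

def simulate_risk(p=0.05, r=20, a=1.0, b=1.0, n_sim=20_000, seed=0):
    """Monte Carlo estimate of the risk of Haldane's estimator (r-1)/(n-1)
    under the loss above."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sim):
        n = inverse_binomial_sample(p, r, rng)
        p_hat = (r - 1) / (n - 1)
        total += linear_linear_loss(p_hat, p, a, b)
    return total / n_sim

print(simulate_risk())
```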

    Revealing a signaling role of phytosphingosine-1-phosphate in yeast

    Perturbing metabolic systems of bioactive sphingolipids with a genetic approach
    Multiple types of “omics” data collected from the system
    Systems approach for integrating multiple “omics” information
    Predicting signal transduction information flow: lipid; TF activation; gene expression

    Minding impacting events in a model of stochastic variance

    We introduce a generalisation of the well-known ARCH process, widely used for generating uncorrelated stochastic time series with long-term non-Gaussian distributions and long-lasting correlations in the (instantaneous) standard deviation exhibiting a clustering profile. Specifically, inspired by the fact that in a variety of systems impacting events are hardly forgotten, we split the process into two different regimes: a first one for regular periods, where the average volatility of the fluctuations within a certain period of time is below a certain threshold, and another one when the local standard deviation exceeds it. In the former situation we use standard rules for heteroscedastic processes, whereas in the latter case the system starts recalling past values that surpassed the threshold. Our results show that for appropriate parameter values the model is able to provide fat-tailed probability density functions and strong persistence of the instantaneous variance, characterised by values of the Hurst exponent greater than 0.8, which are ubiquitous features in complex systems. Comment: 18 pages, 5 figures, 1 table. To be published in PLoS on
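
    The abstract does not give the precise recall rule, so the following is only an illustrative two-regime ARCH(1)-style generator: below the volatility threshold it uses the standard recursion, above it the variance is driven by a randomly recalled past value that itself exceeded the threshold. All parameter values and the recall mechanism are assumptions:

```python
import math
import random

def two_regime_arch(n, a0=0.1, a1=0.4, threshold=1.5, window=20, seed=0):
    """Illustrative two-regime ARCH(1)-style generator (not the paper's exact
    model): while the local average volatility stays below `threshold`, the
    variance follows the usual ARCH(1) recursion; once it exceeds the
    threshold, the recursion draws on a recalled past impacting event."""
    rng = random.Random(seed)
    x = [0.0]
    extremes = []                              # past values above the threshold
    for t in range(1, n):
        recent = x[max(0, t - window):t]
        local_vol = sum(abs(v) for v in recent) / len(recent)
        if local_vol <= threshold or not extremes:
            s2 = a0 + a1 * x[t - 1] ** 2       # standard ARCH(1) recursion
        else:
            recalled = rng.choice(extremes)    # recall an impacting past event
            s2 = a0 + a1 * recalled ** 2
        xt = math.sqrt(s2) * rng.gauss(0.0, 1.0)
        if abs(xt) > threshold:
            extremes.append(xt)
        x.append(xt)
    return x

series = two_regime_arch(5_000)
print(max(abs(v) for v in series))
```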

    A straightforward multiallelic significance test for the Hardy-Weinberg equilibrium law

    Much forensic inference based upon DNA evidence is made assuming Hardy-Weinberg Equilibrium (HWE) for the genetic loci being used. Several statistical tests to detect and measure deviation from HWE have been devised, and their limitations become more obvious when testing for deviation within multiallelic DNA loci. The most popular methods, the Chi-square and Likelihood-ratio tests, are based on asymptotic results and cannot guarantee a good performance in the presence of low-frequency genotypes. Since the dimension of the parameter space increases quadratically with the number of alleles, some authors suggest applying sequential methods, where the multiallelic case is reformulated as a sequence of “biallelic” tests. However, in this approach it is not obvious how to assess the overall evidence for the original hypothesis, nor is it clear how to establish the significance level for its acceptance/rejection. In this work, we introduce a straightforward method for the multiallelic HWE test which overcomes the aforementioned issues of sequential methods. The core theory for the proposed method is given by the Full Bayesian Significance Test (FBST), an intuitive Bayesian approach which does not assign positive probabilities to zero-measure sets when testing sharp hypotheses. We compare FBST performance to Chi-square, Likelihood-ratio and Markov chain tests in three numerical experiments. The results suggest that FBST is a robust and high-performance method for the HWE test, even in the presence of several alleles and small sample sizes.
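
    For reference, the classical Chi-square goodness-of-fit test that the abstract contrasts with FBST can be written in a few lines; this is a generic sketch with an invented genotype-count input format, not the FBST procedure proposed by the authors:

```python
from itertools import combinations_with_replacement
from collections import Counter

def hwe_chi_square(genotype_counts):
    """Chi-square goodness-of-fit test for HWE at a multiallelic locus.
    `genotype_counts` maps unordered allele pairs, e.g. ('A', 'B'), to observed
    counts.  Returns the statistic and its degrees of freedom k(k-1)/2 for k
    alleles; the p-value would need a chi-square CDF (e.g. scipy.stats),
    omitted here."""
    n = sum(genotype_counts.values())
    allele_counts = Counter()                      # allele frequencies from genotypes
    for (a, b), c in genotype_counts.items():
        allele_counts[a] += c
        allele_counts[b] += c
    freqs = {a: c / (2 * n) for a, c in allele_counts.items()}
    alleles = sorted(freqs)
    chi2 = 0.0
    for a, b in combinations_with_replacement(alleles, 2):
        expected = n * (freqs[a] ** 2 if a == b else 2 * freqs[a] * freqs[b])
        observed = genotype_counts.get((a, b), 0)
        if a != b:
            observed += genotype_counts.get((b, a), 0)
        if expected > 0:
            chi2 += (observed - expected) ** 2 / expected
    k = len(alleles)
    return chi2, k * (k - 1) // 2

counts = {('A', 'A'): 30, ('A', 'B'): 55, ('B', 'B'): 15}
print(hwe_chi_square(counts))
```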

    To add or not to add a new treatment arm to a multiarm study: A decision-theoretic framework.

    Multiarm clinical trials, which compare several experimental treatments against control, are frequently recommended due to their efficiency gain. In practice, all potential treatments may not be ready to be tested in a phase II/III trial at the same time. It has become appealing to allow new treatment arms to be added into on-going clinical trials using a "platform" trial approach. To the best of our knowledge, many aspects of when to add arms to an existing trial have not been explored in the literature. Most works on adding arm(s) assume that a new arm is opened whenever a new treatment becomes available. This strategy may prolong the overall duration of a study or cause a reduction in the marginal power for each hypothesis if the adaptation is not well accommodated. Within a two-stage trial setting, we propose a decision-theoretic framework to investigate whether or not to add a new treatment arm based on the observed stage-one treatment responses. To account for the different prospects of multiarm studies, we define utility in two different ways: one for a trial that aims to maximise the number of rejected hypotheses; the other for a trial that would declare a success when at least one hypothesis is rejected from the study. Our framework shows that it is not always optimal to add a new treatment arm to an existing trial. We illustrate the framework with a case study based on a completed trial on knee osteoarthritis.
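
    The framework itself is not reproduced in the abstract, but the two utility definitions can be illustrated with a toy calculation; all power values below are invented, and independence of the tests is assumed purely for illustration:

```python
def utility_num_rejections(powers):
    """Utility 1: expected number of rejected hypotheses, assuming (for this
    illustration only) independent tests with the given marginal powers."""
    return sum(powers)

def utility_any_rejection(powers):
    """Utility 2: probability that at least one hypothesis is rejected,
    under the same independence assumption."""
    prob_none = 1.0
    for p in powers:
        prob_none *= (1.0 - p)
    return 1.0 - prob_none

def decide_add_arm(current_powers, powers_if_added, utility):
    """Compare the chosen utility with and without the new arm; adding an arm
    typically dilutes per-arm power, which is why it is not always optimal."""
    return utility(powers_if_added) > utility(current_powers)

# Hypothetical stage-one estimates: adding a third arm lowers each arm's power.
current = [0.80, 0.75]
if_added = [0.60, 0.55, 0.45]
print(decide_add_arm(current, if_added, utility_num_rejections))  # True: 1.60 > 1.55
print(decide_add_arm(current, if_added, utility_any_rejection))   # False: 0.901 < 0.95
```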