
    Evolution of Social Power in Social Networks with Dynamic Topology

    The recently proposed DeGroot-Friedkin model describes the dynamical evolution of individual social power in a social network that holds opinion discussions on a sequence of different issues. This paper revisits that model and uses nonlinear contraction analysis, among other tools, to establish several novel results. First, we show that for a social network with constant topology, each individual's social power converges to its equilibrium value exponentially fast, whereas previous results concluded only asymptotic convergence. Second, when the network topology is dynamic (i.e., the relative interaction matrix may change between any two successive issues), we show that each individual exponentially forgets its initial social power. Specifically, individual social power depends only on the dynamic network topology, and initial (or perceived) social power is forgotten as a result of sequential opinion discussion. Last, we provide an explicit upper bound on an individual's social power as the number of issues discussed tends to infinity; this bound depends only on the network topology. Simulations are provided to illustrate our results.

    The work of Mengbin Ye, Brian D. O. Anderson, and Changbin Yu was supported by the Australian Research Council under Grants DP-130103610 and DP-160104500, by 111-Project D17019, by NSFC Projects 61385702 and 61761136005, and by Data61-CSIRO. The work of Mengbin Ye was supported by an Australian Government Research Training Program Scholarship. The work of Ji Liu and Tamer Başar was supported by the Office of Naval Research MURI Grant N00014-16-1-2710 and by NSF Grant CCF 11-11342. Recommended by Associate Editor C. M. Kellett.
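    The issue-to-issue dynamics described above can be sketched numerically. The interaction matrix below is a hypothetical example rather than one from the paper; the update is the standard DeGroot-Friedkin map, in which each individual's new self-weight is proportional to c_i/(1 - x_i), with c the dominant left eigenvector of the relative interaction matrix:

```python
import numpy as np

def dominant_left_eigvec(C):
    # Dominant left eigenvector of a row-stochastic C, normalised to sum to 1.
    w, V = np.linalg.eig(C.T)
    v = np.abs(np.real(V[:, np.argmax(np.real(w))]))
    return v / v.sum()

def df_map(x, c):
    # One issue of the DeGroot-Friedkin self-appraisal update:
    # new social power is proportional to c_i / (1 - x_i), renormalised.
    y = c / (1.0 - x)
    return y / y.sum()

# Hypothetical 4-individual relative interaction matrix
# (row-stochastic with zero diagonal).
C = np.array([[0.0, 0.4, 0.3, 0.3],
              [0.5, 0.0, 0.25, 0.25],
              [0.3, 0.3, 0.0, 0.4],
              [0.2, 0.5, 0.3, 0.0]])
c = dominant_left_eigvec(C)

x = np.full(4, 0.25)  # initial (perceived) social power
errs = []
for _ in range(60):
    x_next = df_map(x, c)
    errs.append(float(np.abs(x_next - x).max()))
    x = x_next

print(x)  # equilibrium social power vector
```

    Consistent with the exponential convergence result, the per-step changes recorded in `errs` shrink geometrically and the iterate settles to machine precision within a few dozen issues.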

    Is the Stack Distance Between Test Case and Method Correlated With Test Effectiveness?

    Mutation testing is a means to assess the effectiveness of a test suite, and its outcome is considered more meaningful than code coverage metrics. However, despite several optimizations, mutation testing requires significant computational effort and has not been widely adopted in industry. We therefore study in this paper whether test effectiveness can be approximated using a more lightweight approach. We hypothesize that a test case is more likely to detect faults in methods that are close to the test case on the call stack than in methods that the test case accesses indirectly through many other methods. Based on this hypothesis, we propose the minimal stack distance between test case and method as a new test measure, which expresses how close any test case comes to a given method, and study its correlation with test effectiveness. We conducted an empirical study with 21 open-source projects, comprising 1.8 million LOC in total, and show that a correlation exists between stack distance and test effectiveness. The correlation reaches a strength of up to 0.58. We further show that a classifier using the minimal stack distance along with additional easily computable measures can predict the mutation testing result of a method with 92.9% precision and 93.4% recall. Such a classifier can therefore be considered as a lightweight alternative to mutation testing, or as a preceding, less costly step. Comment: EASE 201
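    On a static call graph, the minimal stack distance reduces to a shortest-path computation from the test case to the method. A minimal sketch, with a hypothetical call graph (the names below are invented for illustration):

```python
from collections import deque

def minimal_stack_distance(call_graph, test, method):
    """Shortest number of calls from `test` to `method` in a call graph
    given as {caller: [callee, ...]}. Returns None if unreachable."""
    dist = {test: 0}
    queue = deque([test])
    while queue:
        node = queue.popleft()
        if node == method:
            return dist[node]
        for callee in call_graph.get(node, []):
            if callee not in dist:
                dist[callee] = dist[node] + 1
                queue.append(callee)
    return None

# Hypothetical call graph: the test calls a service, which calls two helpers.
call_graph = {
    "test_checkout": ["service_checkout"],
    "service_checkout": ["validate_cart", "charge_card"],
    "charge_card": ["log_payment"],
}
print(minimal_stack_distance(call_graph, "test_checkout", "charge_card"))  # → 2
```

    Under the paper's hypothesis, a small distance (here, 2) suggests the test is comparatively likely to detect faults in `charge_card`, whereas `log_payment` (distance 3) is reached only indirectly.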

    On the Analysis of the DeGroot-Friedkin Model with Dynamic Relative Interaction Matrices

    This paper analyses the DeGroot-Friedkin model for the evolution of individuals' social power in a social network whose topology varies dynamically (described by dynamic relative interaction matrices). The DeGroot-Friedkin model describes how individual social power (self-appraisal, self-weight) evolves as a network of individuals discusses opinions on a sequence of issues. We study dynamically changing relative interactions because interactions may change depending on the issue being discussed. Specifically, we study relative interaction matrices which vary periodically with respect to the issues. This may reflect a group of individuals, e.g. a government cabinet, that meets regularly to discuss a set of issues sequentially. It is shown that individuals' social powers admit a periodic solution. Initially, we study a social network which varies periodically between two relative interaction matrices, and then generalise to an arbitrary number of relative interaction matrices.

    This work was supported by the Australian Research Council (ARC) under grants DP-130103610 and DP-160104500, by the National Natural Science Foundation of China (grant 61375072), and by Data61-CSIRO (formerly NICTA). The work of Liu and Başar was supported in part by Office of Naval Research (ONR) MURI Grant N00014-16-1-2710, and in part by NSF under grant CCF 11-11342.
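    The periodic behaviour is easy to observe numerically. The two interaction matrices below are hypothetical examples, not taken from the paper; the update is the standard DeGroot-Friedkin map, applied with the matrices alternating from issue to issue:

```python
import numpy as np

def df_map(x, C):
    # DeGroot-Friedkin update for one issue under interaction matrix C:
    # c is the dominant left eigenvector of C; x_i^+ is proportional
    # to c_i / (1 - x_i), renormalised to sum to 1.
    w, V = np.linalg.eig(C.T)
    c = np.abs(np.real(V[:, np.argmax(np.real(w))]))
    c /= c.sum()
    y = c / (1.0 - x)
    return y / y.sum()

# Two hypothetical row-stochastic, zero-diagonal interaction matrices,
# applied alternately (odd-numbered issues use C1, even-numbered use C2).
C1 = np.array([[0.0, 0.6, 0.4],
               [0.5, 0.0, 0.5],
               [0.7, 0.3, 0.0]])
C2 = np.array([[0.0, 0.2, 0.8],
               [0.9, 0.0, 0.1],
               [0.4, 0.6, 0.0]])

x = np.full(3, 1/3)
for s in range(400):
    x = df_map(x, C1 if s % 2 == 0 else C2)

# After settling, the social power vector is periodic with period 2:
x_a = df_map(x, C1)    # state after the next C1-issue
x_b = df_map(x_a, C2)  # state after the following C2-issue
print(np.abs(x_b - x).max())  # near zero: back to the same point
```

    After the transient has died out, applying one full period of the switching sequence returns the self-appraisal vector to itself, which is the periodic solution the abstract refers to.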

    Modification of social dominance in social networks by selective adjustment of interpersonal weights

    According to the DeGroot-Friedkin model of a social network, an individual's social power evolves as the network discusses individual opinions over a sequence of issues. Under mild assumptions on the connectivity of the network, the social power of every individual converges to a constant nonnegative value as the number of issues discussed increases. If the network has a special topology, namely the “star topology”, then all social power accumulates with the individual at the centre of the star. This paper studies the strategic introduction of new individuals and/or interpersonal relationships into a social network with the star topology so as to reduce the social power of the centre individual; several strategies are proposed. For each strategy, we derive necessary and sufficient conditions on the strength of the new interpersonal relationships, based on local information, which ensure that the centre individual no longer has the greatest social power within the social network. Interpretations of these conditions reveal that the strategies are remarkably intuitive and that certain strategies are favourable compared to others, all of which is sociologically expected.

    The work of Ye, Anderson, and Yu was supported by the Australian Research Council (ARC) under grants DP-130103610 and DP-160104500, by the National Natural Science Foundation of China (grant 61375072), and by Data61-CSIRO. Ye was supported by an Australian Government Research Training Program (RTP) Scholarship. The work of Liu and Başar was supported in part by Office of Naval Research (ONR) MURI Grant N00014-16-1-2710, and in part by NSF under grant CCF 11-11342.
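    The accumulation of social power at the centre of a star can be reproduced with a short simulation. For a four-individual star where each leaf places all its interpersonal weight on the centre, the dominant left eigenvector works out to (1/2, 1/6, 1/6, 1/6); the code below is a minimal sketch of the standard DeGroot-Friedkin map, not of the paper's modification strategies:

```python
import numpy as np

def df_map(x, c):
    # DeGroot-Friedkin update: x_i^+ is proportional to c_i / (1 - x_i).
    y = c / (1.0 - x)
    return y / y.sum()

n = 4  # index 0 is the centre individual; 1..3 are the leaves
# Star topology: the centre weights the three leaves equally (1/3 each),
# every leaf weights only the centre, giving c = (1/2, 1/6, 1/6, 1/6).
c = np.array([0.5, 1/6, 1/6, 1/6])

x = np.full(n, 1/n)
for _ in range(300):
    x = df_map(x, c)

print(x)  # the centre holds almost all social power
```

    Unlike the constant-topology case, convergence to the autocratic configuration is only asymptotic, so the centre's power creeps toward 1 without reaching it in finitely many issues; the paper's strategies aim to break exactly this accumulation.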

    Request-based gossiping without deadlocks

    By the distributed averaging problem is meant the problem of computing the average value of a set of numbers possessed by the agents in a distributed network using only communication between neighboring agents. Gossiping is a well-known approach to the problem which seeks to iteratively arrive at a solution by allowing each agent to interchange information with at most one neighbor at each iterative step. Crafting a gossiping protocol which accomplishes this is challenging because gossiping is an inherently collaborative process which can lead to deadlocks unless careful precautions are taken to ensure that it does not. Many gossiping protocols are request-based, which means simply that a gossip between two agents will occur whenever one of the two agents accepts a request to gossip placed by the other. In this paper, we present three deterministic request-based protocols. We show by example that the first can deadlock. The second is guaranteed to avoid deadlocks by exploiting the idea of local ordering together with the notion of an agent's neighbor queue; the protocol requires the simplest queue updates, which provides an in-depth understanding of how local ordering and queue updates avoid deadlocks. It is shown that a third protocol, which uses a slightly more complicated queue update rule, can lead to significantly faster convergence; a worst-case bound on the convergence rate is provided.

    The work of Liu, Mou, and Morse was supported by the US Air Force Office of Scientific Research and the National Science Foundation. The work of Anderson was supported by the Australian Research Council's Discovery Project DP-110100538 and by National ICT Australia (NICTA). NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program. The work of Yu was supported by the Australian Research Council through a Queen Elizabeth II Fellowship and DP-110100538 and by the Overseas Expert Program of Shandong Province, China. The work of Anderson and Yu was also supported by the U.S. Air Force Research Laboratory Grant FA2386-10-1-4102.
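    The averaging primitive behind gossiping is easy to demonstrate. The sketch below is not one of the paper's three protocols: it uses a naive greedy pairing rule (an agent simply requests a gossip with its first idle neighbour), which suffices to show that each gossip preserves the network sum while driving all values to the average; the graph and initial values are assumed for illustration:

```python
import random

def gossip_round(values, edges, rng):
    # One round: visit agents in random order; an agent that has not yet
    # gossiped this round requests a gossip with one idle neighbour, and
    # the pair replace both of their values with the pairwise average.
    busy = set()
    order = list(range(len(values)))
    rng.shuffle(order)
    for i in order:
        if i in busy:
            continue
        for j in edges[i]:
            if j not in busy:
                avg = (values[i] + values[j]) / 2.0
                values[i] = values[j] = avg
                busy.update((i, j))
                break
    return values

rng = random.Random(7)
# Hypothetical 5-agent path graph 0-1-2-3-4.
edges = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
values = [10.0, 0.0, 4.0, 6.0, 0.0]  # network average is 4.0

for _ in range(200):
    gossip_round(values, edges, rng)

print(values)  # every agent close to 4.0
```

    Each pairwise average leaves the sum of all values unchanged, which is why gossiping solves the averaging problem; the deadlock question the paper addresses arises when these pairings must be negotiated deterministically by the agents themselves.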

    Bias-Correction Method in Bearing-Only Passive Localization

    In this paper, a novel analytical approach to approximating and correcting the bias in the 2D localization problem is proposed. The new method combines Taylor series and Jacobian matrices to determine the bias, and leads to an easily computed analytical bias expression. Importantly, we compare the proposed approach with a well-cited previous method using simulation data. Further, we apply our method to bearing-only localization algorithms. Monte Carlo simulation results demonstrate that the proposed method performs satisfactorily whenever the underlying geometry makes the localization problem well posed. Furthermore, the proposed method performs better than the comparison method and is effective over a larger area. Although the method is presented in detail for bearing-only localization algorithms, the analysis methodology is also valid for other kinds of localization algorithms.
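    The bias being corrected can be exhibited empirically, even without the analytical expression. The following Monte Carlo sketch (the geometry, noise level, and two-sensor triangulation are assumptions for illustration, not the paper's setup) estimates a target from noisy bearings and averages the estimates; the mean does not coincide with the true position because triangulation is a nonlinear function of the noisy bearings:

```python
import numpy as np

rng = np.random.default_rng(0)

def triangulate(s1, s2, th1, th2):
    # Intersect the two bearing rays s1 + t*d1 and s2 + u*d2 (2D).
    d1 = np.array([np.cos(th1), np.sin(th1)])
    d2 = np.array([np.cos(th2), np.sin(th2)])
    A = np.column_stack((d1, -d2))
    t, _ = np.linalg.solve(A, s2 - s1)
    return s1 + t * d1

s1, s2 = np.array([0.0, 0.0]), np.array([10.0, 0.0])
target = np.array([5.0, 8.0])
sigma = 0.05  # bearing-noise standard deviation in radians (assumed)

th1 = np.arctan2(target[1] - s1[1], target[0] - s1[0])
th2 = np.arctan2(target[1] - s2[1], target[0] - s2[0])

est = np.array([
    triangulate(s1, s2,
                th1 + sigma * rng.standard_normal(),
                th2 + sigma * rng.standard_normal())
    for _ in range(20000)
])
bias = est.mean(axis=0) - target  # systematic offset of the estimator
print(bias)
```

    An analytical method like the one proposed here aims to predict this offset from a Taylor expansion of the estimator, so it can be subtracted without running such a simulation.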

    Localization Bias Correction in n-Dimensional Space

    In previous work we proposed a method to determine the bias in localization algorithms that use 2 or 3 sensors, whose locations are already known, for targets in 2-dimensional space, by combining Taylor series and Jacobian matrices. In this paper we extend the bias-correction method to n-dimensional space with N sensors. To illustrate this approach, we analyze the proposed method in three situations using localization algorithms. Monte Carlo simulation results demonstrate that the proposed bias-correction method can correct the bias very well in most situations.

    Geometric Dilution of Localization and Bias-Correction Methods

    A particular geometric problem, the collinearity problem, which may prevent effective use of localization algorithms, is described in detail in this paper. Further analysis illustrates that methods for improving the estimates of localization algorithms can also be affected by the collinearity problem. We propose a novel approach to deal with the collinearity problem for a localization improvement method, the bias-correction method [1, 2, 3]. Compared with earlier work such as [4], the main feature of the proposed approach is that it takes the level of the measurement noise into consideration as a variable. Monte Carlo simulation results demonstrate the performance of the proposed method. Further simulations illustrate the influence of two factors on the effectiveness of the bias-correction method: the distance between sensors and the level of noise. Although the approach is mainly aimed at the bias-correction method, it is also valid for localization algorithms themselves, because the localization algorithms and the bias-correction method behave consistently.
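    The collinearity problem can be made concrete through the conditioning of the measurement geometry. In this sketch (a hypothetical two-sensor bearing-only setup, not the paper's), each row of the geometry matrix is the unit vector normal to a sensor-to-target bearing; when sensors and target are collinear, the rows coincide and the matrix becomes singular:

```python
import numpy as np

def geometry_matrix(sensors, target):
    # Rows are unit vectors normal to each sensor-to-target bearing;
    # this is the Jacobian of the bearing measurements up to 1/range scaling.
    rows = []
    for s in sensors:
        dx, dy = target - s
        r = np.hypot(dx, dy)
        rows.append([-dy / r, dx / r])
    return np.array(rows)

sensors = [np.array([0.0, 0.0]), np.array([10.0, 0.0])]
good = np.linalg.cond(geometry_matrix(sensors, np.array([5.0, 8.0])))
bad = np.linalg.cond(geometry_matrix(sensors, np.array([20.0, 0.0])))
print(good, bad)  # well-posed vs. (near-)singular collinear geometry
```

    A large (or infinite) condition number signals that small bearing errors translate into huge position errors, which is why both localization and bias correction degrade in collinear geometries.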

    Systematic bias correction in source localization

    A novel analytical approach is proposed to approximate and correct the bias in localization problems in n-dimensional space (n = 2 or 3) with N (N ≥ n) independently usable measurements (such as distance, bearing, or time difference of arrival (TDOA)).