
    How to choose the most appropriate centrality measure?

    We propose a new method to select the most appropriate network centrality measure based on the user's opinion of how such a measure should behave on a set of simple graphs. The method consists of: (1) forming a set F of candidate measures; (2) generating a sequence of sufficiently simple graphs that distinguish all measures in F on some pairs of nodes; (3) compiling a survey with questions on comparing the centrality of test nodes; (4) completing this survey, which provides a centrality measure consistent with all user responses. The developed algorithms make it possible to implement this approach for any finite set F of measures. This paper presents its realization for a set of 40 centrality measures. The proposed method, called culling, can be used for rapid analysis or combined with a normative approach by compiling a survey on the subset of measures that satisfy certain normative conditions (axioms). In the present study, the latter was done for the subsets determined by the Self-consistency or Bridge axioms. Comment: 26 pages, 1 table, 1 algorithm, 8 figures
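The elimination loop in step (4) can be sketched directly. This is a minimal illustration, not the paper's implementation: the two stand-in measures (degree and Katz) and the toy star graph are assumptions, whereas the paper's realization works with 40 candidate measures.

```python
import numpy as np

# Two stand-in centrality measures (hypothetical members of the candidate set F).
def degree_centrality(A):
    return A.sum(axis=1)

def katz_centrality(A, alpha=0.1):
    # Katz centrality: row sums of the resolvent kernel (I - alpha*A)^(-1).
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))

def cull(measures, A, u, v, user_says_u_ge_v):
    """Keep only the measures consistent with the user's answer to
    'is node u at least as central as node v in graph A?'."""
    return {name: m for name, m in measures.items()
            if (m(A)[u] >= m(A)[v]) == user_says_u_ge_v}

# Star graph with center 1: both candidates rank the center above a leaf,
# so answering "yes" to "is 1 at least as central as 0?" culls nothing,
# while answering "no" would eliminate both.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
measures = {"degree": degree_centrality, "katz": katz_centrality}
survivors = cull(measures, A, 1, 0, True)
print(sorted(survivors))
```

In the actual method the test graphs are chosen so that each question splits the surviving set, which is what makes the survey short.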

    Selection of Centrality Measures Using Self-Consistency and Bridge Axioms

    We consider several families of network centrality measures induced by graph kernels, which include some well-known measures and many new ones. The Self-consistency and Bridge axioms, which appeared earlier in the literature, are closely related to certain kernels and to one of the families. We obtain a necessary and sufficient condition for Self-consistency and a sufficient condition for the Bridge axiom, indicate specific measures that satisfy these axioms, and show that under some additional conditions they are incompatible. PageRank centrality applied to undirected networks violates most of the conditions under study and has a property that, according to some authors, is "hard to imagine" for a centrality measure. We explain this phenomenon. Adopting the Self-consistency or Bridge axiom leads to a drastic reduction in survey time in the culling method designed to select the most appropriate centrality measures. Comment: 23 pages, 5 figures. A reworked version
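As a concrete illustration of a kernel-induced centrality (a generic sketch, not one of the specific families studied in the paper), one can take the communicability kernel exp(A) of an undirected graph and read off row sums as centralities:

```python
import numpy as np

def communicability_kernel(A):
    # Matrix exponential exp(A), computed via the spectral decomposition
    # of the symmetric adjacency matrix of an undirected graph.
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(w)) @ V.T

def kernel_centrality(K):
    # One simple kernel-induced measure: row sums of the kernel,
    # i.e. the "total communicability" of each node.
    return K.sum(axis=1)

# Path graph 0-1-2: the middle node should come out most central.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
c = kernel_centrality(communicability_kernel(A))
print(int(np.argmax(c)))  # → 1
```

Different kernels (resolvent, heat, forest, and so on) plugged into the same scheme yield different measures, which is the sense in which a kernel induces a family.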

    The position value as a centrality measure in social networks

    The position value, introduced by Meessen (1988), is a solution concept for cooperative games in which the value assigned to a player depends on the value of the connections or links he has with other players. This concept has been studied by Borm et al. (1992) and characterised by Slikker (2005). In this paper, we analyse the position value from the point of view of the typical properties of a measure of centrality in a social network. We extend the analysis already developed in Gomez et al. (2003) for the Myerson centrality measure, where the symmetric effect on the centralities of the end nodes of an added or removed edge is a fundamental part of its characterisation. The Position centrality measure, however, unlike the Myerson centrality measure, responds in a more versatile way to such additions or removals. After studying the aforementioned properties, we focus on the analysis and characterisation of the Position attachment centrality given by the position value when the underlying game is the attachment game. Some comparisons are made with the attachment centrality introduced by Skibski et al. (2019).
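The definition can be sketched for a tiny network: compute the Shapley value of each link in a game defined on sets of links, then give each node half the Shapley value of every incident link. The link game used below (number of node pairs connected) is a hypothetical stand-in, not the attachment game studied in the paper:

```python
from itertools import combinations
from math import factorial
from collections import Counter

def shapley_values(links, v):
    # Exact Shapley value of each link in the link game v, by enumeration.
    n = len(links)
    phi = {l: 0.0 for l in links}
    for l in links:
        rest = [m for m in links if m != l]
        for k in range(len(rest) + 1):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(rest, k):
                phi[l] += w * (v(set(S) | {l}) - v(set(S)))
    return phi

def position_value(nodes, links, v):
    # Each node receives half the Shapley value of every incident link.
    phi = shapley_values(links, v)
    return {i: 0.5 * sum(phi[l] for l in links if i in l) for i in nodes}

def connected_pairs(link_set):
    # Hypothetical link game: number of node pairs joined by the links,
    # computed with a small union-find over the induced graph.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in link_set:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    sizes = Counter(find(x) for x in list(parent))
    return sum(s * (s - 1) // 2 for s in sizes.values())

# Path 0-1-2: the middle node holds both links and receives twice the
# centrality of each end node.
pv = position_value([0, 1, 2], [(0, 1), (1, 2)], connected_pairs)
print(pv)  # {0: 0.75, 1: 1.5, 2: 0.75}
```

Enumeration over link subsets is exponential, so this sketch is only feasible for very small networks; it does, however, show how the value of a link, rather than of a coalition of players, drives the resulting centrality.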

    Essays on Learning in Social Networks

    Over the past few years, online social networks have become nearly ubiquitous, reshaping our social interactions as at no other point in history. The preeminent aspect of this social media revolution is arguably an almost complete transformation of the ways in which we acquire, process, store, and use information. In view of the evolving nature of social networks and their increasing complexity, the development of formal models of social learning is imperative for a better understanding of the role of social networks in phenomena such as opinion formation, information aggregation, and coordination. This thesis takes a step in this direction by introducing and analyzing novel models of learning and coordination over networks. In particular, we provide answers to the following questions regarding a group of individuals who interact over a social network: 1) Do repeated communications between individuals with different subjective beliefs and pieces of information about a common true state lead them to eventually reach an agreement? 2) Do the individuals efficiently aggregate, through their social interactions, the information that is dispersed throughout the society? 3) And if so, how long does it take the individuals to aggregate the dispersed information and reach an agreement? This thesis provides answers to these questions given three different assumptions on the individuals' behavior in response to new information. We start by studying the behavior of a group of individuals who are fully rational and are only concerned with discovering the truth. We show that communications between rational individuals with access to complementary pieces of information eventually direct everyone to discover the truth. Yet in spite of its axiomatic appeal, fully rational agent behavior may not be a realistic assumption when dealing with large societies and complex networks, due to the extreme computational complexity of Bayesian inference.
Motivated by this observation, we next explore the implications of bounded rationality by introducing biases in the way agents interpret the opinions of others, while at the same time maintaining the assumption that agents interpret their private observations rationally. Our analysis yields the result that, when faced with overwhelming evidence in favor of the truth, even biased agents will eventually learn to discover the truth. We further show that the rate of learning has a simple analytical characterization in terms of the relative entropy of agents' signal structures and their eigenvector centralities, and we use this characterization to perform comparative analysis. Finally, in the last chapter of the thesis, we introduce and analyze a novel model of opinion formation in which agents not only seek to discover the truth but also have the tendency to act in conformity with the rest of the population. Preference for conformity is relevant in scenarios ranging from participation in popular movements and following fads to trading in the stock market. We argue that myopic agents who value conformity do not necessarily fully aggregate the dispersed information; nonetheless, we prove that examples of the failure of information aggregation are rare in a precise sense.
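The two ingredients of that rate characterization can be computed directly. The way they are combined below, a centrality-weighted sum of relative entropies, is only a schematic reading of the abstract, not the thesis's exact formula, and the three-agent network and signal structures are invented for illustration:

```python
import numpy as np

def eigenvector_centrality(A, iters=500):
    # Power iteration on the (nonnegative) influence matrix.
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= x.sum()
    return x

def relative_entropy(p, q):
    # KL divergence D(p || q) between two discrete signal distributions.
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

# Three agents on a triangle; agent i's signal structure is a pair of
# likelihood vectors, one per candidate state of the world.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
v = eigenvector_centrality(A)

signals = [([0.6, 0.4], [0.4, 0.6]),
           ([0.7, 0.3], [0.3, 0.7]),
           ([0.5, 0.5], [0.5, 0.5])]  # the last agent is uninformative

# Schematic rate: centrality-weighted relative entropy of signal structures.
rate = sum(v[i] * relative_entropy(p, q) for i, (p, q) in enumerate(signals))
print(rate)
```

The qualitative point survives the simplification: an uninformative agent contributes nothing regardless of position, and a given signal structure matters more when held by a more central agent.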

    The Importance of Social and Government Learning in Ex Ante Policy Evaluation

    We provide two methodological insights on ex ante policy evaluation for macro models of economic development. First, we show that the problems of parameter instability and lack of behavioral constancy can be overcome by considering learning dynamics. Hence, instead of defining social constructs as fixed exogenous parameters, we represent them through stable functional relationships such as social norms. Second, we demonstrate how agent computing can be used for this purpose. By deploying a model of policy prioritization with endogenous government behavior, we estimate the performance of different policy regimes. We find that, while strictly adhering to policy recommendations increases efficiency, the nature of such recommendations has a bigger effect. In other words, while it is true that lack of discipline is detrimental to prescription outcomes (a common defense of failed recommendations), it is more important that such prescriptions consider the systemic and adaptive nature of the policymaking process (something neglected by traditional technocratic advice).