173 research outputs found

    A contrasting look at self-organization in the Internet and next-generation communication networks

    This article examines contrasting notions of self-organization in the Internet and next-generation communication networks by reviewing in some detail recent evidence regarding several of the more popular attempts to explain prominent features of Internet structure and behavior as "emergent phenomena." In these examples, what might appear to the nonexpert as "emergent self-organization" in the Internet actually results from well-conceived (albeit perhaps ad hoc) design, with explanations that are mathematically rigorous, in agreement with engineering reality, and fully consistent with network measurements. These examples serve as concrete starting points from which networking researchers can assess whether or not explanations involving self-organization are relevant or appropriate in the context of next-generation communication networks, while also highlighting the main differences between approaches to self-organization that are rooted in engineering design vs. those inspired by statistical physics.

    Does Contributing Sequentially Increase the Level of Cooperation in Public Goods Games? An Experimental Investigation

    We run a series of experiments in which subjects have to choose their level of contribution to a pure public good. Our design differs from the standard public good game with respect to the decision procedure. Instead of deciding simultaneously in each round, subjects are randomly ordered in a sequence which differs from round to round. We compare sessions in which subjects can observe the exact contributions from earlier decisions ("sequential treatment with information") to sessions in which subjects decide sequentially but cannot observe earlier contributions ("sequential treatment without information"). Furthermore, we investigate the effect of group size on aggregate contributions. Our results indicate that contributing sequentially increases the level of contribution to the public good when subjects are informed about the contribution levels of lower ranked subjects. Moreover, we observe that earlier players in the sequence try to positively influence the contributions of subsequent decision makers by making a large contribution. Such behaviour is motivated by the belief that subsequent players will reciprocate by also making a large contribution.

    Mathematics and the Internet: A Source of Enormous Confusion and Great Potential

    Graph theory models the Internet mathematically, and a number of plausible and mathematically interesting network models for the Internet have been developed and studied. Simultaneously, Internet researchers have developed methodology to use real data to validate, or invalidate, proposed Internet models. The authors look at these parallel developments, particularly as they apply to scale-free network models of the preferential attachment type.

    The economics of the telethon: leadership, reciprocity and moral motivation

    We run a series of experiments in which subjects have to choose their level of contribution to a pure public good. The design differs from the standard public good game with respect to the decision procedure. Instead of deciding simultaneously in each round, subjects are randomly ordered in a sequence which differs from round to round. We compare sessions in which subjects can observe the exact contributions from earlier decisions ("sequential treatment with information") to sessions in which subjects decide sequentially but cannot observe earlier contributions ("sequential treatment without information"). The results indicate that sequentiality increases the level of contribution to the public good when subjects are informed about the contribution levels of lower ranked subjects, while sequentiality alone has no effect on contributions. Moreover, we observe that earlier players try to positively influence the contributions of subsequent decision makers in the sequence by making a large contribution. Such behaviour is motivated by the belief that subsequent players will reciprocate by also making a large contribution. We also discuss the effect of group size on aggregate contributions. Finally, we conceptualize a model where agents’ preferences incorporate a “weak” moral motivation element. The moral motivation is “weak” in the sense that contributors update their morally ideal level of contribution according to observed behaviours. This suggested qualification of rational contributors fits well with the patterns observed in the lab.

    Weak moral motivation leads to the decline of voluntary contributions

    This paper provides a general framework that accounts for the decay of the average contribution observed in most experiments on voluntary contributions to a public good. Each player balances her material utility loss from contributing with her psychological utility loss of deviating from her moral ideal. The novel and central idea of our model is that people's moral motivation is "weak": their judgement about what is the right contribution to a public good can evolve in the course of interactions, depending partly on observed past contributions and partly on an intrinsic "moral ideal". Under the assumption of weakly morally motivated agents, average voluntary contributions can decline with repetition of the game. Our model also explains other regularities observed in experiments, in particular the phenomenon of over-contributions compared to the Nash prediction and the so-called restart effect, and it is compatible with the conditional cooperation hypothesis.
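    The dynamics described above can be sketched in a toy simulation. The quadratic psychological loss, the cost parameter, and the updating weight below are illustrative assumptions, not the paper's calibrated model:

```python
import statistics

def simulate(intrinsic_ideals, rounds=10, cost=2.0, mu=1.0, alpha=0.3):
    """Toy weak-moral-motivation dynamics (assumed functional forms).

    Each round, agent i minimizes  cost * g + mu * (g - m_i)**2
    (material loss vs. deviation from her current moral ideal m_i),
    giving g_i = max(0, m_i - cost / (2 * mu)).  Motivation is "weak":
    the ideal then drifts partly toward the observed average contribution
    and partly back toward the agent's intrinsic ideal.
    """
    current = list(intrinsic_ideals)
    mean_per_round = []
    for _ in range(rounds):
        contribs = [max(0.0, m - cost / (2 * mu)) for m in current]
        mean_c = statistics.mean(contribs)
        mean_per_round.append(mean_c)
        # judgement: partly intrinsic, partly shaped by observed behaviour
        current = [alpha * m0 + (1 - alpha) * mean_c for m0 in intrinsic_ideals]
    return mean_per_round

means = simulate([16.0, 12.0, 8.0, 4.0])
```

    With these assumed parameters, average contributions start well above the Nash prediction of zero and decline monotonically with repetition, mirroring the over-contribution-plus-decay pattern the abstract describes.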

    More "normal" than normal: scaling distributions and complex systems

    One feature of many naturally occurring or engineered complex systems is tremendous variability in event sizes. To account for it, the behavior of these systems is often described using power law relationships or scaling distributions, which tend to be viewed as "exotic" because of their unusual properties (e.g., infinite moments). An alternate view is based on mathematical, statistical, and data-analytic arguments and suggests that scaling distributions should be viewed as "more normal than normal". In support of this latter view, which Mandelbrot has advocated for the last 40 years, we review in this paper some relevant results from probability theory and illustrate a powerful statistical approach for deciding whether the variability associated with observed event sizes is consistent with an underlying Gaussian-type (finite variance) or scaling-type (infinite variance) distribution. We contrast this approach with traditional model fitting techniques and discuss its implications for future modeling of complex systems.
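    One simple data-analytic diagnostic in this spirit (an illustrative sketch, not the paper's specific procedure) asks what fraction of the sum of squares is carried by the single largest observation: for finite-variance data this fraction vanishes as the sample grows, while for infinite-variance, scaling-type data it stays substantial:

```python
import random

def max_sum_ratio(xs, p=2.0):
    """Fraction of sum(|x|**p) contributed by the single largest term."""
    powered = [abs(x) ** p for x in xs]
    return max(powered) / sum(powered)

random.seed(0)
n = 100_000

# finite-variance (Gaussian-type) benchmark: exponential samples
light = [random.expovariate(1.0) for _ in range(n)]

# scaling-type samples: Pareto with tail index 1.5 (infinite variance),
# drawn by inverse transform; 1 - U lies in (0, 1], avoiding a zero base
alpha = 1.5
heavy = [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)]

r_light = max_sum_ratio(light)  # shrinks toward 0 as n grows
r_heavy = max_sum_ratio(heavy)  # stays bounded away from 0
```

    Tracking how this ratio behaves as the sample size grows is a far more robust discriminator between finite- and infinite-variance variability than fitting a straight line to a log-log plot.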

    Andrea's Got Two Boyfriends and Berniece at Bay (November 19-21, 2009)

    Program for Andrea's Got Two Boyfriends and Berniece at Bay (November 19-21, 2009)

    Towards a Theory of Scale-Free Graphs: Definition, Properties, and Implications (Extended Version)

    Although the ``scale-free'' literature is large and growing, it gives neither a precise definition of scale-free graphs nor rigorous proofs of many of their claimed properties. In fact, it is easily shown that the existing theory has many inherent contradictions and verifiably false claims. In this paper, we propose a new, mathematically precise, and structural definition of the extent to which a graph is scale-free, and prove a series of results that recover many of the claimed properties while suggesting the potential for a rich and interesting theory. With this definition, scale-free (or its opposite, scale-rich) is closely related to other structural graph properties such as various notions of self-similarity (or respectively, self-dissimilarity). Scale-free graphs are also shown to be the likely outcome of random construction processes, consistent with the heuristic definitions implicit in existing random graph approaches. Our approach clarifies much of the confusion surrounding the sensational qualitative claims in the scale-free literature, and offers rigorous and quantitative alternatives.
    Comment: 44 pages, 16 figures. The primary version is to appear in Internet Mathematics (2005).
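    The structural measure underlying this definition is, as we read it, s(g) = Σ_{(i,j)∈E} d_i d_j, which is large exactly when high-degree nodes attach to other high-degree nodes; crucially, graphs with an identical degree sequence can differ in s. A minimal sketch (the two example graphs are our own):

```python
from collections import Counter

def s_metric(edges):
    """s(g) = sum over edges (i, j) of deg(i) * deg(j)."""
    deg = Counter()
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return sum(deg[i] * deg[j] for i, j in edges)

# Two graphs sharing the degree sequence (3, 2, 2, 2, 1):
# in the first, the hub's neighbors themselves have high degree;
# in the second, one hub edge is "spent" on the degree-1 node.
g_hubby = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)]
g_spread = [(0, 1), (0, 2), (0, 4), (1, 3), (2, 3)]
```

    Ranking all graphs with a fixed degree sequence by s (normalized against its maximum over that sequence) gives a quantitative notion of how "scale-free" versus "scale-rich" a particular graph is.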

    Understanding Internet topology: principles, models, and validation

    Building on a recent effort that combines a first-principles approach to modeling router-level connectivity with a more pragmatic use of statistics and graph theory, we show in this paper that for the Internet, an improved understanding of its physical infrastructure is possible by viewing the physical connectivity as an annotated graph that delivers raw connectivity and bandwidth to the upper layers in the TCP/IP protocol stack, subject to practical constraints (e.g., router technology) and economic considerations (e.g., link costs). More importantly, by relying on data from Abilene, a Tier-1 ISP, and the Rocketfuel project, we provide empirical evidence in support of the proposed approach and its consistency with networking reality. To illustrate its utility, we: 1) show that our approach provides insight into the origin of high variability in measured or inferred router-level maps; 2) demonstrate that it easily accommodates the incorporation of additional objectives of network design (e.g., robustness to router failure); and 3) discuss how it complements ongoing community efforts to reverse-engineer the Internet.
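    The "annotated graph" view can be illustrated with a toy feasibility check (node names, capacities, and bandwidths below are hypothetical): router technology caps the total bandwidth a node can switch, so a node may carry many low-speed links or a few high-speed ones, but not both:

```python
def feasible(capacity, link_bandwidths):
    """Toy router-technology constraint: the bandwidths of a node's
    incident links must fit within its total switching capacity."""
    return sum(link_bandwidths) <= capacity

# hypothetical annotated nodes: (switching capacity, incident link bandwidths)
routers = {
    "access": (10.0, [1.0] * 8),          # many low-speed links: feasible
    "core": (40.0, [10.0, 10.0, 10.0]),   # few high-speed links: feasible
    "implausible": (10.0, [10.0, 10.0]),  # high degree AND high speed: not
}
verdict = {name: feasible(cap, bws) for name, (cap, bws) in routers.items()}
```

    Under this kind of constraint, high-degree nodes are pushed to the low-bandwidth network edge while the high-bandwidth core stays sparsely connected, which is one source of the high variability seen in measured router-level maps.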