
    A contrasting look at self-organization in the Internet and next-generation communication networks

    This article examines contrasting notions of self-organization in the Internet and next-generation communication networks by reviewing in some detail recent evidence regarding several of the more popular attempts to explain prominent features of Internet structure and behavior as "emergent phenomena." In these examples, what might appear to the nonexpert as "emergent self-organization" in the Internet actually results from well-conceived (albeit perhaps ad hoc) design, with explanations that are mathematically rigorous, in agreement with engineering reality, and fully consistent with network measurements. These examples serve as concrete starting points from which networking researchers can assess whether or not explanations involving self-organization are relevant or appropriate in the context of next-generation communication networks, while also highlighting the main differences between approaches to self-organization that are rooted in engineering design and those inspired by statistical physics.

    Mathematics and the Internet: A Source of Enormous Confusion and Great Potential

    Graph theory is used to model the Internet mathematically, and a number of plausible, mathematically interesting network models for the Internet have been developed and studied. Simultaneously, Internet researchers have developed methodology to use real data to validate, or invalidate, proposed Internet models. The authors look at these parallel developments, particularly as they apply to scale-free network models of the preferential attachment type.
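
    To make "scale-free network models of the preferential attachment type" concrete, the following is a minimal Python sketch of such a generative model. It is an illustrative Barabasi-Albert-style construction with arbitrary parameters, not the specific models examined by the authors.

        import random
        from collections import Counter

        def preferential_attachment_graph(n, m, seed=0):
            """Grow a graph by preferential attachment: each new node attaches
            m edges to existing nodes chosen with probability proportional to
            their current degree (an illustrative Barabasi-Albert-style model)."""
            rng = random.Random(seed)
            edges = []
            # Start from a small clique on m + 1 nodes so every node has degree >= m.
            for u in range(m + 1):
                for v in range(u + 1, m + 1):
                    edges.append((u, v))
            # 'repeated' holds one entry per edge endpoint, so drawing uniformly
            # from it samples nodes in proportion to their current degree.
            repeated = [v for e in edges for v in e]
            for new in range(m + 1, n):
                targets = set()
                while len(targets) < m:
                    targets.add(rng.choice(repeated))
                for v in targets:
                    edges.append((new, v))
                    repeated.extend((new, v))
            return edges

        edges = preferential_attachment_graph(n=1000, m=2)
        degrees = Counter(v for e in edges for v in e)
        print("max degree:", max(degrees.values()))  # a heavy right tail is typical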

    More "normal" than normal: scaling distributions and complex systems

    One feature of many naturally occurring or engineered complex systems is tremendous variability in event sizes. To account for it, the behavior of these systems is often described using power-law relationships or scaling distributions, which tend to be viewed as "exotic" because of their unusual properties (e.g., infinite moments). An alternate view is based on mathematical, statistical, and data-analytic arguments and suggests that scaling distributions should be viewed as "more normal than normal." In support of this latter view, which Mandelbrot has advocated for the last 40 years, we review in this paper some relevant results from probability theory and illustrate a powerful statistical approach for deciding whether the variability associated with observed event sizes is consistent with an underlying Gaussian-type (finite-variance) or scaling-type (infinite-variance) distribution. We contrast this approach with traditional model-fitting techniques and discuss its implications for future modeling of complex systems.
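
    One simple diagnostic in the spirit of this statistical approach is the max-to-sum ratio (a standard moment check, sketched below; it is not necessarily the exact procedure used in the paper): for data with a finite second moment the ratio tends to zero as the sample grows, whereas for scaling-type, infinite-variance data it remains non-negligible.

        import numpy as np

        def max_to_sum_ratio(x, p=2):
            """R_n = max(|x|^p) / sum(|x|^p). If E|X|^p is finite, R_n -> 0 as n
            grows; if the p-th moment is infinite (e.g., a scaling tail with index
            below p), R_n stays bounded away from zero."""
            xp = np.abs(np.asarray(x, dtype=float)) ** p
            return xp.max() / xp.sum()

        rng = np.random.default_rng(0)
        n = 100_000
        gaussian = rng.normal(size=n)                    # finite variance
        pareto = (1 - rng.random(size=n)) ** (-1 / 1.5)  # Pareto tail, index 1.5 < 2
        print("Gaussian R_n:", max_to_sum_ratio(gaussian))  # close to 0
        print("Pareto   R_n:", max_to_sum_ratio(pareto))    # non-negligible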

    Towards a Theory of Scale-Free Graphs: Definition, Properties, and Implications (Extended Version)

    Although the "scale-free" literature is large and growing, it gives neither a precise definition of scale-free graphs nor rigorous proofs of many of their claimed properties. In fact, it is easily shown that the existing theory has many inherent contradictions and verifiably false claims. In this paper, we propose a new, mathematically precise, and structural definition of the extent to which a graph is scale-free, and prove a series of results that recover many of the claimed properties while suggesting the potential for a rich and interesting theory. With this definition, scale-free (or its opposite, scale-rich) is closely related to other structural graph properties such as various notions of self-similarity (or, respectively, self-dissimilarity). Scale-free graphs are also shown to be the likely outcome of random construction processes, consistent with the heuristic definitions implicit in existing random graph approaches. Our approach clarifies much of the confusion surrounding the sensational qualitative claims in the scale-free literature, and offers rigorous and quantitative alternatives.
    Comment: 44 pages, 16 figures; the primary version is to appear in Internet Mathematics (2005).
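
    The structural measure usually associated with this work is s(g), the sum over all edges (i, j) of the degree product d_i * d_j: among graphs with the same degree sequence, a high s(g) indicates that high-degree nodes connect to one another (the "scale-free" end), while a low s(g) indicates that hubs connect mostly to low-degree nodes ("scale-rich"). The abstract does not spell out the definition, so the Python sketch below should be read as an assumption about it rather than a restatement of the paper.

        from collections import Counter

        def s_metric(edges):
            """s(g) = sum over edges (i, j) of d_i * d_j, where d_i is the degree
            of node i in the (undirected, simple) graph given by 'edges'."""
            deg = Counter()
            for u, v in edges:
                deg[u] += 1
                deg[v] += 1
            return sum(deg[u] * deg[v] for u, v in edges)

        # Two small graphs with the same degree sequence (two degree-3 hubs, six leaves):
        hub_to_hub  = [(0, 1), (0, 2), (0, 4), (4, 5), (4, 6), (3, 7)]  # hubs 0 and 4 linked
        hub_to_leaf = [(0, 1), (0, 2), (0, 3), (4, 5), (4, 6), (4, 7)]  # hubs kept apart
        print(s_metric(hub_to_hub), s_metric(hub_to_leaf))  # 22 vs. 18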

    Understanding Internet topology: principles, models, and validation

    Building on a recent effort that combines a first-principles approach to modeling router-level connectivity with a more pragmatic use of statistics and graph theory, we show in this paper that an improved understanding of the Internet's physical infrastructure is possible by viewing its physical connectivity as an annotated graph that delivers raw connectivity and bandwidth to the upper layers of the TCP/IP protocol stack, subject to practical constraints (e.g., router technology) and economic considerations (e.g., link costs). More importantly, by relying on data from Abilene, a Tier-1 ISP, and the Rocketfuel project, we provide empirical evidence in support of the proposed approach and its consistency with networking reality. To illustrate its utility, we: 1) show that our approach provides insight into the origin of high variability in measured or inferred router-level maps; 2) demonstrate that it easily accommodates the incorporation of additional objectives of network design (e.g., robustness to router failure); and 3) discuss how it complements ongoing community efforts to reverse-engineer the Internet.
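
    As an illustration of what such an annotated graph looks like in practice, the following Python sketch attaches a router-technology constraint (switching capacity) to each node and bandwidth and cost annotations to each link; the class names, numbers, and feasibility check are hypothetical stand-ins rather than the authors' actual model.

        from dataclasses import dataclass, field

        @dataclass
        class Router:
            name: str
            capacity_gbps: float  # router-technology constraint: total switching capacity

        @dataclass
        class Link:
            a: str
            b: str
            bandwidth_gbps: float  # raw bandwidth delivered to the upper layers
            cost: float            # economic annotation, e.g., relative link cost

        @dataclass
        class AnnotatedTopology:
            routers: dict = field(default_factory=dict)
            links: list = field(default_factory=list)

            def add_link(self, link: Link):
                """Reject links whose bandwidth would push either endpoint past its
                switching capacity, mimicking router-technology feasibility limits."""
                used = {name: 0.0 for name in self.routers}
                for l in self.links:
                    used[l.a] += l.bandwidth_gbps
                    used[l.b] += l.bandwidth_gbps
                for end in (link.a, link.b):
                    if used[end] + link.bandwidth_gbps > self.routers[end].capacity_gbps:
                        raise ValueError(f"{end} would exceed its switching capacity")
                self.links.append(link)

        topo = AnnotatedTopology()
        for r in (Router("core1", 640.0), Router("edge1", 40.0)):
            topo.routers[r.name] = r
        topo.add_link(Link("core1", "edge1", bandwidth_gbps=10.0, cost=1.0))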

    In Search of netUnicorn: A Data-Collection Platform to Develop Generalizable ML Models for Network Security Problems

    The remarkable success of machine learning (ML)-based solutions for network security problems has been impeded by the inability of the developed ML models to maintain their efficacy when used in network environments that exhibit different network behaviors. This issue is commonly referred to as the generalizability problem of ML models. The community has recognized the critical role that training datasets play in this context and has developed various techniques to improve dataset curation to overcome this problem. Unfortunately, these methods are generally ill-suited or even counterproductive in the network security domain, where they often result in unrealistic or poor-quality datasets. To address this issue, we propose an augmented ML pipeline that leverages explainable ML tools to guide network data collection in an iterative fashion. To ensure the data's realism and quality, we require that the new datasets be collected endogenously in this iterative process, thus advocating for a gradual removal of data-related problems to improve model generalizability. To realize this capability, we develop a data-collection platform, netUnicorn, that takes inspiration from the classic "hourglass" model and is implemented as its "thin waist" to simplify data collection for different learning problems from diverse network environments. The proposed system decouples data-collection intents from the deployment mechanisms and disaggregates these high-level intents into smaller, reusable, self-contained tasks. We demonstrate how netUnicorn simplifies collecting data for different learning problems from multiple network environments and how the proposed iterative data collection improves a model's generalizability.
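
    To illustrate the idea of decoupling data-collection intents from deployment mechanisms and disaggregating them into reusable, self-contained tasks, here is a hypothetical Python sketch; the class and task names are invented for illustration and do not reflect netUnicorn's actual API.

        from abc import ABC, abstractmethod

        class Task(ABC):
            """A small, self-contained unit of work; an ordered collection of these
            expresses a data-collection intent independently of where it runs."""
            @abstractmethod
            def run(self, node: str) -> dict: ...

        class StartPacketCapture(Task):  # hypothetical task, not netUnicorn's API
            def run(self, node):
                return {"node": node, "action": "start_capture"}

        class ReplayTraffic(Task):       # hypothetical task
            def __init__(self, trace: str):
                self.trace = trace
            def run(self, node):
                return {"node": node, "action": "replay", "trace": self.trace}

        class Pipeline:
            """Separates the intent (the ordered tasks) from the deployment
            mechanism (which nodes in which network environment execute them)."""
            def __init__(self, *tasks: Task):
                self.tasks = tasks
            def deploy(self, nodes):
                return [task.run(node) for node in nodes for task in self.tasks]

        # The same intent can be re-deployed to different environments unchanged.
        intent = Pipeline(StartPacketCapture(), ReplayTraffic("trace.pcap"))
        print(intent.deploy(["campus-node-1", "cloud-node-7"]))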