
    Traffic on complex networks: Towards understanding global statistical properties from microscopic density fluctuations

    We study the microscopic time fluctuations of traffic load and the global statistical properties of dense particle traffic on scale-free cyclic graphs. For a wide range of driving rates R, the traffic is stationary and the load time series exhibits antipersistence due to the regulatory role of the superstructure associated with two hub nodes in the network. We discuss how the superstructure affects the functioning of the network at high traffic density and at the jamming threshold. The degree of correlation systematically decreases with increasing traffic density and eventually disappears when approaching the jamming density Rc. Even before jamming, we observe qualitative changes in the global network-load distributions and the particle queuing times. These changes are related to the occurrence of temporary crises in which the network load increases dramatically and then slowly falls back to a value characterizing free flow.
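
    The antipersistence claim invites a simple numerical check. Below is a minimal sketch, not the authors' model: a generic random-walk traffic toy on a Barabasi-Albert graph (networkx assumed available), recording the total queued load per step; a negative lag-1 autocorrelation of the load increments is a crude signature of antipersistent fluctuations. All parameter values are illustrative.

    import random
    import networkx as nx

    def simulate_load(n=200, rate=0.3, steps=2000, seed=1):
        """Toy particle traffic on a scale-free graph; returns the load time series."""
        random.seed(seed)
        g = nx.barabasi_albert_graph(n, 2, seed=seed)
        queues = {v: 0 for v in g}              # packets waiting at each node
        load = []
        for _ in range(steps):
            for _ in range(int(rate * n)):      # driving rate R: packets injected per step
                queues[random.choice(list(g))] += 1
            for v in list(g):                   # each node forwards at most one packet
                if queues[v] > 0:
                    queues[v] -= 1
                    if random.random() < 0.1:   # packet delivered and absorbed
                        continue
                    queues[random.choice(list(g[v]))] += 1
            load.append(sum(queues.values()))
        return load

    def lag1_autocorr(series):
        """Lag-1 autocorrelation of increments; negative values hint at antipersistence."""
        d = [b - a for a, b in zip(series, series[1:])]
        m = sum(d) / len(d)
        num = sum((d[i] - m) * (d[i + 1] - m) for i in range(len(d) - 1))
        return num / sum((x - m) ** 2 for x in d)

    print(round(lag1_autocorr(simulate_load()), 3))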

    Understanding Internet topology: principles, models, and validation

    Building on a recent effort that combines a first-principles approach to modeling router-level connectivity with a more pragmatic use of statistics and graph theory, we show that an improved understanding of the Internet's physical infrastructure is possible by viewing physical connectivity as an annotated graph that delivers raw connectivity and bandwidth to the upper layers of the TCP/IP protocol stack, subject to practical constraints (e.g., router technology) and economic considerations (e.g., link costs). More importantly, relying on data from Abilene, a Tier-1 ISP, and the Rocketfuel project, we provide empirical evidence in support of the proposed approach and its consistency with networking reality. To illustrate its utility, we: 1) show that the approach provides insight into the origin of the high variability seen in measured or inferred router-level maps; 2) demonstrate that it easily accommodates additional network design objectives (e.g., robustness to router failure); and 3) discuss how it complements ongoing community efforts to reverse-engineer the Internet.
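
    The annotated-graph view is easy to make concrete. Here is a minimal sketch using networkx; the attribute names (capacity_gbps, bandwidth_gbps, cost) and all numbers are illustrative assumptions, not from the paper. Links carry bandwidth and cost, routers carry a technology constraint on total switching capacity, and a candidate topology can then be checked for feasibility.

    import networkx as nx

    g = nx.Graph()
    g.add_node("core1", capacity_gbps=40.0)    # router technology constraint
    g.add_node("edge1", capacity_gbps=10.0)
    g.add_node("edge2", capacity_gbps=10.0)
    g.add_edge("core1", "edge1", bandwidth_gbps=10.0, cost=1.0)   # link cost: economics
    g.add_edge("core1", "edge2", bandwidth_gbps=10.0, cost=1.0)

    def feasible(graph):
        """A router is infeasible if its attached link bandwidth exceeds its capacity."""
        for v, data in graph.nodes(data=True):
            attached = sum(graph[v][w]["bandwidth_gbps"] for w in graph[v])
            if attached > data["capacity_gbps"]:
                return False
        return True

    print("topology feasible:", feasible(g))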

    Why (and How) Networks Should Run Themselves

    The proliferation of networked devices, systems, and applications that we depend on every day makes managing networks more important than ever. The increasing security, availability, and performance demands of these applications mean that increasingly difficult network management problems must be solved in real time, across a complex web of interacting protocols and systems. Alas, just as the importance of network management has increased, the network has grown so complex that it is seemingly unmanageable. In this new era, network management requires a fundamentally new approach. Instead of optimizations based on closed-form analysis of individual protocols, network operators need data-driven, machine-learning-based models of end-to-end and application performance, grounded in high-level policy goals and a holistic view of the underlying components. Instead of anomaly detection algorithms that operate on offline analysis of network traces, operators need classification and detection algorithms that can make real-time, closed-loop decisions. Networks should learn to drive themselves. This paper explores this concept, discussing how we might attain this ambitious goal by coupling measurement more closely with real-time control and by relying on learning for inference and prediction about a networked application or system, rather than on closed-form analysis of individual protocols.
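
    To make the measure-infer-act loop concrete, here is a minimal sketch, not the paper's system: a stub detector (a windowed mean/variance test standing in for a learned classifier) wired into a closed loop, with act() as a placeholder for a real control action such as rerouting. All names and thresholds are illustrative.

    from collections import deque

    class ClosedLoopController:
        def __init__(self, window=20, threshold=3.0):
            self.samples = deque(maxlen=window)   # sliding window of measurements
            self.threshold = threshold

        def observe(self, latency_ms):
            self.samples.append(latency_ms)

        def infer(self):
            """Stub for a learned model: flag a sample far from the windowed mean."""
            if len(self.samples) < self.samples.maxlen:
                return False
            mean = sum(self.samples) / len(self.samples)
            var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
            return var > 0 and abs(self.samples[-1] - mean) > self.threshold * var ** 0.5

        def act(self):
            print("rerouting traffic (placeholder control action)")

    ctl = ClosedLoopController()
    for latency in [10, 11, 10, 12, 11] * 4 + [80]:   # a latency spike at the end
        ctl.observe(latency)
        if ctl.infer():
            ctl.act()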

    Core-periphery organization of complex networks

    Networks may, or may not, be wired to have a core that is both densely connected internally and central in terms of graph distance. In this study we propose a coefficient that measures whether a network has such a clear-cut core-periphery dichotomy. We measure this coefficient for a number of real-world and model networks and find that different classes of networks have characteristic values. For example, geographical networks have a strong core-periphery structure, while the core-periphery structure of social networks (despite their positive degree-degree correlations) is rather weak. We proceed to study radial statistics of the core, i.e., properties of the n-neighborhoods of the core vertices for increasing n. We find that almost all networks have unexpectedly many edges within n-neighborhoods at a certain distance from the core, suggesting an effective radius for non-trivial network processes.
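
    A minimal sketch of the radial statistic described above, taking the innermost k-core as the "core" (the paper's own coefficient is defined differently, so this is only an illustration): count the edges inside the n-neighborhood of the core for growing n, using networkx.

    import networkx as nx

    def radial_edge_counts(g, max_n=4):
        """Edges within the n-neighborhood of the innermost k-core, for n = 0..max_n."""
        core_numbers = nx.core_number(g)
        k = max(core_numbers.values())
        core = {v for v, c in core_numbers.items() if c == k}
        counts = []
        for n in range(max_n + 1):
            shell = set(core)
            for v in core:
                shell |= set(nx.single_source_shortest_path_length(g, v, cutoff=n))
            counts.append(g.subgraph(shell).number_of_edges())
        return counts

    g = nx.karate_club_graph()   # stand-in for a real-world network
    print(radial_edge_counts(g))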

    Design and analysis of adaptive hierarchical low-power long-range networks

    A new phase in the evolution of Machine-to-Machine (M2M) communication has begun, in which vertical Internet of Things (IoT) deployments dedicated to a single application domain gradually give way to multi-purpose IoT infrastructures that serve different applications across multiple industries. New networking technologies operating over sub-GHz frequency bands are being deployed; these enable multi-tenant connectivity over long distances and increase network capacity by enforcing low transmission rates. Such technologies allow cloud-based platforms to connect to large numbers of IoT devices deployed several kilometres from the edges of the network. Despite the rapid uptake of Low-Power Wide-Area Networks (LPWANs), it remains unclear how to organize the wireless sensor network in a scalable and adaptive way. This paper introduces a hierarchical communication scheme that exploits the new capabilities of long-range wireless sensor networking technologies by combining them with widely used 802.15.4-based short-range low-power technologies. The design of the hierarchical scheme is presented in detail, along with the technical details of its implementation on real-world hardware platforms. A platform-agnostic firmware is produced and evaluated in real-world large-scale testbeds. The performance of the networking scheme is evaluated through a series of experimental scenarios covering varying channel quality, failing nodes, and mobile nodes. Performance is assessed in terms of the overall time required to organize the network and set up a hierarchy, the energy consumption and overall lifetime of the network, and the ability to adapt to channel failures. The experimental analysis indicates that the combination of long-range and short-range networking technologies can lead to scalable solutions that concurrently serve multiple applications.
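
    A minimal sketch of the two-tier organization, not the paper's protocol: a few long-range-capable nodes act as cluster heads, and every other node attaches over the short-range tier to the head with the best link. The node names and link qualities are made up for illustration.

    import random

    random.seed(0)
    nodes = [f"n{i}" for i in range(12)]
    heads = nodes[:3]   # assume these three carry long-range (e.g., LoRa) radios
    link_quality = {(a, h): random.random() for a in nodes for h in heads if a != h}

    def form_hierarchy():
        """Each non-head node joins the cluster head with the best short-range link."""
        topology = {h: [] for h in heads}
        for v in nodes:
            if v in heads:
                continue
            best = max(heads, key=lambda h: link_quality[(v, h)])
            topology[best].append(v)
        return topology

    for head, members in form_hierarchy().items():
        print(head, "->", members)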

    The Dynamics of Internet Traffic: Self-Similarity, Self-Organization, and Complex Phenomena

    The Internet is the most complex system ever created in human history, so it is unsurprising that its dynamics and traffic exhibit a rich variety of complex behaviors, self-organization, and other phenomena that have been researched for years. This paper reviews the complex dynamics of Internet traffic. Departing from the usual treatments, we take a view from both the network engineering and physics perspectives, showing the strengths, weaknesses, and insights of both. In addition, we cover many less discussed phenomena, such as traffic oscillations, the large-scale effects of worm traffic, and comparisons between the Internet and biological models.

    Comment: 63 pages, 7 figures, 7 tables; submitted to Advances in Complex Systems.
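
    Self-similarity of traffic is commonly quantified by the Hurst exponent H. Below is a minimal sketch of one standard estimator, the aggregated-variance method: for a self-similar count series, the variance of m-aggregated means decays as m^(2H - 2), so the slope of log-variance versus log-m yields H. The i.i.d. input is only a sanity check (expect H near 0.5); a real traffic trace would be read in instead.

    import math
    import random

    def aggregated_variance_hurst(series, block_sizes=(1, 2, 4, 8, 16, 32)):
        """Estimate H from the decay of the variance of m-aggregated block means."""
        xs, ys = [], []
        for m in block_sizes:
            blocks = [sum(series[i:i + m]) / m
                      for i in range(0, len(series) - m + 1, m)]
            mean = sum(blocks) / len(blocks)
            var = sum((b - mean) ** 2 for b in blocks) / len(blocks)
            xs.append(math.log(m))
            ys.append(math.log(var))
        n = len(xs)   # least-squares slope of log-var vs. log-m
        slope = ((n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys))
                 / (n * sum(x * x for x in xs) - sum(xs) ** 2))
        return 1 + slope / 2

    random.seed(0)
    iid = [random.random() for _ in range(4096)]   # short-range dependent input
    print("H (iid, expect about 0.5):", round(aggregated_variance_hurst(iid), 2))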

    Shortest Path versus Multi-Hub Routing in Networks with Uncertain Demand

    We study a class of robust network design problems motivated by the need to scale core networks to meet increasingly dynamic capacity demands. Past work has focused on designing the network to support all hose matrices (all matrices not exceeding marginal bounds at the nodes). This model may be too conservative if additional information on traffic patterns is available. The other extreme is the fixed-demand model, where one designs the network to support peak point-to-point demands. We introduce a capped hose model to explore a broader range of traffic matrices, which includes the above two as special cases. It is known that optimal designs for the hose model are always determined by single-hub routing, while those for the fixed-demand model are based on shortest-path routing. We shed light on the wider space of capped hose matrices to see which traffic models are more shortest-path-like as opposed to hub-like. To address the space in between, we use hierarchical multi-hub routing templates, a generalization of hub and tree routing. In particular, we show that once peak capacities are added to the hose model, the single-hub tree-routing template is no longer cost-effective. This initiates the study of a class of robust network design (RND) problems restricted to these templates. Our empirical analysis is based on a heuristic for this new hierarchical RND problem. We also propose that it is possible to define a routing indicator that accounts for the relative strengths of the marginals and the peak demands, and to use this information to choose the appropriate routing template. We benchmark our approach against other well-known routing templates, using representative carrier networks and a variety of capped hose traffic demands, parameterized by the relative importance of their marginals as opposed to their point-to-point peak demands.
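
    A minimal sketch of what membership in the capped hose model means, with illustrative numbers: a traffic matrix D is admissible if each node's outgoing and incoming totals respect its marginal bounds (the hose part) and each entry respects its point-to-point cap.

    def in_capped_hose(D, out_marginal, in_marginal, cap):
        """Check a traffic matrix against capped hose constraints."""
        n = len(D)
        for i in range(n):
            if sum(D[i]) > out_marginal[i]:                      # hose: outgoing bound
                return False
            if sum(D[j][i] for j in range(n)) > in_marginal[i]:  # hose: incoming bound
                return False
            for j in range(n):
                if D[i][j] > cap[i][j]:                          # cap: peak p2p demand
                    return False
        return True

    D = [[0, 2, 1],
         [1, 0, 2],
         [2, 1, 0]]
    out_m, in_m = [4, 4, 4], [4, 4, 4]
    cap = [[0, 2, 2], [2, 0, 2], [2, 2, 0]]
    print(in_capped_hose(D, out_m, in_m, cap))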

    Topology Discovery of Sparse Random Graphs With Few Participants

    We consider the task of topology discovery of sparse random graphs using random end-to-end measurements (e.g., delay) between a subset of nodes, referred to as the participants. The remaining nodes are hidden and provide no information for topology discovery. We consider topology discovery under two routing models: (a) the participants exchange messages along shortest paths and obtain end-to-end measurements, and (b) the participants additionally exchange messages along the second-shortest paths. For scenario (a), our proposed algorithm achieves a sub-linear edit-distance guarantee using a sub-linear number of uniformly selected participants. For scenario (b), we obtain a much stronger result and show that consistent reconstruction is achievable when a sub-linear number of uniformly selected nodes participate. This implies that accurate discovery of sparse random graphs is tractable using an extremely small number of participants. Finally, we obtain a lower bound on the number of participants required by any algorithm to reconstruct the original random graph up to a given edit distance. We also demonstrate that while consistent discovery is tractable for sparse random graphs using a small number of participants, in general there are graphs that cannot be discovered by any algorithm, even with a significant number of participants and with end-to-end information available along all paths between the participants.

    Comment: A shorter version appears in ACM SIGMETRICS 2011. This version is scheduled to appear in J. on Random Structures and Algorithms.
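
    A minimal sketch of the kind of primitive such shortest-path-based discovery builds on, using a small hypothetical tree (not the paper's algorithm): in a tree, two participant leaves hang off the same hidden node exactly when their distance is 2 and they are equidistant from every other participant; networkx is assumed available.

    import networkx as nx

    g = nx.Graph()
    g.add_edges_from([("h1", "a"), ("h1", "b"), ("h1", "h2"),
                      ("h2", "c"), ("h2", "d")])   # h1, h2 are hidden nodes
    participants = ["a", "b", "c", "d"]
    d = dict(nx.all_pairs_shortest_path_length(g))  # end-to-end hop counts

    def siblings(u, v):
        """True iff u and v share a hidden neighbor, judged from distances alone."""
        return d[u][v] == 2 and all(d[u][w] == d[v][w]
                                    for w in participants if w not in (u, v))

    for u, v in [("a", "b"), ("a", "c"), ("c", "d")]:
        print(u, v, "share a hidden neighbor:", siblings(u, v))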