11,962 research outputs found

    Understanding edge-connectivity in the Internet through core-decomposition

    The Internet is a complex network composed of several networks: the Autonomous Systems, each designed to transport information efficiently. Routing protocols aim to find paths between nodes whenever possible (i.e., when the network is not partitioned), or to find paths satisfying specific constraints (e.g., a required QoS). Since connectivity is related to both concerns (partitions and selected paths), this work provides a formal lower bound on it based on core-decomposition, valid under certain conditions, together with low-complexity algorithms to compute it. We apply these to maps obtained from the prominent Internet mapping projects, using the LaNet-vi open-source software for visualization.
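
    A rough, hypothetical sketch of the quantities involved (not the paper's algorithm): networkx can compute the core decomposition, and the innermost shell index can be set beside that core's measured edge-connectivity. The BA graph is a stand-in for a real AS map, and the paper's bound holds only under conditions this sketch does not verify.

        import networkx as nx

        # Hypothetical stand-in for a real AS-level map; the paper uses maps
        # from the major Internet mapping projects instead.
        G = nx.barabasi_albert_graph(1000, 3)

        core = nx.core_number(G)            # node -> its core (shell) index
        k_max = max(core.values())
        innermost = [n for n, k in core.items() if k == k_max]
        H = G.subgraph(innermost)

        print(f"innermost core: k = {k_max}, {len(innermost)} nodes")
        # The paper relates shell indices to edge-connectivity only under
        # extra conditions; here we merely compute both numbers side by side.
        print("edge connectivity of innermost core:", nx.edge_connectivity(H))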

    K-core decomposition of Internet graphs: hierarchies, self-similarity and measurement biases

    We consider the k-core decomposition of network models and Internet graphs at the autonomous system (AS) level. The k-core analysis makes it possible to characterize networks beyond the degree distribution and to uncover structural properties and hierarchies due to the specific architecture of the system. We compare the k-core structure obtained for AS graphs with those of several network models and discuss the differences and similarities with the real Internet architecture. The presence of biases and the incompleteness of the real maps are discussed, and their effect on the k-core analysis is assessed with numerical experiments simulating biased exploration on a wide range of network models. We find that the k-core analysis provides an interesting characterization of the fluctuations and incompleteness of maps, as well as information that helps discriminate the original underlying structure.
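
    As a toy illustration of comparing k-core structure across network models (not the paper's experimental setup, which uses real AS maps and biased-exploration simulations), the sketch below contrasts the shell-size profiles of a preferential-attachment graph and a random graph of the same size.

        import networkx as nx
        from collections import Counter

        def shell_sizes(G):
            """Nodes per k-shell, i.e., per value of the core number."""
            return Counter(nx.core_number(G).values())

        # Toy stand-ins for the AS graph and a null model of the same size.
        ba = nx.barabasi_albert_graph(5000, 3)                # preferential attachment
        er = nx.gnm_random_graph(5000, ba.number_of_edges())  # same nodes/edges, random

        for name, G in (("BA", ba), ("ER", er)):
            sizes = shell_sizes(G)
            print(f"{name}: deepest shell k = {max(sizes)}, "
                  f"shells = {dict(sorted(sizes.items()))}")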

    Mathematics and the Internet: A Source of Enormous Confusion and Great Potential

    Graph theory models the Internet mathematically, and a number of plausible, mathematically interesting network models for the Internet have been developed and studied. Simultaneously, Internet researchers have developed methodology to use real data to validate, or invalidate, proposed Internet models. The authors look at these parallel developments, particularly as they apply to scale-free network models of the preferential attachment type.

    VoG: Summarizing and Understanding Large Graphs

    How can we succinctly describe a million-node graph with a few simple sentences? How can we measure the "importance" of a set of discovered subgraphs in a large graph? These are exactly the problems we focus on. Our main ideas are to construct a "vocabulary" of subgraph types that often occur in real graphs (e.g., stars, cliques, chains) and, from a set of candidate subgraphs, to find the most succinct description of a graph in terms of this vocabulary. We measure success in a well-founded way by means of the Minimum Description Length (MDL) principle: a subgraph is included in the summary if it decreases the total description length of the graph. Our contributions are three-fold: (a) formulation: we provide a principled encoding scheme to choose vocabulary subgraphs; (b) algorithm: we develop VoG, an efficient method to minimize the description cost; and (c) applicability: we report experimental results on multi-million-edge real graphs, including Flickr and the Notre Dame web graph.
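
    The paper specifies a precise encoding; the sketch below is only a toy MDL-style comparison under assumed costs (the helper functions and the 2*log2(n)-bits-per-edge naive code are illustrative, not VoG's scheme). It shows the core decision rule: keep a candidate near-clique in the summary when describing it as a structure plus corrections is cheaper than listing its edges.

        import math
        from itertools import combinations

        def bits_for_edges(num_edges, n):
            # Naive encoding: each edge as a node pair, 2*log2(n) bits.
            return num_edges * 2 * math.log2(n)

        def clique_cost(S, edges, n):
            # Cost of declaring "clique on S" plus corrections for missing edges.
            id_cost = math.log2(math.comb(n, len(S)))  # which nodes form the clique
            possible = len(S) * (len(S) - 1) // 2
            present = sum(1 for u, v in combinations(sorted(S), 2)
                          if (u, v) in edges or (v, u) in edges)
            return id_cost + bits_for_edges(possible - present, n), present

        # Toy graph: a near-clique {0,1,2,3} missing one edge, plus a short chain.
        edges = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (4, 5), (5, 6)}
        n, S = 7, {0, 1, 2, 3}

        with_structure, covered = clique_cost(S, edges, n)
        naive = bits_for_edges(covered, n)
        print(f"clique encoding: {with_structure:.1f} bits vs naive: {naive:.1f} bits")
        # MDL rule: include S in the summary only if it shortens the description.
        print("include in summary:", with_structure < naive)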

    Opportunistic Third-Party Backhaul for Cellular Wireless Networks

    With high-capacity air interfaces and large numbers of small cells, backhaul -- the wired connectivity to base stations -- is increasingly becoming the cost driver in cellular wireless networks. One reason for the high cost of backhaul is that capacity is often purchased on leased lines with guaranteed rates provisioned to peak loads. In this paper, we present an alternate opportunistic backhaul model where third parties provide base stations and backhaul connections and lease out excess capacity in their networks to the cellular provider when available, presumably at significantly lower costs than guaranteed connections. We describe a scalable architecture for such deployments using open-access femtocells, which are small plug-and-play base stations that operate in the carrier's spectrum but can connect directly into the third-party provider's wired network. Within the proposed architecture, we present a general user-association optimization algorithm that enables the cellular provider to dynamically determine which mobiles should be assigned to the third-party femtocells, based on traffic demands, interference and channel conditions, and third-party access pricing. Although the optimization is non-convex, the algorithm uses a computationally efficient method for finding approximate solutions via dual decomposition. Simulations of the deployment model, based on actual base station locations, show that large capacity gains are achievable if adoption of third-party, open-access femtocells can reach even a small fraction of the current market penetration of WiFi access points.
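
    A much-simplified, hypothetical sketch of the dual-decomposition pattern (the model, the linear utilities, and names such as rate and cap are all assumptions, not the paper's formulation): relaxing per-station capacity constraints with prices lets each user's assignment be solved independently, with a subgradient update on the prices.

        import numpy as np

        rng = np.random.default_rng(0)
        U, B = 40, 5                           # users, base stations (hypothetical)
        rate = rng.uniform(1.0, 10.0, (U, B))  # achievable rate of user u on BS b
        cap = np.full(B, 50.0)                 # assumed backhaul capacity per BS
        price = np.zeros(B)                    # Lagrange multipliers on capacity
        step = 0.01

        for _ in range(500):
            # Given prices, the assignment decouples: each user independently
            # picks the BS that maximizes its price-discounted rate.
            choice = np.argmax((1.0 - price) * rate, axis=1)
            load = np.bincount(choice, weights=rate[np.arange(U), choice],
                               minlength=B)
            # Subgradient step: raise prices where capacity is exceeded.
            price = np.maximum(0.0, price + step * (load - cap))

        print("final loads:", np.round(load, 1))
        print("capacities:", cap)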

    MEDUSA - New Model of Internet Topology Using k-shell Decomposition

    The k-shell decomposition of a random graph provides a different and more insightful separation of the roles of the different nodes in such a graph than does the usual analysis in terms of node degrees. We develop this approach to analyze the Internet's structure at a coarse level, that of the "Autonomous Systems" or ASes, the subnetworks out of which the Internet is assembled. We employ new data from DIMES (see http://www.netdimes.org), a distributed agent-based mapping effort which at present has attracted over 3800 volunteers running more than 7300 DIMES clients in over 85 countries. We combine these data with the AS-graph information available from the RouteViews project at Univ. Oregon, and have obtained an Internet map with far more detail than any previous effort. The data suggest a new picture of the AS-graph structure, which distinguishes a relatively large, redundantly connected core of nearly 100 ASes and two components that flow data in and out of this core. One component is fractally interconnected through peer links; the second makes direct connections to the core only. The resulting model has superficial similarities with, and important differences from, the "Jellyfish" structure proposed by Tauro et al., so we call it a "Medusa." We plan to use this picture as a framework for measuring and extrapolating changes in the Internet's physical structure. Our k-shell analysis may also be relevant for estimating the function of nodes in the "scale-free" graphs extracted from other naturally occurring processes.
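
    A rough, assumed approximation of the Medusa-style classification (the paper's own procedure is more refined and works on real DIMES/RouteViews data): take the innermost k-shell as the nucleus, remove it, and split the remainder into the largest surviving component and the fragments that connect only through the nucleus.

        import networkx as nx

        # Stand-in for the measured AS map used in the paper.
        G = nx.barabasi_albert_graph(3000, 3)

        core = nx.core_number(G)
        k_max = max(core.values())
        nucleus = {n for n, k in core.items() if k == k_max}

        # With the nucleus removed, the largest leftover component plays the
        # role of the peer-connected part; the other fragments reach the rest
        # of the graph only through the nucleus.
        rest = G.subgraph(set(G) - nucleus)
        parts = sorted(nx.connected_components(rest), key=len, reverse=True)
        peer_connected = parts[0] if parts else set()
        via_core_only = set().union(*parts[1:]) if len(parts) > 1 else set()

        print(f"nucleus: {len(nucleus)} nodes (k-shell {k_max})")
        print(f"peer-connected component: {len(peer_connected)} nodes")
        print(f"connected via the nucleus only: {len(via_core_only)} nodes")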