373 research outputs found
Ricci Curvature of the Internet Topology
Analysis of Internet topologies has shown that the Internet topology has
negative curvature, measured by Gromov's "thin triangle condition", which is
tightly related to core congestion and route reliability. In this work we
analyze the discrete Ricci curvature of the Internet, as defined by Ollivier,
Lin et al. Ricci curvature measures whether local distances diverge or
converge. It is a more local measure, which allows us to understand the
distribution of curvatures in the network. We show on various Internet data
sets that the distribution of Ricci curvature is spread out, suggesting that
the network topology is non-homogeneous. We also show that the Ricci
curvature has interesting connections to local measures such as node degree
and clustering coefficient, to global measures such as betweenness centrality
and network connectivity, and to auxiliary attributes such as geographical
distances. These observations add to the richness of geometric structures in
complex network theory.
Comment: 9 pages, 16 figures. To appear in INFOCOM 201
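The edge curvature described above can be made concrete. A minimal sketch of Ollivier's definition, kappa(x, y) = 1 - W1(m_x, m_y) / d(x, y), where m_v is the uniform distribution on v's neighbors: for simplicity it assumes both endpoints have equal degree, so the Wasserstein distance W1 reduces to a brute-force assignment problem. All function names are illustrative, not from the paper.

```python
from collections import deque
from itertools import permutations

def bfs_dist(adj, s):
    """Hop distances from s in an unweighted graph given as {node: set(neighbors)}."""
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def ollivier_ricci(adj, x, y):
    """kappa(x, y) = 1 - W1(m_x, m_y) / d(x, y), with m_v uniform on v's neighbors.
    Restricted to deg(x) == deg(y), so W1 is a brute-force assignment problem."""
    Nx, Ny = sorted(adj[x]), sorted(adj[y])
    assert len(Nx) == len(Ny), "this sketch assumes equal degrees"
    dist = {u: bfs_dist(adj, u) for u in Nx}
    n = len(Nx)
    # Uniform marginals of equal size: optimal transport is a min-cost matching.
    w1 = min(sum(dist[u][v] for u, v in zip(Nx, perm))
             for perm in permutations(Ny)) / n
    return 1.0 - w1 / bfs_dist(adj, x)[y]

# Edges of the complete graph K4 are positively curved; a 6-cycle is flat.
k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
c6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(ollivier_ricci(k4, 0, 1))  # 2/3
print(ollivier_ricci(c6, 0, 1))  # 0.0
```

The brute-force matching is exponential in the degree; real studies of large graphs solve W1 with a linear-programming or Sinkhorn solver instead.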
Generating Representative ISP Technologies From First-Principles
Understanding and modeling the factors that underlie the growth and evolution of network topologies are basic questions that impact capacity planning, forecasting, and protocol research. Early topology generation work focused on generating network-wide connectivity maps, either at the AS-level or the router-level, typically with an eye towards reproducing abstract properties of observed topologies. But recently, advocates of an alternative "first-principles" approach question the feasibility of realizing representative topologies with simple generative models that do not explicitly incorporate real-world constraints, such as the relative costs of router configurations, into the model. Our work synthesizes these two lines of work by designing a topology generation mechanism that incorporates first-principles constraints. Our goal is more modest than that of constructing an Internet-wide topology: we aim to generate representative topologies for single ISPs. However, our methods also go well beyond previous work, as we annotate these topologies with representative capacity and latency information. Taking only demand for network services over a given region as input, we propose a natural cost model for building and interconnecting PoPs and formulate the resulting optimization problem faced by an ISP. We devise hill-climbing heuristics for this problem and demonstrate that the solutions we obtain are quantitatively similar to those in measured router-level ISP topologies, with respect to both topological properties and fault tolerance.
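The hill-climbing approach can be sketched as follows. The cost model here (link cost proportional to geographic distance, with a feasibility requirement of connectivity plus a minimum node degree standing in for fault tolerance) is a deliberately simplified stand-in for the paper's PoP cost model; all names and parameters are illustrative.

```python
import math
import random

def hill_climb_topology(pops, min_degree=2, iters=3000, seed=1):
    """Greedy descent: start from a full mesh over PoP coordinates and toggle
    single links, keeping a move only if it lowers total link cost while the
    graph stays connected and every PoP meets the fault-tolerance degree bound."""
    rng = random.Random(seed)
    n = len(pops)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    edges = set(pairs)  # start fully connected (always feasible)

    def feasible(es):
        deg = [0] * n
        adj = [[] for _ in range(n)]
        for i, j in es:
            deg[i] += 1; deg[j] += 1
            adj[i].append(j); adj[j].append(i)
        if min(deg) < min_degree:
            return False
        seen, stack = {0}, [0]          # connectivity check via DFS
        while stack:
            for v in adj[stack.pop()]:
                if v not in seen:
                    seen.add(v); stack.append(v)
        return len(seen) == n

    def cost(es):
        return sum(math.dist(pops[i], pops[j]) for i, j in es)

    best = cost(edges)
    for _ in range(iters):
        trial = edges ^ {rng.choice(pairs)}   # toggle one candidate link
        if feasible(trial):
            c = cost(trial)
            if c < best:
                edges, best = trial, c
    return edges, best

pops = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)]  # hypothetical PoP coordinates
edges, total = hill_climb_topology(pops)
print(len(edges), round(total, 2))
```

Because only feasible moves are accepted and the starting mesh is feasible, the result is always connected with every PoP at degree two or more; the heuristic may still stop at a local optimum, as hill climbing generally does.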
Understanding Internet topology: principles, models, and validation
Building on a recent effort that combines a first-principles approach to modeling router-level connectivity with a more pragmatic use of statistics and graph theory, we show in this paper that for the Internet, an improved understanding of its physical infrastructure is possible by viewing the physical connectivity as an annotated graph that delivers raw connectivity and bandwidth to the upper layers in the TCP/IP protocol stack, subject to practical constraints (e.g., router technology) and economic considerations (e.g., link costs). More importantly, by relying on data from Abilene, a Tier-1 ISP, and the Rocketfuel project, we provide empirical evidence in support of the proposed approach and its consistency with networking reality. To illustrate its utility, we: 1) show that our approach provides insight into the origin of high variability in measured or inferred router-level maps; 2) demonstrate that it easily accommodates the incorporation of additional objectives of network design (e.g., robustness to router failure); and 3) discuss how it complements ongoing community efforts to reverse-engineer the Internet.
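The "annotated graph" view can be made concrete: links carry bandwidth annotations, and a router-technology constraint bounds the total bandwidth a router can terminate. The specific capacity numbers below are invented for illustration and do not come from the paper.

```python
# Sketch: router-level topology as an annotated graph, with a feasibility
# check against a per-router technology constraint (total terminated link
# bandwidth must not exceed the router's switching capacity).
links = {  # (router_a, router_b): bandwidth in Gb/s -- illustrative values
    ("r1", "r2"): 10.0,
    ("r1", "r3"): 10.0,
    ("r2", "r3"): 2.5,
}
capacity = {"r1": 40.0, "r2": 20.0, "r3": 20.0}  # hypothetical router models

def feasible(links, capacity):
    """True if every router's terminated bandwidth fits its capacity."""
    load = {r: 0.0 for r in capacity}
    for (a, b), bw in links.items():
        load[a] += bw
        load[b] += bw
    return all(load[r] <= capacity[r] for r in capacity)

print(feasible(links, capacity))  # True for the numbers above
```

This is the kind of constraint that rules out high-degree, high-bandwidth cores in real router-level maps: a router can trade port count against per-port speed, but not exceed its aggregate capacity.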
Distributed Collaborative Monitoring in Software Defined Networks
We propose a Distributed and Collaborative Monitoring system, DCM, with the
following properties. First, DCM allows switches to collaboratively achieve
flow monitoring tasks and balance measurement load. Second, DCM is able to
perform per-flow monitoring, by which different groups of flows are monitored
using different actions. Third, DCM is a memory-efficient solution for the
switch data plane and guarantees system scalability. DCM uses novel two-stage
Bloom filters to represent monitoring rules in a small memory space. It
utilizes the centralized SDN control to install, update, and reconstruct the
two-stage Bloom filters in the switch data plane. We study how DCM performs
two representative monitoring tasks, namely flow size counting and packet
sampling, and evaluate its performance. Experiments using real data center
and ISP traffic data on real network topologies show that DCM achieves the
highest measurement accuracy among existing solutions given the same switch
memory budget.
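The abstract does not spell out the two-stage construction, so the following is a plausible sketch of the general idea rather than the paper's design: a first-stage filter decides whether a flow is monitored at all, and a second bank of per-action filters maps monitored flows to their monitoring action. All sizes and rule strings are hypothetical.

```python
import hashlib

class Bloom:
    """Minimal Bloom filter: k indices derived from SHA-256, m bits kept in an int."""
    def __init__(self, m=4096, k=3):
        self.m, self.k, self.bits = m, k, 0
    def _indices(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, item):
        for i in self._indices(item):
            self.bits |= 1 << i
    def __contains__(self, item):
        return all(self.bits >> i & 1 for i in self._indices(item))

# Stage 1 answers "is this flow monitored at all?"; stage 2 holds one filter
# per monitoring action and maps a monitored flow to its action group.
stage1 = Bloom()
stage2 = {"count": Bloom(), "sample": Bloom()}

def install_rule(flow, action):
    stage1.add(flow)
    stage2[action].add(flow)

def lookup(flow):
    if flow not in stage1:
        return None                       # not monitored
    return [a for a, bf in stage2.items() if flow in bf]

install_rule("10.0.0.1->10.0.0.2:80", "count")
install_rule("10.0.0.3->10.0.0.4:53", "sample")
print(lookup("10.0.0.1->10.0.0.2:80"))   # contains 'count' (no false negatives)
```

Bloom filters admit false positives but never false negatives, which is why the controller in DCM must periodically reconstruct the filters as rules change; this sketch omits that update path.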
On the Design of Clean-Slate Network Control and Management Plane
We provide a design for a clean-slate control and management plane for data networks using the abstraction of the 4D architecture, utilizing and extending 4D's concept of a logically centralized Decision plane that is responsible for managing network-wide resources. In this paper, we investigate a scalable protocol and a dynamically adaptable algorithm for assigning Data plane devices to a physically distributed Decision plane, which enable a network to operate with minimal configuration and human intervention while providing optimal convergence and robustness against failures. Our work is especially relevant in the context of ISPs and large geographically dispersed enterprise networks. We also provide an extensive evaluation of our algorithm using real-world and artificially generated ISP topologies, along with an experimental evaluation using the ns-2 simulator.
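The abstract does not give the assignment algorithm itself, so the following is a minimal plausible sketch of the problem it describes: each Data plane switch attaches to its nearest live Decision plane element by hop count, and re-attaches automatically when that element fails. Topology and names are illustrative.

```python
from collections import deque

def hop_dist(adj, src):
    """BFS hop distances from src in a graph given as {node: set(neighbors)}."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def assign(adj, switches, decision_nodes, failed=frozenset()):
    """Map each Data plane switch to its nearest live Decision plane node."""
    live = [d for d in decision_nodes if d not in failed]
    dists = {d: hop_dist(adj, d) for d in live}
    return {s: min(live, key=lambda d: dists[d].get(s, float("inf")))
            for s in switches}

# A small line topology: d1 - s1 - s2 - s3 - d2
adj = {"d1": {"s1"}, "s1": {"d1", "s2"}, "s2": {"s1", "s3"},
       "s3": {"s2", "d2"}, "d2": {"s3"}}
print(assign(adj, ["s1", "s2", "s3"], ["d1", "d2"]))
# s1 -> d1 and s3 -> d2; the tie at s2 breaks to the first live node checked
print(assign(adj, ["s1", "s2", "s3"], ["d1", "d2"], failed={"d1"}))
# with d1 failed, every switch re-attaches to d2
```

A real Decision plane would also need load limits per element and a protocol for switches to discover failures, which this sketch leaves out.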
Measured impact of crooked traceroute
Data collected using traceroute-based algorithms underpins research into the Internet’s router-level topology, though it is possible to infer false links from this data. One source of false inference is the combination of per-flow load-balancing, in which more than one path is active from a given source to destination, and classic traceroute, which varies the UDP destination port number or ICMP checksum of successive probe packets, which can cause per-flow load-balancers to treat successive packets as distinct flows and forward them along different paths. Consequently, successive probe packets can solicit responses from unconnected routers, leading to the inference of false links. This paper examines the inaccuracies induced by such false inferences, on both macroscopic and ISP topology mapping. We collected macroscopic topology data to 365k destinations, with techniques that both do and do not try to capture load-balancing phenomena. We then use alias resolution techniques to infer if a measurement artifact of classic traceroute induces a false router-level link. This technique detected that 2.71% and 0.76% of the links in our UDP and ICMP graphs were falsely inferred due to the presence of load-balancing. We conclude that most per-flow load-balancing does not induce false links when macroscopic topology is inferred using classic traceroute. The effect of false links on ISP topology mapping is possibly much worse, because the degrees of a tier-1 ISP’s routers derived from classic traceroute were inflated by a median factor of 2.9 as compared to those inferred with Paris traceroute.
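The mechanism behind these false links can be illustrated with a toy model. A per-flow load balancer hashes the five-tuple to choose a next hop; the hash below is a deliberately simple stand-in for a real ECMP hash, and the addresses and ports are made up.

```python
def next_hop(src, dst, proto, sport, dport, hops):
    """Toy per-flow load balancer: pick a next hop from a hash of the
    five-tuple (a real router uses a proper ECMP hash; this is a stand-in)."""
    return hops[(src + dst + proto + sport + dport) % len(hops)]

hops = ["routerA", "routerB"]

# Classic traceroute varies the UDP destination port on each probe, so
# successive probes look like distinct flows and can take different paths,
# interleaving responses from routers that are not actually connected.
classic = [next_hop(1, 2, 17, 53000, 33434 + i, hops) for i in range(6)]

# Paris traceroute keeps the five-tuple fixed, so every probe at a given TTL
# follows the same path through the load balancer.
paris = [next_hop(1, 2, 17, 53000, 33434, hops) for _ in range(6)]

print(sorted(set(classic)))  # ['routerA', 'routerB'] -- two paths observed
print(set(paris))            # {'routerA'} -- a single consistent path
```

With the classic strategy the probe stream alternates between the two next hops, so a TTL-limited trace can stitch hops from different paths into a link that does not exist; holding the five-tuple constant removes that artifact.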
Robust traffic engineering
The phenomenal growth of Internet applications in recent years has made it difficult to forecast traffic patterns. Daily Internet traffic patterns show that the network is vulnerable to malicious attacks, flash crowds, and distributed denial-of-service (DDoS) attacks. In this paper, we present a robust routing technique (RRT) that attempts to deal with both normal routing conditions and transient failures. Our simulation results are compared with OSPF-TE. The key advantage of RRT is its fast convergence: it quickly produces solutions on the family of topologies we consider in this paper. We aim to combine the best of proactive and reactive traffic engineering in RRT.
Use of Devolved Controllers in Data Center Networks
In a data center network, for example, it is common to use controllers to
manage resources in a centralized manner. Centralized control, however,
imposes a scalability problem. In this paper, we investigate the use of
multiple independent controllers, instead of a single omniscient controller,
to manage resources. Each controller looks after only a portion of the
network, but together they cover the whole network, which solves the
scalability problem. We use flow allocation as an example of how this
approach can manage bandwidth use in a distributed manner. The focus is on
how to assign components of a network to the controllers so that (1) each
controller only needs to look after a small part of the network but (2) there
is at least one controller that can answer any request. We outline a way to
configure the controllers to fulfill these requirements as a proof that the
use of devolved controllers is possible. We also discuss several issues
related to such an implementation.
Comment: Appears in INFOCOM 2011 Cloud Computing Workshop
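The two requirements above, a small part per controller and full coverage by the union, can be illustrated with a simple assignment rule. The ball-based scheme below (each controller owns the links inside a fixed-radius neighborhood of its home node) is our illustration of the coverage check, not the paper's construction; names and the topology are made up.

```python
from collections import deque

def ball(adj, center, radius):
    """Nodes within `radius` hops of `center`, by truncated BFS."""
    dist, q = {center: 0}, deque([center])
    while q:
        u = q.popleft()
        if dist[u] == radius:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return set(dist)

def devolve(adj, homes, radius):
    """Each controller owns the links with both endpoints inside its ball.
    Returns the per-controller link sets and whether they cover all links."""
    all_links = {frozenset((u, v)) for u in adj for v in adj[u]}
    parts = {c: {l for l in all_links if l <= ball(adj, home, radius)}
             for c, home in homes.items()}
    covered = set().union(*parts.values())
    return parts, covered == all_links

# A 6-node ring with two controllers at opposite nodes: radius 2 is enough
# for the two balls to jointly cover every link.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
parts, full_coverage = devolve(adj, {"c1": 0, "c2": 3}, radius=2)
print(full_coverage)                          # True
print({c: len(p) for c, p in parts.items()})  # 4 of the 6 links each
```

Each controller holds only part of the network, yet any link-level request can be answered by at least one of them, which is exactly the pair of requirements the paper sets out; choosing homes and radii to minimize overlap is the interesting optimization the abstract alludes to.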