Shortcuts through Colocation Facilities
Network overlays, running on top of the existing Internet substrate, are of
perennial value to Internet end-users in the context of, e.g., real-time
applications. Such overlays can employ traffic relays to yield path latencies
lower than the direct paths, a phenomenon known as Triangle Inequality
Violation (TIV). Past studies identify opportunities to reduce latency
using TIVs. However, they do not investigate the gains of strategically
selecting relays in Colocation Facilities (Colos). In this work, we answer the
following questions: (i) how Colo-hosted relays compare with other relays as
well as with the direct Internet, in terms of latency (RTT) reductions; (ii)
which locations are best for placing the relays to yield these reductions.
To this end, we conduct a large-scale one-month measurement of inter-domain
paths between RIPE Atlas (RA) endpoint nodes located in eyeball networks.
As relays, we employ PlanetLab nodes, other RA nodes, and machines in Colos. We
examine the RTTs of the overlay paths obtained via the selected relays, as well
as the direct paths. We find that Colo-based relays perform best, achieving
latency reductions over direct paths ranging from a few to hundreds of
milliseconds in 76% of the total cases; 75% of these reductions (58% of total
cases) require only 10 relays in 6 large Colos.
Comment: In Proceedings of the ACM Internet Measurement Conference (IMC '17), London, GB, 2017
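To make the relay-selection idea concrete, the following is a minimal Python sketch, not the paper's measurement pipeline, of the TIV test that underlies it: a relay R shortcuts the direct path from A to B whenever RTT(A,R) + RTT(R,B) < RTT(A,B). The function name, relay names, and RTT values are hypothetical.

```python
# Minimal sketch of Triangle Inequality Violation (TIV) detection:
# a relay R "shortcuts" the direct path A->B whenever
# RTT(A,R) + RTT(R,B) < RTT(A,B). All names/values are hypothetical.

def best_relay(rtt_direct, rtt_via_relay):
    """Return (relay, overlay_rtt, reduction_ms) for the best TIV relay,
    or None if no relay beats the direct path.

    rtt_direct    -- measured RTT of the direct path A->B, in ms
    rtt_via_relay -- dict mapping relay name -> (RTT(A,R), RTT(R,B)) in ms
    """
    best = None
    for relay, (a_r, r_b) in rtt_via_relay.items():
        overlay = a_r + r_b
        if overlay < rtt_direct and (best is None or overlay < best[1]):
            best = (relay, overlay, rtt_direct - overlay)
    return best

# Hypothetical measurements: one Colo-hosted relay violates the
# triangle inequality and shortcuts the direct path by 40 ms.
print(best_relay(180.0, {
    "colo-relay-fra": (60.0, 80.0),   # 140 ms overlay -> TIV, 40 ms saved
    "ra-relay-nyc":   (95.0, 120.0),  # 215 ms overlay -> no gain
}))
```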
A New Method for Assessing the Resiliency of Large, Complex Networks
Designing resilient and reliable networks is a principal concern of planners and private firms. Traffic congestion, whether recurring or the result of some aperiodic event, is extremely costly. This paper describes an alternative process and a model for analyzing the resiliency of networks that address some of the shortcomings of more traditional approaches (e.g., the four-step modeling process used in transportation planning). It should be noted that the authors do not view this as a replacement for current approaches but rather as a complementary tool designed to augment analysis capabilities. The process described in this paper for analyzing the resiliency of a network involves at least three steps: (1) assessment or identification of important nodes and links according to different criteria; (2) verification of critical nodes and links based on failure simulations; and (3) consequence assessment. Raster analysis, graph-theory principles, and GIS are used to develop a model for carrying out each of these steps. The methods are demonstrated using two large, interdependent networks for a metropolitan area in the United States.
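As a rough illustration of the first two steps, here is a minimal Python sketch, under stated assumptions and not the authors' GIS/raster model: it ranks nodes of a toy grid network by betweenness centrality (step 1) and verifies each candidate by simulating its failure and measuring the drop in global efficiency (step 2). The networkx library and the toy network are illustrative choices, not from the paper.

```python
# Sketch of criticality assessment + failure simulation on a toy network.
# Not the paper's model; the grid graph stands in for a real road network.
import networkx as nx

G = nx.grid_2d_graph(5, 5)  # hypothetical road-like network

# Step 1: identify candidate critical nodes by betweenness centrality.
ranked = sorted(nx.betweenness_centrality(G).items(),
                key=lambda kv: kv[1], reverse=True)

# Step 2: verify via failure simulation -- measure the drop in global
# efficiency (average inverse shortest-path length) when a node is removed.
base = nx.global_efficiency(G)
for node, score in ranked[:3]:
    H = G.copy()
    H.remove_node(node)
    print(node, round(score, 3), "efficiency drop:",
          round(base - nx.global_efficiency(H), 3))
```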
Shaping the Internet: 10 Years of IXP Growth
Over the past decade, IXPs have been playing a key role in enabling interdomain connectivity. Their traffic volumes have grown dramatically and their physical presence has spread throughout the world. While the relevance of IXPs is undeniable, their long-term contribution to the shaping of the current Internet is not yet fully understood. In this paper, we look into the impact of the intense IXP growth over the last decade on Internet routes. We observe that while in general IXPs have only a small effect on path shortening, very large networks do enjoy a clear IXP-enabled path reduction. We also observe a diversion of routes, supported by IXPs, away from the central Tier-1 ASes. Interestingly, we also find that while IXP membership has grown, large and central ASes have steadily moved away from public IXP peerings, whereas smaller ones have embraced them. Despite all these changes, we find that a clear hierarchy remains, with a small group of highly central networks.
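The kind of longitudinal route analysis this abstract describes can be sketched in a few lines. The following Python snippet is a hypothetical illustration, not the paper's methodology: given AS-level paths from two snapshots, it compares average path length and how often Tier-1 ASes appear on-path. The Tier-1 set and the sample paths are made up for illustration.

```python
# Illustrative comparison of AS-path snapshots: average path length and
# fraction of routes transiting a Tier-1 AS. Data below is hypothetical.
from statistics import mean

TIER1 = {174, 3356, 3257}  # illustrative set of Tier-1 ASNs

def summarize(as_paths):
    lengths = [len(p) for p in as_paths]
    via_t1 = sum(1 for p in as_paths if TIER1.intersection(p))
    return mean(lengths), via_t1 / len(as_paths)

# Toy snapshots a decade apart: shorter paths, fewer Tier-1 transits.
old = [[64500, 174, 3356, 64510], [64501, 3257, 64511]]
new = [[64500, 64510], [64501, 64499, 64511]]
for label, snap in (("2008", old), ("2018", new)):
    avg_len, t1_frac = summarize(snap)
    print(label, "avg AS-path length:", avg_len, "Tier-1 on-path:", t1_frac)
```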
On the dynamics of interdomain routing in the Internet
The routes used in the Internet's interdomain routing system are a rich
information source that could be exploited to answer a wide range of
questions. However, analyzing routes is difficult, because the fundamental
object of study is a set of paths. In this dissertation, we present new
analysis tools -- metrics and methods -- for analyzing paths, and apply them
to study interdomain routing in the Internet over long periods of time.
Our contributions are threefold. First, we build on an existing metric (Routing
State Distance) to define a new metric that allows us to measure the similarity
between two prefixes with respect to the state of the global routing system.
Applying this metric over time yields a measure of how the set of paths to each
prefix varies at a given timescale. Second, we present PathMiner, a system to
extract large-scale routing events from background noise and identify the AS
(Autonomous System) or AS-link most likely responsible for the event. PathMiner
is distinguished from previous work in its ability to identify and analyze
large-scale events that may re-occur many times over long timescales. We show
that it is scalable, being able to extract significant events from multiple
years of routing data at a daily granularity. Finally, we equip Routing State
Distance with a new set of tools for identifying and characterizing
unusually routed ASes. At the micro level, we use our tools to identify
clusters of ASes that have the most unusual routing at each time. We also show
that analysis of individual ASes can expose business and engineering strategies
of the organizations owning the ASes. These strategies are often related to
content delivery or service replication. At the macro level, we show that the
set of ASes with the most unusual routing defines discernible and interpretable
phases of the Internet's evolution. Furthermore, we show that our tools can be
used to provide a quantitative measure of the "flattening" of the Internet.
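As an illustration of the core idea, here is a minimal Python sketch of a Routing-State-Distance-style comparison, under the assumption, which may differ from the dissertation's exact definition, that the distance between two prefixes counts the vantage points whose routes toward them differ. The routing state and all names are hypothetical.

```python
# Sketch of a Routing-State-Distance-style metric: the distance between
# two prefixes is the number of vantage points that route to them
# differently. Toy data; may differ from the dissertation's definition.

def rsd(routes, p1, p2):
    """routes: dict mapping vantage_point -> {prefix: AS-path tuple}."""
    return sum(1 for v in routes
               if routes[v].get(p1) != routes[v].get(p2))

# Toy routing state seen from three vantage points.
routes = {
    "vp1": {"p1": (1, 2, 3), "p2": (1, 2, 3)},  # same route to both
    "vp2": {"p1": (1, 4, 3), "p2": (1, 2, 3)},  # differs
    "vp3": {"p1": (5, 3),    "p2": (5, 6, 3)},  # differs
}
print(rsd(routes, "p1", "p2"))  # -> 2
```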