Improving end-to-end availability using overlay networks
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2005. Includes bibliographical references (p. 139-150).

The end-to-end availability of Internet services is between two and three orders of magnitude worse than that of other important engineered systems, including the US airline system, the 911 emergency response system, and the US public telephone system. This dissertation explores three systems designed to mask Internet failures and, through a study of three years of data collected on a 31-site testbed, examines why these failures happen and how effectively they can be masked. A core aspect of many of the failures that interrupt end-to-end communication is that they fall outside the expected domain of well-behaved network failures. Many traditional techniques cope with link and router failures; as a result, the remaining failures are those caused by software and hardware bugs, misconfiguration, malice, or the inability of current routing systems to cope with persistent congestion. The effects of these failures are exacerbated because Internet services depend upon the proper functioning of many components (wide-area routing, access links, the domain name system, and the servers themselves), and a failure in any of them can prove disastrous to the proper functioning of the service.

This dissertation describes three complementary systems to increase Internet availability in the face of such failures. Each system builds upon the idea of an overlay network, a network created dynamically between a group of cooperating Internet hosts. The first two systems, Resilient Overlay Networks (RON) and Multi-homed Overlay Networks (MONET), determine whether the Internet path between two hosts is working on an end-to-end basis. Both systems exploit the considerable redundancy available in the underlying Internet to find failure-disjoint paths between nodes and forward traffic along a working path. RON is able to avoid 50% of the Internet outages that interrupt communication between a small group of communicating nodes. MONET is more aggressive, combining an overlay network of Web proxies with explicitly engineered redundant links to the Internet to also mask client access link failures. Eighteen months of measurements from a six-site deployment of MONET show that it increases a client's ability to access working Web sites by nearly an order of magnitude. Where RON and MONET combat accidental failures, the Mayday system guards against denial-of-service attacks by surrounding a vulnerable Internet server with a ring of filtering routers. Mayday then uses a set of overlay nodes to act as mediators between the service and its clients, permitting only properly authenticated traffic to reach the server.

by David Godbe Andersen. Ph.D.
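As a concrete illustration of the overlay idea behind RON and MONET, the short Python sketch below probes a direct path and candidate one-hop relay paths and forwards along the best working one. The RTT table, node names, and selection rule are invented assumptions for illustration, not code or data from the dissertation.

import math

# Hypothetical probe results in milliseconds; math.inf marks a failed path.
RTT = {
    ("A", "B"): math.inf,   # direct path currently down
    ("A", "R"): 20.0,       # first overlay hop to relay R
    ("R", "B"): 35.0,       # second overlay hop from R to the destination
}

def rtt(src, dst):
    return RTT.get((src, dst), math.inf)

def pick_path(src, dst, relays):
    # Prefer the direct path; otherwise route via the best working one-hop relay.
    best, best_cost = ("direct",), rtt(src, dst)
    for r in relays:
        cost = rtt(src, r) + rtt(r, dst)   # a relay path works only if both hops work
        if cost < best_cost:
            best, best_cost = ("via", r), cost
    return best if math.isfinite(best_cost) else None

print(pick_path("A", "B", ["R"]))   # -> ('via', 'R'), masking the direct-path outage

In a real deployment the probe table would be maintained by continuous active measurements, which is where the redundancy of the underlying Internet is exploited.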
CASPR: Judiciously Using the Cloud for Wide-Area Packet Recovery
We revisit a classic networking problem -- how to recover from lost packets
in the best-effort Internet. We propose CASPR, a system that judiciously
leverages the cloud to recover from lost or delayed packets. CASPR supplements
and protects best-effort connections by sending a small number of coded packets
along the highly reliable but expensive cloud paths. When receivers detect
packet loss, they recover packets with the help of the nearby data center, not
the sender, thus providing quick and reliable packet recovery for
latency-sensitive applications. Using a prototype implementation and its
deployment on the public cloud and the PlanetLab testbed, we quantify the
benefits of CASPR in providing fast, cost-effective packet recovery. Using
controlled experiments, we also explore how these benefits translate into
improvements up and down the network stack.
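A minimal way to see how coded packets enable receiver-side recovery is a single XOR parity over a block of equal-sized packets: if one best-effort packet is lost, XOR-ing the parity (delivered over the reliable path) with the packets that did arrive rebuilds it. This is only a sketch of the general technique; CASPR's actual coding scheme and cloud interaction are not shown here.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(block):
    # Parity packet carried over the reliable (cloud) path.
    parity = bytes(len(block[0]))
    for pkt in block:
        parity = xor_bytes(parity, pkt)
    return parity

def recover(received, lost_index, parity):
    # Rebuild the single lost packet from the parity and the packets that arrived.
    missing = parity
    for i, pkt in enumerate(received):
        if i != lost_index and pkt is not None:
            missing = xor_bytes(missing, pkt)
    return missing

block = [b"pkt0....", b"pkt1....", b"pkt2...."]   # equal-sized packets in one coding block
parity = make_parity(block)
arrived = [block[0], None, block[2]]              # packet 1 lost on the best-effort path
assert recover(arrived, 1, parity) == block[1]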
Using Dedicated and Opportunistic Networks in Synergy for a Cost-effective Distributed Stream Processing Platform
This paper presents a case for exploiting the synergy of dedicated and
opportunistic network resources in a distributed hosting platform for data
stream processing applications. Our previous studies have demonstrated the
benefits of combining dedicated reliable resources with opportunistic resources
in the case of high-throughput computing applications, where timely allocation of the processing units is the primary concern. Since distributed stream processing applications demand large volumes of data transmission between processing sites at a consistent rate, adequate control over the network resources is important to ensure a steady flow of processing. In this paper, we propose a system model for a hybrid hosting platform in which stream processing servers installed at distributed sites are interconnected by a combination of dedicated links and the public Internet. We develop decentralized algorithms that allocate the two classes of network resources among competing tasks, aiming for higher task throughput and better utilization of the expensive dedicated resources. Results from an extensive simulation study show that, with proper management, systems exploiting the synergy of dedicated and opportunistic resources yield considerably higher task throughput, and thus a higher return on investment, than systems using only expensive dedicated resources.

Comment: 9 pages
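As a rough illustration of the allocation problem (not the paper's algorithm), the sketch below admits tasks onto the cheap opportunistic path when its estimated rate suffices and spends scarce dedicated capacity only on the remaining tasks, smallest demand first; all rates and names are invented for the example.

def allocate(tasks, public_rate, dedicated_capacity):
    # tasks: list of (name, required_rate); returns name -> 'public' | 'dedicated' | 'rejected'.
    placement = {}
    for name, rate in tasks:                                # cheapest option first: the public Internet
        placement[name] = "public" if rate <= public_rate else None
    for name, rate in sorted(tasks, key=lambda t: t[1]):    # then spend dedicated capacity
        if placement[name] is None and rate <= dedicated_capacity:
            placement[name] = "dedicated"
            dedicated_capacity -= rate
        elif placement[name] is None:
            placement[name] = "rejected"
    return placement

print(allocate([("t1", 5), ("t2", 50), ("t3", 30)], public_rate=10, dedicated_capacity=60))
# -> {'t1': 'public', 't2': 'rejected', 't3': 'dedicated'}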
Shortcuts through Colocation Facilities
Network overlays, running on top of the existing Internet substrate, are of
perennial value to Internet end-users in the context of, e.g., real-time
applications. Such overlays can employ traffic relays to yield path latencies
lower than the direct paths, a phenomenon known as Triangle Inequality
Violation (TIV). Past studies identify opportunities for reducing latency using TIVs; however, they do not investigate the gains of strategically selecting relays in Colocation Facilities (Colos). In this work, we answer the following questions: (i) how do Colo-hosted relays compare with other relays, as well as with direct Internet paths, in terms of latency (RTT) reductions; and (ii) what are the best locations for placing relays to yield these reductions?
To this end, we conduct a large-scale one-month measurement of inter-domain
paths between RIPE Atlas (RA) nodes as endpoints, located at eyeball networks.
We employ as relays PlanetLab nodes, other RA nodes, and machines in Colos. We
examine the RTTs of the overlay paths obtained via the selected relays, as well
as the direct paths. We find that Colo-based relays perform best and can achieve latency reductions over direct paths, ranging from a few to hundreds of milliseconds, in 76% of the total cases; 75% of these reductions (58% of total cases) require only 10 relays in 6 large Colos.

Comment: In Proceedings of the ACM Internet Measurement Conference (IMC '17), London, GB, 2017.
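The measurement logic behind these numbers can be summarized in a few lines: a relay exposes a Triangle Inequality Violation when the detour RTT is lower than the direct RTT, and a small relay set can be chosen greedily by how many endpoint pairs it improves. The RTT values and node names below are invented, and the greedy heuristic is an assumption rather than the paper's exact method.

rtt = {
    ("a", "b"): 120.0, ("a", "colo1"): 20.0, ("colo1", "b"): 40.0,
    ("c", "d"): 90.0,  ("c", "colo1"): 50.0, ("colo1", "d"): 30.0,
}

def relay_gain(src, dst, relay):
    # Latency saved by detouring via `relay`; a positive value indicates a TIV.
    return rtt[(src, dst)] - (rtt[(src, relay)] + rtt[(relay, dst)])

def greedy_relays(pairs, relays, k):
    # Pick up to k relays that together improve the most endpoint pairs.
    chosen, covered = [], set()
    for _ in range(k):
        remaining = [r for r in relays if r not in chosen]
        if not remaining:
            break
        best = max(remaining, key=lambda r: sum(
            1 for p in pairs if p not in covered and relay_gain(*p, r) > 0))
        chosen.append(best)
        covered |= {p for p in pairs if relay_gain(*p, best) > 0}
    return chosen

print(relay_gain("a", "b", "colo1"))                          # 60.0 ms saved -> a TIV
print(greedy_relays([("a", "b"), ("c", "d")], ["colo1"], 1))  # -> ['colo1']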
Multimedia delivery in the future internet
The term “Networked Media” implies that all kinds of media including text, image, 3D graphics, audio
and video are produced, distributed, shared, managed and consumed on-line through various networks,
like the Internet, Fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted with a bewildering range of media, services and applications, and with technological innovations concerning media formats, wireless networks, terminal types and capabilities. There is little evidence that the pace of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more than 100 million users have downloaded at least one (multi)media file, and over 47 million of them do so regularly, searching in more than 160 exabytes of content. In the near future these numbers are expected to rise exponentially. Internet content is expected to grow by at least a factor of six, rising to more than 990 exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in a near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens’ quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content, as well as
community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, and be able to support enhanced perceived quality-of-experience (PQoE) and
innovative applications “on the move”, like virtual collaboration environments, personalised services/
media, virtual sport groups, on-line gaming, edutainment. In this context, the interaction with content
combined with interactive/multimedia search capabilities across distributed repositories, opportunistic P2P
networks and the dynamic adaptation to the characteristics of diverse mobile terminals are expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects, in Framework Program 6 (FP6)
and Framework Program 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed in this white paper aiming to describe the status, the state-of-the art, the challenges and the way
ahead in the area of Content-Aware media delivery platforms.
Implications of Selfish Neighbor Selection in Overlay Networks
In a typical overlay network for routing or content sharing, each node must select a fixed number of immediate overlay neighbors for routing traffic or content queries. A selfish node entering such a network would select neighbors so as to minimize the weighted sum of expected access costs to all its destinations. Previous work on selfish neighbor selection has built intuition with simple models where edges are undirected, access costs are modeled by hop-counts, and nodes have potentially unbounded degrees. However, in practice, important constraints not captured by these models lead to richer games with substantively and fundamentally different outcomes. Our work models neighbor selection as a game involving directed links, constraints on the number of allowed neighbors, and costs reflecting both network latency and node preference. We express a node's "best response" wiring strategy as a k-median problem on asymmetric distance, and use this formulation to obtain pure Nash equilibria. We experimentally examine the properties of such stable wirings on synthetic topologies, as well as on real topologies and maps constructed from PlanetLab and AS-level Internet measurements. Our results indicate that selfish nodes can reap substantial performance benefits when connecting to overlay networks composed of non-selfish nodes. On the other hand, in overlays dominated by selfish nodes, the resulting stable wirings are optimized to such a great extent that even non-selfish newcomers can extract near-optimal performance through naive wiring strategies.

Marie Curie Outgoing International Fellowship of the EU (MOIF-CT-2005-007230); National Science Foundation (CNS Cybertrust 0524477, CNS NeTS 0520166, CNS ITR 0205294, EIA RI 020206
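The "best response as k-median" formulation can be illustrated with a brute-force sketch: given the (possibly asymmetric) cost of reaching each destination through each candidate neighbor, pick the k neighbors that minimize the weighted sum of access costs. The instance below is a toy example with invented costs and weights; the paper's actual machinery for computing equilibria is not reproduced here.

from itertools import combinations

def best_response(cost, weights, k):
    # cost[v][j]: cost of reaching destination j via candidate neighbor v (asymmetry is fine).
    best_set, best_val = None, float("inf")
    for subset in combinations(cost, k):
        val = sum(w * min(cost[v][j] for v in subset) for j, w in weights.items())
        if val < best_val:
            best_set, best_val = subset, val
    return best_set, best_val

cost = {
    "v1": {"d1": 1, "d2": 9, "d3": 9},
    "v2": {"d1": 9, "d2": 1, "d3": 9},
    "v3": {"d1": 9, "d2": 9, "d3": 1},
}
weights = {"d1": 1.0, "d2": 1.0, "d3": 5.0}
print(best_response(cost, weights, 2))   # the heavily weighted d3 pulls v3 into the wiring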