
    Shortcuts through Colocation Facilities

    Network overlays, running on top of the existing Internet substrate, are of perennial value to Internet end-users, for example in real-time applications. Such overlays can employ traffic relays to yield path latencies lower than those of the direct paths, a phenomenon known as Triangle Inequality Violation (TIV). Past studies identify opportunities for reducing latency using TIVs, but they do not investigate the gains of strategically selecting relays in Colocation Facilities (Colos). In this work, we answer the following questions: (i) how do Colo-hosted relays compare with other relays, as well as with the direct Internet paths, in terms of latency (RTT) reductions; and (ii) what are the best locations for placing relays to yield these reductions. To this end, we conduct a large-scale, one-month measurement of inter-domain paths between RIPE Atlas (RA) nodes, located in eyeball networks, as endpoints. We employ PlanetLab nodes, other RA nodes, and machines in Colos as relays. We examine the RTTs of the overlay paths obtained via the selected relays, as well as of the direct paths. We find that Colo-based relays perform best and achieve latency reductions over direct paths, ranging from a few to hundreds of milliseconds, in 76% of the total cases; 75% of these reductions (58% of total cases) require only 10 relays in 6 large Colos.
    Comment: In Proceedings of the ACM Internet Measurement Conference (IMC '17), London, GB, 2017
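
    A minimal sketch of the detour idea discussed above, not the paper's measurement methodology: given a hypothetical matrix of pairwise RTTs, find, for each source-destination pair, the one-hop relay whose overlay path beats the direct path, i.e., a Triangle Inequality Violation. All names and the sample matrix are illustrative.

        # Sketch: detect Triangle Inequality Violations (TIVs) in an RTT matrix.
        # rtt[i][j] is the measured round-trip time (ms) between nodes i and j
        # (illustrative values; the paper uses RIPE Atlas, PlanetLab and Colo vantage points).

        def best_detour(rtt, src, dst):
            """Return (relay, overlay_rtt) minimizing rtt[src][r] + rtt[r][dst]."""
            candidates = [r for r in range(len(rtt)) if r not in (src, dst)]
            relay = min(candidates, key=lambda r: rtt[src][r] + rtt[r][dst])
            return relay, rtt[src][relay] + rtt[relay][dst]

        def tiv_report(rtt):
            """Yield (src, dst, relay, saving_ms) for every pair with a faster detour."""
            n = len(rtt)
            for src in range(n):
                for dst in range(src + 1, n):
                    relay, overlay = best_detour(rtt, src, dst)
                    if overlay < rtt[src][dst]:      # TIV: the detour beats the direct path
                        yield src, dst, relay, rtt[src][dst] - overlay

        if __name__ == "__main__":
            rtt = [[0, 80, 30, 120],
                   [80, 0, 40, 90],
                   [30, 40, 0, 100],
                   [120, 90, 100, 0]]
            for src, dst, relay, saving in tiv_report(rtt):
                print(f"{src}->{dst}: relay {relay} saves {saving} ms")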

    Using Internet Geometry to Improve End-to-End Communication Performance

    The Internet has been designed as a best-effort communication medium between its users, providing connectivity but optimizing little else. It does not guarantee good paths between two users: packets may take longer or more congested routes than necessary, they may be delayed by slow reaction to failures, and there may even be no path between users at all. To obtain better paths, users can form routing overlay networks, which improve the performance of packet delivery by forwarding packets along links in self-constructed graphs. Routing overlays delegate the task of selecting paths to users, who can choose among a diversity of routes that are more reliable, less loaded, shorter, or of higher bandwidth than those chosen by the underlying infrastructure. Although they offer improved communication performance, existing routing overlay networks are neither scalable nor fair: the cost of measuring and computing path performance metrics between participants is high (which limits the number of participants), and they lack robustness to misbehavior and selfishness (which could discourage the participation of nodes that are more likely to offer than to receive service). In this dissertation, I focus on finding low-latency paths using routing overlay networks. I support the following thesis: it is possible to make end-to-end communication between Internet users simultaneously faster, scalable, and fair, by relying solely on inherent properties of the Internet latency space. To prove this thesis, I take two complementary approaches. First, I perform an extensive measurement study in which I analyze, using real latency data sets, properties of the Internet latency space: the existence of triangle inequality violations (TIVs) (which expose detour paths: "indirect" one-hop paths that have lower round-trip latency than the "direct" default paths), the interaction between TIVs and network coordinate systems (which leads to scalable detour discovery), and the presence of mutual advantage (which makes fairness possible). Then, using the results of the measurement study, I design and build PeerWise, the first routing overlay network that reduces end-to-end latency between its participants and is both scalable and fair. I evaluate PeerWise using simulation and through a wide-area deployment on the PlanetLab testbed.
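
    As a toy illustration of the "mutual advantage" notion mentioned in the abstract (a sketch over an assumed RTT matrix, not PeerWise itself): two nodes can peer fairly when each one can act as a latency-reducing detour relay for the other.

        # Sketch: find mutually advantageous peerings in a pairwise RTT matrix.
        # Nodes a and b are "mutually advantageous" here if a shortens some path for b
        # AND b shortens some path for a (toy definition for illustration).

        def saving_via(rtt, src, dst, relay):
            """Latency saved (ms) when src reaches dst through relay; <= 0 if no gain."""
            return rtt[src][dst] - (rtt[src][relay] + rtt[relay][dst])

        def mutual_advantage_pairs(rtt):
            n = len(rtt)
            pairs = []
            for a in range(n):
                for b in range(a + 1, n):
                    a_gains = any(saving_via(rtt, a, d, b) > 0
                                  for d in range(n) if d not in (a, b))
                    b_gains = any(saving_via(rtt, b, d, a) > 0
                                  for d in range(n) if d not in (a, b))
                    if a_gains and b_gains:      # each node both offers and receives service
                        pairs.append((a, b))
            return pairs

    Each returned pair could form a fair peering, since neither node only gives or only receives relay service.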

    DMFSGD: A Decentralized Matrix Factorization Algorithm for Network Distance Prediction

    The knowledge of end-to-end network distances is essential to many Internet applications. As active probing of all pairwise distances is infeasible in large-scale networks, a natural idea is to measure a few pairs and to predict the others without actually measuring them. This paper formulates the distance prediction problem as a matrix completion problem, where unknown entries of an incomplete matrix of pairwise distances are to be predicted. The problem is solvable because strong correlations among network distances exist and cause the constructed distance matrix to be low-rank. The new formulation circumvents the well-known drawbacks of existing approaches based on Euclidean embedding. A new algorithm, called Decentralized Matrix Factorization by Stochastic Gradient Descent (DMFSGD), is proposed to solve the network distance prediction problem. By letting network nodes exchange messages with each other, the algorithm is fully decentralized and only requires each node to collect and process local measurements, with neither explicit matrix construction nor special nodes such as landmarks or central servers. In addition, we comprehensively compare matrix factorization and Euclidean embedding to demonstrate the suitability of the former for network distance prediction. We further study the incorporation of a robust loss function and of non-negativity constraints. Extensive experiments on various publicly available datasets of network delays show not only the scalability and accuracy of our approach but also its usability in real Internet applications.
    Comment: submitted to IEEE/ACM Transactions on Networking on Nov. 201
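
    The core update is easy to sketch. The following simplified rendering (plain squared loss with L2 regularization, run as a centralized loop; learning-rate and rank values are illustrative, and the paper's decentralized message exchange, robust loss, and non-negativity constraints are omitted) shows why only the two endpoints of a measurement need to update their factors.

        import random

        # Each node i keeps an "out" vector u[i] and an "in" vector v[i]; the predicted
        # distance is the dot product u[i] . v[j].  When d(i, j) is measured, only u[i]
        # and v[j] are adjusted, which is what makes a decentralized realization possible.

        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))

        def sgd_update(u_i, v_j, d_ij, lr=0.01, reg=0.1):
            err = d_ij - dot(u_i, v_j)                     # prediction error
            for k in range(len(u_i)):
                u_ik, v_jk = u_i[k], v_j[k]
                u_i[k] += lr * (err * v_jk - reg * u_ik)   # gradient step on squared loss + L2
                v_j[k] += lr * (err * u_ik - reg * v_jk)

        def train(measurements, n_nodes, rank=10, epochs=50):
            """measurements: list of (i, j, d_ij) tuples for the sparse set of probed pairs."""
            u = [[random.uniform(0, 1) for _ in range(rank)] for _ in range(n_nodes)]
            v = [[random.uniform(0, 1) for _ in range(rank)] for _ in range(n_nodes)]
            for _ in range(epochs):
                random.shuffle(measurements)
                for i, j, d_ij in measurements:
                    sgd_update(u[i], v[j], d_ij)
            return u, v        # afterwards, predict d(i, j) as dot(u[i], v[j])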

    Design and Evaluation of Distributed Algorithms for Placement of Network Services

    Network services play an important role in the Internet today. They serve as data caches for websites, servers for multiplayer games, and relay nodes for Voice over IP (VoIP) conversations. While much research has focused on the design of such services, little attention has been paid to their actual placement. This placement can impact the quality of the service, especially if low latency is a requirement. These services can be located on nodes in the network itself, making these nodes supernodes. Typically, supernodes are selected in either a proprietary or ad hoc fashion, where a study of this placement is either unavailable or unnecessary. Previous research dealt with only pieces of the problem, such as finding the location of caches for a static topology, or selecting better routes for relays in VoIP. However, a comprehensive solution is needed for dynamic applications such as multiplayer games or P2P VoIP services. These applications adapt quickly and need solutions based on the immediate demands of the network. In this thesis we develop distributed algorithms to assign nodes the role of a supernode. This research first builds on prior work by modifying an existing assignment algorithm and implementing it in a distributed system called Supernode Placement in Overlay Topologies (SPOT). New algorithms are then developed to assign nodes the supernode role. These algorithms are evaluated in SPOT to demonstrate improved supernode assignment and scalability. Through a series of simulations, emulations, and experiments, insight is gained into the critical issues associated with allocating resources to perform the role of supernodes. Our contributions include distributed algorithms to assign nodes as supernodes, an open-source, fully functional distributed supernode allocation system, an evaluation of the system in diverse networking environments, and a simulator called SPOTsim which demonstrates the scalability of the system to thousands of nodes. An example of an application deploying such a system is also presented along with the empirical results.
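
    As a hedged illustration of the underlying placement problem (a centralized greedy baseline over a latency matrix, not the distributed SPOT algorithms described in the thesis): pick k supernodes so that the total latency from every node to its nearest supernode is small.

        # Sketch: greedy k-supernode placement (toy baseline, assumes k <= number of nodes).
        # lat[c][s] is the latency (ms) between node c and candidate supernode s.

        def greedy_supernodes(lat, k):
            n = len(lat)
            chosen = []
            for _ in range(k):
                best, best_cost = None, float("inf")
                for cand in range(n):
                    if cand in chosen:
                        continue
                    trial = chosen + [cand]
                    # total latency if every node attaches to its nearest chosen supernode
                    cost = sum(min(lat[c][s] for s in trial) for c in range(n))
                    if cost < best_cost:
                        best, best_cost = cand, cost
                chosen.append(best)
            return chosen

    A distributed system such as SPOT has to approximate this kind of objective with only local information and under churn, which is what the thesis's algorithms address.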

    Dynamic server discovery based on location information using a load-balanced distributed hash table

    The current Internet includes a large number of distributed services. In order to guarantee the QoS of the communications in these services, a client has to select a close-by server with enough available resources. To achieve this objective, in this Thesis we propose a simple and practical solution for Dynamic and Location-Aware Server Discovery based on a Distributed Hash Table (DHT). Specifically, we decide to use a Chord DHT (although any other DHT scheme could be used). In more detail, the solution works as follows. The servers offering a given service S form a Chord-like DHT and register their location (topological and/or geographical) information in the DHT. Each client using the service S is connected to at least one server from the DHT. When a given client C realizes that it is connected to a server providing poor QoS, it queries the DHT in order to find a more appropriate server (i.e., a close-by server with enough available resources). We define 11 design criteria and compare our solution to the related work based on them. We show that our solution is the most complete one. Furthermore, we validate the performance of our solution in two different scenarios: (i) NAT traversal server discovery and (ii) Home Agent discovery in Mobile IP scenarios. The former serves to validate our solution in a highly dynamic environment, whereas the latter demonstrates the appropriateness of our solution in more classical environments where the servers are typically always-on hosts. The extra overhead suffered by the servers involved in our system comes from their participation in the Chord DHT. Therefore, it is critical to fairly balance the load among all the servers. In our system, as well as in other P2P systems (e.g., P2PSIP), the stored objects are small, so routing dominates the cost of publishing and retrieving objects. Therefore, in the second part of this Thesis, we address the issue of fairly balancing the routing load in Chord DHTs. We present an analytical model to evaluate the routing fairness of Chord based on the well-accepted Jain's Fairness Index (FI). Our model shows that Chord performs poorly in this respect. Following this observation, we propose a simple enhancement to the Chord finger selection algorithm with the goal of mitigating this effect. The key advantage of our proposal as compared to previous approaches is that it adds a negligible overhead to the basic Chord algorithm. We validate the goodness of the proposed solution analytically and by large-scale simulations.
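
    To make the fairness metric concrete, here is a small self-contained simulation sketch (a toy Chord ring with standard power-of-two fingers and random node identifiers; it is not the thesis's analytical model and does not include the proposed finger-selection enhancement): it routes random lookups, counts how often each node forwards, and reports Jain's Fairness Index, FI = (sum x)^2 / (n * sum x^2), over the per-node routing load.

        import bisect
        import random

        M = 16                    # identifier space of 2**M ids (illustrative size)
        SPACE = 2 ** M
        N_NODES = 200             # nodes placed at random ids on the ring

        def successor(ids, x):
            """First node id clockwise from x (ids must be sorted)."""
            i = bisect.bisect_left(ids, x % SPACE)
            return ids[i % len(ids)]

        def build_fingers(ids):
            """Standard Chord fingers: finger k of node n points to successor(n + 2**k)."""
            return {n: [successor(ids, n + 2 ** k) for k in range(M)] for n in ids}

        def route(fingers, src, owner):
            """Forward via the finger closest to (but not past) the key's owner;
            return the list of intermediate forwarders."""
            hops, node = [], src
            while node != owner:
                dist, nxt = (owner - node) % SPACE, node
                for f in fingers[node]:
                    d = (owner - f) % SPACE
                    if d < dist:
                        dist, nxt = d, f
                node = nxt
                if node != owner:
                    hops.append(node)
            return hops

        def jain_fairness(loads):
            total, sq = sum(loads), sum(x * x for x in loads)
            return (total * total) / (len(loads) * sq) if sq else 1.0

        if __name__ == "__main__":
            random.seed(1)
            ids = sorted(random.sample(range(SPACE), N_NODES))
            fingers = build_fingers(ids)
            load = {n: 0 for n in ids}
            for _ in range(50_000):
                src = random.choice(ids)
                owner = successor(ids, random.randrange(SPACE))   # node responsible for a random key
                for forwarder in route(fingers, src, owner):
                    load[forwarder] += 1
            print("Jain's FI of routing load:", round(jain_fairness(list(load.values())), 3))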

    Decentralized network coordinate system for Internet distance prediction

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2005. Includes bibliographical references (p. 163-168).
    Several recently emerged Internet services make use of application-level or overlay networks. Examples of such services include overlay multicast, structured peer-to-peer lookup services, and peer-to-peer file sharing. Many of these services could benefit from enabling participating end hosts to estimate their relative network locations within the overlay. In this thesis, we present PCoord, a peer-to-peer network coordinate system for overlay topology discovery and distance prediction. The goal of PCoord is to allow participating peer nodes in an overlay network to collaboratively construct an accurate geometric model of the overlay network topology in a completely decentralized, peer-to-peer fashion. We evaluate the PCoord approach through extensive simulations using both real network measurements and simulated topologies. Our simulation results indicate that PCoord can embed hosts in a low-dimensional Euclidean model with a small median prediction error.
    by Li-wei H. Lehman. Ph.D.
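
    For context on what a decentralized coordinate update can look like, here is a Vivaldi-style spring-relaxation sketch in Euclidean space (illustrative only; PCoord's actual coordinate construction differs): after every RTT measurement, a node nudges its own coordinate so that coordinate distance tracks measured latency.

        import math
        import random

        DIM = 3        # dimensionality of the Euclidean model (illustrative)

        def distance(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

        def update_coordinate(my_coord, peer_coord, measured_rtt, step=0.05):
            """Move my_coord along the line through peer_coord so that the coordinate
            distance moves toward the measured RTT (simple spring relaxation)."""
            cur = distance(my_coord, peer_coord)
            if cur == 0:                                   # coincident coordinates: jitter apart
                my_coord[:] = [c + random.uniform(-1e-3, 1e-3) for c in my_coord]
                return
            error = measured_rtt - cur                     # > 0: too close together, push apart
            for k in range(len(my_coord)):
                direction = (my_coord[k] - peer_coord[k]) / cur   # unit vector away from peer
                my_coord[k] += step * error * direction

        # Usage: start each node at [random.uniform(0, 1) for _ in range(DIM)], call
        # update_coordinate(own, peer, rtt_ms) after every probe, and predict the latency
        # between any two nodes as distance(coord_a, coord_b).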