
    CliqueStream: an efficient and fault-resilient live streaming network on a clustered peer-to-peer overlay

    Several overlay-based live multimedia streaming platforms have been proposed in the recent peer-to-peer streaming literature. In most cases, the overlay neighbors are chosen randomly to make the overlay robust. However, this causes nodes that are distant in the underlying physical network to become neighbors, so data travels unnecessarily long distances before reaching its destination. For efficient bulk data transmission such as multimedia streaming, the overlay neighborhood should resemble proximity in the underlying network. In this paper, we exploit the proximity and redundancy properties of a recently proposed clique-based clustered overlay network, named eQuus, to build overlays for multimedia stream dissemination that are both efficient and robust. To combine the efficiency of content pushing over tree-structured overlays with the robustness of data-driven mesh overlays, higher-capacity stable nodes are organized in a tree structure to carry the long-haul traffic, while less stable nodes with intermittent presence are organized into localized meshes. The overlay construction and fault-recovery procedures are explained in detail. A simulation study demonstrates the good locality properties of the platform, and analysis shows that the outage time and control overhead induced by the failure-recovery mechanism are minimal.
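
    The hybrid organization described above lends itself to a simple placement rule. The sketch below (Python; the thresholds, field names, and helper structures are invented for illustration and are not from the paper) shows one way a joining node could be assigned either to the stable push tree or to its cluster's localized mesh:

        from dataclasses import dataclass, field

        @dataclass
        class Node:
            node_id: str
            cluster_id: int        # proximity cluster (clique) from eQuus
            uptime_hours: float    # observed stability
            upload_mbps: float     # upload capacity

        @dataclass
        class Overlay:
            tree_parents: dict = field(default_factory=dict)  # node -> parent in push tree
            meshes: dict = field(default_factory=dict)        # cluster -> mesh members

        STABLE_UPTIME = 12.0   # assumed stability threshold (hours)
        MIN_CAPACITY = 4.0     # assumed capacity threshold (Mbps)

        def place_node(overlay, node, tree_nodes_by_cluster):
            """Assign a joining node to the push tree or its cluster's mesh."""
            if node.uptime_hours >= STABLE_UPTIME and node.upload_mbps >= MIN_CAPACITY:
                # Stable, high-capacity node: graft it onto the tree,
                # preferring a parent in the same proximity cluster.
                candidates = tree_nodes_by_cluster.get(node.cluster_id, [])
                parent = candidates[0] if candidates else None  # None = tree root
                overlay.tree_parents[node.node_id] = parent
                tree_nodes_by_cluster.setdefault(node.cluster_id, []).append(node.node_id)
            else:
                # Transient node: it exchanges data within its localized mesh.
                overlay.meshes.setdefault(node.cluster_id, set()).add(node.node_id)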

    Greedy routing and virtual coordinates for future networks

    At the core of the Internet, routers are continuously struggling with ever-growing routing and forwarding tables. Although hardware advances do accommodate such growth, we anticipate new requirements, e.g. in data-oriented networking, where each content piece, rather than each host, has to be referenced; current approaches relying on global information will then no longer be viable, no matter the hardware progress. In this thesis, we investigate greedy routing methods that can achieve routing performance similar to today's but use far fewer resources and rely on local information only. To this end, we add specially crafted name spaces to the network in which virtual coordinates represent the addressable entities. Our scheme enables participating routers to make forwarding decisions using only neighbourhood information, as the overarching pseudo-geometric name space structure already organizes and incorporates "vicinity" at a global level. A first challenge in applying greedy routing on virtual coordinates to future networks is that of "routing dead-ends": local minima due to the difficulty of consistent coordinate attribution. In this context, we propose a routing recovery scheme based on a multi-resolution embedding of the network in low-dimensional Euclidean spaces. The recovery is performed by routing greedily on a blurrier view of the network, where the different network detail levels are obtained through the embedding of clustering levels of the graph. When compared with higher-dimensional embeddings of a given network, our method shows a significant reduction in routing failures for similar header and control-state sizes. A second challenge in applying virtual coordinates and greedy routing to future networks is the support of "customer-provider" as well as "peering" relationships between participants, resulting in a differentiated-services environment. Although applying greedy routing in such a setting would combine two very common fields of today's networking literature, this scenario has, surprisingly, not been studied so far. We propose two approaches to address it. In the first, we implement a path-vector protocol similar to BGP on top of a greedy embedding of the network. This allows each node to build a spatial map associated with each of its neighbours indicating the accessible regions. Routing is then performed through a decision-tree classifier taking the destination coordinates as input. When applied to a real-world dataset (the CAIDA 2004 AS graph), we demonstrate a compression ratio of up to 40% for the routing control information at the network's core, as well as a computationally efficient decision process comparable to methods such as binary trees and tries. In the second approach, we take inspiration from consensus finding in the social sciences and transform the three-dimensional distance data structure (where the third dimension encodes the service differentiation) into a two-dimensional matrix on which classical embedding tools can be used. This transformation is achieved by agreeing on a set of constraints on the inter-node distances that guarantees administratively correct greedy routing. The computed distances are also enhanced to encode multipath support. On synthetic datasets, we demonstrate good greedy routing performance as well as above 90% satisfaction of the multipath constraints when relying on the obtained (non-embedded) distances. As the various embeddings of the consensus distances do not fully exploit their multipath potential, the use of compression techniques such as transform coding to approximate the obtained distances allows for better routing performance.
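
    To make the core mechanism concrete, the following sketch (Python; the coordinates, embedding levels, and fallback structure are assumptions, not the thesis code) shows greedy forwarding on virtual coordinates, with the recovery idea approximated: when forwarding hits a local minimum, routing retries on the next, blurrier embedding level.

        import math

        def dist(a, b):
            """Euclidean distance between two coordinate tuples."""
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

        def greedy_next_hop(current, dest_coord, neighbors, coords):
            """Pick a neighbor strictly closer to the destination, if any."""
            best, best_d = None, dist(coords[current], dest_coord)
            for n in neighbors[current]:
                d = dist(coords[n], dest_coord)
                if d < best_d:
                    best, best_d = n, d
            return best  # None signals a routing dead-end (local minimum)

        def route(src, dest, neighbors, coords_by_level, max_hops=64):
            """Greedy routing; on a dead-end, fall back to coarser levels."""
            path, current, level = [src], src, 0  # level 0 = finest embedding
            while current != dest and len(path) < max_hops:
                coords = coords_by_level[level]
                nxt = greedy_next_hop(current, coords[dest], neighbors, coords)
                if nxt is None:
                    level += 1  # blur the view of the network and retry
                    if level == len(coords_by_level):
                        return None  # unrecoverable dead-end
                    continue
                path.append(nxt)
                current, level = nxt, 0  # resume at the finest level
            return path if current == dest else None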

    Using Internet Geometry to Improve End-to-End Communication Performance

    The Internet has been designed as a best-effort communication medium between its users, providing connectivity but optimizing little else. It does not guarantee good paths between two users: packets may take longer or more congested routes than necessary, they may be delayed by slow reaction to failures, and there may even be no path between users. To obtain better paths, users can form routing overlay networks, which improve the performance of packet delivery by forwarding packets along links in self-constructed graphs. Routing overlays delegate the task of selecting paths to users, who can choose among a diversity of routes that are more reliable, less loaded, shorter, or of higher bandwidth than those chosen by the underlying infrastructure. Although they offer improved communication performance, existing routing overlay networks are neither scalable nor fair: the cost of measuring and computing path-performance metrics between participants is high (which limits the number of participants), and they lack robustness to misbehavior and selfishness (which could discourage the participation of nodes that are more likely to offer than to receive service). In this dissertation, I focus on finding low-latency paths using routing overlay networks. I support the following thesis: it is possible to make end-to-end communication between Internet users simultaneously faster, scalable, and fair, by relying solely on inherent properties of the Internet latency space. To prove this thesis, I take two complementary approaches. First, I perform an extensive measurement study in which I analyze, using real latency data sets, properties of the Internet latency space: the existence of triangle inequality violations (TIVs), which expose detour paths (''indirect'' one-hop paths that have lower round-trip latency than the ''direct'' default paths); the interaction between TIVs and network coordinate systems, which leads to scalable detour discovery; and the presence of mutual advantage, which makes fairness possible. Then, using the results of the measurement study, I design and build PeerWise, the first routing overlay network that reduces end-to-end latency between its participants and is both scalable and fair. I evaluate PeerWise using simulation and through a wide-area deployment on the PlanetLab testbed.
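
    A TIV and the detour it exposes are easy to state concretely. The sketch below (Python, with a made-up RTT matrix) finds relays c for which rtt(a,c) + rtt(c,b) < rtt(a,b), i.e. one-hop detours faster than the direct path:

        def find_detours(rtt, a, b):
            """Return relays that beat the direct path from a to b, best first."""
            direct = rtt[a][b]
            detours = []
            for c in rtt:
                if c in (a, b):
                    continue
                via = rtt[a][c] + rtt[c][b]
                if via < direct:  # triangle inequality violation
                    detours.append((via, c))
            return sorted(detours)

        # Example: the direct A-B path takes 100 ms, but relaying through C
        # takes 30 + 40 = 70 ms, a triangle inequality violation.
        rtt = {
            "A": {"A": 0, "B": 100, "C": 30},
            "B": {"A": 100, "B": 0, "C": 40},
            "C": {"A": 30, "B": 40, "C": 0},
        }
        print(find_detours(rtt, "A", "B"))  # [(70, 'C')]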

    Contributions to routing scalability and QoS assurance in cloud data transport networks based on the recursive internetwork architecture

    With an increasing number of devices and heterogeneous distributed applications, it is becoming evident that the service delivered by the current Internet falls short of the actual Quality of Service (QoS) requirements of applications. In addition, the global scope of the IP layer causes large scalability problems in the network. Multiple solutions aim to overcome the limitations of the model (BGP, NAT, etc.), but all end up constrained by the same networking model they try to improve, and so end up simply breaking and patching the TCP/IP stack itself. In contrast, RINA proposes a new clean-slate Internet architecture based on a recursive networking stack focused on inter-process communication, where each layer, or DIF, performs the same set of tasks. DIFs are fully configurable by means of programmable policies and provide complete support for QoS services. RINA provides a standardized way to express the capabilities of each layer, the QoS Cubes; with those, RINA allows applications and upper-layer processes to express their requirements in terms of latency, losses, etc. The contributions in this thesis take advantage of the recursive stack of RINA and the use of policies to propose and analyse solutions, old and new, that would not be compatible with the current TCP/IP Internet. To improve QoS services, this work uses the information on requirements provided by the applications themselves to improve QoS assurance. With the use of △Q-based scheduling policies, improved QoS assurances are provided, aiming to deliver "good enough" service to all flows in the network, resulting in a more appropriate sharing of resources. These policies have been tested in backbone-like networks, showing interesting improvements with respect to commonly used solutions such as MPLS-based VPNs. The provisioning of QoS services to end-users is also considered. To allow that, some limits must be imposed on what end-users can send to the network, bounding the amount of priority traffic that potentially greedy users can inject. In that regard, while enforcing strict per-QoS rate limits would be trivial in RINA, a new △Q-based rate-limiting policy that aims to limit the amount of priority traffic in a more user-friendly way is also explored. In terms of scalability, this work also considers different measures to improve forwarding and routing within large-scale networks. Regarding the use of policies that can profit from specific network topologies, a new forwarding policy that mixes topological rules and exceptions is proposed. With this policy, lookups in large forwarding tables are replaced with fast and simple forwarding rules based on the location of nodes and their neighbourhood. Given the topologies commonly used in large data centres, the proposed policy is a perfect match for those scenarios. Tests on different data-centre topologies showed clear improvements, requiring only a small fraction of all forwarding information despite the large size of such networks, with that fraction depending on the number of concurrent failures in the network rather than on its size. In addition, this work considers the use of topological routing policies to populate exceptions upon failures; topological routing solutions resulted in reduced complexity for computing paths and fewer routing messages. Beyond topological solutions, the use of other routing solutions not well suited to the IP environment is also investigated. Specifically, it is shown how a Landmark routing solution, from the compact routing family, could be implemented within RINA. Finally, efforts are also devoted to analysing the importance of path selection for ensuring QoS requirements, and to showing that extreme solutions, such as the use of connections, are not required to provide the needed services.
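
    To make the "topological rules plus exceptions" idea concrete, the following sketch (Python; the fat-tree-style addressing and layer encoding are invented for illustration and are not RINA policy code) computes the next hop from the destination's location-dependent address alone, consulting only a small exception table that overrides the rule after failures:

        def rule_next_hop(my_pod, my_layer, dest_addr):
            """Compute the next hop from topology alone (no table lookup).

            Addresses are (pod, switch, host) triples; layers are
            0 = edge, 1 = aggregation, 2 = core.
            """
            dest_pod = dest_addr[0]
            if my_layer == 2:          # core: go down toward the pod
                return ("down", dest_pod)
            if dest_pod == my_pod:     # same pod: go down toward the host
                return ("down", dest_addr[1])
            return ("up", None)        # other pod: any upward port works

        def forward(my_pod, my_layer, dest_addr, exceptions):
            """Exceptions (e.g. installed after failures) override the rule."""
            if dest_addr in exceptions:
                return exceptions[dest_addr]
            return rule_next_hop(my_pod, my_layer, dest_addr)

    The appeal in a data centre is that the forwarding state is the exception table alone, so its size tracks the number of concurrent failures rather than the size of the network, matching the behaviour the abstract reports.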

    DHash table

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2006. Includes bibliographical references (p. 123-132) and index. By Frank Dabek. DHash is a new system that harnesses the storage and network resources of computers distributed across the Internet by providing a wide-area storage service, DHash. DHash frees applications from re-implementing mechanisms common to any system that stores data on a collection of machines: it maintains a mapping of objects to servers, replicates data for durability, and balances load across participating servers. Applications access data stored in DHash through a familiar hash-table interface: put stores data in the system under a key; get retrieves the data. DHash has proven useful to a number of application builders and has been used to build a content-distribution system [31], a Usenet replacement [115], and new Internet naming architectures [130, 129]. These applications demand low-latency, high-throughput access to durable data. Meeting this demand is challenging in the wide-area environment. The geographic distribution of nodes means that latencies between nodes are likely to be high: to provide a low-latency get operation, the system must locate a nearby copy of the data without traversing high-latency links. Also, wide-area network links are likely to be less reliable and have lower capacities than local-area network links: to provide durability efficiently, the system must minimize the number of copies of data items it sends over these limited-capacity links in response to node failure. This thesis describes the design and implementation of the DHash distributed hash table and presents algorithms and techniques that address these challenges. DHash provides low-latency operations by using a synthetic network coordinate system (Vivaldi) to find nearby copies of data without sending messages over high-latency links. A network transport (STP), designed for applications that contact a large number of nodes, lets DHash provide high throughput by striping a download across many servers without causing high packet loss or exhausting local resources. Sostenuto, a data maintenance algorithm, lets DHash maintain data durability while minimizing the number of copies of data that the system sends over limited-capacity links.
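
    The put/get interface is easy to picture. Below is a toy, in-process sketch (Python) of a content-hashed put/get store, with successor-list replication standing in for DHash's durability machinery; the ring layout, NUM_REPLICAS constant, and helper names are assumptions, not DHash's actual code:

        import hashlib

        NUM_REPLICAS = 3  # assumed replication factor

        def key_of(data: bytes) -> str:
            """Content-hash key for a data block."""
            return hashlib.sha1(data).hexdigest()

        class ToyDHash:
            def __init__(self, server_ids):
                self.servers = {s: {} for s in server_ids}

            def _successors(self, key):
                """First NUM_REPLICAS servers at or after the key on the ring."""
                ring = sorted(self.servers)
                start = next((i for i, s in enumerate(ring) if s >= key), 0)
                return [ring[(start + i) % len(ring)] for i in range(NUM_REPLICAS)]

            def put(self, data: bytes) -> str:
                key = key_of(data)
                for s in self._successors(key):  # replicate for durability
                    self.servers[s][key] = data
                return key

            def get(self, key: str):
                for s in self._successors(key):  # any replica can answer
                    if key in self.servers[s]:
                        return self.servers[s][key]
                return None

        # Usage: three servers with 160-bit identifiers, as hex strings.
        dht = ToyDHash([format(i, "040x") for i in (2**150, 2**155, 2**158)])
        k = dht.put(b"hello")
        assert dht.get(k) == b"hello"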

    DeMMon Decentralized Management and Monitoring Framework

    The centralized model proposed by the Cloud computing paradigm is a poor match for the decentralized nature of mobile and IoT applications, since most data production and consumption is performed by end-user devices outside of the Data Center (DC). As the number of these devices grows, and given the need to transport data to and from DCs for computation, application providers incur additional infrastructure costs, and end-users incur delays when performing operations. These reasons have led us into a post-cloud era in which a new computing paradigm arose: Edge Computing. Edge Computing takes into account the broad spectrum of devices residing outside the DC, closer to the clients, as potential targets for computation, potentially reducing infrastructure costs, improving the quality of service (QoS) for end-users, and allowing new interaction paradigms between users and applications. Managing and monitoring the execution of these devices raises new challenges unaddressed by Cloud computing, given the scale of these systems and the devices' potentially unreliable data connections and heterogeneous computational power. A study of the state of the art revealed that existing resource monitoring and management solutions require manual configuration and have centralized components, which we believe do not scale to larger systems. In this work, we address these limitations by presenting a novel Decentralized Management and Monitoring ("DeMMon") system, targeted at edge settings. DeMMon provides primitives that ease the development of tools to manage the computational resources supporting edge-enabled applications, decomposed into components, through decentralized actions that take advantage of partial knowledge of the system. Our solution was evaluated, to assess its benefits in information dissemination and monitoring, across a set of realistic emulated scenarios of up to 750 nodes with variable failure rates. The results show the validity of our approach and that it can outperform state-of-the-art solutions in scalability and reliability.
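
    One decentralized monitoring primitive of the kind DeMMon targets can be sketched in a few lines: push-pull gossip averaging, in which every node repeatedly averages its local metric with a random peer, so that all nodes converge to the global mean without a central collector. The sketch below (Python) is illustrative only; the metric, round count, and peer selection are assumptions, not DeMMon's protocol.

        import random

        def gossip_average(metrics, rounds=20, seed=0):
            """metrics: dict node -> local value. Returns converged estimates."""
            rng = random.Random(seed)
            est = dict(metrics)
            nodes = list(est)
            for _ in range(rounds):
                for node in nodes:
                    peer = rng.choice(nodes)
                    if peer == node:
                        continue
                    avg = (est[node] + est[peer]) / 2  # pairwise averaging step
                    est[node] = est[peer] = avg        # total mass is conserved
            return est

        # Example: CPU load on five edge nodes; every estimate approaches
        # the global mean (0.5) without any central collection point.
        load = {"n1": 0.9, "n2": 0.2, "n3": 0.4, "n4": 0.7, "n5": 0.3}
        print(gossip_average(load))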

    Enabling Large-Scale Peer-to-Peer Stored Video Streaming Service with QoS Support

    This research aims to enable a large-scale, high-volume, peer-to-peer, stored-video streaming service over the Internet, such as on-line DVD rentals. P2P allows a group of dynamically organized users to cooperatively support content discovery and distribution services without needing to employ a central server. P2P has the potential to overcome the scalability issue associated with client-server based video distribution networks; however, it brings a new set of challenges. This research addresses the following five technical challenges associated with the distribution of streaming video over a P2P network: 1) allow users with limited transmit-bandwidth capacity to become contributing sources, 2) support the advertisement and discovery of time-changing and time-bounded video frame availability, 3) minimize the impact of distribution-source losses during video playback, 4) incorporate user mobility information in the selection of distribution sources, and 5) design a streaming network architecture that enables the above functionalities. To meet these requirements, we propose a video distribution network model based on a hybrid architecture between client-server and P2P. In this model, a video is divided into a sequence of small segments, and each user executes a scheduling algorithm to determine the order, the timing, and the rate of segment retrievals from other users. The model also employs an advertisement and discovery scheme which incorporates parameters of the scheduling algorithm, allowing users to share the lifetime of their video-segment availability information in one advertisement and one query. An accompanying QoS scheme reduces the number of video playback interruptions when one or more distribution sources depart from the service prematurely. The simulation study shows that the proposed model and associated schemes greatly alleviate the bandwidth requirement of the video distribution server, especially as the number of participating users grows large. Load reductions of as much as 90% were observed in some experiments when compared to a traditional client-server based video distribution service. A significant reduction is also observed in the number of video presentation interruptions when the proposed QoS scheme is incorporated into the distribution process while certain percentages of distribution sources depart from the service unexpectedly.
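
    A minimal sketch of the scheduling step (Python; the slot-based capacity model and all parameters are invented for illustration, not the paper's algorithm) picks, for each segment in playback order, a source peer that holds the segment and has spare upload capacity, falling back to the server otherwise:

        def schedule_segments(num_segments, peers, availability):
            """Return {segment index: chosen peer}, or None where no peer helps.

            peers: dict peer -> spare upload slots.
            availability: dict peer -> set of segment indices the peer holds.
            """
            plan, slots = {}, dict(peers)
            for seg in range(num_segments):        # retrieve in playback order
                candidates = [p for p in slots
                              if slots[p] > 0 and seg in availability[p]]
                if not candidates:
                    plan[seg] = None               # fall back to the central server
                    continue
                best = max(candidates, key=lambda p: slots[p])  # spread the load
                plan[seg] = best
                slots[best] -= 1                   # consume one upload slot
            return plan

        # Example: three peers with different holdings and capacities.
        plan = schedule_segments(
            4,
            {"p1": 1, "p2": 2, "p3": 1},
            {"p1": {0, 1}, "p2": {1, 2, 3}, "p3": {0, 3}},
        )
        print(plan)  # {0: 'p1', 1: 'p2', 2: 'p2', 3: 'p3'}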