28 research outputs found

    A Node Replication Method to Guarantee Reachability for P2P Sensor Data Stream Delivery System on Heterogeneous Churn Situations

    COMPSAC 2015: The 39th Annual International Computers, Software & Applications Conference, Jul 1-5, 2015, Taichung, Taiwan.

    In this paper, we propose a method to construct a scalable sensor data stream delivery system that guarantees the specified QoS of the delivery, i.e., total reachability to destinations, even under heterogeneous churn of the delivery server resources (nodes). Several P2P-based methods have been proposed to construct scalable and efficient sensor data stream systems that accommodate different delivery cycles by distributing the communication load across nodes. However, existing methods cannot guarantee the QoS of the delivery when the nodes in the system have heterogeneous churn rates. Our method extends an existing method, which assigns relay nodes based on distributed hashing of the time-to-deliver, to decide the number of replication nodes according to the churn rate of each node and the delivery paths. Through simulations, we confirmed that our proposed method can guarantee the required reachability while avoiding unnecessary resource assignment costs.
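    The abstract does not spell out how the replica count is derived from churn. As a minimal sketch of the idea, assuming each replica of a relay node is independently available with probability p (an illustrative assumption, not necessarily the paper's model; all names are made up), the smallest number of replicas meeting a required reachability can be computed directly:

```python
def replicas_needed(availability: float, target_reachability: float) -> int:
    """Smallest k such that at least one of k independently available
    replicas is up with probability >= target_reachability, i.e. the
    smallest k with 1 - (1 - availability)**k >= target_reachability.
    Independence across replicas is an illustrative assumption."""
    assert 0.0 < availability < 1.0 and 0.0 <= target_reachability < 1.0
    k = 1
    reach = availability
    while reach < target_reachability:
        k += 1
        reach = 1.0 - (1.0 - availability) ** k
    return k

# Nodes that are up 90% of the time, 99.9% required reachability:
print(replicas_needed(0.9, 0.999))  # -> 3, since 1 - 0.1**3 = 0.999
```

    Per-path replica counts of this kind would then be weighed against the resource assignment costs the abstract mentions.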

    Live Streaming with Gossip

    Video streaming has become a killer application for peer-to-peer technologies. By aggregating scarce resources such as upload bandwidth, decentralized video streaming protocols make it possible to serve a video stream to huge numbers of users while requiring very limited investments from broadcasters. In this paper, we present HEAP, a novel peer-to-peer streaming protocol designed for heterogeneous scenarios. Gossip protocols have already shown their effectiveness in the context of live video streaming. HEAP, the HEterogeneity-Aware gossip Protocol, goes beyond their applicability and performance by incorporating several novel features. First, HEAP includes a fanout-adaptation scheme that tunes the contribution of nodes to the streaming process based on their bandwidth capabilities. Second, HEAP comprises heuristics that improve reliability, as well as operation in the presence of heterogeneous network latency. We extensively evaluate HEAP on a real deployment over 200 nodes on the Grid5000 platform in a variety of settings, and assess its scalability with up to 100k simulated nodes. Our results show that HEAP significantly improves the quality of streamed videos over standard homogeneous gossip protocols, especially when the stream rate is close to the average available bandwidth.
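    HEAP's fanout-adaptation scheme is only described qualitatively above. A minimal sketch of the underlying idea, scaling each node's gossip fanout in proportion to its upload bandwidth so that better-provisioned nodes forward to more partners per round while the system-wide average fanout stays near a reliability-driven baseline, might look like this (the purely proportional rule and all names are assumptions, not HEAP's exact algorithm):

```python
def adapted_fanout(node_bw: float, avg_bw: float, base_fanout: int) -> int:
    """Scale a node's per-round gossip fanout by its upload bandwidth
    relative to the population average, so each node's contribution
    matches its capabilities while the mean fanout (which drives
    gossip reliability) stays near base_fanout."""
    return max(1, round(base_fanout * node_bw / avg_bw))

# A node with twice the average upload bandwidth serves twice as many
# gossip partners per round; a node at half the average serves fewer:
print(adapted_fanout(20.0, 10.0, base_fanout=7))  # -> 14
print(adapted_fanout(5.0, 10.0, base_fanout=7))   # -> 4
```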

    Efficient Passive Clustering and Gateway Selection in MANETs

    Passive clustering does not employ control packets to collect topological information in ad hoc networks. In our proposal, we avoid frequent changes in the cluster architecture caused by repeated election and re-election of cluster heads and gateways. Our primary objective has been to make passive clustering more practical by employing an optimal number of gateways and reducing the number of rebroadcast packets.
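    The abstract stays at the level of goals; as a hedged sketch of the flooding economy that cluster structure buys (the role names and duplicate check are illustrative, not the paper's exact rules), only cluster heads and gateways rebroadcast, and each node forwards a given packet at most once:

```python
from enum import Enum

class Role(Enum):
    ORDINARY = 0   # cluster member: receives but never rebroadcasts
    HEAD = 1       # cluster head: rebroadcasts within its cluster
    GATEWAY = 2    # bridges two clusters: rebroadcasts between them

def should_rebroadcast(role: Role, seen: set, packet_id: int) -> bool:
    """Flooding rule sketch: suppress duplicates, and let only heads
    and gateways forward; this is where the reduction in rebroadcast
    packets mentioned in the abstract comes from."""
    if packet_id in seen:
        return False
    seen.add(packet_id)
    return role in (Role.HEAD, Role.GATEWAY)
```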

    Resilience-Building Technologies: State of Knowledge -- ReSIST NoE Deliverable D12

    This document is the first product of work package WP2, "Resilience-building and -scaling technologies", in the programme of jointly executed research (JER) of the ReSIST Network of Excellence.

    Naming and discovery in networks: architecture and economics

    In less than three decades, the Internet was transformed from a research network available to the academic community into an international communication infrastructure. Despite its tremendous success, there is a growing consensus in the research community that the Internet has architectural limitations that need to be addressed in an effort to design a future Internet. Among the main technical limitations are the lack of mobility support and the lack of security and trust. The Internet, and particularly TCP/IP, identifies endpoints using a location/routing identifier, the IP address. Coupling the endpoint identifier to the location identifier hinders mobility and poorly identifies the actual endpoint. On the other hand, the lack of security has been attributed to limitations in both the network and the endpoint. Authentication, for example, is one of the main concerns in the architecture and is hard to implement, partly due to the lack of identity support.

    The general problem that this dissertation is concerned with is that of designing a future Internet. Towards this end, we focus on two specific sub-problems. The first problem is the lack of a framework for thinking about architectures and their design implications. It was obvious after surveying the literature that the majority of the architectural work remains idiosyncratic and descriptions of network architectures are mostly idiomatic. This has led to the overloading of architectural terms, and to the emergence of a large body of network architecture proposals with no clear understanding of their cross similarities, compatibility points, unique properties, and architectural performance and soundness.

    The second problem concerns the limitations of traditional naming and discovery schemes in terms of service differentiation and economic incentives. One of the recurring themes in the community is the need to separate an entity's identifier from its locator to enhance mobility and security. Separation of identifier and locator is a widely accepted design principle for a future Internet. Separation, however, requires a process to translate from the identifier to the locator when discovering a network path to some identified entity. We refer to this process as identifier-based discovery, or simply discovery, and we recognize two limitations that are inherent in the design of traditional discovery schemes. The first limitation is the homogeneity of the service, where all entities are assumed to have the same discovery performance requirements. The second limitation is the inherent incentive mismatch as it relates to sharing the cost of discovery. This dissertation addresses both sub-problems: the architectural framework as well as the naming and discovery limitations.
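    To make the identifier/locator separation concrete, here is a minimal sketch of identifier-based discovery: a stable identifier is resolved to the entity's current locator before a path is set up, so mobility only updates a binding. A plain dictionary stands in for the distributed resolution infrastructure, every name is illustrative, and the dissertation's actual concerns, service differentiation and discovery cost sharing, are deliberately left out:

```python
from typing import Optional

# Identifier -> locator bindings; a real design would distribute this.
resolution_table: dict = {}

def register(entity_id: str, locator: str) -> None:
    """Bind a stable entity identifier to its current network locator."""
    resolution_table[entity_id] = locator

def discover(entity_id: str) -> Optional[str]:
    """Identifier-based discovery: translate an identifier into a
    locator so a network path to the entity can be found."""
    return resolution_table.get(entity_id)

register("entity:alice", "192.0.2.17")
register("entity:alice", "198.51.100.4")  # host moved; identifier unchanged
print(discover("entity:alice"))           # -> 198.51.100.4
```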

    Proceedings of the 10th Conference on Computer Networks (Actas da 10ª Conferência sobre Redes de Computadores)

    Universidade do Minho, CCTC, Centro Algoritmi, Cisco Systems, IEEE Portugal Section

    DeMMon Decentralized Management and Monitoring Framework

    The centralized model proposed by the Cloud computing paradigm mismatches the decentralized nature of mobile and IoT applications, given that most data production and consumption is performed by end-user devices outside of the Data Center (DC). As the number of these devices grows, and given the need to transport data to and from DCs for computation, application providers incur additional infrastructure costs and end-users incur delays when performing operations. These reasons have led us into a post-cloud era, where a new computing paradigm arose: Edge Computing. Edge Computing takes into account the broad spectrum of devices residing outside of the DC, closer to the clients, as potential targets for computations, potentially reducing infrastructure costs, improving the quality of service (QoS) for end-users, and allowing new interaction paradigms between users and applications. Managing and monitoring the execution of these devices raises new challenges previously unaddressed by Cloud computing, given the scale of these systems and the devices' (potentially) unreliable data connections and heterogeneous computational power.

    The study of the state of the art has revealed that existing resource monitoring and management solutions require manual configuration and have centralized components, which we believe do not scale for larger systems. In this work, we address these limitations by presenting a novel Decentralized Management and Monitoring ("DeMMon") system, targeted at edge settings. DeMMon provides primitives to ease the development of tools that manage computational resources supporting edge-enabled applications, decomposed into components, through decentralized actions, taking advantage of partial knowledge of the system. Our solution was evaluated to assess its benefits regarding information dissemination and monitoring capabilities across a set of realistic emulated scenarios of up to 750 nodes with variable failure rates. The results show the validity of our approach and that it can outperform state-of-the-art solutions regarding scalability and reliability.
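    As an illustration of monitoring with only partial knowledge of the system and no central collector, the sketch below uses push-pull gossip averaging, a classic decentralized aggregation primitive; it is not claimed to be DeMMon's actual protocol, and all names and values are illustrative:

```python
import random

def gossip_average(values: list, rounds: int = 10) -> list:
    """Push-pull averaging: each round, every node exchanges estimates
    with one random peer and both adopt the mean. Pairwise averaging
    preserves the global sum, so every local estimate converges to the
    global average with no node ever seeing the whole system."""
    est = list(values)
    n = len(est)
    for _ in range(rounds):
        for i in range(n):
            j = random.randrange(n)
            est[i] = est[j] = (est[i] + est[j]) / 2.0
    return est

# Approximate the average CPU load of 5 nodes, fully decentralized:
print(gossip_average([0.9, 0.1, 0.5, 0.3, 0.7]))  # each entry -> ~0.5
```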

    Incentive-driven QoS in peer-to-peer overlays

    A well-known problem in peer-to-peer overlays is that no single entity has control over the software, hardware, and configuration of peers. Thus, each peer can selfishly adapt its behaviour to maximise its benefit from the overlay. This thesis is concerned with the modelling and design of incentive mechanisms for QoS-overlays: resource allocation protocols that provide strategic peers with participation incentives, while at the same time optimising the performance of the peer-to-peer distribution overlay. The contributions of this thesis are as follows.

    First, we present PledgeRoute, a novel contribution accounting system that can be used, along with a set of reciprocity policies, as an incentive mechanism to encourage peers to contribute resources even when users are not actively consuming overlay services. This mechanism uses a decentralised credit network, is resilient to sybil attacks, and allows peers to achieve time- and space-deferred contribution reciprocity.

    Then, we present a novel, QoS-aware resource allocation model based on Vickrey auctions that uses PledgeRoute as a substrate. It acts as an incentive mechanism by providing efficient overlay construction, while at the same time allocating increasing service quality to those peers that contribute more to the network. The model is then applied to lag-sensitive chunk swarming, and some of its properties are explored for different peer delay distributions.

    When considering QoS overlays deployed over the best-effort Internet, the quality received by a client cannot be attributed completely to either its serving peer or the intervening network between them. By drawing parallels between this situation and well-known hidden-action situations in microeconomics, we propose a novel scheme to ensure adherence to advertised QoS levels. We then apply it to delay-sensitive chunk distribution overlays and present the optimal contract payments required, along with a method for QoS contract enforcement through reciprocative strategies. We also present a probabilistic model for application-layer delay as a function of the prevailing network conditions.

    Finally, we address the incentives of managed overlays and the prediction of their behaviour. We propose two novel models of multihoming managed overlay incentives in which overlays can freely allocate their traffic flows between different ISPs. One is obtained by optimising an overlay utility function with desired properties, while the other is designed for data-driven least-squares fitting of the cross elasticity of demand. This last model is then used to solve for ISP profit maximisation.
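    The abstract names Vickrey auctions as the substrate of the allocation model, and the core mechanism is small enough to sketch: the highest bidder wins but pays only the second-highest bid, which makes truthful bidding a dominant strategy, the incentive property the thesis builds on. The peer names below are illustrative, and none of the thesis's QoS-aware extensions are captured:

```python
def vickrey_winner(bids: dict) -> tuple:
    """Second-price sealed-bid (Vickrey) auction: award the item to the
    highest bidder at the second-highest price, so no bidder can gain
    by misreporting its true valuation."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

print(vickrey_winner({"peerA": 5.0, "peerB": 8.0, "peerC": 6.5}))
# -> ('peerB', 6.5): peerB wins and pays peerC's bid, not its own
```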

    Agent Organization in the Knowledge Plane

    In designing and building a network like the Internet, we continue to face the problems of scale and distribution. With the dramatic expansion in scale and heterogeneity of the Internet, network management has become an increasingly difficult task. Furthermore, network applications often need to maintain efficient organization among the participants by collecting information from the underlying networks. Such individual information collection activities lead to duplicate efforts and contention for network resources.

    The Knowledge Plane (KP) is a new common construct that provides knowledge and expertise to meet the functional, policy, and scaling requirements of network management, as well as to create synergy and exploit commonality among many network applications. To achieve these goals, we face many challenging problems, including widely distributed data collection, efficient processing of that data, and wide availability of the expertise.

    In this thesis, to provide better support for network management and large-scale network applications, I propose a knowledge plane architecture that consists of a network knowledge plane (NetKP) at the network layer and, on top of it, multiple specialized KPs (spec-KPs). The NetKP organizes agents to provide valuable knowledge and facilities about the Internet to the spec-KPs. Each spec-KP is specialized in its own area of interest. In both the NetKP and the spec-KPs, agents are organized into regions based on different sets of constraints. I focus on two key design issues in the NetKP: (1) a region-based architecture for agent organization, in which I design an efficient and non-intrusive organization among regions that combines network topology and a distributed hash table; (2) request and knowledge dissemination, in which I design a robust and efficient broadcast and aggregation mechanism using a tree structure among regions. In the spec-KPs, I build two examples: experiment management on the PlanetLab testbed and distributed intrusion detection on the DETER testbed. The experiment results suggest that a common approach driven by the design principles of the Internet and more specialized constraints can derive productive organization for network management and applications.
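    The tree-based aggregation among regions can be sketched as follows: each region combines its own value with its children's partial results and passes a single number upward, so the root obtains a global view without any region contacting every other. The topology and values below are made up for illustration and this is not claimed to be the thesis's actual mechanism:

```python
def aggregate(children: dict, values: dict, region: str) -> int:
    """Bottom-up aggregation over a region tree: a region's result is
    its own value plus the aggregated results of its child regions,
    computed recursively from the leaves toward the root."""
    return values[region] + sum(aggregate(children, values, c)
                                for c in children.get(region, []))

# r0 is the root region; r3 is a leaf region under r1.
children = {"r0": ["r1", "r2"], "r1": ["r3"]}
agents_per_region = {"r0": 4, "r1": 2, "r2": 7, "r3": 5}
print(aggregate(children, agents_per_region, "r0"))  # -> 18
```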

    Agent organization in the KP

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 181-191). By Ji Li.