
    A Holistic Redundancy- and Incentive-Based Framework to Improve Content Availability in Peer-to-Peer Networks

    Peer-to-Peer (P2P) technology has emerged as an important alternative to the traditional client-server communication paradigm for building large-scale distributed systems. P2P enables the creation, dissemination and access of information at low cost and without the need for dedicated coordinating entities. However, existing P2P systems fail to provide high levels of content availability, which limits their applicability and adoption. This dissertation takes a holistic approach to devising mechanisms that improve content availability in large-scale P2P systems. Content availability in P2P can be impacted by hardware failures and churn. Hardware failures, in the form of disk or node failures, render information inaccessible. Churn, an inherent property of P2P, is the collective effect of the users’ uncoordinated behavior, which occurs when a large percentage of nodes join and leave frequently; such behavior reduces content availability significantly. Mitigating the combined effect of hardware failures and churn on content availability in P2P requires new and innovative solutions that go beyond those applied in existing distributed systems. To address this challenge, the thesis proposes two complementary, low-cost mechanisms whereby nodes self-organize to overcome failures and improve content availability. The first mechanism is a low-complexity and highly flexible hybrid redundancy scheme, referred to as Proactive Repair (PR). The second mechanism is an incentive-based scheme that promotes cooperation and enforces fair exchange of resources among peers. These mechanisms provide the basis for distributed self-organizing algorithms that automate PR and, through incentives, maximize its effectiveness in realistic P2P environments. Our proposed solution is evaluated using a combination of analytical and experimental methods. Analytical models are developed to determine the availability and repair-cost properties of PR; the results indicate that PR achieves lower repair cost than other redundancy schemes. The experimental analysis was carried out using simulation and a purpose-built testbed. The simulation results confirm that PR improves content availability in P2P, and experiments with a DHT-based P2P application environment indicate that the incentive-based mechanism can promote fair exchange of resources and limit the impact of uncooperative behaviors such as “free-riding”.
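
    For context, and not specific to PR itself, availability analyses of redundancy schemes typically start from the standard (n, k) model: data is encoded into n fragments of which any k suffice for reconstruction, and each fragment is independently available with probability p. Content availability is then

        A(n, k, p) = \sum_{i=k}^{n} \binom{n}{i} p^{i} (1 - p)^{n-i}

    with plain replication as the special case k = 1; hybrid schemes such as PR are typically compared on both this availability and the bandwidth needed to repair lost fragments.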

    A performance evaluation of peer-to-peer storage systems

    This work evaluates the performance of peer-to-peer storage systems in structured Peer-to-Peer (P2P) networks under the impact of a continuous process of nodes joining and leaving the network (churn). Based on Distributed Hash Tables (DHTs), peer-to-peer systems provide the means to store data among a large and dynamic set of participating host nodes. We consider the fact that existing solutions do not tolerate a high churn rate or are not really scalable in terms of the number of stored data blocks. The considered performance metrics include the number of data blocks lost, bandwidth consumption, latencies and distance of matched lookups. We have selected Pastry, Chord and Kademlia to evaluate the effect of inopportune connections/disconnections in peer-to-peer storage systems, because these P2P networks possess distinctive characteristics. Chord is one of the first structured P2P networks that implements Distributed Hash Tables (DHTs). Similar to Chord, Pastry is based on a ring structure, with the identifier space forming the ring; however, Pastry uses a different algorithm than Chord to select the overlay neighbors of a peer. Kademlia is a more recent structured P2P network, with the XOR mechanism for improving distance calculation. DHT deployments are characterized by churn, but if the frequency of churn is too high, data blocks can be lost and the lookup mechanism begins to incur delays. In architectures that employ DHTs, the choice of algorithm for data replication and maintenance can have a significant impact on performance and reliability. PAST is a persistent peer-to-peer storage utility, which replicates complete files on multiple nodes and uses Pastry for message routing and content location. The hypothesis is that by enhancing churn tolerance through really efficient replication and maintenance mechanisms, our system will: i) operate better than a peer-to-peer storage system such as PAST, especially in its replica placement strategy, with fewer data transfers; ii) resolve file lookups with a match that is closer to the source peer, thus conserving bandwidth. Our research involves a series of simulation studies using two network simulators, OverSim and OMNeT++. The main results are: our approach achieves higher data availability in the presence of churn than the original PAST replication strategy; for churn occurring every minute our strategy loses half as many blocks as PAST; and our replication strategy induces on average half as many block transfers as PAST.
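
    The XOR distance mentioned above is simple to state in code. The following is a minimal illustrative sketch, not the OverSim/OMNeT++ models used in this work: each node and key gets a 160-bit identifier, and the nodes responsible for a key are simply those whose identifiers are closest to it under bitwise XOR.

        # Illustrative sketch of Kademlia's XOR distance metric.
        import hashlib

        def ident(name: str) -> int:
            """Derive a 160-bit identifier from a name, as Kademlia does with SHA-1."""
            return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

        def xor_distance(a: int, b: int) -> int:
            """Kademlia distance between two identifiers: their bitwise XOR, read as an integer."""
            return a ^ b

        peers = [ident(f"peer-{i}") for i in range(8)]
        key = ident("some-data-block")

        # The replica set for 'key' is the k nodes closest to it under XOR (here k = 3).
        closest = sorted(peers, key=lambda p: xor_distance(p, key))[:3]
        print([hex(p)[:10] for p in closest])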

    Design and applications of a secure and decentralized DHT

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 105-114). Distributed Hash Tables (DHTs) are a powerful building block for highly scalable decentralized systems. They route requests over a structured overlay network to the node responsible for a given key. DHTs are subject to the well-known Sybil attack, in which an adversary creates many false identities in order to increase its influence and deny service to honest participants. Defending against this attack is challenging because (1) in an open network, creating many fake identities is cheap; (2) an attacker can subvert periodic routing table maintenance to increase its influence over time; and (3) specific keys can be targeted by clustering attacks. As a result, without centralized admission control, previously existing DHTs could not provide strong availability guarantees. This dissertation describes Whanau, a novel DHT routing protocol which is both efficient and strongly resistant to the Sybil attack. Whanau solves this long-standing problem by using the social connections between users to build routing tables that enable Sybil-resistant one-hop lookups. The number of Sybils in the social network does not affect the protocol's performance, but links between honest users and Sybils do. With a social network of n well-connected honest nodes, Whanau provably tolerates up to O(n / log n) such "attack edges". This means that an attacker must convince a large fraction of the honest users to make a social connection with the adversary's Sybils before any lookups will fail. Whanau uses techniques from structured DHTs to build routing tables that contain O(√n log n) entries per node. It introduces the idea of layered identifiers to counter clustering attacks, which have proven particularly challenging for previous DHTs to handle. Using the constructed tables, lookups provably take constant time. Simulation results, using large-scale social network graphs from LiveJournal, Flickr, YouTube, and DBLP, confirm the analytic prediction that Whanau provides high availability in the face of powerful Sybil attacks. Experimental results using PlanetLab demonstrate that an implementation of the Whanau protocol can handle reasonable levels of churn. By Christopher T. Lesniewski-Laas, Ph.D.
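
    The √n-sized tables are what make one-hop lookups possible. The standard birthday-paradox intuition, a simplification of the thesis's actual analysis, which must also account for sampling over the social network, is that if a record is replicated on roughly c·√n random nodes and a lookup consults roughly c·√n random table entries, the probability that none of the consulted entries covers the record is about

        (1 - c\sqrt{n}/n)^{c\sqrt{n}} \approx e^{-c^{2}}

    so a modest constant c already makes failed lookups rare without any multi-hop routing.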

    Security for Decentralised Service Location - Exemplified with Real-Time Communication Session Establishment

    Decentralised Service Location, i.e. finding an application communication endpoint based on a Distributed Hash Table (DHT), is a fairly new concept. The precise security implications of this approach have not been studied in detail. More importantly, a detailed analysis regarding the applicability of existing security solutions to this concept has not been conducted. In many cases existing client-server approaches to security may not be feasible. In addition, to understand the necessity for such an analysis, it is key to acknowledge that Decentralised Service Location has some unique security requirements compared to other P2P applications such as file sharing or live streaming. This thesis concerns the security challenges for Decentralised Service Location. The goals of our work are, on the one hand, to precisely understand the security requirements and research challenges for Decentralised Service Location, and on the other hand, to develop and evaluate corresponding security mechanisms. The thesis is organised as follows. First, fundamentals are explained and the scope of the thesis is defined. Decentralised Service Location is defined and P2PSIP is explained technically as a prototypical example. Then, a security analysis for P2PSIP is presented. Based on this security analysis, security requirements for Decentralised Service Location and the corresponding research challenges -- i.e. security concerns not suitably mitigated by existing solutions -- are derived. Second, several decentralised solutions are presented and evaluated to tackle the security challenges for Decentralised Service Location. We present decentralised algorithms that preserve the availability of the DHT's lookup service in the presence of adversary nodes. These algorithms are evaluated via simulation and compared to analytical bounds. Further, a cryptographic approach based on self-certifying identities is illustrated and discussed; this approach enables decentralised integrity protection of location-bindings. Finally, a decentralised approach to assess unknown identities is introduced. The approach is based on a Web-of-Trust model and is evaluated via a prototypical implementation. The thesis closes with a summary of the main contributions and a discussion of open issues.
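
    The self-certifying identity idea can be stated in a few lines. The sketch below is illustrative only and assumes Ed25519 keys, SHA-256 and the Python cryptography package rather than the primitives chosen in the thesis: a peer's overlay identifier is derived from its public key, so any node can check a signed location-binding against that identifier without consulting a certificate authority.

        # Sketch of self-certifying identities for location-bindings (assumed
        # primitives: Ed25519 + SHA-256; the scheme in the thesis may differ).
        import hashlib
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
        from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

        signing_key = Ed25519PrivateKey.generate()
        public_bytes = signing_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

        # The overlay identifier is the hash of the public key, so it certifies itself.
        peer_id = hashlib.sha256(public_bytes).hexdigest()

        # The peer signs its current location-binding (identifier -> contact address).
        binding = f"{peer_id}|sip:alice@198.51.100.7:5060".encode()
        signature = signing_key.sign(binding)

        # Any node can verify the binding: recompute the identifier from the public
        # key and check the signature, with no trusted third party involved.
        assert hashlib.sha256(public_bytes).hexdigest() == peer_id
        signing_key.public_key().verify(signature, binding)  # raises InvalidSignature if forged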

    Incentive-driven QoS in peer-to-peer overlays

    A well-known problem in peer-to-peer overlays is that no single entity has control over the software, hardware and configuration of peers. Thus, each peer can selfishly adapt its behaviour to maximise its benefit from the overlay. This thesis is concerned with the modelling and design of incentive mechanisms for QoS-overlays: resource allocation protocols that provide strategic peers with participation incentives, while at the same time optimising the performance of the peer-to-peer distribution overlay. The contributions of this thesis are as follows. First, we present PledgeRoute, a novel contribution accounting system that can be used, along with a set of reciprocity policies, as an incentive mechanism to encourage peers to contribute resources even when users are not actively consuming overlay services. This mechanism uses a decentralised credit network, is resilient to Sybil attacks, and allows peers to achieve time- and space-deferred contribution reciprocity. Then, we present a novel, QoS-aware resource allocation model based on Vickrey auctions that uses PledgeRoute as a substrate. It acts as an incentive mechanism by providing efficient overlay construction, while at the same time allocating increasing service quality to those peers that contribute more to the network. The model is then applied to lag-sensitive chunk swarming, and some of its properties are explored for different peer delay distributions. When considering QoS overlays deployed over the best-effort Internet, the quality received by a client cannot be attributed entirely to either its serving peer or the intervening network between them. By drawing parallels between this situation and well-known hidden-action situations in microeconomics, we propose a novel scheme to ensure adherence to advertised QoS levels. We then apply it to delay-sensitive chunk distribution overlays and present the optimal contract payments required, along with a method for QoS contract enforcement through reciprocative strategies. We also present a probabilistic model for application-layer delay as a function of the prevailing network conditions. Finally, we address the incentives of managed overlays, and the prediction of their behaviour. We propose two novel models of multihoming managed overlay incentives in which overlays can freely allocate their traffic flows between different ISPs. One is obtained by optimising an overlay utility function with desired properties, while the other is designed for data-driven least-squares fitting of the cross elasticity of demand. This last model is then used to solve for ISP profit maximisation.
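
    The Vickrey auction underlying the resource allocation model is easy to illustrate. The following is a minimal single-item sketch (the thesis applies the mechanism to QoS-aware overlay construction with richer bids and valuations): the highest bidder wins but pays the second-highest bid, which makes truthful bidding a dominant strategy.

        # Minimal second-price (Vickrey) auction, illustrative only.
        def vickrey_auction(bids: dict[str, float]) -> tuple[str, float]:
            """Return (winner, price): the highest bidder wins and pays the second-highest bid."""
            ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
            winner = ranked[0][0]
            price = ranked[1][1] if len(ranked) > 1 else 0.0
            return winner, price

        # Peers bid contribution credits (e.g. accrued via PledgeRoute) for a serving slot.
        print(vickrey_auction({"peerA": 8.0, "peerB": 5.5, "peerC": 7.25}))  # ('peerA', 7.25)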

    An Embryonics Inspired Architecture for Resilient Decentralised Cloud Service Delivery

    Data-driven artificial intelligence applications arising from Internet of Things technologies can have profound, wide-reaching societal benefits at the cross-section of the cyber and physical domains, and their use-cases are expanding rapidly. For example, smart homes and smart buildings provide intelligent monitoring, resource optimisation, safety, and security for their inhabitants; smart cities can manage transport, waste, energy, and crime at large scale; and smart manufacturing can autonomously produce goods through the self-management of factories and logistics. As these use-cases expand further, the requirement to process data accurately and in a timely manner becomes ever more crucial, since many of these applications are safety-critical, with loss of life and economic damage likely in the event of system failure. While the typical service delivery paradigm, cloud computing, is strong because it operates on economies of scale, its lack of physical proximity to these applications introduces network latency that is incompatible with safety-critical applications. To complicate matters further, the environments these applications operate in are becoming increasingly hostile, with resource-constrained and mobile wireless networking commonplace. These issues drive the need for new service delivery architectures which operate closer to, or even upon, the network devices, sensors and actuators which compose these IoT applications at the network edge. Such hostile and resource-constrained environments require adapting traditional cloud service delivery models to decentralised mobile and wireless environments, and the resulting architectures need to provide persistent service delivery in the face of a variety of internal and external changes: in other words, resilient decentralised cloud service delivery. While the current state of the art proposes numerous techniques to enhance the resilience of services in this manner, none provides an architecture capable of delivering data processing services in a cloud manner that is inherently resilient. Adopting techniques from autonomic computing, whose characteristics are resilient by nature, this thesis presents a biologically-inspired platform modelled on embryonics. Embryonic systems have the ability to self-heal and self-organise whilst showing the capacity to support decentralised data processing. An initial model for embryonics-inspired resilient decentralised cloud service delivery is derived according to both the decentralised-cloud and resilience requirements given for this work. Next, this model is simulated using cellular automata, which illustrate the embryonic concept's ability to provide self-healing service delivery under varying degrees of system component loss. This highlights optimisation techniques, including application complexity bounds, differentiation optimisation, self-healing aggression, and varying system starting conditions, all of which can be adjusted to vary the resilience performance of the system depending on resource capabilities and environmental hostility. Next, a proof-of-concept implementation is developed and validated, illustrating the efficacy of the solution. This proof-of-concept is evaluated on a larger scale, where batches of tests highlighted the different performance criteria and constraints of the system. One key finding was the considerable quantity of redundant messages produced under successful scenarios, which were helpful in enabling resilience yet could increase network contention; balancing these attributes is therefore important for each use-case. Finally, graph-based resilience algorithms were executed across all tests to understand the structural resilience of the system and whether this enabled suitable measurement or prediction of the application's resilience. Interestingly, this study highlighted that although the system was not considered to be structurally resilient, applications were still being executed in the face of many continued component failures, showing that the autonomic embryonic functionality developed was succeeding in executing applications resiliently and that structural and application resilience do not necessarily coincide. Additionally, one graph metric, assortativity, was highlighted as being predictive of application resilience, although not of structural resilience.
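
    A toy sketch of the embryonic self-healing idea follows (hypothetical task names and structure; the model and cellular-automata simulation in the thesis are considerably richer): every cell carries the complete application "genome", expresses one task according to its position, and when a cell fails the remaining healthy cells re-differentiate so that a spare absorbs the loss.

        # Toy embryonics-style self-healing (hypothetical names, illustrative only):
        # every cell holds the whole "genome" and expresses one task by position;
        # on failure, positions are recomputed so spare cells take over.
        GENOME = ["ingest", "filter", "aggregate", "alert"]   # the application's tasks

        class Cell:
            def __init__(self) -> None:
                self.alive = True

        def expressed_tasks(cells: list[Cell]) -> dict[str, int] | None:
            """Map each genome task to the i-th healthy cell, skipping failed ones."""
            healthy = [i for i, c in enumerate(cells) if c.alive]
            if len(healthy) < len(GENOME):
                return None  # not enough healthy cells left to host the application
            return {task: healthy[i] for i, task in enumerate(GENOME)}

        cells = [Cell() for _ in range(6)]      # four working cells plus two spares
        print(expressed_tasks(cells))           # tasks run on cells 0..3
        cells[1].alive = False                  # a component fails...
        print(expressed_tasks(cells))           # ...tasks shift and a spare absorbs the loss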

    Distributed Key Generation and Its Applications

    Numerous cryptographic applications require a trusted authority to hold a secret. With a plethora of malicious attacks over the Internet, however, it is difficult to establish and maintain such an authority in online systems. Secret-sharing schemes attempt to solve this problem by distributing the required trust to hold and use the secret over multiple servers; however, they still require a trusted dealer to choose and share the secret, and have problems related to single points of failure and key escrow. A distributed key generation (DKG) scheme overcomes these hurdles by removing the requirement of a dealer in secret sharing. A (threshold) DKG scheme achieves this using a complete distribution of the trust among a number of servers such that any subset of servers of size greater than a given threshold can reveal or use the shared secret, while any smaller subset cannot. In this thesis, we make contributions to DKG in the computational security setting and describe three applications of it. We first define a constant-size commitment scheme for univariate polynomials over finite fields and use it to reduce the size of broadcasts required for DKG protocols in the synchronous communication model by a linear factor. Further, we observe that the existing (synchronous) DKG protocols do not provide a liveness guarantee over the Internet and design the first DKG protocol for use over the Internet. Observing the necessity of long-term stability, we then present proactive security and group modification protocols for our DKG system. We also demonstrate the practicality of our DKG protocol over the Internet by testing our implementation over PlanetLab. For the applications, we use our DKG protocol to define IND-ID-CCA secure distributed private-key generators (PKGs) for three important identity-based encryption (IBE) schemes: Boneh and Franklin's BF-IBE, Sakai and Kasahara's SK-IBE, and Boneh and Boyen's BB1-IBE. These IBE schemes cover all three important IBE frameworks: full-domain-hash IBEs, exponent-inversion IBEs and commutative-blinding IBEs respectively, and our distributed PKG constructions can easily be modified for other IBE schemes in these frameworks. As the second application, we use our distributed PKG for BF-IBE to define an onion routing circuit construction mechanism in the identity-based setting, which solves the scalability problem in single-pass onion routing circuit construction without hampering forward secrecy. As the final application, we use our DKG implementation to design a threshold signature architecture for quorum-based distributed hash tables and use it to define two robust communication protocols in these peer-to-peer systems.
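
    DKG is built on threshold secret sharing. The sketch below shows only the underlying (t, n) Shamir scheme over a prime field, with a dealer kept for brevity, whereas the DKG protocols in the thesis remove the dealer and add commitments, proactive refresh and liveness over the Internet. Any t+1 shares reconstruct the secret; t or fewer reveal nothing.

        # (t, n) Shamir secret sharing over a prime field (illustrative only;
        # a DKG removes the dealer by having every party act as one).
        import random

        P = 2**127 - 1  # prime field modulus

        def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
            """Deal n points of a random degree-t polynomial whose constant term is the secret."""
            coeffs = [secret] + [random.randrange(P) for _ in range(t)]
            def f(x: int) -> int:
                return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            return [(x, f(x)) for x in range(1, n + 1)]

        def reconstruct(shares: list[tuple[int, int]]) -> int:
            """Lagrange interpolation at x = 0 recovers the secret from any t+1 shares."""
            secret = 0
            for j, (xj, yj) in enumerate(shares):
                num, den = 1, 1
                for m, (xm, _) in enumerate(shares):
                    if m != j:
                        num = num * (-xm) % P
                        den = den * (xj - xm) % P
                secret = (secret + yj * num * pow(den, -1, P)) % P
            return secret

        shares = share(secret=123456789, t=2, n=5)
        assert reconstruct(shares[:3]) == 123456789   # any three of the five shares suffice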

    Resilience-Building Technologies: State of Knowledge -- ReSIST NoE Deliverable D12

    This document is the first product of work package WP2, "Resilience-building and -scaling technologies", in the programme of jointly executed research (JER) of the ReSIST Network of Excellence.

    The Sea of Stuff: a model to manage shared mutable data in a distributed environment

    Managing data is one of the main challenges in distributed systems and computer science in general. Data is created, shared, and managed across heterogeneous distributed systems of users, services, applications, and devices without a clear and comprehensive data model. This technological fragmentation and lack of a common data model result in a poor understanding of what data is, how it evolves over time, how it should be managed in a distributed system, and how it should be protected and shared. From a user perspective, for example, backing up data over multiple devices is a hard and error-prone process, and synchronising data with a cloud storage service can result in conflicts and unpredictable behaviours. This thesis identifies three challenges in data management: (1) how to extend the current data abstractions so that content, for example, is accessible irrespective of its location, versionable, and easy to distribute; (2) how to enable transparent data storage relative to locations, users, applications, and services; and (3) how to allow data owners to protect data against malicious users and automatically control content over a distributed system. These challenges are studied in detail in relation to the current state of the art and addressed throughout the rest of the thesis. The artefact of this work is the Sea of Stuff (SOS), a generic data model of immutable, self-describing, location-independent entities which allows the construction of a distributed system where data is accessible and organised irrespective of its location, easy to protect, and automatically managed according to a set of user-defined rules. The evaluation of this thesis demonstrates the viability of the SOS model for managing data in a distributed system and of using user-defined rules to automatically manage data across multiple nodes. "This work was supported by Adobe Systems, Inc. and EPSRC [grant number EP/M506631/1]" - from the Acknowledgements page.
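
    A minimal sketch of the content-addressing idea behind such entities follows (hypothetical field names; the actual SOS manifests are richer and include protection and ownership information): an entity's identifier is the hash of its content, which makes it immutable and location-independent by construction, and a new version is simply a new entity that references its predecessor.

        # Sketch of immutable, content-addressed, versioned entities
        # (hypothetical structure, not the actual SOS manifest format).
        import hashlib
        import json

        def guid(data: bytes) -> str:
            """Identifier derived from content alone, hence location-independent."""
            return hashlib.sha256(data).hexdigest()

        def make_version(content: bytes, previous: str | None = None) -> dict:
            """An immutable manifest naming the content and, optionally, its predecessor."""
            return {"content": guid(content), "previous": previous}

        v1 = make_version(b"draft of chapter 1")
        v2 = make_version(b"revised chapter 1", previous=guid(json.dumps(v1).encode()))

        # Any node holding the same bytes derives the same identifier, so replicas on
        # different devices or services can be recognised and de-duplicated.
        assert guid(b"draft of chapter 1") == v1["content"]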