68 research outputs found

    Implementation Of A Raptorq-Based Protocol For Peer To Peer Network

    The object of this thesis is to develop and test a Ruby-based implementation of the RaptorQP2P protocol. RaptorQP2P is a novel peer-to-peer protocol based on RaptorQ forward error correction that facilitates delivery of a single file to a large number of peers. It applies two levels of RaptorQ encoding to the source file before packet transmission. Download completion time using RaptorQP2P was found to be significantly improved compared with BitTorrent. We developed a Ruby interface to the proprietary Qualcomm RaptorQ software development kit, and used this interface to perform the two levels of RaptorQ encoding and decoding. Our implementation uses five threads to realize the RaptorQP2P features: thread 1 runs as a server to accept connection requests from new peers; thread 2 works as a client to connect to other peers; thread 3 sends data (pieces) and thread 4 receives data from neighboring peers; thread 5 manages the piece-map status, the peer list, and choking of peers. We first tested the communication modules of the implementation, then set up scheduled transmission tests to validate the intelligent symbol transmission scheduling design. Finally, we set up a multi-peer network for near-practical tests, using 5 Raspberry Pi single-board computers acting as 1 seeder and 4 leechers. The seeder has the whole file and delivers it to the 4 leechers simultaneously; the 4 leechers also exchange parts of the file with each other based on what they have received. Test results show that our implementation attains all the features of RaptorQP2P: it uses two levels of RaptorQ encoding; a peer is able to download a piece from multiple neighbors simultaneously; and a peer can forward the received encoded symbols of a piece to other peers even before it has the full piece.
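    The five-thread peer structure described above can be sketched roughly as follows. This is a minimal illustration in Python rather than the thesis's Ruby implementation, showing only the send/receive threads and the shared piece-map state; all class and attribute names are hypothetical.

```python
import threading
import queue

class PeerNode:
    """Illustrative peer skeleton, not the thesis implementation."""

    def __init__(self):
        self.outgoing = queue.Queue()   # symbols scheduled for sending (thread 3)
        self.incoming = queue.Queue()   # symbols received from neighbors (thread 4)
        self.piece_map = {}             # piece id -> count of received symbols
        self.lock = threading.Lock()    # protects shared state (thread 5's job)

    def send_loop(self):
        """Thread 3: drain the outgoing queue and transmit symbols."""
        while True:
            item = self.outgoing.get()
            if item is None:            # sentinel: shut the thread down
                break
            # transmit item over the network (omitted in this sketch)

    def recv_loop(self):
        """Thread 4: collect symbols and update the shared piece map."""
        while True:
            item = self.incoming.get()
            if item is None:
                break
            piece_id, _symbol = item
            with self.lock:
                self.piece_map[piece_id] = self.piece_map.get(piece_id, 0) + 1
```

    A node would start one `threading.Thread` per loop; once enough encoded symbols for a piece have accumulated in `piece_map`, the RaptorQ decoder can reconstruct that piece.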

    Scalable download protocols

    Scalable on-demand content delivery systems, designed to effectively handle increasing request rates, typically use service aggregation or content replication techniques. Service aggregation relies on one-to-many communication techniques, such as multicast, to efficiently deliver content from a single sender to multiple receivers. With replication, multiple geographically distributed replicas of the service or content share the load of processing client requests and enable delivery from a nearby server. Previous scalable protocols for downloading large, popular files from a single server include batching and cyclic multicast. Analytic lower bounds developed in this thesis show that neither of these protocols consistently yields performance close to optimal. New hybrid protocols are proposed that achieve within 20% of the optimal delay in homogeneous systems, as well as within 25% of the optimal maximum client delay in all heterogeneous scenarios considered. In systems utilizing both service aggregation and replication, well-designed policies determining which replica serves each request must balance the objectives of achieving high locality of service and high efficiency of service aggregation. By comparing classes of policies, using both analysis and simulations, this thesis shows that there are significant performance advantages in using current system state information (rather than only proximities and average loads) and in deferring selection decisions when possible. Most of these performance gains can be achieved using only "local" (rather than global) request information. Finally, this thesis proposes adaptations of previously proposed peer-assisted download techniques to support a streaming (rather than download) service, enabling playback to begin well before the entire media file is received. These protocols split each file into pieces, which can be downloaded from multiple sources, including other clients downloading the same file. Using simulations, a candidate protocol is presented and evaluated. The protocol includes both a piece selection technique that effectively mediates the conflict between achieving high piece diversity and the in-order requirements of media file playback, as well as a simple on-line rule for deciding when playback can safely commence.
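    The tension between piece diversity and in-order playback mentioned above is commonly mediated by a hybrid selection rule. The following is a minimal sketch, not the thesis's actual protocol; the function, its parameters, and the probability split are all illustrative assumptions.

```python
import random

def select_piece(needed, rarity, playback_pos, p_inorder=0.7):
    """Pick the next piece to request from a neighbor.

    needed:        set of missing piece indices
    rarity:        dict mapping piece index -> number of neighbors holding it
    playback_pos:  index of the piece needed next for playback
    With probability p_inorder, choose the earliest missing piece
    (serves timely playback); otherwise choose the rarest piece
    (serves swarm-wide piece diversity).
    """
    candidates = [p for p in needed if p >= playback_pos]
    if not candidates:
        candidates = list(needed)
    if random.random() < p_inorder:
        return min(candidates)                              # in-order bias
    return min(candidates, key=lambda p: rarity.get(p, 0))  # rarest-first
```

    Tuning `p_inorder` trades startup smoothness against upload usefulness to other peers; a real protocol would also adapt it to buffer occupancy.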

    Digital Fountain for Multi-node Aggregation of Data in Blockchains

    Blockchain scalability is one of the issues that concerns its current adopters. The current popular blockchains were initially designed with imperfections that introduce fundamental bottlenecks, limiting their ability to achieve higher throughput and lower latency. One of the major bottlenecks for existing blockchain technologies is fast block propagation: faster block propagation enables a miner to reach a majority of the network within a time constraint, leading to a lower orphan rate and better profitability. To attain a throughput that could compete with current state-of-the-art transaction processing while keeping block intervals the same as today, a 24.3-gigabyte block would be required every 10 minutes. With an average transaction size of 500 bytes, this translates to 48,600,000 transactions every 10 minutes, or about 81,000 transactions per second. To synchronize such large blocks quickly across the network while maintaining consensus by keeping the orphan rate below 50%, this thesis proposes aggregating partial block data from multiple nodes using digital fountain codes. The advantage of a fountain code is that all connected peers can send parts of the data in encoded form; when the receiving peer has enough data, it decodes the information to reconstruct the block. Because each peer sends only partial information, the data can be relayed over UDP instead of TCP, improving upon the propagation speed of current blockchains. The fountain codes applied in this research are Raptor codes, which allow a practically unlimited number of encoding symbols to be generated. Applied to blockchains, this approach increases the success rate of block delivery under decode failures.
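    The throughput arithmetic above can be reproduced directly from the stated block size, transaction size, and block interval:

```python
block_interval_s = 10 * 60   # 10-minute block interval, in seconds
avg_tx_bytes = 500           # average transaction size, as stated
block_bytes = 24.3e9         # 24.3-gigabyte block

tx_per_block = block_bytes / avg_tx_bytes   # transactions per block
tps = tx_per_block / block_interval_s       # transactions per second

print(int(tx_per_block), int(tps))  # → 48600000 81000
```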

    Slurpie: A Cooperative Bulk Data Transfer Protocol

    We present Slurpie: a peer-to-peer protocol for bulk data transfer. Slurpie is specifically designed to reduce client download times for large, popular files, and to reduce load on servers that serve these files. Slurpie employs a novel adaptive downloading strategy to increase client performance, and a randomized backoff strategy to precisely control load on the server. We describe a full implementation of the Slurpie protocol, and present results from both controlled local-area and wide-area testbeds. Our results show that Slurpie clients improve performance as the size of the network increases, and that the server is completely insulated from large flash crowds entering the Slurpie network.
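    The core idea of a randomized backoff that insulates the server can be sketched as follows. This is an illustrative approximation, not Slurpie's actual algorithm; the function and parameter names are hypothetical. Each client contacts the origin server only with a probability inversely proportional to its estimate of the swarm size, so the expected number of simultaneous server connections stays roughly constant no matter how large the flash crowd grows.

```python
import random

def should_hit_server(estimated_peers, target_server_conns=5):
    """Decide whether this client contacts the origin server directly.

    With probability target_server_conns / estimated_peers, the expected
    number of clients hitting the server is ~target_server_conns,
    independent of swarm size; everyone else downloads from peers.
    """
    if estimated_peers <= target_server_conns:
        return True
    return random.random() < target_server_conns / estimated_peers
```

    The harder part in practice, which Slurpie addresses with its own mechanisms, is obtaining a good estimate of the swarm size in a decentralized way.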

    Systems-compatible Incentives

    Originally, the Internet was a technological playground, a collaborative endeavor among researchers who shared the common goal of achieving communication. Self-interest was once not a concern, but the motivations of the Internet's participants have broadened. Today, the Internet consists of millions of commercial entities and nearly 2 billion users, who often have conflicting goals. For example, while Facebook gives users the illusion of access control, users do not have the ability to control how the personal data they upload is shared or sold by Facebook. Even in BitTorrent, where all users seemingly have the same motivation of downloading a file as quickly as possible, users can subvert the protocol to download more quickly without giving their fair share. These examples demonstrate that protocols that are merely technologically proficient are not enough. Successful networked systems must account for potentially competing interests. In this dissertation, I demonstrate how to build systems that give users incentives to follow the systems' protocols. To achieve incentive-compatible systems, I apply mechanisms from game theory and auction theory to protocol design. This approach has been considered in prior literature, but unfortunately has resulted in few real, deployed systems with incentives to cooperate. I identify the primary challenge in applying mechanism design and game theory to large-scale systems: the goals and assumptions of economic mechanisms often do not match those of networked systems. For example, while auction theory may assume a centralized clearing house, there is no analog in a decentralized system seeking to avoid single points of failure or centralized policies. Similarly, game theory often assumes that each player is able to observe everyone else's actions, or at the very least know how many other players there are, but maintaining perfect system-wide information is impossible in most systems. In other words, not all incentive mechanisms are systems-compatible. The main contribution of this dissertation is the design, implementation, and evaluation of various systems-compatible incentive mechanisms and their application to a wide range of deployable systems. These systems include BitTorrent, which is used to distribute a large file to a large number of downloaders; PeerWise, which leverages user cooperation to achieve lower latencies in Internet routing; and Hoodnets, a new system I present that allows users to share their cellular data access to obtain greater bandwidth on their mobile devices. Each of these systems represents a different point in the design space of systems-compatible incentives. Taken together, along with their implementations and evaluations, these systems demonstrate that systems-compatibility is crucial in achieving practical incentives in real systems. I present design principles outlining how to achieve systems-compatible incentives, which may serve an even broader range of systems than considered herein. I conclude this dissertation with what I consider to be the most important open problems in aligning the competing interests of the Internet's participants.

    Peer-to-Peer-Based Reliability for Multicast Sessions

    As storage and network capacities keep growing, there is an increasing need for distributing large amounts of data through networks. At the moment, several alternatives attempt to solve the problems of large content distribution; unfortunately, none of them is optimal in terms of scalability and the amount of traffic generated. We introduce a protocol that tries to optimize these two factors by combining two existing solutions, IP multicast and peer-to-peer networking. IP multicast is used to minimize the traffic generated by the protocol with good scalability. However, since IP multicast is not reliable, a peer-to-peer approach is used to provide this functionality. Our experiments show that the merging of these mechanisms is feasible and provides good performance in terms of distribution time and resources used.
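    A common way to add peer-based reliability on top of unreliable IP multicast is for each receiver to detect which packets it missed in a multicast round and request repairs from its neighbors rather than from the sender. The sketch below is an illustrative assumption about such a repair-planning step, not the protocol from this thesis; all names are hypothetical.

```python
from itertools import cycle

def plan_repairs(received_ids, total, peers):
    """Map each packet lost in a multicast round to a repairing peer.

    received_ids: iterable of packet ids received via multicast
    total:        number of packets sent in the round (ids 0..total-1)
    peers:        neighbor ids to spread repair requests over (round-robin)
    Returns {lost_packet_id: peer_id}.
    """
    lost = sorted(set(range(total)) - set(received_ids))
    peer_cycle = cycle(peers)
    return {pkt: next(peer_cycle) for pkt in lost}
```

    Spreading repair requests over neighbors keeps the load off the multicast sender, which is the point of combining the two mechanisms.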

    Community computation

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 171-186).
    In this thesis we lay the foundations for a distributed, community-based computing environment that taps the resources of a community to perform tasks, whether computationally hard, economically prohibitive, or physically inconvenient, that one individual is unable to accomplish efficiently. We introduce community coding, where information systems meet social networks, to tackle some of the challenges in this new paradigm of community computation. We design algorithms and protocols and build system prototypes to demonstrate the power of community computation to better address reliability, scalability, and security issues, the main challenges in many emerging community-computing environments, in application scenarios such as community storage, community sensing, and community security. For example, we develop a community storage system based on a distributed peer-to-peer (P2P) storage paradigm, in which we take an array of small, periodically accessible, individual computers/peer nodes and create a secure, reliable, and large distributed storage system. The goal is for each node to act as if it has immediate access to a pool of information larger than it could hold itself, to which it can contribute new content in a manner that is both open and secure. Such a contributory and self-scaling community storage system is particularly useful where reliable infrastructure is not readily available, in that it facilitates easy ad-hoc construction and easy portability. In another application scenario, we develop a novel framework for community sensing with a group of image sensors. The goal is to present a set of novel tools in which software, rather than humans, examines the collection of images sensed by a group of image sensors to determine what is happening in the field of view. We also present several design principles for community security. In one application example, we present a community-based email spam detection approach to deal with email spam more efficiently.
    by Fulu Li. Ph.D.