
    Distributed k-ary System: Algorithms for Distributed Hash Tables

    This dissertation presents algorithms for data structures called distributed hash tables (DHTs) or structured overlay networks, which are used to build scalable, self-managing distributed systems. The provided algorithms guarantee lookup consistency in the presence of dynamism: they guarantee consistent lookup results, and routing that never fails, while nodes join and leave. Previous algorithms for lookup consistency either suffer from starvation, do not work in the presence of failures, or lack a proof of correctness. Several group communication algorithms for structured overlay networks are presented. We provide an overlay broadcast algorithm which, unlike previous algorithms, avoids redundant messages, reaching all nodes in O(log n) time while using O(n) messages, where n is the number of nodes in the system. The broadcast algorithm is used to build overlay multicast. We introduce bulk operations, which enable a node to efficiently make multiple lookups or send a message to all nodes in a specified set of identifiers. The algorithm ensures that all specified nodes are reached in O(log n) time, sending at most O(log n) messages per node regardless of the input size of the bulk operation, and it avoids sending redundant messages. Previous approaches required multiple lookups, which consume more messages and can render the initiator a bottleneck. Our algorithms are used in DHT-based storage systems, where nodes may do thousands of lookups to fetch large files. We use the bulk operation algorithm to construct a pseudo-reliable broadcast algorithm. Bulk operations can also be used to implement efficient range queries. Finally, we describe a novel way to place replicas in a DHT, called symmetric replication, that enables parallel recursive lookups. Parallel lookups are known to reduce latencies; previously, however, they required costly iterative lookups. Moreover, under symmetric replication joins and leaves only require exchanging O(1) messages, while other schemes require at least log(f) messages for a replication degree of f. The algorithms have been implemented in a middleware called the Distributed k-ary System (DKS), which is briefly described.
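
    The symmetric replication scheme lends itself to a compact illustration. Below is a minimal Python sketch, assuming an identifier space of size N where the replication degree f divides N, as the scheme requires; the function name is ours, not the DKS API. Because every node can compute the full replica set of any key locally, it can issue parallel recursive lookups, and join/leave maintenance stays at O(1) messages.

        def replica_ids(k: int, N: int, f: int) -> list[int]:
            """Identifiers of the f replicas of key k under symmetric replication."""
            assert N % f == 0, "the replication degree must divide the identifier space"
            step = N // f
            return [(k + i * step) % N for i in range(f)]

        # Example: a 16-identifier ring with replication degree 4. Any node can
        # derive these ids locally and issue 4 parallel recursive lookups.
        print(replica_ids(5, N=16, f=4))   # [5, 9, 13, 1]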

    A Survey on Transactional Stream Processing

    Transactional stream processing (TSP) strives to create a cohesive model that merges the advantages of both transactional and stream-oriented guarantees. Over the past decade, numerous endeavors have contributed to the evolution of TSP solutions, uncovering similarities and distinctions among them. Despite these advances, a universally accepted standard approach for integrating transactional functionality with stream processing remains to be established. Existing TSP solutions predominantly concentrate on specific application characteristics and involve complex design trade-offs. This survey introduces TSP and presents our perspective on its future progression. Our primary goals are twofold: to provide insights into the diverse TSP requirements and methodologies, and to inspire the design and development of groundbreaking TSP systems.

    Currency management system: a distributed banking service for the grid

    Market-based resource allocation requires mechanisms to regulate and manage the usage of traded resources. One way to exercise this control is to define some kind of currency. Within this context, we have implemented a first prototype of our Currency Management System, a decentralized and scalable banking service for the Grid. Our system stores user accounts within a DHT, and its basic operation is transferFunds which, as its name suggests, transfers virtual currency from one account to another.
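
    A hedged sketch of what transferFunds might look like over a DHT-backed account store is shown below; the dht.get/dht.put interface and the Account record are hypothetical stand-ins rather than the prototype's actual API, and a production system would need to make the two writes atomic.

        from dataclasses import dataclass

        @dataclass
        class Account:
            owner: str
            balance: int   # units of virtual currency

        def transfer_funds(dht, src_id: str, dst_id: str, amount: int) -> bool:
            src = dht.get(src_id)          # hypothetical DHT interface
            dst = dht.get(dst_id)
            if src.balance < amount:
                return False               # insufficient funds
            src.balance -= amount
            dst.balance += amount
            dht.put(src_id, src)           # in practice these two writes
            dht.put(dst_id, dst)           # must commit atomically
            return True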

    CATS: linearizability and partition tolerance in scalable and self-organizing key-value stores

    Distributed key-value stores provide scalable, fault-tolerant, and self-organizing storage services, but fall short of guaranteeing linearizable consistency in partially synchronous, lossy, partitionable, and dynamic networks, where data is distributed and replicated automatically by the principle of consistent hashing. This paper introduces consistent quorums as a solution for achieving atomic consistency. We present the design and implementation of CATS, a distributed key-value store which uses consistent quorums to guarantee linearizability and partition tolerance under such adverse and dynamic network conditions. CATS is scalable, elastic, and self-organizing; these are key properties for modern cloud storage middleware. Our system shows that consistency can be achieved with practical performance and modest throughput overhead (5%) for read-intensive workloads.
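
    The quorum mechanics behind this guarantee can be sketched in the ABD style below. This is a minimal majority-quorum read/write over an in-process Replica stand-in; CATS's consistent quorums additionally tag every quorum with the replication-group view derived from consistent hashing and reject responses from stale views, a check omitted here.

        class Replica:
            def __init__(self):
                self.data = {}   # key -> (value, timestamp)

            def store(self, key, value, ts):
                if ts >= self.data.get(key, (None, -1))[1]:
                    self.data[key] = (value, ts)
                return True

            def load(self, key):
                return self.data.get(key, (None, -1))

        def qwrite(replicas, key, value, ts):
            acks = sum(r.store(key, value, ts) for r in replicas)
            return acks > len(replicas) // 2        # majority must acknowledge

        def qread(replicas, key):
            value, ts = max((r.load(key) for r in replicas), key=lambda a: a[1])
            qwrite(replicas, key, value, ts)        # write-back preserves linearizability
            return value

        group = [Replica() for _ in range(3)]
        qwrite(group, "x", 42, ts=1)
        print(qread(group, "x"))   # 42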

    Design and applications of a secure and decentralized DHT

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 105-114). By Christopher T. Lesniewski-Laas.
    Distributed Hash Tables (DHTs) are a powerful building block for highly scalable decentralized systems. They route requests over a structured overlay network to the node responsible for a given key. DHTs are subject to the well-known Sybil attack, in which an adversary creates many false identities in order to increase its influence and deny service to honest participants. Defending against this attack is challenging because (1) in an open network, creating many fake identities is cheap; (2) an attacker can subvert periodic routing-table maintenance to increase its influence over time; and (3) specific keys can be targeted by clustering attacks. As a result, without centralized admission control, previously existing DHTs could not provide strong availability guarantees. This dissertation describes Whanau, a novel DHT routing protocol which is both efficient and strongly resistant to the Sybil attack. Whanau solves this long-standing problem by using the social connections between users to build routing tables that enable Sybil-resistant one-hop lookups. The number of Sybils in the social network does not affect the protocol's performance, but links between honest users and Sybils do. With a social network of n well-connected honest nodes, Whanau provably tolerates up to O(n/log n) such "attack edges". This means that an attacker must convince a large fraction of the honest users to make a social connection with the adversary's Sybils before any lookups will fail. Whanau uses techniques from structured DHTs to build routing tables that contain O(√n log n) entries per node. It introduces the idea of layered identifiers to counter clustering attacks, which have proven particularly challenging for previous DHTs to handle. Using the constructed tables, lookups provably take constant time. Simulation results, using large-scale social network graphs from LiveJournal, Flickr, YouTube, and DBLP, confirm the analytic prediction that Whanau provides high availability in the face of powerful Sybil attacks. Experimental results using PlanetLab demonstrate that an implementation of the Whanau protocol can handle reasonable levels of churn.
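
    The table-building step can be illustrated with the random-walk sampling at the heart of the protocol: each node takes many short random walks on the social graph and keeps the endpoints as routing-table entries, since walks started at honest nodes rarely cross the few attack edges. The sketch below is illustrative only; the graph encoding and parameter choices are ours, and the real protocol layers identifiers and finger structures on top of these samples.

        import math
        import random

        def sample_routing_table(graph: dict, start, walk_len: int = 10) -> list:
            """Collect ~sqrt(n)*log(n) endpoints of short random walks from start."""
            n = len(graph)
            samples = int(math.sqrt(n) * math.log(n))
            table = []
            for _ in range(samples):
                node = start
                for _ in range(walk_len):              # a short walk stays in the
                    node = random.choice(graph[node])  # honest region w.h.p.
                table.append(node)
            return table

        # Example on a toy social graph (adjacency lists):
        g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
        print(sample_routing_table(g, start=0))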

    Design of efficient and elastic storage in the cloud


    MDCC: Multi-Data Center Consistency

    Replicating data across multiple data centers not only allows moving the data closer to the user, thus reducing latency for applications, but also increases availability in the event of a data center failure. It is therefore not surprising that companies like Google, Yahoo, and Netflix already replicate user data across geographically different regions. However, replication across data centers is expensive: inter-data center network delays are in the hundreds of milliseconds and vary significantly. Synchronous wide-area replication with strong consistency is therefore considered infeasible, and current solutions either settle for asynchronous replication, which implies the risk of losing data in the event of failures, restrict consistency to small partitions, or give up consistency entirely. With MDCC (Multi-Data Center Consistency), we describe the first optimistic commit protocol that does not require a master or partitioning and is strongly consistent at a cost similar to eventually consistent protocols. MDCC can commit transactions in a single round trip across data centers in the normal operational case. We further propose a new programming model which empowers the application developer to handle the longer and unpredictable latencies caused by inter-data center communication. Our evaluation using the TPC-W benchmark, with MDCC deployed across 5 geographically diverse data centers, shows that MDCC achieves throughput and latency similar to eventually consistent quorum protocols and can sustain a data center outage without a significant impact on response times, all while guaranteeing strong consistency.
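
    The single-round-trip fast path can be sketched as follows. MDCC builds on Fast Paxos-style quorums; the toy below only counts acceptances against a fast quorum of ceil(3n/4) and falls back when the quorum is missed. The Acceptor class and messaging are invented for illustration, and the real protocol resolves conflicts between concurrent transactions rather than blindly accepting.

        import math

        class Acceptor:
            def __init__(self):
                self.accepted = []

            def accept(self, update) -> bool:
                self.accepted.append(update)   # no conflict detection in this toy
                return True

        def fast_quorum(n: int) -> int:
            return math.ceil(3 * n / 4)        # Fast Paxos fast-quorum size

        def try_fast_commit(acceptors, txn_updates) -> bool:
            for update in txn_updates:         # one cross-datacenter round trip
                accepts = sum(a.accept(update) for a in acceptors)
                if accepts < fast_quorum(len(acceptors)):
                    return False               # fall back to classic Paxos rounds
            return True

        dcs = [Acceptor() for _ in range(5)]
        print(try_fast_commit(dcs, [("x", 1), ("y", 2)]))   # True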

    Static Web content distribution and request routing in a P2P overlay

    Collaboration over the Internet has become a cornerstone of modern computing, as the essence of information processing and content management has shifted to networked and Web-based systems. As a result, effective and reliable access to networked resources has become a critical commodity in any modern infrastructure. To cope with the limitations of the traditional client-server networking model, most popular Web-based services employ separate Content Delivery Networks (CDNs) to distribute the server-side resource consumption. Since Web applications are often latency-critical, CDNs are additionally being adopted to optimize the content delivery latencies perceived by Web clients. Because of this prevalent connection model, Web content delivery has grown into a notable industry. The rapid growth in the number of mobile devices further increases the resources required from the originating server, as content is also accessed on the go. While the Web has become one of the most utilized sources of information and digital content, the openness of the Internet is simultaneously being reduced by organizations and governments preventing access to undesired resources. Access to information may be regulated or altered to suit political interests or organizational benefits, conflicting with the initial design principle of an unrestricted and independent information network. This thesis contributes to the development of a more efficient and open Internet by combining a feasibility study with a preliminary design of a peer-to-peer based Web content distribution and request routing mechanism. The suggested design addresses both the effectiveness of the current client-server networking model and the openness of information distributed over the Internet. Based on the properties of existing peer-to-peer implementations, the suggested overlay design is intended to provide low-latency access to any Web content without sacrificing end-user privacy. The overlay is additionally designed to increase the cost of censorship by forcing a successful blockade to isolate the censored network from the rest of the Internet.
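
    Request routing in such an overlay typically reduces to consistent hashing: the URL is hashed into the overlay's identifier space and the request is forwarded to the node closest to that identifier. The sketch below, with an invented node set and ring size, shows the successor-based variant; since the thesis design is preliminary, this is only one plausible realization.

        import hashlib
        from bisect import bisect_left

        def content_id(url: str, bits: int = 32) -> int:
            digest = hashlib.sha1(url.encode()).digest()
            return int.from_bytes(digest, "big") % (1 << bits)

        def responsible_node(node_ids: list, key: int) -> int:
            ring = sorted(node_ids)
            i = bisect_left(ring, key)
            return ring[i % len(ring)]     # successor on the identifier ring

        nodes = [content_id(f"node-{i}") for i in range(8)]
        url = "https://example.org/index.html"
        print(responsible_node(nodes, content_id(url)))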

    A Reference Framework for Managing Security Risks Using Blockchain

    Various programs (e.g., OWASP), threat models (e.g., STRIDE), security risk management (SRM) models (e.g., ISSRM), and regulations (e.g., GDPR) exist to communicate and reduce the security threats involved in building secure software. However, security threats continuously evolve because the traditional technology infrastructure does not implement security measures by design. Blockchain appears to mitigate traditional applications' security threats. Although blockchain-based applications are considered less vulnerable, they have not become a silver bullet against different security threats. Moreover, the blockchain domain is constantly evolving, providing new techniques and often interchangeable design concepts, resulting in conceptual ambiguity and confusion in treating security threats effectively. Overall, we address the problem of traditional applications' SRM using blockchain as a countermeasure, as well as the SRM of blockchain-based applications. We start by surveying how blockchain mitigates the security threats of traditional applications; the outcome is a blockchain-based reference model (BbRM) that adheres to the SRM domain model. Next, we present an upper-level reference ontology (ULRO) as a foundation ontology and provide two instantiations of the ULRO. The first instantiation includes Corda as a permissioned blockchain and a financial case. The second instantiation includes permissionless blockchain components and a healthcare case. Both ontology representations help in the SRM of traditional and blockchain-based applications. Furthermore, we built a web-based ontology parsing tool, OwlParser. These contributions result in an ontology-based security reference framework for managing security risks using blockchain. The framework is dynamic, supports the iterative process of SRM, and potentially lessens the security threats of traditional and blockchain-based applications.
    https://www.ester.ee/record=b551352
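
    As a rough illustration of the kind of processing an ontology parser performs, the snippet below loads an OWL file and enumerates its classes with rdflib; rdflib and the file name are stand-ins here, not OwlParser's actual stack.

        from rdflib import Graph
        from rdflib.namespace import OWL, RDF

        g = Graph()
        g.parse("ulro.owl", format="xml")        # hypothetical ULRO export
        for cls in g.subjects(RDF.type, OWL.Class):
            print(cls)                           # each modeled concept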