143 research outputs found

    A Lightweight Approach for Improving the Lookup Performance in Kademlia-type Systems

    Full text link
    Discovery of nodes and content in large-scale distributed systems is today generally based on Kademlia. Understanding Kademlia-type systems and improving their performance is essential for maintaining a high service quality as the number of participants grows, particularly when those systems are adopted by latency-sensitive applications. This paper contributes to the understanding of Kademlia by studying the impact of diversifying neighbours' identifiers within each routing-table bucket on lookup performance. We propose a new, yet backward-compatible, neighbour selection scheme that attempts to maximize this diversity. The scheme causes no additional overhead beyond negligible computations for comparing the diversity of identifiers. We present a theoretical model for the impact of the new scheme on lookup hop count and validate it against simulations of three exemplary Kademlia-type systems. We also measure the performance gain enabled by a partial deployment of the scheme in the real KAD system. The results confirm the superiority of systems that incorporate our scheme. Comment: 13 pages, 8 figures; conference version 'Diversity Entails Improvement: A new Neighbour Selection Scheme for Kademlia-type Systems' at IEEE P2P 201
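
    As a rough illustration of the general idea only (the concrete scheme is defined in the paper), the Python sketch below admits a candidate into a full k-bucket when swapping it in strictly increases the number of distinct sub-prefixes the bucket covers. Bucket capacity, sub-prefix width, and all names are hypothetical.

```python
# Hypothetical neighbour-selection sketch: keep each k-bucket covering as many
# distinct sub-prefixes as possible. All parameters below are illustrative.

K = 8          # assumed bucket capacity (k)
SUB_BITS = 3   # bits following the bucket's shared prefix used to bin IDs
ID_BITS = 160  # Kademlia/KAD identifier length

def sub_prefix(node_id: int, bucket_index: int) -> int:
    """The SUB_BITS bits of node_id that follow the bucket's common prefix."""
    shift = max(ID_BITS - bucket_index - SUB_BITS, 0)
    return (node_id >> shift) & ((1 << SUB_BITS) - 1)

def diversity(ids, bucket_index: int) -> int:
    """Diversity of a bucket = number of distinct sub-prefix bins it covers."""
    return len({sub_prefix(i, bucket_index) for i in ids})

def consider(bucket, candidate: int, bucket_index: int):
    """Return the bucket after (possibly) admitting the candidate: the candidate
    replaces an existing entry only if that strictly increases diversity."""
    if len(bucket) < K:
        return bucket + [candidate]
    best = bucket
    for i in range(len(bucket)):
        trial = bucket[:i] + bucket[i + 1:] + [candidate]
        if diversity(trial, bucket_index) > diversity(best, bucket_index):
            best = trial
    return best
```

    Intuitively, a lookup can strip more differing bits per hop when a bucket covers more sub-prefixes, so a rule of this kind would be expected to reduce hop counts without any extra messages.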

    Exploiting Parallelism in the Design of Peer-to-Peer Overlays

    Get PDF
    Many peer-to-peer overlay operations are inherently parallel, and this parallelism can be exploited by using multi-destination multicast routing, resulting in significant message reduction in the underlying network. We propose criteria for assessing when multicast routing can be used effectively, and compare multi-destination multicast and host-group multicast using these criteria. We show that the assumptions underlying the Chuang-Sirbu multicast scaling law are valid in large-scale peer-to-peer overlays, and thus Chuang-Sirbu is suitable for estimating the message reduction when unicast overlay messages are replaced with multicast messages. Using simulation, we evaluate the message savings in two overlay algorithms when multi-destination multicast routing is used in place of unicast messages. We further describe parallelism in a range of overlay algorithms, including multi-hop, variable-hop, load-balancing, random-walk, and measurement overlays.
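
    For context, the Chuang-Sirbu law is commonly stated as: the total link count of a multicast tree reaching m receivers grows roughly as L_u · m^0.8, where L_u is the average unicast path length. The snippet below is an illustrative back-of-the-envelope estimate based on that statement, not a result from the paper; it turns the law into the fraction of link traversals saved when m unicast messages are replaced by one multicast.

```python
# Rough illustration of the Chuang-Sirbu scaling law as commonly stated:
# the total link count of a multicast tree reaching m receivers is roughly
# L_u * m**0.8, with L_u the average unicast path length. The exponent 0.8
# is the empirically reported value; the savings estimate is illustrative.

def multicast_savings(m: int, k: float = 0.8) -> float:
    """Fraction of link traversals saved by one m-destination multicast
    compared with m separate unicasts: 1 - m**(k - 1)."""
    return 1.0 - m ** (k - 1.0)

for m in (2, 4, 8, 16):
    print(f"m={m:2d}  estimated savings ~ {multicast_savings(m):.0%}")
```

    With k = 0.8, the estimated savings grow from roughly 13% at m = 2 to about 43% at m = 16, which is the kind of message reduction the evaluation quantifies in concrete overlays.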

    Empirical and Analytical Perspectives on the Robustness of Blockchain-related Peer-to-Peer Networks

    Get PDF
    The inception of Bitcoin has sparked a large interest in decentralized systems. In particular, popular narratives imply that decentralization automatically leads to high security and resilience against attacks, even against powerful adversaries. In this thesis, we investigate whether these ascriptions are appropriate and whether decentralized applications are as robust as they are made out to be. To this end, we analyze three widely used exemplary systems that function as building blocks for blockchain applications: Ethereum as basic infrastructure, IPFS for distributed storage, and lastly "stablecoins" as tokens with a stable value. As recurring building blocks of decentralized applications, these components largely determine the security and resilience of the overall application. Furthermore, focusing on these building blocks allows us to look past individual applications and examine inherent systemic properties. The analysis is driven by a strongly empirical, mostly network-layer-based perspective, enriched with an economic point of view in the context of monetary stabilization. The resulting practical understanding allows us to delve into the systems' inherent properties.
The fundamental results of this thesis include the demonstration of a network-layer Eclipse attack on the Ethereum overlay, which can be leveraged to impede the delivery of transactions and blocks, with dire consequences for applications built on top of Ethereum. Furthermore, we extensively map the IPFS network through (1) systematic crawling of its DHT, as well as (2) monitoring of content requests. We show that while IPFS' hybrid overlay structure renders it quite robust against attacks, this virtue of the overlay is simultaneously a curse, as it allows for extensive monitoring of participating peers and the data they request. Lastly, we exchange the network-layer perspective for a mostly economic one in the context of monetary stabilization. We present a classification framework to (1) map out the stablecoin landscape and (2) provide a means to classify future system designs. With this work we not only scrutinize the ascriptions attributed to decentralized technologies; we also reached out to IPFS and Ethereum developers to discuss our results and remedy potential attack vectors.
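
    As a rough sketch of what systematic DHT crawling typically involves (a generic Kademlia-style outline, not the thesis's actual tooling), a crawler keeps a frontier of discovered peers and repeatedly asks them for nodes close to random targets until no new peers turn up; the find_node argument below stands in for the real network RPC.

```python
# Generic sketch of a Kademlia-style DHT crawl (illustrative only): starting
# from bootstrap peers, ask every known peer for nodes close to random targets
# and add any previously unseen peer to the frontier.
# `find_node(peer, target)` is supplied by the caller and wraps the actual
# FIND_NODE RPC of the system being crawled.
import random

def crawl(bootstrap_peers, find_node, queries_per_peer=16, id_bits=256):
    seen = set(bootstrap_peers)
    frontier = list(bootstrap_peers)
    while frontier:
        peer = frontier.pop()
        for _ in range(queries_per_peer):
            target = random.getrandbits(id_bits)        # random point in the ID space
            for neighbour in find_node(peer, target):   # peers returned by the RPC
                if neighbour not in seen:
                    seen.add(neighbour)
                    frontier.append(neighbour)
    return seen   # every peer discovered from the bootstrap set during the crawl
```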

    Ditto: Towards Decentralised Similarity Search for Web3 Services

    Get PDF
    The Web has become an integral part of life, and over the past decade it has become increasingly centralised, leading to a number of challenges such as censorship and control, particularly in search engines. Recently, the paradigm of the decentralised Web (DWeb), or Web3, has emerged, which aims to provide decentralised alternatives to current systems, with decentralised control, transparency, and openness. In this paper we introduce Ditto, a decentralised search mechanism for DWeb content based on similarity search. Ditto uses locality-sensitive hashing (LSH) to extract similarity signatures and records from content, which are stored in a decentralised index on top of a distributed hash table (DHT). Ditto uniquely supports numerous underlying content networks and content types, and supports various use cases, including keyword search. Our evaluation shows that our system is feasible and that its search quality, delay, and overhead are comparable to those currently accepted by users of DWeb and search systems.
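
    The abstract does not describe the signature construction; the sketch below shows one common MinHash/LSH pattern that a system of this kind could plausibly use: a per-document MinHash signature over character shingles, split into bands whose hashes could serve as keys in a DHT-backed index. All parameters and names are illustrative, not Ditto's actual design.

```python
# Illustrative MinHash/LSH signature construction; not Ditto's actual scheme.
import hashlib

def minhash_signature(text: str, num_hashes: int = 64, shingle_len: int = 4):
    """MinHash signature over character shingles of the input text."""
    shingles = {text[i:i + shingle_len]
                for i in range(max(len(text) - shingle_len + 1, 1))}
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.sha1(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingles
        ))
    return sig

def lsh_keys(sig, bands: int = 16):
    """Split the signature into bands; each band hash could act as a DHT key."""
    rows = len(sig) // bands
    return [hashlib.sha1(str(sig[b * rows:(b + 1) * rows]).encode()).hexdigest()
            for b in range(bands)]
```

    Documents whose signatures agree in at least one band end up under a shared key, which is what makes similarity lookups over a distributed index possible.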

    A holistic architecture using peer to peer (P2P) protocols for the internet of things and wireless sensor networks

    Get PDF
    Wireless Sensor Networks (WSNs) interact with the physical world using sensing and/or actuation. The wireless capability of WSN nodes allows them to be deployed close to the sensed phenomenon. Cheaper processing power and the use of micro IP stacks allow nodes to form an “Internet of Things” (IoT), integrating the physical world with the Internet in a distributed system of devices and applications. Applications using the sensor data may be located across the Internet from the sensor network, allowing Cloud services and Big Data approaches to store and analyse this data in a scalable manner, supported by new approaches in the area of fog and edge computing. Furthermore, the use of protocols such as the Constrained Application Protocol (CoAP) and data models such as IPSO Smart Objects has supported the adoption of IoT in a range of scenarios. IoT has the potential to become a realisation of Mark Weiser’s vision of ubiquitous computing, where tiny networked computers become woven into everyday life. This presents the challenge of scaling the technology down to resource-constrained devices and up to billions of devices. This will require seamless interoperability and abstractions that can support applications on Cloud services as well as on node devices with constrained computing and memory capabilities, limited development environments, and requirements on energy consumption. This thesis proposes a holistic architecture using concepts from tuple spaces and overlay Peer-to-Peer (P2P) networks. The architecture is termed holistic because it considers the flow of data from sensors through to services. The key contributions of this work are: a set of architectural abstractions that provide application-layer interoperability, a novel cache algorithm supporting leases, a tuple-space-based data store for local and remote data, and a P2P protocol with an innovative use of a DHT for building an overlay network. All these elements are designed for implementation on a resource-constrained node and to be extensible to server environments, as shown in a prototype implementation. This provides the basis for a new holistic P2P approach that will allow Wireless Sensor Networks and IoT to operate in a self-organising, ad hoc manner in order to deliver the promise of IoT.
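
    Since the architecture builds on the tuple-space abstraction, the minimal sketch below shows the core write/read/take operations with wildcard matching and an optional lease-style expiry. It is an illustration of the abstraction only, not the thesis's data store or cache algorithm; all names are hypothetical.

```python
# Minimal tuple-space sketch: write/read/take with wildcard matching and an
# optional lease (expiry). Illustrative only.
import time

class TupleSpace:
    def __init__(self):
        self._tuples = []  # list of (tuple, expiry-or-None)

    def write(self, tup, lease_seconds=None):
        expiry = time.time() + lease_seconds if lease_seconds else None
        self._tuples.append((tup, expiry))

    def _matches(self, tup, pattern):
        return len(tup) == len(pattern) and all(
            p is None or p == v for p, v in zip(pattern, tup))

    def read(self, pattern):
        """Return the first live tuple matching the pattern (None = wildcard)."""
        now = time.time()
        self._tuples = [(t, e) for t, e in self._tuples if e is None or e > now]
        for t, _ in self._tuples:
            if self._matches(t, pattern):
                return t
        return None

    def take(self, pattern):
        """Like read(), but removes the matched tuple from the space."""
        t = self.read(pattern)
        if t is not None:
            self._tuples = [(u, e) for u, e in self._tuples if u != t]
        return t

space = TupleSpace()
space.write(("temp", "node-42", 21.5), lease_seconds=60)
print(space.read(("temp", None, None)))   # ('temp', 'node-42', 21.5)
```

    The lease keeps stale sensor readings from accumulating on constrained nodes, which is the general role leases play in the cache design described above.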

    Kadcast-NG: A Structured Broadcast Protocol for Blockchain Networks

    Get PDF
    In order to propagate transactions and blocks, today’s blockchain systems rely on unstructured peer-to-peer overlay networks. In such networks, broadcast is known to be an inefficient operation in terms of message complexity and overhead. In addition to the impact on system performance, inefficient or delayed block propagation may have severe consequences for the security and fairness of the consensus layer. In contrast, the Kadcast protocol is a structured peer-to-peer protocol for block and transaction propagation in blockchain networks. Kadcast utilizes the well-known overlay topology of Kademlia to realize an efficient broadcast operation with tunable overhead. We study the security and privacy of the Kadcast protocol based on probabilistic models and analyze its resilience to packet losses and node failures. Moreover, we evaluate Kadcast’s block delivery performance, broadcast reliability, efficiency, and security based on advanced network simulations. Lastly, we introduce a QUIC-based prototype implementation of the Kadcast protocol and show its merits through deployment in a large-scale cloud-based testbed.
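
    As background on the broadcast rule, the simplified sketch below (β = 1, placeholder names, and a print-based send stub, so an outline rather than the paper's implementation) captures the bucket-delegation idea: a node receiving a block with height h forwards it to one peer from every bucket with a lower index, using that bucket index as the new height, so each region of the ID space is covered exactly once.

```python
# Simplified Kadcast-style bucket delegation (beta = 1); illustrative only.
import random

def send(peer, block, height):
    """Placeholder for the actual network transmission (UDP/QUIC in Kadcast)."""
    print(f"-> {peer}: block {block!r} with height {height}")

def kadcast_forward(buckets, block, height):
    """buckets: list indexed by bucket number, each entry a list of known peers."""
    for i in range(min(height, len(buckets))):
        if buckets[i]:
            delegate = random.choice(buckets[i])  # pick one delegate per bucket
            send(delegate, block, height=i)       # delegate covers buckets < i

# An initiator broadcasts with height equal to its number of buckets, e.g.:
# kadcast_forward(my_buckets, new_block, height=len(my_buckets))
```

    Increasing the number of delegates per bucket (β > 1) trades extra messages for resilience to packet loss and node failures, which is the tunable overhead the abstract refers to.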

    Building Robust Distributed Infrastructure Networks

    Get PDF
    Many competing designs for Distributed Hash Tables exist, exploring multiple models of addressing, routing, and network maintenance. Designing a general theoretical model and implementation of a Distributed Hash Table allows exploration of the possible properties of DHTs. We propose a generalized model of DHT behavior, centered on using Delaunay triangulation in a given metric space to maintain the network's topology. We show that with this model we can produce network topologies that approximate existing DHT methods and provide a starting point for further exploration. We use our generalized model of DHT construction to design and implement more efficient Distributed Hash Table protocols, and discuss the qualities of potential successors to existing DHT technologies.
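
    To make the Delaunay-based neighbour rule concrete, the sketch below is a toy 2-D example using scipy (not the thesis's implementation, which targets general metric spaces): each node's overlay neighbour set is taken to be the nodes it shares a Delaunay edge with.

```python
# Toy illustration: derive overlay neighbour sets from the Delaunay
# triangulation of node coordinates in a 2-D metric space.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.random((50, 2))          # 50 node addresses in the unit square
tri = Delaunay(points)

indptr, indices = tri.vertex_neighbor_vertices

def neighbours(node: int):
    """Overlay neighbours of `node`: vertices sharing a Delaunay edge with it."""
    return indices[indptr[node]:indptr[node + 1]]

print(sorted(neighbours(0)))
```

    Greedy routing toward the point closest to a target address always makes progress on such a triangulation, which is what makes it a plausible basis for a generalized DHT topology.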

    Improving Data Availability in Decentralized Storage Systems

    Get PDF
    PhD thesis in Information Technology. Preserving knowledge for future generations has been a primary concern for humanity since the dawn of civilization. State-of-the-art methods have included stone carvings, papyrus scrolls, and paper books. With each advance in technology, it has become easier to record knowledge. In the current digital age, humanity may preserve enormous amounts of knowledge on hard drives with the click of a button. The aggregation of several hard drives into a computer forms the basis for a storage system. Traditionally, large storage systems have comprised many distinct computers operated by a single administrative entity. With the rise in popularity of blockchain and cryptocurrencies, a new type of storage system has emerged. This new type of storage system is fully decentralized and comprises a network of untrusted peers cooperating to act as a single storage system. During upload, files are split into chunks and distributed across a network of peers. These storage systems encode files using Merkle trees, a hierarchical data structure that provides integrity verification and lookup services. While decentralized storage systems are popular and have a user base in the millions, many technical aspects are still in their infancy. As such, they have yet to prove themselves viable alternatives to traditional centralized storage systems. In this thesis, we contribute to the technical aspects of decentralized storage systems by proposing novel techniques and protocols. We make significant contributions with the design of three practical protocols that each improve data availability in different ways. Our first contribution is Snarl and entangled Merkle trees. Entangled Merkle trees are resilient data structures that decrease the impact hierarchical dependencies have on data availability. Whenever a chunk loss is detected, Snarl uses the entangled Merkle tree to find parity chunks and repair the lost chunk. Our results show that by encoding data as an entangled Merkle tree and using Snarl's repair algorithm, storage utilization in current systems can be improved more than fivefold while also improving data availability. Second, we propose SNIPS, a protocol that efficiently synchronizes the data stored on peers to ensure that all peers have the same data. We designed a Proof-of-Storage-like construction using a Minimal Perfect Hash Function. Each peer uses the PoS-like construction to create a storage proof for the chunks it wants to synchronize. Peers exchange storage proofs and use them to efficiently determine which chunks they are missing. The evaluation shows that by using SNIPS, the amount of synchronization data can be reduced by three orders of magnitude in current systems. Lastly, in our third contribution, we propose SUP, a protocol that uses cryptographic proofs to check whether a chunk is already stored in the network before performing wasteful uploads. We show that SUP may reduce the amount of data transferred by up to 94% in current systems. The protocols may be deployed independently or in combination to create a decentralized storage system that is more robust to major outages. Each of the protocols has been implemented and evaluated on a large cluster of 1,000 peers.
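
    For readers unfamiliar with the chunk-level Merkle encoding these protocols build on, the sketch below constructs a plain (non-entangled) Merkle root over fixed-size chunks. The chunk size, hash choice, and odd-level duplication rule are illustrative conventions, not the encoding used by Snarl or the underlying storage systems.

```python
# Minimal Merkle-tree sketch over fixed-size chunks; illustrative only.
import hashlib

CHUNK_SIZE = 4096

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def chunkify(data: bytes):
    """Split the data into fixed-size chunks (at least one, possibly empty)."""
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)] or [b""]

def merkle_root(data: bytes) -> bytes:
    """Hash the chunks, then pair-wise hash upwards until one root remains."""
    level = [h(c) for c in chunkify(data)]
    while len(level) > 1:
        if len(level) % 2:                    # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root(b"x" * 10_000).hex())
```

    Losing an internal node of such a tree makes its entire subtree unreachable; that hierarchical dependency is what the entangled Merkle trees described above are designed to mitigate.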