
    Optimising Structured P2P Networks for Complex Queries

    With network-enabled consumer devices becoming increasingly popular, the number of connected devices and available services is growing considerably, with the number of connected devices estimated to surpass 15 billion by 2015. In this increasingly large and dynamic environment it is important that users have a comprehensive, yet efficient, mechanism to discover services. Many existing wide-area service discovery mechanisms are centralised and do not scale to large numbers of users. Additionally, centralised services suffer from issues such as a single point of failure, high maintenance costs, and difficulty of management. As such, this Thesis seeks a Peer-to-Peer (P2P) approach. Distributed Hash Tables (DHTs) are well known for their high scalability, low financial barrier to entry, and ability to self-manage. They can be used to provide not just a platform on which peers can offer and consume services, but also a means for users to discover such services. Traditionally, DHTs provide a distributed key-value store with no search functionality. In recent years many P2P systems have been proposed that support a subset of complex query types, such as keyword search, range queries, and semantic search. This Thesis presents a novel algorithm for performing any type of complex query, from keyword search, to complex regular expressions, to full-text search, over any structured P2P overlay. This is achieved by efficiently broadcasting the search query, allowing each peer to process the query locally, and then efficiently routing responses back to the originating peer. Through experimentation, this technique is shown to be successful when the network is stable; however, performance degrades under high levels of network churn. To address the issue of network churn, this Thesis proposes a number of enhancements which can be made to existing P2P overlays in order to improve the performance of both the existing DHT and the proposed algorithm. Through two case studies these enhancements are shown to improve not only the performance of the proposed algorithm under churn, but also the performance of traditional lookup operations in these networks.
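
    The query-broadcast idea described above can be illustrated with a small, self-contained sketch (illustrative only, not the thesis's actual algorithm): each peer delegates disjoint intervals of the identifier space to its finger-table entries, evaluates the query locally (here a regular expression), and returns matches to the originator. The ring parameters, peer IDs and stored items below are all assumptions made for the example.

```python
# Minimal sketch: broadcast a query over a Chord-like ring by delegating disjoint
# ID intervals to finger-table entries, so every peer receives the query exactly
# once and evaluates it locally with a regex.
import re

M = 6                # identifier bits; ring positions are 0 .. 2^M - 1
RING = 2 ** M

class Peer:
    def __init__(self, ident, items):
        self.id = ident
        self.items = items          # strings stored locally at this peer
        self.fingers = []           # Chord-style finger table (peer IDs)

def between(x, lo, hi):
    """True if x lies in the open ring interval (lo, hi)."""
    if lo == hi:
        return x != lo
    return lo < x < hi if lo < hi else x > lo or x < hi

def build_ring(data):
    ids = sorted(data)
    peers = {i: Peer(i, data[i]) for i in ids}
    for i in ids:
        for k in range(M):
            target = (i + 2 ** k) % RING
            # successor of target: first peer at or after it (with wrap-around)
            succ = min((p for p in ids if p >= target), default=ids[0])
            peers[i].fingers.append(succ)
    return peers

def broadcast_query(peers, pattern, node_id, limit):
    """Deliver `pattern` to every peer in (node_id, limit); collect local matches."""
    node = peers[node_id]
    rx = re.compile(pattern)
    results = [(node_id, s) for s in node.items if rx.search(s)]
    # forward only to fingers inside our interval, ordered clockwise from us
    targets = sorted({f for f in node.fingers if f != node_id and between(f, node_id, limit)},
                     key=lambda f: (f - node_id) % RING)
    for idx, f in enumerate(targets):
        # finger f becomes responsible for the interval up to the next forwarded finger
        next_limit = targets[idx + 1] if idx + 1 < len(targets) else limit
        results += broadcast_query(peers, pattern, f, next_limit)
    return results

if __name__ == "__main__":
    data = {5: ["chord.pdf"], 20: ["pastry notes"], 33: ["kademlia paper"], 50: ["chord errata"]}
    peers = build_ring(data)
    # originator 5 broadcasts; limit == originator covers the whole ring
    print(broadcast_query(peers, r"chord", 5, 5))   # matches come back from peers 5 and 50
```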

    Designs and Analyses in Structured Peer-To-Peer Systems

    Peer-to-Peer (P2P) computing is a recent hot topic in the areas of networking and distributed systems. Work on P2P computing was triggered by a number of ad-hoc systems that made the concept popular. Later, academic research efforts started to investigate P2P computing issues based on scientific principles. Some of that research produced a number of structured P2P systems that were collectively referred to by the term "Distributed Hash Tables" (DHTs). However, this research occurred in a diversified way, leading to the appearance of similar concepts that lacked a common perspective and were not heavily analyzed. In this thesis we present a number of papers representing our research results in the area of structured P2P systems, grouped into two sets labeled respectively "Designs" and "Analyses". The contribution of the first set of papers is as follows. First, we present the principle of distributed k-ary search and argue that it serves as a framework for most of the recent P2P systems known as DHTs. That is, given this framework, existing DHT systems can be understood simply by seeing how they are instances of it. We argue that by perceiving systems as instances of that framework, one can optimize some of them. We illustrate that by applying the framework to the Chord system, one of the most established DHT systems. Second, we show how the framework helps in the design of P2P algorithms through two examples: (a) the DKS(n; k; f) system, which is designed from the beginning on the principles of distributed k-ary search; and (b) two broadcast algorithms that take advantage of the distributed k-ary search tree. The contribution of the second set of papers is as follows. We account for two approaches that we used to evaluate the performance of a particular class of DHTs, namely the one adopting periodic stabilization for topology maintenance. The first approach was of an intrinsically empirical nature: we tried to perceive a DHT as a physical system and account for its properties in a size-independent manner. The second approach was more analytical: we applied the technique of Master Equations, which is widely used in the analysis of natural systems. The application of this technique led to a highly accurate description of the behavior of structured overlays. Additionally, the thesis contains a primer on structured P2P systems that tries to capture the main ideas prevailing in the field.
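
    To make the distributed k-ary search framing concrete, the toy sketch below (parameters and function names are assumptions, not taken from the thesis or DKS) shows how each hop splits the remaining identifier interval into k parts and keeps only the part containing the key, which is why lookups take roughly log_k N hops; Chord corresponds to the binary case k = 2.

```python
# Sketch of the distributed k-ary search principle: each hop divides the current
# candidate interval of the identifier space into k parts and continues in the part
# that holds the key, so resolving a key over N identifiers takes about log_k N hops.
import math

def k_ary_lookup_hops(space_size, key, k):
    """Trace the intervals visited when resolving `key` by k-ary interval splitting."""
    lo, hi = 0, space_size          # current interval [lo, hi)
    hops = 0
    trace = []
    while hi - lo > 1:
        step = math.ceil((hi - lo) / k)
        # pick the sub-interval [lo + i*step, lo + (i+1)*step) that contains the key
        i = (key - lo) // step
        lo, hi = lo + i * step, min(lo + (i + 1) * step, hi)
        hops += 1
        trace.append((lo, hi))
    return hops, trace

hops, trace = k_ary_lookup_hops(space_size=2 ** 16, key=41387, k=4)
print(hops)          # 8, i.e. log_4(2^16) hops
print(trace[:3])     # the first few intervals the lookup passes through
```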

    A Systems Approach to Minimize Wasted Work in Blockchains

    Blockchain systems and distributed ledgers have received increasing attention since the release of Bitcoin. Every day they make headlines in the news, involving economists, scientists, and technologists. The technology invented by Satoshi Nakamoto gave the world a quantum leap in the fields of distributed systems and digital currencies. Even so, there are still problems with the architecture of most existing blockchain systems. One of the main challenges is the structure of the network topology and how peers disseminate messages among themselves; this leads to problems with system scalability and with the efficiency of transaction and block propagation, wasting computational power, energy and network resources. In this work we propose a novel solution to tackle these limitations. We propose the design of membership and message dissemination protocols, based on the state of the art, that boost the efficiency of the overlay network supporting the interactions between miners, reducing the number of exchanged messages and the bandwidth used. This solution also reduces the computational power and energy consumed across all nodes in the network, since nodes avoid processing redundant network messages and, by becoming aware of mined blocks faster, avoid performing computations over an outdated chain configuration.
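
    The general idea of cutting redundant dissemination traffic can be sketched as follows (a hedged illustration of announce-then-fetch gossip with duplicate suppression, not the protocol proposed in this work; node names and payloads are invented): a node advertises only a block's hash to its overlay neighbours, and a neighbour requests the full block only if it has not seen that hash, so full payloads never cross a link redundantly.

```python
# Announce-then-fetch dissemination with duplicate suppression over a tiny overlay.
import hashlib

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbours = []
        self.known = {}                  # block hash -> block payload

    def announce(self, block_hash, holder):
        """Receive an inventory announcement; fetch the block only if it is new."""
        if block_hash in self.known:
            return                        # duplicate suppressed: no payload transferred
        payload = holder.known[block_hash]           # "fetch" the full block from the holder
        self.known[block_hash] = payload
        for n in self.neighbours:                     # re-announce onward (hash only)
            n.announce(block_hash, self)

def mine_and_broadcast(node, payload):
    h = hashlib.sha256(payload).hexdigest()
    node.known[h] = payload
    for n in node.neighbours:
        n.announce(h, node)
    return h

# Triangle of nodes; each full block payload crosses each link at most once.
a, b, c = Node("a"), Node("b"), Node("c")
a.neighbours, b.neighbours, c.neighbours = [b, c], [a, c], [a, b]
h = mine_and_broadcast(a, b"block 1: coinbase tx ...")
print(all(h in n.known for n in (a, b, c)))   # True: every node received the block once
```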

    Distributed scheduling and data sharing in late-binding overlays

    Pull-based late-binding overlays are used in some of today’s largest computational grids. Job agents are submitted to resources with the duty of retrieving real workload from a central queue at runtime. This helps overcome the problems of these very complex environments, namely heterogeneity, imprecise status information and relatively high failure rates. In addition, the late job assignment allows dynamic adaptation to changes in grid conditions or user priorities. However, as the scale grows, the central assignment queue may become a bottleneck for the whole system. This article presents a distributed scheduling architecture for late-binding overlays which addresses these scalability issues. Our system lets execution nodes build a distributed hash table and delegates job matching and assignment to them. This reduces the load on the central server and makes the system much more scalable and robust. Moreover, this scalability makes fine-grained scheduling possible and enables new functionalities, such as the implementation of a distributed data cache on the execution nodes, which helps alleviate the load on commonly congested grid storage services.
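
    One way to picture data-aware matching by the execution nodes themselves is the sketch below (an assumed, simplified policy, not the article's system; class names, agent names and the dataset path are illustrative): agents sit on a hash ring, and a job reading dataset D is matched to the agent that follows hash(D) on the ring, so repeated jobs over the same dataset land where a cached copy is likely to exist.

```python
# Pull-based late binding with consistent-hashing, data-aware matching.
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class Matcher:
    def __init__(self, agents):
        self.ring = sorted((h(a), a) for a in agents)

    def agent_for(self, dataset: str) -> str:
        """Consistent-hashing successor of the dataset's position on the ring."""
        keys = [k for k, _ in self.ring]
        idx = bisect.bisect_right(keys, h(dataset)) % len(self.ring)
        return self.ring[idx][1]

class Agent:
    def __init__(self, name):
        self.name, self.cache = name, set()

    def pull_and_run(self, job):
        dataset = job["input"]
        hit = dataset in self.cache
        if not hit:                      # cold cache: fetch once from grid storage
            self.cache.add(dataset)
        return f"{self.name} ran {job['id']} on {dataset} (cache hit={hit})"

agents = {name: Agent(name) for name in ("agent-1", "agent-2", "agent-3")}
matcher = Matcher(agents)
for i in range(3):                       # three jobs over the same dataset hit one agent
    job = {"id": f"job-{i}", "input": "/store/data/run42.root"}
    print(agents[matcher.agent_for(job["input"])].pull_and_run(job))
```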

    Self-Correcting Broadcast in Distributed Hash Tables

    We present two broadcast algorithms that can be used on top of distributed hash tables (DHTs) to perform group communication and arbitrary queries. Unlike other P2P group communication mechanisms, which either embed extra information in the DHTs or use random overlay networks, our algorithms take advantage of the structured DHT overlay networks without maintaining additional information. The proposed algorithms do not send any redundant messages. Furthermore, the two algorithms ensure 100% coverage of the nodes in the system even when routing information is outdated as a result of dynamism in the network. The first algorithm performs some correction of outdated routing table entries at a low cost in correction traffic. The second algorithm exploits the nature of the broadcasts to extensively update erroneous routing information, at the cost of higher correction traffic. The algorithms are validated and evaluated in our stochastic distributed-algorithms simulator.
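
    A loose, simplified illustration of the self-correcting idea is sketched below (it is not the paper's exact algorithms; node IDs, views and the correction rule are assumptions): each broadcast message carries the interval its receiver must cover, the receiver forwards inside that interval as usual, and it reports the peers it used back to the sender, so a sender with stale routing entries learns the nodes it had missed at low extra cost while coverage stays complete.

```python
# Interval-based broadcast where forwarding reports double as routing-table corrections.
RING = 64

def between(x, lo, hi):
    if lo == hi:
        return x != lo
    return lo < x < hi if lo < hi else x > lo or x < hi

class Node:
    def __init__(self, ident, view):
        self.id = ident
        self.view = set(view)            # routing entries; may be stale or incomplete
        self.received = False

    def deliver(self, nodes, sender, limit):
        """Cover the interval (self.id, limit); tell the sender which peers were used."""
        self.received = True
        targets = sorted((p for p in self.view if between(p, self.id, limit)),
                         key=lambda p: (p - self.id) % RING)
        for idx, p in enumerate(targets):
            next_limit = targets[idx + 1] if idx + 1 < len(targets) else limit
            nodes[p].deliver(nodes, self, next_limit)
        if sender is not None:
            sender.view |= set(targets)   # self-correction: the sender learns these peers
        return targets

# Node 45 joined recently: node 10 does not know it yet, but node 30 does.
nodes = {10: Node(10, [30, 50]), 30: Node(30, [45, 50]), 45: Node(45, [50]), 50: Node(50, [10])}
nodes[10].deliver(nodes, None, 10)                    # node 10 starts the broadcast
print(all(n.received for n in nodes.values()))        # True: full coverage despite staleness
print(sorted(nodes[10].view))                         # [30, 45, 50]: stale view corrected
```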

    Discreet - Pub/Sub for Edge Systems

    The number of devices connected to the Internet has been growing exponentially over the last few years. Today, the amount of information available to users has reached a point that makes it impossible to consume it all, showing that we need better ways to filter what kind of information is sent our way. At the same time, while users are online and access all this information, their actions are also being collected, scrutinized and commercialized with little regard for privacy. This thesis addresses those issues in the context of a decentralized Publish/Subscribe solution for edge systems. Working at the edge of the Internet aims to prevent centralized control by a single entity and lessen the chance of abuse. Our goal was to devise a solution that achieves efficient message delivery, with good load-balancing properties, without revealing its participants' subscription interests, in order to preserve user privacy. Our solution uses cryptography and probabilistic data structures as a way to obfuscate event topics and user subscriptions. We modeled a cooperative solution, where publisher and subscriber nodes work in concert to route events among themselves by leveraging a one-hop structured overlay. Through an experimental evaluation, we attest to the scalability and general performance of the proposed algorithms, including latency, false negative and false positive rates, and other useful metrics.
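
    The obfuscation idea (keyed hashes plus a probabilistic set) can be sketched minimally as follows. This is an assumed construction for illustration, not Discreet's actual protocol: the filter size, hash count, shared group key and topic names are all invented. A subscription is advertised as a Bloom-filter bit vector rather than a plaintext topic list, and matching trades exact disclosure for a tunable false-positive rate.

```python
# Keyed-hash Bloom filter used to hide subscription topics while still allowing matching.
import hashlib
import hmac

M_BITS = 512          # Bloom filter size (illustrative)
K_HASH = 4            # hash functions per item
GROUP_KEY = b"shared-demo-key"   # assumption: the pub/sub group shares a key for keyed hashing

def positions(topic: str):
    """K_HASH bit positions for a topic, derived from an HMAC of the topic."""
    out = []
    for i in range(K_HASH):
        digest = hmac.new(GROUP_KEY, f"{i}:{topic}".encode(), hashlib.sha256).digest()
        out.append(int.from_bytes(digest[:8], "big") % M_BITS)
    return out

def make_subscription(topics):
    """Subscriber side: advertise only a bit vector, never the topic strings."""
    bits = [False] * M_BITS
    for t in topics:
        for p in positions(t):
            bits[p] = True
    return bits

def might_match(subscription_bits, event_topic):
    """Publisher/router side: forward iff all of the event topic's bits are set."""
    return all(subscription_bits[p] for p in positions(event_topic))

sub = make_subscription(["sensors/temperature", "alerts/fire"])
print(might_match(sub, "alerts/fire"))          # True: genuine interest
print(might_match(sub, "cats/pictures"))        # almost surely False (small FP probability)
```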

    Distributed Late-binding Micro-scheduling and Data Caching for Data-Intensive Workflows

    Unpublished thesis of the Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadores y Automática, defended on 06-07-2015. Today's world is flooded with enormous amounts of digital information coming from a wide variety of sources, and everything suggests that this trend will only intensify in the future. Neither industry, nor society in general, nor, very particularly, science remains indifferent to this fact. On the contrary, they strive to extract the maximum value from this information, which means capturing, transferring, storing and processing it promptly and efficiently, using a wide range of computational resources. This task, however, is not always simple. A representative example of the challenges posed by handling and processing large amounts of data is that of the particle physics experiments at the Large Hadron Collider (LHC) in Geneva, which must manage tens of petabytes of information every year. Building on the experience of one of these collaborations, we have studied the main problems related to the management of massive data volumes and to the execution of the vast workflows that need to consume them. In this context, we have developed a general-purpose architecture for the scheduling and execution of workflows with significant data requirements, which we have called Task Queue. This new system takes advantage of the agent-based late-binding model that has helped the LHC experiments overcome the problems associated with the heterogeneity and complexity of large grid computing infrastructures. Our proposal presents several improvements over existing systems. The execution agents of the Task Queue architecture share a Distributed Hash Table (DHT) and carry out task assignment cooperatively. In this way, the scalability problems of centralized assignment algorithms are avoided and execution times are improved. This scalability allows us to perform fine-grained micro-scheduling, which enables new functionality such as the implementation of a distributed cache on the execution nodes and the use of data location information in task assignment decisions. This improves the efficiency of data processing and helps alleviate the usually congested grid storage services. In addition, our system is more robust against problems in the interaction with the central task queue and behaves better in situations with demanding data access patterns or in the absence of local storage services. All of this has been demonstrated in an extensive series of evaluation tests. Since our distributed task scheduling procedure requires the use of broadcast messages, we have also carried out an in-depth study of the possible approaches to implementing this operation on the Kademlia DHT, which is used for the shared data cache. Kademlia offers routing to individual nodes but does not include any broadcast primitive. Our work presents the peculiarities of this system, particularly its XOR-based metric, and studies analytically which broadcast techniques can be used with it. We have also developed a model that estimates node coverage as a function of the probability that each individual message reaches its destination correctly. As validation, the algorithms have been implemented and exhaustively evaluated. Furthermore, we propose several techniques to improve the protocols in adverse situations, for example when the system exhibits high node churn or the delivery error rate is not negligible. These techniques include redundancy, resubmission and flooding, as well as combinations of them. We present an analysis of the strengths and weaknesses of the different algorithms and of the aforementioned complementary techniques.
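
    A back-of-the-envelope version of such a coverage estimate, in the spirit of what the abstract describes but much simpler than the thesis's actual model, is sketched below: in a k-ary broadcast tree a node at depth d is covered only if all d hops on its path succeed, so with an independent per-message delivery probability p the expected coverage is the average of p**depth over all nodes. The values of n_nodes, k and p are illustrative.

```python
# Expected broadcast coverage over a (near-)complete k-ary tree with lossy message delivery.
def expected_coverage(n_nodes, k, p):
    """Average of p**depth over a k-ary broadcast tree containing n_nodes nodes."""
    covered, counted, depth, level = 0.0, 0, 0, 1
    while counted < n_nodes:
        level_nodes = min(level, n_nodes - counted)
        covered += level_nodes * (p ** depth)   # nodes at this depth need `depth` successes
        counted += level_nodes
        depth += 1
        level *= k                              # the next level has k times as many nodes
    return covered / n_nodes

for p in (1.0, 0.99, 0.95, 0.90):
    print(p, round(expected_coverage(n_nodes=10_000, k=4, p=p), 3))
```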

    Proof of Latency Using a Verifiable Delay Function

    In this thesis I present an interactive public-coin protocol called Proof of Latency (PoL) that aims to improve connections in peer-to-peer networks by measuring latencies with logical clocks built from verifiable delay functions (VDFs). PoL is a tuple of three algorithms, Setup(e, λ), VCOpen(c, e), and Measure(g, T, l_p, l_v). Setup creates a vector commitment (VC), from which VCOpen takes a vector commitment opening corresponding to a collaborator's public key; this opening is then used to create a common reference string used in Measure. If no collusion is detected by either party, a signed proof is ready for advertising. PoL is agnostic to the individual implementations of the VC or VDF used. That said, I present a proof of concept in the form of a state machine implemented in Rust that uses RSA-2048, Catalano-Fiore vector commitments and Wesolowski's VDF to demonstrate PoL. As VDFs themselves have been shown to be useful in timestamping, they seem to work as a measurement of time in this context as well, albeit requiring a public performance metric for each peer to compare against during the measurement. I have imagined many use cases for PoL, such as proving a geographical location, working as a benchmark query, or using the proofs to calculate VDFs with the latencies between peers themselves. As it stands, PoL works as a distance-bounding protocol between two participants, provided their computing performance is relatively similar. More work is needed to verify the soundness of PoL as a publicly verifiable proof that a third party can believe in.
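
    The "VDF as a logical clock" intuition can be conveyed with a deliberately toy sketch: an iterated SHA-256 chain stands in for Wesolowski's VDF (it is sequential but offers none of the VDF's succinct verifiability, and there are no vector commitments here), and the number of iterations a peer advances between sending a challenge and receiving the reply bounds the round trip, given that peer's publicly advertised iteration speed. Function names, timings and the seed strings are assumptions for the example only.

```python
# Toy "hash-chain clock" latency measurement (stand-in for a real VDF-based protocol).
import hashlib
import time

def tick(state: bytes) -> bytes:
    """One sequential, non-parallelisable step of the toy delay function."""
    return hashlib.sha256(state).digest()

def measure_roundtrip_ticks(seed: bytes, transport_roundtrip):
    """Advance the hash-chain clock while waiting for the peer's reply; count the ticks."""
    state, ticks = seed, 0
    reply_at = time.monotonic() + transport_roundtrip   # stand-in for a real network reply
    while time.monotonic() < reply_at:
        state = tick(state)
        ticks += 1
    return ticks, state

# Calibrate this peer's publicly advertised speed (ticks per second), then "measure" a peer.
t0 = time.monotonic()
calib_ticks, _ = measure_roundtrip_ticks(b"calibration", 0.2)
ticks_per_s = calib_ticks / (time.monotonic() - t0)

rtt_ticks, _ = measure_roundtrip_ticks(b"common-reference-string", 0.05)
print(f"~{rtt_ticks / ticks_per_s * 1000:.1f} ms round trip "
      f"({rtt_ticks} ticks at {ticks_per_s:.0f} ticks/s)")
```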