
    Overview of Polkadot and its Design Considerations

    In this paper we describe the design components of the heterogeneous multi-chain protocol Polkadot and explain how these components help Polkadot address some of the existing shortcomings of blockchain technologies. At present, a vast number of blockchain projects have been introduced and deployed with various features that are not necessarily designed to work with each other. This makes it difficult for users to utilise a large number of applications across different blockchain projects. Moreover, as the number of projects increases, the security that each one provides individually becomes weaker. Polkadot aims to provide a scalable and interoperable framework for multiple chains with pooled security, achieved by the collection of components described in this paper.

    Software-implemented attack tolerance for critical information retrieval

    The fast-growing reliance of our daily life upon online information services often demands an appropriate level of privacy protection as well as highly available service provision. However, most existing solutions have attempted to address these problems separately. This thesis investigates and presents a solution that provides both privacy protection and fault tolerance for online information retrieval. A new approach to Attack-Tolerant Information Retrieval (ATIR) is developed based on an extension of existing theoretical results for Private Information Retrieval (PIR). ATIR uses replicated services to protect a user's privacy and to ensure service availability. In particular, ATIR can tolerate any collusion of up to t servers for privacy violation and up to ƒ faulty (either crashed or malicious) servers in a system with k replicated servers, provided that k ≥ t + ƒ + 1, where t ≥ 1 and ƒ ≤ t. In contrast to other related approaches, ATIR relies neither on enforced trust assumptions, such as the use of tamper-resistant hardware and trusted third parties, nor on an increased number of replicated servers. While the best solution known so far requires k (≥ 3t + 1) replicated servers to cope with t malicious servers and any collusion of up to t servers with an O(n^*) communication complexity, ATIR uses fewer servers with a much improved communication cost of O(n^{1/2}), where n is the size of a database managed by a server. The majority of current PIR research remains at a theoretical level. This thesis provides both theoretical schemes and their practical implementations with good performance results. In a LAN environment, it takes well under half a second to use an ATIR service for calculations over data sets of up to 1 MB. The performance of the ATIR systems remains at the same level even in the presence of server crashes and malicious attacks. Both analytical results and experimental evaluation show that ATIR offers an attractive and practical solution for the ever-increasing range of online information applications.
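    To make the replication bounds concrete, the following minimal Python sketch (not part of the thesis; function names are illustrative) compares the number of servers required under the ATIR condition k ≥ t + ƒ + 1 with the k ≥ 3t + 1 requirement of the prior approach cited above.

```python
# Minimal sketch comparing the server-count bounds quoted in the abstract.

def atir_servers_needed(t: int, f: int) -> int:
    """Servers needed by ATIR: k >= t + f + 1, with t >= 1 and f <= t."""
    assert t >= 1 and 0 <= f <= t
    return t + f + 1

def prior_servers_needed(t: int) -> int:
    """Servers needed by the best previously known solution: k >= 3t + 1."""
    return 3 * t + 1

if __name__ == "__main__":
    for t in (1, 2, 3):
        f = t  # worst case permitted by f <= t
        print(f"t={t}, f={f}: ATIR needs {atir_servers_needed(t, f)} servers, "
              f"prior approach needs {prior_servers_needed(t)}")
```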

    Cost-effective Data Upkeep in Decentralized Storage Systems

    Decentralized storage systems split files into chunks and distribute the chunks across a network of peers. Each peer may only store a few chunks per file. To later reconstruct a file, all its chunks must be downloaded. Chunks can disappear from the network at any time as peers are untrusted and may misbehave, fail, or leave the network. Current systems lack a secure and cost-effective mechanism for discovering missing chunks. Hence, a client must periodically re-upload all of the file's chunks to keep it available, even if only a few are missing from the network. Needlessly re-uploading chunks wastes significant amounts of the network's bandwidth, takes additional time to complete, and forces the client to pay for unwarranted resources. To address this problem, we propose SUP, a novel protocol that utilizes proof-of-storage queries to detect missing chunks. We have evaluated SUP on a large cluster of 1000 peers running a recent version of Ethereum Swarm. Our contributions include the design and implementation of SUP and a study of Swarm's redundancy characteristics. Our evaluation shows that SUP significantly improves bandwidth utilization and time spent on data upkeep compared to the existing solution. In common scenarios, SUP can save as much as 94% of bandwidth and reduce the time spent re-uploading by up to 82%. While dependent on the storage network's bandwidth pricing policy, using SUP may also reduce the overall monetary cost of data upkeep.
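    As a rough illustration of the idea behind proof-of-storage queries (a sketch only, not SUP's actual protocol; the Peer interface and proof construction below are assumptions), a client could challenge peers per chunk and re-upload only those chunks for which no peer returns a valid proof.

```python
# Sketch: probe peers with proof-of-storage challenges and collect the chunks
# that fail the check, so only those need to be re-uploaded.

import hashlib
import os
from typing import Protocol, Sequence

class Peer(Protocol):
    def prove_storage(self, chunk_id: str, nonce: bytes) -> bytes | None:
        """Return a proof (here: H(nonce || chunk)) or None if the chunk is gone."""

def expected_proof(chunk: bytes, nonce: bytes) -> bytes:
    return hashlib.sha256(nonce + chunk).digest()

def missing_chunks(chunks: dict[str, bytes], peers: Sequence[Peer]) -> list[str]:
    """Challenge the network for each chunk; return ids with no valid proof."""
    missing = []
    for chunk_id, data in chunks.items():
        nonce = os.urandom(16)          # fresh challenge per chunk
        want = expected_proof(data, nonce)
        if not any(p.prove_storage(chunk_id, nonce) == want for p in peers):
            missing.append(chunk_id)
    return missing

# A client would then re-upload only missing_chunks(...) instead of every chunk.
```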

    Trade-offs between Distributed Ledger Technology Characteristics

    When developing peer-to-peer applications on distributed ledger technology (DLT), a crucial decision is the selection of a suitable DLT design (e.g., Ethereum), because it is hard to change the underlying DLT design post hoc. To facilitate the selection of suitable DLT designs, we review DLT characteristics and identify trade-offs between them. Furthermore, we assess how DLT designs account for these trade-offs and develop archetypes of DLT designs that cater to specific requirements of applications on DLT. The main purpose of our article is to introduce scientific and practical audiences to the intricacies of DLT designs and to support the development of viable applications on DLT.

    Modeling and Verification for Timing Satisfaction of Fault-Tolerant Systems with Finiteness

    The increasing use of model-based tools enables further use of formal verification techniques in the context of distributed real-time systems. To avoid state explosion, it is necessary to construct verification models that focus on the aspects under consideration. In this paper, we discuss how we construct a verification model for timing analysis in distributed real-time systems. We (1) give observations concerning restrictions of timed automata when modeling these systems, (2) formulate mathematical representations of how to perform a model-to-model transformation that derives verification models from system models, and (3) propose theoretical criteria for reducing the model size. The latter is particularly important, as for the verification of complex systems an efficient model reflecting the properties of the system under consideration is as important as the verification algorithm itself. Finally, we present an extension of the model-based development tool FTOS, designed to develop fault-tolerant systems, to demonstrate the benefits of our approach. Comment: 1. Appears in the 13th IEEE/ACM International Symposium on Distributed Simulation and Real Time Applications (DS-RT'09). 2. Compared to the DS-RT version, we add motivations for editing automata, and a footnote that the sketch of the editing algorithm is only applicable in our job-processing element, to avoid ambiguity (because actions are chained).
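    As a loose illustration of model-size reduction for verification models (a sketch under assumptions, not FTOS or the paper's transformation and reduction criteria), the following toy timed-automaton structure admits a generic reduction that prunes locations unreachable from the initial location.

```python
# Sketch: a toy timed-automaton representation and a generic size reduction.
# The paper's actual model-to-model transformation is more involved.

from dataclasses import dataclass, field

@dataclass
class TimedAutomaton:
    locations: set[str]
    initial: str
    # edges: (source, clock guard as text, reset clock?, target)
    edges: list[tuple[str, str, bool, str]] = field(default_factory=list)

def prune_unreachable(ta: TimedAutomaton) -> TimedAutomaton:
    """Drop locations (and their edges) not reachable from the initial location."""
    reachable, frontier = {ta.initial}, [ta.initial]
    while frontier:
        src = frontier.pop()
        for s, _, _, t in ta.edges:
            if s == src and t not in reachable:
                reachable.add(t)
                frontier.append(t)
    return TimedAutomaton(
        locations=reachable,
        initial=ta.initial,
        edges=[e for e in ta.edges if e[0] in reachable and e[3] in reachable],
    )
```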