    LightChain: A DHT-based Blockchain for Resource Constrained Environments

    As an append-only distributed database, blockchain is used in a wide variety of applications, including cryptocurrencies and the Internet of Things (IoT). Existing blockchain solutions suffer from poor communication and storage efficiency, a tendency to converge toward centralization, and consistency problems. In this paper, we propose LightChain, the first blockchain architecture that operates over a Distributed Hash Table (DHT) of participating peers. LightChain is a permissionless blockchain that makes blocks and transactions addressable within the network, so they are efficiently accessible to all peers. Each block and transaction is replicated within the DHT of peers and retrieved on demand; hence, peers in LightChain are not required to retrieve or keep the entire blockchain. LightChain is fair in that all participating peers have a uniform chance of taking part in consensus, regardless of their influence, such as hashing power or stake. LightChain provides a deterministic fork-resolving strategy as well as a blacklisting mechanism, and it is secure against colluding adversarial peers attacking the availability and integrity of the system. We provide mathematical analysis and experimental results on scenarios involving 10K nodes to demonstrate the security and fairness of LightChain. As we show experimentally, compared to mainstream blockchains such as Bitcoin and Ethereum, LightChain requires around 66 times less per-node storage and is around 380 times faster at bootstrapping a new node, while each LightChain node is equally likely to be rewarded for participating in the protocol.
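
    The key mechanism here is content addressing: each block is stored in the DHT under its own hash, so any peer can fetch and integrity-check it on demand instead of holding the full chain. Below is a minimal sketch of that idea in Python, using a toy in-memory stand-in for the DHT; all names (DHT, publish_block, fetch_block) are illustrative assumptions, not LightChain's actual API.

        import hashlib
        import json

        class DHT:
            """Toy stand-in for a distributed hash table.

            In a real overlay (e.g. Kademlia-style), put/get would route to the
            peers responsible for the key and replicate the value among them.
            """
            def __init__(self):
                self._store = {}

            def put(self, key: str, value: bytes) -> None:
                self._store[key] = value

            def get(self, key: str) -> bytes:
                return self._store[key]

        def block_id(block: dict) -> str:
            """Content address: hash of the canonically serialized block."""
            return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

        def publish_block(dht: DHT, block: dict) -> str:
            """Store a block under its hash so any peer can fetch it on demand."""
            bid = block_id(block)
            dht.put(bid, json.dumps(block, sort_keys=True).encode())
            return bid

        def fetch_block(dht: DHT, bid: str) -> dict:
            """Retrieve a block by address and check it matches its hash."""
            block = json.loads(dht.get(bid))
            assert block_id(block) == bid, "block content does not match its address"
            return block

        # Usage: one peer publishes a block; another fetches it by address alone.
        dht = DHT()
        bid = publish_block(dht, {"prev": None, "txs": ["tx-a", "tx-b"]})
        print(fetch_block(dht, bid))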

    New models for efficient authenticated dictionaries

    We propose models for data authentication that take into account the behavior of the clients who perform queries. Our models reduce the size of the authentication proof for data whose corresponding query is more frequent. Existing models implicitly assume the frequency distribution of queries to be uniform, but in reality this distribution generally follows Zipf's law. Our models better reflect reality, and the communication cost between clients and the server is reduced, allowing the server to save bandwidth. The gain in average proof size over existing schemes depends on the parameter of the Zipf law: the greater the parameter, the greater the gain. When the frequency distribution follows a perfect Zipf's law, the gain can reach 26%. Experiments show that there exist applications for which the Zipf parameter is greater than 1, leading to even higher gains.
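
    The arithmetic behind the gain is easy to reproduce: in a balanced authenticated tree every proof costs about log2(n) hashes, whereas a frequency-aware layout lets popular items sit closer to the root. The Python sketch below illustrates this with a Huffman-shaped tree over Zipf weights; it is only an illustration of why the gain grows with the Zipf parameter, not the paper's construction, so the printed gain will not exactly match the 26% figure, and n and s are arbitrary.

        import heapq
        import math

        def zipf_weights(n: int, s: float) -> list:
            """Normalized Zipf frequencies: p_k proportional to 1 / k**s."""
            raw = [1.0 / k ** s for k in range(1, n + 1)]
            total = sum(raw)
            return [w / total for w in raw]

        def expected_huffman_depth(weights: list) -> float:
            """Expected leaf depth of a Huffman tree over the given weights
            (equal to the sum of the weights of all merged internal nodes)."""
            heap = list(weights)
            heapq.heapify(heap)
            expected = 0.0
            while len(heap) > 1:
                merged = heapq.heappop(heap) + heapq.heappop(heap)
                expected += merged
                heapq.heappush(heap, merged)
            return expected

        n, s = 1 << 16, 1.0                       # dictionary size, Zipf parameter
        uniform = math.log2(n)                    # balanced tree: log2(n) hashes per proof
        skewed = expected_huffman_depth(zipf_weights(n, s))
        print(f"average proof length: {uniform:.2f} vs {skewed:.2f} hashes")
        print(f"gain: {100 * (1 - skewed / uniform):.1f}%")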

    DTKI: a new formalized PKI with no trusted parties

    The security of public key validation protocols for web-based applications has recently attracted attention because of weaknesses in the certificate authority model and consequent attacks. Recent proposals using public logs have succeeded in making certificate management more transparent and verifiable. However, those proposals involve a fixed set of authorities, which creates an oligopoly. Another problem with current log-based systems is their heavy reliance on trusted parties that monitor the logs. We propose a distributed transparent key infrastructure (DTKI), which greatly reduces the oligopoly of service providers and allows verification of the behaviour of trusted parties. In addition, this paper formalises the public log data structure and provides a formal analysis of the security that DTKI guarantees.
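
    Log-based schemes of this kind build on Merkle-tree logs in which inclusion of an entry (e.g. a certificate) is checked against a signed root via an audit path. The Python sketch below shows that generic verification step for a log with a power-of-two number of entries, using RFC 6962-style leaf/node domain separation; it illustrates the underlying building block, not DTKI's formalized log structure.

        import hashlib

        def h(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def leaf_hash(entry: bytes) -> bytes:
            return h(b"\x00" + entry)            # leaf prefix, as in RFC 6962

        def node_hash(left: bytes, right: bytes) -> bytes:
            return h(b"\x01" + left + right)     # interior-node prefix

        def build_tree(entries):
            """Merkle tree over a power-of-two number of entries.
            Returns the root and every level, leaves first."""
            level = [leaf_hash(e) for e in entries]
            levels = [level]
            while len(level) > 1:
                level = [node_hash(level[i], level[i + 1])
                         for i in range(0, len(level), 2)]
                levels.append(level)
            return level[0], levels

        def audit_path(index, levels):
            """Sibling hashes from the leaf at `index` up to the root."""
            path, i = [], index
            for level in levels[:-1]:
                path.append(level[i ^ 1])        # sibling at this level
                i //= 2
            return path

        def verify_inclusion(entry, index, path, root):
            """Recompute the root from an entry and its audit path."""
            node, i = leaf_hash(entry), index
            for sibling in path:
                node = node_hash(node, sibling) if i % 2 == 0 else node_hash(sibling, node)
                i //= 2
            return node == root

        # Usage: prove and verify that cert-2 is in a four-entry log.
        entries = [b"cert-0", b"cert-1", b"cert-2", b"cert-3"]
        root, levels = build_tree(entries)
        assert verify_inclusion(b"cert-2", 2, audit_path(2, levels), root)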

    Auditable data structures: theory and applications

    Every digital process needs to consume some data in order to work properly. It is very common for applications to use external data in their processes, obtaining it from sources such as external APIs. Trusting the received data therefore becomes crucial in such scenarios: if the data are not self-produced by the consumer, trust in the external data source, or in the data that the source produces, cannot always be taken for granted. The most common approach to establishing trust in an external source is based on authenticated data structures, which authenticate the source by generating proofs in response to queries. Such proofs are useful for assessing authenticity or integrity; however, an external user could also be interested in verifying the data history and its consistency. This problem seems to be unaddressed by the current literature, which proposes approaches aimed at audits executed by internal actors with prior knowledge of the data structures. In this paper, we address the scenario of an external auditor with no prior knowledge of the data who wants to verify the consistency of the data history. We analyze the terminology and the current state of the art of auditable data structures, and then propose a general framework to support external audits by both internal and external users.
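
    One basic mechanism an external auditor can replay without any knowledge of the underlying data is a hash chain of version commitments: each update's digest is chained onto the previous commitment, so any rewritten history fails to reproduce the published commitment. The Python sketch below illustrates this; it is a simplified stand-in under our own assumptions, not the framework proposed in the paper, and all names are hypothetical.

        import hashlib

        def h(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def commit(prev: bytes, version_digest: bytes) -> bytes:
            """Chain a version's digest onto the previous commitment, so the
            latest commitment fixes the entire history of the structure."""
            return h(prev + version_digest)

        def replay(version_digests) -> bytes:
            """What an external auditor recomputes from the published digests."""
            c = b"\x00" * 32                     # agreed-upon initial commitment
            for d in version_digests:
                c = commit(c, d)
            return c

        # Usage: the source publishes one digest per update plus the latest
        # commitment; an auditor replays the chain and rejects rewritten history.
        digests = [h(b"update-%d" % k) for k in range(3)]
        published = replay(digests)
        assert replay(digests) == published
        assert replay([digests[0], h(b"forged"), digests[2]]) != published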