2,889 research outputs found

    A Flexible Network Approach to Privacy of Blockchain Transactions

    Full text link
    To preserve privacy, blockchains can be equipped with dedicated mechanisms that anonymize participants. However, these mechanisms often take only the abstraction layer of blockchains into account, whereas observations of the underlying network traffic can reveal the originator of a transaction request. Previous solutions either provide topological privacy that can be broken by attackers controlling a large number of nodes, or offer strong cryptographic privacy but are inefficient to the point of practical unusability. Further, there is no flexible way to trade privacy against efficiency to adjust to practical needs. We propose a novel approach that combines existing mechanisms to obtain quantifiable and adjustable cryptographic privacy, further improved by augmented statistical measures that prevent frequent, lower-resource attacks. This approach achieves flexibility for the privacy and efficiency requirements of different blockchain use cases. Comment: 6 pages, 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS)

    Non-blocking Patricia Tries with Replace Operations

    Full text link
    This paper presents a non-blocking Patricia trie implementation for an asynchronous shared-memory system using Compare&Swap. The trie implements a linearizable set and supports three update operations: insert adds an element, delete removes an element, and replace replaces one element with another. The replace operation is interesting because it changes two different locations of the tree atomically. If all update operations modify different parts of the trie, they run completely concurrently. The implementation also supports a wait-free find operation, which only reads shared memory and never changes the data structure. Empirically, we compare our algorithms to some existing set implementations. Comment: To appear in the 33rd IEEE International Conference on Distributed Computing Systems (ICDCS 2013)
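
    In such CAS-based tries, insert and delete usually reduce to swinging a single child pointer with Compare&Swap and retrying on failure; the contribution here is making the two-location replace appear atomic without locks. The Java sketch below illustrates only the single-location building block, not the paper's algorithm (which additionally needs flagging and helping to stay non-blocking); the Node and swingChild names are hypothetical.

        import java.util.concurrent.atomic.AtomicReference;

        // Minimal sketch of the single-pointer CAS step used by lock-free tries.
        // This is NOT the paper's replace operation, which must make two such
        // changes appear atomic and therefore coordinates them with extra
        // per-node state (flagging/helping).
        final class TrieSketch {
            static final class Node {
                final int key;                        // leaf key or branch bit index
                final AtomicReference<Node> left;
                final AtomicReference<Node> right;
                Node(int key, Node l, Node r) {
                    this.key = key;
                    this.left = new AtomicReference<>(l);
                    this.right = new AtomicReference<>(r);
                }
            }

            // Atomically swing one child pointer of 'parent' from 'expected' to
            // 'replacement'. On failure the caller re-traverses the trie and retries.
            static boolean swingChild(Node parent, boolean goLeft,
                                      Node expected, Node replacement) {
                AtomicReference<Node> slot = goLeft ? parent.left : parent.right;
                return slot.compareAndSet(expected, replacement);
            }
        }

    Because each such update touches a single AtomicReference, updates to disjoint parts of the trie proceed without interfering, which matches the concurrency claim in the abstract; a find that only reads these references never blocks.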

    On Consistency of Graph-based Semi-supervised Learning

    Full text link
    Graph-based semi-supervised learning is one of the most popular methods in machine learning. Some of its theoretical properties, such as bounds on the generalization error and the convergence of the graph Laplacian regularizer, have been studied in the computer science and statistics literature. However, a fundamental statistical property, the consistency of the estimator obtained from this method, has not been proved. In this article, we study the consistency problem under a non-parametric framework. We prove the consistency of graph-based learning in the case where the estimated scores are constrained to equal the observed responses for the labeled data. The sample sizes of both labeled and unlabeled data are allowed to grow in this result. When the estimated scores are not required to equal the observed responses, a tuning parameter is used to balance the loss function and the graph Laplacian regularizer. We give a counterexample demonstrating that the estimator in this case can be inconsistent. The theoretical findings are supported by numerical studies. Comment: This paper has been accepted by the 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS)
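
    For context, the estimator referred to above is typically the minimizer of a squared loss on the labeled points plus a graph Laplacian penalty; a schematic form (the notation below is illustrative and not taken from the paper) is

        \[
            \hat f \;=\; \arg\min_{f \in \mathbb{R}^{n}} \;
            \sum_{i=1}^{l} \bigl(f_i - y_i\bigr)^{2} \;+\; \lambda\, f^{\top} L f,
            \qquad L = D - W,
        \]

    where W is the similarity matrix over the n labeled and unlabeled points, D its degree matrix, l the number of labeled points, and \lambda the tuning parameter balancing the loss against the regularizer. The interpolation case treated first corresponds to minimizing f^{\top} L f subject to f_i = y_i on the labeled points.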

    On the Minimal Knowledge Required for Solving Stellar Consensus

    Full text link
    Byzantine Consensus is fundamental for building consistent and fault-tolerant distributed systems. In traditional quorum-based consensus protocols, quorums are defined using globally known assumptions shared among all participants. Motivated by decentralized applications on open networks, the Stellar blockchain relaxes these global assumptions by allowing each participant to define its quorums using local information. A similar model, called Consensus with Unknown Participants (CUP), studies the minimal knowledge required to solve consensus in ad-hoc networks where each participant knows only a subset of the other participants of the system. We prove that Stellar cannot solve consensus using the initial knowledge provided to participants in the CUP model, even though CUP can. We propose an oracle called a sink detector that augments this knowledge, enabling Stellar participants to solve consensus. Comment: Preprint of a paper to appear at the 43rd IEEE International Conference on Distributed Computing Systems (ICDCS 2023)
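
    In CUP-style models, the participants' initial knowledge is commonly represented as a directed graph (who knows whom), and a sink is a strongly connected component with no outgoing edges. Assuming that reading of "sink detector", the Java sketch below shows, as a centralized illustration only, how such a component could be identified with Kosaraju's algorithm; the class and method names are hypothetical and this is not the paper's distributed oracle construction.

        import java.util.*;

        // Hedged sketch: find a strongly connected component of the knowledge
        // graph that has no outgoing edges (a "sink" component). Centralized,
        // for illustration only; the paper's oracle is realized differently.
        final class SinkComponentSketch {

            // knows.get(p) = participants that p initially knows.
            static Set<String> sinkComponent(Map<String, Set<String>> knows) {
                // Pass 1 (Kosaraju): record DFS finish order on the original graph.
                Deque<String> order = new ArrayDeque<>();
                Set<String> seen = new HashSet<>();
                for (String v : knows.keySet()) dfs(v, knows, seen, order);

                // Build the reversed graph.
                Map<String, Set<String>> rev = new HashMap<>();
                for (Map.Entry<String, Set<String>> e : knows.entrySet())
                    for (String w : e.getValue())
                        rev.computeIfAbsent(w, k -> new HashSet<>()).add(e.getKey());

                // Pass 2: peel off components in reverse finish order.
                Set<String> assigned = new HashSet<>();
                List<Set<String>> comps = new ArrayList<>();
                while (!order.isEmpty()) {
                    String v = order.pop();
                    if (assigned.contains(v)) continue;
                    Deque<String> members = new ArrayDeque<>();
                    dfs(v, rev, assigned, members);
                    comps.add(new HashSet<>(members));
                }

                // A component is a sink iff no member knows anyone outside it.
                for (Set<String> c : comps) {
                    boolean sink = true;
                    for (String v : c)
                        for (String w : knows.getOrDefault(v, Set.of()))
                            if (!c.contains(w)) sink = false;
                    if (sink) return c;   // unique if the graph has a single sink
                }
                return Set.of();
            }

            private static void dfs(String v, Map<String, Set<String>> g,
                                    Set<String> seen, Deque<String> out) {
                if (!seen.add(v)) return;
                for (String w : g.getOrDefault(v, Set.of())) dfs(w, g, seen, out);
                out.push(v);   // pushed after all successors, i.e. at finish time
            }
        }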

    Joint Optimization of Energy Consumption and Completion Time in Federated Learning

    Full text link
    Federated Learning (FL) is an intriguing distributed machine learning approach due to its privacy-preserving characteristics. To balance the trade-off between energy and execution latency, and thus accommodate different demands and application scenarios, we formulate an optimization problem that minimizes a weighted sum of total energy consumption and completion time through two weight parameters. The optimization variables include the bandwidth, transmission power, and CPU frequency of each device in the FL system, where all devices are linked to a base station and train a global model collaboratively. By decomposing the non-convex optimization problem into two subproblems, we devise a resource allocation algorithm to determine the bandwidth allocation, transmission power, and CPU frequency for each participating device. We further present the convergence analysis and computational complexity of the proposed algorithm. Numerical results show that our proposed algorithm not only performs better across different weight parameters (i.e., different demands) but also outperforms the state of the art. Comment: This paper appears in the Proceedings of the IEEE International Conference on Distributed Computing Systems (ICDCS) 2022. Please feel free to contact us with questions or remarks.
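
    As a schematic of the kind of objective described (the symbols below are illustrative and need not match the paper's notation):

        \[
            \min_{\{b_k,\, p_k,\, f_k\}} \;\;
            w_1 \sum_{k=1}^{K} E_k(b_k, p_k, f_k) \;+\; w_2\, T(b, p, f)
            \quad \text{s.t.} \quad \sum_{k=1}^{K} b_k \le B,\;\;
            0 < p_k \le p_k^{\max},\;\; f_k^{\min} \le f_k \le f_k^{\max},
        \]

    where b, p, and f collect the per-device bandwidths b_k, transmission powers p_k, and CPU frequencies f_k of the K devices; E_k is device k's computation-plus-transmission energy over training; T is the overall completion time (governed by the slowest device in each round); and the weights w_1, w_2 encode how strongly energy is traded against latency. How the paper splits this joint problem into its two subproblems is not reproduced here.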