
    Peer-to-Peer Secure Multi-Party Numerical Computation Facing Malicious Adversaries

    We propose an efficient framework for enabling secure multi-party numerical computations in a Peer-to-Peer network. This problem arises in a range of applications such as collaborative filtering, distributed computation of trust and reputation, and monitoring, where the computing nodes are expected to preserve the privacy of their inputs while performing a joint computation of a certain function. Although there is a rich literature in the field of distributed systems security concerning secure multi-party computation, in practice it is hard to deploy those methods in very large-scale Peer-to-Peer networks. In this work, we try to bridge the gap between theoretical algorithms in the security domain and practical Peer-to-Peer deployment. We consider two security models. The first is the semi-honest model, where peers correctly follow the protocol but attempt to infer the private inputs of other peers. We provide three possible schemes for secure multi-party numerical computation in this model and identify a single lightweight scheme which outperforms the others. Using extensive simulation results over real Internet topologies, we demonstrate that our scheme is scalable to very large networks, with up to millions of nodes. The second model we consider is the malicious peers model, where peers can behave arbitrarily, deliberately trying to corrupt the results of the computation as well as to compromise the privacy of other peers. For this model we provide a fourth scheme that defends the execution of the computation against malicious peers, at a higher complexity relative to the semi-honest model. Overall, we provide the Peer-to-Peer network designer with a set of tools to choose from, based on the desired level of security.
    Comment: Submitted to Peer-to-Peer Networking and Applications Journal (PPNA) 200
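
    The abstract does not spell out its schemes, but secure multi-party numerical computation in the semi-honest model is commonly instantiated with additive secret sharing. The following is a minimal illustrative sketch of a secure-sum protocol, not the paper's protocol; the modulus Q, the no-collusion assumption, and all function names are our own choices:

    ```python
    import secrets

    # Prime modulus, assumed large enough that the true sum never wraps around.
    Q = 2**61 - 1

    def make_shares(value: int, n_peers: int) -> list[int]:
        """Split a private value into n additive shares modulo Q."""
        shares = [secrets.randbelow(Q) for _ in range(n_peers - 1)]
        return shares + [(value - sum(shares)) % Q]

    def secure_sum(private_inputs: list[int]) -> int:
        """Each peer distributes shares of its input; only the total is revealed.

        In the semi-honest model (and assuming not all other peers collude),
        no peer learns another peer's individual input.
        """
        n = len(private_inputs)
        received = [[0] * n for _ in range(n)]      # received[j][i]: share from peer i to peer j
        for i, x in enumerate(private_inputs):
            for j, s in enumerate(make_shares(x, n)):
                received[j][i] = s
        # each peer publishes only the sum of the shares it received
        partials = [sum(row) % Q for row in received]
        return sum(partials) % Q

    print(secure_sum([12, 7, 30, 1]))  # -> 50
    ```

    The shares each peer sees are uniformly random modulo Q, so individual inputs stay hidden, while the partial sums still combine to the exact total.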

    Privacy-Preserving Distributed Optimization via Subspace Perturbation: A General Framework

    As the modern world becomes increasingly digitized and interconnected, distributed signal processing has proven effective in processing the resulting large volumes of data. However, a main challenge limiting the broad use of distributed signal processing techniques is the issue of privacy in handling sensitive data. To address this privacy issue, we propose a novel yet general subspace perturbation method for privacy-preserving distributed optimization, which allows each node to obtain the desired solution while protecting its private data. In particular, we show that the dual variables introduced in each distributed optimizer will not converge in a certain subspace determined by the graph topology. Additionally, the optimization variable is guaranteed to converge to the desired solution because it is orthogonal to this non-convergent subspace. We therefore propose to insert noise in the non-convergent subspace through the dual variables, so that the private data are protected while the accuracy of the desired solution is completely unaffected. Moreover, the proposed method is shown to be secure under two widely used adversary models: passive and eavesdropping. Furthermore, we consider several distributed optimizers such as ADMM and PDMM to demonstrate the general applicability of the proposed method. Finally, we test the performance through a set of applications. Numerical tests indicate that the proposed method is superior to existing methods in terms of estimation accuracy, privacy level, communication cost and convergence rate.
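
    The mechanism the abstract describes, noise inserted through the dual variables of a distributed optimizer, can be illustrated with a small PDMM average-consensus sketch. This is a minimal illustration under our own assumptions (ring topology, quadratic local objectives f_i(x_i) = 0.5 (x_i - a_i)^2, penalty c, and noise scale), not the paper's implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, c = 6, 0.4
    a = rng.normal(size=n)                               # each node's private datum
    nbrs = [[(i - 1) % n, (i + 1) % n] for i in range(n)]  # ring graph
    sgn = lambda i, j: 1.0 if i < j else -1.0            # edge-direction sign A_ij

    # Subspace perturbation: initialize the dual variables z_{i|j} with large
    # noise. Only their component in the convergent subspace determines the
    # solution, so x still reaches the true average, while the noise masks
    # the private a_i in every exchanged message.
    z = {(i, j): 1e3 * rng.normal() for i in range(n) for j in nbrs[i]}

    x = np.zeros(n)
    for _ in range(500):
        # primal update for the local quadratic objective
        for i in range(n):
            x[i] = (a[i] - sum(sgn(i, j) * z[(i, j)] for j in nbrs[i])) / (1 + c * len(nbrs[i]))
        # dual update: node j sends z_{j|i} + 2c * A_ji * x_j to neighbor i
        z = {(i, j): z[(j, i)] + 2 * c * sgn(j, i) * x[j]
             for i in range(n) for j in nbrs[i]}

    print(np.allclose(x, a.mean(), atol=1e-6))           # True: accuracy unaffected
    ```

    The noise component lying in the non-convergent subspace persists across iterations without entering the primal update, which is why the consensus value is exact despite the perturbation.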