
    Privacy-Preserving Aggregation of Time-Series Data with Public Verifiability from Simple Assumptions

    Aggregator oblivious encryption was proposed by Shi et al. (NDSS 2011), in which an aggregator can compute an aggregated sum of data but is unable to learn anything else (aggregator obliviousness). Since the aggregator does not learn individual data that may reveal users' habits and behaviors, several applications, such as privacy-preserving smart metering, have been considered. In this paper, we propose aggregator oblivious encryption schemes with public verifiability, where the aggregator is required to generate a proof of the aggregated sum and anyone can verify whether the sum has been correctly computed by the aggregator. Although Leontiadis et al. (CANS 2015) considered verifiability, their scheme requires an interactive complexity assumption to prove the unforgeability of the proof. Our schemes are proven unforgeable under a static and simple assumption (a variant of the Computational Diffie-Hellman assumption). Moreover, our schemes inherit the tightness of the reduction of the Benhamouda et al. scheme (ACM TISSEC 2016) for proving aggregator obliviousness. This tight reduction allows us to employ elliptic curves of smaller order and leads to an efficient implementation.
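
    To make the setting concrete, below is a minimal Python sketch of Shi-et-al.-style aggregator oblivious encryption over a toy multiplicative group. It is an illustration only: real schemes, including this paper's, use elliptic curves, and the public-verifiability layer the paper adds is omitted. All parameters and helper names (`H`, `enc`, `agg_dec`) are illustrative assumptions, not the paper's notation.

```python
import hashlib
import secrets

# Toy parameters: p = 2q + 1 (safe prime), g generates the order-q
# subgroup of quadratic residues. Real deployments use elliptic curves.
q = 1019
p = 2 * q + 1
g = 4

def H(t: int) -> int:
    """Hash a time period into the order-q subgroup (by squaring mod p)."""
    h = int.from_bytes(hashlib.sha256(str(t).encode()).digest(), "big")
    return pow(h % p, 2, p)

n = 3                                         # number of users
user_keys = [secrets.randbelow(q) for _ in range(n)]
agg_key = (-sum(user_keys)) % q               # all keys sum to 0 mod q

def enc(s_i: int, x_i: int, t: int) -> int:
    """User i encrypts reading x_i for period t: c = g^x * H(t)^s."""
    return (pow(g, x_i, p) * pow(H(t), s_i, p)) % p

def agg_dec(s_0: int, cts: list[int], t: int) -> int:
    """Aggregator recovers the sum: the H(t) terms cancel, leaving g^sum."""
    v = pow(H(t), s_0, p)
    for c in cts:
        v = (v * c) % p
    # The plaintext space is small, so solve the discrete log by brute force.
    acc = 1
    for total in range(q):
        if acc == v:
            return total
        acc = (acc * g) % p
    raise ValueError("sum out of range")

t = 42
readings = [5, 7, 11]
cts = [enc(s, x, t) for s, x in zip(user_keys, readings)]
assert agg_dec(agg_key, cts, t) == sum(readings)  # 23
```

    Because the aggregator's key cancels only the blinding terms, it obtains g raised to the sum and nothing about any individual reading; the tight reduction discussed in the abstract is what lets real instantiations keep the group (curve) small.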

    Zero-knowledge Proof Meets Machine Learning in Verifiability: A Survey

    With the rapid advancement of artificial intelligence technology, machine learning models are gradually becoming part of our daily lives. High-quality models rely not only on efficient optimization algorithms but also on training processes built upon vast amounts of data and computational power. In practice, however, challenges such as limited computational resources and data privacy concerns often prevent users from training machine learning models locally, leading them to alternatives such as outsourced learning and federated learning. While these methods make model training feasible, they introduce concerns about the trustworthiness of the training process, since computations are not performed locally. Similar trustworthiness issues arise with outsourced model inference. Both problems can be summarized as the trustworthiness problem of model computations: how can one verify that the results computed by other participants are derived according to the specified algorithm, model, and input data? To address this challenge, verifiable machine learning (VML) has emerged. This paper presents a comprehensive survey of zero-knowledge proof-based verifiable machine learning (ZKP-VML). We first analyze the verifiability issues that may arise in different machine learning scenarios. Subsequently, we provide a formal definition of ZKP-VML. We then conduct a detailed analysis and classification of existing works based on their technical approaches. Finally, we discuss the key challenges and future directions in the field of ZKP-based VML.

    Masquerade: Verifiable Multi-Party Aggregation with Secure Multiplicative Commitments

    In crowd-sourced data aggregation, participants share their data points with curators. However, the lack of privacy guarantees may discourage participation, which motivates the need for privacy-preserving aggregation protocols. Unfortunately, existing solutions do not support public auditing without revealing the participants' data. In real-world applications, there is a need for public verifiability (i.e., verifying the protocol's correctness) while preserving the privacy of the participants' inputs, since the participants do not always trust the data curator. Likewise, public distributed ledgers (e.g., blockchains) provide public auditing but may reveal sensitive information. We present Masquerade, a novel protocol for computing private statistics, such as sums, averages, and histograms, without revealing anything about participants' data. We propose a tailored multiplicative commitment scheme to ensure the integrity of data aggregations and publish all the participants' commitments on a ledger to provide public verifiability. We complement our methodology with two zero-knowledge proof protocols that detect potentially untrusted participants who attempt to poison the aggregation results. Thus, Masquerade ensures the validity of shared data points before they are aggregated, enabling a broad range of numerical and categorical studies. In our experiments, we evaluate our protocol's runtime and communication overhead using homomorphic ciphertexts and commitments for a variable number of participants.
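
    The paper's tailored multiplicative commitment scheme is not reproduced here; as a rough illustration of ledger-based verifiable aggregation, the sketch below uses standard Pedersen commitments instead, which are additively homomorphic, so any auditor can check an opened aggregate against the published per-participant commitments. The group parameters and the way `h` is derived are toy assumptions chosen for readability.

```python
import secrets

# Toy group: p = 2q + 1, g and h generate the order-q subgroup.
q = 1019
p = 2 * q + 1
g = 4
h = pow(g, 123, p)   # in practice h must be derived so log_g(h) stays unknown

def commit(x: int, r: int) -> int:
    """Pedersen commitment: hiding (via r) and binding (under discrete log)."""
    return (pow(g, x, p) * pow(h, r, p)) % p

# Each participant publishes a commitment to their data point on a ledger.
data = [5, 7, 11]
rands = [secrets.randbelow(q) for _ in data]
ledger = [commit(x, r) for x, r in zip(data, rands)]

# The curator aggregates, then opens only the sum and combined randomness.
X = sum(data) % q
R = sum(rands) % q

# Any auditor can verify the opening against the public commitments:
# Pedersen commitments are additively homomorphic, so the product of the
# individual commitments must equal a commitment to the claimed sum.
product = 1
for c in ledger:
    product = (product * c) % p
assert product == commit(X, R)
print("aggregate verified:", X)   # 23
```

    The same pattern (publish per-input commitments, open only the aggregate) is what lets a ledger provide public auditing without exposing individual inputs; Masquerade's zero-knowledge range checks on each commitment would sit on top of this.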

    Private Stream Aggregation with Labels in the Standard Model

    ADSNARK: Nearly practical and privacy-preserving proofs on authenticated data

    We study the problem of privacy-preserving proofs on authenticated data, where a party receives data from a trusted source and is requested to prove computations over the data to third parties in a correct and private way, i.e., the third party learns no information about the data but is still assured that the claimed proof is valid. Our work particularly focuses on the challenging requirement that the third party should be able to verify validity with respect to the specific data authenticated by the source, even without having access to that source. This problem is motivated by various scenarios emerging from application areas such as wearable computing, smart metering, and general business-to-business interactions. Furthermore, these applications demand that any meaningful solution also satisfy additional properties related to usability and scalability. In this paper, we formalize this three-party model, discuss concrete application scenarios, and then design, build, and evaluate ADSNARK, a nearly practical system for proving arbitrary computations over authenticated data in a privacy-preserving manner. ADSNARK improves significantly over state-of-the-art solutions for this model. For instance, compared to corresponding solutions based on Pinocchio (Oakland '13), ADSNARK achieves up to a 25× improvement in proof-computation time and a 20× reduction in prover storage space.

    An individually verifiable voting protocol with complete recorded-as-intended and counted-as-recorded guarantees

    Democratic principles demand that every voter be able to individually verify that their vote is recorded as intended and counted as recorded, without having to trust any authorities. However, most end-to-end (E2E) verifiable voting protocols that provide universal verifiability and voter secrecy implicitly require trusting some authorities or auditors for the correctness guarantees they provide. In this paper, we explore the notion of individual verifiability. We evaluate the existing E2E voting protocols and propose a new protocol that guarantees such verifiability without any trust requirements. Our construction depends on a novel vote commitment scheme for capturing voter intent that allows voters to obtain a direct zero-knowledge proof that their vote was recorded as intended. We also ensure protection against spurious vote injection or deletion after eligibility verification, as well as against polling-booth-level community profiling.

    Verifiable Encodings for Secure Homomorphic Analytics

    Homomorphic encryption, which enables the execution of arithmetic operations directly on ciphertexts, is a promising solution for protecting the privacy of cloud-delegated computations on sensitive data. However, the correctness of the computation result is not ensured. We propose two error-detection encodings and build authenticators that enable practical client-side verification of cloud-based homomorphic computations under different trade-offs, without compromising the features of the encryption algorithm. Our authenticators operate on top of popular ring-learning-with-errors-based fully homomorphic encryption schemes over the integers. We implement our solution in VERITAS, a ready-to-use system for verification of outsourced computations executed over encrypted data. We show that, in contrast to prior work, VERITAS supports verification of any homomorphic operation, and we demonstrate its practicality for various applications, such as ride-hailing, genomic-data analysis, encrypted search, and machine-learning training and inference.
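
    As a rough illustration of the error-detection idea (not VERITAS's actual encodings), the sketch below attaches a secret linear MAC to each input and has the untrusted server run the same linear computation on values and tags. Plain integers stand in for the RLWE ciphertexts a real system would use, and the names (`encode`, `alpha`, `P`) are illustrative assumptions; this toy check covers only additions and scalar multiplications.

```python
import secrets

# Stand-in "ciphertexts": plain integers mod P. In a real system each value
# would be an RLWE-based homomorphic ciphertext; the encoding logic is the same.
P = 2**61 - 1                  # plaintext modulus (a Mersenne prime)
alpha = secrets.randbelow(P)   # client's secret MAC key

def encode(m: int) -> tuple[int, int]:
    """Client-side encoding: ship the value together with a linear tag."""
    return (m % P, (alpha * m) % P)

def server_weighted_sum(encoded: list[tuple[int, int]], weights: list[int]):
    """Untrusted server: applies the SAME linear computation to values and
    tags, so honest homomorphic evaluation preserves the tag relation."""
    y = sum(w * m for (m, _), w in zip(encoded, weights)) % P
    tau = sum(w * t for (_, t), w in zip(encoded, weights)) % P
    return y, tau

def client_verify(y: int, tau: int) -> bool:
    """Client checks the decrypted result against its decrypted tag."""
    return tau == (alpha * y) % P

inputs = [3, 1, 4, 1, 5]
weights = [2, 7, 1, 8, 2]
y, tau = server_weighted_sum([encode(m) for m in inputs], weights)
assert client_verify(y, tau)            # honest server passes
assert not client_verify(y + 1, tau)    # a tampered result is caught w.h.p.
```

    Supporting arbitrary homomorphic operations, as the paper claims for VERITAS, requires more elaborate encodings, since multiplying two tagged values changes the tag relation; the sketch only conveys why a cheating server that never learns the MAC key cannot forge a consistent result.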

    Advances and Open Problems in Federated Learning

    Federated learning (FL) is a machine learning setting where many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g., a service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges. Published in Foundations and Trends in Machine Learning, Vol. 4, Issue 1. See: https://www.nowpublishers.com/article/Details/MAL-08
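
    The canonical algorithm in this setting is federated averaging (FedAvg): each client performs a few local training steps on its own data, and the server aggregates only the resulting model parameters, weighted by local dataset size. Below is a minimal numpy sketch for linear regression; the synthetic data, learning rate, and round counts are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.01, epochs=5):
    """One client's round: a few steps of local gradient descent on the
    mean-squared error; the raw data never leaves the client."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Synthetic clients with differently sized local datasets.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(int(rng.integers(20, 60)), 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=len(X))))

w = np.zeros(2)                      # global model held by the server
for _ in range(50):                  # communication rounds
    updates = [local_update(w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    w = np.average(updates, axis=0, weights=sizes)   # weighted model average

print(w)   # approaches [2, -1] without centralizing any raw data
```

    Note that the server still sees per-client model updates, which is exactly why the secure-aggregation and privacy techniques surveyed in this paper matter.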

    VeriVoting: A decentralized, verifiable and privacy-preserving scheme for weighted voting

    Decentralization, verifiability, and privacy preservation are three fundamental properties of modern e-voting. In this paper, we investigate them extensively and present a novel e-voting scheme, VeriVoting, which is the first to satisfy all three. More specifically, decentralization is realized through blockchain technology and the distribution of decryption power among competing entities, such as candidates. Verifiability is satisfied when the public can verify the ballots and decryption keys. Finally, bidirectional unlinkability is achieved to help preserve privacy by decoupling voter identity from ballot content. Following these ideas, we first leverage linear homomorphic encryption schemes and non-interactive zero-knowledge argument systems to construct a voting primitive, SemiVoting, which meets decentralization, decryption-key verifiability, and ballot privacy. To further achieve ballot-ciphertext verifiability and anonymity, we extend this primitive with blockchain and verifiable computation to arrive at VeriVoting. Through security analysis and performance evaluations, we show that VeriVoting offers a new trade-off between security and efficiency that differs from all previous e-voting schemes and provides a radically new practical approach to large-scale elections.
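
    The homomorphic-tally idea underlying schemes like VeriVoting can be illustrated with exponential ElGamal, a standard linearly homomorphic encryption scheme: each ballot encrypts 0 or 1, the component-wise product of all ballot ciphertexts encrypts the vote sum, and only that sum is ever decrypted. The sketch below uses a toy group and a single decryption key, omitting the paper's zero-knowledge arguments, blockchain layer, and distributed decryption; all parameters are illustrative assumptions, not the paper's construction.

```python
import secrets

# Toy group: p = 2q + 1, g generates the order-q subgroup.
q = 1019
p = 2 * q + 1
g = 4

sk = secrets.randbelow(q)   # in VeriVoting decryption power is distributed;
pk = pow(g, sk, p)          # a single key keeps this sketch short

def enc_vote(v: int) -> tuple[int, int]:
    """Exponential ElGamal: E(v) = (g^r, g^v * pk^r), additively homomorphic."""
    r = secrets.randbelow(q)
    return pow(g, r, p), (pow(g, v, p) * pow(pk, r, p)) % p

def tally(cts: list[tuple[int, int]]) -> int:
    """Multiply ciphertexts component-wise: the result encrypts the vote sum."""
    a, b = 1, 1
    for c1, c2 in cts:
        a, b = (a * c1) % p, (b * c2) % p
    m = (b * pow(a, q - sk, p)) % p     # decrypt: recovers g^sum
    count, acc = 0, 1                   # small tally: brute-force discrete log
    while acc != m:
        acc = (acc * g) % p
        count += 1
    return count

ballots = [1, 0, 1, 1, 0, 1]                 # yes/no votes
board = [enc_vote(v) for v in ballots]       # published, e.g., on a ledger
assert tally(board) == sum(ballots)          # 4
```

    In a full scheme, each voter would additionally attach a zero-knowledge proof that their ciphertext encrypts 0 or 1, and decryption would require a quorum of key-share holders, which is where the decentralization and verifiability properties described above come from.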