
    How to prove knowledge of small secrets

    We propose a new zero-knowledge protocol applicable to additively homomorphic functions that map integer vectors to an Abelian group. The protocol demonstrates knowledge of a short preimage and achieves amortised efficiency comparable to the approach of Cramer and Damgård from Crypto 2010, but gives a much tighter bound on what we can extract from a dishonest prover. Towards achieving this result, we develop an analysis of bins-and-balls games that might be of independent interest. We also provide a general analysis of rewinding a cut-and-choose protocol, as well as a method to use Lyubashevsky's rejection sampling technique efficiently in an interactive protocol when many proofs are given simultaneously. Our new protocol yields improved proofs of plaintext knowledge for (Ring-)LWE-based cryptosystems, where such general techniques were not known before. Moreover, these proofs can be extended to prove preimages of homomorphic hash functions as well.
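
    The rejection sampling step mentioned above can be illustrated with a minimal Python sketch of Lyubashevsky-style rejection sampling in its simple uniform, infinity-norm variant. This is only an illustration of the general technique, not the protocol of the paper; the parameter names and bounds are assumptions chosen for the example.

        import secrets

        def sample_mask(n, B):
            """Uniform masking vector y in [-B, B]^n."""
            return [secrets.randbelow(2 * B + 1) - B for _ in range(n)]

        def respond(s, c, B, beta):
            """One rejection-sampling attempt.

            s    -- secret integer vector with |s_i| <= beta
            c    -- small integer challenge
            B    -- masking bound, assumed much larger than |c| * beta
            Returns z = y + c*s if it lies inside the 'safe' box, else None
            (abort and retry with a fresh mask).  Accepted responses are
            uniform on [-(B - |c|*beta), B - |c|*beta]^n, independent of s,
            so they leak nothing about the secret.
            """
            y = sample_mask(len(s), B)
            z = [yi + c * si for yi, si in zip(y, s)]
            bound = B - abs(c) * beta
            if all(abs(zi) <= bound for zi in z):
                return z
            return None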

    Blowfish Privacy: Tuning Privacy-Utility Trade-offs using Policies

    Privacy definitions provide ways of trading off the privacy of individuals in a statistical database against the utility of downstream analysis of the data. In this paper, we present Blowfish, a class of privacy definitions inspired by the Pufferfish framework that provides a rich interface for this trade-off. In particular, we allow data publishers to extend differential privacy using a policy, which specifies (a) secrets, or information that must be kept secret, and (b) constraints that may be known about the data. While the secret specification allows increased utility by lessening protection for certain individual properties, the constraint specification provides added protection against an adversary who knows correlations in the data (arising from constraints). We formalize policies and present novel algorithms that can handle general specifications of sensitive information and certain count constraints. We show that there are reasonable policies under which our privacy mechanisms for k-means clustering, histograms and range queries introduce significantly less noise than their differentially private counterparts. We quantify the privacy-utility trade-offs for various policies analytically and empirically on real datasets. Comment: Full version of the paper at SIGMOD'14, Snowbird, Utah, US
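
    How a policy can translate into less noise is easiest to see through the standard Laplace mechanism. The sketch below is a generic illustration, not the paper's algorithms; the `sensitivity` argument is a hypothetical knob standing in for the policy-dependent calibration that a Blowfish-style policy makes possible.

        import random

        def laplace_noise(scale):
            """Laplace(0, scale) sample, as the difference of two exponentials."""
            return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

        def noisy_histogram(counts, epsilon, sensitivity=1.0):
            """Release a histogram with Laplace noise of scale sensitivity / epsilon.

            Under plain differential privacy a histogram has sensitivity 1
            (one individual changes one bucket by one).  A policy that protects
            only coarser secrets can justify a smaller effective sensitivity,
            and hence less noise; `sensitivity` is a hypothetical stand-in for
            that policy-dependent value.
            """
            scale = sensitivity / epsilon
            return [c + laplace_noise(scale) for c in counts]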

    On the Design of Cryptographic Primitives

    The main objective of this work is twofold. On the one hand, it gives a brief overview of the area of two-party cryptographic protocols. On the other hand, it proposes new schemes and guidelines for improving the practice of robust protocol design. To achieve this double goal, a tour through the two main cryptographic primitives is carried out. Within this survey, some of the most representative algorithms based on the Theory of Finite Fields are provided, and new general schemes and specific algorithms based on Graph Theory are proposed.
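
    As a concrete example of the kind of finite-field primitive such a survey covers, a Diffie-Hellman key agreement over GF(p) can be sketched as follows. The parameters are toy values chosen only to illustrate the arithmetic, not a recommendation, and the sketch is not taken from the work itself.

        import secrets

        # Toy Diffie-Hellman key agreement over the prime field GF(p): each party
        # keeps a secret exponent and publishes only g^secret mod p.  Deployed
        # systems use standardized groups of 2048+ bits.
        p = 2**127 - 1                       # a Mersenne prime, fine for a demo
        g = 5

        a = secrets.randbelow(p - 2) + 1     # Alice's secret exponent
        b = secrets.randbelow(p - 2) + 1     # Bob's secret exponent

        A = pow(g, a, p)                     # Alice sends A to Bob
        B = pow(g, b, p)                     # Bob sends B to Alice

        assert pow(B, a, p) == pow(A, b, p)  # both derive the same shared element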

    The Capacity of Online (Causal) q-ary Error-Erasure Channels

    In the $q$-ary online (or "causal") channel coding model, a sender wishes to communicate a message to a receiver by transmitting a codeword $\mathbf{x} = (x_1,\ldots,x_n) \in \{0,1,\ldots,q-1\}^n$ symbol by symbol via a channel limited to at most $pn$ errors and/or $p^{*}n$ erasures. The channel is "online" in the sense that at the $i$-th step of communication the channel decides whether or not to corrupt the $i$-th symbol based on its view so far, i.e., its decision depends only on the transmitted symbols $(x_1,\ldots,x_i)$. This is in contrast to the classical adversarial channel, in which the corruption is chosen by a channel that has full knowledge of the sent codeword $\mathbf{x}$. In this work we study the capacity of $q$-ary online channels for a combined corruption model, in which the channel may impose at most $pn$ errors and at most $p^{*}n$ erasures on the transmitted codeword. The online channel (in both the error and erasure case) has seen a number of recent studies which present both upper and lower bounds on its capacity. In this work, we give a full characterization of the capacity as a function of $q$, $p$, and $p^{*}$. Comment: This is a new version of the binary case, which can be found at arXiv:1412.637

    ZeTA - Zero-Trust Authentication: Relying on Innate Human Ability, not Technology

    Reliable authentication requires the devices and channels involved in the process to be trustworthy; otherwise, authentication secrets can easily be compromised. Given the unceasing efforts of attackers worldwide, such trustworthiness is increasingly not a given. A variety of technical solutions, such as utilising multiple devices/channels and verification protocols, have the potential to mitigate the threat of untrusted communications to a certain extent. Yet such technical solutions make two assumptions: (1) users have access to multiple devices and (2) attackers will not resort to hacking the human using social engineering techniques. In this paper, we propose and explore the potential of using human-based computation instead of solely technical solutions to mitigate the threat of untrusted devices and channels. ZeTA (Zero Trust Authentication on untrusted channels) has the potential to allow people to authenticate despite compromised channels or communications and easily observed usage. Our contributions are threefold: (1) We propose the ZeTA protocol, with a formal definition and security analysis, that utilises semantics and human-based computation to ameliorate the problem of untrusted devices and channels. (2) We outline a security analysis to assess the envisaged performance of the proposed authentication protocol. (3) We report on a usability study that explores the viability of relying on human computation in this context.

    Sharing knowledge without sharing data: on the false choice between the privacy and utility of information

    Presentation slides for Azer Bestavros' June 1, 2017 talk at the BU Law School. As part of an ongoing collaboration, the Law School hosted a talk by Azer Bestavros, BU Professor of Computer Science and the Director of the Hariri Institute for Computing. Prof. Bestavros detailed his groundbreaking research project regarding pay equity. In this project, he and his colleagues conducted a study of more than 170 employers in the Boston area, analyzing and reporting pay equity results without compromising any of the firms' confidentiality. The project - and the methodology - have broad implications well beyond the employment context.
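
    The general idea behind computing such aggregates without exposing any single input can be sketched with additive secret sharing. This is an illustration of the technique in general, not the specific system used in the study; the figures and the three-server setup are hypothetical.

        import secrets

        MOD = 2**61 - 1   # public modulus, large enough to hold any realistic sum

        def share(value, n_parties):
            """Split an integer into n_parties additive shares that sum to it mod MOD."""
            shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
            shares.append((value - sum(shares)) % MOD)
            return shares

        # Each employer splits its private payroll figure among three servers; any
        # single server sees only uniformly random shares, yet the servers can sum
        # their columns and reveal nothing but the aggregate.
        payrolls = [70_000, 82_000, 65_000]                 # hypothetical inputs
        columns = list(zip(*(share(v, 3) for v in payrolls)))
        aggregate = sum(sum(col) for col in columns) % MOD
        assert aggregate == sum(payrolls) % MOD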
