
    Bloom Filters in Adversarial Environments

    Many efficient data structures use randomness, allowing them to improve upon deterministic ones. Usually, their efficiency and correctness are analyzed using probabilistic tools under the assumption that the inputs and queries are independent of the internal randomness of the data structure. In this work, we consider data structures in a more robust model, which we call the adversarial model. Roughly speaking, this model allows an adversary to choose inputs and queries adaptively according to previous responses. Specifically, we consider a data structure known as a "Bloom filter" and prove a tight connection between Bloom filters in this model and cryptography. A Bloom filter represents a set $S$ of elements approximately, by using fewer bits than a precise representation. The price for succinctness is allowing some errors: for any $x \in S$ it should always answer `Yes', and for any $x \notin S$ it should answer `Yes' only with small probability. In the adversarial model, we consider both efficient adversaries (that run in polynomial time) and computationally unbounded adversaries that are only bounded in the number of queries they can make. For computationally bounded adversaries, we show that non-trivial (memory-wise) Bloom filters exist if and only if one-way functions exist. For unbounded adversaries, we show that there exists a Bloom filter for sets of size $n$ and error $\varepsilon$ that is secure against $t$ queries and uses only $O(n \log\frac{1}{\varepsilon} + t)$ bits of memory. In comparison, $n \log\frac{1}{\varepsilon}$ is the best possible under a non-adaptive adversary.
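    As a point of reference for the data structure being analyzed, the following is a minimal sketch of a standard (non-adversarially-secure) Bloom filter; the bit-array size m, number of hash functions k, and the use of salted SHA-256 hashes are illustrative choices, not taken from the paper.

    ```python
    import hashlib

    class BloomFilter:
        """Minimal textbook Bloom filter: no false negatives, small false-positive rate."""

        def __init__(self, m: int, k: int):
            self.m = m                  # number of bits
            self.k = k                  # number of hash functions
            self.bits = bytearray(m)    # one byte per bit, for simplicity

        def _positions(self, item: str):
            # Derive k indices by salting a single hash function (illustrative choice).
            for i in range(self.k):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.m

        def add(self, item: str) -> None:
            for pos in self._positions(item):
                self.bits[pos] = 1

        def might_contain(self, item: str) -> bool:
            # True for every inserted item; may also be True for some non-members.
            return all(self.bits[pos] for pos in self._positions(item))


    if __name__ == "__main__":
        bf = BloomFilter(m=1024, k=4)
        for word in ["alice", "bob", "carol"]:
            bf.add(word)
        print(bf.might_contain("alice"))   # True (always, for inserted items)
        print(bf.might_contain("mallory")) # usually False; True only with small probability
    ```

    Because the hash functions here are fixed and public, an adaptive adversary who observes query answers can hunt for false positives, which is exactly the attack surface the adversarial model formalizes.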

    Communication Complexity and Secure Function Evaluation

    We suggest two new methodologies for the design of efficient secure protocols that differ with respect to their underlying computational models. In one methodology, we utilize the communication complexity tree (or branching program) of f and transform it into a secure protocol. In other words, "any function f that can be computed using communication complexity c can be computed securely using communication complexity that is polynomial in c and a security parameter". The second methodology uses the circuit computing f, enhanced with look-up tables, as its underlying computational model. It is possible to simulate any RAM machine in this model with polylogarithmic blowup. Hence it is possible to start with a computation of f on a RAM machine and transform it into a secure protocol. We show many applications of these new methodologies, resulting in protocols efficient either in communication or in computation. In particular, we exemplify a protocol for the "millionaires' problem", where two participants want to compare their values but reveal no other information. Our protocol is more efficient than previously known ones in either communication or computation.
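    To make the target of such a protocol concrete, here is a minimal sketch of the ideal functionality for the millionaires' problem; the trusted party implicit in this function is a stand-in used only for illustration and is not part of the paper's protocols, whose point is to emulate this output without any trusted party and without leaking anything beyond it.

    ```python
    def millionaires_ideal_functionality(a: int, b: int) -> bool:
        """Ideal functionality f(a, b) = [a > b].

        This only pins down *what* the parties are allowed to learn; a secure
        protocol must produce this bit without revealing a or b themselves."""
        return a > b


    if __name__ == "__main__":
        alice_wealth = 7_000_000   # known only to Alice
        bob_wealth = 5_500_000     # known only to Bob
        # Both parties learn exactly one bit: whether Alice is richer.
        print(millionaires_ideal_functionality(alice_wealth, bob_wealth))  # True
    ```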

    The hardness of decoding linear codes with preprocessing

    The problem of maximum-likelihood decoding of linear block codes is known to be hard. It is shown that the problem remains hard even if the code is known in advance and can be preprocessed for as long as desired in order to devise a decoding algorithm. The hardness is based on the fact that the existence of a polynomial-time algorithm implies that the polynomial hierarchy collapses. Thus, some linear block codes probably do not have an efficient decoder. The proof is based on results in complexity theory that relate uniform and nonuniform complexity classes.
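    As a baseline for what "decoding" means here, the following is a minimal brute-force maximum-likelihood decoder over GF(2); the generator matrix and received word are illustrative (a [7,4] Hamming code, not from the paper), and the exhaustive search over all 2^k messages is exactly the exponential cost that, by this hardness result, preprocessing is unlikely to remove in general.

    ```python
    from itertools import product

    def encode(message, G):
        """Encode a k-bit message with a k x n generator matrix over GF(2)."""
        n = len(G[0])
        return tuple(sum(message[i] * G[i][j] for i in range(len(G))) % 2 for j in range(n))

    def ml_decode(received, G):
        """Brute-force maximum-likelihood decoding: try all 2^k messages and
        return the codeword closest in Hamming distance to the received word."""
        best = None
        for message in product([0, 1], repeat=len(G)):
            codeword = encode(message, G)
            dist = sum(r != c for r, c in zip(received, codeword))
            if best is None or dist < best[0]:
                best = (dist, message, codeword)
        return best  # (distance, message, codeword)

    if __name__ == "__main__":
        # Illustrative [7,4] Hamming code generator matrix in systematic form.
        G = [
            [1, 0, 0, 0, 1, 1, 0],
            [0, 1, 0, 0, 1, 0, 1],
            [0, 0, 1, 0, 0, 1, 1],
            [0, 0, 0, 1, 1, 1, 1],
        ]
        received = (1, 0, 1, 1, 0, 1, 1)  # the codeword of message 1011 with its last bit flipped
        print(ml_decode(received, G))     # (1, (1, 0, 1, 1), (1, 0, 1, 1, 0, 1, 0))
    ```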

    Can Two Walk Together: Privacy Enhancing Methods and Preventing Tracking of Users

    We present a new concern when collecting data from individuals that arises from the attempt to mitigate privacy leakage in multiple reporting: tracking of users participating in the data collection via the mechanisms added to provide privacy. We present several definitions for untrackable mechanisms, inspired by the differential privacy framework. Specifically, we define the trackable parameter as the log of the maximum ratio between the probability that a set of reports originated from a single user and the probability that the same set of reports originated from two users (with the same private value). We explore the implications of this new definition. We show how differentially private and untrackable mechanisms can be combined to achieve a bound for the problem of detecting when a certain user changed their private value. Examining Google's deployed solution for everlasting privacy, we show that RAPPOR (Erlingsson et al., ACM CCS 2014) is trackable in our framework for the parameters presented in their paper. We analyze a variant of randomized response for collecting statistics of single bits, Bitwise Everlasting Privacy, that achieves good accuracy and everlasting privacy while being only reasonably untrackable; specifically, its trackable parameter grows linearly in the number of reports. For collecting statistics about data from larger domains (for histograms and heavy hitters), we present a mechanism that prevents tracking for a limited number of responses. We also present the concept of Mechanism Chaining, using the output of one mechanism as the input of another, in the scope of Differential Privacy, and show that the chaining of an $\varepsilon_1$-LDP mechanism with an $\varepsilon_2$-LDP mechanism is $\ln\frac{e^{\varepsilon_1+\varepsilon_2}+1}{e^{\varepsilon_1}+e^{\varepsilon_2}}$-LDP and that this bound is tight.
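    A small numeric check of the chaining bound makes its behaviour concrete; the parameter values below are arbitrary illustrations, not taken from the paper.

    ```python
    import math

    def chained_ldp_epsilon(eps1: float, eps2: float) -> float:
        """Privacy parameter of chaining an eps1-LDP mechanism with an eps2-LDP one,
        per the bound ln((e^(eps1+eps2) + 1) / (e^eps1 + e^eps2))."""
        return math.log((math.exp(eps1 + eps2) + 1) / (math.exp(eps1) + math.exp(eps2)))

    if __name__ == "__main__":
        for eps1, eps2 in [(0.5, 0.5), (1.0, 1.0), (1.0, 5.0)]:
            eps = chained_ldp_epsilon(eps1, eps2)
            # Chaining never hurts: the resulting parameter is at most min(eps1, eps2).
            print(f"eps1={eps1}, eps2={eps2} -> chained eps={eps:.4f} "
                  f"(min of inputs: {min(eps1, eps2)})")
    ```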

    Construction of asymptotically good low-rate error-correcting codes through pseudo-random graphs

    A novel technique, based on the pseudo-random properties of certain graphs known as expanders, is used to obtain new simple explicit constructions of asymptotically good codes. In one of the constructions, the expanders are used to enhance Justesen codes by replicating, shuffling, and then regrouping the code coordinates. For any fixed (small) rate, and for a sufficiently large alphabet, the codes thus obtained lie above the Zyablov bound. Using these codes as outer codes in a concatenated scheme, a second asymptotically good construction is obtained which applies to small alphabets (say, GF(2)) as well. Although these concatenated codes lie below the Zyablov bound, they are still superior to previously known explicit constructions in the zero-rate neighborhood.
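    The replicate/shuffle/regroup step can be illustrated mechanically; the sketch below uses a random d-regular bipartite graph as a placeholder where the construction calls for an explicit expander, and all parameters are illustrative rather than taken from the paper.

    ```python
    import random

    def regroup_via_bipartite_graph(codeword, d, seed=0):
        """Replicate each coordinate d times, route the copies along the edges of a
        d-regular bipartite graph, and regroup them at the right vertices into
        symbols over a larger alphabet (d-tuples of the original coordinates).

        A real construction uses an explicit expander; the random d-regular
        bipartite graph built here is only a placeholder for illustration."""
        n = len(codeword)
        rng = random.Random(seed)
        # Build a d-regular bipartite graph on n left and n right vertices
        # as a union of d random perfect matchings.
        matchings = [rng.sample(range(n), n) for _ in range(d)]
        grouped = [[] for _ in range(n)]
        for left in range(n):
            for matching in matchings:
                right = matching[left]
                grouped[right].append(codeword[left])  # one copy of coordinate `left`
        # Each right vertex now holds exactly d coordinates: one symbol of the new code.
        return [tuple(symbols) for symbols in grouped]

    if __name__ == "__main__":
        inner_codeword = [0, 1, 1, 0, 1, 0, 0, 1]   # illustrative codeword over GF(2)
        new_symbols = regroup_via_bipartite_graph(inner_codeword, d=3)
        print(new_symbols)  # 8 symbols, each a 3-tuple over GF(2)
    ```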

    information-theoretic security, conflict resolution, expander graphs

    Motivated by the challenging task of designing "secure" vote storage mechanisms, we study information storage mechanisms that operate in extremely hostile environments. In such environments, the majority of existing techniques for information storage and for security are susceptible to powerful adversarial attacks. We propose a mechanism for storing a set of at most $K$ elements from a large universe of size $N$ on write-once memories in a manner that does not reveal the insertion order of the elements. We consider a standard model for write-once memories, in which the memory is initialized to the all-zero state, and the only operation allowed is flipping bits from 0 to 1. Whereas previously known constructions were either inefficient (requiring $\Theta(K^2)$ memory), randomized, or employed cryptographic techniques that are unlikely to be available in hostile environments, we eliminate each of these undesirable properties. The total amount of memory used by the mechanism is linear in the number of stored elements and poly-logarithmic in the size of the universe of elements.
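    To fix the model, the following is a minimal sketch of a write-once memory together with the trivial insertion-order-hiding baseline of one bit per universe element; it uses $N$ bits rather than the paper's linear-in-$K$, polylog-in-$N$ space, and is included only to illustrate the 0-to-1 write rule and the history-independence goal.

    ```python
    class WriteOnceMemory:
        """Memory initialized to all zeros; the only allowed operation is 0 -> 1."""

        def __init__(self, size: int):
            self.bits = [0] * size

        def set_bit(self, index: int) -> None:
            # Setting a bit that is already 1 is a no-op; 1 -> 0 is never allowed.
            self.bits[index] = 1


    class TrivialHistoryIndependentSet:
        """Baseline: one bit per universe element (N bits total).

        The final memory state depends only on the *set* of inserted elements,
        not on their insertion order, but it is far from the linear-in-K,
        polylog-in-N space of the paper's construction."""

        def __init__(self, universe_size: int):
            self.memory = WriteOnceMemory(universe_size)

        def insert(self, element: int) -> None:
            self.memory.set_bit(element)

        def contains(self, element: int) -> bool:
            return self.memory.bits[element] == 1


    if __name__ == "__main__":
        N = 1_000                            # illustrative universe size
        store = TrivialHistoryIndependentSet(N)
        for x in [42, 7, 999]:               # any insertion order yields the same memory state
            store.insert(x)
        print(store.contains(42), store.contains(43))  # True False
    ```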

    Keep That Card in Mind: Card Guessing with Limited Memory

    A card guessing game is played between two players, Guesser and Dealer. At the beginning of the game, the Dealer holds a deck of $n$ cards (labeled $1, ..., n$). For $n$ turns, the Dealer draws a card from the deck, the Guesser guesses which card was drawn, and then the card is discarded from the deck. The Guesser receives a point for each correctly guessed card. With perfect memory, a Guesser can keep track of all cards that were played so far and pick at random a card that has not appeared so far, yielding in expectation $\ln n$ correct guesses. With no memory, the best a Guesser can do results in a single correct guess in expectation. We consider the case of a memory-bounded Guesser that has $m < n$ memory bits. We show that the performance of such a memory-bounded Guesser depends much on the behavior of the Dealer. In more detail, we show that there is a gap between the static case, where the Dealer draws cards from a properly shuffled deck or a prearranged one, and the adaptive case, where the Dealer draws cards thoughtfully, in an adversarial manner. Specifically: 1. We show a Guesser with $O(\log^2 n)$ memory bits that scores a near optimal result against any static Dealer. 2. We show that no Guesser with $m$ bits of memory can score better than $O(\sqrt{m})$ correct guesses; thus, no Guesser can score better than $\min\{\sqrt{m}, \ln n\}$, i.e., the above Guesser is optimal. 3. We show an efficient adaptive Dealer against which no Guesser with $m$ memory bits can make more than $\ln m + 2 \ln \log n + O(1)$ correct guesses in expectation. These results are (almost) tight, and we prove them using compression arguments that harness the guessing strategy for encoding.
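    A quick simulation of the perfect-memory baseline against a static (shuffled-deck) Dealer illustrates the $\ln n$ figure; the deck size and number of trials below are arbitrary.

    ```python
    import math
    import random

    def play_with_perfect_memory(n: int, rng: random.Random) -> int:
        """One game against a static Dealer (uniformly shuffled deck):
        the Guesser remembers all seen cards and guesses a random unseen one."""
        deck = list(range(1, n + 1))
        rng.shuffle(deck)
        unseen = set(deck)
        score = 0
        for card in deck:
            guess = rng.choice(tuple(unseen))   # uniform over cards not yet seen
            if guess == card:
                score += 1
            unseen.remove(card)
        return score

    if __name__ == "__main__":
        n, trials = 200, 2000
        rng = random.Random(1)
        average = sum(play_with_perfect_memory(n, rng) for _ in range(trials)) / trials
        harmonic = sum(1 / k for k in range(1, n + 1))
        # When k cards remain unseen, the guess is correct with probability 1/k,
        # so the expected score is the harmonic number H_n = ln n + O(1).
        print(f"average score ~ {average:.2f}, H_n = {harmonic:.2f}, ln n = {math.log(n):.2f}")
    ```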

    The Journey from NP to TFNP Hardness

    The class TFNP is the search analog of NP with the additional guarantee that any instance has a solution. TFNP has attracted extensive attention due to its natural syntactic subclasses that capture the computational complexity of important search problems from algorithmic game theory, combinatorial optimization, and computational topology. Thus, one of the main research objectives in the context of TFNP is to search for efficient algorithms for its subclasses, and at the same time to prove hardness results where efficient algorithms cannot exist. Currently, no problem in TFNP is known to be hard under assumptions such as NP hardness, the existence of one-way functions, or even public-key cryptography. The only known hardness results are based on less general assumptions such as the existence of collision-resistant hash functions, one-way permutations, or less established cryptographic primitives (e.g., program obfuscation or functional encryption). Several works explained this status by showing various barriers to proving hardness of TFNP. In particular, it has been shown that TFNP hardness cannot be based on worst-case NP hardness, unless NP = coNP. Therefore, we ask the following question: What is the weakest assumption sufficient for showing hardness in TFNP? In this work, we answer this question and show that hard-on-average TFNP problems can be based on the weak assumption that there exists a hard-on-average language in NP. In particular, this includes the assumption of the existence of one-way functions. In terms of techniques, we show an interesting interplay between problems in TFNP, derandomization techniques, and zero-knowledge proofs.