On Extractors and Exposure-Resilient Functions for Sublogarithmic Entropy
We study deterministic extractors for oblivious bit-fixing sources (a.k.a.
resilient functions) and exposure-resilient functions with small min-entropy:
of the function's n input bits, k << n bits are uniformly random and unknown to
the adversary. We simplify and improve an explicit construction of extractors
for bit-fixing sources with sublogarithmic k due to Kamp and Zuckerman (SICOMP
2006), achieving error exponentially small in k rather than polynomially small
in k. Our main result is that when k is sublogarithmic in n, the short output
length of this construction (O(log k) output bits) is optimal for extractors
computable by a large class of space-bounded streaming algorithms.
Next, we show that a random function is an extractor for oblivious bit-fixing
sources with high probability if and only if k is superlogarithmic in n,
suggesting that our main result may apply more generally. In contrast, we show
that a random function is a static (resp. adaptive) exposure-resilient function
with high probability even if k is as small as a constant (resp. log log n). No
explicit exposure-resilient functions achieving these parameters are known.
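For orientation, here is a minimal illustrative sketch, not taken from the paper, of an oblivious bit-fixing source together with the classical one-bit parity extractor; the results above concern extracting more bits (the construction's O(log k) output) and the limits on doing better when k is sublogarithmic in n. All function names here are illustrative.

    import secrets

    def sample_bit_fixing_source(n: int, free_positions: set[int],
                                 fixed_bits: dict[int, int]) -> list[int]:
        # Oblivious bit-fixing source: the k positions in free_positions are
        # uniformly random; every other position takes the value fixed in
        # advance by the adversary, independently of the random bits.
        return [secrets.randbits(1) if i in free_positions else fixed_bits[i]
                for i in range(n)]

    def parity_extractor(x: list[int]) -> int:
        # XOR of all input bits: flipping a single uniformly random bit makes
        # the parity uniform, so this outputs one perfectly random bit on any
        # oblivious bit-fixing source with k >= 1.
        out = 0
        for b in x:
            out ^= b
        return out

    x = sample_bit_fixing_source(8, free_positions={2, 5},
                                 fixed_bits={0: 1, 1: 0, 3: 1, 4: 1, 6: 0, 7: 1})
    print(parity_extractor(x))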
Redrawing the Boundaries on Purchasing Data from Privacy-Sensitive Individuals
We prove new positive and negative results concerning the existence of
truthful and individually rational mechanisms for purchasing private data from
individuals with unbounded and sensitive privacy preferences. We strengthen the
impossibility results of Ghosh and Roth (EC 2011) by extending them to a much
wider class of privacy valuations. In particular, these include privacy
valuations that are based on (ε, δ)-differentially private
mechanisms for non-zero δ, ones where the privacy costs are measured in
a per-database manner (rather than taking the worst case), and ones that do not
depend on the payments made to players (which might not be observable to an
adversary). To bypass this impossibility result, we study a natural special
setting where individuals have monotonic privacy valuations, which captures
common contexts where certain values for private data are expected to lead to
higher valuations for privacy (e.g. having a particular disease). We give new
mechanisms that are individually rational for all players with monotonic
privacy valuations, truthful for all players whose privacy valuations are not
too large, and accurate if there are not too many players with too-large
privacy valuations. We also prove matching lower bounds showing that in some
respects our mechanism cannot be improved significantly.
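To make the truthfulness and individual-rationality requirements concrete, here is a toy posted-price sketch; it is not the paper's mechanism, the names are illustrative, and it deliberately ignores the subtleties the paper addresses (it omits the noise that would make the use of the accepted bits differentially private, and it assumes that declining an offer is itself costless, which fails when participation decisions correlate with the data).

    from dataclasses import dataclass

    @dataclass
    class Player:
        bit: int             # the private data bit the analyst wants to use
        privacy_cost: float  # the player's cost for (private) use of that bit

    def posted_price_survey(players: list[Player], price: float):
        # Take-it-or-leave-it offer: every player is offered the same price.
        # Accepting exactly when price >= privacy_cost is a dominant strategy,
        # so the scheme is truthful, and accepting players get non-negative
        # utility, so it is individually rational for them. Players with
        # too-large costs opt out; if opting out correlates with the data
        # (the monotonic-valuation setting above), the estimate is biased.
        accepted = [p.bit for p in players if price >= p.privacy_cost]
        estimate = sum(accepted) / max(len(accepted), 1)
        total_payment = price * len(accepted)
        return estimate, total_payment

    players = [Player(bit=1, privacy_cost=0.5), Player(bit=0, privacy_cost=2.0),
               Player(bit=1, privacy_cost=0.1)]
    print(posted_price_survey(players, price=1.0))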
The Unified Theory of Pseudorandomness
Pseudorandomness is the theory of efficiently generating objects that look "random" despite being constructed with little or no randomness. One of the achievements of this research area has been the realization that a number of fundamental and widely studied "pseudorandom" objects are all almost equivalent when viewed appropriately. These objects include pseudorandom generators, expander graphs, list-decodable error-correcting codes, averaging samplers, and hardness amplifiers. In this survey, we describe the connections between all of these objects, showing how they can all be cast within a single "list-decoding framework" that brings out both their similarities and differences.
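As a rough sketch of that framework, reconstructed here from the survey's setup (so the exact parameters and one-sidedness conventions may differ in detail): each object is viewed as a single function together with the "list" of inputs on which a given test succeeds too often.

    % Sketch (LaTeX): the list-decoding framework.
    \[
      \Gamma : [N] \times [D] \to [M],
      \qquad
      \mathrm{LIST}_\Gamma(T,\varepsilon)
        = \bigl\{\, x \in [N] \;:\; \Pr_{y \leftarrow [D]}\bigl[\Gamma(x,y) \in T\bigr] > \varepsilon \,\bigr\}.
    \]
    % Each object corresponds to a bound on |LIST_\Gamma| for a class of tests
    % T \subseteq [M]; for example, roughly, \Gamma is a (\delta, \varepsilon)
    % averaging sampler iff |LIST_\Gamma(T, \mu(T) + \varepsilon)| \le \delta N
    % for every T, where \mu(T) = |T|/M.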
On Transformations of Interactive Proofs that Preserve the Prover's Complexity
Goldwasser and Sipser [GS89] proved that every interactive proof system can be transformed into a public-coin one (a.k.a., an Arthur-Merlin game). Their transformation has the drawback that the computational complexity of the prover's strategy is not preserved. We show that this is inherent, by proving that the same must be true of any transformation which only uses the original prover and verifier strategies as "black boxes". Our negative result holds even if the original proof system is restricted to be honest-verifier perfect zero knowledge and the transformation can also use the simulator as a black box.
We also examine a similar deficiency in a transformation of Fürer et al. [FGM+89] from interactive proofs to ones with perfect completeness. We argue that the increase in prover complexity incurred by their transformation is necessary, given that their construction is a black-box transformation which works regardless of the verifier's computational complexity.
Differential Privacy on Finite Computers
We consider the problem of designing and analyzing differentially private algorithms that can be implemented on discrete models of computation in strict polynomial time, motivated by known attacks on floating point implementations of real-arithmetic differentially private algorithms (Mironov, CCS 2012) and the potential for timing attacks on expected polynomial-time algorithms. We use a case study: the basic problem of approximating the histogram of a categorical dataset over a possibly large data universe X. The classic Laplace Mechanism (Dwork, McSherry, Nissim, Smith, TCC 2006 and J. Privacy & Confidentiality 2017) does not satisfy our requirements, as it is based on real arithmetic, and natural discrete analogues, such as the Geometric Mechanism (Ghosh, Roughgarden, Sundararajan, STOC 2009 and SICOMP 2012), take time at least linear in |X|, which can be exponential in the bit length of the input.
In this paper, we provide strict polynomial-time discrete algorithms for approximate histograms whose simultaneous accuracy (the maximum error over all bins) matches that of the Laplace Mechanism up to constant factors, while retaining the same (pure) differential privacy guarantee. One of our algorithms produces a sparse histogram as output. Its "per-bin accuracy" (the error on individual bins) is worse than that of the Laplace Mechanism by a factor of log |X|, but we prove a lower bound showing that this is necessary for any algorithm that produces a sparse histogram. A second algorithm avoids this lower bound, and matches the per-bin accuracy of the Laplace Mechanism, by producing a compact and efficiently computable representation of a dense histogram; it is based on an (n+1)-wise independent implementation of an appropriately clamped version of the Discrete Geometric Mechanism.
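For reference, the following is a minimal sketch of geometric-noise histogram release in the spirit of the Geometric Mechanism, with the clamping mentioned above; it is not the paper's construction: it produces a dense histogram in time linear in |X|, and it uses floating-point sampling, which is exactly the kind of real-arithmetic shortcut the paper is designed to avoid. The function names and parameter choices are illustrative.

    import numpy as np

    def two_sided_geometric(eps: float, size: int, rng: np.random.Generator) -> np.ndarray:
        # Pr[Z = z] is proportional to exp(-eps * |z|): the difference of two
        # i.i.d. geometric variables (failures before the first success, with
        # success probability 1 - exp(-eps)) has exactly this distribution.
        alpha = np.exp(-eps)
        g1 = rng.geometric(1.0 - alpha, size=size) - 1
        g2 = rng.geometric(1.0 - alpha, size=size) - 1
        return g1 - g2

    def noisy_histogram(counts: np.ndarray, eps: float, n: int,
                        rng: np.random.Generator) -> np.ndarray:
        # Add independent geometric noise to every bin, then clamp to the
        # feasible range [0, n]; clamping is post-processing, so it does not
        # weaken the privacy guarantee. Under add/remove-one-record
        # neighboring, one record changes a single bin by 1, so the released
        # histogram is eps-differentially private.
        noise = two_sided_geometric(eps, counts.shape[0], rng)
        return np.clip(counts + noise, 0, n)

    # Example: a dense histogram over a universe of size |X| = 8.
    rng = np.random.default_rng(0)
    counts = np.array([5, 0, 0, 2, 0, 0, 0, 3])
    print(noisy_histogram(counts, eps=1.0, n=int(counts.sum()), rng=rng))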