Information complexity of the AND function in the two-party and multiparty settings
In a recent breakthrough paper [M. Braverman, A. Garg, D. Pankratov, and O.
Weinstein, From information to exact communication, STOC'13] Braverman et al.
developed a local characterization of the zero-error information complexity in
the two-party model, and used it to compute the exact internal and external
information complexity of the 2-bit AND function, which was then applied to
determine the exact asymptotics of the randomized communication complexity of
the set disjointness problem.
In this article, we extend their results on the AND function to the multi-party
number-in-hand model by proving that the generalization of their protocol has
optimal internal and external information cost for certain distributions. Our
proof has new components, and in particular it fixes some minor gaps in the
proof of Braverman et al.
Constructive Relationships Between Algebraic Thickness and Normality
We study the relationship between two measures of Boolean functions:
\emph{algebraic thickness} and \emph{normality}. For a function , the
algebraic thickness is a variant of the \emph{sparsity}, the number of nonzero
coefficients in the unique GF(2) polynomial representing , and the normality
is the largest dimension of an affine subspace on which is constant. We
show that for , any function with algebraic thickness
is constant on some affine subspace of dimension
. Furthermore, we give an algorithm
for finding such a subspace. We show that this is at most a factor of
from the best guaranteed, and, when restricted to the
technique used, is at most a factor of from the best
guaranteed. We also show that a concrete function, majority, has algebraic
thickness .
Comment: Final version published in FCT'201
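To make the sparsity measure concrete, here is a minimal sketch (function names are mine, not the paper's) that computes the coefficients of the unique GF(2) polynomial of a Boolean function, i.e., its algebraic normal form, via the Mobius transform, and counts the nonzero ones. Note that algebraic thickness is a variant of sparsity, not identical to it; this only illustrates the underlying quantity.

```python
from itertools import product

def anf_coefficients(f, n):
    """Coefficients of the unique GF(2) polynomial (ANF) of the n-variable
    Boolean function f, computed by the in-place Mobius transform:
    coefficient of a monomial S is the XOR of f over all inputs below S."""
    coeffs = [f(*x) & 1 for x in product((0, 1), repeat=n)]
    for i in range(n):
        bit = 1 << i
        for j in range(1 << n):
            if j & bit:
                coeffs[j] ^= coeffs[j ^ bit]
    return coeffs

def sparsity(f, n):
    """Number of nonzero ANF coefficients of f."""
    return sum(anf_coefficients(f, n))

# 3-bit majority has ANF xy + xz + yz, hence sparsity 3.
maj3 = lambda x, y, z: (x + y + z) >= 2
print(sparsity(maj3, 3))  # -> 3
```

The majority example mirrors the abstract's interest in the algebraic thickness of majority, though the exact thickness bound stated there is a separate result.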
Revealed Preference Dimension via Matrix Sign Rank
Given a data-set of consumer behaviour, the Revealed Preference Graph
succinctly encodes inferred relative preferences between observed outcomes as a
directed graph. Not all graphs can be constructed as revealed preference graphs
when the market dimension is fixed. This paper solves the open problem of
determining exactly which graphs are attainable as revealed preference graphs
in -dimensional markets. This is achieved via an exact characterization
which closely ties the feasibility of the graph to the Matrix Sign Rank of its
signed adjacency matrix. The paper also shows that when the preference
relations form a partially ordered set with order-dimension , the graph is
attainable as a revealed preference graph in a -dimensional market.
Comment: Submitted to WINE '1
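The construction of the revealed preference graph itself is elementary, and a short sketch may help fix ideas (the function name and the example data are mine): observation i is revealed preferred to observation j when bundle j was affordable at the prices under which bundle i was chosen.

```python
def revealed_preference_graph(prices, bundles):
    """Directly-revealed-preference graph from consumer data.

    Observation i: at price vector prices[i] the consumer chose bundles[i].
    Add edge i -> j when bundle j was affordable at observation i
    (p_i . x_j <= p_i . x_i), i.e. outcome i is revealed preferred to j.
    """
    dot = lambda p, x: sum(pi * xi for pi, xi in zip(p, x))
    n = len(prices)
    return [(i, j) for i in range(n) for j in range(n)
            if i != j and dot(prices[i], bundles[j]) <= dot(prices[i], bundles[i])]

# Two observations in a 2-dimensional market (two goods).
prices  = [[1.0, 1.0], [1.0, 2.0]]
bundles = [[2.0, 2.0], [3.0, 0.0]]
print(revealed_preference_graph(prices, bundles))  # -> [(0, 1)]
```

The paper's question is the converse direction: which directed graphs can arise this way when the number of goods (the market dimension) is fixed.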
On Tackling the Limits of Resolution in SAT Solving
The practical success of Boolean Satisfiability (SAT) solvers stems from the
CDCL (Conflict-Driven Clause Learning) approach to SAT solving. However, from a
propositional proof complexity perspective, CDCL is no more powerful than the
resolution proof system, for which many hard examples exist. This paper
proposes a new problem transformation, which enables reducing the decision
problem for formulas in conjunctive normal form (CNF) to the problem of solving
maximum satisfiability over Horn formulas. Given the new transformation, the
paper proves a polynomial bound on the number of MaxSAT resolution steps for
pigeonhole formulas. This result is in clear contrast with earlier results on
the length of proofs of MaxSAT resolution for pigeonhole formulas. The paper
also establishes the same polynomial bound in the case of modern core-guided
MaxSAT solvers. Experimental results, obtained on CNF formulas known to be hard
for CDCL SAT solvers, show that these can be efficiently solved with modern
MaxSAT solvers.
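For reference, the pigeonhole formulas mentioned above are a standard CNF family; a small generator (the function name and DIMACS-style variable numbering are mine) makes the encoding explicit:

```python
def pigeonhole_cnf(n):
    """CNF encoding of the pigeonhole principle PHP with n+1 pigeons and n
    holes: every pigeon sits in some hole, and no hole holds two pigeons.
    Variables are numbered 1..(n+1)*n as in the DIMACS convention; the
    formula is unsatisfiable and a classic hard case for resolution."""
    var = lambda p, h: p * n + h + 1          # pigeon p, hole h (0-based)
    clauses = []
    for p in range(n + 1):                    # pigeon p is in some hole
        clauses.append([var(p, h) for h in range(n)])
    for h in range(n):                        # no hole holds two pigeons
        for p in range(n + 1):
            for q in range(p + 1, n + 1):
                clauses.append([-var(p, h), -var(q, h)])
    return clauses

# 4 pigeon clauses plus 3 * C(4,2) = 18 hole clauses.
print(len(pigeonhole_cnf(3)))  # -> 22
```

The paper's contribution concerns the MaxSAT-resolution proof length for this family, not the encoding itself.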
On the Streaming Indistinguishability of a Random Permutation and a Random Function
An adversary with bits of
memory obtains a stream of elements that are uniformly drawn from the set , either with or without replacement. This corresponds to sampling elements using either a random function or a random permutation. The adversary's goal is to distinguish between these two cases.
This problem was first considered by Jaeger and Tessaro (EUROCRYPT 2019), who proved that the adversary's advantage is upper bounded by . Jaeger and Tessaro used this bound as a streaming switching lemma, which allowed proving that known time-memory tradeoff attacks on several modes of operation (such as counter mode) are optimal up to a factor of if . However, the bound's proof assumed an unproven combinatorial conjecture. Moreover,
if there is a gap between the upper bound of and the advantage obtained by known attacks.
In this paper, we prove a tight upper bound (up to poly-logarithmic factors) of on the adversary's advantage in the streaming distinguishing problem. The proof does not require a conjecture and is based on a hybrid argument that gives rise to a reduction from the unique-disjointness communication complexity problem to streaming.
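The distinguishing game is easy to simulate; the following toy sketch (all names and parameters are mine, and the naive collision distinguisher shown is only one simple strategy, not the paper's optimal attack) illustrates how a memory-bounded adversary gains advantage by remembering a few stream elements and watching for repeats:

```python
import random

def collision_distinguisher(stream, memory_size):
    """Toy memory-bounded distinguisher: remember the first `memory_size`
    distinct elements; answer 'function' as soon as a remembered element
    repeats (a random permutation never repeats an element)."""
    remembered = set()
    for x in stream:
        if x in remembered:
            return 'function'
        if len(remembered) < memory_size:
            remembered.add(x)
    return 'permutation'

def sample_stream(n, q, with_replacement):
    """q elements of {0,...,n-1}: a random function (with replacement)
    or a prefix of a random permutation (without replacement)."""
    if with_replacement:
        return [random.randrange(n) for _ in range(q)]
    return random.sample(range(n), q)

random.seed(1)
n, q, s, trials = 1024, 512, 64, 200
func_hits = sum(collision_distinguisher(sample_stream(n, q, True), s) == 'function'
                for _ in range(trials))
perm_hits = sum(collision_distinguisher(sample_stream(n, q, False), s) == 'function'
                for _ in range(trials))
# Empirical advantage: gap between the acceptance rates in the two worlds.
print((func_hits - perm_hits) / trials)
```

The advantage of this strategy grows with both the memory budget and the stream length, which is the qualitative shape of the tradeoff the paper pins down exactly.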
Memory Lower Bounds of Reductions Revisited
In Crypto 2017, Auerbach et al. initiated the study of memory-tight reductions and proved two negative results on the memory-tightness of restricted black-box reductions from multi-challenge security to single-challenge security for signatures and an artificial hash function. In this paper, we revisit the results of Auerbach et al. and show that for a large class of reductions treating multi-challenge security, it is impossible to avoid a loss of memory-tightness unless we sacrifice running-time efficiency. Specifically, we show three lower bound results. First, we show a memory lower bound for natural black-box reductions from the multi-challenge unforgeability of unique signatures to any computational assumption. Second, we show a lower bound for restricted reductions from multi-challenge security to single-challenge security for a wide class of cryptographic primitives with unique keys in the multi-user setting. Finally, we extend the lower bound result of Auerbach et al. for a specific hash function to one covering any hash function with a large domain.
Lower Bounds on the Time/Memory Tradeoff of Function Inversion
We study time/memory tradeoffs of function inversion: an algorithm, i.e., an inverter, equipped with an -bit advice on a randomly chosen function and using oracle queries to , tries to invert a randomly chosen output of , i.e., to find . Much progress has been made on adaptive function inversion, where the inverter is allowed to make adaptive oracle queries. Hellman [IEEE Transactions on Information Theory '80] presented an adaptive inverter that inverts with high probability a random . Fiat and Naor [SICOMP '00] proved that for any with (ignoring low-order terms), an -advice, -query variant of Hellman's algorithm inverts a constant fraction of the image points of any function. Yao [STOC '90] proved a lower bound of for this problem. Closing the gap between the above lower and upper bounds is a long-standing open question.
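To make the advice/query tradeoff concrete, here is a toy sketch of a Hellman-style chain table (all names, the toy function, and the parameter choices are mine; this is a single-table illustration, not the full multi-table construction or the paper's subject):

```python
import random

def build_hellman_table(f, n, m, t):
    """Precompute m chains of t iterates of f over {0,...,n-1}, storing only
    endpoint -> startpoint. The table plays the role of the advice string."""
    table = {}
    for _ in range(m):
        start = x = random.randrange(n)
        for _ in range(t):
            x = f(x)
        table[x] = start
    return table

def hellman_invert(f, n, t, table, y):
    """Try to find a preimage of y by walking forward from y until a stored
    endpoint is hit, then replaying that chain from its start point."""
    x = y
    for _ in range(t + 1):
        if x in table:                     # y may lie on this chain: replay it
            z = table[x]
            for _ in range(t):
                if f(z) == y:
                    return z
                z = f(z)
        x = f(x)                           # keep walking (or it was a false alarm)
    return None                            # y is not covered by the table

random.seed(0)
n = 1 << 10
f = lambda x: (x * x + 1) % n              # toy 'random-looking' function
table = build_hellman_table(f, n, m=64, t=32)
y = f(123)
x = hellman_invert(f, n, 32, table, y)
print(x is None or f(x) == y)              # -> True: any returned x is a preimage
```

The table stores only m entries while inversion costs about t queries per lookup, which is the memory/time tradeoff the lower bounds below constrain.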
Very little is known for the non-adaptive variant of the question, where the inverter chooses its queries in advance. The only known upper bounds, i.e., inverters, are the trivial ones (with ), and the only lower bound is the above bound of Yao. In a recent work, Corrigan-Gibbs and Kogan [TCC '19] partially justified the difficulty of finding lower bounds on non-adaptive inverters, showing that a lower bound on the time/memory tradeoff of non-adaptive inverters implies a lower bound on low-depth Boolean circuits, bounds that, for a strong enough choice of parameters, are notoriously hard to prove.
We make progress on the above intriguing question, both for the adaptive and the non-adaptive case, proving the following lower bounds on restricted families of inverters:
- Linear-advice (adaptive inverter): If the advice string is a linear function of (e.g., , for some matrix , viewing as a vector in ), then . The bound generalizes to the case where the advice string of , i.e., the coordinate-wise addition of the truth tables of and , can be computed from the description of and by a low communication protocol.
- Affine non-adaptive decoders: If the non-adaptive inverter has an affine decoder - it outputs a linear function, determined by the advice string and the element to invert, of the query answers - then (regardless of ).
- Affine non-adaptive decision trees: If the non-adaptive inversion algorithm is a -depth affine decision tree - it outputs the evaluation of a decision tree whose nodes compute a linear function of the answers to the queries - and , then
The Communication Complexity of Threshold Private Set Intersection
Threshold private set intersection enables Alice and Bob who hold sets and of size to compute the intersection if the sets do not differ by more than some threshold parameter .
In this work, we investigate the communication complexity of this problem and we establish the first upper and lower bounds.
We show that any protocol must have a communication complexity of .
We show that an almost matching upper bound of can be obtained via fully homomorphic encryption.
We present a computationally more efficient protocol based on weaker assumptions, namely additively homomorphic encryption, with a communication complexity of .
We show how our protocols can be extended to the multiparty setting.
For applications like biometric authentication, where a given fingerprint has to have a large intersection with a fingerprint from a database, our protocols may result in significant communication savings.
All previous protocols had a communication complexity of .
Our protocols are the first whose communication complexity depends mainly on the threshold parameter and only logarithmically on the set size.
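The functionality being computed can be stated as a short sketch (this shows only the ideal input/output behaviour, not any of the cryptographic protocols; formalizing "do not differ by more than t" as a bound of 2t on the symmetric difference is my assumption for this illustration):

```python
def threshold_psi(A, B, t):
    """Ideal functionality of threshold private set intersection: reveal the
    intersection only when the symmetric difference of the two sets has size
    at most 2*t, i.e. each party's set exceeds the intersection by at most t."""
    A, B = set(A), set(B)
    if len(A ^ B) > 2 * t:
        return None                # sets differ too much: reveal nothing
    return A & B

alice = {1, 2, 3, 4, 5}
bob   = {2, 3, 4, 5, 6}
print(threshold_psi(alice, bob, t=1))   # symmetric difference {1, 6} -> {2, 3, 4, 5}
print(threshold_psi(alice, bob, t=0))   # -> None
```

In the fingerprint-matching application above, the threshold check is exactly what lets a protocol's communication scale with t rather than with the set size n.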
Fine-Grained Cryptography Revisited
Fine-grained cryptographic primitives are secure against adversaries with bounded resources and can be computed by honest users with less resources than the adversaries.
In this paper, we revisit the results of Degwekar, Vaikuntanathan, and Vasudevan in Crypto 2016 on fine-grained cryptography and show constructions of three key fundamental fine-grained cryptographic primitives: one-way permutations, hash proof systems (which in turn imply a public-key encryption scheme secure against chosen-ciphertext attacks), and trapdoor one-way functions.
All of our constructions are computable in and secure against (non-uniform) circuits under the widely believed worst-case assumption.
From Obfuscation to the Security of Fiat-Shamir for Proofs
The Fiat-Shamir paradigm [CRYPTO'86] is a heuristic for converting
three-round identification schemes into signature schemes, and more
generally, for collapsing rounds in constant-round public-coin
interactive protocols. This heuristic is very popular both in theory
and in practice, and its security has been the focus of extensive
study.
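As a minimal illustration of the paradigm (a Schnorr-style identification scheme with SHA-256 standing in for the hash; the group parameters are toy-sized and all names are mine, chosen for this sketch rather than taken from the paper):

```python
import hashlib

# Toy Schnorr group: q divides p - 1 and g has order q mod p.
# These sizes are for illustration only and are cryptographically insecure.
p, q, g = 1019, 509, 4

def H(*parts):
    """Fiat-Shamir challenge: hash the transcript prefix and the message."""
    data = b'|'.join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), 'big') % q

def sign(sk, msg, r):
    """Three-round identification collapsed into a signature: the verifier's
    random challenge is replaced by a hash of (commitment, message)."""
    a = pow(g, r, p)          # prover's first (commitment) message
    e = H(a, msg)             # challenge derived from the hash, not a verifier
    z = (r + e * sk) % q      # prover's response
    return a, z

def verify(pk, msg, sig):
    a, z = sig
    e = H(a, msg)
    return pow(g, z, p) == (a * pow(pk, e, p)) % p

sk = 123
pk = pow(g, sk, p)
sig = sign(sk, 'hello', r=77)
print(verify(pk, 'hello', sig))    # -> True
print(verify(pk, 'goodbye', sig))  # forgery attempt; fails except with tiny probability
```

The security question studied in the paper is precisely whether such a hash can soundly replace the verifier's randomness outside the Random Oracle Model.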
In particular, this paradigm was shown to be secure in the so-called
Random Oracle Model. However, in the plain model, mainly negative
results were shown. In particular, this heuristic was shown to be
insecure when applied to computationally sound proofs (also known as
arguments). Moreover, recently it was shown that even in the
restricted setting where the heuristic is applied to interactive
proofs (as opposed to arguments), its soundness cannot be proven via a
black-box reduction to any so-called falsifiable assumption.
In this work, we give a positive result for the security of this
paradigm in the plain model. Specifically, we construct a hash
function for which the Fiat-Shamir paradigm is secure when applied to
proofs (as opposed to arguments), assuming the existence of a
sub-exponentially secure indistinguishability obfuscator, the
existence of an exponentially secure input-hiding obfuscator for the
class of multi-bit point functions, and the existence of a
sub-exponentially secure one-way function.
While the hash function we construct is far from practical, we believe
that this is a first step towards instantiations that are both more
efficient and provably secure. In addition, we show that this result
resolves a long-standing open problem in the study of zero-knowledge
proofs: it implies that there does not exist a public-coin
constant-round zero-knowledge proof with negligible soundness error (under
the assumptions stated above).