Lower Bounds for Oblivious Data Structures
An oblivious data structure is a data structure where the memory access pattern reveals no information about the operations performed on it. Such data structures were introduced by Wang et al. [ACM SIGSAC'14] and are intended for situations where one wishes to store the data structure at an untrusted server. One way to obtain an oblivious data structure is simply to run a classic data structure on an oblivious RAM (ORAM). Until very recently, this resulted in an overhead of omega(log n) for the most natural setting of parameters. Moreover, a recent lower bound for ORAMs by Larsen and Nielsen [CRYPTO'18] shows that they always incur an overhead of at least Omega(log n) if used in a black-box manner. To circumvent the omega(log n) overhead, researchers have instead studied classic data structure problems more directly and have obtained efficient solutions for many such problems, such as stacks, queues, deques, priority queues and search trees. However, none of these data structures process operations faster than Theta(log n), leaving open the question of whether even faster solutions exist. In this paper, we rule out this possibility by proving Omega(log n) lower bounds for oblivious stacks, queues, deques, priority queues and search trees.

Comment: To appear at SODA'19
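To make the leakage concrete, here is a toy illustration (not from the paper) of why a classic data structure run at an untrusted server reveals information: a plain binary search probes different memory cells for different queries, so an observer who sees only the probe indices learns about the key being searched for.

```python
# Toy illustration (not from the paper): a non-oblivious lookup leaks
# information through its memory access pattern. The sequence of array
# indices probed by binary search depends on the queried key.

def binary_search_probes(arr, key):
    """Return the sequence of array indices probed while searching."""
    probes = []
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        probes.append(mid)
        if arr[mid] == key:
            break
        if arr[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return probes

arr = list(range(16))
# Different keys induce different probe sequences: the pattern leaks.
assert binary_search_probes(arr, 3) == [7, 3]
assert binary_search_probes(arr, 12) == [7, 11, 13, 12]
```

An oblivious data structure must make such probe sequences independent of the operations performed, and the paper's lower bounds say this necessarily costs an Omega(log n) factor.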
AuroraLight: Improved prover efficiency and SRS size in a Sonic-like system
Using ideas from the recent Aurora zk-STARK of Ben-Sasson et al. [BCRSVW, Eurocrypt 2019], we present a zk-SNARK with a universal and updatable SRS similar to the recent construction of Maller et al. [MBKM, 2019], called Sonic.
Compared to Sonic, our construction achieves significantly better prover run time (less than half) and smaller SRS size (one sixth). However, we only achieve amortized succinct verification time for batches of proofs, either when the proofs are generated in parallel or in [MBKM]'s helper setting, and our proofs are longer than those of [MBKM] (but still contain a constant number of field and group elements).
Non-Malleable Codes for Small-Depth Circuits
We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by small-depth circuits. For constant-depth circuits of polynomial size (i.e. AC^0 tampering functions), our codes have codeword length k * 2^(O(sqrt(log k))) for a k-bit message. This is an exponential improvement of the previous best construction due to Chattopadhyay and Li (STOC 2017), which had codeword length 2^(O(sqrt(k))). Our construction remains efficient for circuit depths as large as Theta(log k / log log k) (indeed, our codeword length remains k^(1+o(1))), and extending our result beyond this would require separating P from NC^1.
We obtain our codes via a new efficient non-malleable reduction from
small-depth tampering to split-state tampering. A novel aspect of our work is
the incorporation of techniques from unconditional derandomization into the
framework of non-malleable reductions. In particular, a key ingredient in our
analysis is a recent pseudorandom switching lemma of Trevisan and Xue (CCC
2013), a derandomization of the influential switching lemma from circuit
complexity; the randomness-efficiency of this switching lemma translates into
the rate-efficiency of our codes via our non-malleable reduction.

Comment: 26 pages, 4 figures
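The split-state model that the reduction targets can be illustrated with a toy example (not the paper's construction): a codeword is split into two shares that the adversary tampers independently. Naive XOR secret sharing fits this syntax but is visibly malleable, which shows why achieving non-malleability is nontrivial.

```python
# Toy sketch (not the paper's construction): the split-state tampering
# model, illustrated with naive XOR secret sharing -- which is visibly
# *malleable*, exactly what a non-malleable code must rule out.
import secrets

def encode(m: int, bits: int = 8):
    left = secrets.randbits(bits)   # uniformly random left share
    right = left ^ m                # right share fixes the message
    return left, right

def decode(left: int, right: int) -> int:
    return left ^ right

m = 0b10110001
left, right = encode(m)
assert decode(left, right) == m

# Split-state tampering: each tampering function sees only one share.
tampered = decode(left, right ^ 0b00000001)  # flip a bit of one share
# The outcome is a message *related* to m (its low bit flipped); a
# non-malleable code forces the outcome to be m itself or unrelated.
assert tampered == m ^ 0b00000001
```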
Zero-Knowledge Proofs of Proximity
Interactive proofs of proximity (IPPs) are interactive proofs in which the verifier runs in time sub-linear in the input length. Since the verifier cannot even read the entire input, following the property testing literature, we only require that the verifier reject inputs that are far from the language (and, as usual, accept inputs that are in the language).
In this work, we initiate the study of zero-knowledge proofs of proximity (ZKPP). A ZKPP convinces a sub-linear time verifier that the input is close to the language (similarly to an IPP) while simultaneously guaranteeing a natural zero-knowledge property. Specifically, the verifier learns nothing beyond (1) the fact that the input is in the language, and (2) what it could additionally infer by reading a few bits of the input.
Our main focus is the setting of statistical zero-knowledge where we show that the following hold unconditionally (where N denotes the input length):
- Statistical ZKPPs can be sub-exponentially more efficient than property testers (or even non-interactive IPPs): We show a natural property which has a statistical ZKPP with a polylog(N) time verifier, but requires Omega(sqrt(N)) queries (and hence also runtime) for every property tester.
- Statistical ZKPPs can be sub-exponentially less efficient than IPPs: We show a property which has an IPP with a polylog(N) time verifier, but cannot have a statistical ZKPP with even an N^(o(1)) time verifier.
- There exist statistical ZKPPs with polylog(N) time verifiers for some graph-based properties, such as promise versions of expansion and bipartiteness, in the bounded-degree graph model.
Lastly, we also consider the computational setting where we show that:
- Assuming the existence of one-way functions, every language computable either in (logspace-uniform) NC or in SC has a computational ZKPP with a (roughly) sqrt(N) time verifier.
- Assuming the existence of collision-resistant hash functions, every language in NP has a statistical zero-knowledge argument of proximity with a polylog(N) time verifier.
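The notion of a sub-linear verifier can be made concrete with a standard property-testing example (an illustrative property, not one from the paper): a tester for the all-zeros property that reads only a few input bits and rejects inputs that are far from the property with high probability.

```python
# Toy sketch (an illustrative property, not one from the paper): a
# sub-linear property tester for the all-zeros property, to make
# concrete the idea of a verifier that reads only a few input bits.
import random

def allzero_tester(x, trials=100):
    """Accept if no sampled position is 1. Reads only `trials`
    positions instead of all len(x) bits; inputs that are far from
    all-zeros are rejected with high probability."""
    for _ in range(trials):
        if x[random.randrange(len(x))] == 1:
            return False   # found a nonzero position: reject
    return True            # all samples consistent with all-zeros

random.seed(0)
n = 10_000
assert allzero_tester([0] * n)        # input in the property: accept
assert not allzero_tester([1] * n)    # maximally far input: reject
```

IPPs and ZKPPs enrich this picture by letting such a sub-linear verifier additionally interact with a prover, with ZKPPs further requiring that the interaction reveal essentially nothing beyond a few input bits.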
The Journey from NP to TFNP Hardness
The class TFNP is the search analog of NP with the additional guarantee that any instance has a solution. TFNP has attracted extensive attention due to its natural syntactic subclasses, which capture the computational complexity of important search problems from algorithmic game theory, combinatorial optimization and computational topology. Thus, one of the main research objectives in the context of TFNP is to search for efficient algorithms for its subclasses, while at the same time proving hardness results showing where efficient algorithms cannot exist.
Currently, no problem in TFNP is known to be hard under assumptions such as NP hardness, the existence of one-way functions, or even public-key cryptography. The only known hardness results are based on less general assumptions, such as the existence of collision-resistant hash functions, one-way permutations, or less established cryptographic primitives (e.g. program obfuscation or functional encryption).
Several works have explained this status by showing various barriers to proving hardness of TFNP. In particular, it has been shown that hardness of TFNP cannot be based on worst-case NP hardness, unless NP = coNP. Therefore, we ask the following question: What is the weakest assumption sufficient for showing hardness in TFNP?
In this work, we answer this question and show that hard-on-average TFNP problems can be based on the weak assumption that there exists a hard-on-average language in NP. In particular, this includes the assumption of the existence of one-way functions. In terms of techniques, we show an interesting interplay between problems in TFNP, derandomization techniques, and zero-knowledge proofs.
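The totality guarantee that defines TFNP can be illustrated with the pigeonhole principle (a standard example, not the paper's construction): a function from a domain of size n+1 into a range of size n must have a collision, so the search problem "find a collision" always has a solution; the question is whether one can be found efficiently when the function is described succinctly.

```python
# Toy sketch (standard example, not from the paper): totality in TFNP
# via the pigeonhole principle. When the domain is larger than the
# range, a collision is guaranteed to exist, so brute-force search
# always succeeds -- the difficulty is doing so efficiently.
from itertools import combinations

def find_collision(f, domain):
    """Return (x, y) with x != y and f(x) == f(y); guaranteed to exist
    whenever len(domain) exceeds the size of f's range."""
    for x, y in combinations(domain, 2):
        if f(x) == f(y):
            return x, y
    return None  # unreachable when the pigeonhole condition holds

f = lambda v: (7 * v + 3) % 10        # range of size 10
x, y = find_collision(f, range(11))   # domain of size 11
assert x != y and f(x) == f(y)
```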
On the power of Public-key Function-Private Functional Encryption
In the public-key setting, known constructions of function-private functional encryption (FPFE) were limited to very restricted classes of functionalities like inner-product [Agrawal et al. - PKC 2015]. Moreover, its power has not been well investigated. In this paper, we construct FPFE for general functions and explore its powerful applications, both for general and specific functionalities.
As a warmup, we construct from FPFE a natural generalization of a signature scheme endowed with functional properties, which we call a functional anonymous signature (FAS) scheme. In a FAS, Alice can sign a circuit C chosen from some distribution D to get a signature s, and can publish a verification key that allows anybody holding a message m to verify that (1) s is a valid signature of Alice for some (possibly unknown to him) circuit C and (2) C(m)=1. Beyond unforgeability, the security of FAS guarantees that the signature s hides as much information as possible about C, except what can be inferred from knowledge of D.
Then, we show that FPFE can be used to construct in a black-box way functional encryption schemes for randomized functionalities (RFE).
Previous constructions of (public-key) RFE relied on iO [Goyal et al. - TCC 2015].
As further application, we show that specific instantiations of FPFE can be used to achieve adaptively-secure CNF/DNF encryption for bounded degree formulae (BoolEnc). Though it was known how to implement BoolEnc from inner-product encryption (IPE) [Katz et al. - EUROCRYPT 2008], as already observed by Katz et al. this reduction only works for selective security and completely breaks down for adaptive security; however, we show that the reduction works if the IPE scheme is function-private.
Finally, we present a general picture of the relations among all these related primitives. One key observation is that Attribute-based Encryption with function privacy implies FE, a notable fact that sheds light on the importance of the function privacy property for FE.
Perfectly Oblivious (Parallel) RAM Revisited, and Improved Constructions
Oblivious RAM (ORAM) is a technique for compiling any RAM program to an oblivious counterpart, i.e., one whose access patterns do not leak information about the secret inputs. Similarly, Oblivious Parallel RAM (OPRAM) compiles a parallel RAM program to an oblivious counterpart. In this paper, we care about ORAM/OPRAM with perfect security, i.e., the access patterns must be identically distributed no matter what the program's memory request sequence is. In the past, two types of perfect ORAMs/OPRAMs have been considered: constructions whose performance bounds hold in expectation (but may occasionally run more slowly); and constructions whose performance bounds hold deterministically (even though the algorithms themselves are randomized). In this paper, we revisit the performance metrics for perfect ORAM/OPRAM, and show novel constructions that achieve asymptotic improvements for all performance metrics.

Our first result is a new perfectly secure OPRAM scheme with O(log^3 N / log log N) expected overhead. In comparison, prior literature has been stuck at O(log^3 N) for more than a decade.

Next, we show how to construct a perfect ORAM with O(log^3 N / log log N) deterministic simulation overhead. We further show how to make the scheme parallel, resulting in a perfect OPRAM with O(log^4 N / log log N) deterministic simulation overhead. For perfect ORAMs/OPRAMs with deterministic performance bounds, our results achieve subexponential improvement over the state-of-the-art. Specifically, the best known prior scheme incurs more than sqrt(N) deterministic simulation overhead (Raskin and Simkin, Asiacrypt'19); moreover, their scheme works only for the sequential setting and is not amenable to parallelization.

Finally, we additionally consider perfect ORAMs/OPRAMs whose performance bounds hold with high probability. For this new performance metric, we show new constructions whose simulation overhead is, except with negligible probability, upper bounded by the same asymptotic bounds, i.e., we prove high-probability performance bounds that match the expected bounds mentioned earlier.
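Perfect security itself is easy to achieve at linear cost, which is what the constructions above improve upon. A toy sketch (not the paper's construction): a linear-scan ORAM in which every logical access touches all N physical cells in a fixed order, so the physical access pattern is literally identical for every request sequence.

```python
# Toy sketch (not the paper's construction): a trivially *perfect* ORAM
# with deterministic O(N) overhead per access. Every logical read or
# write scans all N physical cells in a fixed order, so the physical
# access pattern is identical for every request sequence.

class LinearScanORAM:
    def __init__(self, n):
        self.mem = [0] * n
        self.trace = []                 # physical addresses touched

    def access(self, op, addr, value=None):
        result = None
        for i in range(len(self.mem)):  # always touch every cell
            self.trace.append(i)
            if i == addr:
                result = self.mem[i]
                if op == "write":
                    self.mem[i] = value
        return result

a, b = LinearScanORAM(8), LinearScanORAM(8)
a.access("write", 3, 42)
b.access("read", 5)
b.access("write", 0, 1)
assert a.access("read", 3) == 42
# Different request sequences, identical physical access patterns:
assert a.trace == b.trace == list(range(8)) * 2
```

The challenge addressed by the paper is retaining this identically-distributed guarantee while reducing the per-access overhead from linear to polylogarithmic.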
Controlled Homomorphic Encryption: Definition and Construction
Fully Homomorphic Encryption schemes (FHEs) and Functional Encryption schemes (FunctEs) have had a tremendous impact in cryptography, both for the natural questions that they address and for the wide range of applications in which they have been (sometimes critically) used. In this work we put forth the notion of a Controllable Homomorphic Encryption scheme (CHES), a new primitive that includes features of both FHEs and FunctEs. In a CHES it is possible (similarly to a FHE) to homomorphically evaluate a ciphertext Ct = Enc(m) on a circuit C, thereby obtaining Enc(C(m)), but only if (similarly to a FunctE) a token for C has been received from the owner of the secret key. We discuss difficulties in constructing a CHES and then show a construction based on any FunctE. As a byproduct, our CHES also represents a FunctE supporting the re-encryption functionality, and in that respect improves existing solutions.
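The token-gated workflow can be sketched as an interface mock (an insecure toy for illustration only; the "encryption" below is a placeholder, not a real scheme): homomorphic evaluation of a circuit C on Enc(m) succeeds only when the key owner has issued a token for C.

```python
# Toy mock (insecure, interface illustration only -- "encryption" here
# is a tagged plaintext): the CHES workflow, where evaluating a circuit
# C on Enc(m) requires a token for C issued by the key owner.

class ToyCHES:
    def __init__(self):
        self.issued = set()            # circuits the key owner authorized

    def enc(self, m):
        return ("ct", m)               # placeholder, not real encryption

    def token(self, circuit_name):
        self.issued.add(circuit_name)  # key owner authorizes this circuit
        return circuit_name

    def eval(self, ct, circuit_name, circuit, tok):
        if tok != circuit_name or circuit_name not in self.issued:
            raise PermissionError("no token for this circuit")
        _, m = ct
        return self.enc(circuit(m))    # Enc(C(m))

ches = ToyCHES()
ct = ches.enc(5)
tok = ches.token("double")
assert ches.eval(ct, "double", lambda x: 2 * x, tok) == ("ct", 10)
```

In a real CHES the evaluation must of course succeed or fail based on cryptographic tokens rather than an honest access-control check; the mock only shows the API shape shared with FHE (evaluation) and FunctE (tokens).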
Receiver and Sender Deniable Functional Encryption
Deniable encryption, first introduced by Canetti et al. (CRYPTO 1997), allows equivocation of encrypted communication.
In this work we generalize its study to functional encryption (FE).
Our results are summarized as follows:
We first put forward and motivate the concept of receiver deniable FE, for which we consider two models. In the first model, as previously considered by O'Neill et al. (CRYPTO 2011) in the case of identity-based encryption, a receiver gets assistance from the master authority to generate a fake secret key. In the second model, there are "normal" and "deniable" secret keys, and a receiver in possession of a deniable secret key can produce a fake but authentic-looking normal key on its own.
In the first model, we show a compiler from any FE scheme for the general circuit functionality to a FE scheme having receiver deniability.
In addition we show an efficient receiver deniable FE scheme for Boolean Formulae from bilinear maps. In the second (multi-distributional) model, we present a specific FE scheme for the general circuit functionality having receiver deniability. To our knowledge, a scheme in the multi-distributional model was not previously known even for the special case of identity-based encryption.
Finally, we construct the first sender (non-multi-distributional) deniable FE scheme.
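The equivocation at the heart of deniability can be seen in the classical one-time-pad setting (a textbook example, not the paper's FE scheme): a ciphertext can be "explained" as an encryption of any message by exhibiting a suitable fake key.

```python
# Toy sketch (classical one-time-pad deniability, not the paper's FE
# scheme): equivocation means a ciphertext can be opened to any claimed
# message by exhibiting a fake key consistent with it.
import secrets

def otp_encrypt(key: int, m: int) -> int:
    return key ^ m

def fake_key(ciphertext: int, claimed_m: int) -> int:
    # A key under which the same ciphertext decrypts to claimed_m.
    return ciphertext ^ claimed_m

m, m_fake = 0b1010, 0b0110
key = secrets.randbits(4)
c = otp_encrypt(key, m)
k2 = fake_key(c, m_fake)
assert k2 ^ c == m_fake      # the fake opening is consistent
```

Deniable FE asks for analogous equivocation for receivers (fake secret keys) and senders (fake randomness) in the far richer functional-encryption setting.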