15 research outputs found

    Pseudodistributions That Beat All Pseudorandom Generators (Extended Abstract)


    Pseudorandomness for Approximate Counting and Sampling

    We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allows one to “boost” a given hardness assumption: we show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits, then there is one which cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. In particular, this shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., to show AM = NP) are in fact all equivalent. We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives under an assumption that is no stronger than that used to derandomize AM. We observe that Cai's proof that S_2^P ⊆ ZPP^NP and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence, they can be derandomized under an assumption which is weaker than the assumption that was previously known to suffice.
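    In display form, the "boosting" step described above amounts to the following implication (this is only a restatement of the abstract's claim, not an additional result):
    \[
      \Bigl(\exists\, L \in \mathrm{EXP}\ \text{with no poly-size nondeterministic circuits}\Bigr)
      \;\Longrightarrow\;
      \Bigl(\exists\, L' \in \mathrm{EXP}\ \text{with no poly-size circuits making non-adaptive NP oracle queries}\Bigr).
    \]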

    Imperfect Gaps in Gap-ETH and PCPs

    We study the role of perfect completeness in probabilistically checkable proof systems (PCPs) and give a way to transform a PCP with imperfect completeness into one with perfect completeness, when the initial gap is a constant. We show that PCP_{c,s}[r,q] ⊆ PCP_{1,s'}[r+O(1),q+O(r)] for c-s = Omega(1), which in turn implies that one can convert imperfect completeness to perfect completeness in linear-sized PCPs for NP with an O(log n) additive loss in the query complexity q. We show our result by constructing a "robust circuit" using threshold gates. These results give a gap amplification procedure for PCPs (when completeness is not 1), analogous to questions studied in parallel repetition [Anup Rao, 2011] and pseudorandomness [David Gillman, 1998], and might be of independent interest. We also investigate the time complexity of approximating perfectly satisfiable instances of 3SAT versus those with imperfect completeness. We show that the Gap-ETH conjecture without perfect completeness is equivalent to Gap-ETH with perfect completeness, i.e. MAX 3SAT(1-epsilon,1-delta), delta > epsilon, has 2^{o(n)} algorithms if and only if MAX 3SAT(1,1-delta) has 2^{o(n)} algorithms. We also relate the time complexities of these two problems in a more fine-grained way, showing that T_2(n) <= T_1(n(log log n)^{O(1)}), where T_1(n), T_2(n) denote the randomized time complexity of approximating MAX 3SAT with perfect and imperfect completeness respectively.
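    For readability, the two quantitative claims above can be displayed as follows (this is only a restatement of the abstract's statements):
    \[
      \mathrm{PCP}_{c,s}[r,\,q] \ \subseteq\ \mathrm{PCP}_{1,\,s'}[\,r+O(1),\ q+O(r)\,] \qquad \text{whenever } c-s=\Omega(1),
    \]
    \[
      T_2(n) \ \le\ T_1\!\bigl(n\,(\log\log n)^{O(1)}\bigr).
    \]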

    Range‐Efficient Counting of Distinct Elements in a Massive Data Stream

    Efficient one-pass estimation of F_0, the number of distinct elements in a data stream, is a fundamental problem arising in various contexts in databases and networking. We consider range-efficient estimation of F_0: estimation of the number of distinct elements in a data stream where each element of the stream is not just a single integer but an interval of integers. We present a randomized algorithm which yields an (ε, ÎŽ)-approximation of F_0, with the following time and space complexities (n is the size of the universe of the items): (1) the amortized processing time per interval is O(log(1/ÎŽ) log(n/ε)); (2) the workspace used is O((1/ε^2) log(1/ÎŽ) log n) bits. Our algorithm improves upon a previous algorithm by Bar-Yossef, Kumar and Sivakumar [Proceedings of the 13th ACM–SIAM Symposium on Discrete Algorithms (SODA), 2002, pp. 623–632], which requires O((1/ε^5) log(1/ÎŽ) log^5 n) processing time per item. This algorithm can also be used to compute the max-dominance norm of a stream of multiple signals and significantly improves upon the previous best time and space bounds by Cormode and Muthukrishnan [Proceedings of the 11th European Symposium on Algorithms (ESA), Lecture Notes in Comput. Sci. 2938, Springer, Berlin, 2003, pp. 148–160]. This algorithm also provides an efficient solution to the distinct summation problem, which arises during data aggregation in sensor networks [Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, ACM Press, New York, 2004, pp. 250–262; Proceedings of the 20th International Conference on Data Engineering (ICDE), 2004, pp. 449–460].
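    As a point of reference for the bounds above, here is a minimal, hedged sketch of the standard baseline that range-efficient algorithms improve upon: a k-minimum-values (KMV) style F_0 estimator that hashes every integer of every interval individually. The class and helper names, and the SHA-256 hash, are illustrative assumptions; this is not the paper's algorithm, whose point is precisely to process an interval without touching each of its integers.

    ```python
    import hashlib
    import heapq

    def _hash01(x: int) -> float:
        """Map an integer to a pseudo-uniform value in (0, 1] via SHA-256 (illustrative choice)."""
        h = hashlib.sha256(str(x).encode()).digest()
        return (int.from_bytes(h[:8], "big") + 1) / 2**64

    class NaiveKMVDistinctCounter:
        """KMV F_0 estimator; processes each integer of each interval one by one."""

        def __init__(self, k: int = 1024):
            self.k = k
            self._heap = []      # min-heap of negated values = max-heap over the k smallest hashes
            self._seen = set()   # hash values currently stored, to skip exact duplicates

        def add_interval(self, lo: int, hi: int) -> None:
            # The naive baseline: O(hi - lo + 1) work per interval, which the
            # range-efficient algorithm in the paper avoids.
            for x in range(lo, hi + 1):
                v = _hash01(x)
                if v in self._seen:
                    continue
                if len(self._heap) < self.k:
                    heapq.heappush(self._heap, -v)
                    self._seen.add(v)
                elif v < -self._heap[0]:
                    evicted = -heapq.heappushpop(self._heap, -v)  # drop current largest stored hash
                    self._seen.discard(evicted)
                    self._seen.add(v)

        def estimate(self) -> float:
            if len(self._heap) < self.k:
                return float(len(self._heap))
            kth_smallest = -self._heap[0]
            return (self.k - 1) / kth_smallest

    # Example: two overlapping intervals covering 1..150, so the true F_0 is 150.
    c = NaiveKMVDistinctCounter(k=64)
    c.add_interval(1, 100)
    c.add_interval(51, 150)
    print(round(c.estimate()))  # approximately 150, up to the estimator's variance
    ```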

    Efficient Simultaneous Privately and Publicly Verifiable Robust Provable Data Possession from Elliptic Curves

    When outsourcing large sets of data to the cloud, it is desirable for clients to efficiently check whether all outsourced data is still retrievable at any later point in time, without having to download all of it. Provable data possession (PDP) and proofs of retrievability (PoR), for which various constructions exist, are concepts to solve this issue. Interestingly, no PDP/PoR scheme is known so far that yields an efficient construction supporting both private and public verifiability simultaneously. In particular, this means that up to now all PDP/PoR schemes allow either public or private verifiability exclusively, since different setup procedures and metadata sets are required. However, supporting both variants simultaneously seems interesting, as publicly verifiable schemes are far less efficient than privately verifiable ones. In this paper, we propose the first simultaneously privately and publicly verifiable (robust) PDP protocol, which allows the data owner to use the more efficient private verification and anyone else to run the public verification algorithm. Our construction, which is based on elliptic curves, achieves this by using the same setup procedure and the same metadata set for private and public verifiability. We provide a rigorous security analysis and prove our construction secure in the random oracle model under the assumption that the elliptic curve discrete logarithm problem is intractable. We give detailed comparisons of our proposed scheme with the most efficient existing approaches for either private or public verifiability, in terms of storage and communication overhead as well as computational effort for the client and the server. Our analysis shows that, for parameter choices relevant to practical applications, our construction significantly outperforms all existing privately and publicly verifiable schemes. This means that even when our construction is used for either private or public verifiability alone, it still outperforms the most efficient known constructions, which is particularly appealing in the public verifiability setting.
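    To make the structural point above concrete, here is a minimal, hypothetical interface sketch of a PDP scheme with one setup and one metadata set feeding two verification paths. All names and types are assumptions made for illustration; the actual algorithms, elliptic-curve operations and security proofs are those defined in the paper.

    ```python
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Tag:            # per-block metadata stored alongside the outsourced data
        value: bytes

    @dataclass
    class Challenge:      # identifiers of the blocks to be audited
        indices: List[int]

    @dataclass
    class Proof:          # server's short response, computed from challenged blocks and tags
        value: bytes

    class SimultaneousPDP:
        """Hypothetical skeleton: one setup, one metadata set, two verification paths."""

        def keygen(self):
            """Generate a key pair; the secret part enables the cheaper private verification."""
            raise NotImplementedError

        def tag_file(self, sk, blocks: List[bytes]) -> List[Tag]:
            """Data owner computes per-block tags once, before outsourcing the file."""
            raise NotImplementedError

        def prove(self, blocks: List[bytes], tags: List[Tag], chal: Challenge) -> Proof:
            """Server aggregates the challenged blocks and tags into a short proof."""
            raise NotImplementedError

        def verify_private(self, sk, chal: Challenge, proof: Proof) -> bool:
            """Owner's check using the secret key (the more efficient path)."""
            raise NotImplementedError

        def verify_public(self, pk, chal: Challenge, proof: Proof) -> bool:
            """Anyone's check using only the public key, over the same tags."""
            raise NotImplementedError
    ```

    The point illustrated is purely structural: tag_file produces a single set of tags that both verify_private and verify_public consume, which is what distinguishes the paper's construction from schemes requiring separate setup and metadata per verification mode.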

    Towards Efficient Provable Data Possession in Cloud Storage

    Provable Data Possession (PDP) allows data owners to periodically and remotely audit their data stored in a cloud storage service, without retrieving the file and without keeping a local copy. Ateniese et al. (CCS 07) proposed the first PDP scheme, which is very efficient in communication and storage. However, their scheme requires a lot of group exponentiation operations: in the setup, one group exponentiation is required to generate a tag per data block; in each verification, (equivalently) (m + ℓ) group exponentiations are required to generate a proof, where m is the size of a data block and ℓ is the number of blocks accessed during a verification. This paper proposes an efficient PDP scheme. Compared to Ateniese et al. (CCS 07), the proposed scheme has the same complexities in communication and storage, but is more efficient in computation: in the setup, no group exponentiations are required; in each verification, only (equivalently) m group exponentiations are required to generate a proof. The security of the proposed scheme is proved under the Knowledge of Exponent Assumption and the Factorization Assumption.
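    The computational comparison quoted above amounts to the following (a restatement of the abstract's counts, writing exp for one group exponentiation):
    \[
      \text{setup: } 1\ \text{exp per block (Ateniese et al.)}\ \ \text{vs.}\ \ 0\ \text{(proposed)};\qquad
      \text{per verification: } (m+\ell)\ \text{exp (Ateniese et al.)}\ \ \text{vs.}\ \ m\ \text{exp (proposed)}.
    \]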

    Explicit List-Decodable Codes with Optimal Rate for Computationally Bounded Channels

    A stochastic code is a pair of encoding and decoding procedures, where the encoding procedure receives a k-bit message m and a d-bit uniform string S. The code is (p,L)-list-decodable against a class C of "channel functions" from n bits to n bits if, for every message m and every channel C in C that induces at most pn errors, applying decoding to the "received word" C(Enc(m,S)) produces a list of at most L messages that contains m with high probability (over the choice of uniform S). Note that both the channel C and the decoding algorithm Dec do not receive the random variable S. The rate of a code is the ratio between the message length and the encoding length, and a code is explicit if Enc, Dec run in time poly(n). Guruswami and Smith (J. ACM, to appear) showed that for every constants 0 < p < 1/2, epsilon > 0 and c > 1 there are Monte-Carlo explicit constructions of stochastic codes with rate R >= 1-H(p)-epsilon that are (p,L=poly(1/epsilon))-list decodable for size n^c channels. Monte-Carlo means that the encoding and decoding need to share a public uniformly chosen poly(n^c) bit string Y, and the constructed stochastic code is (p,L)-list decodable with high probability over the choice of Y. Guruswami and Smith pose an open problem to give fully explicit (that is, not Monte-Carlo) codes with the same parameters, under hardness assumptions. In this paper we resolve this open problem, using a minimal assumption: the existence of poly-time computable pseudorandom generators for small circuits, which follows from standard complexity assumptions by Impagliazzo and Wigderson (STOC 97). Guruswami and Smith also asked for fully explicit unconditional constructions with the same parameters against O(log n)-space online channels. (These are channels that have space O(log n) and are allowed to read the input codeword in one pass.) We resolve this open problem as well. Finally, we consider a tighter notion of explicitness, in which the running time of the encoding and list-decoding algorithms does not increase when increasing the complexity of the channel. We give explicit constructions (with rate approaching 1-H(p) for every p < 1/2) for channels that are circuits of size 2^{n^{Omega(1/d)}} and depth d. Here, the running time of encoding and decoding is a fixed polynomial (that does not depend on d). Our approach builds on the machinery developed by Guruswami and Smith, replacing some probabilistic arguments with explicit constructions. We also present a simplified and general approach that makes the reductions in the proof more efficient, so that we can handle weak classes of channels.
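    In display form, the decoding requirement and rate bound described above read as follows (H denotes the binary entropy function; this is only a restatement of the abstract's definitions):
    \[
      \forall m\ \ \forall C\in\mathcal{C}\ \text{inducing at most } pn \text{ errors}:\quad
      \Pr_{S\sim U_d}\Bigl[\, m \in \mathrm{Dec}\bigl(C(\mathrm{Enc}(m,S))\bigr)\,\Bigr]\ \text{is high},\qquad |\mathrm{Dec}(\cdot)|\le L,
    \]
    \[
      R \;=\; \frac{k}{n} \;\ge\; 1-H(p)-\epsilon.
    \]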

    Hardness Amplification Proofs Require Majority


    Black-Box Computational Zero-Knowledge Proofs, Revisited: The Simulation-Extraction Paradigm

    The concept of zero-knowledge proofs has been around for about 25 years. It has been redefined over and over to suit the special security requirements of protocols and systems. Common among all definitions is the requirement of the existence of some efficient "device" simulating the view of the verifier (or the transcript of the protocol), such that the simulation is indistinguishable from the reality. The definitions differ in many respects, including the type and power of the devices, the order of quantifiers, the type of indistinguishability, and so on. In this paper, we will scrutinize the definition of "black-box computational" zero-knowledge, in which there exists one simulator for all verifiers, the simulator has black-box access to the verifier, and the quality of simulation is such that the real and simulated views cannot be distinguished by polynomial tests (computational indistinguishability). Working in a theoretical model (the Random-Oracle Model), we show that the indistinguishability requirement is stated in a conceptually inappropriate way: present definitions allow the knowledge of the verifier and distinguisher to be independent, while the two entities are essentially coupled. Therefore, our main take on the problem will be conceptual and semantic, rather than literal. We formalize the concept by introducing a "knowledge extractor" into the model, which tries to extract the extra knowledge hard-coded into the distinguisher (if any), and then helps the simulator to construct the view of the verifier. The new paradigm is termed the Simulation-Extraction Paradigm, as opposed to the previous Simulation Paradigm. We also provide an important application of the new formalization: using the simulation-extraction paradigm, we construct one-round (i.e. two-move) zero-knowledge protocols for proving "the computational ability to invert some trapdoor permutation" in the Random-Oracle Model. It is shown that the protocol cannot be proven zero-knowledge in the classical Simulation Paradigm. The proof of the zero-knowledge property in the new paradigm is interesting in that it does not require knowing the internal structure of the trapdoor permutation, or a polynomial-time reduction from it to another (e.g. an NP-complete) problem.
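    The flavour of definition under scrutiny here (one universal simulator with black-box access to the verifier, computational indistinguishability of views) is conventionally written along the following lines; this is the standard textbook form, included only as a reference point:
    \[
      \exists\ \text{PPT } \mathsf{Sim}\ \ \forall\ \text{PPT } V^{*}\ \ \forall x\in L:\qquad
      \bigl\{\mathrm{view}_{V^{*}}\langle P,V^{*}\rangle(x)\bigr\} \ \approx_{c}\ \bigl\{\mathsf{Sim}^{V^{*}}(x)\bigr\},
    \]
    where ≈_c denotes indistinguishability by all polynomial-time distinguishers.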

    Leakage Resilient Proofs of Ownership in Cloud Storage, Revisited

    Client-side deduplication is a very effective mechanism for reducing both storage and communication cost in cloud storage services. Halevi et al. (CCS '11) discovered a security vulnerability in existing implementations of client-side deduplication and proposed a cryptographic primitive called "proofs of ownership" (PoW) as a countermeasure. In a proof of ownership scheme, any owner of the same file can prove to the cloud storage server that he/she owns that file in an efficient and secure manner, even if a bounded amount of any efficiently extractable information of that file has been leaked. We revisit Halevi et al.'s formulation of PoW and significantly improve the understanding and construction of PoW. Our contribution is twofold. Firstly, we propose a generic and conceptually simple approach to constructing a privacy-preserving proofs of ownership scheme, by leveraging well-known primitives (i.e. randomness extractors and proofs of retrievability) and the sample-then-extract technique. Our approach can be roughly described as Privacy-Preserving PoW = Randomness Extractor + Proofs of Retrievability. Secondly, in order to provide a better instantiation of privacy-preserving PoW, we propose a novel design of randomness extractor with large output size, which improves the state of the art by reducing both the random seed length and the entropy loss (i.e. the difference between the entropy of the input and the output) simultaneously.
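    For reference, the extractor parameters mentioned in the second contribution are standardly defined as follows (standard background on seeded randomness extractors, not a statement of the paper's new construction):
    \[
      \mathrm{Ext}:\{0,1\}^{n}\times\{0,1\}^{d}\to\{0,1\}^{m}\ \text{is a } (k,\epsilon)\text{-extractor if }\ H_{\infty}(X)\ge k \ \Longrightarrow\ \mathrm{Ext}(X,U_{d})\approx_{\epsilon} U_{m},
    \]
    with seed length d and entropy loss k + d - m; the construction in the paper aims to reduce both simultaneously while keeping the output length m large.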