12 research outputs found

    Verifying proofs in constant depth

    In this paper we initiate the study of proof systems where verification of proofs proceeds by NC circuits. We investigate the question of which languages admit proof systems in this very restricted model. Formulated alternatively, we ask which languages can be enumerated by NC functions. Our results show that the answer to this problem is not determined by the complexity of the language. On the one hand, we construct NC proof systems for a variety of languages ranging from regular to NP-complete. On the other hand, we show by combinatorial methods that even easy regular languages such as Exact-OR do not admit NC proof systems. We also present a general construction of proof systems for regular languages with strongly connected NFAs.
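
    In this setting, a proof system for a language L is a surjective function mapping candidate proofs onto L, computable within the restricted class. A minimal sketch of the notion (an illustration, not necessarily the paper's construction), taking the even-parity language as the example: every output bit reads only two proof bits, so the map is constant-depth, yet its range is exactly the even-parity words.

        # Toy constant-depth proof system for EVEN = {y in {0,1}^n : y has evenly many 1s}.
        # The "verifier" f maps any proof p to a word of EVEN, and every output bit
        # depends on just two proof bits.

        def f(p):
            """y_i = p_i XOR p_{(i+1) mod n}; the XORs telescope, so f(p) is always in EVEN."""
            n = len(p)
            return [p[i] ^ p[(i + 1) % n] for i in range(n)]

        def find_proof(y):
            """Surjectivity: every even-parity word has a preimage, built greedily."""
            assert sum(y) % 2 == 0
            p = [0]
            for yi in y[:-1]:
                p.append(p[-1] ^ yi)
            return p

        y = [1, 0, 0, 1]          # even parity
        assert f(find_proof(y)) == y   # f enumerates exactly the even-parity words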

    Negation-Limited Formulas

    We give an efficient structural decomposition theorem for formulas that depends on their negation complexity and demonstrate its power with the following applications. We prove that every formula that contains t negation gates can be shrunk using a random restriction to a formula of size O(t) with the shrinkage exponent of monotone formulas. As a result, the shrinkage exponent of formulas that contain a constant number of negation gates is equal to the shrinkage exponent of monotone formulas. We give an efficient transformation of formulas with t negation gates to circuits with log(t) negation gates. This transformation provides a generic way to cast results for negation-limited circuits to the setting of negation-limited formulas. For example, using a result of Rossman (CCC '15), we obtain an average-case lower bound for polynomial-size formulas on n variables with n^{1/2-epsilon} negations. In addition, we prove a lower bound on the number of negations required to compute one-way permutations by polynomial-size formulas.
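
    The random-restriction technique at the heart of such shrinkage results can be demonstrated with a toy experiment; the sketch below (illustrative only, with made-up parameters) applies a p-random restriction to a formula and reports the drop in leaf count. With shrinkage exponent Gamma, one expects roughly a p^Gamma factor.

        import random

        # Formulas are nested tuples: ('var', i), ('not', f), ('and', f, g), ('or', f, g).
        # A p-random restriction leaves each variable free with probability p and
        # otherwise fixes it to a uniform bit.

        def restrict(f, rho):
            op = f[0]
            if op == 'var':
                v = rho[f[1]]
                return f if v == '*' else v
            if op == 'not':
                g = restrict(f[1], rho)
                return 1 - g if g in (0, 1) else ('not', g)
            a, b = restrict(f[1], rho), restrict(f[2], rho)
            absorbing, neutral = (0, 1) if op == 'and' else (1, 0)
            if absorbing in (a, b):
                return absorbing        # 0 AND x = 0,  1 OR x = 1
            if a == neutral:
                return b                # 1 AND x = x,  0 OR x = x
            return a if b == neutral else (op, a, b)

        def size(f):
            """Number of remaining leaves (literals)."""
            if f in (0, 1):
                return 0
            return 1 if f[0] == 'var' else sum(size(g) for g in f[1:])

        def random_formula(depth, n):
            if depth == 0:
                leaf = ('var', random.randrange(n))
                return ('not', leaf) if random.random() < 0.5 else leaf
            return (random.choice(('and', 'or')),
                    random_formula(depth - 1, n), random_formula(depth - 1, n))

        random.seed(0)
        n, p = 64, 0.25
        f = random_formula(8, n)    # a full binary tree with 256 leaves
        rho = {i: '*' if random.random() < p else random.randrange(2) for i in range(n)}
        print(size(f), '->', size(restrict(f, rho)))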

    Depth-Bounded Quantum Cryptography with Applications to One-Time Memory and More

    With the power of quantum information, we can achieve exciting and classically impossible cryptographic primitives. However, almost all quantum cryptography faces extreme difficulties with near-term intermediate-scale quantum (NISQ) technology, namely the short lifespan of quantum states and limited sequential computation. At the same time, considering only limited quantum adversaries may still enable us to achieve never-before-possible tasks. In this work, we consider quantum cryptographic primitives against limited quantum adversaries: depth-bounded adversaries. We introduce a model for (depth-bounded) NISQ computers, which are classical circuits interleaved with shallow quantum circuits. Then, we show that one-time memory can be achieved against the depth-bounded quantum adversaries introduced in this work, with their depth being any pre-fixed polynomial. We thereby obtain applications such as one-time programs and one-time proofs. Finally, we show that our one-time memory remains correct even under constant-rate errors.
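
    For reference, the ideal behaviour that a one-time memory is meant to realise is easy to state; the sketch below captures only that interface (the substance of the paper is enforcing the one-shot property against depth-bounded quantum adversaries, which plain code cannot model).

        # Ideal one-time memory (OTM) functionality, as a Python sketch: the device
        # stores two secrets and reveals exactly one of them, exactly once.

        class OneTimeMemory:
            def __init__(self, s0: bytes, s1: bytes):
                self._secrets = (s0, s1)
                self._used = False

            def read(self, choice: int) -> bytes:
                """Reveal s_choice once; any further access fails."""
                if self._used:
                    raise RuntimeError("OTM already consumed")
                self._used = True
                return self._secrets[choice]

        otm = OneTimeMemory(b"secret-0", b"secret-1")
        print(otm.read(1))    # b'secret-1'
        # A second read would raise: in the ideal functionality the other
        # secret is gone for good.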

    Unconditionally Secure NIZK in the Fine-Grained Setting

    Non-interactive zero-knowledge (NIZK) proof systems are often constructed based on cryptographic assumptions. In this paper, we propose the first unconditionally secure NIZK system in the AC0-fine-grained setting. More precisely, our NIZK system has perfect soundness against all adversaries and unconditional zero-knowledge against AC0 adversaries; that is, an AC0 adversary can break the zero-knowledge property only with negligible probability, unconditionally. At the core of our construction is an OR-proof system for the satisfiability of one out of polynomially many statements.
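
    As a point of reference, the generic common-reference-string NIZK syntax underlying such systems is as follows (type stubs only; this is not the paper's AC0 construction).

        # Generic CRS-model NIZK interface, sketched as Python type stubs.

        Statement = Witness = Proof = CRS = bytes

        def setup(security: int) -> CRS: ...
        def prove(crs: CRS, x: Statement, w: Witness) -> Proof: ...
        def verify(crs: CRS, x: Statement, pi: Proof) -> bool: ...

        # Perfect soundness: for every x outside the language and every pi,
        #   verify(crs, x, pi) is False, against arbitrarily powerful provers.
        # AC0 zero-knowledge: a simulator produces proofs that no AC0
        #   distinguisher can tell from real ones, with no hardness assumption.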

    Fine-Grained Secure Computation

    This paper initiates a study of Fine-Grained Secure Computation, i.e., the construction of secure computation primitives against moderately complex adversaries. We present definitions and constructions for compact Fully Homomorphic Encryption and Verifiable Computation secure against (non-uniform) NC^1 adversaries. Our results do not require the existence of one-way functions and hold under a widely believed separation assumption, namely NC^1 ⊊ ⊕L/poly. We also present two application scenarios for our model: (i) hardware chips that prove their own correctness, and (ii) protocols against rational adversaries, potentially relevant to the Verifier's Dilemma in smart-contract transactions on platforms such as Ethereum.
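
    What "homomorphic" means can be seen in the smallest possible example (not the paper's scheme): one-time-pad ciphertexts over GF(2) can be XORed, and the result decrypts, under the XOR of the keys, to the XOR of the plaintexts.

        import secrets

        # Degenerate illustration of additive homomorphism over GF(2).

        def enc(key: int, m: int) -> int:
            return key ^ m

        def dec(key: int, c: int) -> int:
            return key ^ c

        k1, k2 = secrets.randbits(8), secrets.randbits(8)
        m1, m2 = 0b1010, 0b0110
        c_sum = enc(k1, m1) ^ enc(k2, m2)        # "add" ciphertexts mod 2
        assert dec(k1 ^ k2, c_sum) == m1 ^ m2    # decrypts to the sum of plaintexts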

    Rationality and Efficient Verifiable Computation

    In this thesis, we study protocols for delegating computation in a model where one of the parties is rational. In our model, a delegator outsources the computation of a function f on input x to a worker, who receives a (possibly monetary) reward. Our goal is to design very efficient delegation schemes where a worker is economically incentivized to provide the correct result f(x). In this work we strive not to rely on cryptographic assumptions; in particular, our results do not require the existence of one-way functions. We provide several results within the framework of rational proofs introduced by Azar and Micali (STOC 2012).

    We make several contributions to efficient rational proofs for general feasible computations. First, we design schemes with a sublinear verifier with low round and communication complexity for space-bounded computations. Second, we provide evidence, in the form of lower bounds, against the existence of rational proofs with logarithmic communication and polylogarithmic verification for P, and with polylogarithmic communication for NP.

    We then move to study the case where a delegator outsources multiple inputs. First, we formalize an extended notion of rational proofs for this scenario (sequential composability) and we show that existing schemes do not satisfy it: these protocols incentivize workers to provide many "fast" incorrect answers, which allows them to solve more problems and collect more rewards. We then design a d-round rational proof for sufficiently "regular" arithmetic circuits of depth d = O(log n) with sublinear verification. We show that, under certain cost assumptions, our scheme is sequentially composable, i.e., it can be used to delegate multiple inputs. We finally show that our scheme for space-bounded computations is also sequentially composable under certain cost assumptions.

    In the last part of this thesis we initiate the study of Fine-Grained Secure Computation: the construction of secure computation primitives against "moderately complex" adversaries. Such fine-grained protocols can be used to obtain sequentially composable rational proofs. We present definitions and constructions for compact Fully Homomorphic Encryption and Verifiable Computation secure against (non-uniform) NC^1 adversaries. Our results hold under a widely believed separation assumption implied by L ≠ NC^1. We also present two application scenarios for our model: (i) hardware chips that prove their own correctness, and (ii) protocols against rational adversaries, potentially relevant to the Verifier's Dilemma in smart-contract transactions on platforms such as Ethereum.
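
    The basic engine of the Azar–Micali framework referenced above is a strictly proper scoring rule: the verifier pays the prover according to a rule under which truthful reporting uniquely maximises expected reward. A minimal sketch with the Brier score (illustrative values only):

        # Pay 1 - (outcome - report)^2 for reporting Pr[outcome = 1] = report.
        # Expected reward is strictly concave and peaks at report == truth,
        # so a rational (reward-maximising) prover reports honestly.

        def brier_reward(report: float, outcome: int) -> float:
            return 1.0 - (outcome - report) ** 2

        def expected_reward(report: float, truth: float) -> float:
            return truth * brier_reward(report, 1) + (1 - truth) * brier_reward(report, 0)

        truth = 0.3   # e.g. the true fraction of satisfying assignments
        best = max((expected_reward(r / 100, truth), r / 100) for r in range(101))
        assert best[1] == truth   # any lie strictly lowers expected reward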

    Average-Case Fine-Grained Hardness

    We present functions that can be computed in some fixed polynomial time but are hard on average for any algorithm that runs in slightly smaller time, assuming widely-conjectured worst-case hardness for problems from the study of fine-grained complexity. Unconditional constructions of such functions were known before (Goldmann et al., IPL '94), but those are canonical functions that have not found further use, while our functions are closely related to well-studied problems and have considerable algebraic structure. We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems -- namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) -- in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that require n^{2-o(1)} time to compute on average, and that of APSP gives us a function that requires n^{3-o(1)} time. Using the same techniques we also obtain a conditional average-case time hierarchy of functions. Based on the average-case hardness and structural properties of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO '03).
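
    The flavour of the OV-based construction can be sketched as follows: the count of orthogonal pairs extends to a low-degree polynomial over a prime field, and evaluating that polynomial on uniformly random field inputs is as hard as worst-case OV, via the self-correctability of low-degree polynomials. A toy evaluation (parameters illustrative; details differ in the paper):

        from math import prod

        P = 2**31 - 1   # a prime modulus; the actual field size is a parameter

        def f_ov(U, V):
            """sum_{i,j} prod_k (1 - U[i][k] * V[j][k])  mod P.
            On 0/1 inputs this counts orthogonal pairs; on random field
            elements it is the candidate average-case-hard function."""
            return sum(prod(1 - u * v for u, v in zip(ui, vj))
                       for ui in U for vj in V) % P

        U = [[1, 0], [0, 1]]
        V = [[0, 1], [1, 1]]
        print(f_ov(U, V))   # 1: the single orthogonal pair (1,0) & (0,1)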

    Survey of local algorithms

    A local algorithm is a distributed algorithm that runs in constant time, independently of the size of the network. Being highly scalable and fault-tolerant, such algorithms are ideal in the operation of large-scale distributed systems. Furthermore, even though the model of local algorithms is very limited, in recent years we have seen many positive results for non-trivial problems. This work surveys the state of the art in the field, covering impossibility results, deterministic local algorithms, randomised local algorithms, and local algorithms for geometric graphs.
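
    The defining feature of the model is that each node's output depends only on its constant-radius neighbourhood. A toy simulator (an illustration of the model, not an algorithm from the survey):

        # Each node decides from its radius-r ball; per-node work is independent
        # of the network size, which is the hallmark of a local algorithm.

        def ball(adj, v, r):
            """Vertices within distance r of v, by r rounds of BFS."""
            seen, frontier = {v}, {v}
            for _ in range(r):
                frontier = {u for w in frontier for u in adj[w]} - seen
                seen |= frontier
            return seen

        def run_local(adj, decide, r):
            return {v: decide(v, ball(adj, v, r)) for v in adj}

        # Example rule: join the set if you hold the largest ID in your radius-1
        # ball.  The selected nodes always form an independent set, since two
        # neighbours cannot both be local maxima.
        adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}   # a path on four nodes
        print(run_local(adj, lambda v, B: v == max(B), r=1))
        # {1: False, 2: False, 3: False, 4: True}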

    Optimisation problems in wireless sensor networks : Local algorithms and local graphs

    This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application.

    A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake.

    In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links.

    Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are not only interested in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating–dominating codes are more appropriate.

    This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating–dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs – these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating–dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
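
    The max-min linear programs mentioned above have the form: maximise min_k (Cx)_k subject to Ax <= 1 and x >= 0, with non-negative A and C. The standard epigraph trick reduces this to an ordinary LP; a small sketch with made-up data (using scipy purely for illustration):

        import numpy as np
        from scipy.optimize import linprog

        # Maximise the minimum utility min_k (Cx)_k subject to capacity
        # constraints Ax <= 1, x >= 0, via an auxiliary variable w:
        #   maximise w  s.t.  Cx >= w*1,  Ax <= 1,  x >= 0.

        A = np.array([[1.0, 1.0, 0.0],      # e.g. per-node energy budgets
                      [0.0, 1.0, 1.0]])
        C = np.array([[2.0, 0.0, 1.0],      # e.g. per-sink flow utilities
                      [0.0, 1.0, 2.0]])

        m, n = A.shape
        k = C.shape[0]
        c = np.r_[np.zeros(n), -1.0]             # variables (x, w); minimise -w
        A_ub = np.r_[np.c_[A, np.zeros(m)],      # A x       <= 1
                     np.c_[-C, np.ones(k)]]      # w - C x   <= 0
        b_ub = np.r_[np.ones(m), np.zeros(k)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * n + [(None, None)])
        print('optimal min-utility:', -res.fun)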