66 research outputs found

    Hitting Sets Give Two-Sided Derandomization of Small Space

    Pseudodeterministic constructions in subexponential time

    We study pseudodeterministic constructions, i.e., randomized algorithms which output the same solution on most computation paths. We establish unconditionally that there is an infinite sequence {p_n}_{n∈ℕ} of increasing primes and a randomized algorithm A running in expected sub-exponential time such that for each n, on input 1^{|p_n|}, A outputs p_n with probability 1. In other words, our result provides a pseudodeterministic construction of primes in sub-exponential time which works infinitely often. This result follows from a much more general theorem about pseudodeterministic constructions. A property Q ⊆ {0,1}* is γ-dense if for large enough n, |Q ∩ {0,1}^n| ≥ γ·2^n. We show that for each c > 0 at least one of the following holds: (1) There is a pseudodeterministic polynomial time construction of a family {H_n} of sets, H_n ⊆ {0,1}^n, such that for each (1/n^c)-dense property Q ∈ DTIME(n^c) and every large enough n, H_n ∩ Q ≠ ∅; or (2) There is a deterministic sub-exponential time construction of a family {H'_n} of sets, H'_n ⊆ {0,1}^n, such that for each (1/n^c)-dense property Q ∈ DTIME(n^c) and for infinitely many values of n, H'_n ∩ Q ≠ ∅. We provide further algorithmic applications that might be of independent interest. Perhaps intriguingly, while our main results are unconditional, they have a non-constructive element, arising from a sequence of applications of the hardness versus randomness paradigm.
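
    As a toy illustration of the pseudodeterminism property described above (hypothetical function names, and not the paper's sub-exponential construction), the sketch below scans candidates in a fixed order and uses randomness only inside a probabilistic primality test; because the test errs with only tiny probability, almost every choice of random coins produces the same output.

```python
import random

def miller_rabin(n: int, rounds: int = 40) -> bool:
    """Probabilistic primality test; errs with probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def pseudodet_prime(n_bits: int) -> int:
    """Toy pseudodeterministic construction: scan candidates in a FIXED order and
    return the first one the randomized test accepts.  Since the test errs only
    with tiny probability, almost every setting of the random coins yields the
    same output (the smallest n_bits-bit prime).  Worst-case exponential time --
    illustrative only, not the sub-exponential construction of the paper."""
    for candidate in range(1 << (n_bits - 1), 1 << n_bits):
        if miller_rabin(candidate):
            return candidate
    raise RuntimeError("unreachable: every such interval contains a prime")

if __name__ == "__main__":
    # Independent runs use fresh random coins yet agree on the output
    # with high probability.
    print({pseudodet_prime(16) for _ in range(5)})  # a single value, e.g. {32771}
```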

    Lower Bounds on Black-Box Reductions of Hitting to Density Estimation

    Consider a deterministic algorithm that tries to find a string in an unknown set S ⊆ {0,1}^n, under the promise that S has large density. The only information that the algorithm can obtain about S is estimates of the density of S in adaptively chosen subsets of {0,1}^n, up to an additive error of μ > 0. This problem is appealing as a derandomization problem, when S is the set of satisfying inputs for a circuit C : {0,1}^n → {0,1} that accepts many inputs: In this context, an algorithm as above constitutes a deterministic black-box reduction of the problem of hitting C (i.e., finding a satisfying input for C) to the problem of approximately counting the number of satisfying inputs for C on subsets of {0,1}^n. We prove tight lower bounds for this problem, demonstrating that naive approaches to solve the problem cannot be improved upon, in general. First, we show a tight trade-off between the estimation error μ and the required number of queries to solve the problem: When μ = O(log(n)/n), a polynomial number of queries suffices, and when μ ≥ 4·log(n)/n, the required number of queries is 2^{Θ(μ·n)}. Secondly, we show that the problem "resists" parallelization: Any algorithm that works in iterations, and can obtain p = p(n) density estimates "in parallel" in each iteration, still requires Ω(n / (log(p) + log(1/μ))) iterations to solve the problem. This work extends the well-known work of Karp, Upfal, and Wigderson (1988), who studied the setting in which S is only guaranteed to be non-empty (rather than dense), and the algorithm can only probe subsets for the existence of a solution in them. In addition, our lower bound on parallel algorithms affirms a weak version of a conjecture of Motwani, Naor, and Naor (1994); we also make progress on a stronger version of their conjecture.
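
    The "naive approach" referred to above can be pictured as halving the cube one coordinate at a time, always keeping the half with the larger estimated density. The sketch below (hypothetical names; the oracle is simulated exactly rather than with adversarial error μ) loses at most 2μ of true density per round, so it succeeds whenever the initial density exceeds 2μn; fixing Θ(log n) coordinates per round instead is what yields the μ = O(log(n)/n) regime mentioned in the abstract.

```python
from itertools import product
from typing import Callable, List, Optional, Tuple

# A subcube is a partial assignment: 0, 1, or None (free) per coordinate.
Subcube = List[Optional[int]]

def exact_density(S: set) -> Callable[[Subcube], float]:
    """Toy stand-in for the density-estimation oracle: answers here are exact,
    whereas in the model above an adversary may shift them by up to mu."""
    def estimate(partial: Subcube) -> float:
        free = [i for i, b in enumerate(partial) if b is None]
        count = 0
        for bits in product([0, 1], repeat=len(free)):
            point = list(partial)
            for i, b in zip(free, bits):
                point[i] = b
            if tuple(point) in S:
                count += 1
        return count / (2 ** len(free))
    return estimate

def hit_by_halving(n: int, estimate: Callable[[Subcube], float]) -> Tuple[int, ...]:
    """Naive reduction of hitting to density estimation: fix one coordinate per
    round, keeping the half-cube with the larger estimated density.  With error
    mu, each round loses at most 2*mu of true density, so a point of S is found
    whenever the initial density exceeds 2*mu*n."""
    partial: Subcube = [None] * n
    for i in range(n):
        partial[i] = 0
        d0 = estimate(partial)
        partial[i] = 1
        d1 = estimate(partial)
        partial[i] = 0 if d0 >= d1 else 1
    return tuple(partial)  # all coordinates fixed: a single candidate point

if __name__ == "__main__":
    n = 8
    S = {p for p in product([0, 1], repeat=n) if sum(p) % 3 == 0}  # a dense set
    print(hit_by_halving(n, exact_density(S)) in S)  # True
```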

    On Nonadaptive Security Reductions of Hitting Set Generators

    One of the central open questions in the theory of average-case complexity is to establish the equivalence between the worst-case and average-case complexity of the Polynomial-time Hierarchy (PH). One general approach is to show that there exists a PH-computable hitting set generator whose security is based on some NP-hard problem. We present the limits of such an approach, by showing that there exists no exponential-time-computable hitting set generator whose security can be proved by using a nonadaptive randomized polynomial-time reduction from any problem outside AM ∩ coAM, which significantly improves the previous upper bound BPP^NP of Gutfreund and Vadhan (RANDOM/APPROX 2008 [Gutfreund and Vadhan, 2008]). In particular, any security proof of a hitting set generator based on some NP-hard problem must use either an adaptive or non-black-box reduction (unless the polynomial-time hierarchy collapses). To the best of our knowledge, this is the first result that shows limits of black-box reductions from an NP-hard problem to some form of a distributional problem in DistPH. Based on our results, we argue that the recent worst-case to average-case reduction of Hirahara (FOCS 2018 [Hirahara, 2018]) is inherently non-black-box, without relying on any unproven assumptions. On the other hand, combining the non-black-box reduction with our simulation technique of black-box reductions, we exhibit the existence of a "non-black-box selector" for GapMCSP, i.e., an efficient algorithm that solves GapMCSP given as advice two circuits, one of which is guaranteed to compute GapMCSP.
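
    The distinction between nonadaptive and adaptive black-box reductions, on which the result above hinges, can be made concrete with the following interface sketch (hypothetical types and names, not tied to any particular generator): a nonadaptive reduction must commit to all of its oracle queries before seeing any answers, while an adaptive one may choose each query based on earlier answers.

```python
import random
from typing import Callable, List, Sequence

# A black-box oracle that purportedly breaks the hitting set generator,
# abstracted here as an arbitrary Boolean function on strings.
Oracle = Callable[[str], bool]

def nonadaptive_reduction(x: str, rng: random.Random, oracle: Oracle,
                          make_queries: Callable[[str, random.Random], List[str]],
                          decide: Callable[[str, Sequence[bool]], bool]) -> bool:
    """Nonadaptive: the entire list of oracle queries is determined by the input
    and the random coins alone, before any answer is seen."""
    queries = make_queries(x, rng)          # committed up front
    answers = [oracle(q) for q in queries]  # answered in one parallel round
    return decide(x, answers)

def adaptive_reduction(x: str, rng: random.Random, oracle: Oracle, rounds: int,
                       next_query: Callable[[str, random.Random, List[bool]], str],
                       decide: Callable[[str, Sequence[bool]], bool]) -> bool:
    """Adaptive: each query may depend on all answers received so far."""
    answers: List[bool] = []
    for _ in range(rounds):
        q = next_query(x, rng, answers)     # may inspect earlier answers
        answers.append(oracle(q))
    return decide(x, answers)
```

    The limit stated in the abstract applies to reductions of the first shape: if a nonadaptive randomized reduction of this form proved the generator secure based on some problem, that problem would already lie in AM ∩ coAM.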

    An Algorithmic Approach to Uniform Lower Bounds
