6 research outputs found

    A Lower Bound for Sampling Disjoint Sets

    Suppose Alice and Bob each start with private randomness and no other input, and they wish to engage in a protocol in which Alice ends up with a set x ⊆ [n] and Bob ends up with a set y ⊆ [n], such that (x, y) is uniformly distributed over all pairs of disjoint sets. We prove that for some constant β < 1, this requires Ω(n) communication even to get within statistical distance 1 − β^n of the target distribution. Previously, Ambainis, Schulman, Ta-Shma, Vazirani, and Wigderson (FOCS 1998) proved that Ω(√n) communication is required to get within some constant statistical distance ε > 0 of the uniform distribution over all pairs of disjoint sets of size √n.
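
    As a side note on the target distribution itself: each element of [n] is either in x only, in y only, or in neither, so there are exactly 3^n disjoint pairs, and a single party could sample the uniform distribution with an independent three-way coin flip per element. The sketch below (plain Python, illustrative only and not from the paper) shows this centralized reference sampler; the paper's lower bound concerns the much harder two-party setting, where Alice and Bob see only their own private randomness and must communicate Ω(n) bits even to approximate the joint distribution.

    import random

    def sample_disjoint_pair(n):
        """Centralized sampler for the target distribution: a uniformly random
        pair (x, y) of disjoint subsets of [n]. There are exactly 3**n such
        pairs, so assigning each element independently to 'x only', 'y only',
        or 'neither' with probability 1/3 each gives the uniform distribution."""
        x, y = set(), set()
        for i in range(1, n + 1):
            r = random.randrange(3)
            if r == 0:
                x.add(i)
            elif r == 1:
                y.add(i)
            # r == 2: element i goes to neither set
        return x, y

    x, y = sample_disjoint_pair(10)
    print(sorted(x), sorted(y), x.isdisjoint(y))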

    Complexity of Distributions and Average-Case Hardness

    We address the following question in average-case complexity: does there exist a language L such that for all easy distributions D the distributional problem (L, D) is easy on the average, while there exists some harder distribution D′ such that (L, D′) is hard on the average? We consider two complexity measures of distributions: the complexity of sampling and the complexity of computing the distribution function. For the complexity of sampling, we establish a connection between the above question and the hierarchy theorem for sampling distributions recently studied by Thomas Watson. Using this connection we prove that for every 0 < a < b there exist a language L, an ensemble of distributions D samplable in n^{log^b n} steps, and a linear-time algorithm A such that for every ensemble of distributions F samplable in n^{log^a n} steps, A correctly decides L on all inputs from {0, 1}^n except for a set that has infinitely small F-measure, and for every algorithm B there are infinitely many n such that the set of all elements of {0, 1}^n for which B correctly decides L has infinitely small D-measure. For the complexity of computing the distribution function we prove the following tight result: for every a > 0 there exist a language L, an ensemble of polynomial-time computable distributions D, and a linear-time algorithm A such that for every ensemble of distributions F computable in n^a steps, A correctly decides L on all inputs from {0, 1}^n except for a set that has F-measure at most 2^{-n/2}, and for every algorithm B there are infinitely many n such that the set of all elements of {0, 1}^n for which B correctly decides L has D-measure at most 2^{-n+1}.
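
    To make the quantifier structure of the tight result easier to parse, here is a restatement in display form; reading "F-measure of a set" as the probability mass that F_n assigns to it is an assumption consistent with the abstract's wording, not a quotation from the paper:

    \[
      \forall n:\ \Pr_{x \sim F_n}\bigl[A(x) \neq L(x)\bigr] \le 2^{-n/2},
      \qquad
      \exists^{\infty} n:\ \Pr_{x \sim D_n}\bigl[B(x) = L(x)\bigr] \le 2^{-n+1},
    \]

    where the first condition holds for every ensemble F = {F_n} computable in n^a steps, and the second for every algorithm B.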

    An efficient coding theorem via probabilistic representations and its applications

    A probabilistic representation of a string x ∈ {0,1}ⁿ is given by the code of a randomized algorithm that outputs x with high probability [Igor C. Oliveira, 2019]. We employ probabilistic representations to establish the first unconditional Coding Theorem in time-bounded Kolmogorov complexity. More precisely, we show that if a distribution ensemble …
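
    To make the opening definition concrete, here is a toy sketch (plain Python, with an assumed success threshold of 2/3; the paper's exact convention may differ) of a randomized program whose code serves as a probabilistic representation of a fixed string: the program outputs that string with probability well above the threshold.

    import random

    TARGET = "0101"  # the string x being represented (illustrative choice)

    def randomized_program():
        """A toy randomized 'program': it outputs the target string with
        probability 3/4 and an unrelated string otherwise. Its source code
        is then a probabilistic representation of TARGET under a 2/3
        success threshold (an assumed convention)."""
        return TARGET if random.random() < 0.75 else "1111"

    # Empirically estimate the success probability.
    trials = 10_000
    hits = sum(randomized_program() == TARGET for _ in range(trials))
    print(hits / trials)  # close to 0.75, above the assumed 2/3 threshold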

    The Space Complexity of Sampling


    Time Hierarchies for Sampling Distributions

    We show that “a little more time gives a lot more power to sampling algorithms.” We prove that for every constant k ≥ 2, every polynomial time bound t, and every polynomially small ε, there exists a family of distributions on k elements that can be sampled exactly in polynomial time but cannot be sampled within statistical distance 1 − 1/k − ε in time t. This implies the following general time hierarchy for sampling distributions on arbitrary-size domains such as {0, 1}^n: for every polynomial time bound t and every constant ε > 0, there exists a family of distributions that can be sampled exactly in polynomial time but cannot be sampled within statistical distance 1 − ε in time t. Our proof involves reducing the problem to a communication problem over a certain type of noisy channel. To solve the latter problem we use a type of list-decodable code for a setting where there is no bound on the number of errors but each error gives more information than an erasure. This type of code can be constructed using certain known traditional list-decodable codes, but we give a new construction that is elementary, self-contained, and tailored to this setting.
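
    The threshold 1 − 1/k − ε is essentially the best one could hope for: any distribution on k elements puts mass at least 1/k on some outcome, so the trivial “sampler” that always outputs that outcome is already within statistical distance 1 − 1/k of the target. The sketch below (plain Python with illustrative numbers, not from the paper) computes statistical distance and checks this trivial bound.

    from fractions import Fraction

    def statistical_distance(p, q):
        """Total variation (statistical) distance between two distributions
        given as dicts mapping outcomes to probabilities."""
        support = set(p) | set(q)
        return sum(abs(p.get(s, 0) - q.get(s, 0)) for s in support) / 2

    # Any distribution D on k elements has an outcome of mass >= 1/k, so the
    # constant sampler that always outputs that outcome is within statistical
    # distance 1 - 1/k of D.
    k = 4
    D = {0: Fraction(2, 5), 1: Fraction(1, 5), 2: Fraction(1, 5), 3: Fraction(1, 5)}
    heaviest = max(D, key=D.get)
    point_mass = {heaviest: Fraction(1)}
    print(statistical_distance(D, point_mass))   # 3/5
    print(1 - Fraction(1, k))                    # 3/4, the trivial 1 - 1/k bound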