
    Separation Results for Boolean Function Classes

    We show (almost) separation between certain important classes of Boolean functions. The technique we use is to show that the total influence of functions in one class is less than the total influence of functions in the other class. In particular, we show (almost) separation of several classes of Boolean functions that have been studied in coding theory and cryptography from classes that have been studied in combinatorics and complexity theory.
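
    As a concrete illustration of the quantity driving the separation argument (not code from the paper), the sketch below computes the total influence of a Boolean function by brute force over its truth table; the example functions are only illustrative.

```python
from itertools import product

def total_influence(f, n):
    """Brute-force total influence of f: {0,1}^n -> {0,1} (or {+1,-1}).

    Inf(f) = sum_i Pr_x[f(x) != f(x with coordinate i flipped)],
    computed exactly by enumerating all 2^n inputs.
    """
    total = 0.0
    for x in product((0, 1), repeat=n):
        fx = f(x)
        for i in range(n):
            y = list(x)
            y[i] ^= 1                      # flip coordinate i
            if f(tuple(y)) != fx:
                total += 1
    return total / 2 ** n                  # average over the uniform distribution

# Example: parity has total influence n, a dictator has total influence 1.
n = 4
parity = lambda x: sum(x) % 2
dictator = lambda x: x[0]
print(total_influence(parity, n), total_influence(dictator, n))   # 4.0 1.0
```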

    Randomness Extraction in AC0 and with Small Locality

    Randomness extractors, which extract high quality (almost-uniform) random bits from biased random sources, are important objects both in theory and in practice. While there has been significant progress in obtaining near optimal constructions of randomness extractors in various settings, the computational complexity of randomness extractors is still much less studied. In particular, it is not clear whether randomness extractors with good parameters can be computed in several interesting complexity classes that are much weaker than P. In this paper we study randomness extractors in the following two models of computation: (1) constant-depth circuits (AC0), and (2) the local computation model. Previous work in these models, such as [Vio05a], [GVW15] and [BG13], only achieves constructions with weak parameters. In this work we give explicit constructions of randomness extractors with much better parameters. As an application, we use our AC0 extractors to study pseudorandom generators in AC0, and show that we can construct both cryptographic pseudorandom generators (under reasonable computational assumptions) and unconditional pseudorandom generators for space-bounded computation with very good parameters. Our constructions combine several previous techniques in randomness extractors, as well as introduce new techniques to reduce or preserve the complexity of extractors, which may be of independent interest. These include (1) a general way to reduce the error of strong seeded extractors while preserving the AC0 property and small locality, and (2) a seeded randomness condenser with small locality. Comment: 62 pages.
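
    For intuition about what a seeded extractor does, here is a toy sketch based on random linear hashing and the leftover hash lemma; it is not the AC0 or small-locality construction of the paper, and the source, seed and output lengths below are purely illustrative.

```python
import random

def linear_extract(source_bits, seed_bits, m):
    """Toy seeded extractor: m inner products over GF(2) between the weak
    source and the rows of a matrix described by the seed.  By the leftover
    hash lemma, the output is close to uniform whenever the source has
    enough min-entropy."""
    n = len(source_bits)
    assert len(seed_bits) == n * m
    out = []
    for j in range(m):
        row = seed_bits[j * n:(j + 1) * n]        # j-th row of the hash matrix
        out.append(sum(r & s for r, s in zip(row, source_bits)) % 2)
    return out

# Illustrative weak source: independent bits with bias 0.8 (high min-entropy).
n, m = 64, 8
weak = [int(random.random() < 0.8) for _ in range(n)]
seed = [random.randint(0, 1) for _ in range(n * m)]
print(linear_extract(weak, seed, m))
```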

    Top-Down Induction of Decision Trees: Rigorous Guarantees and Inherent Limitations

    Consider the following heuristic for building a decision tree for a function $f : \{0,1\}^n \to \{\pm 1\}$. Place the most influential variable $x_i$ of $f$ at the root, and recurse on the subfunctions $f_{x_i=0}$ and $f_{x_i=1}$ on the left and right subtrees respectively; terminate once the tree is an $\varepsilon$-approximation of $f$. We analyze the quality of this heuristic, obtaining near-matching upper and lower bounds: $\circ$ Upper bound: For every $f$ with decision tree size $s$ and every $\varepsilon \in (0,\frac{1}{2})$, this heuristic builds a decision tree of size at most $s^{O(\log(s/\varepsilon)\log(1/\varepsilon))}$. $\circ$ Lower bound: For every $\varepsilon \in (0,\frac{1}{2})$ and $s \le 2^{\tilde{O}(\sqrt{n})}$, there is an $f$ with decision tree size $s$ such that this heuristic builds a decision tree of size $s^{\tilde{\Omega}(\log s)}$. We also obtain upper and lower bounds for monotone functions: $s^{O(\sqrt{\log s}/\varepsilon)}$ and $s^{\tilde{\Omega}(\sqrt[4]{\log s})}$ respectively. The lower bound disproves conjectures of Fiat and Pechyony (2004) and Lee (2009). Our upper bounds yield new algorithms for properly learning decision trees under the uniform distribution. We show that these algorithms---which are motivated by widely employed and empirically successful top-down decision tree learning heuristics such as ID3, C4.5, and CART---achieve provable guarantees that compare favorably with those of the current fastest algorithm (Ehrenfeucht and Haussler, 1989). Our lower bounds shed new light on the limitations of these heuristics. Finally, we revisit the classic work of Ehrenfeucht and Haussler. We extend it to give the first uniform-distribution proper learning algorithm that achieves polynomial sample and memory complexity, while matching its state-of-the-art quasipolynomial runtime.
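
    The heuristic itself is a few lines of code. The sketch below is a brute-force reading of it (exponential in n, for intuition only), where the stopping rule is applied locally: a subtree becomes a leaf once the current subfunction is $\varepsilon$-close to a constant.

```python
from itertools import product

def influences(f, n):
    """Exact influence of each variable of f: {0,1}^n -> {+1,-1}."""
    inf = [0.0] * n
    for x in product((0, 1), repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1
            if f(x) != f(tuple(y)):
                inf[i] += 1.0 / 2 ** n
    return inf

def build_tree(f, n, eps):
    """Top-down heuristic: split on the most influential variable, recurse
    on both restrictions, and stop once the subfunction is eps-close to a
    constant.  Returned indices are relative to the current subfunction."""
    values = [f(x) for x in product((0, 1), repeat=n)]
    majority = 1 if sum(v == 1 for v in values) >= len(values) / 2 else -1
    err = sum(v != majority for v in values) / len(values)
    if err <= eps or n == 0:
        return majority                                   # leaf: majority label
    inf = influences(f, n)
    i = max(range(n), key=lambda j: inf[j])               # most influential variable
    sub = lambda b: (lambda x: f(x[:i] + (b,) + x[i:]))   # restriction x_i := b
    return (i, build_tree(sub(0), n - 1, eps), build_tree(sub(1), n - 1, eps))

# Example: 3-variable majority, eps = 0.1.
maj3 = lambda x: 1 if sum(x) >= 2 else -1
print(build_tree(maj3, 3, 0.1))
```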

    Coin flipping from a cosmic source: On error correction of truly random bits

    We study a problem related to coin flipping, coding theory, and noise sensitivity. Consider a source of truly random bits $x \in \{0,1\}^n$, and $k$ parties, who have noisy versions of the source bits $y^i \in \{0,1\}^n$, where for all $i$ and $j$ it holds that $\Pr[y^i_j = x_j] = 1 - \epsilon$, independently for all $i$ and $j$. That is, each party sees each bit correctly with probability $1-\epsilon$, and incorrectly (flipped) with probability $\epsilon$, independently for all bits and all parties. The parties, who cannot communicate, wish to agree beforehand on \emph{balanced} functions $f_i : \{0,1\}^n \to \{0,1\}$ such that $\Pr[f_1(y^1) = \ldots = f_k(y^k)]$ is maximized. In other words, each party wants to toss a fair coin so that the probability that all parties have the same coin is maximized. The functions $f_i$ may be thought of as an error correcting procedure for the source $x$. When $k=2,3$, no error correction is possible, as the optimal protocol is given by $f_i(y^i) = y^i_1$. On the other hand, for large values of $k$, better protocols exist. We study general properties of the optimal protocols and the asymptotic behavior of the problem with respect to $k$, $n$ and $\epsilon$. Our analysis uses tools from probability, discrete Fourier analysis, convexity and discrete symmetrization.
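
    A small Monte Carlo sketch (illustrative, not taken from the paper) makes the quantity concrete: it estimates the agreement probability when every party uses the same balanced function, comparing the first-bit protocol with majority for a few values of $k$.

```python
import random

def agreement_prob(f, n, k, eps, trials=20000):
    """Estimate Pr[f(y^1) = ... = f(y^k)] when each party applies f to its
    own eps-noisy copy of a uniform source x in {0,1}^n."""
    agree = 0
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        outputs = set()
        for _ in range(k):
            y = [b ^ (random.random() < eps) for b in x]   # flip each bit w.p. eps
            outputs.add(f(y))
        agree += (len(outputs) == 1)
    return agree / trials

first_bit = lambda y: y[0]
majority = lambda y: int(2 * sum(y) > len(y))
n, eps = 5, 0.1
for k in (2, 3, 10):
    print(k, agreement_prob(first_bit, n, k, eps), agreement_prob(majority, n, k, eps))
```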

    Algebraic Methods in Computational Complexity

    Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. In some of the most exciting recent progress in Computational Complexity the algebraic theme still plays a central role. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). The areas of derandomization and coding theory have also seen important advances. The seminar aimed to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics, and the goal of the seminar was to play an important role in educating a diverse community about the latest techniques.

    The Cryptographic Hardness of Random Local Functions -- Survey

    Constant parallel-time cryptography makes it possible to perform complex cryptographic tasks at an ultimate level of parallelism, namely, by local functions, each of whose output bits depends on a constant number of input bits. A natural way to obtain local cryptographic constructions is to use \emph{random local functions}, in which each output bit is computed by applying some fixed $d$-ary predicate $P$ to a randomly chosen $d$-size subset of the input bits. In this work, we will study the cryptographic hardness of random local functions. In particular, we will survey known attacks and hardness results, discuss different flavors of hardness (one-wayness, pseudorandomness, collision resistance, public-key encryption), and mention applications to other problems in cryptography and computational complexity. We also present some open questions, in the hope of developing a systematic study of the cryptographic hardness of local functions.
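
    The construction is simple to state in code. Below is a minimal sketch, assuming Python and a particular 5-ary predicate chosen only for illustration (the survey treats general $d$-ary predicates $P$).

```python
import random

def sample_local_function(n, m, d, predicate, rng=random):
    """Sample a random d-local function {0,1}^n -> {0,1}^m: each output bit
    applies the fixed d-ary predicate to a randomly chosen d-subset (here,
    an ordered d-tuple) of the input bits."""
    subsets = [rng.sample(range(n), d) for _ in range(m)]
    def f(x):
        return [predicate(*(x[i] for i in S)) for S in subsets]
    return f, subsets

# Illustrative predicate: P(a, b, c, d, e) = a XOR b XOR c XOR (d AND e).
P = lambda a, b, c, d, e: a ^ b ^ c ^ (d & e)

n, m, d = 128, 256, 5
f, wiring = sample_local_function(n, m, d, P)
x = [random.randint(0, 1) for _ in range(n)]
print(f(x)[:16])   # first 16 output bits
```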

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume.

    High-Dimensional Function Approximation: Breaking the Curse with Monte Carlo Methods

    In this dissertation we study the tractability of the information-based complexity $n(\varepsilon,d)$ for $d$-variate function approximation problems. In the deterministic setting, for many unweighted problems the curse of dimensionality holds; that is, for some fixed error tolerance $\varepsilon>0$ the complexity $n(\varepsilon,d)$ grows exponentially in $d$. For integration problems one can usually break the curse with the standard Monte Carlo method. For function approximation problems, however, similar effects of randomization have been unknown so far. The thesis contains results on three more or less stand-alone topics. For an extended five-page abstract, see the section "Introduction and Results". Chapter 2 is concerned with lower bounds for the Monte Carlo error for general linear problems via Bernstein numbers. This technique is applied to the $L_\infty$-approximation of certain classes of $C^\infty$-functions, where it turns out that randomization does not affect the tractability classification of the problem. Chapter 3 studies the $L_\infty$-approximation of functions from Hilbert spaces with methods that may use arbitrary linear functionals as information. For certain classes of periodic functions from unweighted periodic tensor product spaces, in particular Korobov spaces, we observe the curse of dimensionality in the deterministic setting, while with randomized methods we achieve polynomial tractability. Chapter 4 deals with the $L_1$-approximation of monotone functions via function values. It is known that this problem suffers from the curse in the deterministic setting. An improved lower bound shows that the problem is still intractable in the randomized setting. However, Monte Carlo breaks the curse; in detail, for any fixed error tolerance $\varepsilon>0$ the complexity $n(\varepsilon,d)$ grows exponentially in $\sqrt{d}$ only. Comment: This is the author's submitted PhD thesis, still in the referee process.
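
    The claim that standard Monte Carlo breaks the curse for integration can be made concrete in a few lines: the root-mean-square error is $\sqrt{\operatorname{Var}(g)/n}$, so for integrands whose variance is bounded independently of $d$ the rate does not deteriorate with the dimension. The sketch below is illustrative and not taken from the thesis.

```python
import random

def mc_integrate(g, d, n, rng=random):
    """Standard Monte Carlo estimate of the integral of g over [0,1]^d from
    n i.i.d. uniform sample points.  The RMS error is sqrt(Var(g)/n),
    independent of d whenever Var(g) is."""
    return sum(g([rng.random() for _ in range(d)]) for _ in range(n)) / n

# Example: g(x) = average of the coordinates; its integral is 1/2 in every
# dimension, and its variance shrinks with d, so accuracy does not degrade.
g = lambda x: sum(x) / len(x)
for d in (2, 20, 200):
    print(d, mc_integrate(g, d, n=100000))
```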

    Post-quantum security of hash functions
