
    On Learning Monotone DNF Formulae under Uniform Distributions

    We show how to learn, in polynomial time, monotone d-term DNF formulae (formulae in disjunctive normal form with at most d terms) using positive examples drawn from a distribution that generalizes the uniform distribution.
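
    As a purely illustrative sketch of the objects involved (not the paper's learning algorithm), the following Python snippet represents a monotone d-term DNF as a list of variable-index sets and draws positive examples under the uniform distribution by rejection sampling; all names and parameters are made up for this example.

        import random

        # A monotone DNF is a list of terms; each term is a set of variable indices
        # (no negated variables).  f(x) = 1 iff some term has all its variables set to 1.
        def eval_monotone_dnf(terms, x):
            return any(all(x[i] for i in term) for term in terms)

        def positive_examples_uniform(terms, n, m, rng=random.Random(0)):
            # Draw m positive examples under the uniform distribution on {0,1}^n
            # by rejection sampling (illustrative only; slow if f is rarely 1).
            out = []
            while len(out) < m:
                x = [rng.randint(0, 1) for _ in range(n)]
                if eval_monotone_dnf(terms, x):
                    out.append(x)
            return out

        # Example: the 2-term monotone DNF  (x0 AND x1) OR (x2 AND x3 AND x4).
        terms = [{0, 1}, {2, 3, 4}]
        print(positive_examples_uniform(terms, n=5, m=3))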

    Learning DNF Expressions from Fourier Spectrum

    Since its introduction by Valiant in 1984, PAC learning of DNF expressions has remained one of the central problems in learning theory. We consider this problem in the setting where the underlying distribution is uniform, or more generally, a product distribution. Kalai, Samorodnitsky and Teng (2009) showed that in this setting a DNF expression can be efficiently approximated from its "heavy" low-degree Fourier coefficients alone. This is in contrast to previous approaches, where boosting was used and thus Fourier coefficients of the target function modified by various distributions were needed. This property is crucial for learning DNF expressions over smoothed product distributions, a learning model introduced by Kalai et al. (2009) and inspired by the seminal smoothed analysis model of Spielman and Teng (2001). We introduce a new approach to learning (or approximating) polynomial threshold functions which is based on creating a function with range $[-1,1]$ that approximately agrees with the unknown function on low-degree Fourier coefficients. We then describe conditions under which this is sufficient for learning polynomial threshold functions. Our approach yields a new, simple algorithm for approximating any polynomial-size DNF expression from its "heavy" low-degree Fourier coefficients alone. Our algorithm greatly simplifies the proof of learnability of DNF expressions over smoothed product distributions. We also describe an application of our algorithm to learning monotone DNF expressions over product distributions. Building on the work of Servedio (2001), we give an algorithm that runs in time $\mathrm{poly}((s \cdot \log(s/\epsilon))^{\log(s/\epsilon)}, n)$, where $s$ is the size of the target DNF expression and $\epsilon$ is the accuracy. This improves on the $\mathrm{poly}((s \cdot \log(ns/\epsilon))^{\log(s/\epsilon) \cdot \log(1/\epsilon)}, n)$ bound of Servedio (2001). Comment: Appears in the Conference on Learning Theory (COLT), 2012.
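
    A basic primitive behind this Fourier-based approach is estimating the low-degree coefficients $\hat{f}(S) = \mathbb{E}_x[f(x)\chi_S(x)]$ from uniform examples and keeping only the "heavy" ones. The Python sketch below shows that estimation step under the uniform distribution; it is a generic illustration rather than the paper's algorithm, and the function names, threshold, and sample size are arbitrary.

        import itertools, random

        def estimate_heavy_low_degree_coeffs(f, n, degree, threshold,
                                             samples=20000, rng=random.Random(1)):
            # Estimate hat{f}(S) = E_x[ f(x) * chi_S(x) ] for all |S| <= degree,
            # with chi_S(x) = (-1)^(sum of x_i over i in S), from uniform samples
            # of x in {0,1}^n.  f maps {0,1}^n to {-1,+1}.  Keep the "heavy"
            # coefficients with |hat{f}(S)| >= threshold.
            xs = [[rng.randint(0, 1) for _ in range(n)] for _ in range(samples)]
            fvals = [f(x) for x in xs]
            heavy = {}
            for d in range(degree + 1):
                for S in itertools.combinations(range(n), d):
                    est = sum(v * (-1) ** sum(x[i] for i in S)
                              for x, v in zip(xs, fvals)) / samples
                    if abs(est) >= threshold:
                        heavy[S] = est
            return heavy

        # Example: f = AND of the first two bits, written with range {-1,+1}.
        f = lambda x: 1 if (x[0] and x[1]) else -1
        print(estimate_heavy_low_degree_coeffs(f, n=4, degree=2, threshold=0.25))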

    DNF Sparsification and a Faster Deterministic Counting Algorithm

    Given a DNF formula $f$ on $n$ variables, the two natural size measures are the number of terms, or size $s(f)$, and the maximum width of a term, $w(f)$. It is folklore that short DNF formulas can be made narrow. We prove a converse, showing that narrow formulas can be sparsified. More precisely, any width-$w$ DNF, irrespective of its size, can be $\epsilon$-approximated by a width-$w$ DNF with at most $(w \log(1/\epsilon))^{O(w)}$ terms. We combine our sparsification result with the work of Luby and Velickovic to give a faster deterministic algorithm for approximately counting the number of satisfying assignments of a DNF. Given a formula on $n$ variables with $\mathrm{poly}(n)$ terms, we give a deterministic $n^{\tilde{O}(\log\log n)}$-time algorithm that computes an additive $\epsilon$-approximation to the fraction of satisfying assignments of $f$ for $\epsilon = 1/\mathrm{poly}(\log n)$. The previous best result, due to Luby and Velickovic from nearly two decades ago, had a run-time of $n^{\exp(O(\sqrt{\log\log n}))}$. Comment: To appear in the IEEE Conference on Computational Complexity, 2012.
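
    To make the counted quantity concrete, the sketch below computes the exact fraction of satisfying assignments of a small DNF by brute-force enumeration over all $2^n$ inputs; the point of the paper is to approximate this fraction deterministically without such exponential enumeration. The representation and names are illustrative only.

        from itertools import product

        # A term is a dict {variable index: required value (0 or 1)}; a DNF is a
        # list of terms.  f(x) = 1 iff some term is satisfied.
        def dnf_satisfied(terms, x):
            return any(all(x[i] == v for i, v in t.items()) for t in terms)

        def satisfying_fraction(terms, n):
            # Exact fraction of satisfying assignments by enumeration (2^n work);
            # only for illustration on small n.
            count = sum(dnf_satisfied(terms, x) for x in product((0, 1), repeat=n))
            return count / 2 ** n

        # Width-2 DNF on 4 variables: (x0 AND NOT x1) OR (x2 AND x3).
        terms = [{0: 1, 1: 0}, {2: 1, 3: 1}]
        print(satisfying_fraction(terms, n=4))   # 7/16 = 0.4375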

    Learning pseudo-Boolean k-DNF and Submodular Functions

    We prove that any submodular function $f\colon \{0,1\}^n \to \{0,1,\ldots,k\}$ can be represented as a pseudo-Boolean $2k$-DNF formula. Pseudo-Boolean DNFs are a natural generalization of the DNF representation to functions with integer range. Each term in such a formula has an associated integral constant. We show that an analog of Håstad's switching lemma holds for pseudo-Boolean $k$-DNFs if all constants associated with the terms of the formula are bounded. This allows us to generalize Mansour's PAC-learning algorithm for $k$-DNFs to pseudo-Boolean $k$-DNFs, and hence gives a PAC-learning algorithm with membership queries under the uniform distribution for submodular functions of the form $f\colon \{0,1\}^n \to \{0,1,\ldots,k\}$. Our algorithm runs in time polynomial in $n$, $k^{O(k \log k / \epsilon)}$, $1/\epsilon$ and $\log(1/\delta)$, and works even in the agnostic setting. The line of previous work on learning submodular functions [Balcan, Harvey (STOC '11); Gupta, Hardt, Roth, Ullman (STOC '11); Cheraghchi, Klivans, Kothari, Lee (SODA '12)] implies only $n^{O(k)}$ query complexity for learning submodular functions in this setting, for fixed $\epsilon$ and $\delta$. Our learning algorithm implies a property tester for submodularity of functions $f\colon \{0,1\}^n \to \{0,\ldots,k\}$ with query complexity polynomial in $n$ for $k = O((\log n / \log\log n)^{1/2})$ and constant proximity parameter $\epsilon$.
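
    The sketch below evaluates a pseudo-Boolean DNF under the assumption, made only for this illustration and not quoted from the paper, that the value of the formula on an input is the largest constant among its satisfied terms, and 0 if no term is satisfied.

        # A pseudo-Boolean DNF term: (constant a, positive literal indices,
        # negative literal indices).  Assumed semantics for this sketch: the
        # formula's value on x is the maximum constant over satisfied terms,
        # and 0 if no term is satisfied.
        def eval_pseudo_boolean_dnf(terms, x):
            vals = [a for (a, pos, neg) in terms
                    if all(x[i] for i in pos) and not any(x[i] for i in neg)]
            return max(vals, default=0)

        # Example: a pseudo-Boolean 2-DNF with integer constants.
        terms = [(3, {0, 1}, set()),      # value 3 when x0 = x1 = 1
                 (1, {2}, {0})]           # value 1 when x2 = 1 and x0 = 0
        print(eval_pseudo_boolean_dnf(terms, [1, 1, 0]))  # 3
        print(eval_pseudo_boolean_dnf(terms, [0, 0, 1]))  # 1
        print(eval_pseudo_boolean_dnf(terms, [0, 0, 0]))  # 0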

    A Proposed Algorithm Toward Uniform-distribution Monotone DNF Learning

    In 1984 Valiant introduced the distribution-independent model of Probably Approximately Correct (PAC) learning from random examples and raised the question of whether polynomial-size DNF functions are PAC learnable in polynomial time. For roughly twenty years, the DNF learning problem has been widely regarded as one of the most important, and most challenging, open questions in Computational Learning Theory. We consider a related but simpler question: are polynomial-size monotone DNF functions PAC learnable in polynomial time if examples of the function are generated uniformly at random? Our research develops an algorithm that we hope learns a monotone DNF in polynomial time by using Threshold Function Hypotheses. We tested it on several interesting cases and obtained encouraging results. However, further testing revealed other cases for which the algorithm appears to fail. Some ideas for addressing these problem cases will be discussed.
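
    As a rough illustration of what a threshold function hypothesis can look like and how its agreement with a target monotone DNF can be estimated under the uniform distribution, consider the Python sketch below; the variable set, threshold, and sample size are arbitrary, and this is not the proposed algorithm itself.

        import random

        def threshold_hypothesis(S, theta):
            # h(x) = 1 iff at least theta of the variables indexed by S equal 1.
            return lambda x: int(sum(x[i] for i in S) >= theta)

        def agreement_under_uniform(h, f, n, m=20000, rng=random.Random(0)):
            # Estimate Pr_x[h(x) = f(x)] under the uniform distribution on {0,1}^n.
            hits = 0
            for _ in range(m):
                x = [rng.randint(0, 1) for _ in range(n)]
                hits += h(x) == f(x)
            return hits / m

        # Target: the monotone DNF  (x0 AND x1) OR (x2 AND x3).
        f = lambda x: int((x[0] and x[1]) or (x[2] and x[3]))
        h = threshold_hypothesis(S=[0, 1, 2, 3], theta=3)
        print(agreement_under_uniform(h, f, n=4))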

    A composition theorem for the Fourier Entropy-Influence conjecture

    The Fourier Entropy-Influence (FEI) conjecture of Friedgut and Kalai [FK96] seeks to relate two fundamental measures of Boolean function complexity: it states that $H[f] \leq C \cdot \mathrm{Inf}[f]$ holds for every Boolean function $f$, where $H[f]$ denotes the spectral entropy of $f$, $\mathrm{Inf}[f]$ is its total influence, and $C > 0$ is a universal constant. Despite significant interest in the conjecture, it has only been shown to hold for a few classes of Boolean functions. Our main result is a composition theorem for the FEI conjecture. We show that if $g_1, \ldots, g_k$ are functions over disjoint sets of variables satisfying the conjecture, and if the Fourier transform of $F$ taken with respect to the product distribution with biases $\mathbb{E}[g_1], \ldots, \mathbb{E}[g_k]$ satisfies the conjecture, then their composition $F(g_1(x^1), \ldots, g_k(x^k))$ satisfies the conjecture. As an application we show that the FEI conjecture holds for read-once formulas over arbitrary gates of bounded arity, extending a recent result [OWZ11] which proved it for read-once decision trees. Our techniques also yield an explicit function with the largest known ratio of $C \geq 6.278$ between $H[f]$ and $\mathrm{Inf}[f]$, improving on the previous lower bound of $4.615$.
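
    Both quantities in the conjecture are easy to compute exactly for small functions. The Python sketch below enumerates the Fourier coefficients of the 3-bit majority and evaluates its spectral entropy, taken here as $H[f] = \sum_S \hat{f}(S)^2 \log_2(1/\hat{f}(S)^2)$, and its total influence $\mathrm{Inf}[f] = \sum_S |S| \, \hat{f}(S)^2$; this is a numerical illustration of the statement, not part of the paper's proof.

        import itertools, math

        def fourier_coefficients(f, n):
            # Exact Fourier coefficients hat{f}(S) = E_x[ f(x) * chi_S(x) ] for
            # f: {0,1}^n -> {-1,+1}, by enumerating all 2^n inputs.
            xs = list(itertools.product((0, 1), repeat=n))
            coeffs = {}
            for d in range(n + 1):
                for S in itertools.combinations(range(n), d):
                    coeffs[S] = sum(f(x) * (-1) ** sum(x[i] for i in S)
                                    for x in xs) / 2 ** n
            return coeffs

        def spectral_entropy(coeffs):
            # H[f] = sum_S hat{f}(S)^2 * log2(1 / hat{f}(S)^2)
            return sum(c * c * math.log2(1 / (c * c))
                       for c in coeffs.values() if c != 0)

        def total_influence(coeffs):
            # Inf[f] = sum_S |S| * hat{f}(S)^2
            return sum(len(S) * c * c for S, c in coeffs.items())

        # Majority on 3 bits, with range {-1,+1}.
        maj3 = lambda x: 1 if sum(x) >= 2 else -1
        c = fourier_coefficients(maj3, 3)
        print(spectral_entropy(c), total_influence(c))  # FEI asserts H[f] <= C * Inf[f]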

    Proper Learning Algorithm for Functions of k Terms under Smooth Distributions

    In this paper, we introduce a probability distribution, called a smooth distribution, which is a generalization of variants of the uniform distribution such as the $q$-bounded distribution and the product distribution. Then, we give an algorithm that, under the smooth distribution, properly learns the class of functions of $k$ terms, given as $F_k \circ T^n_k = \{ g(f_1(v), \ldots, f_k(v)) \mid g \in F_k,\ f_1, \ldots, f_k \in T_n \}$, in polynomial time for constant $k$, where $F_k$ is the class of all Boolean functions of $k$ variables and $T_n$ is the class of terms over $n$ variables. Although the class $F_k \circ T^n_k$ was shown by Blum and Singh to be learnable using DNF as the hypothesis class, it has remained open whether it is properly learnable in a distribution-free setting.
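
    A minimal sketch of the function class itself: composing an arbitrary $k$-ary Boolean function $g$ with $k$ terms $f_1, \ldots, f_k$. The encoding below (a term as a pair of positive and negative literal index sets) is one convenient choice and is not taken from the paper.

        # A term over n Boolean variables: (positive literal indices, negative
        # literal indices).  The class F_k o T^n_k composes an arbitrary k-ary
        # Boolean function g with k terms f_1, ..., f_k.
        def eval_term(pos, neg, x):
            return int(all(x[i] for i in pos) and not any(x[i] for i in neg))

        def compose(g, terms):
            # Return the function v -> g(f_1(v), ..., f_k(v)).
            return lambda x: g(*(eval_term(pos, neg, x) for pos, neg in terms))

        # k = 2: g is XOR, the two terms are (x0 AND x1) and (x2 AND NOT x3).
        g = lambda a, b: a ^ b
        h = compose(g, [({0, 1}, set()), ({2}, {3})])
        print(h([1, 1, 1, 0]))  # XOR(1, 1) = 0
        print(h([1, 1, 0, 0]))  # XOR(1, 0) = 1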

    Agnostically Learning Boolean Functions with Finite Polynomial Representation

    Agnostic learning is an extremely hard task in computational learning theory. In this paper we revisit the results of [Kalai et al., SIAM J. Comput. 2008] on agnostically learning Boolean functions with finite polynomial representation, as well as functions that can be approximated by the former. An example of the former is the class of all Boolean low-degree polynomials. For the former, [Kalai et al., SIAM J. Comput. 2008] introduced the $\ell_1$-polynomial regression method to learn them to error $\mathrm{opt} + \epsilon$. We present a simple instantiation of one step in the method and give the corresponding analysis. Moreover, we show that even ignoring this step still yields a learning result with error $2\,\mathrm{opt} + \epsilon$. We then apply the result on learning concept classes that can be approximated by the former to learn richer specific classes. Our result is that the class of $s$-term DNF formulae can be agnostically learned to error $\mathrm{opt} + \epsilon$ with respect to arbitrary distributions, for any $\epsilon$, in time $\mathrm{poly}(n^d, 1/\epsilon)$, where $d = O(\sqrt{n} \cdot s \cdot \log s \cdot \log^2(1/\epsilon))$.
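
    The sketch below illustrates the regression step, assuming low-degree $\pm 1$ parity features and solving the $\ell_1$ fit as a linear program with scipy; the final hypothesis simply takes the sign of the fitted polynomial, which is a simplification of the thresholding used by Kalai et al. Names, sample sizes, and the toy target are illustrative.

        import itertools
        import numpy as np
        from scipy.optimize import linprog

        def monomial_features(X, degree):
            # Map {0,1}-valued inputs to all parity features of size <= degree,
            # chi_S(x) = (-1)^(sum of x_i over i in S).  X: (N, n) array.
            N, n = X.shape
            cols = []
            for d in range(degree + 1):
                for S in itertools.combinations(range(n), d):
                    cols.append((-1.0) ** X[:, list(S)].sum(axis=1))
            return np.column_stack(cols)

        def l1_polynomial_regression(X, y, degree):
            # Solve min_w sum_i |<w, phi(x_i)> - y_i| as a linear program;
            # y takes values in {-1, +1}.  Illustrative sketch of the regression
            # step only; the agnostic guarantees come from the paper's analysis.
            A = monomial_features(X, degree)
            N, m = A.shape
            # LP variables: [w (m entries), t (N entries)]; minimize sum(t)
            # subject to |A w - y| <= t elementwise.
            c = np.concatenate([np.zeros(m), np.ones(N)])
            A_ub = np.block([[A, -np.eye(N)], [-A, -np.eye(N)]])
            b_ub = np.concatenate([y, -y])
            bounds = [(None, None)] * m + [(0, None)] * N
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
            w = res.x[:m]
            # Predict with the sign of the fitted polynomial (a simplification
            # of the thresholding/rounding step in the paper).
            return lambda Xnew: np.sign(monomial_features(Xnew, degree) @ w)

        # Tiny demo: noisy labels of a 2-term DNF over 4 variables.
        rng = np.random.default_rng(0)
        X = rng.integers(0, 2, size=(300, 4))
        clean = ((X[:, 0] & X[:, 1]) | (X[:, 2] & X[:, 3])) * 2 - 1   # in {-1,+1}
        noise = rng.random(300) < 0.1
        y = np.where(noise, -clean, clean).astype(float)
        h = l1_polynomial_regression(X, y, degree=2)
        print((h(X) == clean).mean())   # agreement with the clean labels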