
    Efficient enumeration of solutions produced by closure operations

    In this paper we address the problem of generating all elements obtained by the saturation of an initial set by some operations. More precisely, we prove that we can generate the closure of a Boolean relation (a set of Boolean vectors) by polymorphisms with polynomial delay. Therefore we can compute with polynomial delay the closure of a family of sets by any set of "set operations" (union, intersection, symmetric difference, subsets, supersets, \dots). To do so, we study the $\mathrm{Membership}_{\mathcal{F}}$ problem: for a set of operations $\mathcal{F}$, decide whether an element belongs to the closure by $\mathcal{F}$ of a family of elements. In the Boolean case, we prove that $\mathrm{Membership}_{\mathcal{F}}$ is in P for any set of Boolean operations $\mathcal{F}$. When the input vectors are over a domain larger than two elements, we prove that the generic enumeration method fails, since $\mathrm{Membership}_{\mathcal{F}}$ is NP-hard for some $\mathcal{F}$. We also study the problem of generating minimal or maximal elements of closures and prove that some of them are related to well-known enumeration problems such as the enumeration of the circuits of a matroid or the enumeration of maximal independent sets of a hypergraph. This article improves on previous works of the same authors. Comment: 30 pages, 1 figure. Long version of the article arXiv:1509.05623 of the same name which appeared in STACS 2016. Final version for DMTCS journal.
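
    To make the closure notion concrete, here is a minimal Python sketch: a naive fixed-point saturation of a family of sets under union and intersection. It is illustrative only and is not the paper's polynomial-delay enumeration algorithm; the function name and example family are invented for this sketch.

```python
# Naive saturation: repeatedly apply the chosen binary "set operations"
# to all pairs of known sets until no new set appears.  This can blow up
# exponentially; it only illustrates what "closure of a family" means.
def closure(family, operations):
    closed = set(family)
    changed = True
    while changed:
        changed = False
        for a in list(closed):
            for b in list(closed):
                for op in operations:
                    c = op(a, b)
                    if c not in closed:
                        closed.add(c)
                        changed = True
    return closed

if __name__ == "__main__":
    family = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]
    ops = [frozenset.union, frozenset.intersection]
    for s in sorted(closure(family, ops), key=sorted):
        print(sorted(s))
```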

    Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review

    The paper characterizes classes of functions for which deep learning can be exponentially better than shallow learning. Deep convolutional networks are a special case satisfying these conditions, though weight sharing is not the main reason for their exponential advantage.
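
    As a schematic illustration (in the spirit of the compositional function classes such reviews discuss, not a formula taken from the paper), a hierarchically compositional function over eight variables has the binary-tree form

```latex
f(x_1,\dots,x_8) \;=\; g_3\Bigl(g_2\bigl(g_1(x_1,x_2),\,g_1(x_3,x_4)\bigr),\; g_2\bigl(g_1(x_5,x_6),\,g_1(x_7,x_8)\bigr)\Bigr),
```

    where each constituent g_i depends on only two arguments. This is the kind of structure for which the review argues that a deep network, mirroring the tree, avoids the curse of dimensionality while a generic shallow approximation need not.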

    On the Power of Democratic Networks

    Linear Threshold Units (LTUs) are the basic processing components of artificial neural networks with Boolean activations. Quantization of their parameters is a central question in hardware implementation, when numerical technologies are used to store the configuration of the circuit. In previous studies on the circuit complexity of feedforward neural networks, no distinction was made between a network with ``small'' integer weights and one composed of majority units (LTUs with weights in {-1, 0, 1}), since any connection of integer weight w can be simulated by |w| connections of value sgn(w). This paper focuses on the circuit complexity of democratic networks, i.e. circuits of majority units with at most one connection between each pair of units. The main results presented are the following: any Boolean function can be computed by a depth-3 non-degenerate democratic network and can be expressed as a linear threshold function of majorities; AT-LEAST-k and AT-MOST-k are computable by depth-2, polynomial-size democratic networks; the smallest sizes of depth-2 circuits computing PARITY are identical for a democratic network and for a usual network; the VC dimension of the class of majority functions is n + 1, i.e. equal to that of the class of all linear threshold functions.
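
    The weight-replication argument mentioned above is easy to check mechanically. The following sketch (names and test cases are illustrative, not from the paper) compares an integer-weight LTU with the equivalent unit obtained by duplicating each input |w| times with weight sgn(w).

```python
# An integer-weight linear threshold unit (LTU) and its expansion into a
# unit whose weights all lie in {-1, 0, +1}.
def ltu(weights, threshold, x):
    return int(sum(w * xi for w, xi in zip(weights, x)) >= threshold)

def replicate(weights, x):
    """Expand each input |w| times with weight sgn(w)."""
    signs, inputs = [], []
    for w, xi in zip(weights, x):
        s = (w > 0) - (w < 0)          # sgn(w)
        signs.extend([s] * abs(w))
        inputs.extend([xi] * abs(w))
    return signs, inputs

if __name__ == "__main__":
    weights, threshold = [3, -2, 1], 1
    for x in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1), (0, 1, 1)]:
        s, xs = replicate(weights, x)
        assert ltu(weights, threshold, x) == ltu(s, threshold, xs)
    print("integer-weight LTU and its {-1,0,+1} expansion agree on all tests")
```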

    Combined optimization algorithms applied to pattern classification

    Accurate classification by minimizing the error on test samples is the main goal in pattern classification. Combinatorial optimization is a well-known method for solving minimization problems; however, only a few examples of classifiers are described in the literature where combinatorial optimization is used in pattern classification. Recently, there has been a growing interest in combining classifiers and improving the consensus of results for greater accuracy. In the light of the "No Free Lunch Theorems", we analyse the combination of simulated annealing, a powerful combinatorial optimization method that produces high quality results, with the classical perceptron algorithm. This combination is called the LSA machine. Our analysis aims at finding paradigms for problem-dependent parameter settings that ensure high classification results. Our computational experiments on a large number of benchmark problems lead to results that either outperform or are at least competitive with results published in the literature. Apart from parameter settings, our analysis focuses on a difficult problem in computation theory, namely the network complexity problem. The depth vs. size problem of neural networks is one of the hardest problems in theoretical computing, with very little progress over the past decades. In order to investigate this problem, we introduce a new recursive learning method for training hidden layers in constant-depth circuits. Our findings make contributions to a) the field of Machine Learning, as the proposed method is applicable in training feedforward neural networks, and to b) the field of circuit complexity, by proposing an upper bound for the number of hidden units sufficient to achieve a high classification rate. One of the major findings of our research is that the size of the network can be bounded by the input size of the problem, with an approximate upper bound of 8 + √(2^n/n) threshold gates being sufficient for a small error rate, where n := log|S_L| and S_L is the training set.
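
    The core idea of pairing simulated annealing with a perceptron-style classifier can be sketched as follows. This is a schematic illustration under assumed parameter names and a toy dataset, not the thesis's exact LSA machine: an annealer searches weight space to reduce the 0/1 training error, which it can do even when the data are not linearly separable.

```python
# Schematic: simulated annealing over the weights of a linear threshold
# classifier, minimising the number of misclassified training samples.
import math
import random

def errors(w, b, data):
    """0/1 training error of a linear threshold classifier with labels in {-1,+1}."""
    return sum(1 for x, y in data
               if (1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1) != y)

def anneal_perceptron(data, dim, steps=5000, t0=2.0, alpha=0.999, seed=0):
    rng = random.Random(seed)
    w, b = [0.0] * dim, 0.0
    best = (errors(w, b, data), list(w), b)
    t = t0
    for _ in range(steps):
        # Propose a small random perturbation of one weight (or the bias).
        i = rng.randrange(dim + 1)
        delta = rng.gauss(0.0, 0.5)
        w2, b2 = list(w), b
        if i < dim:
            w2[i] += delta
        else:
            b2 += delta
        d = errors(w2, b2, data) - errors(w, b, data)
        # Metropolis rule: accept improvements, accept worse moves with prob exp(-d/t).
        if d <= 0 or rng.random() < math.exp(-d / t):
            w, b = w2, b2
            if errors(w, b, data) < best[0]:
                best = (errors(w, b, data), list(w), b)
        t *= alpha
    return best

if __name__ == "__main__":
    data = [((0.0, 0.1), -1), ((0.2, 0.0), -1), ((1.0, 0.9), 1),
            ((0.9, 1.1), 1), ((0.1, 1.0), 1), ((1.0, 0.0), -1)]
    err, w, b = anneal_perceptron(data, dim=2)
    print("training errors:", err, "weights:", w, "bias:", b)
```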

    Approximate degree in classical and quantum computing

    In this book, the authors survey what is known about a particularly natural notion of approximation by polynomials, capturing pointwise approximation over the real numbers. Funding: FG-2022-18482 - Alfred P. Sloan Foundation; CNS-2046425 - National Science Foundation; CCF-1947889 - National Science Foundation. Accepted manuscript.
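
    For reference, the central quantity here is the ε-approximate degree of a Boolean function f : {0,1}^n → {0,1}, which under the standard definition is

```latex
\widetilde{\deg}_{\varepsilon}(f) \;=\; \min\bigl\{\deg p \;:\; p \in \mathbb{R}[x_1,\dots,x_n],\ \ |p(x)-f(x)| \le \varepsilon \ \text{for all } x \in \{0,1\}^n\bigr\},
```

    with ε = 1/3 the conventional default; for example, the OR function on n bits has approximate degree Θ(√n).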

    Signal Perceptron: On the Identifiability of Boolean Function Spaces and Beyond

    In a seminal book, Minsky and Papert define the perceptron as a limited implementation of what they called “parallel machines.” They showed that some binary Boolean functions, including XOR, are not definable by a single-layer perceptron because of its limited capacity to learn only linearly separable functions. In this work, we propose a new, more powerful implementation of such parallel machines. This new mathematical tool is defined using analytic sinusoids, instead of linear combinations, to form an analytic signal representation of the function that we want to learn. We show that this reformulated parallel mechanism can learn, with a single layer, any non-linear k-ary Boolean function. Finally, to provide an example of its practical applications, we show that it outperforms the single-hidden-layer multilayer perceptron in both Boolean function learning and image classification tasks, while also being faster and requiring fewer parameters.
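
    A tiny illustration of why sinusoidal features help (not necessarily the paper's exact parameterisation): a single sinusoid of the inputs reproduces XOR exactly, whereas no single linear threshold of x1 and x2 can.

```python
# cos(pi*(x1+x2)) is +1 when x1+x2 is even and -1 when it is odd,
# so (1 - cos(pi*(x1+x2))) / 2 equals XOR(x1, x2) on {0,1}^2.
import math

def xor_via_sinusoid(x1, x2):
    return round((1 - math.cos(math.pi * (x1 + x2))) / 2)

for x1 in (0, 1):
    for x2 in (0, 1):
        assert xor_via_sinusoid(x1, x2) == (x1 ^ x2)
print("single sinusoid reproduces XOR on all four inputs")
```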