
    Counting the learnable functions of structured data

    Cover's function counting theorem is a milestone in the theory of artificial neural networks. It answers the fundamental question of how many binary assignments (dichotomies) of $p$ points in $n$ dimensions can be linearly realized. Regrettably, it has proved hard to extend the same approach to problems more advanced than the classification of points. In particular, an emerging necessity is to find methods that deal with structured data, and specifically with non-pointlike patterns. A prominent case is invariant recognition, whereby identification of a stimulus is insensitive to irrelevant transformations of the inputs (such as rotations or changes of perspective in an image). An object is therefore represented by an extended perceptual manifold, consisting of inputs that are classified alike. Here we develop a function counting theory for structured data of this kind by extending Cover's combinatorial technique, and we derive analytical expressions for the average number of dichotomies of generically correlated sets of patterns. As an application, we obtain a closed formula for the capacity of a binary classifier trained to distinguish general polytopes of any dimension. These results may help extend our theoretical understanding of generalization, feature extraction, and invariant object recognition by neural networks.
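    For orientation, the classical result being extended here is standard and worth stating: for $p$ points in general position in $\mathbb{R}^n$, Cover's theorem counts the linearly realizable dichotomies as

        $$C(p, n) = 2 \sum_{k=0}^{n-1} \binom{p-1}{k},$$

    so every one of the $2^p$ dichotomies is achievable while $p \le n + 1$, and exactly half are achievable at $p = 2n$, the perceptron's storage capacity. This is background to the abstract above, not the paper's new formula for structured patterns.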

    Beyond the storage capacity: data driven satisfiability transition

    Data structure has a dramatic impact on the properties of neural networks, yet its significance within established theoretical frameworks is poorly understood. Here we compute the Vapnik–Chervonenkis entropy of a kernel machine operating on data grouped into equally labelled subsets. In contrast with the unstructured scenario, the entropy is non-monotonic in the size of the training set and displays an additional critical point besides the storage capacity. Remarkably, the same behavior occurs in margin classifiers even with randomly labelled data, as is elucidated by identifying the synaptic volume encoding the transition. These findings reveal aspects of expressivity lying beyond the condensed description provided by the storage capacity, and they indicate a path towards more realistic bounds on the generalization error of neural networks.
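    For context, the annealed Vapnik–Chervonenkis entropy referenced above is conventionally defined from the average number of dichotomies realizable on a sample of size $p$, $H(p) = \ln \langle N(\xi^1, \ldots, \xi^p) \rangle$; in the unstructured setting, Cover's count gives $\langle N \rangle = C(p, n)$, which is monotonically increasing in $p$. Against that baseline, the non-monotonicity reported here for grouped data is what produces the extra critical point. (This is the standard convention; the paper's normalization may differ.)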

    Enumeration of strong dichotomy patterns

    We apply the version of Pólya–Redfield theory obtained by White, which counts patterns with a given automorphism group, to the enumeration of strong dichotomy patterns; that is, we count bicolor patterns of $\mathbb{Z}_{2k}$ with respect to the action of $\mathrm{Aff}(\mathbb{Z}_{2k})$ that have trivial isotropy group. As a byproduct, we propose a conjectural instance of a phenomenon similar to cyclic sieving for special cases of these combinatorial objects.
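    To make the counting concrete, below is a minimal Python sketch of plain Burnside/Pólya orbit counting, the machinery underlying White's refinement: it counts all bicolor patterns of $\mathbb{Z}_m$ under $\mathrm{Aff}(\mathbb{Z}_m)$ by averaging $2^{\#\mathrm{cycles}(g)}$ over the group. It deliberately omits the trivial-isotropy restriction that defines strong dichotomy patterns, and the function name is illustrative, not taken from the paper.

        from math import gcd

        def affine_bicolor_patterns(m):
            """Count 2-colorings of Z_m up to Aff(Z_m) = {x -> a*x + b : gcd(a, m) = 1},
            via Burnside's lemma: the orbit count is the group average of 2**cycles(g)."""
            units = [a for a in range(1, m) if gcd(a, m) == 1]
            group = [(a, b) for a in units for b in range(m)]
            total = 0
            for a, b in group:
                # count the cycles of the permutation x -> (a*x + b) mod m
                seen = [False] * m
                cycles = 0
                for x in range(m):
                    if not seen[x]:
                        cycles += 1
                        y = x
                        while not seen[y]:
                            seen[y] = True
                            y = (a * y + b) % m
                total += 2 ** cycles
            return total // len(group)  # Burnside guarantees exact divisibility

        print(affine_bicolor_patterns(12))  # patterns of Z_12 up to affine symmetry

    Restricting to patterns with trivial isotropy group, as the paper does, requires White's inclusion-exclusion over subgroups and is not attempted in this sketch.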

    Antichains and counterpoint dichotomies

    We construct a special type of antichain (i.e., a family of subsets of a set such that no subset is contained in another) using group-theoretical considerations, and we obtain an upper bound on the cardinality of such an antichain. We apply the result to bound the number of strong counterpoint dichotomies up to affine isomorphism.
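    As a classical point of comparison (not the bound derived in the paper), Sperner's theorem caps any antichain in the power set of an $n$-element set at $\binom{n}{\lfloor n/2 \rfloor}$; the group-theoretic bound sketched in the abstract is instead adapted to the affine symmetry of the counterpoint setting.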