    Bounding Embeddings of VC Classes into Maximum Classes

    One of the earliest conjectures in computational learning theory, the Sample Compression conjecture, asserts that concept classes (equivalently, set systems) admit compression schemes of size linear in their VC dimension. To date this statement is known to be true for maximum classes, those that possess maximum cardinality for their VC dimension. The most promising approach to positively resolving the conjecture is to embed general VC classes into maximum classes without a superlinear increase in their VC dimension, as such embeddings would extend the known compression schemes to all VC classes. We show that maximum classes can be characterised by a local-connectivity property of the graph obtained by viewing the class as a cubical complex. This geometric characterisation of maximum VC classes is applied to prove a negative embedding result: there are VC-d classes that cannot be embedded in any maximum class of VC dimension lower than 2d. On the other hand, we show that every VC-d class C embeds in a VC-(d+D) maximum class, where D is the deficiency of C, i.e., the difference between the cardinalities of a maximum VC-d class and of C. For VC-2 classes in binary n-cubes for 4 <= n <= 6, we give best-possible results on embedding into maximum classes. For some special classes of Boolean functions, relationships with maximum classes are investigated. Finally, we give a general recursive procedure for embedding VC-d classes into VC-(d+k) maximum classes for the smallest such k.

    Comment: 22 pages, 2 figures
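
    As a concrete illustration of the quantities in this abstract, the following minimal Python sketch (function names and the toy class are our own, not the paper's) brute-forces the VC dimension of a small class given as binary tuples and computes its deficiency against the Sauer-Shelah bound $\sum_{i=0}^{d} \binom{n}{i}$, which maximum classes attain with equality.

        from itertools import combinations
        from math import comb

        def vc_dimension(concepts, n):
            """Largest d such that some d-subset of the n coordinates is shattered."""
            d = 0
            for size in range(1, n + 1):
                if any(len({tuple(c[i] for i in S) for c in concepts}) == 2 ** size
                       for S in combinations(range(n), size)):
                    d = size
            return d

        def deficiency(concepts, n):
            """Gap between the Sauer-Shelah bound and |C|; zero iff C is maximum."""
            d = vc_dimension(concepts, n)
            return sum(comb(n, i) for i in range(d + 1)) - len(set(concepts))

        # Toy example: the empty set plus all singletons over 3 points form a
        # maximum class of VC dimension 1 (it meets the bound 1 + 3 = 4 exactly).
        C = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
        print(vc_dimension(C, 3), deficiency(C, 3))  # -> 1 0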

    Unlabeled sample compression schemes and corner peelings for ample and maximum classes

    We examine connections between combinatorial notions that arise in machine learning and topological notions in cubical/simplicial geometry. These connections make it possible to export results from geometry to machine learning. Our first main result is based on a geometric construction by Tracy Hall (2004) of a partial shelling of the cross-polytope which cannot be extended. We use it to derive a maximum class of VC dimension 3 that has no corners. This refutes several previous works in machine learning from the past 11 years. In particular, it implies that all previous constructions of optimal unlabeled sample compression schemes for maximum classes are erroneous. On the positive side, we present a new construction of an unlabeled sample compression scheme for maximum classes. We leave open whether our unlabeled sample compression scheme extends to ample (a.k.a. lopsided or extremal) classes, which represent a natural and far-reaching generalization of maximum classes. Towards resolving this question, we provide a geometric characterization in terms of unique sink orientations of the 1-skeletons of the associated cubical complexes.
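
    Ample classes admit a simple combinatorial test that a short sketch can make concrete (this is standard background, not the paper's construction): by Pajor's lemma the number of sets shattered by a class is at least $|C|$, with equality characterizing ample classes. Names below are illustrative, and the brute force is only feasible for tiny classes.

        from itertools import combinations

        def shattered_sets(concepts, n):
            """All coordinate subsets (including the empty set) shattered by the class."""
            shattered = [()]  # the empty set is shattered by any nonempty class
            for size in range(1, n + 1):
                for S in combinations(range(n), size):
                    if len({tuple(c[i] for i in S) for c in concepts}) == 2 ** size:
                        shattered.append(S)
            return shattered

        def is_ample(concepts, n):
            """Pajor: |shattered sets| >= |C| always; equality iff the class is ample."""
            return len(shattered_sets(concepts, n)) == len(set(concepts))

        print(is_ample([(0, 0), (1, 0), (0, 1)], 2))  # True: shatters {}, {0}, {1}
        print(is_ample([(0, 0), (1, 1)], 2))          # False: 3 shattered sets, |C| = 2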

    Optimal Collusion-Free Teaching

    Formal models of learning from teachers need to respect certain criteria to avoid collusion. The most commonly accepted notion of collusion-freeness was proposed by Goldman and Mathias (1996), and various teaching models obeying their criterion have been studied. For each model $M$ and each concept class $\mathcal{C}$, a parameter $M$-$\mathrm{TD}(\mathcal{C})$ refers to the teaching dimension of concept class $\mathcal{C}$ in model $M$, defined to be the number of examples required for teaching a concept, in the worst case over all concepts in $\mathcal{C}$. This paper introduces a new model of teaching, called no-clash teaching, together with the corresponding parameter $\mathrm{NCTD}(\mathcal{C})$. No-clash teaching is provably optimal in the strong sense that, given any concept class $\mathcal{C}$ and any model $M$ obeying Goldman and Mathias's collusion-freeness criterion, one obtains $\mathrm{NCTD}(\mathcal{C}) \le M$-$\mathrm{TD}(\mathcal{C})$. We also study a corresponding notion $\mathrm{NCTD}^+$ for the case of learning from positive data only, establish useful bounds on $\mathrm{NCTD}$ and $\mathrm{NCTD}^+$, and discuss relations of these parameters to the VC dimension and to sample compression. In addition to formulating an optimal model of collusion-free teaching, our main results are on the computational complexity of deciding whether $\mathrm{NCTD}^+(\mathcal{C}) = k$ (or $\mathrm{NCTD}(\mathcal{C}) = k$) for given $\mathcal{C}$ and $k$. We show some such decision problems to be equivalent to the existence question for certain constrained matchings in bipartite graphs. Our NP-hardness results for the latter are of independent interest in the study of constrained graph matchings.
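
    For intuition about the parameters being compared, the sketch below brute-forces the classical worst-case teaching dimension of a tiny class, i.e., the $M$-$\mathrm{TD}$-style quantity for the standard teaching model. It is an illustrative assumption on our part, not the paper's algorithm, and it does not compute $\mathrm{NCTD}$, which requires the no-clash machinery.

        from itertools import combinations

        def teaching_dimension(concepts, n):
            """Classic TD: for each concept, the size of the smallest labeled-example
            set consistent with it and no other concept; TD is the worst case."""
            concepts = [tuple(c) for c in concepts]
            worst = 0
            for target in concepts:
                for size in range(n + 1):
                    # S (with target's labels) distinguishes target iff every
                    # other concept disagrees with target somewhere on S.
                    if any(all(any(c[i] != target[i] for i in S)
                               for c in concepts if c != target)
                           for S in combinations(range(n), size)):
                        worst = max(worst, size)
                        break
            return worst

        # The empty concept can only be taught with all three negative examples,
        # so the worst case over this class is 3.
        C = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
        print(teaching_dimension(C, 3))  # -> 3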

    An Overview of Machine Teaching

    In this paper we try to organize machine teaching as a coherent set of ideas. Each idea is presented as varying along a dimension. The collection of dimensions then forms the problem space of machine teaching, such that existing teaching problems can be characterized in this space. We hope this organization allows us to gain a deeper understanding of individual teaching problems, discover connections among them, and identify gaps in the field.

    Comment: A tutorial document grown out of the NIPS 2017 Workshop on Teaching Machines, Robots, and Humans

    Sign rank versus VC dimension

    This work studies the maximum possible sign rank of $N \times N$ sign matrices with a given VC dimension $d$. For $d = 1$, this maximum is three. For $d = 2$, this maximum is $\tilde{\Theta}(N^{1/2})$. For $d > 2$, similar but slightly less accurate statements hold. The lower bounds improve over previous ones by Ben-David et al., and the upper bounds are novel. The lower bounds are obtained by probabilistic constructions, using a theorem of Warren in real algebraic topology. The upper bounds are obtained using a result of Welzl about spanning trees with low stabbing number, and using the moment curve. The upper bound technique is also used to: (i) provide estimates on the number of classes of a given VC dimension, and the number of maximum classes of a given VC dimension, answering a question of Frankl from '89, and (ii) design an efficient algorithm that provides an $O(N/\log(N))$ multiplicative approximation for the sign rank. We also observe a general connection between sign rank and spectral gaps, which is based on Forster's argument. Consider the $N \times N$ adjacency matrix of a $\Delta$-regular graph with a second eigenvalue of absolute value $\lambda$ and $\Delta \le N/2$. We show that the sign rank of the signed version of this matrix is at least $\Delta/\lambda$. We use this connection to prove the existence of a maximum class $C \subseteq \{\pm 1\}^N$ with VC dimension $2$ and sign rank $\tilde{\Theta}(N^{1/2})$. This answers a question of Ben-David et al. regarding the sign rank of large VC classes. We also describe limitations of this approach, in the spirit of the Alon-Boppana theorem. We further describe connections to communication complexity, geometry, learning theory, and combinatorics.

    Comment: 33 pages. This is a revised version of the paper "Sign rank versus VC dimension". Additional results in this version: (i) estimates on the number of maximum VC classes (answering a question of Frankl from '89); (ii) estimates on the sign rank of large VC classes (answering a question of Ben-David et al. from '03); (iii) a discussion on the computational complexity of computing the sign rank.
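
    The spectral-gap connection stated at the end of the abstract lends itself to a quick numerical check. The sketch below (numpy assumed; the choice of the Petersen graph is ours) evaluates the claimed lower bound $\Delta/\lambda$ on the sign rank of the signed adjacency matrix for a small connected regular graph.

        import numpy as np

        def spectral_sign_rank_bound(A):
            """Delta / lambda, the abstract's lower bound on the sign rank of the
            signed adjacency matrix of a connected regular graph."""
            eigs = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
            delta, lam = eigs[0], eigs[1]  # degree, second-largest |eigenvalue|
            return delta / lam

        # Petersen graph: 3-regular on 10 vertices, eigenvalues {3, 1, -2},
        # so the bound evaluates to 3/2.
        n = 10
        A = np.zeros((n, n))
        for i in range(5):
            for j in ((i + 1) % 5, i + 5):   # outer 5-cycle and spokes
                A[i, j] = A[j, i] = 1
            k = 5 + (i + 2) % 5              # inner pentagram
            A[5 + i, k] = A[k, 5 + i] = 1
        print(spectral_sign_rank_bound(A))   # -> 1.5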