
    Unlabeled sample compression schemes and corner peelings for ample and maximum classes

    We examine connections between combinatorial notions that arise in machine learning and topological notions in cubical/simplicial geometry. These connections enable us to export results from geometry to machine learning. Our first main result is based on a geometric construction by H. Tracy Hall (2004) of a partial shelling of the cross-polytope which cannot be extended. We use it to derive a maximum class of VC dimension 3 that has no corners. This refutes several previous works in machine learning from the past 11 years. In particular, it implies that all previous constructions of optimal unlabeled sample compression schemes for maximum classes are erroneous. On the positive side, we present a new construction of an optimal unlabeled sample compression scheme for maximum classes. We leave as open whether our unlabeled sample compression scheme extends to ample (a.k.a. lopsided or extremal) classes, which represent a natural and far-reaching generalization of maximum classes. Towards resolving this question, we provide a geometric characterization in terms of unique sink orientations of the 1-skeletons of the associated cubical complexes.
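
    As a concrete illustration of the notions above (not code from the paper; the function names are ours), the following Python sketch checks by brute force whether a finite concept class is a maximum class, i.e., whether its size meets the Sauer-Shelah bound with equality for its VC dimension.

```python
from itertools import combinations
from math import comb

def shatters(concepts, subset):
    """The class shatters `subset` if every labeling of it is realized."""
    patterns = {tuple(x in c for x in subset) for c in concepts}
    return len(patterns) == 2 ** len(subset)

def vc_dimension(concepts, domain):
    """Largest size of a shattered subset of the domain (brute force)."""
    d = 0
    for k in range(1, len(domain) + 1):
        if any(shatters(concepts, s) for s in combinations(domain, k)):
            d = k
    return d

def is_maximum_class(concepts, domain):
    """Maximum class: |C| equals the Sauer-Shelah bound sum_{i<=d} C(n, i),
    where d is the VC dimension and n = |domain|."""
    n, d = len(domain), vc_dimension(concepts, domain)
    return len(concepts) == sum(comb(n, i) for i in range(d + 1))

# Toy example: all subsets of size at most 1 of a 4-element domain form a
# maximum class of VC dimension 1 (5 = C(4,0) + C(4,1) concepts).
domain = [0, 1, 2, 3]
concepts = [frozenset()] + [frozenset({x}) for x in domain]
print(vc_dimension(concepts, domain), is_maximum_class(concepts, domain))
```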

    Boundary rigidity of finite CAT(0) cube complexes

    In this note, we prove that finite CAT(0) cube complexes can be reconstructed from their boundary distances (computed in their 1-skeleta). This result was conjectured by Haslegrave, Scott, Tamitegama, and Tan (2023). The reconstruction of a finite cell complex from its boundary distances is the discrete version of the boundary rigidity problem, a classical problem from Riemannian geometry. In the proofs, we use the bijection between finite CAT(0) cube complexes and median graphs, as well as corner peelings of median graphs.
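
    To make the median-graph side of this bijection concrete, here is a small illustrative Python sketch (ours, with hypothetical helper names, not code from the paper) that checks the defining property of median graphs by brute force: every triple of vertices has a unique median, i.e., a unique vertex lying on shortest paths between each pair.

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    """Single-source shortest-path distances in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def is_median_graph(adj):
    """Check that every triple u, v, w has exactly one median vertex m,
    i.e., a vertex on shortest paths between each of the three pairs."""
    dist = {v: bfs_distances(adj, v) for v in adj}
    for u, v, w in combinations(adj, 3):
        medians = [m for m in adj
                   if dist[u][m] + dist[m][v] == dist[u][v]
                   and dist[v][m] + dist[m][w] == dist[v][w]
                   and dist[u][m] + dist[m][w] == dist[u][w]]
        if len(medians) != 1:
            return False
    return True

# The 4-cycle (the 1-skeleton of a square) is a median graph.
square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(is_median_graph(square))  # True
```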

    Sample Compression Schemes for Balls in Graphs

    One of the open problems in machine learning is whether every set family of VC-dimension d admits a sample compression scheme of size O(d). In this paper, we study this problem for balls in graphs. For balls of arbitrary radius r, we design proper sample compression schemes of size 4 for interval graphs, of size 6 for trees of cycles, and of size 22 for cube-free median graphs. We also design approximate sample compression schemes of size 2 for balls in δ-hyperbolic graphs.
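
    The concept class studied here is easy to generate explicitly for small graphs. The sketch below (ours; `ball` and `all_balls` are hypothetical helpers, not from the paper) enumerates all balls of a graph via breadth-first search.

```python
from collections import deque

def ball(adj, center, radius):
    """B_r(v): all vertices at distance at most r from the center v."""
    dist = {center: 0}
    queue = deque([center])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return frozenset(v for v, d in dist.items() if d <= radius)

def all_balls(adj):
    """The concept class of balls of G: every center, every radius."""
    return {ball(adj, v, r) for v in adj for r in range(len(adj))}

# Balls of a path on 5 vertices (radius 0 gives singletons, a large
# radius gives the whole vertex set).
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}
print(sorted(sorted(b) for b in all_balls(path)))
```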

    Optimal Collusion-Free Teaching

    Formal models of learning from teachers need to respect certain criteria to avoid collusion. The most commonly accepted notion of collusion-freeness was proposed by Goldman and Mathias (1996), and various teaching models obeying their criterion have been studied. For each model M and each concept class C, a parameter M-TD(C) refers to the teaching dimension of concept class C in model M, defined to be the number of examples required for teaching a concept, in the worst case over all concepts in C. This paper introduces a new model of teaching, called no-clash teaching, together with the corresponding parameter NCTD(C). No-clash teaching is provably optimal in the strong sense that, given any concept class C and any model M obeying Goldman and Mathias's collusion-freeness criterion, one obtains NCTD(C) ≤ M-TD(C). We also study a corresponding notion NCTD+ for the case of learning from positive data only, establish useful bounds on NCTD and NCTD+, and discuss relations of these parameters to the VC-dimension and to sample compression. In addition to formulating an optimal model of collusion-free teaching, our main results are on the computational complexity of deciding whether NCTD+(C) = k (or NCTD(C) = k) for given C and k. We show some such decision problems to be equivalent to the existence question for certain constrained matchings in bipartite graphs. Our NP-hardness results for the latter are of independent interest in the study of constrained graph matchings.
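
    To ground the no-clash criterion in code, the following Python sketch (ours; the helper names are hypothetical) verifies by brute force that a given teaching map is collusion-free in the no-clash sense: no two distinct concepts are both consistent with the union of their teaching sets.

```python
from itertools import combinations

def consistent(concept, examples):
    """A concept is consistent with labeled examples (x, label) if its
    membership decisions agree with every label."""
    return all((x in concept) == label for x, label in examples)

def is_non_clashing(teaching_map):
    """No-clash condition: no two distinct concepts may both be consistent
    with the union of their teaching sets."""
    for c1, c2 in combinations(teaching_map, 2):
        union = teaching_map[c1] | teaching_map[c2]
        if consistent(c1, union) and consistent(c2, union):
            return False
    return True

# Toy class: the three singletons over {0, 1, 2}, each taught by a single
# positive example, giving a teaching map of size 1.
concepts = [frozenset({0}), frozenset({1}), frozenset({2})]
tmap = {c: {(next(iter(c)), True)} for c in concepts}
print(is_non_clashing(tmap))  # True, so this class has NCTD at most 1
```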

    Tournaments, Johnson Graphs, and NC-Teaching

    Quite recently a teaching model, called "No-Clash Teaching" or simply "NC-Teaching", was suggested that is provably optimal in the following strong sense. First, it satisfies Goldman and Mathias' collusion-freeness condition. Second, the NC-teaching dimension (= NCTD) is smaller than or equal to the teaching dimension with respect to any other collusion-free teaching model. It has also been shown that any concept class which has NC-teaching dimension d and is defined over a domain of size n can have at most 2^d (n choose d) concepts. The main results in this paper are as follows. First, we characterize the maximum concept classes of NC-teaching dimension 1 as classes which are induced by tournaments (= complete oriented graphs) in a very natural way. Second, we show that there exists a family (C_n)_{n≥1} of concept classes such that the well-known recursive teaching dimension (= RTD) of C_n grows logarithmically in n = |C_n| while, for every n ≥ 1, the NC-teaching dimension of C_n equals 1. Since the recursive teaching dimension of a finite concept class C is generally bounded by log|C|, the family (C_n)_{n≥1} separates RTD from NCTD in the most striking way. The proof of existence of the family (C_n)_{n≥1} makes use of the probabilistic method and random tournaments. Third, we improve the aforementioned upper bound 2^d (n choose d) by a factor of order √d. The verification of the improved bound makes use of Johnson graphs and maximum subgraphs not containing large narrow cliques.
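
    As a quick worked evaluation of the stated bound (the function name is ours; the paper's refinement, which improves this bound by a factor of order √d, is not reproduced here), the snippet below computes 2^d (n choose d) for a few small parameters.

```python
from math import comb

def nctd_size_bound(n, d):
    """Upper bound from the abstract: a concept class of NC-teaching
    dimension d over a domain of size n has at most 2^d * C(n, d) concepts."""
    return 2 ** d * comb(n, d)

for d in (1, 2, 3):
    print(d, nctd_size_bound(10, d))
# Over a domain of size 10: d=1 -> 20, d=2 -> 180, d=3 -> 960
```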

    Non-Clashing Teaching Maps for Balls in Graphs

    Recently, Kirkpatrick et al. [ALT 2019] and Fallat et al. [JMLR 2023] introduced non-clashing teaching and showed it to be the most efficient machine teaching model satisfying the benchmark for collusion-avoidance set by Goldman and Mathias. A teaching map T for a concept class 𝒞 assigns a (teaching) set T(C) of examples to each concept C ∈ 𝒞. A teaching map is non-clashing if no pair of concepts is consistent with the union of their teaching sets. The size of a non-clashing teaching map (NCTM) T is the maximum size of a set T(C), C ∈ 𝒞. The non-clashing teaching dimension NCTD(𝒞) of 𝒞 is the minimum size of an NCTM for 𝒞. NCTM+ and NCTD+(𝒞) are defined analogously, except that the teacher may only use positive examples. We study NCTMs and NCTM+s for the concept class B(G) consisting of all balls of a graph G. We show that the associated decision problem B-NCTD+ for NCTD+ is NP-complete in split, co-bipartite, and bipartite graphs. Surprisingly, we even prove that, unless the ETH fails, B-NCTD+ does not admit an algorithm running in time 2^{2^{o(vc)}} · n^{O(1)}, nor a kernelization algorithm outputting a kernel with 2^{o(vc)} vertices, where vc is the vertex cover number of G. These are extremely rare results: it is only the second (fourth, resp.) problem in NP to admit a double-exponential lower bound parameterized by vc (treewidth, resp.), and only one of very few problems to admit an ETH-based conditional lower bound on the number of vertices in a kernel. We complement these lower bounds with matching upper bounds. For trees, interval graphs, cycles, and trees of cycles, we derive NCTM+s or NCTMs for B(G) of size proportional to its VC-dimension. For Gromov-hyperbolic graphs, we design an approximate NCTM+ for B(G) of size 2.
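
    As a toy illustration of these definitions (our own construction, not necessarily the one used in the paper), the sketch below builds the balls of a small path graph, teaches each ball by its two extreme vertices as positive examples, and verifies by brute force that the resulting size-2 map is non-clashing.

```python
from itertools import combinations

def balls_of_path(n):
    """All balls of the path 0 - 1 - ... - (n-1); each ball is an interval."""
    return {frozenset(range(max(0, v - r), min(n - 1, v + r) + 1))
            for v in range(n) for r in range(n)}

def positive_teaching_map(balls):
    """Candidate positive-only teaching map: teach a ball by its two
    extreme vertices (a single vertex if the ball is a singleton)."""
    return {b: {min(b), max(b)} for b in balls}

def is_non_clashing_positive(tmap):
    """No-clash condition with positive examples only: no two distinct
    concepts may both contain the union of their teaching sets."""
    for b1, b2 in combinations(tmap, 2):
        union = tmap[b1] | tmap[b2]
        if union <= b1 and union <= b2:
            return False
    return True

tmap = positive_teaching_map(balls_of_path(7))
print(is_non_clashing_positive(tmap), max(len(s) for s in tmap.values()))
# Expected output: True 2 (a size-2 positive-only NCTM for balls of a path)
```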