Optimal Collusion-Free Teaching
Formal models of learning from teachers need to respect certain criteria to avoid collusion. The most commonly accepted notion of collusion-freeness was proposed by Goldman and Mathias (1996), and various teaching models obeying their criterion have been studied. For each model $M$ and each concept class $\mathcal{C}$, a parameter $M$-$\mathrm{TD}(\mathcal{C})$ refers to the teaching dimension of concept class $\mathcal{C}$ in model $M$, defined to be the number of examples required for teaching a concept, in the worst case over all concepts in $\mathcal{C}$. This paper introduces a new model of teaching, called no-clash teaching, together with the corresponding parameter $\mathrm{NCTD}(\mathcal{C})$. No-clash teaching is provably optimal in the strong sense that, given any concept class $\mathcal{C}$ and any model $M$ obeying Goldman and Mathias's collusion-freeness criterion, one obtains $\mathrm{NCTD}(\mathcal{C}) \le M$-$\mathrm{TD}(\mathcal{C})$. We also study a corresponding notion $\mathrm{NCTD}^+$ for the case of learning from positive data only, establish useful bounds on $\mathrm{NCTD}$ and $\mathrm{NCTD}^+$, and discuss relations of these parameters to the VC-dimension and to sample compression. In addition to formulating an optimal model of collusion-free teaching, our main results are on the computational complexity of deciding whether $\mathrm{NCTD}^+(\mathcal{C}) = k$ (or $\mathrm{NCTD}(\mathcal{C}) = k$) for given $\mathcal{C}$ and $k$. We show some such decision problems to be equivalent to the existence question for certain constrained matchings in bipartite graphs. Our NP-hardness results for the latter are of independent interest in the study of constrained graph matchings.
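As an illustration of the no-clash condition (not taken from the paper), the following minimal Python sketch checks whether a toy teaching map is non-clashing; the three-point domain, the concepts, and the teaching sets are all invented for this example.

```python
from itertools import combinations

# Hypothetical toy instance: domain {0, 1, 2}; each concept is the set of
# points it labels positively. Purely illustrative, not from the paper.
CONCEPTS = {
    "c_empty": frozenset(),
    "c_0":     frozenset({0}),
    "c_01":    frozenset({0, 1}),
    "c_012":   frozenset({0, 1, 2}),
}

# A teaching map assigns each concept a set of labeled examples (x, label).
TEACHING_MAP = {
    "c_empty": {(0, False)},
    "c_0":     {(0, True), (1, False)},
    "c_01":    {(1, True), (2, False)},
    "c_012":   {(2, True)},
}

def consistent(concept, examples):
    """A concept is consistent with labeled examples if it agrees on all of them."""
    return all((x in concept) == label for x, label in examples)

def is_non_clashing(concepts, teaching_map):
    """No-clash condition: no two distinct concepts may both be consistent
    with the union of their teaching sets."""
    for a, b in combinations(concepts, 2):
        union = teaching_map[a] | teaching_map[b]
        if consistent(concepts[a], union) and consistent(concepts[b], union):
            return False
    return True

print(is_non_clashing(CONCEPTS, TEACHING_MAP))  # True for this toy map
```

Since every teaching set above has at most two examples, this map also witnesses $\mathrm{NCTD} \le 2$ for the toy class.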
Tournaments, Johnson Graphs, and NC-Teaching
Quite recently a teaching model, called "No-Clash Teaching" or simply "NC-Teaching", has been suggested that is provably optimal in the following strong sense. First, it satisfies Goldman and Mathias' collusion-freeness condition. Second, the NC-teaching dimension (= NCTD) is smaller than or equal to the teaching dimension with respect to any other collusion-free teaching model. It has also been shown that any concept class which has NC-teaching dimension $d$ and is defined over a domain of size $n$ can have at most $2^d \binom{n}{d}$ concepts. The main results in this paper are as follows. First, we characterize the maximum concept classes of NC-teaching dimension $1$ as classes which are induced by tournaments (= complete oriented graphs) in a very natural way. Second, we show that there exists a family $(\mathcal{C}_n)_{n\ge1}$ of concept classes such that the well-known recursive teaching dimension (= RTD) of $\mathcal{C}_n$ grows logarithmically in $n = |\mathcal{C}_n|$ while, for every $n \ge 1$, the NC-teaching dimension of $\mathcal{C}_n$ equals $1$. Since the recursive teaching dimension of a finite concept class $\mathcal{C}$ is generally bounded by $\log|\mathcal{C}|$, the family $(\mathcal{C}_n)_{n\ge1}$ separates RTD from NCTD in the most striking way. The proof of existence of the family $(\mathcal{C}_n)_{n\ge1}$ makes use of the probabilistic method and random tournaments. Third, we improve the aforementioned upper bound $2^d \binom{n}{d}$ by a factor of order $\sqrt{d}$. The verification of the superior bound makes use of Johnson graphs and maximum subgraphs not containing large narrow cliques.
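To get a feel for the counting bound, here is a small Python sketch (illustrative only) that evaluates the stated maximum size $2^d \binom{n}{d}$ of a concept class of NC-teaching dimension $d$ over a domain of size $n$; the $\sqrt{d}$-order improvement changes only the constant factor and is not reproduced here.

```python
from math import comb

def nctd_size_bound(n: int, d: int) -> int:
    """Evaluate the bound 2^d * C(n, d) on the number of concepts in a
    class of NC-teaching dimension d over a domain of size n."""
    return 2 ** d * comb(n, d)

# For d = 1 the bound evaluates to 2n, the regime covered by the
# tournament characterization of maximum NCTD-1 classes.
for n in (4, 8, 16):
    print(n, nctd_size_bound(n, 1), nctd_size_bound(n, 2))
```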
The Teaching Dimension of Kernel Perceptron
Algorithmic machine teaching has been studied under the linear setting where
exact teaching is possible. However, little is known for teaching nonlinear
learners. Here, we establish the sample complexity of teaching, aka teaching
dimension, for kernelized perceptrons for different families of feature maps.
As a warm-up, we show that the teaching complexity is $\Theta(d)$ for the exact
teaching of linear perceptrons in $\mathbb{R}^d$, and $\Theta(d^k)$ for kernel
perceptron with a polynomial kernel of order $k$. Furthermore, under certain
smoothness assumptions on the data distribution, we establish a rigorous bound on
the complexity for approximately teaching a Gaussian kernel perceptron. We
provide numerical examples of the optimal (approximate) teaching set under
several canonical settings for linear, polynomial and Gaussian kernel
perceptrons.
Comment: AISTATS 2021
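One heuristic way to read the $\Theta(d^k)$ rate is through the dimension of the polynomial feature space, which the following Python sketch counts; this identification is an informal reading assuming the homogeneous kernel $(x^\top y)^k$, not the paper's proof.

```python
from math import comb

def poly_feature_dim(d: int, k: int) -> int:
    """Dimension of the feature space of the homogeneous polynomial kernel
    (x . y)^k on R^d: the number of degree-k monomials, C(d + k - 1, k) = O(d^k)."""
    return comb(d + k - 1, k)

# Exactly teaching a perceptron amounts to pinning down a hyperplane in
# feature space, so the feature-space dimension tracks the Theta(d^k) rate.
for d in (2, 5, 10):
    print(d, [poly_feature_dim(d, k) for k in (1, 2, 3)])
```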
Non-Clashing Teaching Maps for Balls in Graphs
Recently, Kirkpatrick et al. [ALT 2019] and Fallat et al. [JMLR 2023]
introduced non-clashing teaching and showed it to be the most efficient machine
teaching model satisfying the benchmark for collusion-avoidance set by Goldman
and Mathias. A teaching map $T$ for a concept class $\mathcal{C}$ assigns a
(teaching) set $T(C)$ of examples to each concept $C \in \mathcal{C}$. A teaching
map is non-clashing if no pair of concepts are consistent with the union of
their teaching sets. The size of a non-clashing teaching map (NCTM) $T$ is the
maximum size of a $T(C)$, $C \in \mathcal{C}$. The non-clashing teaching dimension
$\mathrm{NCTD}(\mathcal{C})$ of $\mathcal{C}$ is the minimum size of an NCTM for $\mathcal{C}$.
$\mathrm{NCTM}^+$ and $\mathrm{NCTD}^+$ are defined analogously, except the teacher may
only use positive examples.
We study NCTMs and $\mathrm{NCTM}^+$s for the concept class $\mathcal{B}(G)$
consisting of all balls of a graph $G$. We show that the associated decision
problem {\sc B-NCTD$^+$} for $\mathrm{NCTD}^+$ is NP-complete in split, co-bipartite,
and bipartite graphs. Surprisingly, we even prove that, unless the ETH fails,
{\sc B-NCTD$^+$} does not admit an algorithm running in time
$2^{2^{o(\mathrm{vc})}} \cdot n^{O(1)}$, nor a kernelization algorithm outputting a
kernel with $2^{o(\mathrm{vc})}$ vertices, where vc is the vertex cover number of $G$.
These are extremely rare results: it is only the second (fourth, resp.) problem
in NP to admit a double-exponential lower bound parameterized by vc (treewidth,
resp.), and only one of very few problems to admit an ETH-based conditional
lower bound on the number of vertices in a kernel. We complement these lower
bounds with matching upper bounds. For trees, interval graphs, cycles, and
trees of cycles, we derive $\mathrm{NCTM}^+$s or NCTMs for $\mathcal{B}(G)$ of size
proportional to its VC-dimension. For Gromov-hyperbolic graphs, we design an
approximate $\mathrm{NCTM}^+$ for $\mathcal{B}(G)$ of size 2.
Comment: Shortened abstract due to character limit
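For concreteness, here is a minimal Python sketch (not from the paper) that enumerates the concept class $\mathcal{B}(G)$ of all balls of a small graph via BFS; the 5-cycle instance is invented for illustration.

```python
from collections import deque

def bfs_dist(adj, src):
    """Single-source BFS distances in an unweighted graph given as an adjacency dict."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def all_balls(adj):
    """Concept class B(G): every ball B_r(v) = {u : d(u, v) <= r}, deduplicated."""
    balls = set()
    for v in adj:
        dist = bfs_dist(adj, v)
        for r in range(max(dist.values()) + 1):
            balls.add(frozenset(u for u, d in dist.items() if d <= r))
    return balls

# Toy instance: the 5-cycle C5 has 5 radius-0 balls (singletons), 5 radius-1
# balls, and one ball covering the whole vertex set -- 11 concepts in total.
C5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
for ball in sorted(all_balls(C5), key=lambda b: (len(b), sorted(b))):
    print(sorted(ball))
```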