26 research outputs found

    An Overview of Machine Teaching

    In this paper we try to organize machine teaching as a coherent set of ideas. Each idea is presented as varying along a dimension. The collection of dimensions then forms the problem space of machine teaching, such that existing teaching problems can be characterized in this space. We hope this organization allows us to gain a deeper understanding of individual teaching problems, discover connections among them, and identify gaps in the field. (Comment: a tutorial document grown out of the NIPS 2017 Workshop on Teaching Machines, Robots, and Humans.)
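
    To make the setting concrete, here is a minimal sketch (a toy example of ours, not code from the paper) of the basic machine-teaching objective whose variations the survey organizes into dimensions: find a smallest set of labeled examples that leaves the target as the only consistent concept in the class.

        from itertools import combinations

        def teaching_set(concepts, target, domain):
            # Smallest set of examples such that the target is the unique
            # concept agreeing with the target's labels on all of them.
            for k in range(len(domain) + 1):
                for examples in combinations(domain, k):
                    consistent = [c for c in concepts
                                  if all(c[x] == target[x] for x in examples)]
                    if consistent == [target]:
                        return list(examples)

        # Toy concept class over the domain {0, 1, 2}: each concept is a labeling.
        domain = [0, 1, 2]
        concepts = [{0: 0, 1: 0, 2: 0}, {0: 1, 1: 0, 2: 0},
                    {0: 1, 1: 1, 2: 0}, {0: 1, 1: 1, 2: 1}]
        print(teaching_set(concepts, concepts[1], domain))  # -> [0, 1]

    The size of the worst-case teaching set over all targets is the classical teaching dimension, one anchor point in the survey's problem space.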

    Tournaments, Johnson Graphs, and NC-Teaching

    Quite recently a teaching model, called "No-Clash Teaching" or simply "NC-Teaching", was suggested that is provably optimal in the following strong sense. First, it satisfies Goldman and Mathias' collusion-freeness condition. Second, the NC-teaching dimension (= NCTD) is smaller than or equal to the teaching dimension with respect to any other collusion-free teaching model. It has also been shown that any concept class which has NC-teaching dimension $d$ and is defined over a domain of size $n$ can have at most $2^d\binom{n}{d}$ concepts. The main results in this paper are as follows. First, we characterize the maximum concept classes of NC-teaching dimension $1$ as classes which are induced by tournaments (= complete oriented graphs) in a very natural way. Second, we show that there exists a family $(\mathcal{C}_n)_{n\ge1}$ of concept classes such that the well-known recursive teaching dimension (= RTD) of $\mathcal{C}_n$ grows logarithmically in $n = |\mathcal{C}_n|$ while, for every $n\ge1$, the NC-teaching dimension of $\mathcal{C}_n$ equals $1$. Since the recursive teaching dimension of a finite concept class $\mathcal{C}$ is generally bounded by $\log|\mathcal{C}|$, the family $(\mathcal{C}_n)_{n\ge1}$ separates RTD from NCTD in the most striking way. The proof of existence of the family $(\mathcal{C}_n)_{n\ge1}$ makes use of the probabilistic method and random tournaments. Third, we improve the aforementioned upper bound $2^d\binom{n}{d}$ by a factor of order $\sqrt{d}$. The verification of the superior bound makes use of Johnson graphs and maximum subgraphs not containing large narrow cliques.
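
    As a small illustration (a toy of ours, not the paper's tournament construction), the no-clash condition can be checked by brute force: a teacher mapping is non-clashing if no two distinct concepts are each consistent with the other's teaching set, and the largest teaching set of a valid teacher then upper-bounds the NCTD.

        def consistent(concept, examples):
            # concept: tuple of labels over the domain; examples: set of (index, label)
            return all(concept[i] == y for i, y in examples)

        def is_non_clashing(teacher):
            cs = list(teacher)
            return all(not (consistent(c, teacher[d]) and consistent(d, teacher[c]))
                       for i, c in enumerate(cs) for d in cs[i+1:])

        # Four concepts over a 3-element domain, each taught with a single example,
        # so this particular class has NC-teaching dimension at most 1.
        teacher = {
            (0, 0, 0): {(0, 0)},
            (1, 1, 0): {(1, 1)},
            (0, 1, 1): {(2, 1)},
            (1, 0, 1): {(0, 1)},
        }
        print(is_non_clashing(teacher))  # True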

    On the Teachability of Randomized Learners

    The present paper introduces a new model for teaching randomized learners. Our new model, though based on the classical teaching dimension model, allows us to study the influence of various parameters such as the learner's memory size, its ability to provide feedback or not, and the influence of the order in which examples are presented. Furthermore, within the new model it is possible to investigate new aspects of teaching, like teaching from positive data only or teaching with inconsistent teachers. Finally, we provide characterization theorems for teachability from positive data for both ordinary teachers and inconsistent teachers, with and without feedback.
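
    The following drastically simplified toy (ours, not the paper's formal model) illustrates two of the parameters mentioned above: a learner that remembers only the last k examples samples uniformly among the concepts consistent with its memory, so a smaller memory, and a different presentation order, both change the probability of landing on the target.

        import random

        def success_prob(concepts, target, order, k, trials=20000):
            memory = [(x, target[x]) for x in order][-k:]   # last k labeled examples
            consistent = [c for c in concepts
                          if all(c[i] == y for i, y in memory)]
            hits = sum(random.choice(consistent) == target for _ in range(trials))
            return hits / trials

        concepts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
        target = (1, 1, 0)
        print(success_prob(concepts, target, order=[0, 2, 1], k=3))  # ~1.0
        print(success_prob(concepts, target, order=[0, 2, 1], k=1))  # ~0.5
        print(success_prob(concepts, target, order=[2, 1, 0], k=1))  # ~0.33: order matters too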

    A Probabilistic Framework for Non-Cheating Machine Teaching

    Over the past decades in the field of machine teaching, several restrictions have been introduced to avoid ‘cheating’, such as collusion-free or non-clashing teaching. However, these restrictions forbid several teaching situations that we intuitively consider natural and fair, especially those ‘changes of mind’ of the learner as more evidence is given, affecting the likelihood of concepts and ultimately their posteriors. Under a new generalised probabilistic teaching, not only do these non-cheating constraints look too narrow, but we also show that the most relevant machine teaching models are particular cases of this framework: the consistency graph between concepts and elements simply becomes a joint probability distribution. We show a simple procedure that builds the witness joint distribution from the ground joint distribution. We prove a chain of relations, also with a theoretical lower bound, on the teaching dimension of the old and new models. Overall, this new setting is more general than the traditional machine teaching models, yet at the same time it captures more intuitively a less abrupt notion of non-cheating teaching.
    Ferri Ramírez, C.; Hernández Orallo, J.; Telle, J. A. (2022). A Probabilistic Framework for Non-Cheating Machine Teaching. http://hdl.handle.net/10251/18236
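
    A hedged sketch of the key generalisation named in the abstract (our own minimal reading, not the paper's construction): a binary consistency matrix between concepts and example sets becomes a joint probability distribution once its entries are normalised to sum to 1, after which concept posteriors follow by conditioning.

        import numpy as np

        # consistency[c][e] = 1 iff concept c is consistent with example set e
        consistency = np.array([[1, 1, 0],
                                [1, 0, 1],
                                [0, 1, 1]], dtype=float)

        joint = consistency / consistency.sum()               # P(c, e)
        posterior = joint / joint.sum(axis=0, keepdims=True)  # P(c | e): columns sum to 1
        print(posterior)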

    Machine Learning: The Necessity of Order (is order in order?)

    In a myriad of human-tailored activities, whether in the classroom or listening to a story, human learners receive selected pieces of information, presented in a chosen order and at a chosen pace. This is what it takes to facilitate learning. Yet, when machine learners exhibited sequencing effects, showing that some data samplings, orderings and tempos are better than others, it almost came as a surprise. Seemingly simple questions suddenly had to be thought anew: what are good training data? How should they be selected? How should they be presented? Why are there sequencing effects? How can they be measured? Should we try to avoid them or take advantage of them? This chapter presents ideas and directions of research currently studied in the machine learning field to answer these questions and others. Like any other science, machine learning strives to develop models that stress fundamental aspects of the phenomenon under study. The basic concepts and models developed in machine learning are presented here, as well as some of the findings that may have significance and counterparts in related disciplines interested in learning and education.
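
    A small demonstration of a sequencing effect (our example, not taken from the chapter): a mistake-driven online learner such as the perceptron, run for a single pass, ends at different weight vectors depending only on the order in which the very same examples are presented.

        import numpy as np

        def perceptron_one_pass(X, y, order):
            w = np.zeros(X.shape[1])
            for i in order:
                if y[i] * (X[i] @ w) <= 0:   # update only on a mistake
                    w += y[i] * X[i]
            return w

        X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -1.5], [-2.0, -0.5]])
        y = np.array([1, 1, -1, -1])
        print(perceptron_one_pass(X, y, [0, 1, 2, 3]))  # [1.  2. ]
        print(perceptron_one_pass(X, y, [3, 2, 1, 0]))  # [2.  0.5]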

    Inferring Symbolic Automata

    We study the learnability of symbolic finite state automata (SFAs), a model that has proven useful in many applications in software verification. The state-of-the-art literature on this topic follows the query learning paradigm, and so far all obtained results have been positive. We provide a necessary condition for efficient learnability of SFAs in this paradigm, from which we obtain the first negative result. The main focus of our work lies in the learnability of SFAs under the paradigm of identification in the limit using polynomial time and data. We provide a necessary condition and a sufficient condition for efficient learnability of SFAs in this paradigm, from which we derive a positive and a negative result.
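
    For readers unfamiliar with the model, here is a minimal sketch (ours, not the paper's formalism) of what makes SFAs worth learning: transitions are guarded by predicates over a large or infinite alphabet rather than by individual letters.

        def run_sfa(word):
            # States 0 (initial, accepting) and 1; guards are Python predicates.
            transitions = {
                0: [(lambda ch: ch.isdigit(), 0), (lambda ch: not ch.isdigit(), 1)],
                1: [(lambda ch: True, 1)],   # sink once a non-digit is seen
            }
            state = 0
            for ch in word:
                state = next(s for guard, s in transitions[state] if guard(ch))
            return state == 0   # accepts exactly the all-digit words

        print(run_sfa("2024"))  # True
        print(run_sfa("20a4"))  # False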