Classifying the Arithmetical Complexity of Teaching Models
This paper classifies the complexity of various teaching models by their
position in the arithmetical hierarchy. In particular, we determine the
arithmetical complexity of the index sets of the following classes: (1) the
class of uniformly r.e. families with finite teaching dimension, and (2) the
class of uniformly r.e. families with finite positive recursive teaching
dimension witnessed by a uniformly r.e. teaching sequence. We also derive the
arithmetical complexity of several other decision problems in teaching, such as
the problem of deciding, given an effective coding of all uniformly r.e.
families, an index $e$ of such a family $\mathcal{L}$, and any $i$ and $d$,
whether or not the teaching dimension of the $i$-th set of $\mathcal{L}$ with
respect to $\mathcal{L}$ is upper bounded by $d$.
Comment: 15 pages; International Conference on Algorithmic Learning Theory, 201
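As a point of reference for the teaching models classified above (not part of
the paper itself): the classical teaching dimension of a concept is the size of
the smallest labeled sample that distinguishes it from every other concept in
the class. Below is a minimal brute-force sketch for finite classes; all names
are illustrative.

```python
from itertools import combinations

def teaching_dimension(target, concepts, domain):
    """Size of the smallest labeled sample that distinguishes `target`
    from every other concept in `concepts` (brute force, finite case)."""
    others = [c for c in concepts if c != target]
    for k in range(len(domain) + 1):
        for sample in combinations(domain, k):
            # The sample rules out c iff c disagrees with `target`
            # on at least one sampled point.
            if all(any((x in c) != (x in target) for x in sample)
                   for c in others):
                return k

# Classic example: all singletons over {0..n-1} plus the empty concept.
n = 4
domain = range(n)
concepts = [frozenset({i}) for i in domain] + [frozenset()]
print([teaching_dimension(c, concepts, domain) for c in concepts])
# -> [1, 1, 1, 1, 4]
```

Each singleton is taught by its single positive example, while the empty
concept needs all four negative examples; this is the sort of counterintuitive
behavior that motivated variants such as the recursive teaching dimension
studied in the paper.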
A Theory of Formal Synthesis via Inductive Learning
Formal synthesis is the process of generating a program satisfying a
high-level formal specification. Recently, effective formal synthesis methods
based on inductive learning have been proposed. We refer to
this class of methods that learn programs from examples as formal inductive
synthesis. In this paper, we present a theoretical framework for formal
inductive synthesis. We discuss how formal inductive synthesis differs from
traditional machine learning. We then describe oracle-guided inductive
synthesis (OGIS), a framework that captures a family of synthesizers that
operate by iteratively querying an oracle. An instance of OGIS that has had
much practical impact is counterexample-guided inductive synthesis (CEGIS). We
present a theoretical characterization of CEGIS for learning any program that
computes a recursive language. In particular, we analyze the relative power of
CEGIS variants where the types of counterexamples generated by the oracle
vary. We also consider the impact of bounded versus unbounded memory
available to the learning algorithm. In the special case where the universe of
candidate programs is finite, we relate the speed of convergence to the notion
of teaching dimension studied in machine learning theory. Altogether, the
results of the paper take a first step towards a theoretical foundation for the
emerging field of formal inductive synthesis.
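The OGIS/CEGIS loop described above is easy to state concretely. The following
is an illustrative sketch (not the paper's formalization) over a finite
candidate space; `evaluate`, `spec`, and the threshold-program universe are
invented for the example, and the verification oracle is simulated by
exhaustive search.

```python
def cegis(candidates, evaluate, spec, inputs):
    """Minimal CEGIS loop over a finite candidate space (illustrative).
    The simulated oracle returns the smallest counterexample, one of the
    counterexample types whose relative power the paper compares."""
    examples = []  # counterexamples returned by the oracle so far
    while True:
        # Learner: propose any candidate consistent with all examples.
        consistent = [p for p in candidates
                      if all(evaluate(p, x) == y for x, y in examples)]
        if not consistent:
            return None  # no candidate realizes the specification
        p = consistent[0]
        # Oracle: search for an input on which the candidate is wrong.
        cex = next((x for x in inputs if evaluate(p, x) != spec(x)), None)
        if cex is None:
            return p  # verified on every input
        examples.append((cex, spec(cex)))

# Toy universe: programs are thresholds t computing x >= t.
evaluate = lambda t, x: x >= t
spec = lambda x: x >= 3
print(cegis(range(10), evaluate, spec, range(10)))  # -> 3
```

Because the candidate universe here is finite, the number of loop iterations is
governed by how quickly counterexamples shrink the consistent set, which is
where the connection to teaching dimension enters.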
Understanding the Role of Adaptivity in Machine Teaching: The Case of Version Space Learners
In real-world educational settings, an effective teacher adaptively
chooses the next example to teach based on the learner's current state.
However, most existing work in algorithmic machine teaching focuses on the
batch setting, where adaptivity plays no role. In this paper, we study the case
of teaching consistent, version space learners in an interactive setting. At
any time step, the teacher provides an example, the learner performs an update,
and the teacher observes the learner's new state. We highlight that adaptivity
does not speed up the teaching process when considering existing models of
version space learners, such as "worst-case" (the learner picks the next
hypothesis randomly from the version space) and "preference-based" (the learner
picks a hypothesis according to some global preference). Inspired by human
teaching, we propose a new model where the learner picks hypotheses according
to some local preference defined by the current hypothesis. We show that our
model exhibits several desirable properties, e.g., adaptivity plays a key role,
and the learner's transitions over hypotheses are smooth/interpretable. We
develop efficient teaching algorithms and demonstrate our results via
simulation and user studies.
Comment: NeurIPS 2018 (extended version)
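To make the interaction protocol concrete, here is a toy sketch (not the
paper's algorithm) of adaptively teaching a local-preference learner over
threshold hypotheses; the hypothesis class, the nearest-consistent update rule,
and the greedy teacher are all assumptions for illustration.

```python
# Hypotheses are thresholds t on {0, ..., N}; h_t(x) = (x >= t).
N = 10
HYPS = list(range(N + 1))
label = lambda t, x: x >= t

def learner_update(current, version_space):
    """Hypothetical local-preference learner: keep the current hypothesis
    if it is still consistent, otherwise jump to the nearest consistent one."""
    if current in version_space:
        return current
    return min(version_space, key=lambda t: (abs(t - current), t))

def adaptive_teach(target, start):
    vs, state, shown = set(HYPS), start, []
    while state != target:
        def outcome(x):
            # Version space after revealing example x with its true label.
            new_vs = {t for t in vs if label(t, x) == label(target, x)}
            return learner_update(state, new_vs), new_vs
        # Adaptive step: pick the example whose induced update lands
        # closest to the target, given the observed learner state.
        x = min(range(N + 1), key=lambda x: abs(outcome(x)[0] - target))
        state, vs = outcome(x)
        shown.append((x, label(target, x)))
    return shown

print(adaptive_teach(target=8, start=2))  # -> [(7, False)]
```

With the learner's state observable, one well-chosen example pulls this
local-preference learner directly onto the target, whereas a batch teacher must
shrink the version space to a singleton; this is the kind of gap the abstract
attributes to adaptivity.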
Recent Developments in Algorithmic Teaching
The present paper surveys recent developments in algorithmic teaching. First, the traditional teaching dimension model is recalled. Starting from the observation that the teaching dimension model sometimes leads to counterintuitive results, recently developed approaches are presented. The main emphasis is put on the following aspects derived from human teaching/learning behavior: the order in which examples are presented should matter; teaching should become harder when the memory size of the learners decreases; teaching should become easier if the learners provide feedback; and it should be possible to teach infinite concepts and/or finite and infinite concept classes. Recent developments in algorithmic teaching achieving (some of) these aspects are presented and compared.