Combining Models of Approximation with Partial Learning
In Gold's framework of inductive inference, the model of partial learning
requires the learner to output exactly one correct index for the target object
and only the target object infinitely often. Since infinitely many of the
learner's hypotheses may be incorrect, it is not obvious whether a partial
learner can be modified to "approximate" the target object.
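In the notation commonly used in inductive inference (f[n] for the length-n initial segment of the target function f, and φ_e for the partial recursive function with index e; the paper's own symbols may differ), this partial-learning requirement can be sketched as follows: a learner M partially identifies a recursive function f iff

\[
\exists!\, e \;\Bigl[\, \bigl|\{\, n \in \mathbb{N} : M(f[n]) = e \,\}\bigr| = \infty \,\Bigr]
\quad\text{and, for this unique } e,\quad \varphi_e = f .
\]

Every other index is then output only finitely often, yet infinitely many distinct incorrect indices may still appear, which is why it is not immediate that such a learner can also approximate the target.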
Fulk and Jain (Approximate inference and scientific method. Information and
Computation 114(2):179--191, 1994) introduced a model of approximate learning
of recursive functions. The present work extends their research and solves an
open problem of Fulk and Jain by showing that there is a learner which
approximates and partially identifies every recursive function by outputting a
sequence of hypotheses which, in addition, are almost all finite variants
of the target function.
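In the same (assumed) notation, with =^* denoting agreement on all but finitely many arguments and ∀^∞ n read as "for all but finitely many n", the additional finite-variant property can be written as

\[
\forall^{\infty} n \;\Bigl[\, \varphi_{M(f[n])} =^{*} f \,\Bigr],
\]

i.e. almost every hypothesis output by the learner differs from the target function on at most finitely many arguments, while the partial-identification requirement above is satisfied simultaneously.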
The subsequent study is dedicated to the question of how these findings
generalise to the learning of r.e. languages from positive data. Here, three
variants of approximate learning are introduced and investigated with
respect to whether they can be combined with partial learning.
Following the line of Fulk and Jain's research, further investigations provide
conditions under which partial language learners can eventually output only
finite variants of the target language. The combinability of other partial
learning criteria is also briefly studied.
Language Learning in Dependence on the Space of Hypotheses
We study the learnability of indexed families L = (L_j)_{j ∈ ℕ} of uniformly recursive languages under certain monotonicity constraints. Thereby we distinguish between exact learnability (L has to be learnt with respect to the space L of hypotheses), class preserving learning (L has to be inferred with respect to some space G of hypotheses having the same range as L), and class comprising inference (L has to be learnt with respect to some space G of hypotheses whose range comprises range(L)). In particular, it is proved that, whenever monotonicity requirements are involved, exact learning is almost always weaker than class preserving inference, which itself turns out to be almost always weaker than class comprising learning. Next, we provide additional insight into the problem of under what conditions, for example, exact and class preserving learning procedures are of equal power. Finally, we deal with the question of what kind of languages has to be added to the space of hypotheses...
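Stated as set-theoretic conditions on the hypothesis space G relative to the indexed family L (a paraphrase of the parenthetical definitions above, not the paper's exact formulation), the three modes are

\[
\text{exact:}\ \mathcal{G} = \mathcal{L}, \qquad
\text{class preserving:}\ \operatorname{range}(\mathcal{G}) = \operatorname{range}(\mathcal{L}), \qquad
\text{class comprising:}\ \operatorname{range}(\mathcal{G}) \supseteq \operatorname{range}(\mathcal{L}),
\]

so results about exact learning concern the most restrictive choice of hypothesis space, and results about class comprising learning the most liberal one.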