3 research outputs found

    Monotonic language learning from informant

    The present paper deals with monotonic and dual monotonic language learning from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner always has to produce better and better generalizations when fed more and more data on the concept to be learnt. The three versions of dual monotonicity describe the requirement that the inference device has to produce exclusively specializations that fit the target language better and better. We characterize strong-monotonic, monotonic, weak-monotonic, dual strong-monotonic, dual monotonic and dual weak-monotonic as well as finite language learning from positive and negative data in terms of recursively generable finite sets. Thereby, we elaborate a unifying approach to monotonic language learning by showing that there is exactly one learning algorithm which can perform any monotonic inference task. (orig.)
    SIGLE. Available from TIB Hannover: RR 1345(1992,11) / FIZ - Fachinformationszentrum Karlsruhe / TIB - Technische Informationsbibliothek. Bundesministerium fuer Forschung und Technologie (BMFT), Bonn (Germany). German.
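    As a sketch, in notation assumed here rather than quoted from the report, the primal monotonicity constraints are usually formalized as follows: for hypotheses $h_x$ and $h_y$ output by the learner on an initial segment of the data and on a longer one, with $L$ the target language and $L(h)$ the language described by hypothesis $h$,

\[
\begin{aligned}
\text{strong-monotonic:}\quad & L(h_x) \subseteq L(h_y),\\
\text{monotonic:}\quad & L(h_x) \cap L \subseteq L(h_y) \cap L,\\
\text{weak-monotonic:}\quad & L(h_x) \subseteq L(h_y) \ \text{whenever the data seen up to $h_y$ does not contradict $h_x$.}
\end{aligned}
\]

    The dual variants reverse the direction of the inclusions, constraining specializations instead of generalizations; for instance, dual strong-monotonic learning is commonly taken to demand $L(h_x) \supseteq L(h_y)$.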

    The learnability of recursive languages in dependence on the hypothesis space

    We study the learnability of indexed families of uniformly recursive languages under certain monotonicity constraints. Thereby we distinguish between exact learnability (L has to be learnt with respect to the space L of hypotheses), class preserving learning (L has to be inferred with respect to some space G of hypotheses having the same range as L), and class comprising inference (L has to be learnt with respect to some space G of hypotheses whose range comprises the range of L). In particular, it is proved that, whenever monotonicity requirements are involved, exact learning is almost always weaker than class preserving inference, which itself turns out to be almost always weaker than class comprising learning. Next, we provide additional insight into the problem under which conditions, for example, exact and class preserving learning procedures are of equal power. Finally, we deal with the question of what kind of languages has to be added to the space of hypotheses in order to obtain superior learning algorithms. (orig.)
    SIGLE. Available from TIB Hannover: RR 1345(1993,20)+a / FIZ - Fachinformationszentrum Karlsruhe / TIB - Technische Informationsbibliothek. Bundesministerium fuer Forschung und Technologie (BMFT), Bonn (Germany). German.
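    A minimal formal reading of the three settings, using notation assumed here rather than taken from the report: let $\mathcal{L} = (L_i)_{i\in\mathbb{N}}$ be the indexed family to be learnt and $\mathcal{G} = (G_j)_{j\in\mathbb{N}}$ the hypothesis space actually used; then

\[
\begin{aligned}
\text{exact learning:}\quad & \mathcal{G} = \mathcal{L},\\
\text{class preserving learning:}\quad & \{\,G_j \mid j\in\mathbb{N}\,\} = \{\,L_i \mid i\in\mathbb{N}\,\},\\
\text{class comprising learning:}\quad & \{\,G_j \mid j\in\mathbb{N}\,\} \supseteq \{\,L_i \mid i\in\mathbb{N}\,\}.
\end{aligned}
\]

    Read this way, the stated result says that, under monotonicity constraints, each relaxation of the hypothesis space almost always yields strictly more learning power.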

    Classifying recursive predicates and languages

    We study the classification of recursive predicates and languages. In particular, we compare the classification of predicates and languages with the classification of arbitrary recursive functions and with learning. Moreover, we refine our investigations by introducing classification with a bounded number of mind changes and establish a new hierarchy. Furthermore, we introduce multi-classification and characterize it. Finally, we study the classification of families of languages that have attracted a lot of attention in learning theory. (orig.)
    SIGLE. Available from TIB Hannover: RR 1345(1993,21)+a / FIZ - Fachinformationszentrum Karlsruhe / TIB - Technische Informationsbibliothek. Bundesministerium fuer Forschung und Technologie (BMFT), Bonn (Germany). German.
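    As a rough sketch of the bounded mind change notion, again in notation assumed here rather than taken from the report: a classifier $M$ reads ever longer initial segments $\sigma_0, \sigma_1, \ldots$ of its input and outputs a class index after each segment; classification with at most $k$ mind changes additionally requires

\[
|\{\, n \mid M(\sigma_{n+1}) \neq M(\sigma_n) \,\}| \le k .
\]

    Letting the bound $k$ grow then naturally induces a hierarchy of classification power of the kind the abstract refers to.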