
    Measuring Learning Complexity with Criteria Epitomizers

    In prior papers, beginning with the seminal work by Freivalds et al. (1995), the notion of intrinsic complexity is used to analyze the learning complexity of sets of functions in a Gold-style learning setting. Herein, some weaknesses of this notion are pointed out, and an alternative is offered based on epitomizing sets of functions: sets which are learnable under a given learning criterion, but not under other criteria that are not at least as powerful. To capture the idea of epitomizing sets, new reducibility notions are given, based on robust learning (closure of learning under certain classes of operators). Various degrees of epitomizing sets are characterized as the sets complete with respect to the corresponding reducibility notions. These characterizations also provide an easy method for showing sets to be epitomizers, and they are then employed to prove several sets to be epitomizing. Furthermore, a scheme is provided to easily generate very strong epitomizers for a multitude of learning criteria. These strong epitomizers are so-called self-learning sets, previously applied by Case & Koetzing (2010). They can be generated and employed in a myriad of settings to witness the strict separation in learning power between the criteria so epitomized and other, less powerful criteria.
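
    As a rough formalization (our paraphrase of the abstract's wording, not the paper's exact definition), write $[I]$ for the collection of function sets learnable under criterion $I$. A set $S$ then epitomizes $I$ when

        $S \in [I]$ and, for every criterion $I'$ with $[I] \not\subseteq [I']$, $S \notin [I']$,

    i.e., $S$ is $I$-learnable but escapes every criterion that is not at least as powerful as $I$.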

    Reflective inductive inference of recursive functions

    In this paper, we investigate reflective inductive inference of recursive functions. A reflective IIM is a learning machine that is additionally able to assess its own competence. First, we formalize reflective learning from arbitrary, and from canonical, example sequences. Here, we arrive at four different types of reflection: reflection in the limit, optimistic, pessimistic, and exact reflection. Then, we compare the learning power of reflective IIMs with each other, as well as with that of standard IIMs, for learning in the limit, for consistent learning of three different types, and for finite learning.
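
    One plausible way to picture a reflective IIM (an illustrative sketch based only on the abstract, with hypothetical names, not the paper's formal definition) is a machine that, on each finite data segment, returns both a conjecture and a self-assessment of its competence on the data seen so far:

        from typing import List, Tuple

        def reflective_iim(segment: List[Tuple[int, int]]) -> Tuple[int, bool]:
            """Toy reflective machine for the class of constant functions.

            segment: pairs (x, f(x)) seen so far.
            Returns (conjectured constant value, claim of competence).
            The competence flag stays True only while the data are still
            consistent with some constant function, i.e. while the target
            may still lie in the class this machine is built for.
            """
            values = {y for _, y in segment}
            hypothesis = next(iter(values), 0)
            return hypothesis, len(values) <= 1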

    Developments from enquiries into the learnability of the pattern languages from positive data

    The pattern languages are languages that are generated from patterns, and were first proposed by Angluin as a non-trivial class that is inferable from positive data [D. Angluin, Finding patterns common to a set of strings, Journal of Computer and System Sciences 21 (1980) 46–62; D. Angluin, Inductive inference of formal languages from positive data, Information and Control 45 (1980) 117–135]. In this paper we chronologize some results that developed from the investigations on the inferability of the pattern languages from positive data.
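
    For illustration, a (non-erasing) pattern is a string over constants and variables, and its language consists of all strings obtained by substituting a nonempty constant string for every variable, with repeated occurrences of a variable replaced identically. A minimal brute-force membership test along these lines (illustrative code, not Angluin's inference algorithm) is:

        def in_pattern_language(pattern, word, binding=None, pos=0):
            """pattern: list of tokens; variables start with 'x' (e.g. 'x1'),
            every other token is a constant symbol."""
            if binding is None:
                binding = {}
            if not pattern:
                return pos == len(word)
            tok, rest = pattern[0], pattern[1:]
            if not tok.startswith("x"):                      # constant symbol
                return word.startswith(tok, pos) and \
                       in_pattern_language(rest, word, binding, pos + len(tok))
            if tok in binding:                               # variable already bound
                val = binding[tok]
                return word.startswith(val, pos) and \
                       in_pattern_language(rest, word, binding, pos + len(val))
            for end in range(pos + 1, len(word) + 1):        # try nonempty substitutions
                binding[tok] = word[pos:end]
                if in_pattern_language(rest, word, binding, end):
                    return True
                del binding[tok]
            return False

        print(in_pattern_language(["x1", "0", "x1"], "10010"))  # True  (x1 -> "10")
        print(in_pattern_language(["x1", "0", "x1"], "1001"))   # False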

    Learning via Queries with Teams and Anomalies

    Most work in the field of inductive inference regards the learning machine as a passive recipient of data. In a prior paper, this passive approach was compared to an active form of learning in which the machine is allowed to ask questions. In this paper we continue the study of machines that ask questions by comparing such machines to teams of passive machines. This yields, via work of Pitt and Smith, a comparison of active learning with probabilistic learning. Also considered are query inference machines that learn an approximation of what is desired. The approximation differs from the desired result in finitely many anomalous places.
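
    In the usual formalization of such anomalies (standard in this literature, though the notation here is ours), a final hypothesis $p$ approximates the target function $f$ with finitely many anomalies when

        $\operatorname{card}(\{x : \varphi_p(x) \neq f(x)\}) < \infty$,

    where $\varphi_p$ denotes the function computed by program $p$; bounding this cardinality by a fixed $a$ gives learning with at most $a$ anomalies.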

    Parallelism increases iterative learning power

    Iterative learning (It-learning) is a Gold-style learning model in which each of a learner's output conjectures may depend only upon the learner's current conjecture and the current input element. Two extensions of the It-learning model are considered, each of which involves parallelism. The first is to run, in parallel, distinct instantiations of a single learner on each input element. The second is to run, in parallel, n individual learners incorporating the first extension, and to allow the n learners to communicate their results. In most contexts, parallelism is only a means of improving efficiency. However, as shown herein, learners incorporating the first extension are more powerful than It-learners, and collective learners resulting from the second extension increase in learning power as n increases. Attention is paid to how one would actually implement a learner incorporating each extension; parallelism is the underlying mechanism employed.
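
    To make the It-learning restriction concrete, here is a minimal sketch (illustrative names, not the paper's formalism): the learner's only memory is its current conjecture, so its update function sees just that conjecture and the next input element.

        def run_it_learner(update, initial_conjecture, stream):
            """Drive an iterative learner over a stream of input elements."""
            conjecture = initial_conjecture
            for element in stream:
                conjecture = update(conjecture, element)   # no access to earlier data
            return conjecture

        # Example: conjecturing the maximum element seen so far It-learns the
        # class of initial segments {0, ..., n} from positive data.
        print(run_it_learner(max, 0, [2, 0, 5, 3]))   # 5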

    Learning and consistency

    In designing learning algorithms it seems quite reasonable to construct them in such a way that all data the algorithm has already obtained are correctly and completely reflected in the hypothesis the algorithm outputs on these data. However, this approach may totally fail. It may lead to the unsolvability of the learning problem, or it may exclude any efficient solution of it. Therefore we study several types of consistent learning in recursion-theoretic inductive inference. We show that these types are not of universal power. We give "lower bounds" on this power. We characterize these types by some versions of decidability of consistency with respect to suitable "non-standard" spaces of hypotheses. Then we investigate the problem of learning consistently in polynomial time. In particular, we present a natural learning problem and prove that it can be solved in polynomial time if and only if the algorithm is allowed to work inconsistently.
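
    As a concrete reading of the consistency requirement (a minimal sketch with hypothetical names, not the paper's formal definition), a hypothesis is consistent with the data obtained so far exactly when it reproduces every observed value:

        def is_consistent(hypothesis, observations):
            """observations: list of pairs (x, f(x)) seen so far."""
            return all(hypothesis(x) == y for x, y in observations)

        data = [(0, 1), (1, 2), (2, 4)]
        print(is_consistent(lambda x: 2 ** x, data))   # True:  matches 1, 2, 4
        print(is_consistent(lambda x: x + 1, data))    # False: wrong at x = 2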