4,527 research outputs found

    A Map of Update Constraints in Inductive Inference

    We investigate how different learning restrictions reduce learning power and how the different restrictions relate to one another. We give a complete map for nine different restrictions, both for the case of complete-information learning and for set-driven learning. This completes the picture for these well-studied "delayable" learning restrictions. A further insight is gained from different characterizations of "conservative" learning in terms of variants of "cautious" learning. Our analyses benefit greatly from general theorems we give, for example showing that learners subject only to delayable restrictions can always be assumed total. (Comment: fixed a mistake in Theorem 21; the result is the same.)

    Remarks on "Random Sequences"

    We show that standard statistical tests for randomness of finite sequences are language-dependent in an inductively pernicious way.
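
    For concreteness, here is a minimal Python sketch of one such standard statistical test, the frequency ("monobit") test. The significance level and the normal approximation via erfc are conventional choices assumed for illustration; they are not details taken from the note.

```python
import math

# Frequency ("monobit") test on a finite binary sequence: reject
# "random" if the number of 1s deviates too far from n/2. The 0.01
# threshold and the erfc-based normal approximation are assumptions
# made for this sketch.
def monobit_test(bits, alpha=0.01):
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)           # +1 per 1, -1 per 0
    p_value = math.erfc(abs(s) / math.sqrt(2 * n))  # two-sided tail
    return p_value >= alpha                         # True = passes the test

print(monobit_test([1, 0, 1, 1, 0, 0, 1, 0] * 16))  # balanced -> True
print(monobit_test([1] * 128))                      # all ones -> False
```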

    Notes on Hierarchies and Inductive Inference

    The following notes rework a discussion due to Kevin Kelly on the application of topological notions in the context of learning (see Kelly (1990)). All the results except (2), (4), and (9) are due to Kelly, but they are proved differently here.

    Combining Models of Approximation with Partial Learning

    In Gold's framework of inductive inference, the model of partial learning requires the learner to output exactly one index infinitely often, and that index must be correct for the target object. Since infinitely many of the learner's hypotheses may be incorrect, it is not obvious whether a partial learner can be modified to "approximate" the target object. Fulk and Jain (Approximate inference and scientific method. Information and Computation 114(2):179–191, 1994) introduced a model of approximate learning of recursive functions. The present work extends their research and solves an open problem of Fulk and Jain by showing that there is a learner which approximates and partially identifies every recursive function, outputting a sequence of hypotheses which, in addition, are almost all finite variants of the target function. The subsequent study is dedicated to the question of how these findings generalise to the learning of r.e. languages from positive data. Here, three variants of approximate learning are introduced and investigated with respect to whether they can be combined with partial learning. Following the line of Fulk and Jain's research, further investigations provide conditions under which partial language learners can eventually output only finite variants of the target language. The combinability of other partial learning criteria is also briefly studied.
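
    As background, the following toy Python sketch illustrates Gold-style identification in the limit for the class of functions with finite support. It is an illustration under simplifying assumptions, not the paper's partial-learning or approximation construction: the learner's conjecture stabilises once all nonzero values of the target have been observed.

```python
# Toy Gold-style identification in the limit: the class of functions
# f: N -> N that are zero on all but finitely many arguments. After
# seeing f(0), ..., f(n-1), the learner conjectures "the nonzero values
# seen so far, zero elsewhere". Once every nonzero value of the target
# has appeared, the hypothesis never changes again.

def learner(prefix):
    # Hypothesis: a finite patch of nonzero values, zero elsewhere.
    return {x: v for x, v in enumerate(prefix) if v != 0}

def run(target, steps):
    # Feed the learner ever-longer prefixes of the target's values.
    return [learner([target(x) for x in range(n)])
            for n in range(1, steps + 1)]

# Target: f(3) = 7, f(5) = 2, zero elsewhere.
f = lambda x: {3: 7, 5: 2}.get(x, 0)
for n, h in enumerate(run(f, 8), start=1):
    print(f"after {n} values: {h}")
# From step 6 on, the hypothesis is stably {3: 7, 5: 2}: the learner
# has identified f in the limit.
```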

    Relevant Consequence and Empirical Inquiry

    A criterion of adequacy is proposed for theories of relevant consequence. According to the criterion, scientists whose deductive reasoning is limited to some proposed subset of the standard consequence relation must not thereby suffer a reduction in scientific competence. A simple theory of relevant consequence is introduced and shown to satisfy the criterion with respect to a formally defined paradigm of empirical inquiry.

    Synthesizing inductive expertise

    We consider programs that accept descriptions of inductive inference problems and return machines that solve them. Several design specifications for synthesizers of this kind are considered from a recursion-theoretic perspective.

    Uniform Inductive Improvement

    We examine uniform procedures for improving the scientific competence of inductive inference machines. Formally, such procedures are construed as recursive operators. Several senses of improvement are considered, including (a) enlarging the class of functions on which success is certain, and (b) transforming probable success into certain success.
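
    A minimal Python sketch of sense (a), under assumptions made purely for illustration (the paper works with recursive operators on machines, not closures): a uniform procedure that takes any learner M and a fixed total function g, and returns a learner that additionally succeeds on g while retaining M's successes on other functions.

```python
# Toy "uniform improvement" operator: given any learner M and one extra
# total function g, return a learner that also identifies g in the
# limit. If the target differs from g, some finite prefix refutes g,
# after which the new learner behaves exactly like M, so M's successes
# on other functions are preserved. "HYP_G" is a hypothetical
# placeholder for a program index for g.

def improve(M, g, hyp_g="HYP_G"):
    def M_prime(prefix):
        if all(v == g(x) for x, v in enumerate(prefix)):
            return hyp_g               # data still consistent with g
        return M(prefix)               # g refuted: defer to M
    return M_prime

# Example: a learner for constant functions, improved to also learn
# the identity function.
const_learner = lambda p: ("CONST", p[0] if p else 0)
better = improve(const_learner, lambda x: x)
print(better([0, 1, 2]))   # consistent with identity -> 'HYP_G'
print(better([5, 5, 5]))   # refutes identity -> ('CONST', 5)
```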