
    Reflective inductive inference of recursive functions

    In this paper, we investigate reflective inductive inference of recursive functions. A reflective IIM is a learning machine that is additionally able to assess its own competence. First, we formalize reflective learning from arbitrary, and from canonical, example sequences. Here, we arrive at four different types of reflection: reflection in the limit, optimistic, pessimistic and exact reflection. Then, we compare the learning power of reflective IIMs with each other as well as with that of standard IIMs for learning in the limit, for consistent learning of three different types, and for finite learning.
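
    As a rough illustration of the self-assessment idea only (not the paper's formal definitions), the sketch below shows a toy learner for the class of constant functions that, alongside each hypothesis, reports whether the examples seen so far still lie within its competence; the class, the function and the verdicts are hypothetical.

# Toy sketch (hypothetical): a learner for constant functions that also assesses
# its own competence on the examples seen so far. Real reflective IIMs are defined
# over arbitrary/canonical example sequences and four types of reflection.

def constant_learner(examples):
    """examples: list of (x, f(x)) pairs seen so far.
    Returns (hypothesis, competent): the guessed constant value (or None)
    and a self-assessment of whether the target can still be learned."""
    if not examples:
        return None, True                  # no data yet: still within competence
    values = {y for _, y in examples}
    if len(values) == 1:
        return values.pop(), True          # data consistent with a constant function
    return None, False                     # target cannot be constant: declare incompetence

print(constant_learner([(0, 5), (1, 5)]))  # (5, True)
print(constant_learner([(0, 5), (1, 7)]))  # (None, False)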

    Inductive inference of recursive functions: complexity bounds

    This survey includes principal results on the complexity of inductive inference for recursively enumerable classes of total recursive functions. Inductive inference is a process of finding an algorithm from sample computations. When the given class of functions is recursively enumerable, it is easy to define a natural complexity measure for inductive inference, namely, the worst-case mindchange number for the first n functions in the given class. Of course, the complexity depends not only on the class but also on the numbering, i.e. which function is the first, which one is the second, etc. It turns out that, if the result of inference is a Gödel number, then the complexity of inference may vary between log₂n + o(log₂n) and an arbitrarily slow-growing recursive function. If the result of the inference is an index in the numbering of the recursively enumerable class, then the complexity may go up to const · n. Additionally, effects previously found in Kolmogorov complexity theory are discovered in the complexity of inductive inference as well.
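
    For intuition about the complexity measure, the sketch below counts the mind changes of a made-up learner on a finite prefix of a function's graph; the survey's measure is the worst case of this count over the first n functions of the class's numbering.

# Hypothetical illustration of mindchange counting: run a learner on longer and
# longer prefixes of a function's graph and count how often its hypothesis changes.
# (The survey's complexity measure takes the worst case over the first n functions.)

def count_mind_changes(learner, graph_prefix):
    changes, last = 0, None
    for n in range(1, len(graph_prefix) + 1):
        hypothesis = learner(graph_prefix[:n])
        if last is not None and hypothesis != last:
            changes += 1
        last = hypothesis
    return changes

# Toy learner (not from the survey): its hypothesis is the largest value seen so far.
toy_learner = lambda examples: max(y for _, y in examples)

prefix = [(0, 3), (1, 3), (2, 7), (3, 7)]
print(count_mind_changes(toy_learner, prefix))   # 1 (hypothesis changes from 3 to 7)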

    Tradeoffs in the inductive inference of nearly minimal size programs

    Inductive inference machines are algorithmic devices which attempt to synthesize (in the limit) programs for a function while they examine more and more of the graph of the function. There are many possible criteria of success. We study the inference of nearly minimal size programs. Our principal results imply that nearly minimal size programs can be inferred (in the limit) without loss of inferring power, provided we are willing to tolerate a finite, but not uniformly bounded, number of anomalies in the synthesized programs. On the other hand, there is a severe reduction of inferring power in inferring nearly minimal size programs if the maximum number of anomalies allowed is any uniform constant. We obtain a general characterization of the classes of recursive functions which can be synthesized by inferring nearly minimal size programs with anomalies. We also obtain similar results for Popperian inductive inference machines. The exact tradeoffs between mind change bounds on inductive inference machines and anomalies in synthesized programs are obtained. The techniques of recursive function theory, including the recursion theorem, are employed.

    Uniform Inductive Improvement

    We examine uniform procedures for improving the scientific competence of inductive inference machines. Formally, such procedures are construed as recursive operators. Several senses of improvement are considered, including (a) enlarging the class of functions on which success is certain, and (b) transforming probable success into certain success.

    Combining Models of Approximation with Partial Learning

    In Gold's framework of inductive inference, the model of partial learning requires the learner to output exactly one correct index for the target object, and only the target object, infinitely often. Since infinitely many of the learner's hypotheses may be incorrect, it is not obvious whether a partial learner can be modified to "approximate" the target object. Fulk and Jain (Approximate inference and scientific method. Information and Computation 114(2):179-191, 1994) introduced a model of approximate learning of recursive functions. The present work extends their research and solves an open problem of Fulk and Jain by showing that there is a learner which approximates and partially identifies every recursive function by outputting a sequence of hypotheses which, in addition, are almost all finite variants of the target function. The subsequent study is dedicated to the question of how these findings generalise to the learning of r.e. languages from positive data. Here, three variants of approximate learning are introduced and investigated with respect to the question of whether they can be combined with partial learning. Following the line of Fulk and Jain's research, further investigations provide conditions under which partial language learners can eventually output only finite variants of the target language. The combinability of other partial learning criteria is also briefly studied.
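
    Stated formally (in our own notation, not the paper's), the partial learning criterion mentioned above requires that, on the graph of the target function f, the learner P outputs exactly one index infinitely often and that this index is correct:

% Partial learning of a recursive function f (notation assumed, not the paper's):
% exactly one index is output infinitely often, and it computes f.
\[
  \{\, e \;:\; P \text{ outputs } e \text{ infinitely often on } f \,\}
  \;=\; \{\, e_0 \,\}
  \quad\text{for some } e_0 \text{ with } \varphi_{e_0} = f .
\]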

    A Comparative Study of Coq and HOL

    This paper illustrates the differences between the styles of theory mechanisation in Coq and in HOL. This comparative study is based on the mechanisation of fragments of the theory of computation in these systems. Examples from these implementations are given to support some of the arguments discussed in this paper. The mechanisms for specifying definitions and for theorem proving are discussed separately, building in parallel two pictures of the different approaches to mechanisation taken by these systems.

    A Bi-Directional Refinement Algorithm for the Calculus of (Co)Inductive Constructions

    The paper describes the refinement algorithm for the Calculus of (Co)Inductive Constructions (CIC) implemented in the interactive theorem prover Matita. The refinement algorithm is in charge of giving a meaning to the terms, types and proof terms directly written by the user or generated by using tactics, decision procedures or general automation. The terms are written in an "external syntax", meant to be user friendly, that allows omission of information, untyped binders and a certain liberal use of user-defined sub-typing. The refiner modifies the terms to obtain related well-typed terms in the internal syntax understood by the kernel of the ITP. In particular, it acts as a type inference algorithm when all the binders are untyped. The proposed algorithm is bi-directional: given a term in external syntax and a type expected for the term, it propagates as much typing information as possible towards the leaves of the term. Traditional mono-directional algorithms, instead, proceed in a bottom-up way, inferring the type of a sub-term and comparing (unifying) it with the type expected by its context only at the end. We propose some novel bi-directional rules for CIC that are particularly effective. Among the benefits of bi-directionality are better error message reporting and better inference of dependent types. Moreover, thanks to bi-directionality, the coercion system for sub-typing is more effective, and type inference generates simpler unification problems that are more likely to be solved by the inherently incomplete higher-order unification algorithms implemented. Finally, we introduce in the external syntax the notion of a vector of placeholders, which makes it possible to omit an arbitrary number of arguments at once. Vectors of placeholders allow a trivial implementation of implicit arguments and greatly simplify the implementation of primitive and simple tactics.
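
    To make the contrast between bi-directional and mono-directional algorithms concrete in a much simpler setting than CIC, the sketch below implements the two modes for a tiny simply typed lambda calculus; the term and type representations are invented for the example and have nothing to do with Matita's internals.

# Bidirectional checking for a tiny simply typed lambda calculus (illustration only,
# far simpler than CIC). "infer" synthesises a type bottom-up; "check" pushes an
# expected type towards the leaves, so an untyped binder can get its type from context.

# Types: ("base", name) | ("arrow", dom, cod)
# Terms: ("var", x) | ("lam", x, body) | ("app", f, a) | ("ann", term, type)

def infer(ctx, term):
    tag = term[0]
    if tag == "var":
        return ctx[term[1]]
    if tag == "ann":                        # annotation switches to checking mode
        check(ctx, term[1], term[2])
        return term[2]
    if tag == "app":
        fun_ty = infer(ctx, term[1])
        if fun_ty[0] != "arrow":
            raise TypeError("applying a non-function")
        check(ctx, term[2], fun_ty[1])      # argument is checked against the domain
        return fun_ty[2]
    raise TypeError("cannot infer a type for %r" % (term,))

def check(ctx, term, expected):
    if term[0] == "lam":                    # expected arrow type flows to the untyped binder
        if expected[0] != "arrow":
            raise TypeError("lambda checked against a non-arrow type")
        check({**ctx, term[1]: expected[1]}, term[2], expected[2])
        return
    if infer(ctx, term) != expected:        # otherwise fall back to inference plus comparison
        raise TypeError("type mismatch")

# (lambda x. x) checks against nat -> nat without annotating the binder.
nat = ("base", "nat")
check({}, ("lam", "x", ("var", "x")), ("arrow", nat, nat))
print("ok")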

    Synthesis of Recursive ADT Transformations from Reusable Templates

    Recent work has proposed a promising approach to improving the scalability of program synthesis by allowing the user to supply a syntactic template that constrains the space of potential programs. Unfortunately, creating templates often requires nontrivial effort from the user, which impedes the usability of the synthesizer. We present a solution to this problem in the context of recursive transformations on algebraic data types. Our approach relies on polymorphic synthesis constructs: a small but powerful extension to the language of syntactic templates, which makes it possible to define a program space in a concise and highly reusable manner while at the same time retaining the scalability benefits of conventional templates. This approach enables end users to reuse predefined templates from a library for a wide variety of problems with little effort. The paper also describes a novel optimization that further improves the performance and scalability of the system. We evaluated the approach on a set of benchmarks that most notably includes desugaring functions for lambda calculus, which force the synthesizer to discover Church encodings for pairs and boolean operations.
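
    For a sense of what the desugaring benchmarks ask the synthesizer to discover, the standard Church encodings of booleans and pairs look as follows when written out by hand (textbook definitions, not code from the paper):

# Standard Church encodings of booleans and pairs, written as plain lambdas.
# The lambda-calculus desugaring benchmark forces the synthesizer to rediscover
# definitions equivalent to these.

true  = lambda t: lambda f: t                    # selects the first alternative
false = lambda t: lambda f: f                    # selects the second alternative
if_then_else = lambda b: lambda t: lambda f: b(t)(f)
and_ = lambda p: lambda q: p(q)(p)               # Church conjunction: p and q

pair = lambda a: lambda b: lambda select: select(a)(b)
fst  = lambda p: p(lambda a: lambda b: a)
snd  = lambda p: p(lambda a: lambda b: b)

p = pair(1)(2)
print(fst(p), snd(p))                                  # 1 2
print(if_then_else(and_(true)(false))("yes")("no"))    # no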