
    Combining Models of Approximation with Partial Learning

    In Gold's framework of inductive inference, the model of partial learning requires the learner to output exactly one index infinitely often, and this index must be correct for the target object. Since infinitely many of the learner's hypotheses may be incorrect, it is not obvious whether a partial learner can be modified to "approximate" the target object. Fulk and Jain (Approximate inference and scientific method. Information and Computation 114(2):179--191, 1994) introduced a model of approximate learning of recursive functions. The present work extends their research and solves an open problem of Fulk and Jain by showing that there is a learner which approximates and partially identifies every recursive function by outputting a sequence of hypotheses which, in addition, are almost all finite variants of the target function. The subsequent study is dedicated to the question of how these findings generalise to the learning of r.e. languages from positive data. Three variants of approximate learning are introduced and investigated with respect to whether they can be combined with partial learning. Following the line of Fulk and Jain's research, further investigations provide conditions under which partial language learners can eventually output only finite variants of the target language. Whether other partial learning criteria can be combined in this way is also briefly studied.
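    For orientation, here is a minimal formal sketch of the partial-learning criterion described above; the notation ($\varphi_e$ for the function computed by program $e$, $f[n]$ for the first $n$ values of $f$, $M$ for the learner) is assumed here and may differ from the paper's own formulation:

        \[
        M \text{ partially learns } f \iff \exists!\, e \,\big[\, M(f[n]) = e \text{ for infinitely many } n \,\big] \text{ and this unique } e \text{ satisfies } \varphi_e = f.
        \]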

    Didactiques de l’intercompréhension et enseignement du français en contexte plurilingue

    Intercomprehension is an innovative technique for teaching and learning based on the ability of speakers to quickly master techniques for transferring competences between related languages, principally with respect to comprehension. This methodology relies on activities that contrast with the communicative practices which have become the rule in language and culture teaching, such as translation, contrastive grammar, and an emphasis on writing. The methodological common denominator is that of a plurilingual and pluricultural pedagogy. Learning French thus opens a door to a range of Romance languages spoken by more than 500 million people throughout the world. For French speakers, intercomprehension represents a means of rapid access to related languages and cultures, while at the same time encouraging reflexive observation of the first language. The practice of intercomprehension educates for plurilingualism. It targets the development of a new relationship with languages, by means of an active practice of observation, which makes it possible to justify the acquisition of partial competences in a language as a valid goal for learning.

    Computabilities of Validity and Satisfiability in Probability Logics over Finite and Countable Models

    The $\epsilon$-logic (called $\epsilon E$-logic in this paper) of Kuyper and Terwijn is a variant of first-order logic with the same syntax, in which the models are equipped with probability measures and in which the $\forall x$ quantifier is interpreted as "there exists a set $A$ of measure $\ge 1 - \epsilon$ such that for each $x \in A$, ...". Previously, Kuyper and Terwijn proved that the general satisfiability and validity problems for this logic are, i) for rational $\epsilon \in (0, 1)$, respectively $\Sigma^1_1$-complete and $\Pi^1_1$-hard, and ii) for $\epsilon = 0$, respectively decidable and $\Sigma^0_1$-complete. The adjective "general" here means "uniformly over all languages". We extend these results to the scenario of finite models. In particular, we show that the problems of satisfiability by and validity over finite models in $\epsilon E$-logic are, i) for rational $\epsilon \in (0, 1)$, respectively $\Sigma^0_1$- and $\Pi^0_1$-complete, and ii) for $\epsilon = 0$, respectively decidable and $\Pi^0_1$-complete. Although partial results toward the countable case are also achieved, the computability of $\epsilon E$-logic over countable models remains largely unsolved. In addition, most of the results, both of this paper and of Kuyper and Terwijn, do not apply to individual languages with a finite number of unary predicates; reducing this requirement continues to be a major point of research. On the positive side, we derive the decidability of the corresponding problems for monadic relational languages, that is, equality- and function-free languages with finitely many unary and no other predicates. This result holds for all three of the unrestricted, countable, and finite model cases. Applications in computational learning theory, weighted graphs, and neural networks are discussed in the context of these decidability and undecidability results.
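    As a hedged illustration of the quantifier clause just described (the notation $\mathcal{M} \models_\epsilon$, the domain $|\mathcal{M}|$, and the measure $\mu$ are assumed here; Kuyper and Terwijn's full satisfaction relation contains further clauses):

        \[
        \mathcal{M} \models_\epsilon \forall x\, \varphi(x) \quad\Longleftrightarrow\quad \text{there is a measurable } A \subseteq |\mathcal{M}| \text{ with } \mu(A) \ge 1 - \epsilon \text{ such that } \mathcal{M} \models_\epsilon \varphi(a) \text{ for all } a \in A.
        \]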

    Inductive Inference and Reverse Mathematics

    The present work investigates inductive inference from the perspective of reverse mathematics, a framework which relates the proof strength of theorems and axioms throughout many areas of mathematics in an interdisciplinary way. It looks at basic notions of learnability, including Angluin's tell-tale condition and its variants for learning in the limit and for conservative learning. Furthermore, the more general criterion of partial learning is investigated. These notions are studied in the reverse-mathematics context for uniformly and weakly represented families of languages. The results are stated in terms of axioms referring to domination and induction strength.
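    For reference, a short sketch of Angluin's tell-tale condition mentioned above, in its classical formulation for an indexed family $(L_i)_{i \in \mathbb{N}}$ of languages; the paper studies which axioms are needed to prove such characterizations for the representations it considers:

        \[
        \text{for every } i \text{ there is a finite "tell-tale" } T_i \subseteq L_i \text{ such that no } j \text{ satisfies } T_i \subseteq L_j \subsetneq L_i.
        \]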

    An evaluation of the partial immersion project at St. Aloysius College Junior School


    A Theory of Formal Synthesis via Inductive Learning

    Formal synthesis is the process of generating a program satisfying a high-level formal specification. In recent times, effective formal synthesis methods have been proposed based on the use of inductive learning. We refer to this class of methods, which learn programs from examples, as formal inductive synthesis. In this paper, we present a theoretical framework for formal inductive synthesis. We discuss how formal inductive synthesis differs from traditional machine learning. We then describe oracle-guided inductive synthesis (OGIS), a framework that captures a family of synthesizers which operate by iteratively querying an oracle. An instance of OGIS that has had much practical impact is counterexample-guided inductive synthesis (CEGIS). We present a theoretical characterization of CEGIS for learning any program that computes a recursive language. In particular, we analyze the relative power of CEGIS variants in which the types of counterexamples generated by the oracle vary. We also consider the impact of bounded versus unbounded memory available to the learning algorithm. In the special case where the universe of candidate programs is finite, we relate the speed of convergence to the notion of teaching dimension studied in machine learning theory. Altogether, the results of the paper take a first step towards a theoretical foundation for the emerging field of formal inductive synthesis.
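    To make the CEGIS loop mentioned above concrete, here is a minimal Python sketch under stated assumptions: the callables synthesize and verify, the example list, and the iteration bound are placeholders introduced for illustration and are not the paper's formalization.

        # Minimal CEGIS-style loop: a learner proposes candidates consistent with
        # the examples seen so far, and a verification oracle either accepts the
        # candidate or returns a counterexample that is added to the examples.
        def cegis(synthesize, verify, initial_examples=(), max_iters=1000):
            examples = list(initial_examples)
            for _ in range(max_iters):
                candidate = synthesize(examples)    # hypothetical learner: a candidate consistent with examples, or None
                if candidate is None:
                    return None                     # no candidate in the search space fits the examples
                counterexample = verify(candidate)  # hypothetical oracle: None if correct, else a counterexample
                if counterexample is None:
                    return candidate                # candidate meets the full specification
                examples.append(counterexample)     # unbounded-memory variant: keep every counterexample
            return None                             # iteration budget exhausted

    The variants analyzed in the paper differ, for example, in what kind of counterexample the oracle may return and in whether the learner may retain all past counterexamples (bounded versus unbounded memory), which this unbounded-memory sketch does.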