4 research outputs found

    On an open problem in classification of languages


    Inductive Inference Machines That Can Refute Hypothesis Spaces

    No full text
    Proc. 4th Intern. Workshop on Algorithmic Learning Theory, Lecture Notes in Artificial Intelligence 744, 123-136, 1993. Revised: January 1994.

    This paper intends to give a theoretical foundation of machine discovery from examples. We point out that the essence of a computational logic of scientific discovery, or a logic of machine discovery, is the refutability of entire spaces of hypotheses. We discuss this issue in the framework of inductive inference of length-bounded elementary formal systems (EFS's, for short), which are a kind of logic program over strings of characters and correspond to context-sensitive grammars in the Chomsky hierarchy. We first present some characterization theorems on inductive inference machines that can refute hypothesis spaces. Then we show differences between our inductive inference and related inferences such as those under the criteria of reliable identification, finite identification, and identification in the limit. Finally, we show that for any n, the class, i.e. hypothesis space, of length-bounded EFS's with at most n axioms is inferable in our sense, that is, the class is refutable by a consistently working inductive inference machine. This means that sufficiently large hypothesis spaces are identifiable and refutable.
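    The core idea of a consistently working machine that can refute its hypothesis space can be illustrated with a toy sketch (this is not the paper's EFS setting; the hypothesis space, predicates, and names below are invented for illustration): on each labelled example the learner outputs some hypothesis consistent with all data seen so far, and once every hypothesis in the space has been contradicted it refutes the whole space instead of conjecturing forever.

    ```python
    # Toy sketch of a refuting inductive inference machine over a
    # finite hypothesis space (illustrative only, not the EFS framework).

    def make_refuting_learner(hypotheses):
        """hypotheses: list of (name, predicate) pairs, where
        predicate(x) -> bool says whether string x is in the language."""
        data = []  # labelled examples (x, in_language) seen so far

        def learn(example):
            data.append(example)
            for name, pred in hypotheses:
                # Output the first hypothesis consistent with all data.
                if all(pred(x) == label for x, label in data):
                    return name
            return "REFUTED"  # every hypothesis contradicted: refute the space

        return learn

    # Hypothetical space: strings of a's only, or strings of b's only.
    space = [("a*", lambda s: set(s) <= {"a"}),
             ("b*", lambda s: set(s) <= {"b"})]
    learner = make_refuting_learner(space)
    print(learner(("aa", True)))   # consistent conjecture: a*
    print(learner(("ab", True)))   # no hypothesis fits any more: REFUTED
    ```

    The refutation step is exactly what distinguishes this behaviour from ordinary identification in the limit: the machine commits to saying, in finite time, that the target lies outside the space.
    
    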

    On Learning of Functions Refutably

    Learning of recursive functions refutably means, informally, that for every recursive function the learning machine has either to learn this function or to refute it, that is, to signal that it is not able to learn it. Three modes of making the notion of refuting precise are considered. We show that the corresponding types of learning refutably are of strictly increasing power, where already the most stringent of them turns out to be of remarkable topological and algorithmic richness. Furthermore, all these types are closed under union, though to different degrees. These types are also shown to differ with respect to their intrinsic complexity; two of them do not contain function classes that are "most difficult" to learn, while the third one does. Moreover, we present several characterizations for these types of learning refutably. Some of these characterizations make clear where the refuting ability of the corresponding learning machines comes from and how it can be realized in general.

    For learning with anomalies refutably, we show that several results from standard learning without refutation also hold in the refutable setting. From this we derive some hierarchies for refutable learning. Finally, we prove that in general one cannot trade stricter refutability constraints for more liberal learning criteria.
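    The function-learning variant of refutation can be sketched in the same toy style (a minimal illustration under invented assumptions, not one of the paper's three precise modes): the learner reads successive values f(0), f(1), ... and must either converge to a correct program from its class or signal refutation once f provably lies outside the class. Here the hypothetical class is the constant functions n -> c for c in 0..2.

    ```python
    # Toy sketch of learning functions refutably over a tiny class of
    # constant functions (illustrative only).

    def refutable_function_learner(values):
        """values: the stream f(0), f(1), ...; yields a conjecture after
        each value, or 'REFUTED' once no class member remains."""
        seen = []
        for v in values:
            seen.append(v)
            # Constants c in 0..2 still consistent with all values seen.
            candidates = [c for c in range(3)
                          if all(x == c for x in seen)]
            yield f"const_{candidates[0]}" if candidates else "REFUTED"

    # f constant 2: the learner converges to const_2.
    print(list(refutable_function_learner([2, 2, 2])))
    # f(n) = n: after f(1) = 1 no constant fits, so the class is refuted.
    print(list(refutable_function_learner([0, 1, 2])))
    ```

    Once `REFUTED` is emitted it is never retracted, mirroring the requirement that a refutation signal be a definite, finite-time commitment rather than a revisable conjecture.
    
    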
