
    A Theory of Formal Synthesis via Inductive Learning

    Formal synthesis is the process of generating a program satisfying a high-level formal specification. In recent years, effective formal synthesis methods have been proposed based on the use of inductive learning. We refer to this class of methods, which learn programs from examples, as formal inductive synthesis. In this paper, we present a theoretical framework for formal inductive synthesis. We discuss how formal inductive synthesis differs from traditional machine learning. We then describe oracle-guided inductive synthesis (OGIS), a framework that captures a family of synthesizers that operate by iteratively querying an oracle. An instance of OGIS that has had much practical impact is counterexample-guided inductive synthesis (CEGIS). We present a theoretical characterization of CEGIS for learning any program that computes a recursive language. In particular, we analyze the relative power of CEGIS variants in which the type of counterexample generated by the oracle varies. We also consider the impact of bounded versus unbounded memory available to the learning algorithm. In the special case where the universe of candidate programs is finite, we relate the speed of convergence to the notion of teaching dimension studied in machine learning theory. Altogether, the results of the paper take a first step towards a theoretical foundation for the emerging field of formal inductive synthesis.
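
    To make the CEGIS loop concrete, here is a minimal sketch under illustrative assumptions (a finite candidate space, a black-box specification, and an exhaustive verification oracle; practical CEGIS implementations typically use an SMT solver as the oracle):

```python
# A minimal CEGIS sketch. The candidate space, specification, and exhaustive
# "oracle" below are illustrative assumptions, not the paper's setup.

def cegis(candidates, spec, universe):
    """Learn a program agreeing with `spec` on every input in `universe`."""
    examples = []                      # counterexamples gathered so far
    while True:
        # Learner: propose any candidate consistent with the examples.
        consistent = [c for c in candidates
                      if all(c(x) == y for x, y in examples)]
        if not consistent:
            return None                # no candidate matches the spec
        guess = consistent[0]
        # Verification oracle: search for a counterexample input.
        cex = next((x for x in universe if guess(x) != spec(x)), None)
        if cex is None:
            return guess               # verified on the whole universe
        examples.append((cex, spec(cex)))

# Toy use: synthesize a threshold test equivalent to the spec `x >= 7`.
thresholds = [(lambda x, t=t: x >= t) for t in range(10)]
prog = cegis(thresholds, lambda x: x >= 7, range(10))
print(prog(6), prog(7))                # False True
```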

    Subjects, Models, Languages, Transformations

    Discussions about model-driven approaches tend to be hampered by terminological confusion. This is at least partially caused by a lack of formal precision in defining the basic concepts, including that of "model" and "thing being modelled" - which we call subject in this paper. We propose a minimal criterion that a model should fulfill: essentially, it should come equipped with a clear and unambiguous membership test; in other words, a notion of which subjects it models. We then go on to discuss a certain class of models of models, which we call languages: apart from defining their own membership test, they also determine membership for their members. Finally, we introduce transformations on each of these layers: a subject transformation is essentially a pair of subjects, a model transformation is both a pair of models and a model of pairs (namely, subject transformations), and a language transformation is both a pair of languages and a language of model transformations. We argue that our framework has the benefits of formal precision (there can be no doubt about whether something satisfies our criteria for being a model, a language or a transformation) and minimality (it is hard to imagine a case of modelling or transformation not having the characteristics that we propose).
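
    The membership-test criterion translates directly into code. The sketch below uses illustrative names and an integer universe of subjects, neither of which comes from the paper:

```python
# A hedged sketch of the minimal criterion: a model is exactly a membership
# test over subjects. All names here are illustrative assumptions.

from typing import Callable

Subject = int                           # stand-in universe of subjects
Model = Callable[[Subject], bool]       # a model *is* its membership test

evens: Model = lambda s: s % 2 == 0     # a model of the even numbers

# Transformations per layer, as in the abstract: a subject transformation is
# a pair of subjects, and a model transformation is both a pair of models and
# a model of (i.e. a membership test over) such pairs.
SubjectTransformation = tuple[Subject, Subject]
ModelTransformation = Callable[[SubjectTransformation], bool]

doubling: ModelTransformation = lambda p: p[1] == 2 * p[0]

print(evens(4))            # True: 4 is among the modelled subjects
print(doubling((3, 6)))    # True: (3, 6) is a doubling step
```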

    Learning Moore Machines from Input-Output Traces

    The problem of learning automata from example traces (but no equivalence or membership queries) is fundamental in automata learning theory and practice. In this paper we study this problem for finite-state machines with inputs and outputs, and in particular for Moore machines. We develop three algorithms for solving this problem: (1) the PTAP algorithm, which transforms a set of input-output traces into an incomplete Moore machine and then completes the machine with self-loops; (2) the PRPNI algorithm, which uses the well-known RPNI algorithm for automata learning to learn a product of automata encoding a Moore machine; and (3) the MooreMI algorithm, which directly learns a Moore machine using PTAP extended with state merging. We prove that MooreMI has the fundamental identification-in-the-limit property. We also compare the algorithms experimentally in terms of the size of the learned machine and several notions of accuracy introduced in this paper. Finally, we compare with OSTIA, an algorithm that learns a more general class of transducers, and find that OSTIA generally does not learn a Moore machine, even when fed with a characteristic sample.
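
    As a rough illustration of step (1), the following sketch builds a prefix-tree Moore machine from traces and completes it with self-loops; the trace format (one output per visited state, so outputs are one longer than inputs) is an assumption and may differ from the paper's:

```python
# A hedged sketch of the prefix-tree-plus-self-loops idea described above.

def ptap(traces, alphabet):
    outputs = {(): None}               # state (input prefix) -> output label
    trans = {}                         # (state, symbol) -> successor state
    for ins, outs in traces:
        prefix = ()
        outputs[prefix] = outs[0]      # output of the initial state
        for sym, out in zip(ins, outs[1:]):
            nxt = prefix + (sym,)
            trans[(prefix, sym)] = nxt
            outputs[nxt] = out         # assumes the traces are consistent
            prefix = nxt
    # Completion: every undefined transition becomes a self-loop.
    for state in list(outputs):
        for sym in alphabet:
            trans.setdefault((state, sym), state)
    return outputs, trans

outs, trans = ptap([("ab", "010"), ("aa", "011")], "ab")
print(outs[("a",)])                    # '1'
print(trans[(("a", "b"), "a")])        # self-loop back to ('a', 'b')
```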

    Classifying the Arithmetical Complexity of Teaching Models

    This paper classifies the complexity of various teaching models by their position in the arithmetical hierarchy. In particular, we determine the arithmetical complexity of the index sets of the following classes: (1) the class of uniformly r.e. families with finite teaching dimension, and (2) the class of uniformly r.e. families with finite positive recursive teaching dimension witnessed by a uniformly r.e. teaching sequence. We also derive the arithmetical complexity of several other decision problems in teaching, such as the problem of deciding, given an effective coding $\{\mathcal{L}_0, \mathcal{L}_1, \mathcal{L}_2, \ldots\}$ of all uniformly r.e. families, any $e$ such that $\mathcal{L}_e = \{L^e_0, L^e_1, \ldots\}$, and any $i$ and $d$, whether or not the teaching dimension of $L^e_i$ with respect to $\mathcal{L}_e$ is upper bounded by $d$.

    Comment: 15 pages, in International Conference on Algorithmic Learning Theory, 201
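
    For intuition about the central notion, the teaching dimension of a concept in a *finite* class can be found by brute force: it is the size of the smallest labeled sample consistent with that concept alone. The sketch below is illustrative and very unlike the paper's uniformly r.e. setting, where this quantity is not computable in general:

```python
# Brute-force teaching dimension for a finite concept class (illustrative).

from itertools import combinations

def teaching_dim(target, concepts, instances):
    """Size of the smallest labeled sample consistent only with `target`."""
    for k in range(len(instances) + 1):
        for sample in combinations(instances, k):
            labels = {x: (x in target) for x in sample}
            consistent = [c for c in concepts
                          if all((x in c) == y for x, y in labels.items())]
            if consistent == [target]:
                return k
    return None                        # target is not in the class

X = range(3)
C = [set(), {0}, {0, 1}, {0, 1, 2}]    # a chain of concepts over X
print([teaching_dim(c, C, X) for c in C])   # [1, 2, 2, 1]
```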

    Self-growing neural network architecture using crisp and fuzzy entropy

    The paper briefly describes the self-growing neural network algorithm, CID2, which makes decision trees equivalent to hidden layers of a neural network. The algorithm generates a feedforward architecture using crisp and fuzzy entropy measures. The results of a real-life recognition problem of distinguishing defects in a glass ribbon, and of a benchmark problem of differentiating two spirals, are shown and discussed.
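
    For intuition about the two kinds of measure named above, the sketch below computes a crisp (Shannon) entropy over class counts and a De Luca-Termini style fuzzy entropy over membership degrees; the fuzzy formula is one common choice and is an assumption, not necessarily the exact measure used by CID2:

```python
# Illustrative crisp and fuzzy entropy measures for node splitting.

import math

def crisp_entropy(counts):
    """Shannon entropy of a class-count distribution at a node."""
    total = sum(counts)
    ps = [c / total for c in counts if c]
    return -sum(p * math.log2(p) for p in ps)

def fuzzy_entropy(memberships):
    """De Luca-Termini style entropy of membership degrees in [0, 1]."""
    def h(u):
        if u in (0.0, 1.0):
            return 0.0                 # fully crisp elements contribute 0
        return -u * math.log2(u) - (1 - u) * math.log2(1 - u)
    return sum(h(u) for u in memberships)

print(crisp_entropy([5, 5]))           # 1.0: a maximally impure split
print(fuzzy_entropy([0.1, 0.9, 0.5]))  # the fuzziest element dominates
```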