17 research outputs found

    Are There Good Mistakes? A Theoretical Analysis of CEGIS

    Counterexample-guided inductive synthesis (CEGIS) synthesizes programs from a candidate space of programs. The technique is guaranteed to terminate and synthesize the correct program if the space of candidate programs is finite, but it may or may not terminate with the correct program if the candidate space is infinite. In this paper, we perform a theoretical analysis of the CEGIS technique. We investigate whether the set of candidate spaces for which CEGIS can synthesize the correct program depends on the counterexamples used in inductive synthesis, that is, whether there are "good mistakes" that would increase the synthesis power. In particular, we ask whether using minimal counterexamples instead of arbitrary counterexamples expands the set of candidate spaces for which inductive synthesis can successfully synthesize a correct program. We consider two kinds of counterexamples: minimal counterexamples and history-bounded counterexamples, where the history-bounded counterexample used in any iteration of CEGIS is bounded by the examples used in previous iterations of inductive synthesis. We examine the relative change in the power of inductive synthesis in both cases. We show that the synthesis technique using minimal counterexamples (MinCEGIS) has the same synthesis power as CEGIS, whereas the technique using history-bounded counterexamples (HCEGIS) has power incomparable to that of CEGIS: neither dominates the other. (In Proceedings SYNT 2014, arXiv:1407.493)
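
    A minimal Python sketch of the CEGIS loop the abstract describes. The candidate space, specification, and verifier below are illustrative placeholders rather than the paper's formal model: candidates are unary integer functions over a small finite domain, and a counterexample is an input on which a candidate violates the specification.

    DOMAIN = range(16)

    def spec(x, y):
        # Specification: the synthesized function must double its input.
        return y == 2 * x

    def verify(candidate):
        # Return a counterexample input, or None if the candidate is correct.
        for x in DOMAIN:
            if not spec(x, candidate(x)):
                return x
        return None

    def cegis(candidates):
        # Alternate between inductive synthesis and verification.
        examples = []  # counterexamples gathered so far
        while True:
            # Inductive step: pick any candidate consistent with the examples.
            consistent = [c for c in candidates
                          if all(spec(x, c(x)) for x in examples)]
            if not consistent:
                return None  # candidate space exhausted
            candidate = consistent[0]
            # Verification step: search for a new counterexample.
            cex = verify(candidate)
            if cex is None:
                return candidate  # verified correct
            examples.append(cex)

    # The candidate space here is finite, so termination is guaranteed.
    space = [lambda x, k=k: k * x for k in range(5)]
    result = cegis(space)
    print(result(3))  # 6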

    An optimally data efficient isomorphism inference algorithm

    The time, space, and data complexity of an optimally data-efficient isomorphism identification algorithm are presented. The data complexity, the amount of data required for an inference algorithm to terminate, is analyzed and shown to be the minimum possible over all isomorphism inference algorithms. The minimum data requirement is shown to be ⌈log₂(n)⌉, and a method for constructing this minimal sequence of data is presented. The average data requirement is shown to be approximately 2 log₂(n). The time complexity is O(n² log₂(n)) and the space requirement is O(n²).
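
    The ⌈log₂(n)⌉ figure reflects a counting argument: each data item can at best halve the set of hypotheses still consistent with the data, so distinguishing among n possibilities needs at least ⌈log₂(n)⌉ items. A small Python sketch of that argument follows; the hypothesis class and oracle are invented for illustration and are not the paper's isomorphism-inference algorithm.

    import math

    # Identify one unknown hypothesis among n candidates when each data
    # item reports whether the target lies in a chosen half. This mirrors
    # the counting argument behind the ceil(log2(n)) bound.
    def identify(candidates, oracle):
        used = 0  # number of data items consumed
        while len(candidates) > 1:
            half = candidates[:len(candidates) // 2]
            used += 1
            candidates = half if oracle(half) else candidates[len(half):]
        return candidates[0], used

    n, target = 100, 37
    answer, used = identify(list(range(n)), lambda half: target in half)
    assert answer == target
    print(used, math.ceil(math.log2(n)))  # used is at most ceil(log2(n)) = 7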

    On the complexity of minimum inference of regular sets

    We prove results concerning the computational tractability of some problems related to determining minimum realizations of finite samples of regular sets by finite automata and regular expressions.
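
    To make the problem concrete, here is an illustrative Python brute force for one such problem: finding a smallest DFA consistent with a finite labeled sample. The sample is invented for illustration, and the exhaustive search is exponential in the number of states, which is the kind of cost that tractability results for this problem address.

    from itertools import product

    ALPHABET = "01"
    SAMPLE = {"": True, "0": False, "1": False,
              "00": True, "11": True, "01": False}

    def accepts(delta, accepting, word):
        state = 0  # state 0 is the start state
        for ch in word:
            state = delta[(state, ch)]
        return state in accepting

    def min_dfa(sample):
        k = 1
        while True:  # try successively larger state counts
            states = range(k)
            # Enumerate every transition function on k states ...
            for trans in product(states, repeat=k * len(ALPHABET)):
                delta = {(q, a): trans[q * len(ALPHABET) + i]
                         for q in states for i, a in enumerate(ALPHABET)}
                # ... and every choice of accepting states.
                for bits in product((False, True), repeat=k):
                    accepting = {q for q in states if bits[q]}
                    if all(accepts(delta, accepting, w) == label
                           for w, label in sample.items()):
                        return k
            k += 1

    print(min_dfa(SAMPLE))  # 3: no 1- or 2-state DFA fits this sample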

    Algebraic properties of operator precedence languages

    This paper presents new results on the algebraic ordering properties of operator precedence grammars and languages. The work was motivated by, and applied to, the mechanical acquisition (inference) of operator precedence grammars. A new normal form of operator precedence grammars, called homogeneous, is defined. An algorithm is given for constructing a grammar, called the max-grammar, that generates the largest language compatible with a given precedence matrix. The class of free grammars is then introduced as a special subclass of operator precedence grammars. It is shown that the operator precedence languages corresponding to a given precedence matrix form a Boolean algebra.
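
    For context, a precedence matrix assigns to each ordered pair of terminals at most one of three relations: the first yields precedence to the second, the two have equal precedence, or the first takes precedence. A small Python sketch with the standard textbook matrix for arithmetic expressions follows; it is illustrative and not taken from this paper.

    # M[(a, b)] is the relation between adjacent terminals a and b;
    # absent pairs have no relation, which a parser treats as an error.
    YIELDS, EQUALS, TAKES = "<.", "=.", ".>"

    M = {
        ("id", "+"): TAKES,  ("id", "*"): TAKES,  ("id", ")"): TAKES,
        ("+", "id"): YIELDS, ("+", "("): YIELDS,  ("+", "+"): TAKES,
        ("+", "*"): YIELDS,  ("+", ")"): TAKES,
        ("*", "id"): YIELDS, ("*", "("): YIELDS,  ("*", "+"): TAKES,
        ("*", "*"): TAKES,   ("*", ")"): TAKES,
        ("(", "id"): YIELDS, ("(", "("): YIELDS,  ("(", "+"): YIELDS,
        ("(", "*"): YIELDS,  ("(", ")"): EQUALS,
        (")", "+"): TAKES,   (")", "*"): TAKES,   (")", ")"): TAKES,
    }

    # '*' binds tighter than '+': '+' yields precedence to '*', and
    # '*' takes precedence over '+'.
    print(M[("+", "*")], M[("*", "+")])  # <. .>

    In the paper's terms, the max-grammar construction asks, for a fixed matrix like this one, for the largest language generated by any operator precedence grammar compatible with it.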

    Incomputability at the Foundations of Physics (A Study in the Philosophy of Science)

    Language inference from function words

    Language surface structures demonstrate regularities that make it possible to learn a capacity for producing an infinite number of well-formed expressions. This paper outlines a system that uncovers and characterizes regularities through principled wholesale pattern analysis of copious amounts of machine-readable text. The system uses the notion of closed-class lexemes to divide the input into phrases, and from these phrases it infers lexical and syntactic information. The set of closed-class lexemes is derived from the text, and these lexemes are then clustered into functional types. Next, the open-class words are categorized according to how they tend to appear in phrases and then clustered into a smaller number of open-class types. Finally, these types are used to infer, and generalize, grammar rules. Statistical criteria are employed for each of these inference operations. The result is a relatively compact grammar that is guaranteed to cover every sentence in the source text that was used to form it. Closed-class inferencing compares well with current linguistic theories of syntax and offers a wide range of potential applications.
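
    As an illustration of the first step, dividing the input into phrases at closed-class lexemes, here is a minimal Python sketch. The tiny hand-picked closed-class list and the "a function word opens a new phrase" rule are simplifications for illustration; the system described above derives the closed-class set from the text itself.

    CLOSED_CLASS = {"the", "a", "of", "in", "on", "and", "to", "is"}

    def segment(tokens):
        # Segment a token stream into phrases opened by function words.
        phrases, current = [], []
        prev_closed = False
        for tok in tokens:
            closed = tok.lower() in CLOSED_CLASS
            # Open a new phrase at a function word, unless we are already
            # inside a run of function words.
            if closed and current and not prev_closed:
                phrases.append(current)
                current = []
            current.append(tok)
            prev_closed = closed
        if current:
            phrases.append(current)
        return phrases

    print(segment("the cat sat on the mat in the sun".split()))
    # [['the', 'cat', 'sat'], ['on', 'the', 'mat'], ['in', 'the', 'sun']]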

    Searching for arguments to support linguistic nativism

    A bibliography on formal languages and related topics
