5,455 research outputs found

    Regular and context-free pattern languages over small alphabets

    Pattern languages are generalisations of the copy language, which is a standard textbook example of a context-sensitive and non-context-free language. In this work, we investigate a counter-intuitive phenomenon: with respect to alphabets of size 2 and 3, pattern languages can be regular or context-free in an unexpected way. For this regularity and context-freeness of pattern languages, we give several sufficient and necessary conditions and improve known results.
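    To make the definition concrete, here is a small illustrative sketch (ours, not taken from the paper): a pattern is a string of terminal symbols and variables, and its language consists of all words obtained by substituting terminal words uniformly for the variables. Under non-erasing semantics, the pattern xx generates exactly the copy language {ww : w non-empty}. The brute-force membership test below, including its name and representation, is an assumption made purely for illustration.

```python
def matches(pattern, word, variables, nonerasing=True):
    """Brute-force test whether `word` is in the pattern language of `pattern`.
    Variables must be replaced uniformly by terminal words (non-empty ones
    under non-erasing semantics)."""
    def backtrack(p_idx, w_idx, assignment):
        if p_idx == len(pattern):
            return w_idx == len(word)
        sym = pattern[p_idx]
        if sym not in variables:                      # terminal symbol
            if w_idx < len(word) and word[w_idx] == sym:
                return backtrack(p_idx + 1, w_idx + 1, assignment)
            return False
        if sym in assignment:                         # variable already bound
            val = assignment[sym]
            if word.startswith(val, w_idx):
                return backtrack(p_idx + 1, w_idx + len(val), assignment)
            return False
        shortest = 1 if nonerasing else 0
        for end in range(w_idx + shortest, len(word) + 1):   # try every binding
            assignment[sym] = word[w_idx:end]
            if backtrack(p_idx + 1, end, assignment):
                return True
            del assignment[sym]
        return False
    return backtrack(0, 0, {})

# The copy language {ww : w non-empty} is the pattern language of the pattern "xx":
print(matches("xx", "abab", variables={"x"}))   # True  (w = "ab")
print(matches("xx", "aba",  variables={"x"}))   # False
```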

    Digraph Complexity Measures and Applications in Formal Language Theory

    We investigate structural complexity measures on digraphs, in particular the cycle rank. This concept is intimately related to a classical topic in formal language theory, namely the star height of regular languages. We explore this connection, and obtain several new algorithmic insights regarding both cycle rank and star height. Among other results, we show that computing the cycle rank is NP-complete, even for sparse digraphs of maximum outdegree 2. Notwithstanding, we provide both a polynomial-time approximation algorithm and an exponential-time exact algorithm for this problem. The former algorithm yields an O((log n)^(3/2))-approximation in polynomial time, whereas the latter yields the optimum solution, and runs in time and space O*(1.9129^n) on digraphs of maximum outdegree at most two. Regarding the star height problem, we identify a subclass of the regular languages for which we can precisely determine the computational complexity of the star height problem. Namely, the star height problem for bideterministic languages is NP-complete, and this holds already for binary alphabets. Then we translate the algorithmic results concerning cycle rank to the bideterministic star height problem, thus giving a polynomial-time approximation as well as a reasonably fast exact exponential algorithm for bideterministic star height.
    Comment: 19 pages, 1 figure
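    For readers unfamiliar with cycle rank, the following sketch (ours, not the paper's) simply evaluates the standard recursive definition by exhaustive search; it is exponential and only illustrates the quantity that the paper's O((log n)^(3/2))-approximation and O*(1.9129^n) exact algorithm compute. It assumes the networkx library for strongly connected components.

```python
import networkx as nx

def cycle_rank(G):
    """Cycle rank by its recursive definition (exponential brute force):
      r(G) = 0 if G is acyclic;
      r(G) = 1 + min over vertices v of r(G - v), if G is strongly connected
             and contains an edge;
      r(G) = max of r over the strongly connected components, otherwise."""
    if nx.is_directed_acyclic_graph(G):
        return 0
    sccs = list(nx.strongly_connected_components(G))
    if len(sccs) == 1:                      # strongly connected, non-acyclic
        return 1 + min(cycle_rank(G.subgraph(G.nodes - {v}).copy())
                       for v in G.nodes)
    return max(cycle_rank(G.subgraph(c).copy()) for c in sccs)

# A directed 3-cycle has cycle rank 1; the complete digraph on 3 vertices
# (without self-loops) has cycle rank 2.
C3 = nx.DiGraph([(0, 1), (1, 2), (2, 0)])
K3 = nx.complete_graph(3, create_using=nx.DiGraph)
print(cycle_rank(C3), cycle_rank(K3))   # 1 2
```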

    The separation problem for regular languages by piecewise testable languages

    Separation is a classical problem in mathematics and computer science. It asks whether, given two sets belonging to some class, it is possible to separate them by another set of a smaller class. We present and discuss the separation problem for regular languages. We then give a direct polynomial time algorithm to check whether two given regular languages are separable by a piecewise testable language, that is, whether a BΣ₁(<) sentence can witness that the languages are indeed disjoint. The proof is a reformulation and a refinement of an algebraic argument already given by Almeida and the second author.
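    As a rough illustration (ours, not from the paper): piecewise testable languages are the finite Boolean combinations of languages of the form Σ*a₁Σ*a₂ ⋯ Σ*aₙΣ*, i.e. membership depends only on which scattered subwords a word contains. The small helper below tests that basic building block; a piecewise testable separator can then be described by which such subwords the words of one language must contain and those of the other must avoid.

```python
def has_scattered_subword(word, subword):
    """Test whether `subword` occurs in `word` as a scattered subword,
    i.e. whether `word` lies in Sigma* s1 Sigma* s2 ... Sigma* sn Sigma*."""
    positions = iter(word)              # `c in positions` consumes the iterator
    return all(c in positions for c in subword)

print(has_scattered_subword("abcabc", "aab"))   # True  (positions 0, 3, 4)
print(has_scattered_subword("abcabc", "ba"))    # True  (positions 1, 3)
print(has_scattered_subword("abc", "cb"))       # False
```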

    In the Maze of Data Languages

    In data languages the positions of strings and trees carry a label from a finite alphabet and a data value from an infinite alphabet. Extensions of automata and logics over finite alphabets have been defined to recognize data languages, both in the string and tree cases. In this paper we describe and compare the complexity and expressiveness of such models to understand which ones are better candidates as regular models.
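    As a minimal illustration (ours, not from the paper), a data word can be represented as a sequence of (label, data value) pairs; the sketch below checks a textbook example of a data language, "no two a-labelled positions carry the same data value", which a one-register automaton can recognise. The names and the representation are assumptions for illustration only.

```python
# A data word: every position carries a label from a finite alphabet and a
# data value from an infinite domain (here: arbitrary integers).
DataWord = list[tuple[str, int]]

def a_values_distinct(w: DataWord) -> bool:
    """Membership test for a textbook data language: no two positions
    labelled 'a' carry the same data value."""
    seen = set()
    for label, value in w:
        if label == "a":
            if value in seen:
                return False
            seen.add(value)
    return True

print(a_values_distinct([("a", 1), ("b", 1), ("a", 2)]))   # True
print(a_values_distinct([("a", 1), ("a", 1)]))             # False
```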

    Speech Recognition by Composition of Weighted Finite Automata

    We present a general framework based on weighted finite automata and weighted finite-state transducers for describing and implementing speech recognizers. The framework allows us to represent uniformly the information sources and data structures used in recognition, including context-dependent units, pronunciation dictionaries, language models and lattices. Furthermore, general but efficient algorithms can be used for combining information sources in actual recognizers and for optimizing their application. In particular, a single composition algorithm is used both to combine in advance information sources such as language models and dictionaries, and to combine acoustic observations and information sources dynamically during recognition.
    Comment: 24 pages, uses psfig.st
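    The toy sketch below (ours, not the paper's implementation) shows the core idea of composition for epsilon-free weighted transducers over the tropical semiring, where weights add along a path; real recognizers use optimized, lazy versions with epsilon handling. All names, the data representation, and the example transducers are illustrative assumptions.

```python
from collections import namedtuple

# A minimal, epsilon-free weighted transducer over the tropical semiring.
# arcs: list of (src, ilabel, olabel, weight, dst)
WFST = namedtuple("WFST", "start finals arcs")

def compose(t1, t2):
    """Compose t1 and t2: an arc (q1, a, b, w1, q1') of t1 and an arc
    (q2, b, c, w2, q2') of t2 yield an arc ((q1,q2), a, c, w1+w2, (q1',q2'))."""
    arcs = []
    for (p1, a, b, w1, q1) in t1.arcs:
        for (p2, b2, c, w2, q2) in t2.arcs:
            if b == b2:                               # output of t1 feeds input of t2
                arcs.append(((p1, p2), a, c, w1 + w2, (q1, q2)))
    start = (t1.start, t2.start)
    finals = {(f1, f2) for f1 in t1.finals for f2 in t2.finals}
    return WFST(start, finals, arcs)

# Toy example: a one-arc "pronunciation" transducer mapping a phone to a word,
# composed with a one-arc "language model" transducer weighting that word.
pron = WFST(start=0, finals={1}, arcs=[(0, "ae", "cat", 0.5, 1)])
lm   = WFST(start=0, finals={1}, arcs=[(0, "cat", "cat", 1.2, 1)])
print(compose(pron, lm).arcs)   # [((0, 0), 'ae', 'cat', 1.7, (1, 1))]
```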