
    Efficient deterministic finite automata split-minimization derived from Brzozowski's algorithm

    Minimization of deterministic finite automata is a classic problem in computer science that is still studied today. In this paper, we relate the different split-minimization methods proposed to date (or yet to be proposed) to the algorithm due to Brzozowski, which has usually been set aside in classifications of DFA minimization algorithms. We first propose a polynomial minimization method derived from a paper by Champarnaud et al. We then show how certain efficiency improvements to this algorithm lead to an algorithm similar to Hopcroft's classic algorithm. The results obtained lead us to propose a characterization of the set of possible splitters.

    García Gómez, P.; López Rodríguez, D.; Vázquez-De-Parga Andrade, M. (2014). Efficient deterministic finite automata split-minimization derived from Brzozowski's algorithm. International Journal of Foundations of Computer Science, 25(6), 679-696. doi:10.1142/S0129054114500282

    References:
    Vázquez de Parga, M., García, P., & López, D. (2013). A polynomial double reversal minimization algorithm for deterministic finite automata. Theoretical Computer Science, 487, 17-22. doi:10.1016/j.tcs.2013.03.005
    Courcelle, B., Niwinski, D., & Podelski, A. (1991). A geometrical view of the determinization and minimization of finite-state automata. Mathematical Systems Theory, 24(1), 117-146. doi:10.1007/bf02090394
    Polák, L. (2005). Minimalizations of NFA using the universal automaton. International Journal of Foundations of Computer Science, 16(05), 999-1010. doi:10.1142/s0129054105003431
    Gries, D. (1973). Describing an algorithm by Hopcroft. Acta Informatica, 2(2). doi:10.1007/bf00264025
    Blum, N. (1996). An O(n log n) implementation of the standard method for minimizing n-state finite automata. Information Processing Letters, 57(2), 65-69. doi:10.1016/0020-0190(95)00199-9
    Knuutila, T. (2001). Re-describing an algorithm by Hopcroft. Theoretical Computer Science, 250(1-2), 333-363. doi:10.1016/s0304-3975(99)00150-
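    The double-reversal idea at the core of Brzozowski's algorithm can be sketched in a few lines of Python. This is only an illustration under an assumed automaton representation of my own (a tuple of states, alphabet, a transition map to sets of states, initial states, and final states); it is not the split-minimization method proposed in the paper.

        # Minimal sketch of Brzozowski's minimization: reverse, determinize,
        # reverse, determinize. Representation and helper names are
        # illustrative, not taken from the paper above.

        def reverse(nfa):
            """Reverse an NFA (states, alphabet, delta, initials, finals),
            where delta maps (state, symbol) -> set of states."""
            states, alphabet, delta, initials, finals = nfa
            rdelta = {}
            for (p, a), qs in delta.items():
                for q in qs:
                    rdelta.setdefault((q, a), set()).add(p)
            return (states, alphabet, rdelta, set(finals), set(initials))

        def determinize(nfa):
            """Accessible subset construction; subset states are frozensets."""
            states, alphabet, delta, initials, finals = nfa
            start = frozenset(initials)
            dstates, ddelta, todo = {start}, {}, [start]
            while todo:
                S = todo.pop()
                for a in alphabet:
                    T = frozenset(q for p in S for q in delta.get((p, a), ()))
                    ddelta[(S, a)] = {T}          # keep the set-valued format
                    if T not in dstates:
                        dstates.add(T)
                        todo.append(T)
            dfinals = {S for S in dstates if S & set(finals)}
            return (dstates, alphabet, ddelta, {start}, dfinals)

        def brzozowski_minimize(automaton):
            """Minimal DFA of L(A) as det(rev(det(rev(A))))."""
            return determinize(reverse(determinize(reverse(automaton))))

    Because each subset construction generates only reachable subset states, the second determinization yields the minimal complete DFA; the empty subset, if generated, acts as its sink state.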

    On the limits of the communication complexity technique for proving lower bounds on the size of minimal NFA’s

    In contrast to the minimization of deterministic finite automata (DFAs), the task of constructing a minimal nondeterministic finite automaton (NFA) for a given NFA is PSPACE-complete. Moreover, there is no polynomial approximation algorithm with a constant approximation ratio for estimating the number of states of a minimal NFA. Since one cannot efficiently estimate the size of a minimal NFA, one should at least ask for mathematical proof methods that help to prove good lower bounds on the size of a minimal NFA for a given regular language. Here we consider the robust and most successful lower bound proof technique, which is based on communication complexity. In this paper it is proved that even a strong generalization of this method fails for some concrete regular languages. "To fail" is meant here in a very strong sense: there is an exponential gap between the size of a minimal NFA and the achievable lower bound for a specific sequence of regular languages. The generalization of the concept of communication protocols is also strong. It is shown that cutting the input word into 2^{O(n^{1/4})} pieces, for a minimal nondeterministic finite automaton of size n, and investigating the necessary communication transfer between these pieces as parties of a multiparty protocol does not suffice to obtain good lower bounds on the size of minimal nondeterministic automata. It seems that for some regular languages one cannot really abstract from the automaton model, which cuts the input word into individual symbols of the alphabet and reads them one by one using its input head.
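    The communication-complexity technique referred to above generalizes the classical fooling-set lower bound on NFA size. As a hedged illustration (the language L = { xx : |x| = n } and all names below are my own example, not taken from the paper), a small checker for the fooling-set condition:

        # Sketch of the extended fooling-set lower bound: if x_i y_i is in L for
        # all i and, for i != j, x_i y_j is not in L or x_j y_i is not in L,
        # then every NFA for L has at least as many states as there are pairs.

        def is_fooling_set(pairs, in_language):
            """Check the extended fooling-set condition for a list of (x, y) pairs."""
            for i, (xi, yi) in enumerate(pairs):
                if not in_language(xi + yi):
                    return False
                for j, (xj, yj) in enumerate(pairs):
                    if i != j and in_language(xi + yj) and in_language(xj + yi):
                        return False
            return True

        n = 3
        in_L = lambda w: len(w) == 2 * n and w[:n] == w[n:]   # L = { xx : |x| = n }
        words = [format(k, f"0{n}b") for k in range(2 ** n)]
        pairs = [(x, x) for x in words]
        assert is_fooling_set(pairs, in_L)                    # NFA size >= 2**n

    The checker only certifies a bound given a candidate set of pairs; it does not search for fooling sets, and the paper's point is precisely that even strong multiparty generalizations of this style of argument can leave an exponential gap for some languages.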

    Application and implementation of transducer tools in answering certain questions about regular languages

    121 leaves : ill. ; 29 cm. Includes abstract. Includes bibliographical references (leaves 117-121).

    In this research, we investigate, refine, and implement algorithmic tools that allow us to answer decision questions about regular languages. We give a thorough presentation of existing algorithmic tools for the following satisfaction questions: whether a given language satisfies a given property described by an input-preserving transducer, which is equivalent to asking whether the language is error-detecting for the channel realized by that transducer; whether a given language is error-correcting for the channel realized by an input-preserving transducer; and whether a given regular language satisfies the code property. In the process, we present an existing algorithm for deciding whether a transducer is functional and an algorithm for translating a normal-form transducer into a real-time transducer. We also introduce our method for producing counterexamples when the answer to a satisfaction question is negative. In addition, we discuss our new method for estimating the edit distance of a regular language via the error-correction property, which is much faster than the existing method of computing the edit distance via error-detection. Finally, we deliver an open implementation of these algorithms and methods through a web interface, I-LaSer, and add the implementation of transducer classes to our copy of the FAdo libraries.
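    The code property mentioned above can be illustrated, for a finite set of words, with the classical Sardinas-Patterson test. This is only a sketch under my own assumptions (plain Python strings, finite input); it is not the transducer-based decision procedure or the FAdo/I-LaSer implementation described in the thesis.

        # Sardinas-Patterson test for unique decodability of a finite set of
        # codewords; names and representation are illustrative only.

        def residual(A, B):
            """A^{-1} B = { w : there is u in A with u + w in B }."""
            return {b[len(a):] for a in A for b in B if b.startswith(a)}

        def is_code(codewords):
            """Return True iff every concatenation of codewords decodes uniquely."""
            C = set(codewords)
            S = residual(C, C) - {""}      # dangling suffixes of codeword pairs
            seen = set()
            while S and frozenset(S) not in seen:
                if "" in S:
                    return False           # some word has two factorizations
                seen.add(frozenset(S))
                S = residual(C, S) | residual(S, C)
            return True

        assert is_code({"0", "10", "11"})          # a prefix code
        assert not is_code({"0", "01", "10"})      # "010" = 0.10 = 01.0

    Each set S is a subset of the finite set of codeword suffixes, so the loop terminates after finitely many iterations.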