
    Search diversification techniques for grammatical inference

    Grammatical Inference (GI) addresses the problem of learning a grammar G from a finite set of strings generated by G. By using GI techniques we want to be able to learn relations between syntactically structured sequences. The process of inferring the target grammar G can easily be posed as a search through a lattice of possible solutions. The vast majority of research in this area focuses on non-monotonic searches, i.e., searches that use the same heuristic function to perform a depth-first search through the lattice until a hypothesis is chosen; EDSM and S-EDSM are prime examples of this technique. In this paper we discuss the introduction of diversification into our search space [5]. By introducing diversification through pairwise incompatible merges, we traverse multiple disjoint paths in the search lattice and obtain better results for the inference process.
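    The idea of seeding disjoint search branches from pairwise incompatible merges can be illustrated with a small sketch. This is not the paper's algorithm: the candidate list, scores, and the "merges sharing a state conflict" incompatibility test below are toy assumptions standing in for the real merge structures.

```python
# Hedged sketch: pick up to k candidate merges that are pairwise
# incompatible with each other, to seed diversified search branches
# that explore disjoint paths in the lattice.

def diversified_seeds(candidates, incompatible, k):
    """Greedily select up to k pairwise-incompatible candidates,
    scanning in descending score order."""
    seeds = []
    for cand in sorted(candidates, key=lambda c: c[1], reverse=True):
        if all(incompatible(cand[0], s[0]) for s in seeds):
            seeds.append(cand)
        if len(seeds) == k:
            break
    return [c[0] for c in seeds]

# Toy data: merges identified by the pair of states they join, with a
# heuristic score; here two merges are deemed incompatible when they
# involve a common state (an illustrative proxy, not the real test).
candidates = [(("q1", "q2"), 9), (("q1", "q3"), 8),
              (("q4", "q5"), 7), (("q2", "q6"), 5)]
shares_state = lambda a, b: bool(set(a) & set(b))

print(diversified_seeds(candidates, shares_state, 3))
# -> [('q1', 'q2'), ('q1', 'q3')]
```

    Because the chosen merges cannot all belong to one path, each seed starts a genuinely different traversal of the lattice, which is the diversification effect the abstract describes.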

    Comparison of two recent algorithms for grammatical inference of regular languages using non-deterministic automata

    The development of new algorithms that are both convergent and efficient is a necessary step toward the productive use of grammatical inference on real, larger problems. This paper presents two algorithms, DeLeTe2 and MRIA, which perform grammatical inference using non-deterministic automata, in contrast with the most commonly used algorithms, which rely on deterministic automata. The advantages and disadvantages of this change in the representation model are considered through a detailed description and comparison of the two inference algorithms with respect to the approach used in their implementation, their computational complexity, their termination criteria, and their performance on a corpus of synthetic data.
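    The motivation for targeting non-deterministic automata is that an NFA can be much smaller than the equivalent DFA. The sketch below (not DeLeTe2 or MRIA themselves, just a standard illustration) encodes the language "the third symbol from the end is 'a'" as a 4-state NFA and counts the states of the equivalent DFA via subset construction.

```python
# Hedged sketch: a 4-state NFA whose equivalent DFA needs 8 states.
# NFA over {a, b}; states 0..3, start state 0, accepting state 3.
NFA = {
    (0, "a"): {0, 1}, (0, "b"): {0},   # state 0 guesses when the
    (1, "a"): {2},    (1, "b"): {2},   # third-from-last 'a' occurs
    (2, "a"): {3},    (2, "b"): {3},
}

def nfa_accepts(word):
    """Simulate the NFA by tracking the set of reachable states."""
    current = {0}
    for sym in word:
        current = set().union(*(NFA.get((q, sym), set()) for q in current))
    return 3 in current

def subset_dfa_states():
    """Count the reachable states of the subset-construction DFA."""
    start = frozenset({0})
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for sym in "ab":
            t = frozenset(set().union(*(NFA.get((q, sym), set()) for q in s)))
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return len(seen)

print(nfa_accepts("abb"), nfa_accepts("bbb"))  # True False
print(subset_dfa_states())                     # 8
```

    In general this gap is exponential (2^k DFA states versus k+1 NFA states for "k-th symbol from the end is 'a'"), which is why a non-deterministic target representation can make the inference search space smaller, at the cost of the complications the abstract goes on to compare.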

    Mutually compatible and incompatible merges for the search of the smallest consistent DFA

    Abstract. State-merging algorithms, such as Rodney Price's EDSM (Evidence-Driven State Merging) algorithm, have been reasonably successful at solving DFA-learning problems. EDSM, however, often does not converge to the target DFA and, in the case of sparse training data, does not converge at all. In this paper we argue that this is partially due to the particular heuristic used in EDSM and also to the greedy search strategy it employs. We then propose a new heuristic based on minimising the risk involved in making merges: it gives preference to merges whose evidence is supported by high compatibility with other merges. Incompatible merges can be trivially detected during the computation of the heuristic. We also propose limiting the set of candidates after a backtrack to these incompatible merges, allowing diversity to be introduced into the search.
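    The notion of evidence-driven scoring with trivially detectable incompatible merges can be sketched in miniature. This is a simplification, not the authors' exact heuristic: states carry a label from the training data (+1 accepting, -1 rejecting, 0 unlabelled), a merge earns one point per pair of agreeing non-zero labels, and a (+1, -1) clash marks the merge incompatible.

```python
# Hedged sketch of an EDSM-style evidence score (simplified).
# Each list holds the labels of the states that would be folded
# together, position by position: +1 accept, -1 reject, 0 unknown.

def edsm_score(labels_a, labels_b):
    """Evidence for a merge: +1 per agreeing labelled pair;
    -inf marks a label clash, i.e. an incompatible merge."""
    score = 0
    for la, lb in zip(labels_a, labels_b):
        if la and lb:                  # both states are labelled
            if la != lb:
                return float("-inf")   # accept/reject clash: reject merge
            score += 1
    return score

print(edsm_score([1, 0, -1], [1, -1, -1]))  # 2 (two agreements)
print(edsm_score([1, 0], [-1, 0]))          # -inf (incompatible)
```

    A greedy search always takes the highest-scoring merge; the abstract's proposal is to remember the merges flagged incompatible here and, after a backtrack, restrict the candidate set to them, forcing the search onto a genuinely different path.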