
    Complexity of Equivalence and Learning for Multiplicity Tree Automata

    We consider the complexity of equivalence and learning for multiplicity tree automata, i.e., weighted tree automata over a field. We first show that the equivalence problem is logspace equivalent to polynomial identity testing, the complexity of which is a longstanding open problem. Second, we derive lower bounds on the number of queries needed to learn multiplicity tree automata in Angluin's exact learning model, over both arbitrary and fixed fields. Habrard and Oncina (2006) give an exact learning algorithm for multiplicity tree automata in which the number of queries is proportional to the size of the target automaton and the size of a largest counterexample, represented as a tree, returned by the Teacher. However, the smallest tree counterexample may be exponential in the size of the target automaton, so their algorithm does not run in time polynomial in the size of the target automaton, and its query complexity is exponentially larger than the lower bound. Assuming a Teacher that returns minimal DAG representations of counterexamples, we give a new exact learning algorithm whose query complexity is quadratic in the size of the target automaton, almost matching the lower bound and improving on the best previously known algorithm by an exponential factor.
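
    To fix intuitions about the objects involved, here is a minimal sketch (Python with NumPy) of evaluating a multiplicity tree automaton over the rationals, assuming binary symbols only; the representation, the class name Mta, and the leaf-counting example are illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative multiplicity tree automaton: leaves get vectors in R^d,
# each binary symbol gets a (d, d, d) tensor, the root a final vector.
class Mta:
    def __init__(self, leaf_vec, node_tensor, final):
        self.leaf_vec = leaf_vec        # dict: leaf symbol -> vector in R^d
        self.node_tensor = node_tensor  # dict: binary symbol -> (d, d, d) tensor
        self.final = final              # final weight vector in R^d

    def mu(self, tree):
        """Bottom-up state vector of a tree given as ('a',) or (f, left, right)."""
        if len(tree) == 1:
            return self.leaf_vec[tree[0]]
        f, l, r = tree
        # contract the symbol's tensor with the two child vectors
        return np.einsum('ijk,j,k->i', self.node_tensor[f], self.mu(l), self.mu(r))

    def weight(self, tree):
        return float(self.final @ self.mu(tree))

# Example: count the leaves labelled 'a'. States: (count, constant 1).
d = 2
leaf = {'a': np.array([1.0, 1.0]), 'b': np.array([0.0, 1.0])}
T = np.zeros((d, d, d))
T[0, 0, 1] = 1.0  # count contributed by the left subtree
T[0, 1, 0] = 1.0  # count contributed by the right subtree
T[1, 1, 1] = 1.0  # propagate the constant 1
mta = Mta(leaf, {'f': T}, final=np.array([1.0, 0.0]))
print(mta.weight(('f', ('f', ('a',), ('b',)), ('a',))))  # -> 2.0
```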

    Weighted Automata Extraction from Recurrent Neural Networks via Regression on State Spaces

    We present a method to extract a weighted finite automaton (WFA) from a recurrent neural network (RNN). Our algorithm is based on the WFA learning algorithm by Balle and Mohri, which is in turn an extension of Angluin's classic L* algorithm. Our technical novelty is the use of regression methods for the so-called equivalence queries, exploiting the internal state space of an RNN to prioritize counterexample candidates. In this way we obtain a quantitative/weighted extension of the recent work by Weiss, Goldberg and Yahav that extracts DFAs. We experimentally evaluate the accuracy, expressivity and efficiency of the extracted WFAs.
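
    The sketch below illustrates the regression idea at the level of the abstract only; the function name, the shapes, and the use of ordinary least squares are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

# Hedged sketch: fit a linear map between the hypothesis WFA's forward
# vectors and the RNN's hidden states on sample words, then rank candidate
# counterexamples for the equivalence query by how badly the map explains them.

def rank_candidates(rnn_states, wfa_vectors, candidates):
    """rnn_states:  (n, h) RNN hidden states for n sample words.
    wfa_vectors: (n, d) forward vectors of the hypothesis WFA on the same words.
    candidates:  list of (word, rnn_state, wfa_vector) triples to be ranked."""
    # Least-squares fit of a linear map W from WFA state space to RNN state space.
    W, *_ = np.linalg.lstsq(wfa_vectors, rnn_states, rcond=None)
    scored = []
    for word, h, alpha in candidates:
        residual = np.linalg.norm(h - alpha @ W)
        scored.append((residual, word))
    # Words the linear map explains worst come first: a large residual suggests
    # the WFA has not yet captured what the RNN tracks on that word.
    return [w for _, w in sorted(scored, reverse=True)]
```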

    Bounds in Query Learning

    We introduce new combinatorial quantities for concept classes and prove lower and upper bounds on learning complexity in several models of query learning in terms of these quantities. Our approach is flexible and powerful enough to give new and very short proofs of the efficient learnability of several prominent examples (e.g. regular languages and regular ω-languages), in some cases also producing new bounds on the number of queries. In the setting of equivalence plus membership queries, we give an algorithm which learns a class in polynomially many queries whenever any such algorithm exists. We also study equivalence query learning in a randomized model, producing new bounds on the expected number of queries required to learn an arbitrary concept. Many of the techniques and notions of dimension draw inspiration from, or are related to, notions from model theory, and these connections are explained. We also use techniques from query learning to mildly improve a result of Laskowski regarding compression schemes.
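
    As a concrete point of reference for equivalence-query bounds, here is a sketch (not from the paper) of the classic halving learner over a finite concept class: querying with the majority-vote hypothesis guarantees that every counterexample eliminates at least half of the remaining concepts, so log2 |C| equivalence queries suffice.

```python
from itertools import product

# Halving learner over a finite concept class on a finite domain.
def halving_learn(concepts, domain, equivalence_query):
    """concepts: list of dicts x -> {0, 1}; equivalence_query(h) returns None
    if h matches the target on all of domain, else a counterexample x."""
    version_space = list(concepts)
    while True:
        majority = {x: int(sum(c[x] for c in version_space) * 2 > len(version_space))
                    for x in domain}
        x = equivalence_query(majority)
        if x is None:
            return majority
        # Keep only the concepts that disagree with the (wrong) majority label:
        # at most half of the version space survives each counterexample.
        version_space = [c for c in version_space if c[x] != majority[x]]

# Toy usage: all 8 Boolean functions on a 3-point domain, |C| = 8, <= 3 queries.
domain = [0, 1, 2]
concepts = [dict(zip(domain, bits)) for bits in product([0, 1], repeat=3)]
target = {0: 1, 1: 0, 2: 1}
eq = lambda h: next((x for x in domain if h[x] != target[x]), None)
print(halving_learn(concepts, domain, eq) == target)  # -> True
```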

    Approximate Learning of Limit-Average Automata

    Limit-average automata are weighted automata on infinite words that aggregate the weights seen along an infinite run by averaging. We study approximate learning problems for limit-average automata in two settings: passive and active. In the passive case, we show that limit-average automata are not PAC-learnable, as samples must be of exponential size to provide (with good probability) enough information to learn an automaton. We also show that the problem of finding an automaton that fits a given sample is NP-complete. In the active case, we show that limit-average automata can be learned almost exactly, i.e., we can learn in polynomial time an automaton that is consistent with the target automaton on almost all words. On the other hand, the problem of learning an automaton that approximates the target automaton (with perhaps fewer states) is NP-complete. These results hold for the uniform distribution on words; we briefly discuss learning over other distributions.
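
    To make the aggregation concrete, the sketch below computes the limit-average value of an ultimately periodic word u v^ω on a deterministic weighted automaton; the encoding delta[state][letter] = (next, weight) is an assumed representation, not the paper's.

```python
# On u v^omega the prefix's weights vanish in the limit, so the value is the
# mean weight per letter on the loop that repeated v-blocks eventually enter.

def run_block(delta, state, word):
    total = 0.0
    for a in word:
        state, w = delta[state][a]
        total += w
    return state, total

def limit_average(delta, init, u, v):
    state, _ = run_block(delta, init, u)  # consume the prefix
    seen = {}                             # state -> (block index, cumulative weight)
    cum, i = 0.0, 0
    while state not in seen:              # iterate v until a state repeats
        seen[state] = (i, cum)
        state, w = run_block(delta, state, v)
        cum, i = cum + w, i + 1
    j, cj = seen[state]
    return (cum - cj) / ((i - j) * len(v))  # mean weight per letter on the loop

# Example: one state; 'a' weighs 1, 'b' weighs 0. Value of b (ab)^omega is 0.5.
delta = {0: {'a': (0, 1.0), 'b': (0, 0.0)}}
print(limit_average(delta, 0, u='b', v='ab'))  # -> 0.5
```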

    MAT learners for recognizable tree languages and tree series

    We review a family of closely related query learning algorithms for unweighted and weighted tree automata, all of which are based on adaptations of Angluin's minimal adequate teacher (MAT) model. Rather than presenting new results, the goal is to discuss these algorithms in sufficient detail to make their similarities and differences transparent to the reader interested in grammatical inference of tree automata.
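
    The shared skeleton of these learners is the MAT interaction loop sketched below; the observation-table operations (is_closed, is_consistent, fix, to_automaton, add_counterexample) are assumed stubs, and their concrete realisation is exactly where the surveyed algorithms differ.

```python
# Generic MAT loop, duck-typed: works for any table object offering the
# L*-style operations named below, for tree automata or tree series alike.

def mat_learn(teacher, table):
    """teacher offers membership(obj) and equivalence(hypothesis);
    equivalence returns None on success, else a counterexample."""
    while True:
        while not table.is_closed() or not table.is_consistent():
            table.fix(teacher.membership)  # extend the table via membership queries
        hypothesis = table.to_automaton()
        counterexample = teacher.equivalence(hypothesis)
        if counterexample is None:
            return hypothesis              # the teacher accepts: learning is done
        table.add_counterexample(counterexample, teacher.membership)
```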

    CALF: Categorical Automata Learning Framework

    Automata learning is a popular technique for automatically constructing an automaton model from queries, and much research has gone into devising specific adaptations of such algorithms for different types of automata. This thesis presents a unifying approach to many existing algorithms using category theory, which eases correctness proofs and guides the design of new automata learning algorithms. We provide a categorical automata learning framework, CALF, that at its core includes an abstract version of the popular L* algorithm, from which we derive several concrete algorithms. We instantiate the framework to a large class of Set functors, thereby recovering for the first time a tree automata learning algorithm from an abstract framework; this instance is moreover the first to cover algebras of quotiented polynomial functors. We further develop a general algorithm to learn weighted automata over a semiring. On the one hand, we identify a class of semirings, the principal ideal domains, for which this algorithm terminates and for which no learning algorithm previously existed; on the other hand, we show that it does not terminate over the natural numbers. Finally, we develop an algorithm to learn automata with side-effects determined by a monad and provide several optimisations, as well as an implementation with experimental evaluation. This allows us to improve existing algorithms and opens the door to learning a wide range of automata.
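
    For the field case specifically, termination rests on linear algebra; the following NumPy sketch (an assumption-level illustration, not CALF's categorical construction) recovers a weighted automaton from a finite Hankel block of full rank.

```python
import numpy as np

# If H = F B is a rank factorisation of the Hankel block and H_a is the block
# with rows shifted by letter a, then H_a = F A_a B, so A_a = F^+ H_a B^+.

def wfa_from_hankel(H, H_shift, f_init, f_final):
    """H: (|P|, |S|) Hankel block; H_shift[a]: block with rows shifted by a;
    f_init: row of H for the empty prefix; f_final: column for the empty suffix."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    r = int(np.sum(s > 1e-10))       # numerical rank
    F = U[:, :r] * s[:r]             # forward (prefix) factor, F B = H
    Bpinv = Vt[:r].T                 # right inverse of the backward factor
    A = {a: np.linalg.pinv(F) @ Hs @ Bpinv for a, Hs in H_shift.items()}
    alpha = f_init @ Bpinv                 # initial weight vector
    beta = np.linalg.pinv(F) @ f_final     # final weight vector
    # The recovered series is w = a1...an  |->  alpha @ A[a1] @ ... @ A[an] @ beta.
    return alpha, A, beta
```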

    On Learning Polynomial Recursive Programs

    We introduce the class of P-finite automata. These are a generalisation of weighted automata in which the weights of transitions can depend polynomially on the length of the input word. P-finite automata can also be viewed as simple tail-recursive programs in which the arguments of recursive calls can refer non-linearly to a variable that counts the number of recursive calls. The nomenclature is motivated by the fact that, over a unary alphabet, P-finite automata compute so-called P-finite sequences, that is, sequences satisfying a linear recurrence with polynomial coefficients. Our main result shows that P-finite automata can be learned in polynomial time in Angluin's MAT exact learning model. This generalises the classical results that deterministic finite automata and weighted automata over a field are polynomial-time learnable in the MAT model.
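
    A toy example over a unary alphabet, with assumed names: a one-state automaton whose transition weight at position k is the polynomial k + 1 computes the P-finite sequence n!, which satisfies the linear recurrence a(n+1) = (n+1) a(n) with polynomial coefficients.

```python
# Minimal evaluator for a P-finite automaton on the unary word a^n: the
# weight matrix applied at each step is a function of the position index.

def p_finite_run(matrix_at, init, final, n):
    """matrix_at(k) is the weight matrix applied when reading letter k + 1."""
    v = list(init)
    for k in range(n):
        M = matrix_at(k)
        v = [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]
    return sum(f * x for f, x in zip(final, v))

# One state, M(k) = [[k + 1]]: the weight of a^n is 1 * 2 * ... * n = n!.
print([p_finite_run(lambda k: [[k + 1]], [1], [1], n) for n in range(6)])
# -> [1, 1, 2, 6, 24, 120]
```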

    Optimizing Automata Learning via Monads

    Automata learning has been successfully applied in the verification of hardware and software. The size of the learned automaton model is a bottleneck for scalability, so optimizations that enable learning of compact representations are important. This paper exploits monads, both as a mathematical structure and as a programming construct, to design, prove correct, and implement a wide class of such optimizations. The mathematical perspective on monads allows us to develop a new algorithm and accompanying correctness proofs, building upon a general framework for automata learning based on category theory. The new algorithm is parametric on a monad, which provides a rich algebraic structure to capture non-determinism and other side-effects. We show that our approach allows us to uniformly capture existing algorithms, develop new ones, and add optimizations. The programming perspective allows us to translate the theory into practice effortlessly: we provide a Haskell library implementing our general framework, and we show experimental results for two specific instances: non-deterministic and weighted automata.
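
    The monad-parametric view can be rendered informally in Python (names and encodings are assumptions, not the paper's Haskell library): one run function, given a monad as a (unit, bind) pair, covers both the powerset monad (non-determinism) and a weight monad.

```python
# One evaluator, parametric on the monad that models the side-effect.
def run(monad, delta, q0, out, word):
    unit, bind = monad
    m = unit(q0)                          # start: an element of T(states)
    for a in word:
        m = bind(m, lambda q, a=a: delta(q, a))
    return bind(m, out)                   # fold in the output map

# Powerset monad T(X) = finite sets: models non-determinism.
powerset = (lambda x: {x},
            lambda s, f: set().union(*[f(q) for q in s]))

# Weight monad T(X) = finitely supported maps X -> R: models weighted automata.
def w_bind(d, f):
    res = {}
    for q, w in d.items():
        for q2, w2 in f(q).items():
            res[q2] = res.get(q2, 0.0) + w * w2
    return res
weighted = (lambda x: {x: 1.0}, w_bind)

# The same two-state automaton on the word "aa", under both effects.
nd_delta = lambda q, a: {0, 1} if q == 0 else {1}
print(run(powerset, nd_delta, 0, lambda q: {q} if q == 1 else set(), "aa"))
# -> {1}: some run reaches the accepting state 1

w_delta = lambda q, a: ({0: 0.5, 1: 0.5} if q == 0 else {1: 1.0})
print(run(weighted, w_delta, 0, lambda q: {q: 1.0} if q == 1 else {}, "aa"))
# -> {1: 0.75}: total weight of runs ending in state 1
```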