
    Automata cascades: expressivity and sample complexity

    Every automaton can be decomposed into a cascade of basic prime automata. This is the Prime Decomposition Theorem of Krohn and Rhodes. Guided by this theory, we propose automata cascades as a structured, modular way to describe automata as complex systems made of many components, each implementing a specific functionality. Any automaton can serve as a component; using specific components allows for fine-grained control of the expressivity of the resulting class of automata; using prime automata as components yields specific expressivity guarantees. Moreover, specifying automata as cascades allows the sample complexity of automata to be described in terms of their components. We show that the sample complexity is linear in the number of components and the maximum complexity of a single component, modulo logarithmic factors. This opens up the possibility of learning automata that represent large dynamical systems consisting of many interacting parts. It is in sharp contrast with the established understanding of the sample complexity of automata, described in terms of the overall number of states and input letters, which implies that one can only learn automata whose number of states is linear in the amount of available data. Our results show, instead, that one can learn automata with a number of states that is exponential in the amount of available data.
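
    As a rough illustration of the cascade idea (a sketch only, not the paper's construction; all class and function names are hypothetical), the components below are wired so that each one updates its state from the input letter together with the states of the components that precede it:

    # Minimal cascade sketch: component i reads the input letter and the
    # current states of components 0..i-1 (hypothetical API, not the paper's).
    class Component:
        def __init__(self, init_state, delta):
            self.init_state = init_state  # initial state
            self.delta = delta            # delta(state, letter, earlier_states) -> new state

    class Cascade:
        def __init__(self, components):
            self.components = components

        def run(self, word):
            states = [c.init_state for c in self.components]
            for letter in word:
                # each component sees the (old) states of the components before it
                states = [c.delta(states[i], letter, tuple(states[:i]))
                          for i, c in enumerate(self.components)]
            return tuple(states)

    # Example: the first component remembers the last letter; the second tracks,
    # modulo 2, how often the incoming letter differs from the remembered one.
    last = Component(None, lambda s, a, prev: a)
    parity = Component(0, lambda s, a, prev: s ^ (prev[0] != a))
    print(Cascade([last, parity]).run("abba"))  # ('a', 1)

    The flat automaton equivalent to such a cascade runs over the product of the component state spaces, which is where the exponential gap mentioned in the abstract comes from.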

    Small Test Suites for Active Automata Learning

    A bottleneck in modern active automata learning is to test whether a hypothesized Mealy machine correctly describes the system under learning. The search space for possible counterexamples is given by so-called test suites, consisting of input sequences that have to be checked to decide whether a counterexample exists. This paper shows that significantly smaller test suites suffice under reasonable assumptions on the structure of the black box. These smaller test suites help to refute false hypotheses during active automata learning, even when the assumptions do not hold. We combine multiple test suites using a multi-armed bandit setup that adaptively selects a test suite. An extensive empirical evaluation shows the efficacy of our approach. For small to medium-sized models, the performance gain is limited. However, the approach allows learning models from large, industrial case studies that were beyond the reach of known methods. Comment: Extended paper for TACAS 202
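
    One simple way to realise such adaptive selection is an epsilon-greedy bandit over the candidate test suites. The sketch below is illustrative only: the paper does not commit to this particular policy, and the function and parameter names are hypothetical.

    import random

    # Illustrative epsilon-greedy bandit over test suites (hypothetical names;
    # not the paper's actual selection policy).
    def find_counterexample(test_suites, exposes_mismatch, epsilon=0.1, budget=10000):
        # test_suites: list of callables, each returning one input sequence to try
        # exposes_mismatch: runs a sequence on hypothesis and system, True on mismatch
        tries = [0] * len(test_suites)
        hits = [0] * len(test_suites)
        for _ in range(budget):
            if random.random() < epsilon:
                arm = random.randrange(len(test_suites))            # explore
            else:
                # exploit the suite with the best hit rate so far,
                # preferring suites that have not been tried yet
                arm = max(range(len(test_suites)),
                          key=lambda i: hits[i] / tries[i] if tries[i] else float("inf"))
            sequence = test_suites[arm]()
            tries[arm] += 1
            if exposes_mismatch(sequence):
                hits[arm] += 1
                return sequence   # counterexample found: refine the hypothesis with it
        return None               # no counterexample found within the budget

    In practice the bandit statistics would be kept across successive hypotheses, so a suite that keeps producing counterexamples is selected more often.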

    Canonical Algebraic Generators in Automata Learning

    Many methods for the verification of complex computer systems require the existence of a tractable mathematical abstraction of the system, often in the form of an automaton. In reality, however, such a model is hard to come up with, in particular manually. Automata learning is a technique that can automatically infer an automaton model from a system -- by observing its behaviour. The majority of automata learning algorithms are based on the so-called L* algorithm. The acceptor learned by L* has an important property: it is canonical, in the sense that it is, up to isomorphism, the unique deterministic finite automaton of minimal size accepting a given regular language. Establishing a similar result for other classes of acceptors, often with side-effects, is of great practical importance. Non-deterministic finite automata, for instance, can be exponentially more succinct than deterministic ones, allowing verification to scale. Unfortunately, identifying a canonical size-minimal non-deterministic acceptor of a given regular language is in general not possible: a regular language can be accepted by two non-isomorphic non-deterministic finite automata of minimal size. In particular, it is thus unclear which of these automata a learning algorithm should target. In this thesis, we further explore the issue and identify (sub-)classes of acceptors that admit canonical size-minimal representatives. In more detail, the contributions of this thesis are threefold. First, we expand the automata (learning) theory of Guarded Kleene Algebra with Tests (GKAT), an efficiently decidable logic expressive enough to model simple imperative programs. In particular, we present GL*, an algorithm that learns the unique size-minimal GKAT automaton for a given deterministic language, and prove that GL* is more efficient than an existing variation of L*. We implement both algorithms in OCaml and compare them on example programs. Second, we present a category-theoretical framework based on generators, bialgebras, and distributive laws, which identifies, for a wide class of automata with side-effects in a monad, canonical target models for automata learning. Apart from recovering examples from the literature, we discover a new canonical acceptor of regular languages and present a unifying minimality result. Finally, we show that the construction underlying our framework is an instance of a more general theory. On the one hand, we see that deriving a minimal bialgebra from a minimal coalgebra can be realised by applying a monad on a category of subobjects with respect to an epi-mono factorisation system. On the other hand, we explore the abstract theory of generators and bases for algebras over a monad: we discuss bases for bialgebras, the product of bases, generalise the representation theory of linear maps, and compare our ideas to a coalgebra-based approach.
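
    To make the role of L* concrete, here is a small sketch of the observation table that drives L*-style learning. Only the closedness step is shown; the oracle and class names are illustrative and this is not the thesis's OCaml implementation.

    from itertools import product

    # Observation-table sketch: rows are access prefixes, columns are
    # distinguishing suffixes, entries come from membership queries.
    class ObservationTable:
        def __init__(self, alphabet, member):
            self.alphabet = alphabet
            self.member = member      # membership oracle: word -> bool
            self.prefixes = {""}      # candidate access strings (S)
            self.suffixes = {""}      # distinguishing experiments (E)

        def row(self, prefix):
            # a prefix's row records its behaviour on every current suffix
            return tuple(self.member(prefix + s) for s in sorted(self.suffixes))

        def close(self):
            # closed: every one-letter extension of a prefix behaves like some
            # existing prefix; otherwise promote the extension to a prefix
            changed = True
            while changed:
                changed = False
                rows = {self.row(p) for p in self.prefixes}
                for p, a in product(sorted(self.prefixes), self.alphabet):
                    if self.row(p + a) not in rows:
                        self.prefixes.add(p + a)
                        changed = True
                        break

    # Example: "even number of a's" over {a, b} has two Myhill-Nerode classes,
    # and here the closed table's distinct rows are the states of the minimal DFA.
    table = ObservationTable("ab", lambda w: w.count("a") % 2 == 0)
    table.close()
    print(sorted(table.prefixes))  # ['', 'a']

    Distinct rows correspond to Myhill-Nerode classes, which is why the DFA learned this way is canonical; the thesis asks when a comparable canonical target exists for more succinct classes of acceptors.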

    Learning Product Automata

    Contains fulltext: 212459.pdf (publisher's version) (Open Access)
