
    On one-way cellular automata with a fixed number of cells

    We investigate a restricted one-way cellular automaton (OCA) model in which the number of cells is bounded by a constant k, so-called kC-OCAs. In contrast to the general model, the generative capacity of the restricted model is reduced to the set of regular languages. A kC-OCA can be algorithmically converted to a deterministic finite automaton (DFA), with a blow-up in the number of states bounded by a polynomial of degree k. We exhibit a family of unary languages showing that this upper bound is tight in order of magnitude. We then study upper and lower bounds for the trade-off when converting DFAs to kC-OCAs, and show that there are regular languages for which kC-OCAs cannot reduce the number of states compared to DFAs. Finally, we investigate trade-offs between kC-OCAs with different numbers of cells and treat the problem of minimizing a given kC-OCA.
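
    The conversion to a DFA can be made concrete with a small product-style construction. The sketch below fixes one plausible formalization of kC-OCAs (sequential input fed to the rightmost cell, information flowing right to left, acceptance by the leftmost cell); the function names and the exact input mode are our assumptions for illustration, not the paper's definitions. The DFA state is the k-tuple of cell states, so at most |S|^k states arise, matching the degree-k polynomial blow-up stated in the abstract.

```python
def kc_oca_to_dfa(sigma, delta, delta_in, init, accepting, k):
    """Explicit conversion of a k-cell one-way CA into a DFA (sketch).

    Assumed formalization (ours): each input symbol is consumed by the
    rightmost cell via delta_in(state, symbol); every other cell updates
    synchronously via delta(state, right_neighbour); a word is accepted
    if the leftmost cell ends in an accepting state.
    """
    def step(config, a):
        new = [delta(config[i], config[i + 1]) for i in range(k - 1)]
        new.append(delta_in(config[-1], a))      # input enters on the right
        return tuple(new)

    start = tuple([init] * k)
    states, trans, frontier = {start}, {}, [start]
    while frontier:                              # explore reachable tuples only
        cfg = frontier.pop()
        for a in sigma:
            nxt = step(cfg, a)
            trans[(cfg, a)] = nxt
            if nxt not in states:
                states.add(nxt)
                frontier.append(nxt)
    final = {cfg for cfg in states if cfg[0] in accepting}
    return states, trans, start, final
```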

    Efficient Computation of Descriptive Patterns

    A pattern $\alpha$ is a word consisting of constants and variables, and the pattern language $L(\alpha)$ (over an alphabet $\Sigma$) is the set of all words that can be obtained from $\alpha$ by uniformly replacing the variables with words over $\Sigma$. We investigate the problem of computing a pattern $\alpha$ that is descriptive of a given finite set $S \subseteq \Sigma^*$ of words, i.e., $S \subseteq L(\alpha)$ and there is no other pattern $\beta$ with $S \subseteq L(\beta) \subset L(\alpha)$. A pattern $\alpha$ that is descriptive of a set $S$ represents the structural commonalities of the words in $S$ and can thus serve as a classifier with respect to this structure. Furthermore, (polynomial-time) computability of descriptive patterns is sufficient for (polynomial-time) inductive inference of pattern languages. We investigate the complexity of computing descriptive patterns and, for subclasses of patterns, present efficient algorithms for computing them.
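
    As a concrete anchor for the definitions, here is a minimal membership test for pattern languages, the basic operation underlying descriptiveness checks. The encoding of patterns and the non-erasing convention (variables map to non-empty words) are our assumptions; the paper also treats other variants. Plain backtracking is used, since membership is NP-complete in general.

```python
def matches(pattern, word):
    """Is `word` in L(pattern)?  `pattern` is a sequence of ('const', c)
    and ('var', x) items; every occurrence of a variable x must be
    replaced uniformly by the same non-empty word."""
    def go(i, j, env):
        if i == len(pattern):
            return j == len(word)
        kind, sym = pattern[i]
        if kind == 'const':
            return j < len(word) and word[j] == sym and go(i + 1, j + 1, env)
        if sym in env:                              # variable already bound
            v = env[sym]
            return word.startswith(v, j) and go(i + 1, j + len(v), env)
        return any(go(i + 1, j + l, {**env, sym: word[j:j + l]})
                   for l in range(1, len(word) - j + 1))

    return go(0, 0, {})

# Example: the pattern x a x.  "bab" matches with x = "b"; "ba" does not.
print(matches([('var', 'x'), ('const', 'a'), ('var', 'x')], 'bab'))  # True
print(matches([('var', 'x'), ('const', 'a'), ('var', 'x')], 'ba'))   # False
```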

    Learning Moore Machines from Input-Output Traces

    The problem of learning automata from example traces (but without equivalence or membership queries) is fundamental in automata learning theory and practice. In this paper we study this problem for finite state machines with inputs and outputs and, in particular, for Moore machines. We develop three algorithms for solving this problem: (1) the PTAP algorithm, which transforms a set of input-output traces into an incomplete Moore machine and then completes the machine with self-loops; (2) the PRPNI algorithm, which uses the well-known RPNI automata learning algorithm to learn a product of automata encoding a Moore machine; and (3) the MooreMI algorithm, which directly learns a Moore machine using PTAP extended with state merging. We prove that MooreMI satisfies the fundamental identification-in-the-limit property. We also compare the algorithms experimentally in terms of the size of the learned machine and several notions of accuracy introduced in this paper. Finally, we compare with OSTIA, an algorithm that learns a more general class of transducers, and find that OSTIA generally does not learn a Moore machine, even when fed with a characteristic sample.
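
    The PTAP step described above (prefix tree plus self-loop completion) is simple enough to sketch. The trace encoding below is our assumption: outputs[i] is the Moore output after consuming inputs[:i], so each trace carries len(inputs)+1 outputs; the paper's exact representation may differ.

```python
def ptap(traces, alphabet):
    """Build the prefix-tree Moore machine of the traces, then complete
    missing transitions with self-loops (PTAP-style sketch)."""
    trans = {}               # (state, input) -> state
    out = {0: None}          # state -> output; state 0 is initial
    n_states = 1
    for inputs, outputs in traces:
        if out[0] is None:
            out[0] = outputs[0]
        assert out[0] == outputs[0], "traces disagree on the initial output"
        s = 0
        for a, o in zip(inputs, outputs[1:]):
            if (s, a) not in trans:              # grow the prefix tree
                trans[(s, a)] = n_states
                out[n_states] = o
                n_states += 1
            s = trans[(s, a)]
            assert out[s] == o, "traces disagree on a common prefix"
    for s in range(n_states):                    # completion step:
        for a in alphabet:                       # missing edges become
            trans.setdefault((s, a), s)          # self-loops
    return trans, out

trans, out = ptap([((0, 1), ('p', 'q', 'p')), ((0, 0), ('p', 'q', 'q'))], [0, 1])
print(out)   # {0: 'p', 1: 'q', 2: 'p', 3: 'q'} -- four states, completed
```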

    Recognizing well-parenthesized expressions in the streaming model

    Motivated by a concrete problem and with the goal of understanding the sense in which the complexity of streaming algorithms is related to the complexity of formal languages, we investigate the problem Dyck(s) of checking matching parentheses, with $s$ different types of parentheses. We present a one-pass randomized streaming algorithm for Dyck(2) with space $O(\sqrt{n}\log n)$, time $\mathrm{polylog}(n)$ per letter, and one-sided error. We prove that this one-pass algorithm is optimal, up to a $\mathrm{polylog}(n)$ factor, even when two-sided error is allowed. For the lower bound, we prove a direct sum result on hard instances by following the "information cost" approach, but with a few twists. Indeed, we play a subtle game between public and private coins. This mixture between public and private coins results from a balancing act between the direct sum result and a combinatorial lower bound for the base case. Surprisingly, the space requirement shrinks drastically if we have access to the input stream in reverse. We present a two-pass randomized streaming algorithm for Dyck(2) with space $O((\log n)^2)$, time $\mathrm{polylog}(n)$, and one-sided error, where the second pass is in the reverse direction. Both algorithms can be extended to Dyck(s), since this problem is reducible to Dyck(2) for a suitable notion of reduction in the streaming model.
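
    The final remark, that Dyck(s) reduces to Dyck(2), admits a short illustration: encode the type of each parenthesis in binary over the two bracket types, emitting the closing code in reverse so that matched pairs stay matched and any type mismatch surfaces as a bracket mismatch. The symbol encoding below is our convention; the paper's reduction is additionally tailored to the streaming model.

```python
from math import ceil, log2

def dyck_s_to_dyck2(word, s):
    """Encode a word over s parenthesis types into one over 2 types, so
    that the input is in Dyck(s) iff the output is in Dyck(2).  Symbols
    are (i, True) for the i-th opening and (i, False) for the i-th
    closing parenthesis, 0 <= i < s."""
    m = max(1, ceil(log2(s)))                # bits needed per type
    out = []
    for i, is_open in word:
        bits = [(i >> b) & 1 for b in range(m)]
        if is_open:                          # bit 0 -> '(' , bit 1 -> '['
            out += ['(' if b == 0 else '[' for b in bits]
        else:                                # matching closers, reversed
            out += [')' if b == 0 else ']' for b in reversed(bits)]
    return ''.join(out)

# With s = 2 types: a matched type-1 pair, then a type-0/type-1 mismatch.
print(dyck_s_to_dyck2([(1, True), (1, False)], 2))   # '[]'  -- balanced
print(dyck_s_to_dyck2([(0, True), (1, False)], 2))   # '(]'  -- unbalanced
```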

    Symbolic Model Learning: New Algorithms and Applications

    In this thesis, we study algorithms that extract, or learn, formal mathematical models from software systems and then use these models to test whether the systems satisfy certain security properties, such as robustness against code injection attacks. Specifically, we focus on learning algorithms for automata and transducers and on the symbolic extensions of these models, namely symbolic finite automata (SFAs). At a high level, this thesis contributes the following results:
    1. In the first part of the thesis, we present a unified treatment of many common variations of the seminal L* algorithm for learning deterministic finite automata (DFAs) as a congruence learning algorithm for the underlying Nerode congruence, which forms the basis of automata theory. Under this formulation, the basic data structures used by the different variations are unified as different ways to implement the Nerode congruence using queries (a minimal illustration of this congruence view follows after this abstract).
    2. Next, building on this formulation of L*-style algorithms, we develop new algorithms for learning transducer models. First, we present the first algorithm for learning deterministic partial transducers. We then extend this algorithm to non-deterministic models by introducing a novel, generalized congruence relation over string transformations which captures a subclass of string transformations with regular lookahead. We demonstrate that this class captures many practical string transformations from the domain of string sanitizers in Web applications.
    3. Classical learning algorithms for automata and transducers operate over finite alphabets and have a query complexity that scales linearly with the size of the alphabet. In practice, this dependence on the alphabet size hinders the performance of the algorithms. To address this issue, we develop the MAT* algorithm for learning symbolic finite automata (SFAs), which operate over infinite alphabets. The MAT* algorithm allows us to plug in custom transition learning algorithms that efficiently infer the predicates on the transitions of the SFA without querying the whole alphabet.
    4. Finally, we use this learning algorithm toolbox as the basis for a set of black-box testing algorithms. More specifically, we present Grammar Oriented Filter Auditing (GOFA), a novel technique for using the learning algorithms to evaluate the robustness of a string sanitizer or filter against a set of attack strings given as a context-free grammar. Because such grammars are often unavailable, we also developed sfadiff, a differential testing technique based on symbolic automata learning that can be used to perform differential testing of two parser implementations, and we demonstrate how it can be used to develop program fingerprints. We evaluate our algorithms against state-of-the-art Web Application Firewalls and discover over 15 previously unknown vulnerabilities that allow evading the firewalls and performing code injection attacks against the backend Web application. Finally, we show how our learning algorithms can uncover vulnerabilities that are missed by other black-box methods such as fuzzing and grammar-based testing.
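
    The congruence view of item 1 can be glimpsed in a few lines: an observation table approximates the Nerode congruence by rows over a finite suffix set, and a closedness check discovers new congruence classes via membership queries. This is a generic L*-style fragment, written by us for illustration, not the thesis's actual data structures.

```python
def row(member, prefix, suffixes):
    """A prefix's approximate Nerode class: its answers on the suffixes."""
    return tuple(member(prefix + e) for e in suffixes)

def find_unclosed(member, prefixes, alphabet, suffixes):
    """Closedness check at the heart of L*-style learners: every
    one-letter extension of a known prefix must look like some known
    prefix under the current suffixes; otherwise it is a new state."""
    rows = {row(member, p, suffixes) for p in prefixes}
    for p in prefixes:
        for a in alphabet:
            if row(member, p + a, suffixes) not in rows:
                return p + a          # new Nerode class discovered
    return None                       # table is closed

# Toy membership oracle: words over {a, b} with an even number of b's.
member = lambda w: w.count('b') % 2 == 0
print(find_unclosed(member, {''}, 'ab', ['']))   # 'b' -- a second class
```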

    Energy Parity Games

    Energy parity games are infinite two-player turn-based games played on weighted graphs. The objective combines a (qualitative) parity condition with the (quantitative) requirement that the sum of the weights (i.e., the level of energy in the game) must remain positive. Besides their intrinsic interest for the design and synthesis of resource-constrained omega-regular specifications, energy parity games provide one of the simplest models of games with combined qualitative and quantitative objectives. Our main results are as follows: (a) exponential memory is necessary and sufficient for winning strategies in energy parity games; (b) the problem of deciding the winner in energy parity games can be solved in NP $\cap$ coNP; and (c) we give an algorithm that solves energy parity games by reduction to energy games. We also show that deciding the winner in energy parity games is polynomially equivalent to deciding the winner in mean-payoff parity games, while optimal strategies may require infinite memory in mean-payoff parity games. As a consequence, we obtain a conceptually simple algorithm for solving mean-payoff parity games.
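
    Result (c) reduces energy parity games to plain energy games; for those, minimal initial credits can be computed by a simple value iteration in the spirit of known progress-measure algorithms (e.g. Brim et al.). The sketch below is ours, not the paper's algorithm, and assumes every node has at least one outgoing edge.

```python
import math

def min_credit(edges, owner, n_nodes):
    """Minimal initial credit per node in an energy game.  edges[v] lists
    (target, weight) pairs; owner[v] is 0 for the player keeping the
    energy non-negative, 1 for the opponent.  math.inf means no finite
    credit suffices.  Values only increase and are capped, so the
    iteration terminates."""
    W = max(abs(c) for v in edges for _, c in edges[v])
    top = n_nodes * W                   # credits above n*W are hopeless
    f = [0] * n_nodes
    changed = True
    while changed:
        changed = False
        for v in range(n_nodes):
            # credit needed through edge (v, w): f(w) minus the weight gained
            need = [max(0, f[w] - c) for w, c in edges[v]]
            best = min(need) if owner[v] == 0 else max(need)
            if best > top:
                best = math.inf
            if best > f[v]:
                f[v] = best
                changed = True
    return f

# A two-node cycle that loses 2 on one edge and gains 3 on the other:
print(min_credit({0: [(1, -2)], 1: [(0, 3)]}, owner=[0, 1], n_nodes=2))  # [2, 0]
```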

    Patterns with bounded treewidth

    We show that any parameter of patterns that is an upper bound for the treewidth of appropriate encodings of patterns as relational structures, if restricted to a constant, allows the membership problem for pattern languages to be solved in polynomial time. Furthermore, we identify a new such parameter, called the scope coincidence degree.

    Partial Orders for Efficient BMC of Concurrent Software

    The vast number of interleavings that a concurrent program can have is typically identified as the root cause of the difficulty of automatically analysing concurrent software. Weak memory is generally believed to make this problem even harder. We address both issues by modelling program executions with partial orders rather than with the interleaving semantics (SC). We implemented a software analysis tool based on these ideas. It scales to programs of sufficient size to achieve first-time formal verification of non-trivial concurrent systems code over a wide range of memory models, including SC, Intel x86, and IBM Power.
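
    The partial-order idea can be seen in miniature with an SMT encoding of a classic litmus test: instead of enumerating interleavings, one asserts ordering constraints over per-event clocks and asks a solver whether a suspicious outcome is reachable. This toy SC-only sketch uses the z3 Python bindings by way of example; the paper's encoding is different and covers weak memory models with richer relations.

```python
from z3 import Ints, Solver

# Store-buffering litmus test: thread 1 runs {x = 1; r1 = y}, thread 2
# runs {y = 1; r2 = x}, with x = y = 0 initially.  Each event gets an
# integer clock; an execution exists iff the constraints are satisfiable.
wx, ry, wy, rx = Ints('wx ry wy rx')

s = Solver()
s.add(wx < ry, wy < rx)    # program order within each thread (kept by SC)
# The outcome r1 = 0 and r2 = 0 forces each read to precede the other
# thread's write in the total order:
s.add(ry < wy, rx < wx)

print(s.check())           # unsat: SC forbids r1 = r2 = 0 (TSO allows it)
```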