
    Incremental construction of minimal acyclic finite-state automata

    In this paper, we describe a new method for constructing minimal, deterministic, acyclic finite-state automata from a set of strings. Traditional methods consist of two phases: the first constructs a trie, and the second minimizes it. Our approach constructs a minimal automaton in a single phase by adding new strings one by one and minimizing the resulting automaton on the fly. We present a general algorithm as well as a specialization that relies upon the lexicographical ordering of the input strings.
    Comment: 14 pages, 7 figures
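    The sorted-input specialization referred to above is widely known from the literature on incremental minimal-DFA construction (Daciuk et al.). Below is a minimal Python sketch of that idea, assuming lexicographically sorted input; the class and function names are illustrative, not the paper's code.

    ```python
    # A minimal sketch of incremental construction of a minimal acyclic DFA
    # from lexicographically sorted strings. Illustrative only.

    class State:
        __slots__ = ("edges", "final")
        def __init__(self):
            self.edges = {}      # char -> State
            self.final = False
        def signature(self):
            # Two states with equal signatures accept the same right language.
            return (self.final, tuple(sorted((c, id(s)) for c, s in self.edges.items())))

    def build_minimal_dfa(words):
        """words must be sorted lexicographically."""
        register = {}            # signature -> canonical State
        root = State()
        prev = ""

        def minimize_suffix(state, suffix):
            """Merge-or-register the states along `suffix`, bottom-up."""
            if not suffix:
                return
            child = state.edges[suffix[0]]
            minimize_suffix(child, suffix[1:])
            sig = child.signature()
            if sig in register:
                state.edges[suffix[0]] = register[sig]   # merge with equivalent state
            else:
                register[sig] = child

        for word in words:
            # Length of the longest common prefix with the previous word.
            k = 0
            while k < min(len(word), len(prev)) and word[k] == prev[k]:
                k += 1
            # Walk to the end of the common prefix.
            node = root
            for c in word[:k]:
                node = node.edges[c]
            # The previous word's tail can no longer grow: minimize it now.
            minimize_suffix(node, prev[k:])
            # Append the new suffix as a fresh chain of states.
            for c in word[k:]:
                nxt = State()
                node.edges[c] = nxt
                node = nxt
            node.final = True
            prev = word

        minimize_suffix(root, prev)   # minimize the last word's path
        return root

    # Usage: build_minimal_dfa(["tap", "taps", "top", "tops"]) shares both
    # the common prefix "t" and the common suffix "s" between entries.
    ```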

    Online determinization of large mutating automata

    A mutating finite automaton (MFA) is a nondeterministic finite automaton (NFA) that changes its morphology over discrete time through a sequence of mutations, one mutation at each time instant. A mutation involves the insertion and/or removal of a set of states and/or transitions. This results in a sequence of NFAs, one mutated NFA for each mutation. Some application domains, including model-based diagnosis and monitoring of active systems in artificial intelligence and model-based testing in software engineering, require online determinization of MFAs. Determinizing an MFA online means generating, as soon as a mutation occurs, a deterministic finite automaton (DFA) equivalent to the mutated NFA. Since the classical Subset Construction determinization algorithm may be inadequate for MFAs, a conservative algorithm is proposed, called Subset Restructuring, which generates the new DFA by restructuring the previous DFA based on the mutation that occurred, instead of building it from scratch. Experimental results indicate the effectiveness of the approach, especially when large MFAs change over time through small mutations.
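    As a concrete illustration of the mutation model, an NFA can be held as a transition relation that each mutation edits in place. The sketch below is an assumed representation for illustration, not the paper's data structures:

    ```python
    # A minimal sketch of an NFA subject to mutations, as described above.
    # The dict-of-sets transition relation is an assumption, not the paper's code.

    class MutatingNFA:
        def __init__(self, states, alphabet, delta, start, finals):
            self.states = set(states)
            self.alphabet = set(alphabet)
            self.delta = {k: set(v) for k, v in delta.items()}  # (state, symbol) -> set of states
            self.start = start
            self.finals = set(finals)

        def insert_transition(self, src, symbol, dst):
            self.states |= {src, dst}
            self.delta.setdefault((src, symbol), set()).add(dst)

        def remove_transition(self, src, symbol, dst):
            self.delta.get((src, symbol), set()).discard(dst)

    # One mutation per time instant yields a sequence of NFAs; after each
    # mutation, an equivalent DFA must be produced online.
    nfa = MutatingNFA({0, 1}, {"a"}, {(0, "a"): {0, 1}}, 0, {1})
    nfa.insert_transition(1, "a", 0)   # the NFA at time t+1
    ```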

    Using the Regular Chains Library to build cylindrical algebraic decompositions by projecting and lifting

    Cylindrical algebraic decomposition (CAD) is an important tool, both for quantifier elimination over the reals and for a range of other applications. Traditionally, a CAD is built through a process of projection and lifting, moving the problem through Euclidean spaces of changing dimension. Recently, an alternative approach, which first decomposes complex space using triangular decomposition before refining to real space, has been introduced and implemented within the RegularChains Library of Maple. Here we describe a freely available package, ProjectionCAD, which utilises the routines within the RegularChains Library to build CADs by projection and lifting. We detail how the projection and lifting algorithms were modified to allow this, discuss the motivation, and survey the functionality of the package.
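    To make the projection-and-lifting idea concrete, the base case of lifting decomposes the real line into cells: the real roots of a set of univariate polynomials, and the open intervals between them. The following is a minimal sketch of that base case using SymPy, an illustration of the general technique rather than the ProjectionCAD package itself (which is Maple code built on the RegularChains Library):

    ```python
    # A minimal sketch of the base case of CAD lifting: decompose R^1 into
    # sign-invariant cells induced by the real roots of univariate polynomials.
    # Uses SymPy for exact real root isolation; illustrative only.
    from sympy import symbols, real_roots

    x = symbols("x")

    def one_dim_cells(polys):
        """Each root is a 0-cell; each open interval between consecutive
        roots is a 1-cell. None stands in for -oo / +oo."""
        roots = sorted(set(r for p in polys for r in real_roots(p)))
        if not roots:
            return [("interval", None, None)]            # all of R
        cells = [("interval", None, roots[0])]           # (-oo, r1)
        for i, r in enumerate(roots):
            cells.append(("point", r))
            if i + 1 < len(roots):
                cells.append(("interval", r, roots[i + 1]))
        cells.append(("interval", roots[-1], None))      # (rn, +oo)
        return cells

    # Example: x^2 - 2 and x induce cells around -sqrt(2), 0 and sqrt(2).
    for cell in one_dim_cells([x**2 - 2, x]):
        print(cell)
    ```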

    Quick Subset Construction

    A finite automaton can be either deterministic (a DFA) or nondeterministic (an NFA). An automaton-based task is in general more efficient when performed with a DFA rather than an NFA. For any NFA there is an equivalent DFA, which can be generated by the classical Subset Construction algorithm. When, however, a large NFA is transformed into an equivalent DFA by a series of actions operating directly on the NFA, Subset Construction may be unnecessarily expensive, as a (possibly large) deterministic portion of the NFA is regenerated as is, a waste of processing. This is why a conservative algorithm for NFA determinization is proposed, called Quick Subset Construction, which progressively transforms an NFA into an equivalent DFA instead of generating the DFA from scratch, thereby avoiding unnecessary processing. Quick Subset Construction is proven, both formally and empirically, to be equivalent to Subset Construction, inasmuch as it generates exactly the same DFA. Experimental results indicate that the smaller the number of repair actions performed on the NFA, as compared to the size of the equivalent DFA, the faster Quick Subset Construction runs relative to Subset Construction.
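    For reference, the classical Subset Construction that this paper takes as its baseline can be sketched as follows. This is a standard textbook rendering, not the authors' code; the NFA is assumed to be given as a transition relation without epsilon moves, for brevity:

    ```python
    # A minimal sketch of classical Subset Construction, the baseline that
    # Quick Subset Construction improves on. Epsilon transitions are omitted.
    from collections import deque

    def subset_construction(states, alphabet, delta, start, finals):
        """delta: (state, symbol) -> set of successor states.
        Returns (dfa_states, dfa_delta, dfa_start, dfa_finals), where each
        DFA state is a frozenset of NFA states."""
        dfa_start = frozenset({start})
        dfa_states = {dfa_start}
        dfa_delta = {}
        queue = deque([dfa_start])
        while queue:
            subset = queue.popleft()
            for symbol in alphabet:
                successors = frozenset(
                    t for s in subset for t in delta.get((s, symbol), set())
                )
                if not successors:
                    continue
                dfa_delta[(subset, symbol)] = successors
                if successors not in dfa_states:   # newly discovered DFA state
                    dfa_states.add(successors)
                    queue.append(successors)
        dfa_finals = {sub for sub in dfa_states if sub & set(finals)}
        return dfa_states, dfa_delta, dfa_start, dfa_finals

    # Example: NFA accepting strings over {a,b} whose second-to-last symbol is 'a'.
    delta = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "a"): {2}, (1, "b"): {2}}
    print(subset_construction({0, 1, 2}, {"a", "b"}, delta, 0, {2}))
    ```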

    Coding-theorem Like Behaviour and Emergence of the Universal Distribution from Resource-bounded Algorithmic Probability

    Previously referred to as 'miraculous' in the scientific literature because of its powerful properties and its wide application as an optimal solution to the problem of induction/inference, (approximations to) Algorithmic Probability (AP) and the associated Universal Distribution are (or should be) of the greatest importance in science. Here we investigate the emergence, the rates of emergence and convergence, and the coding-theorem-like behaviour of AP in Turing-subuniversal models of computation. We investigate empirical distributions of computing models in the Chomsky hierarchy. We introduce measures of algorithmic probability and algorithmic complexity based upon resource-bounded computation, in contrast to the previously thoroughly investigated distributions produced from the output distribution of Turing machines. This approach allows for numerical approximations to algorithmic (Kolmogorov-Chaitin) complexity-based estimations at each of the levels of a computational hierarchy. We demonstrate that all these estimations are correlated in rank and that they converge both in rank and in value as a function of computational power, despite fundamental differences between the computational models. In the context of natural processes that operate below the Turing-universal level because of finite resources and physical degradation, the investigation of natural biases stemming from algorithmic rules may shed light on the distribution of outcomes. We show that up to 60% of the simplicity/complexity bias in distributions produced even by the weakest of the computational models can be accounted for by Algorithmic Probability in its approximation to the Universal Distribution.
    Comment: 27 pages main text, 39 pages including supplement. Online complexity calculator: http://complexitycalculator.com
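    The general recipe for such resource-bounded estimations can be sketched in miniature: enumerate every program up to a length bound on some fixed machine under a step bound, tally the outputs, and read an empirical approximation to the Universal Distribution from the tallies. The toy machine below is an assumption for illustration (the paper uses models across the Chomsky hierarchy), and prefix-freeness of programs is ignored:

    ```python
    # A minimal sketch of a resource-bounded approximation to Algorithmic
    # Probability: m(s) ~= sum over programs p with output s of 2^(-|p|),
    # with K(s) ~ -log2 m(s) by the coding-theorem relation. Toy model only.
    from itertools import product
    from math import log2

    def run_toy_machine(program, step_bound=64):
        """Toy interpreter: bit '1' writes the current symbol, bit '0' flips it.
        The step bound makes the measure resource-bounded."""
        out, symbol = [], "0"
        for step, instr in enumerate(program):
            if step >= step_bound:
                break
            if instr == "1":
                out.append(symbol)
            else:
                symbol = "1" if symbol == "0" else "0"
        return "".join(out)

    def universal_distribution(max_len=12):
        m = {}
        for n in range(1, max_len + 1):
            for bits in product("01", repeat=n):   # every program of length n
                s = run_toy_machine(bits)
                m[s] = m.get(s, 0.0) + 2.0 ** (-n)
        return m

    m = universal_distribution()
    for s, p in sorted(m.items(), key=lambda kv: -kv[1])[:5]:
        print(repr(s), "m(s)=%.4f" % p, "K(s)~%.2f bits" % -log2(p))
    ```

    Even in this toy setting, simpler outputs accumulate probability from many short programs, which is the simplicity bias the abstract quantifies.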

    Deciding Second-order Logics using Database Evaluation Techniques

    We outline a novel technique that maps the satisfiability problems of second-order logics, in particular WSnS (weak monadic second-order logic with n successors), S1S (monadic second-order logic with one successor), and the μ-calculus, to the problem of query evaluation of Complex-value Datalog queries. In this dissertation, we propose techniques that use database evaluation and optimization techniques in automata-based decision procedures for the above logics. We show how the use of advanced implementation techniques for deductive databases and for logic programs, in particular the use of tabling, yields a considerable improvement in performance over more traditional approaches. We also explore various optimizations of the proposed technique; in particular, we consider variants of tabling and goal reordering. We then show that the decision problem for S1S can be mapped to the problem of query evaluation of Complex-value Datalog queries. We explore optimizations that can be applied to various types of formulas. Finally, we propose analogous techniques that allow us to approach the μ-calculus satisfiability problem in an incremental fashion and without the need for re-computation. In addition, we outline a top-down evaluation technique to drive our incremental procedure and propose heuristics that guide the problem partitioning to reduce the size of the problems that need to be solved.
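    The tabling idea at the heart of this evaluation strategy can be sketched in miniature: record each derived answer once, so that the cyclic rules typical of automata encodings terminate instead of looping. The sketch below evaluates a simple reachability program and is a generic illustration, not the dissertation's Complex-value Datalog engine:

    ```python
    # A minimal sketch of tabled evaluation for the Datalog-style rules
    #   reach(X) :- start(X).
    #   reach(Y) :- reach(X), edge(X, Y).
    # Tabling stores each derived fact once, so cyclic edge relations
    # (which arise when encoding automata) cannot cause non-termination.
    from collections import deque

    def tabled_reach(start, edges):
        """edges: dict mapping node -> iterable of successor nodes."""
        table = {start}           # the table of derived reach/1 facts
        agenda = deque([start])   # facts whose consequences are still pending
        while agenda:
            x = agenda.popleft()
            for y in edges.get(x, ()):
                if y not in table:        # only new answers are tabled
                    table.add(y)
                    agenda.append(y)
        return table

    # A cyclic graph: naive top-down evaluation without tabling would loop forever.
    edges = {0: [1], 1: [2, 0], 2: [2]}
    print(tabled_reach(0, edges))   # {0, 1, 2}
    ```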