409 research outputs found

    Memoization for Unary Logic Programming: Characterizing PTIME

    We give a characterization of deterministic polynomial time computation based on an algebraic structure called the resolution semiring, whose elements can be understood as logic programs or sets of rewriting rules over first-order terms. More precisely, we study the restriction of this framework to terms (and logic programs, rewriting rules) using only unary symbols. We prove it is complete for polynomial time computation, using an encoding of pushdown automata. We then introduce an algebraic counterpart of the memoization technique in order to show its PTIME soundness. We finally relate our approach and complexity results to the complexity of logic programming. As an application of our techniques, we show a PTIME-completeness result for a class of logic programming queries that use only unary function symbols. Comment: Submitted to LICS 201
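    The paper's memoization technique is algebraic, but its computational effect can be illustrated in miniature. The hypothetical Python sketch below (an illustration, not the resolution-semiring construction) memoizes a naive top-down recognizer for a context-free language, the membership problem associated with pushdown automata, which the completeness proof encodes: caching each (nonterminal, span) query collapses an exponential search into polynomial time, which is the role memoization plays in the PTIME soundness argument.

        from functools import lru_cache

        # Hypothetical grammar in Chomsky normal form for balanced parentheses:
        #   S -> S S | L B | L R,   B -> S R,   L -> '(',   R -> ')'
        BINARY = {"S": [("S", "S"), ("L", "B"), ("L", "R")], "B": [("S", "R")]}
        TERMINAL = {"L": "(", "R": ")"}

        def recognizes(word, start="S"):
            @lru_cache(maxsize=None)        # the memoization: each (symbol, i, j) is solved once
            def derives(symbol, i, j):
                if j - i == 1:              # a single letter: only a terminal rule can apply
                    return TERMINAL.get(symbol) == word[i]
                return any(
                    derives(left, i, k) and derives(right, k, j)
                    for (left, right) in BINARY.get(symbol, [])
                    for k in range(i + 1, j)
                )
            return len(word) > 0 and derives(start, 0, len(word))

        print(recognizes("(()())"))   # True
        print(recognizes("(()"))      # False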

    Tree-Structured Problems and Parallel Computation

    Turing machines are the classical formalism for describing word languages and are therefore also used to define complexity classes, for example by restricting the space or time a computation may use to solve a problem. For very low complexity, such as sublinear running time, circuits are used instead; they model complexities such as logarithmic time in a natural way and can also be viewed as a kind of parallel computation model. An important parallel complexity class is NC1, defined by Boolean circuits of logarithmic depth whose gates have bounded fan-in. The initial observation motivating this thesis is that many hard problems in NC1 have a similar structure and are solved in a similar way. The evaluation problem for Boolean formulas is one of the most representative problems of this class: given a propositional formula together with an assignment to its variables, decide whether it evaluates to true or to false. This problem is solved in NC1 by Buss's algorithm. In a similar way, arithmetic formulas can be evaluated in #NC1, and the word problem for visibly pushdown languages can be solved. Courcelle's theorem, which involves computations of tree automata, also belongs to this class of problems. All of these problems have in common that their instances are tree-like: formulas are trees, visibly pushdown languages contain trees encoded as words, and Courcelle's theorem considers graphs of bounded treewidth, i.e., graphs that can be represented as trees. The latter in particular is a recurring pattern; for example, there are NP-complete graph problems, such as finding Hamiltonian cycles, that fall into P when the treewidth is bounded, and more recent analyses have improved this bound further to the parallel complexity class SAC1. The problems mentioned above come from different areas and have individual solutions. The main thesis of this work is that this diversity can be unified. A generic solution scheme is presented, based on reducing the problems to a term evaluation problem; its centerpiece is a term evaluation algorithm that is independent of the algebra over which the term is to be evaluated. As a result, a large number of problems, including those mentioned above, can be solved in an analogous way, and new results can be obtained just as easily. Without the unified approach, this collection of results could not have been established within the scope of a single thesis.
    The solution scheme developed here always yields circuit families of polylogarithmic depth. The thesis also addresses the question of how powerful constant-depth circuit families still are with respect to term evaluation. The class AC0 is a natural candidate; it corresponds to the set of languages definable in first-order logic. To approach this question, the term evaluation problem over finite algebras is considered first; it can in turn be embedded into the word problem for visibly pushdown languages. This part of the thesis therefore deals primarily with the first-order definability of visibly pushdown languages. Here, open problems come to light that indicate how poorly constant-depth complexity is still understood, despite the results of Furst, Saxe, and Sipser, and of Håstad.
    The content described so far is part of one continuous development. There is, however, one topic in this thesis that is orthogonal to it: automata, and in particular cost register automata. On the one hand, as indicated above, automata are examples of applications of the generic solution scheme developed here. On the other hand, they can themselves serve to describe term evaluation problems; for instance, visibly pushdown automata can carry out term evaluation over finite algebras. To go beyond finite algebras, the automata need more memory. Visibly pushdown automata have a stack that is suited precisely to verifying the tree structure of an input formula. For non-finite algebras, a model is introduced that combines visibly pushdown automata with cost register automata. A cost register automaton is a finite automaton equipped with additional registers; the registers can store values of an algebra and are updated in every step depending on the input symbol and the current state. This one-way data flow from states to registers ensures that the model not only remains decidable but, depending on the algebra, also has low complexity. The new model of cost register visibly pushdown automata can then evaluate terms. Basic properties of the model are established, including complexity bounds.
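    The following sketch illustrates the algebra-generic view of term evaluation described above: a single tree walk, parameterized by the algebra, evaluates the same term as a Boolean formula or as an arithmetic formula. The Node and algebra definitions are illustrative assumptions, and the sequential recursion only demonstrates the interface; the thesis instead constructs circuit families of polylogarithmic depth for this problem.

        from dataclasses import dataclass
        from typing import Callable, Dict

        @dataclass
        class Node:
            op: str                    # operation symbol of the algebra
            children: tuple = ()       # leaves carry a 0-ary operation (a constant)

        Algebra = Dict[str, Callable]  # maps operation symbols to concrete functions

        def evaluate(term: Node, algebra: Algebra):
            return algebra[term.op](*(evaluate(c, algebra) for c in term.children))

        # The term (1 AND 1) OR 0, read over two different algebras.
        formula = Node("or", (Node("and", (Node("1"), Node("1"))), Node("0")))

        BOOL = {"or": lambda a, b: a or b, "and": lambda a, b: a and b,
                "1": lambda: True, "0": lambda: False}
        ARITH = {"or": lambda a, b: a + b, "and": lambda a, b: a * b,
                 "1": lambda: 1, "0": lambda: 0}

        print(evaluate(formula, BOOL))    # True: Boolean formula evaluation (NC1, Buss)
        print(evaluate(formula, ARITH))   # 1: arithmetic evaluation of the same tree (#NC1)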

    Iterated uniform finite-state transducers

    A deterministic iterated uniform finite-state transducer (iufst, for short) applies the same length-preserving transduction in several left-to-right sweeps. The first sweep processes the input string, while every subsequent sweep processes the output of the previous one. We first focus on constant sweep bounded iufsts, studying their descriptional power versus deterministic finite automata and the state cost of implementing language operations. We then turn to non-constant sweep bounded iufsts, exhibiting a hierarchy of nonregular languages that depends on the sweep complexity.
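    A hypothetical sketch of the model's mechanics (not an example from the paper): the same length-preserving Mealy-style transducer is run left to right for a fixed number of sweeps, each sweep reading the output of the previous one, and the state reached on the last sweep decides acceptance. With a constant number of sweeps such devices accept only regular languages, so the toy machine below, which accepts words containing at most k occurrences of 'a' when run for k sweeps, merely illustrates how the sweeps interact.

        def run_sweep(delta, out, q0, word):
            """One left-to-right sweep: returns the output word and the final state."""
            state, produced = q0, []
            for symbol in word:
                produced.append(out[state, symbol])
                state = delta[state, symbol]
            return "".join(produced), state

        def iufst_accepts(delta, out, q0, accepting, sweeps, word):
            current, last_state = word, q0
            for _ in range(sweeps):                  # the same transducer on every sweep
                current, last_state = run_sweep(delta, out, q0, current)
            return last_state in accepting

        # Toy transducer over {a, b}: each sweep rewrites the first remaining 'a' to 'b'
        # and remembers in its state whether it saw another 'a' afterwards.
        delta = {("seek", "a"): "copy", ("seek", "b"): "seek",
                 ("copy", "a"): "rest", ("copy", "b"): "copy",
                 ("rest", "a"): "rest", ("rest", "b"): "rest"}
        out   = {("seek", "a"): "b",   ("seek", "b"): "b",
                 ("copy", "a"): "a",   ("copy", "b"): "b",
                 ("rest", "a"): "a",   ("rest", "b"): "b"}

        # Run for 3 sweeps: accepts exactly the words with at most three occurrences of 'a'.
        print(iufst_accepts(delta, out, "seek", {"seek", "copy"}, 3, "ababa"))  # True
        print(iufst_accepts(delta, out, "seek", {"seek", "copy"}, 3, "aabaa"))  # False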

    Faster Algorithms for Weighted Recursive State Machines

    Pushdown systems (PDSs) and recursive state machines (RSMs), which are linearly equivalent, are standard models for interprocedural analysis. Yet RSMs are more convenient as they (a) explicitly model function calls and returns, and (b) specify many natural parameters for algorithmic analysis, e.g., the number of entries and exits. We consider a general framework where RSM transitions are labeled from a semiring and path properties are algebraic with semiring operations, which can model, e.g., interprocedural reachability and dataflow analysis problems. Our main contributions are new algorithms for several fundamental problems. Compared to a direct translation of RSMs to PDSs and the best-known existing bounds for PDSs, our analysis algorithm improves the complexity for finite-height semirings (a case that subsumes reachability and standard dataflow properties). We further consider the problem of extracting distance values from the representation structures computed by our algorithm, and give efficient algorithms that distinguish the complexity of a one-time preprocessing step from the complexity of each individual query. Another advantage of our algorithm is that the improvements carry over to the concurrent setting, where we improve the best-known complexity for the context-bounded analysis of concurrent RSMs. Finally, we provide a prototype implementation that gives a significant speed-up on several benchmarks from the SLAM/SDV project.
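    A minimal sketch of the algebraic setting the abstract refers to, not of the RSM algorithms themselves: transitions carry weights from a semiring, a path is weighted by the semiring product of its labels, and the analysis combines all paths with the semiring sum. The fixed-point loop below handles a plain weighted graph and terminates whenever only finitely many update steps are possible, which is the role the finite-height assumption plays; the edge list and the two semirings are illustrative.

        def path_weights(edges, source, zero, one, plus, times):
            """edges: list of (u, w, v) with w taken from the semiring.
            Iterates d[v] := plus(d[v], times(d[u], w)) until a fixed point is reached."""
            d = {source: one}
            changed = True
            while changed:                       # converges for finite-height semirings
                changed = False
                for u, w, v in edges:
                    if u in d:
                        new = plus(d.get(v, zero), times(d[u], w))
                        if new != d.get(v, zero):
                            d[v], changed = new, True
            return d

        edges = [("entry", 2, "loop"), ("loop", 1, "loop"), ("loop", 3, "exit")]

        # Tropical (min, +) semiring: shortest distances from "entry".
        print(path_weights(edges, "entry", float("inf"), 0, min, lambda a, b: a + b))
        # Boolean semiring: plain reachability (all weights collapse to True).
        bool_edges = [(u, True, v) for u, _, v in edges]
        print(path_weights(bool_edges, "entry", False, True,
                           lambda a, b: a or b, lambda a, b: a and b))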

    On the Complexity of the Equivalence Problem for Probabilistic Automata

    Checking two probabilistic automata for equivalence has been shown to be a key problem for efficiently establishing various behavioural and anonymity properties of probabilistic systems. In recent experiments a randomised equivalence test based on polynomial identity testing outperformed deterministic algorithms. In this paper we show that polynomial identity testing yields efficient algorithms for various generalisations of the equivalence problem. First, we provide a randomised NC procedure that also outputs a counterexample trace in case of inequivalence. Second, we show how to check two probabilistic automata with (cumulative) rewards for equivalence. Our algorithm runs in deterministic polynomial time if the number of reward counters is fixed. Finally, we show that the equivalence problem for probabilistic visibly pushdown automata is logspace equivalent to the Arithmetic Circuit Identity Testing problem, which is to decide whether a polynomial represented by an arithmetic circuit is identically zero. Comment: Technical report for a FoSSaCS'12 paper.
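    For intuition, here is a hedged sketch of the classical polynomial-time equivalence check in the style of Tzeng's linear-algebra algorithm, not the randomised NC procedure or the reward and pushdown extensions of the paper: two probabilistic automata are equivalent iff the combined final vector is orthogonal to the space spanned by all vectors init · M(w), and a basis of that space can be grown one letter at a time. The two automata at the end are illustrative; both accept every word with probability 1/2.

        import numpy as np

        def equivalent(init_a, trans_a, final_a, init_b, trans_b, final_b, tol=1e-9):
            """trans_a / trans_b: dicts letter -> stochastic matrix; a shared alphabet is assumed."""
            u = np.concatenate([init_a, -init_b])          # difference of the initial distributions
            f = np.concatenate([final_a, final_b])
            step = {a: np.block([[trans_a[a], np.zeros((len(init_a), len(init_b)))],
                                 [np.zeros((len(init_b), len(init_a))), trans_b[a]]])
                    for a in trans_a}
            basis, worklist = np.zeros((0, len(u))), [u]
            while worklist:
                v = worklist.pop()
                if np.linalg.matrix_rank(np.vstack([basis, v])) == basis.shape[0]:
                    continue                               # v already lies in the span of the basis
                if abs(v @ f) > tol:
                    return False                           # some word has differing acceptance probability
                basis = np.vstack([basis, v])
                worklist.extend(v @ step[a] for a in step)
            return True                                    # at most dim(u) basis vectors: polynomial time

        A = (np.array([1.0]), {"a": np.array([[1.0]])}, np.array([0.5]))
        B = (np.array([0.5, 0.5]), {"a": np.eye(2)}, np.array([1.0, 0.0]))
        print(equivalent(*A, *B))                          # True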

    Low-Latency Sliding Window Algorithms for Formal Languages

    Low-latency sliding window algorithms for regular and context-free languages are studied, where latency refers to the worst-case time spent for a single window update or query. For every regular language L it is shown that there exists a constant-latency solution that supports adding and removing symbols independently on both ends of the window (the so-called two-way variable-size model). We prove that this result extends to all visibly pushdown languages. For deterministic 1-counter languages we present an O(log n) latency sliding window algorithm for the two-way variable-size model, where n refers to the window size. We complement these results with a conditional lower bound: there exists a fixed real-time deterministic context-free language L such that, assuming the OMV (online matrix-vector multiplication) conjecture, there is no sliding window algorithm for L with latency n^{1/2-ε} for any ε > 0, even in the most restricted sliding window model (one-way fixed-size model). The above mentioned results all refer to the unit-cost RAM model with logarithmic word size. For regular languages we also present a refined picture using word sizes O(1), O(log log n), and O(log n). Comment: A short version will be presented at the conference FSTTCS 202
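    One folklore ingredient behind such algorithms can be sketched concretely: membership of the window content in a regular language is maintained by aggregating DFA transition functions over a two-stack queue. The class and DFA below are illustrative, not code from the paper; the sketch achieves amortized rather than worst-case constant time per update and supports only the one-way variable-size model, whereas the paper's constant-latency, two-way results need substantially more machinery.

        def compose(first, then):
            """Transition functions as tuples: compose(first, then)[q] = then[first[q]]."""
            return tuple(then[q] for q in first)

        class SlidingWindowRegular:
            def __init__(self, delta, n_states, start, accepting):
                self.delta = delta                 # delta[symbol] = transition function (tuple)
                self.identity = tuple(range(n_states))
                self.start, self.accepting = start, accepting
                self.older = []    # entries (f, aggregate in reading order of this symbol and the newer symbols below it)
                self.newer = []    # entries (f, aggregate in reading order of the older symbols below it and this symbol)

            def push(self, symbol):                # append a symbol at the right end of the window
                f = self.delta[symbol]
                agg = compose(self.newer[-1][1], f) if self.newer else f
                self.newer.append((f, agg))

            def pop(self):                         # remove the symbol at the left end of the window
                if not self.older:                 # amortized O(1): each element is moved at most once
                    while self.newer:
                        f, _ = self.newer.pop()
                        agg = compose(f, self.older[-1][1]) if self.older else f
                        self.older.append((f, agg))
                self.older.pop()

            def accepts(self):                     # does the DFA accept the current window content?
                f = self.older[-1][1] if self.older else self.identity
                if self.newer:
                    f = compose(f, self.newer[-1][1])
                return f[self.start] in self.accepting

        # DFA for "even number of a's": states 0/1, reading 'a' flips, reading 'b' keeps.
        win = SlidingWindowRegular({"a": (1, 0), "b": (0, 1)}, 2, 0, {0})
        for s in "aab":
            win.push(s)
        print(win.accepts())    # True: "aab" has an even number of a's
        win.push("a")
        win.pop()               # window is now "aba"
        win.pop()               # window is now "ba"
        print(win.accepts())    # False: "ba" has one 'a'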

    REGULAR LANGUAGES: TO FINITE AUTOMATA AND BEYOND - SUCCINCT DESCRIPTIONS AND OPTIMAL SIMULATIONS

    It is well known that regular, or type 3, languages are equivalent to finite automata. Nevertheless, many other characterizations of this class of languages in terms of computational devices and generative models are present in the literature. For example, by suitably restricting more general models such as context-free grammars, pushdown automata, and Turing machines, which characterize wider classes of languages, it is possible to obtain formal models that generate or recognize only regular languages. The resulting formalisms provide alternative representations of type 3 languages that may be significantly more concise than other models sharing the same expressive power. The goal of this work is to investigate these formal systems from a descriptional complexity perspective, that is, to study the relationships between their sizes, namely the number of symbols used to write down their descriptions. We also present some results related to the famous question posed by Sakoda and Sipser in 1978, concerning the cost, in terms of the number of states, of removing nondeterminism from finite automata by exploiting the ability of two-way deterministic finite automata to move their head back and forth over the input tape.
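    As a concrete, textbook-level illustration of the size blowups studied here (an illustrative sketch, not material from the thesis): the language of words over {a, b} whose n-th symbol from the end is 'a' has an NFA with n + 1 states, while determinization yields 2^n reachable subset states, which is known to be optimal. The Sakoda and Sipser question asks for the analogous cost when the deterministic target is a two-way automaton instead of a one-way one.

        def nfa_nth_from_end(n):
            """States 0..n: state 0 loops and guesses the position n from the end, state n accepts."""
            delta = {(0, "a"): {0, 1}, (0, "b"): {0}}
            for i in range(1, n):
                delta[(i, "a")] = {i + 1}
                delta[(i, "b")] = {i + 1}
            return delta, {0}

        def reachable_subsets(delta, initial, alphabet="ab"):
            """Subset construction: collect the DFA states reachable from the initial subset."""
            start = frozenset(initial)
            seen, worklist = {start}, [start]
            while worklist:
                s = worklist.pop()
                for a in alphabet:
                    t = frozenset(r for q in s for r in delta.get((q, a), ()))
                    if t not in seen:
                        seen.add(t)
                        worklist.append(t)
            return seen

        n = 5
        delta, init = nfa_nth_from_end(n)              # an NFA with n + 1 = 6 states
        print(len(reachable_subsets(delta, init)))     # 32 = 2^n reachable DFA states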

    Querying XML data streams from wireless sensor networks: an evaluation of query engines

    As the deployment of wireless sensor networks increases and their application domains widen, the opportunity for effective use of XML filtering and streaming query engines is ever more present. XML filtering engines aim to provide efficient real-time querying of streaming XML-encoded data. This paper provides a detailed analysis of several such engines, focusing on the technology involved, their capabilities, their support for XPath, and their performance. Our experimental evaluation identifies which filtering engine is best suited to process a given query based on its properties. Such metrics are important in establishing the best approach to filtering XML streams on the fly.
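    The core idea behind streaming XML filtering can be sketched with a stack of the currently open elements, so that a simple child-axis path query is matched on the fly without buffering the document. The query, document, and PathFilter class below are hypothetical illustrations rather than one of the engines evaluated in the paper, and they handle only a small fragment of XPath.

        import xml.sax

        class PathFilter(xml.sax.ContentHandler):
            """Collects the text of elements whose path of open tags equals the query path."""
            def __init__(self, path):
                super().__init__()
                self.path = path           # e.g. ["readings", "sensor", "temp"]
                self.stack = []
                self.matches = []

            def startElement(self, name, attrs):
                self.stack.append(name)

            def characters(self, content):
                if self.stack == self.path and content.strip():
                    self.matches.append(content.strip())

            def endElement(self, name):
                self.stack.pop()

        stream = (b"<readings><sensor id='s1'><temp>21.5</temp></sensor>"
                  b"<sensor id='s2'><temp>19.0</temp><battery>80</battery></sensor></readings>")
        handler = PathFilter(["readings", "sensor", "temp"])
        xml.sax.parseString(stream, handler)
        print(handler.matches)             # ['21.5', '19.0']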
    • 

    corecore