72 research outputs found

    26. Theorietag Automaten und Formale Sprachen / 23. Jahrestagung Logik in der Informatik: Tagungsband

    The Theorietag is the annual meeting of the special interest group Automaten und Formale Sprachen of the Gesellschaft für Informatik and was first held in Magdeburg in 1991. Since 1996, the Theorietag has been accompanied by a one-day workshop with invited talks. The annual meeting of the special interest group Logik in der Informatik of the Gesellschaft für Informatik was first held in Leipzig in 1993. The yearly business meetings of both groups also take place during these events. This year, for the first time, the Theorietag of the group Automaten und Formale Sprachen is held jointly with the annual meeting of the group Logik in der Informatik. The joint event was organized by the Zuverlässige Systeme group of the Institut für Informatik at Christian-Albrechts-Universität Kiel and takes place from 4 to 7 October at the conference hotel Tannenfelde near Neumünster. During the meeting, a workshop open to all interested participants will be held. In Tannenfelde, Christoph Löding (Aachen), Tomás Masopust (Dresden), Henning Schnoor (Kiel), Nicole Schweikardt (Berlin) and Georg Zetzsche (Paris) will give invited talks on their current work. In addition, 26 contributed talks will be given by participants, 17 at the Theorietag Automaten und Formale Sprachen and nine at the annual meeting Logik in der Informatik. The present volume contains short versions of all contributions. We thank the Gesellschaft für Informatik, Christian-Albrechts-Universität zu Kiel and the conference hotel Tannenfelde for supporting this Theorietag. Special thanks go to the organization team: Maike Bradler, Philipp Sieweck, Joel Day. Kiel, October 2016. Florin Manea, Dirk Nowotka and Thomas Wilke

    Clearing Restarting Automata

    Restarting automata were introduced as a model for analysis by reduction, a linguistically motivated method for checking the correctness of a sentence. The goal of this thesis is to study more restricted models of restarting automata which, based only on a limited local context, can either delete a substring of the current tape content or replace a substring by a special auxiliary symbol that cannot be overwritten anymore but can be deleted later. Such restarting automata are called clearing restarting automata. The thesis investigates the closure properties of clearing restarting automata, their relation to the Chomsky hierarchy, and the possibilities for learning such automata from positive and negative samples. Department of Software and Computer Science Education, Faculty of Mathematics and Physics
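    As a rough illustration of the model described above (a sketch, not code from the thesis), the following Python snippet encodes a clearing restarting automaton as a set of instructions (left context, factor, right context); the characters '<' and '>' stand in for the tape sentinels, and a word is accepted if it can be reduced to the empty word by repeatedly deleting a factor whose contexts match. The function names and the sentinel notation are illustrative assumptions.

        # A toy clearing restarting automaton: an instruction (left, factor, right)
        # allows deleting `factor` wherever it occurs with the given immediate contexts.
        from functools import lru_cache

        def accepts(word: str, instructions) -> bool:
            """Accept iff the word can be reduced to the empty word by repeatedly
            applying some instruction (analysis by reduction)."""

            @lru_cache(maxsize=None)
            def reducible(w: str) -> bool:
                if w == "":
                    return True
                tape = "<" + w + ">"          # '<' and '>' play the role of the sentinels
                for left, factor, right in instructions:
                    ctx = left + factor + right
                    start = tape.find(ctx)
                    while start != -1:        # try every occurrence (non-determinism)
                        i = start + len(left)                 # factor position on the tape
                        reduced = tape[1:i] + tape[i + len(factor):-1]
                        if reducible(reduced):
                            return True
                        start = tape.find(ctx, start + 1)
                return False

            return reducible(word)

        # Instructions for the non-regular language { a^n b^n : n >= 0 }:
        # delete "ab" between an 'a' and a 'b', or between the two sentinels.
        instructions = [("a", "ab", "b"), ("<", "ab", ">")]
        print(accepts("aaabbb", instructions))  # True
        print(accepts("aabbb", instructions))   # False

    The two example instructions realize the language { a^n b^n : n >= 0 }, a standard toy example for such reduction systems: each reduction step removes one matched pair, and the sentinel instruction clears the last remaining "ab".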

    New Results on Context-Free Tree Languages

    Context-free tree languages play an important role in algebraic semantics and are applied in mathematical linguistics. In this thesis, we present some new results on context-free tree languages.

    Alternative Automata-based Approaches to Probabilistic Model Checking

    In this thesis we focus on new methods for probabilistic model checking (PMC) with linear temporal logic (LTL). The standard approach translates an LTL formula into a deterministic ω-automaton with a double-exponential blow-up. There are approaches for Markov chain analysis against LTL with exponential runtime, which motivates the search for non-deterministic automata with restricted forms of non-determinism that make them suitable for PMC. For MDPs, the approach via deterministic automata matches the double-exponential lower bound, but a practical application might benefit from approaches via non-deterministic automata. We first investigate good-for-games (GFG) automata, in which one can resolve the non-determinism for a finite prefix without knowing the infinite suffix and still obtain an accepting run for every accepted word. We explain that GFG automata are well suited for MDP analysis on a theoretical level, but our experiments show that GFG automata cannot compete with deterministic automata. We have also researched another form of pseudo-determinism, namely unambiguity, where for every accepted word there is exactly one accepting run. We present a polynomial-time approach for PMC of Markov chains against specifications given by an unambiguous Büchi automaton (UBA). Its two key elements are identifying whether the induced probability is positive and, if so, identifying a state set inducing probability 1. Additionally, we examine the new symbolic Muller acceptance described in the Hanoi Omega Automata Format, which we call Emerson-Lei acceptance: a positive Boolean formula over unconditional fairness constraints. We present a construction of small deterministic automata using Emerson-Lei acceptance. Deciding whether an MDP has a positive maximal probability of satisfying an Emerson-Lei acceptance condition is NP-complete; this fact has triggered a DPLL-based algorithm for deciding positivity.
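    To make the notion of Emerson-Lei acceptance concrete, here is a small Python sketch (not taken from the thesis or from any tool): an acceptance condition in the style of the Hanoi Omega Automata format is a positive Boolean formula over atoms Inf(i) and Fin(i) on acceptance marks, and checking it against the set of marks that occur infinitely often on a run, e.g. the marks on the cycle of a lasso, is plain formula evaluation. The class names are illustrative.

        from dataclasses import dataclass
        from typing import FrozenSet, Union

        # Atoms: Inf(i) holds iff acceptance mark i is seen infinitely often,
        #        Fin(i) holds iff mark i is seen only finitely often.
        @dataclass(frozen=True)
        class Inf:
            mark: int

        @dataclass(frozen=True)
        class Fin:
            mark: int

        @dataclass(frozen=True)
        class And:
            left: 'Acc'
            right: 'Acc'

        @dataclass(frozen=True)
        class Or:
            left: 'Acc'
            right: 'Acc'

        Acc = Union[Inf, Fin, And, Or]

        def satisfies(acc: Acc, inf_marks: FrozenSet[int]) -> bool:
            """Evaluate a positive Boolean acceptance formula against the set of
            marks occurring infinitely often on a run."""
            if isinstance(acc, Inf):
                return acc.mark in inf_marks
            if isinstance(acc, Fin):
                return acc.mark not in inf_marks
            if isinstance(acc, And):
                return satisfies(acc.left, inf_marks) and satisfies(acc.right, inf_marks)
            if isinstance(acc, Or):
                return satisfies(acc.left, inf_marks) or satisfies(acc.right, inf_marks)
            raise TypeError(acc)

        # Example: a Rabin pair, Fin(0) & Inf(1)
        rabin1 = And(Fin(0), Inf(1))
        print(satisfies(rabin1, frozenset({1})))     # True
        print(satisfies(rabin1, frozenset({0, 1})))  # False

    Roughly speaking, guessing which marks can be made to occur infinitely often inside an end component and then evaluating the formula as above is the kind of certificate behind the NP membership of the positivity question mentioned in the abstract.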

    Sampling Correctors

    In many situations, sample data is obtained from a noisy or imperfect source. In order to address such corruptions, this paper introduces the concept of a sampling corrector. Such algorithms use structure that the distribution is purported to have in order to allow one to make "on-the-fly" corrections to samples drawn from probability distributions. These algorithms then act as filters between the noisy data and the end user. We show connections between sampling correctors, distribution learning algorithms, and distribution property testing algorithms. We show that these connections can be utilized to expand the applicability of known distribution learning and property testing algorithms as well as to achieve improved algorithms for those tasks. As a first step, we show how to design sampling correctors using proper learning algorithms. We then focus on the question of whether sampling correctors can be more efficient in terms of sample complexity than learning algorithms for the analogous families of distributions. When correcting monotonicity, we show that this is indeed the case when the corrector is also granted query access to the cumulative distribution function. We also obtain sampling correctors for monotonicity without this stronger type of access, provided that the distribution is originally very close to monotone (namely, at distance $O(1/\log^2 n)$). In addition, we consider a restricted error model that aims at capturing "missing data" corruptions. In this model, we show that distributions that are close to monotone have sampling correctors that are significantly more efficient than what is achievable by the learning approach. We also consider the question of whether an additional source of independent random bits is required by sampling correctors to implement the correction process.
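    As a toy illustration of the learning-based route mentioned above (building a corrector from a proper learning algorithm), the following Python sketch learns an empirical histogram from the noisy source, projects it onto the monotone (non-increasing) family with a pool-adjacent-violators fit, and then serves samples from the projection. The class and parameter names are hypothetical, the L2 isotonic projection is used only for simplicity, and the sample budget m is arbitrary; this is a baseline sketch, not the paper's construction, which aims precisely at beating such learn-then-resample correctors in sample complexity.

        import random
        from collections import Counter

        def monotone_projection(hist):
            """Project a histogram onto the non-increasing cone via
            pool-adjacent-violators (an L2 isotonic fit, for illustration only)."""
            blocks = []                      # each block is [sum, count]
            for v in hist:
                blocks.append([v, 1])
                # merge while the non-increasing constraint is violated
                while len(blocks) > 1 and blocks[-1][0] / blocks[-1][1] > blocks[-2][0] / blocks[-2][1]:
                    s, c = blocks.pop()
                    blocks[-1][0] += s
                    blocks[-1][1] += c
            out = []
            for s, c in blocks:
                out.extend([s / c] * c)
            return out

        class MonotoneSamplingCorrector:
            """Filter between a noisy sample source and the end user, assuming the
            distribution over {0,...,n-1} is purported to be non-increasing."""

            def __init__(self, noisy_sampler, n, m=10000):
                counts = Counter(noisy_sampler() for _ in range(m))
                hist = [counts[i] / m for i in range(n)]
                fitted = monotone_projection(hist)
                total = sum(fitted)
                self.support = list(range(n))
                self.probs = [max(p, 0.0) / total for p in fitted]  # guard + renormalize

            def sample(self):
                # serve a "corrected" sample drawn from the projected distribution
                return random.choices(self.support, weights=self.probs, k=1)[0]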