
    On determinism versus nondeterminism for restarting automata

    A restarting automaton processes a given word by executing a sequence of local simplifications until a simple word is obtained that the automaton then accepts. Such a computation is expressed as a sequence of cycles. A nondeterministic restarting automaton M is called correctness preserving if, for each cycle u ⊢_M^c v, the string v belongs to the characteristic language L_C(M) of M whenever the string u does. Our first result states that for each type of restarting automaton X ∈ {R, RW, RWW, RL, RLW, RLWW}, if M is a nondeterministic X-automaton that is correctness preserving, then there exists a deterministic X-automaton M_1 such that the characteristic languages L_C(M_1) and L_C(M) coincide. When a restarting automaton M executes a cycle that transforms a string from the language L_C(M) into a string not belonging to L_C(M), this can be interpreted as an error of M. By counting the number of cycles it may take M to detect this error, we obtain a measure for the influence that errors have on computations; accordingly, this measure is called the error detection distance. It turns out, however, that an X-automaton with bounded error detection distance is equivalent to a correctness-preserving X-automaton, and therewith to a deterministic X-automaton. This means that nondeterminism increases the expressive power of X-automata only in combination with an unbounded error detection distance.
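    The correctness-preservation condition is easy to state operationally: every cycle applied to a word of the characteristic language must again yield a word of that language. Below is a small, hypothetical sketch (not from the paper) that brute-forces this condition for a toy rewriting system standing in for the cycles of a restarting automaton; the rule set, the sample language a^n b^n, and all identifiers are illustrative assumptions.

    ```python
    # Toy check (illustrative only): are a set of local rewrite rules, standing in
    # for the cycles of a restarting automaton, correctness preserving with respect
    # to a given characteristic language?
    from itertools import product

    def cycles(word, rules):
        """All words reachable from `word` in one cycle, where a rule (lhs, rhs)
        rewrites a single occurrence of lhs to rhs (a local simplification)."""
        for lhs, rhs in rules:
            start = 0
            while (i := word.find(lhs, start)) != -1:
                yield word[:i] + rhs + word[i + len(lhs):]
                start = i + 1

    def is_correctness_preserving(rules, in_language, alphabet, max_len):
        """For every word u up to max_len: if u is in the language and u |- v is a
        cycle, then v must be in the language as well."""
        for n in range(max_len + 1):
            for letters in product(alphabet, repeat=n):
                u = "".join(letters)
                if not in_language(u):
                    continue
                if any(not in_language(v) for v in cycles(u, rules)):
                    return False
        return True

    def in_lang(w):
        """Membership test for the sample language { a^n b^n }."""
        k = len(w) // 2
        return len(w) % 2 == 0 and w == "a" * k + "b" * k

    # Deleting one occurrence of "ab" keeps us inside { a^n b^n }.
    print(is_correctness_preserving([("ab", "")], in_lang, "ab", 8))  # True
    ```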

    Clearing Restarting Automata

    Restarting automata were introduced as a model for analysis by reduction, a linguistically motivated method for checking the correctness of a sentence. The goal of this thesis is to study more restricted models of restarting automata which, based only on a limited local context, can either delete a substring of the current tape content or replace a substring by a special auxiliary symbol that cannot be overwritten anymore but can be deleted later. Such restarting automata are called clearing restarting automata. The thesis investigates the closure properties of clearing restarting automata, their relation to the Chomsky hierarchy, and the possibilities for learning such automata from positive and negative samples. (Department of Software and Computer Science Education, Faculty of Mathematics and Physics)
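    As a rough illustration of the model, the following sketch simulates a clearing restarting automaton in the formulation assumed here: an instruction (x, z, y) may erase an occurrence of z whose left context is x and right context is y, with sentinel symbols framing the tape, and a word is accepted if it can be reduced to the empty word. The instruction set for a^n b^n and all identifiers are illustrative, not taken from the thesis.

    ```python
    # Hypothetical minimal sketch of a clearing restarting automaton (cl-RA).
    from functools import lru_cache

    def accepts(word, instructions):
        @lru_cache(maxsize=None)
        def reducible_to_empty(w):
            if w == "":
                return True
            framed = "¢" + w + "$"            # left and right sentinels
            for x, z, y in instructions:
                for i in range(1, len(framed) - len(z)):
                    if (framed[i:i + len(z)] == z
                            and framed[:i].endswith(x)
                            and framed[i + len(z):].startswith(y)):
                        # Erase this occurrence of z; nondeterminism is simulated
                        # by trying every applicable instruction.
                        if reducible_to_empty(w[:i - 1] + w[i - 1 + len(z):]):
                            return True
            return False

        return reducible_to_empty(word)

    # Example instructions recognising { a^n b^n : n >= 0 }:
    # erase "ab" between an 'a' and a 'b', or between the two sentinels.
    rules = [("a", "ab", "b"), ("¢", "ab", "$")]
    print(accepts("aaabbb", rules), accepts("aabbb", rules))  # True False
    ```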

    Fair Testing

    In this paper we present a solution to the long-standing problem of characterising the coarsest liveness-preserving pre-congruence with respect to a full (TCSP-inspired) process algebra. In fact, we present two distinct characterisations, which give rise to the same relation: an operational one based on a De Nicola-Hennessy-like testing modality which we call should-testing, and a denotational one based on a refined notion of failures. One of the distinguishing characteristics of the should-testing pre-congruence is that it abstracts from divergences in the same way as Milner's observation congruence, and as a consequence is strictly coarser than observation congruence. In other words, should-testing has a built-in fairness assumption. This is in itself a property long sought after; it is in notable contrast to the well-known must-testing of De Nicola and Hennessy (denotationally characterised by a combination of failures and divergences), which treats divergence as catastrophic and hence is incompatible with observation congruence. Due to these characteristics, should-testing supports modular reasoning and allows one to use the proof techniques of observation congruence, but also supports additional laws and techniques. Moreover, we show decidability of should-testing (on the basis of the denotational characterisation). Finally, we demonstrate its advantages by applying it to a number of examples, including a scheduling problem, a version of the Alternating Bit protocol, and a fair lossy communication channel.
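    On finite-state systems, the fairness flavour of should-testing can be phrased as a reachability condition: success must remain reachable from every reachable state of the process composed with the test. The toy encoding below is a simplification of my own (it starts from an already-composed transition system rather than the paper's process algebra) and is only meant to contrast should-testing with must-testing on a lossy-channel-like example.

    ```python
    # Simplified finite-state illustration: should-testing ("fair testing") demands
    # that from every reachable state a success state is still reachable, whereas
    # must-testing demands that every maximal run actually reaches success.
    from collections import deque

    def reachable(graph, start):
        seen, queue = {start}, deque([start])
        while queue:
            s = queue.popleft()
            for t in graph.get(s, []):
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
        return seen

    def should_pass(graph, init, success):
        """True iff success stays reachable from every state reachable from init."""
        return all(reachable(graph, s) & success for s in reachable(graph, init))

    # A lossy channel that may retry forever but can always still succeed:
    # 'send' -> 'lost' -> 'send' (retry loop), 'send' -> 'delivered' (success).
    graph = {"send": ["lost", "delivered"], "lost": ["send"], "delivered": []}
    print(should_pass(graph, "send", {"delivered"}))  # True: fair testing accepts
    # Must-testing would reject it: the run send -> lost -> send -> ... never
    # reaches 'delivered'.
    ```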

    Correctness Witness Validation by Abstract Interpretation

    Witnesses record automated program analysis results and make them exchangeable. To validate correctness witnesses through abstract interpretation, we introduce a novel abstract operation unassume. This operator incorporates witness invariants into the abstract program state. Given suitable invariants, the unassume operation can accelerate fixpoint convergence and yield more precise results. We demonstrate the feasibility of this approach by augmenting an abstract interpreter with unassume operators and evaluating the impact of incorporating witnesses on performance and precision. Using manually crafted witnesses, we can confirm verification results for multi-threaded programs with a reduction in effort ranging from 7% to 47% in CPU time. More intriguingly, we discover that using witnesses from model checkers can guide our analyzer to verify program properties that it could not verify on its own. (Comment: 29 pages, 4 figures, 2 tables; extended version of the paper which is to appear at VMCAI 202)
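    To make the idea concrete, here is a minimal interval-domain sketch of what an unassume-style operation could look like. This is a simplification of my own, not the paper's definition: where assume restricts the abstract state by an invariant, unassume enlarges it to cover the invariant, so the analysis can start the fixpoint from the witness invariant and then only check that it is inductive.

    ```python
    # Minimal interval-domain sketch of the *idea* behind unassume (illustrative,
    # not the paper's exact operator).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Interval:
        lo: float
        hi: float
        def join(self, other):  # least upper bound
            return Interval(min(self.lo, other.lo), max(self.hi, other.hi))
        def meet(self, other):  # greatest lower bound
            return Interval(max(self.lo, other.lo), min(self.hi, other.hi))
        def leq(self, other):   # abstract inclusion
            return other.lo <= self.lo and self.hi <= other.hi

    def assume(state, invariant):
        return state.meet(invariant)   # classic assume: restrict the state

    def unassume(state, invariant):
        return state.join(invariant)   # "unassume": enlarge to cover the invariant

    # Loop "x := 0; while x < 100: x := x + 1" with witness invariant 0 <= x <= 100.
    witness = Interval(0, 100)
    state = unassume(Interval(0, 0), witness)           # jump ahead to the invariant
    guard = assume(state, Interval(float("-inf"), 99))  # inside the loop: x < 100
    post = Interval(guard.lo + 1, guard.hi + 1)         # effect of x := x + 1
    print(state, post.join(Interval(0, 0)).leq(state))  # invariant is inductive: True
    ```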

    Sampling Correctors

    In many situations, sample data is obtained from a noisy or imperfect source. In order to address such corruptions, this paper introduces the concept of a sampling corrector. Such algorithms use structure that the distribution is purported to have in order to allow one to make "on-the-fly" corrections to samples drawn from probability distributions. These algorithms then act as filters between the noisy data and the end user. We show connections between sampling correctors, distribution learning algorithms, and distribution property testing algorithms. We show that these connections can be utilized to expand the applicability of known distribution learning and property testing algorithms as well as to achieve improved algorithms for those tasks. As a first step, we show how to design sampling correctors using proper learning algorithms. We then focus on the question of whether algorithms for sampling correctors can be more efficient in terms of sample complexity than learning algorithms for the analogous families of distributions. When correcting monotonicity, we show that this is indeed the case when also granted query access to the cumulative distribution function. We also obtain sampling correctors for monotonicity without this stronger type of access, provided that the distribution is originally very close to monotone (namely, at a distance O(1/log² n)). In addition, we consider a restricted error model that aims at capturing "missing data" corruptions. In this model, we show that distributions that are close to monotone have sampling correctors that are significantly more efficient than those achievable by the learning approach. We also consider the question of whether an additional source of independent random bits is required by sampling correctors to implement the correction process.
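    The filter interface can be made concrete with a deliberately naive stand-in: the "correction" below is just the learn-and-resample baseline that the paper aims to improve upon (estimate a histogram, crudely force it to be monotone, resample), so it only illustrates the shape of a sampling corrector, not any of the paper's efficient constructions; all names and the example distribution are assumptions.

    ```python
    # Toy sampling-corrector interface: a filter between a noisy sampler and the
    # user, "correcting" toward the purported structure (a non-increasing
    # distribution on {0, ..., n-1}) via a naive learn-and-resample baseline.
    import random
    from collections import Counter

    def monotone_projection(probs):
        """Crudely enforce a non-increasing shape by clipping each probability to
        its predecessor, then renormalizing (not a proper L1 projection)."""
        fixed = []
        for p in probs:
            fixed.append(p if not fixed else min(p, fixed[-1]))
        total = sum(fixed)
        return [p / total for p in fixed]

    def sampling_corrector(noisy_sampler, n, budget=10_000):
        """Return a corrected sampler; uses `budget` draws from the noisy source."""
        counts = Counter(noisy_sampler() for _ in range(budget))
        estimate = [counts[i] / budget for i in range(n)]
        corrected = monotone_projection(estimate)
        return lambda: random.choices(range(n), weights=corrected)[0]

    # A distribution that should be non-increasing but has a corrupted bump at 3.
    noisy = lambda: random.choices(range(5), weights=[0.4, 0.25, 0.1, 0.2, 0.05])[0]
    draw = sampling_corrector(noisy, 5)
    print([draw() for _ in range(10)])  # samples from the "corrected" distribution
    ```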

    Faster Compact On-Line Lempel-Ziv Factorization

    We present a new on-line algorithm for computing the Lempel-Ziv factorization of a string that runs in O(N log N) time and uses only O(N log σ) bits of working space, where N is the length of the string and σ is the size of the alphabet. This is a notable improvement compared to the performance of previous on-line algorithms using the same order of working space but running in either O(N log³ N) time (Okanohara & Sadakane 2009) or O(N log² N) time (Starikovskaya 2012). The key to our new algorithm is the utilization of an elegant but less popular index structure called the Directed Acyclic Word Graph, or DAWG (Blumer et al. 1985). We also present an opportunistic variant of our algorithm which, given the run-length encoding of size m of a string of length N, computes the Lempel-Ziv factorization on-line in O(m · min{ (log log m)(log log N) / (log log log N), √(log m / log log m) }) time and O(m log N) bits of space, which is faster and more space efficient when the string is run-length compressible.
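    For reference, the Lempel-Ziv factorization itself is easy to state: each factor is either a fresh single character or the longest prefix of the remaining suffix that also occurs starting at an earlier position. The quadratic-time routine below computes it naively and is only meant to pin down the object being computed; it is unrelated to the paper's on-line, DAWG-based algorithm.

    ```python
    # Naive reference implementation of the LZ factorization (self-referencing
    # occurrences allowed). Quadratic time; for illustration only.
    def lz_factorize(s):
        """Return factors as (source_position, length) pairs, or (char, 0) for a
        fresh letter."""
        factors, i, n = [], 0, len(s)
        while i < n:
            best_len, best_src = 0, -1
            for j in range(i):
                l = 0
                while i + l < n and s[j + l] == s[i + l]:
                    l += 1
                if l > best_len:
                    best_len, best_src = l, j
            if best_len == 0:
                factors.append((s[i], 0))        # fresh character
                i += 1
            else:
                factors.append((best_src, best_len))
                i += best_len
        return factors

    print(lz_factorize("abababaab"))  # [('a', 0), ('b', 0), (0, 5), (0, 2)]
    ```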

    Distributed Consensus, Revisited

    We provide a novel model to formalize a well-known algorithm, by Chandra and Toueg, that solves Consensus among asynchronous distributed processes in the presence of a particular class of failure detectors (Diamond S or, equivalently, Omega), under the hypothesis that only a minority of the processes may crash. The model is defined as a global transition system that is unambiguously generated by local transition rules. The model is syntax-free in that it does not refer to any form of programming language or pseudo-code. We use our model to formally prove that the algorithm is correct.
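    One ingredient behind the minority-crash hypothesis is the standard quorum-intersection argument: any two majorities of the process set intersect, so information retained by one majority cannot be missed by another. The snippet below (my own illustration, not part of the formal model) checks this exhaustively for a small number of processes.

    ```python
    # Why consensus a la Chandra-Toueg assumes a majority of correct processes:
    # any two majority quorums intersect, so a value "locked" at a majority in one
    # round cannot be missed by the majority gathered by a later coordinator.
    from itertools import combinations

    def majorities(n):
        procs = range(n)
        need = n // 2 + 1
        for size in range(need, n + 1):
            yield from combinations(procs, size)

    def all_pairs_intersect(n):
        quorums = list(majorities(n))
        return all(set(q1) & set(q2) for q1, q2 in combinations(quorums, 2))

    print(all_pairs_intersect(5))  # True: every two majorities of 5 share a process
    # If half the processes could crash, the survivors might not form a majority
    # quorum at all, and this intersection argument would break down.
    ```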

    Acta Cybernetica: Volume 25, Number 1.
