
    Weighted Automata and Logics for Infinite Nested Words

    Nested words, introduced by Alur and Madhusudan, are used to capture structures with both linear and hierarchical order, e.g. XML documents, without losing valuable closure properties. Furthermore, Alur and Madhusudan introduced automata and equivalent logics for both finite and infinite nested words, thus extending Büchi's theorem to nested words. Recently, average and discounted computations of weights in quantitative systems have attracted much interest. Here, we introduce and investigate weighted automata models and weighted MSO logics for infinite nested words. As weight structures we consider valuation monoids, which incorporate average and discounted computations of weights as well as the classical semirings. We show that, under suitable assumptions, two, respectively three, fragments of our weighted logics can be transformed into each other. Moreover, we show that the logic fragments have the same expressive power as weighted nested word automata. Comment: LATA 2014, 12 pages.
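
    A minimal illustration of the weight structures involved: a valuation monoid pairs a monoid operation (used to resolve nondeterminism) with a valuation function that aggregates the weights along a run. The Python sketch below is illustrative only; the class and function names are not from the paper, and the average valuation is shown on finite weight sequences.

    from typing import Callable, Sequence

    class ValuationMonoid:
        """A toy rendering of a valuation monoid: a 'plus' operation with a
        neutral element, together with Val mapping weight sequences to values."""
        def __init__(self, plus: Callable[[float, float], float], zero: float,
                     val: Callable[[Sequence[float]], float]):
            self.plus, self.zero, self.val = plus, zero, val

    # Average valuation over the reals, with max resolving nondeterminism;
    # a discounted valuation (i-th weight divided by lambda**i) fits the same interface.
    avg_monoid = ValuationMonoid(max, float("-inf"), lambda ws: sum(ws) / len(ws))
    print(avg_monoid.val([1.0, 3.0, 2.0]))  # 2.0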

    Mean-payoff Automaton Expressions

    Quantitative languages are an extension of Boolean languages that assign to each word a real number. Mean-payoff automata are finite automata with numerical weights on transitions that assign to each infinite path the long-run average of the transition weights. When the mode of branching of the automaton is deterministic, nondeterministic, or alternating, the corresponding class of quantitative languages is not robust, as it is not closed under the pointwise operations of max, min, sum, and numerical complement. Nondeterministic and alternating mean-payoff automata are not decidable either: the quantitative generalizations of the universality and language-inclusion problems are undecidable for them. We introduce a new class of quantitative languages, defined by mean-payoff automaton expressions, which is robust and decidable: it is closed under the four pointwise operations, and we show that all decision problems are decidable for this class. Mean-payoff automaton expressions subsume deterministic mean-payoff automata, and we show that they have expressive power incomparable to nondeterministic and alternating mean-payoff automata. We also present, for the first time, an algorithm to compute the distance between two quantitative languages; here, the quantitative languages are given as mean-payoff automaton expressions.
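
    For intuition, the long-run average of an ultimately periodic ("lasso") run is determined by the repeated cycle alone; the finite prefix contributes nothing in the limit. A small illustrative Python sketch (names not taken from the paper):

    from itertools import chain, cycle, islice
    from typing import Sequence

    def mean_payoff_lasso(prefix: Sequence[float], loop: Sequence[float]) -> float:
        """Long-run average of the infinite weight sequence prefix . loop^omega:
        the prefix vanishes in the limit, so only the cycle average remains."""
        return sum(loop) / len(loop)

    def empirical_average(prefix: Sequence[float], loop: Sequence[float], n: int) -> float:
        """Average of the first n weights, for comparison with the limit value."""
        weights = list(islice(chain(prefix, cycle(loop)), n))
        return sum(weights) / n

    print(mean_payoff_lasso([5, 5], [1, 3]))                    # 2.0
    print(round(empirical_average([5, 5], [1, 3], 10_000), 3))  # ~2.001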

    Weighted Modal Transition Systems

    Specification theories as a tool in model-driven development processes of component-based software systems have recently attracted considerable attention. Current specification theories are, however, qualitative in nature and therefore fragile, in the sense that the inevitable approximation of systems by models, combined with the fundamental unpredictability of hardware platforms, makes it difficult to transfer conclusions about the behavior, based on models, to the actual system. Hence this approach is arguably unsuited for modern software systems. We propose here the first specification theory which allows one to capture quantitative aspects during the refinement and implementation process, thus alleviating the problems of the qualitative setting. Our proposed quantitative specification framework uses weighted modal transition systems as a formal model of specifications. These are labeled transition systems with the additional feature that they can model optional behavior, which may or may not be implemented by the system. Satisfaction and refinement are lifted from the well-known qualitative setting to our quantitative setting by introducing a notion of distances between weighted modal transition systems. We show that quantitative versions of parallel composition as well as quotient (the dual to parallel composition) inherit the properties from the Boolean setting. Comment: Submitted to Formal Methods in System Design.

    LIPIcs

    A discounted-sum automaton (NDA) is a nondeterministic finite automaton with edge weights, which values a run by the discounted sum of the visited edge weights. More precisely, the weight in the i-th position of the run is divided by lambda^i, where the discount factor lambda is a fixed rational number greater than 1. Discounted summation is a common and useful measuring scheme, especially for infinite sequences; it reflects the assumption that earlier weights are more important than later weights. Determinizing automata is often essential, for example in formal verification, where there are polynomial algorithms for comparing two deterministic NDAs, while the equivalence problem for NDAs is not known to be decidable. Unfortunately, however, discounted-sum automata are in general not determinizable: it is currently known that for every rational discount factor 1 < lambda < 2, there is an NDA with discount factor lambda (a lambda-NDA) that cannot be determinized. We provide positive news, showing that every NDA with an integral factor is determinizable. We also complete the picture by proving that the integers characterize exactly the discount factors that guarantee determinizability: we show that for every non-integral rational factor lambda, there is a nondeterminizable lambda-NDA. Finally, we prove that the class of NDAs with integral discount factors enjoys closure under the algebraic operations min, max, addition, and subtraction, which is the case neither for general NDAs nor for deterministic NDAs. This shows that for integral discount factors, the class of NDAs forms an attractive specification formalism in quantitative formal verification. All our results hold equally for automata over finite words and for automata over infinite words.
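
    The valuation of a run is easy to state concretely: the weight at position i is divided by lambda^i and the results are summed. A short illustrative Python sketch of this discounted sum on a finite prefix (the determinizability results of the paper concern the automata themselves, not this formula; the function name is illustrative):

    def discounted_sum(weights, lam):
        """Discounted sum: the weight at position i contributes weights[i] / lam**i."""
        if lam <= 1:
            raise ValueError("the discount factor must be greater than 1")
        return sum(w / lam ** i for i, w in enumerate(weights))

    # With the integral factor lambda = 2, the constant run 1, 1, 1, ... converges to 2;
    # finite prefixes approach that value from below.
    print(discounted_sum([1, 1, 1, 1], 2))   # 1.875
    print(discounted_sum([1] * 50, 2))       # ~2.0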

    Comparator automata in quantitative verification

    The notion of comparison between system runs is fundamental in formal verification. This concept is implicitly present in the verification of qualitative systems, and is more pronounced in the verification of quantitative systems. In this work, we identify a novel mode of comparison in quantitative systems: the online comparison of the aggregate values of two sequences of quantitative weights. This notion is embodied by comparator automata (comparators, in short), a new class of automata that read two infinite sequences of weights synchronously and relate their aggregate values. We show that aggregate functions that can be represented by a Büchi automaton yield comparators that are finite-state and accept by the Büchi condition as well. Such ω-regular comparators further lead to generic algorithms for a number of well-studied problems, including quantitative inclusion and winning strategies in quantitative graph games with incomplete information, as well as related non-decision problems, such as obtaining a finite representation of all counterexamples in the quantitative-inclusion problem. We study comparators for two aggregate functions: discounted-sum and limit-average. We prove that the discounted-sum comparator is ω-regular iff the discount factor is an integer. Not every aggregate function, however, has an ω-regular comparator. Specifically, we show that the language of sequence pairs for which limit-average aggregates exist is neither ω-regular nor ω-context-free. Given this result, we introduce the notion of prefix-average as a relaxation of limit-average aggregation, and show that it admits ω-context-free comparators.
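
    The negative result for limit-average rests on weight sequences whose running average never settles. The illustrative Python sketch below (not from the paper) builds a 0/1 weight sequence from blocks of doubling length; its prefix averages keep oscillating between roughly 1/3 and 2/3, so no limit-average aggregate exists for it:

    def block_sequence(num_blocks):
        """0-blocks and 1-blocks of doubling length: 0, 11, 0000, 11111111, ..."""
        bits, length = [], 1
        for i in range(num_blocks):
            bits.extend([i % 2] * length)
            length *= 2
        return bits

    seq = block_sequence(13)
    for n in (2 ** k for k in range(3, 13)):
        print(n, round(sum(seq[:n]) / n, 3))
    # 8 0.375, 16 0.625, 32 0.344, 64 0.656, ... the prefix averages never converge.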

    Near-Optimal Scheduling for LTL with Future Discounting

    We study the search problem for optimal schedulers for linear temporal logic (LTL) with future discounting. The logic, introduced by Almagor, Boker and Kupferman, is a quantitative variant of LTL in which an event in the far future makes only a discounted contribution to the truth value (a real number in the unit interval [0, 1]). The precise problem we study, which arises naturally e.g. when searching for a scheduler that recovers from an internal error state as soon as possible, is the following: given a Kripke frame, a formula, and a number in [0, 1] called a margin, find a path of the Kripke frame that is optimal with respect to the formula up to the prescribed margin (a truly optimal path may not exist). We present an algorithm for the problem; it works even in the extended setting with propositional quality operators, a setting in which (threshold) model checking is known to be undecidable.
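
    For intuition about the discounted semantics, the value of an "eventually" becomes smaller the later the event occurs. A toy Python rendering, assuming exponential discounting and one truth value per position along a finite path (the logic of Almagor, Boker and Kupferman allows more general discounting functions; the names here are illustrative):

    def discounted_eventually(values, lam=0.5):
        """Toy discounted 'eventually': an event k steps in the future contributes
        its truth value scaled by lam**k; take the best position along the path.

        values: truth values in [0, 1] of the subformula along a finite path."""
        return max(v * lam ** k for k, v in enumerate(values))

    # The same event is worth less the later it happens:
    print(discounted_eventually([0, 0, 1, 0]))   # 0.25: first satisfied at step 2
    print(discounted_eventually([0, 1, 0, 0]))   # 0.5 : already satisfied at step 1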

    IST Austria Technical Report

    Boolean notions of correctness are formalized by preorders on systems. Quantitative measures of correctness can be formalized by real-valued distance functions between systems, where the distance between implementation and specification provides a measure of “fit” or “desirability.” We extend the simulation preorder to the quantitative setting, by making each player of a simulation game pay a certain price for her choices. We use the resulting games with quantitative objectives to define three different simulation distances. The correctness distance measures how much the specification must be changed in order to be satisfied by the implementation. The coverage distance measures how much the implementation restricts the degrees of freedom offered by the specification. The robustness distance measures how much a system can deviate from the implementation description without violating the specification. We consider these distances for safety as well as liveness specifications. The distances can be computed in polynomial time for safety specifications, and for liveness specifications given by weak fairness constraints. We show that the distance functions satisfy the triangle inequality, that the distance between two systems does not increase under parallel composition with a third system, and that the distance between two systems can be bounded from above and below by distances between abstractions of the two systems. These properties suggest that our simulation distances provide an appropriate basis for a quantitative theory of discrete systems. We also demonstrate how the robustness distance can be used to measure how many transmission errors are tolerated by error-correcting codes.

    Quantitative reactive modeling and verification

    Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.