
    Two-Variable Universal Logic with Transitive Closure

    We prove that the satisfiability problem for the two-variable, universal fragment of first-order logic with constants (or, alternatively phrased, for the Bernays-Schönfinkel class with two universally quantified variables) remains decidable after augmenting the fragment by the transitive closure of a single binary relation. We give a 2-NExpTime upper bound and a 2-ExpTime lower bound for the complexity of the problem. We also study the cases in which the number of constants is restricted. It appears that with two constants the considered fragment has the finite model property and an NExpTime-complete satisfiability problem. Adding a third constant does not change the complexity but allows one to construct infinity axioms. A fourth constant lifts the lower complexity bound to 2-ExpTime. Finally, we observe that we are close to the border between decidability and undecidability: adding a third variable or the transitive closure of a second binary relation leads to undecidability.
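
    As a rough illustration (the notation here is ours, not necessarily the paper's), formulas of this fragment have the universal shape

        \forall x \, \forall y \; \varphi(x, y),

    where \varphi is quantifier-free and may mention unary and binary relation symbols, constants c_1, \ldots, c_k, and atoms R(s, t) and R^+(s, t) built from the single distinguished binary relation R and its transitive closure R^+.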

    Non-deterministic Weighted Automata on Random Words

    We present the first study of non-deterministic weighted automata under probabilistic semantics. In this semantics, words are random events generated by a Markov chain, and the functions computed by weighted automata are random variables. We consider the probabilistic questions of computing the expected value and the cumulative distribution for such random variables. The exact answers to the probabilistic questions for non-deterministic automata can be irrational and are uncomputable in general. To overcome this limitation, we propose an approximation algorithm for the probabilistic questions, which runs in time exponential in the automaton and polynomial in the Markov chain. We apply this result to show that non-deterministic automata can be effectively determinised with respect to the standard deviation metric.
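
    To make the two questions concrete (a sketch in our notation; the paper's definitions may differ in detail): if the Markov chain M induces a probability measure \Pr_M on words and the automaton \mathcal{A} assigns a value \mathcal{A}(w) to each word w, the expected-value question asks for

        \mathbb{E}[\mathcal{A}] = \int \mathcal{A}(w) \, d\Pr_M(w),

    and the distribution question asks, for a threshold t, for F_{\mathcal{A}}(t) = \Pr_M\{ w : \mathcal{A}(w) \le t \}.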

    Approximate Learning of Limit-Average Automata

    Limit-average automata are weighted automata on infinite words that use the average to aggregate the weights seen in infinite runs. We study approximate learning problems for limit-average automata in two settings: passive and active. In the passive-learning case, we show that limit-average automata are not PAC-learnable, as samples must be of exponential size to provide (with good probability) enough detail to learn an automaton. We also show that the problem of finding an automaton that fits a given sample is NP-complete. In the active-learning case, we show that limit-average automata can be learned almost-exactly, i.e., we can learn in polynomial time an automaton that is consistent with the target automaton on almost all words. On the other hand, we show that the problem of learning an automaton that approximates the target automaton (with perhaps fewer states) is NP-complete. The above results are shown for the uniform distribution on words. We briefly discuss learning over different distributions.
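
    For reference, the limit-average value of a run with weight sequence w_1 w_2 w_3 \cdots is commonly defined as

        \mathrm{LimAvg}(w_1 w_2 \cdots) = \liminf_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} w_i

    (some presentations use \limsup instead of \liminf; we state the \liminf variant as a common convention, not as the paper's exact choice).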