2,651 research outputs found

    Probabilistic parsing


    Edit Distance for Pushdown Automata

    The edit distance between two words $w_1, w_2$ is the minimal number of word operations (letter insertions, deletions, and substitutions) necessary to transform $w_1$ to $w_2$. The edit distance generalizes to languages $\mathcal{L}_1, \mathcal{L}_2$, where the edit distance from $\mathcal{L}_1$ to $\mathcal{L}_2$ is the minimal number $k$ such that for every word from $\mathcal{L}_1$ there exists a word in $\mathcal{L}_2$ with edit distance at most $k$. We study the edit distance computation problem between pushdown automata and their subclasses. The problem of computing edit distance to a pushdown automaton is undecidable, and in practice, the interesting question is to compute the edit distance from a pushdown automaton (the implementation, a standard model for programs with recursion) to a regular language (the specification). In this work, we present a complete picture of decidability and complexity for the following problems: (1) deciding whether, for a given threshold $k$, the edit distance from a pushdown automaton to a finite automaton is at most $k$, and (2) deciding whether the edit distance from a pushdown automaton to a finite automaton is finite. Comment: An extended version of a paper accepted to ICALP 2015 with the same title. The paper has been accepted to the LMCS journal.
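
    As a concrete reference point for the word-level definition above, here is a minimal sketch of the standard dynamic program for the edit distance between two words; the function name and the example are illustrative and not taken from the paper:

```python
# Standard dynamic-programming computation of the edit distance between two
# words, i.e. the minimal number of insertions, deletions, and substitutions.
def edit_distance(w1: str, w2: str) -> int:
    m, n = len(w1), len(w2)
    # dist[i][j] = edit distance between the prefixes w1[:i] and w2[:j]
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i                  # delete all of w1[:i]
    for j in range(n + 1):
        dist[0][j] = j                  # insert all of w2[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if w1[i - 1] == w2[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + sub)  # substitution or match
    return dist[m][n]

assert edit_distance("kitten", "sitting") == 3
```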

    A random testing approach using pushdown automata

    Since finite automata are in general strong abstractions of systems, many test cases, which are automaton traces generated uniformly at random, may be unconcretizable. This paper proposes a method that extends the above-mentioned testing approach to pushdown systems, which provide finer abstractions. The use of combinatorial techniques guarantees the uniformity of the generated traces. In addition, to improve the quality of the test suites, the combination of coverage criteria with random testing is investigated. The method is illustrated within both structural and model-based testing contexts.
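
    The sketch below illustrates the kind of counting-based combinatorial technique that makes uniform trace generation possible, shown here for a plain finite automaton rather than a pushdown system; the data structures and names are illustrative and not taken from the paper:

```python
# Uniform random generation of length-n accepted traces of a finite automaton:
# first count completions per state, then sample transitions proportionally.
import random

def count_paths(transitions, accepting, n):
    """count[k][q] = number of length-k accepting paths starting in state q."""
    states = set(transitions)
    count = [{q: (1 if q in accepting else 0) for q in states}]
    for _ in range(n):
        prev = count[-1]
        count.append({q: sum(prev[dst] for (_a, dst) in transitions[q])
                      for q in states})
    return count

def sample_trace(transitions, initial, accepting, n):
    """Draw a length-n accepted trace uniformly at random (None if none exists)."""
    count = count_paths(transitions, accepting, n)
    if count[n][initial] == 0:
        return None
    word, q = [], initial
    for k in range(n, 0, -1):
        # picking each transition with probability proportional to the number
        # of completions it allows makes the overall draw uniform
        weights = [count[k - 1][dst] for (_a, dst) in transitions[q]]
        (a, q) = random.choices(transitions[q], weights=weights)[0]
        word.append(a)
    return "".join(word)

# toy automaton over {a, b} accepting exactly the words that end in 'b'
T = {0: [("a", 0), ("b", 1)], 1: [("a", 0), ("b", 1)]}
print(sample_trace(T, initial=0, accepting={1}, n=5))
```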

    Answering Regular Path Queries on Workflow Provenance

    This paper proposes a novel approach for efficiently evaluating regular path queries over provenance graphs of workflows that may include recursion. The approach assumes that an execution g of a workflow G is labeled with query-agnostic reachability labels using an existing technique. At query time, given g, G and a regular path query R, the approach decomposes R into a set of subqueries R1, ..., Rk that are safe for G. For each safe subquery Ri, G is rewritten so that, using the reachability labels of nodes in g, whether there is a path matching Ri between two nodes can be decided in constant time. The results of the safe subqueries are then composed, possibly with a small unsafe remainder, to produce an answer to R. The resulting algorithm significantly reduces the number of subqueries k compared with existing techniques by increasing their size and complexity, and it evaluates each subquery in time bounded by its input and output size. Experimental results demonstrate the benefit of this approach.
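
    As an illustration of what query-agnostic reachability labels buy, the sketch below uses pre/post-order DFS interval labels, which give exact constant-time reachability tests on tree-shaped executions; real provenance graphs are generally DAGs and the paper relies on an existing labeling technique, so the scheme, graph, and names here are only illustrative:

```python
# Pre/post-order interval labels computed once over a tree-shaped execution;
# afterwards "does u reach v?" is a constant-time interval containment test.
def interval_labels(children, root):
    """Assign each node a (pre, post) DFS interval."""
    labels, clock = {}, 0
    stack = [(root, False)]
    while stack:
        node, done = stack.pop()
        if done:
            labels[node] = (labels[node][0], clock)   # close the interval
            clock += 1
        else:
            labels[node] = (clock, None)              # open the interval
            clock += 1
            stack.append((node, True))
            for c in children.get(node, []):
                stack.append((c, False))
    return labels

def reaches(labels, u, v):
    """O(1) reachability test: u reaches v iff u's interval contains v's."""
    (pu, qu), (pv, qv) = labels[u], labels[v]
    return pu <= pv and qv <= qu

# toy workflow execution: start forks into align and filter, align feeds merge
exec_tree = {"start": ["align", "filter"], "align": ["merge"], "filter": []}
L = interval_labels(exec_tree, "start")
assert reaches(L, "start", "merge") and not reaches(L, "filter", "merge")
```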

    Liveness-Based Garbage Collection for Lazy Languages

    We consider the problem of reducing the memory required to run lazy first-order functional programs. Our approach is to analyze programs for liveness of heap-allocated data. The result of the analysis is used to preserve only live data (a subset of reachable data) during garbage collection. The result is an increase in the amount of garbage reclaimed and a reduction in the peak memory requirement of programs. While this technique has already been shown to yield benefits for eager first-order languages, the lack of a statically determinable execution order and the presence of closures pose new challenges for lazy languages. These require changes both in the liveness analysis itself and in the design of the garbage collector. To show the effectiveness of our method, we implemented a copying collector that uses the results of the liveness analysis to preserve live objects, both evaluated objects (i.e., those in WHNF) and closures. Our experiments confirm that for programs running with a liveness-based garbage collector, there is a significant decrease in peak memory requirements. In addition, a sizable reduction in the number of collections ensures that, in spite of using a more complex garbage collector, the execution times of programs running with liveness-based and reachability-based collectors remain comparable.
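
    A purely conceptual illustration of the reachable-versus-live distinction the analysis exploits, written in Python rather than a lazy functional language, so a closure merely mimics a thunk; nothing here comes from the paper's implementation:

```python
# A closure plays the role of a thunk: it keeps its captured data reachable even
# though most of that data is dead, i.e. it can never influence the result.
def delayed_length(xs):
    # the returned "thunk" captures xs, so every element of xs stays reachable,
    # yet the only information it will ever need is len(xs)
    return lambda: len(xs)

thunk = delayed_length(list(range(10_000_000)))
# a reachability-based collector must retain all ten million elements here,
# while a liveness-based collector could reclaim them, keeping only the length
print(thunk())  # 10000000
```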