
    Polynomial Time Nondimensionalisation of Ordinary Differential Equations via their Lie Point Symmetries

    Lie group theory states that knowledge of an m-parameter solvable group of symmetries of a system of ordinary differential equations allows one to reduce the number of equations by m. We apply this principle by finding dilatations and translations that are Lie point symmetries of the considered ordinary differential system. By rewriting the original problem in a set of coordinates invariant under these symmetries, one can reduce the number of parameters involved. This process is classically called nondimensionalisation in dimensional analysis. We present an algorithm based on this standpoint and show that its arithmetic complexity is polynomial in the size of the input.
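    A minimal worked illustration of the outcome (ours, not taken from the paper): rewriting the one-parameter initial value problem dy/dt = a y, y(0) = y_0 in the scale-invariant coordinates τ = a t and u = y/y_0 eliminates both parameters:

        \frac{dy}{dt} = a\,y,\quad y(0) = y_0
        \qquad\longrightarrow\qquad
        \frac{du}{d\tau} = u,\quad u(0) = 1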

    Cost-Size Semantics for Call-By-Value Higher-Order Rewriting

    Higher-order rewriting is a framework in which higher-order programs can be described by transformation rules on expressions. A computation occurs by transforming an expression into another using such rules. This step-by-step computation model induced by rewriting naturally gives rise to a notion of complexity as the number of steps needed to reduce expressions to a normal form, i.e., an expression that cannot be reduced further. The study of complexity analysis focuses on the development of automatable techniques to provide bounds on this number. In this paper, we consider a form of higher-order rewriting with a call-by-value evaluation strategy, so as to model call-by-value programs. We provide a cost-size semantics: a class of algebraic interpretations mapping terms to tuples that bound both the reduction cost and the size of normal forms.
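    To make the notion concrete, here is a first-order sketch (ours; the paper's notation may differ) for Peano addition with the rules add(0, y) → y and add(s(x), y) → s(add(x, y)):

        size:  S(0) = 1,   S(s)(X) = X + 1,   S(add)(X, Y) = X + Y
        cost:  C(add)(X, Y) = X

    Under this interpretation, normalizing add(t, u) takes at most S(t) rewrite steps, and the resulting normal form has size at most S(t) + S(u).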

    Analysing Parallel Complexity of Term Rewriting

    We revisit parallel-innermost term rewriting as a model of parallel computation on inductive data structures and provide a corresponding notion of runtime complexity parametric in the size of the start term. We propose automatic techniques to derive both upper and lower bounds on the parallel complexity of rewriting that enable a direct reuse of existing techniques for sequential complexity. The applicability and the precision of the method are demonstrated by the relatively light effort in extending the program analysis tool AProVE and by experiments on numerous benchmarks from the literature. Comment: Extended authors' accepted manuscript for a paper accepted for publication in the Proceedings of the 32nd International Symposium on Logic-based Program Synthesis and Transformation (LOPSTR 2022). 27 pages.
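    A toy example (ours, not from the paper) of why the parallel notion can be strictly cheaper than the sequential one:

        check(leaf)       -> true
        check(node(x, y)) -> and(check(x), check(y))
        and(true, true)   -> true

    Sequential innermost rewriting needs a number of steps linear in the number of tree nodes, whereas parallel-innermost rewriting fires all innermost check-redexes (and later all innermost and-redexes) in the same step, so a balanced tree of depth d normalizes in O(d) parallel steps.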

    On Complexity Bounds and Confluence of Parallel Term Rewriting

    We revisit parallel-innermost term rewriting as a model of parallel computation on inductive data structures and provide a corresponding notion of runtime complexity parametric in the size of the start term. We propose automatic techniques to derive both upper and lower bounds on the parallel complexity of rewriting that enable a direct reuse of existing techniques for sequential complexity. Our approach to finding lower bounds requires confluence of the parallel-innermost rewrite relation, so we also provide effective sufficient criteria for proving confluence. The applicability and the precision of the method are demonstrated by the relatively light effort in extending the program analysis tool AProVE and by experiments on numerous benchmarks from the literature. Comment: Under submission to Fundamenta Informaticae. arXiv admin note: substantial text overlap with arXiv:2208.0100
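    To see why confluence matters for lower bounds, consider a two-rule toy system (ours, not from the paper):

        coin -> heads
        coin -> tails

    The peak heads <- coin -> tails cannot be joined, so coin has two distinct normal forms and "the" result of a computation is ill-defined. A classical sufficient criterion (not necessarily the one the paper develops): every orthogonal TRS, i.e., left-linear with non-overlapping left-hand sides, is confluent.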

    Debugging of Web Applications with Web-TLR

    Web-TLR is a Web verification engine that is based on the well-established Rewriting Logic--Maude/LTLR tandem for Web system specification and model-checking. In Web-TLR, Web applications are expressed as rewrite theories that can be formally verified by using the Maude built-in LTLR model-checker. Whenever a property is refuted, a counterexample trace is delivered that reveals an undesired, erroneous navigation sequence. Unfortunately, the analysis (or even the simple inspection) of such counterexamples may be infeasible because of the size and complexity of the traces under examination. In this paper, we endow Web-TLR with a new Web debugging facility that supports the efficient manipulation of counterexample traces. This facility is based on a backward trace-slicing technique for rewriting logic theories that allows the pieces of information we are interested in to be traced back through inverse rewrite sequences. The slicing process drastically simplifies the computation trace by dropping useless data that do not influence the final result. By using this facility, the Web engineer can focus on the relevant fragments of the failing application, which greatly reduces the manual debugging effort and also decreases the number of iterative verifications. Comment: In Proceedings WWV 2011, arXiv:1108.208
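    Schematically (our illustration, not Web-TLR's actual syntax), backward slicing propagates a set of observed positions through the inverse of each rewrite step. For the step

        f(a, g(b)) -> h(g(b))        by the rule  f(x, y) -> h(y)

    observing only the subterm g(b) of the result means that only the binding of y contributed to it, so the slice of the predecessor is f(•, g(b)), where • marks data dropped as irrelevant; iterating this backwards over the whole counterexample yields the simplified trace.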

    Synthesis of sup-interpretations: a survey

    In this paper, we survey the complexity of distinct methods that allow the programmer to synthesize a sup-interpretation, a function providing an upper bound on the size of the output values computed by a program. It is a static space-analysis tool that does not take time consumption into account. Although clearly related, sup-interpretation is independent of termination since it only provides an upper bound on the terminating computations. First, we study some undecidable properties of sup-interpretations from a theoretical point of view. Next, we fix term rewriting systems as our computational model and show that a sup-interpretation can be obtained through the use of a well-known termination technique, polynomial interpretations. The drawback is that such a method only applies to total functions (strongly normalizing programs). To overcome this problem, we also study sup-interpretations through the notion of quasi-interpretation. Quasi-interpretations also suffer from a drawback that lies in the subterm property, which drastically restricts the shape of the considered functions. Again, we overcome this problem by introducing a new notion of interpretation mainly based on the dependency pairs method. We study the decidability and complexity of the sup-interpretation synthesis problem for all three of these tools over sets of polynomials. Finally, we build on previous work on termination and runtime complexity to infer sup-interpretations. Comment: (2012
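    A tiny example of a sup-interpretation over polynomials (ours, not from the survey), for the rules double(0) → 0 and double(s(x)) → s(s(double(x))):

        θ(0) = 0,   θ(s)(X) = X + 1,   θ(double)(X) = 2X

    Each left-hand side then bounds its right-hand side, e.g. θ(double(s(x))) = 2(X + 1) = 2X + 2 = θ(s(s(double(x)))), and since double(sⁿ(0)) computes s²ⁿ(0), the bound 2X on the size of the output is tight here.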

    Certification of Complexity Proofs using CeTA

    Nowadays, certification is widely employed by automated termination tools for term rewriting, where certifiers support most available techniques. In complexity analysis, the situation is quite different. Although tools support certification in principle, current certifiers implement only the most basic technique, namely suitably tamed versions of reduction orders. As a consequence, only a small fraction of the proofs generated by state-of-the-art complexity tools can be certified. To improve upon this situation, we formalized a framework for the certification of modular complexity proofs and incorporated it into CeTA. We report on this extension and present the newly supported techniques (match-bounds, weak dependency pairs, dependency tuples, usable rules, and usable replacement maps), resulting in a significant increase in the number of certifiable complexity proofs. During our work we detected conflicts in theoretical results as well as bugs in existing complexity tools.

    The data-exchange chase under the microscope

    In this paper we take a closer look at recent developments for the chase procedure and provide additional results. Our analysis allows us to create a taxonomy of the chase variations and the properties they satisfy. Two of the most central problems regarding the chase are termination and the discovery of restricted classes of sets of dependencies that guarantee termination of the chase. The search for restricted classes has been motivated by a fairly recent result showing that it is undecidable to determine whether the chase with a given dependency set will terminate on a given instance. There is a small dissonance here, since the quest has been for classes of sets of dependencies guaranteeing termination of the chase on all instances, even though the latter problem was not known to be undecidable. We resolve the dissonance in this paper by showing that determining whether the chase with a given set of dependencies terminates on all instances is coRE-complete. For the hardness proof we use a reduction from word rewriting systems, thereby also showing the close connection between the chase and word rewriting. The same reduction also gives us the aforementioned instance-dependent RE-completeness result as a byproduct. For one of the restricted classes guaranteeing termination on all instances, the stratified sets of dependencies, we provide new complexity results for the problem of testing whether a given set of dependencies belongs to it. These results rectify some previous claims that have appeared in the literature. Comment: arXiv admin note: substantial text overlap with arXiv:1303.668
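    The standard example of instance-level non-termination (a textbook case, not taken from this paper) is the single tgd E(x, y) → ∃z E(y, z) on the instance {E(a, b)}: the dependency is never satisfied, so the chase keeps inventing fresh nulls,

        E(a, b)  ⇒  E(b, n₁)  ⇒  E(n₁, n₂)  ⇒  E(n₂, n₃)  ⇒  ...

    and no finite chase sequence exists; syntactic classes such as weakly acyclic dependency sets are designed precisely to rule out this kind of cyclic null creation.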

    12th International Workshop on Termination (WST 2012), February 19–23, 2012, Obergurgl, Austria / ed. by Georg Moser

    This volume contains the proceedings of the 12th International Workshop on Termination (WST 2012), held February 19–23, 2012 in Obergurgl, Austria. The goal of the Workshop on Termination is to provide a venue for the presentation and discussion of all topics in and around termination. In this way, the workshop tries to bridge the gaps between the different communities interested and active in research in and around termination. The 12th International Workshop on Termination in Obergurgl continues the successful workshops held in St. Andrews (1993), La Bresse (1995), Ede (1997), Dagstuhl (1999), Utrecht (2001), Valencia (2003), Aachen (2004), Seattle (2006), Paris (2007), Leipzig (2009), and Edinburgh (2010). The 12th International Workshop on Termination welcomed contributions on all aspects of termination and complexity analysis. Contributions from the imperative, constraint, functional, and logic programming communities, and papers investigating applications of complexity or termination (for example in program transformation or theorem proving) were particularly welcome. We received 18 submissions, all of which were accepted. Each paper was assigned two reviewers. In addition to these 18 contributed talks, WST 2012 hosted three invited talks by Alexander Krauss, Martin Hofmann, and Fausto Spoto.