11 research outputs found

    The Role of Benchmarking in Symbolic Computation (Position Paper)

    There is little doubt that, in the minds of most symbolic computation researchers, the ideal paper consists of a problem statement, a new algorithm, a complexity analysis and preferably a few validating examples. There are many such great papers. This paradigm has served computer algebra well for many years, and indeed continues to do so where it is applicable. However, it is much less applicable to sparse problems, where there are many NP-hardness results, or to many problems coming from algebraic geometry, where worst-case behaviour seems to be rare. We argue that, in these cases, the field should take a leaf out of the practices of the SAT-solving community, and adopt systematic benchmarking, and benchmarking contests, as a way of measuring (and stimulating) progress. This would involve a change of culture.

    Digital Collections of Examples in Mathematical Sciences

    Some aspects of Computer Algebra (notably Computational Group Theory and Computational Number Theory) have good databases of examples, typically of the form "all the X up to size n". But most other areas, especially on the polynomial side, lack such databases, despite the utility they have demonstrated in the related fields of SAT and SMT solving. We claim that the field would be enhanced by such community-maintained databases, rather than each author hand-selecting a few examples, which are often too large or error-prone to print, and therefore difficult for subsequent authors to reproduce. Comment: Presented at the 8th European Congress of Mathematicians.

    SMT-Solving Induction Proofs of Inequalities

    This paper accompanies a new dataset of non-linear real arithmetic problems for the SMT-LIB benchmark collection. The problems come from an automated proof procedure of Gerhold–Kauers, which is well suited to solution by SMT. Problems of this type have not been tackled by SMT solvers before. We describe the proof technique and give one new such proof to illustrate it. We then describe the dataset and the results of benchmarking. The benchmarks in the new dataset are quite different to the existing ones. The benchmarking also brings forward some interesting debate on the use/inclusion of rational functions and algebraic numbers in SMT-LIB. Comment: Presented at the 2022 SC-Square Workshop.
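    The induction step in such a proof reduces to a validity question over the reals, which is what makes these problems natural SMT candidates. Below is a minimal sketch of that reduction using the z3-solver Python package, on an illustrative inequality that is not from the paper's dataset: the Newton iteration a_{n+1} = (a_n + 2/a_n)/2 preserves the invariant a_n^2 >= 2, which we certify by showing the negated induction step is unsatisfiable.

```python
# Minimal sketch (illustrative, not one of the paper's benchmarks):
# discharging the induction step of an inequality proof as a
# non-linear real arithmetic (NRA) query with z3.
from z3 import Reals, Solver, unsat

x, y = Reals("x y")  # x plays the role of a_n, y of a_{n+1}

s = Solver()
# Induction hypothesis: a_n > 0 and a_n^2 >= 2, together with the
# Newton step a_{n+1} = (a_n + 2/a_n)/2, cleared of denominators.
s.add(x > 0, x * x >= 2, 2 * x * y == x * x + 2)
# Negated conclusion: a_{n+1}^2 < 2.
s.add(y * y < 2)

# unsat means no counterexample to the induction step exists.
print(s.check() == unsat)  # expected: True
```

    An unsat answer certifies the induction step; the base case is a trivial ground check.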

    A Poly-algorithmic Approach to Quantifier Elimination

    Cylindrical Algebraic Decomposition (CAD) was the first practical means for doing real quantifier elimination (QE), and is still a major method, with many improvements since Collins' original method. Nevertheless, its complexity is inherently doubly exponential in the number of variables. Where applicable, virtual term substitution (VTS) is more effective, turning a QE problem in n variables to one in n-1 variables in one application, and so on. Hence there is scope for hybrid methods: doing VTS where possible, then using CAD. This paper describes such a poly-algorithmic implementation, based on the second author's Ph.D. thesis. The version of CAD used is based on a new implementation of Lazard's recently-justified method, with some improvements to handle equational constraints.
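    To make the "one variable fewer" claim concrete, here is a minimal worked VTS step on a toy linear formula (illustrative, not taken from the paper). The equational constraint supplies a single test point for the quantified variable, which is substituted virtually:

```latex
% One VTS elimination step: the equation 2x + p = 0 yields the test
% point x = -p/2; substituting it removes the quantifier and leaves
% a formula in one variable fewer.
\exists x\,\bigl(2x + p = 0 \;\wedge\; x \ge q\bigr)
\;\Longleftrightarrow\;
\bigl(x \ge q\bigr)\big|_{x = -p/2}
\;\Longleftrightarrow\;
-\tfrac{p}{2} \ge q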

    AllSynth: A BDD-Based Approach for Network Update Synthesis

    The increasingly stringent dependability requirements on communication networks, as well as the need to render these networks more adaptive to improve performance, demand more automated approaches to operating networks. We present AllSynth, a symbolic synthesis tool for updating communication networks in a provably correct and efficient manner. AllSynth automatically synthesizes network update schedules which transiently ensure a wide range of policy properties expressed in linear temporal logic (LTL). In particular, in contrast to existing approaches, AllSynth symbolically computes and compactly represents all feasible and cost-optimal solutions. At its heart, AllSynth relies on a novel parameterized use of binary decision diagrams (BDDs), which greatly improves performance. Indeed, AllSynth not only provides formal correctness guarantees but also outperforms existing state-of-the-art tools in both generality and runtime, as documented by experiments on a benchmark of real-world network topologies.
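    AllSynth's actual BDD encoding is given in the paper; as a toy illustration of the general idea of representing sets of intermediate network configurations symbolically and checking update schedules against them, here is a sketch using the dd Python package, with an invented two-switch ordering policy:

```python
# Toy illustration (not AllSynth's encoding): a BDD over per-switch
# "already updated" bits represents the set of safe intermediate
# configurations; a schedule is then checked step by step against it.
from dd.autoref import BDD

bdd = BDD()
bdd.declare('u1', 'u2', 'u3')  # u_i = "switch i has been updated"

# Invented policy: switch 1 must never be updated before switch 2
# (say, to avoid a transient forwarding loop), so u1 & ~u2 is unsafe.
safe = bdd.add_expr(r'~ (u1 /\ ~ u2)')

def schedule_ok(order):
    """Check every intermediate configuration of an update order."""
    cfg = {u: False for u in ('u1', 'u2', 'u3')}
    for switch in order:
        cfg[switch] = True
        if bdd.let(cfg, safe) != bdd.true:
            return False
    return True

print(schedule_ok(['u2', 'u1', 'u3']))  # True: u2 precedes u1
print(schedule_ok(['u1', 'u2', 'u3']))  # False: violates the policy
```

    Synthesis proper quantifies over all schedules and richer LTL policies, which is where the paper's parameterized BDD construction comes in.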

    The Hazard Value: A Quantitative Network Connectivity Measure Accounting for Failures

    To meet their stringent requirements in terms of performance and dependability, communication networks should be "well connected". While classic connectivity measures typically revolve around topological properties, e.g., related to cuts, these measures may not reflect well the degree to which a network is actually dependable. We introduce a more refined measure for network connectivity, the hazard value, which is developed to meet the needs of a real network operator. It accounts for crucial aspects affecting the dependability experienced in practice, including actual traffic patterns, distribution of failure probabilities, routing constraints, and alternatives for services with preferences therein. We analytically show that the hazard value fulfills several fundamental desirable properties that make it suitable for comparing different network topologies with one another, and for reasoning about how to efficiently enhance the robustness of a given network. We also present an optimised algorithm to compute the hazard value and an experimental evaluation against networks from the Internet Topology Zoo and classical datacenter topologies, such as fat trees and BCubes. This evaluation shows that the algorithm computes the hazard value within minutes for realistic networks, making it practically usable for network designers.
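    The hazard value's precise definition is given in the paper; to illustrate the simpler, purely probabilistic kind of quantity it refines, here is a hedged Monte Carlo sketch (using networkx, with invented parameters) that estimates the probability of one traffic pair staying connected when links fail independently:

```python
# Not the paper's hazard value: a classical failure-aware connectivity
# proxy, estimated by Monte Carlo sampling of independent link failures.
import random

import networkx as nx

def connectivity_under_failures(G, s, t, p_fail, trials=10_000):
    """Estimate P(s and t stay connected) when each edge fails
    independently with probability p_fail."""
    ok = 0
    for _ in range(trials):
        surviving = [e for e in G.edges if random.random() > p_fail]
        H = nx.Graph(surviving)
        H.add_nodes_from(G.nodes)  # keep nodes whose links all failed
        if nx.has_path(H, s, t):
            ok += 1
    return ok / trials

# Example: opposite corners of a 4-cycle survive any single link failure.
G = nx.cycle_graph(4)
print(connectivity_under_failures(G, 0, 2, p_fail=0.1))
```

    The hazard value goes further, weighting such failure events with actual traffic patterns, routing constraints, and per-service alternatives and preferences.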