
    Splitting Proofs for Interpolation

    We study interpolant extraction from local first-order refutations. We present a new theoretical perspective on interpolation based on clearly separating the condition on the logical strength of the formula from the requirement on the common signature. This allows us to characterize the space of all interpolants that can be extracted from a refutation as a space of simple choices on how to split the refutation into two parts. We use this new insight to develop an algorithm for extracting interpolants that are linear in the size of the input refutation and can be further optimized using metrics such as the number of non-logical symbols or quantifiers. We implemented the new algorithm in the first-order theorem prover VAMPIRE and evaluated it on a large number of examples coming from the first-order proving community. Our experiments give practical evidence that our work improves the state of the art in first-order interpolation.
    Comment: 26th Conference on Automated Deduction, 2017
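
    A minimal sketch of the splitting idea, in Python, under heavy simplification: the refutation is a ground DAG whose nodes are already coloured 'A' or 'B' (one possible split), and an interpolant candidate is read off from the clauses crossing the A-to-B boundary. All names are hypothetical, and the quantifier handling and signature conditions of the actual algorithm are omitted.

    # Toy illustration only: in a local refutation, clauses passed from
    # the A-part to the B-part use only the common signature, so a
    # combination of them yields an interpolant. Different splits of the
    # same refutation yield different interpolants.
    class Node:
        def __init__(self, clause, colour, premises=()):
            self.clause = clause      # clause derived at this node
            self.colour = colour      # 'A' or 'B': side of the split
            self.premises = premises  # parent nodes in the refutation DAG

    def boundary_clauses(root):
        seen, out, stack = set(), [], [root]
        while stack:
            node = stack.pop()
            if id(node) in seen:
                continue
            seen.add(id(node))
            for p in node.premises:
                if p.colour == 'A' and node.colour == 'B':
                    out.append(p.clause)  # crosses the A -> B boundary
                stack.append(p)
        return out

    # A derives q(a); B refutes it: q(a) is a candidate interpolant.
    leaf1 = Node('p(a)', 'A')
    leaf2 = Node('p(a) -> q(a)', 'A')
    mid = Node('q(a)', 'A', (leaf1, leaf2))
    root = Node('false', 'B', (mid, Node('~q(a)', 'B')))
    print(boundary_clauses(root))  # ['q(a)']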

    Multiple Query Optimization on the D-Wave 2X Adiabatic Quantum Computer

    The D-Wave adiabatic quantum annealer solves hard combinatorial optimization problems by leveraging quantum physics. The newest version features over 1000 qubits and was released in August 2015. We were given access to such a machine, currently hosted at NASA Ames Research Center in California, to explore its potential for hard optimization problems that arise in the context of databases. In this paper, we tackle the problem of multiple query optimization (MQO). We show how an MQO problem instance can be transformed into a mathematical formula that complies with the restrictive input format accepted by the quantum annealer. This formula is translated into weights on and between qubits such that the configuration minimizing the input formula can be found via a process called adiabatic quantum annealing. We analyze how the number of required qubits grows asymptotically with the MQO problem dimensions, since the number of qubits is currently the main factor restricting applicability. We experimentally compare the performance of the quantum annealer against other MQO algorithms executed on a traditional computer. While the problem sizes that can be treated are currently limited, we already find a class of problem instances where the quantum annealer is three orders of magnitude faster than other approaches.
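
    To give a flavour of the transformation, the sketch below encodes a tiny MQO instance as a QUBO (quadratic unconstrained binary optimization) problem, the input format of the annealer: one binary variable per (query, plan) pair, a penalty enforcing that exactly one plan is chosen per query, and a negative quadratic weight rewarding plan pairs that share intermediate results. The costs, savings, and penalty constant are invented for illustration and are not the paper's exact encoding.

    # Hedged sketch: a QUBO maps (var, var) pairs to weights; diagonal
    # entries are linear terms. Expanding PENALTY * (sum_p x_p - 1)^2
    # with x^2 = x gives -PENALTY per variable and +2*PENALTY per pair.
    plan_cost = {('q1', 'p1'): 5, ('q1', 'p2'): 7,
                 ('q2', 'p3'): 6, ('q2', 'p4'): 4}
    shared_savings = {(('q1', 'p2'), ('q2', 'p3')): 3}  # reused intermediate
    PENALTY = 100  # must dominate all costs so one-hot constraints hold

    qubo = {}

    # Linear terms: plan costs plus the linear part of the one-hot penalty.
    for var, cost in plan_cost.items():
        qubo[(var, var)] = cost - PENALTY

    # Quadratic penalty between plans of the same query (at most one chosen).
    vars_by_query = {}
    for (q, p) in plan_cost:
        vars_by_query.setdefault(q, []).append((q, p))
    for plans in vars_by_query.values():
        for i in range(len(plans)):
            for j in range(i + 1, len(plans)):
                qubo[(plans[i], plans[j])] = 2 * PENALTY

    # Rewards for plan pairs across queries that share intermediate results.
    for (u, v), s in shared_savings.items():
        qubo[(u, v)] = qubo.get((u, v), 0) - s

    for k, w in sorted(qubo.items()):
        print(k, w)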

    Genome signatures, self-organizing maps and higher order phylogenies: a parametric analysis

    Genome signatures are data vectors derived from the compositional statistics of DNA. The self-organizing map (SOM) is a neural network method for the conceptualisation of relationships within complex data, such as genome signatures. The various parameters of the SOM training phase are investigated for their effect on the accuracy of the resulting output map. It is concluded that larger SOMs, as well as taking longer to train, are less sensitive in the phylogenetic classification of unknown DNA sequences. However, where a classification can be made, a larger SOM is more accurate. Increasing the number of iterations in the training phase of the SOM only slightly increases accuracy, without improving sensitivity. The optimal length of the DNA sequence k-mer from which the genome signature should be derived is 4 or 5, but shorter values are almost as effective. Overall, these results indicate that small, rapidly trained SOMs are generally as good as larger, longer-trained ones for the analysis of genome signatures. These results may also apply more generally to the use of SOMs for other complex data sets, such as microarray data.
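
    As a concrete illustration of the input data, the following sketch computes a genome signature as the normalised frequency vector of all k-mers for k = 4, one of the lengths found optimal above; such vectors are what the SOM is trained on. This is a simplified stand-in, not the paper's exact signature definition.

    # Minimal sketch: count every 4-mer in a DNA sequence and normalise,
    # yielding a 4**k-dimensional compositional vector.
    from itertools import product

    def genome_signature(seq, k=4):
        seq = seq.upper()
        kmers = [''.join(p) for p in product('ACGT', repeat=k)]
        counts = dict.fromkeys(kmers, 0)
        for i in range(len(seq) - k + 1):
            window = seq[i:i + k]
            if window in counts:      # skip windows containing N etc.
                counts[window] += 1
        total = sum(counts.values()) or 1
        return [counts[kmer] / total for kmer in kmers]

    sig = genome_signature('ACGTACGTGGCCAATT' * 10)
    print(len(sig), max(sig))  # 256-dimensional signature vector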

    SAT Modulo Monotonic Theories

    We define the concept of a monotonic theory and show how to build efficient SMT (SAT Modulo Theory) solvers, including effective theory propagation and clause learning, for such theories. We present examples showing that monotonic theories arise from many common problems, e.g., graph properties such as reachability, shortest paths, connected components, minimum spanning tree, and max-flow/min-cut, and then demonstrate our framework by building SMT solvers for each of these theories. We apply these solvers to procedural content generation problems, demonstrating major speed-ups over state-of-the-art approaches based on SAT or Answer Set Programming, and easily solving several instances that were previously impractical to solve.
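
    A rough sketch of why monotonicity enables cheap theory propagation, using reachability as the example predicate: because adding edges can only preserve reachability, checking an under-approximation (only edges assigned true) and an over-approximation (all edges not assigned false) suffices to detect forced values. The code below is a simplified illustration of this idea, not the authors' implementation, and the variable encoding is invented.

    # Edge variables may be True, False or None (unassigned). Since
    # reach(s, t) is monotone in the edge set:
    #   - reachable in the under-approximation -> the atom must be True
    #   - unreachable in the over-approximation -> the atom must be False
    from collections import deque

    def reachable(n, edges, s, t):
        adj = {v: [] for v in range(n)}
        for (u, v) in edges:
            adj[u].append(v)
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                return True
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return False

    def propagate_reach(n, assignment, s, t):
        """assignment: dict mapping edge -> True/False/None."""
        under = [e for e, val in assignment.items() if val is True]
        over = [e for e, val in assignment.items() if val is not False]
        if reachable(n, under, s, t):
            return True    # forced: already reachable
        if not reachable(n, over, s, t):
            return False   # forced: can never become reachable
        return None        # still undetermined

    edges = {(0, 1): True, (1, 2): None, (0, 2): False}
    print(propagate_reach(3, edges, 0, 2))  # None: hinges on edge (1, 2)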

    A Semantic Framework for Test Coverage (Extended Version)

    Since testing is inherently incomplete, test selection is of vital importance. Coverage measures evaluate the quality of a test suite and help the tester select test cases with maximal impact at minimum cost. Existing coverage criteria for test suites are usually defined in terms of syntactic characteristics of the implementation under test or its specification. Typical black-box coverage metrics are state and transition coverage of the specification. White-box testing often considers statement, condition and path coverage. A disadvantage of this syntactic approach is that different coverage figures are assigned to systems that are behaviorally equivalent, but syntactically different. Moreover, those coverage metrics do not take into account that certain failures are more severe than others, and that more testing effort should be devoted to uncovering the most important bugs, while less critical system parts can be tested less thoroughly. This paper introduces a semantic approach to test coverage. Our starting point is a weighted fault model, which assigns a weight to each potential error in an implementation. We define a framework for coverage measures that express how well a test suite covers such a specification, taking into account the error weights. Since our notions are semantic, they are insensitive to replacing a specification by one with equivalent behaviour. We present several algorithms that, given a certain minimality criterion, compute a minimal test suite with maximal coverage. These algorithms work on a syntactic representation of weighted fault models as fault automata. They are based on existing and novel optimization problems. Finally, we illustrate our approach by analyzing and comparing a number of test suites for a chat protocol.
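
    A small sketch of the semantic coverage idea under invented data: a weighted fault model assigns each potential fault a weight, a detection relation records which faults each test can reveal (a behavioural, not syntactic, notion), and coverage is the detected weight relative to the total weight. Fault names, weights, and the detection relation here are all hypothetical.

    # Hedged sketch: weighted-fault coverage of a test suite. The weights
    # encode that some failures (e.g. a lost message) matter more.
    fault_weights = {'lost_message': 10.0, 'wrong_greeting': 1.0,
                     'dropped_user': 5.0, 'bad_timestamp': 0.5}

    # Which faults each test case is able to detect; this depends on
    # observable behaviour, so behaviourally equivalent systems agree.
    detects = {'test_login': {'dropped_user', 'wrong_greeting'},
               'test_chat': {'lost_message'},
               'test_format': {'bad_timestamp'}}

    def coverage(suite):
        covered = set().union(*(detects[t] for t in suite))
        return sum(fault_weights[f] for f in covered) / sum(fault_weights.values())

    print(coverage(['test_login', 'test_chat']))  # ~0.97 of total fault weight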