
    Complexity Bounds for Ordinal-Based Termination

    'What more than its truth do we know if we have a proof of a theorem in a given formal system?' We examine Kreisel's question in the particular context of program termination proofs, with an eye to deriving complexity bounds on program running times. Our main tools for this are length function theorems, which provide complexity bounds on the use of well quasi orders. We illustrate how to prove such theorems in the simple yet, until now, untreated case of ordinals. We show how to apply this new theorem to derive complexity bounds on programs when they are proven to terminate thanks to a ranking function into some ordinal. Comment: Invited talk at the 8th International Workshop on Reachability Problems (RP 2014, 22-24 September 2014, Oxford)
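    As a minimal sketch (not taken from the paper), the following hypothetical two-counter loop terminates because the pair (a, b), read lexicographically as the ordinal omega*a + b, strictly decreases at every step; length function theorems are what turn such ranking arguments into explicit bounds on the number of iterations.

```python
# A minimal sketch, not from the paper: a two-counter loop whose termination
# is witnessed by a ranking function into an ordinal below omega^2, encoded
# here as a lexicographic pair (a, b) of naturals.

def program(a: int, b: int) -> int:
    """Hypothetical program: each step either decrements b, or decrements a
    and resets b to some non-negative value (here 2*a)."""
    steps = 0
    while a > 0 or b > 0:
        if b > 0:
            b -= 1            # rank (a, b) drops to (a, b - 1)
        else:
            a -= 1
            b = 2 * a         # rank (a, 0) drops to (a - 1, 2*(a - 1))
        steps += 1
    return steps

def rank(a: int, b: int) -> tuple:
    """Ranking function omega*a + b, read as the lexicographic pair (a, b):
    it strictly decreases on every iteration, so the loop terminates."""
    return (a, b)

print(program(3, 5))  # length function theorems bound `steps` from such ranks
```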

    Using Program Synthesis for Program Analysis

    In this paper, we identify a fragment of second-order logic with restricted quantification that is expressive enough to capture numerous static analysis problems (e.g. safety proving, bug finding, termination and non-termination proving, superoptimisation). We call this fragment the "synthesis fragment". Satisfiability of a formula in the synthesis fragment is decidable over finite domains; specifically, the decision problem is NEXPTIME-complete. If a formula in this fragment is satisfiable, a solution consists of a satisfying assignment from the second-order variables to functions over finite domains. To concretely find these solutions, we synthesise programs that compute the functions. Our program synthesis algorithm is complete for finite-state programs, i.e. every function over finite domains is computed by some program that we can synthesise. We can therefore use our synthesiser as a decision procedure for the synthesis fragment of second-order logic, which in turn allows us to use it as a powerful backend for many program analysis tasks. To show the tractability of our approach, we evaluate the program synthesiser on several static analysis problems. Comment: 19 pages, to appear in LPAR 2015. arXiv admin note: text overlap with arXiv:1409.492
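    As a toy illustration of the finite-domain setting (not the paper's synthesiser), a constraint of the form "exists f. forall x. phi(f, x)" over a finite domain can be decided by enumerating every candidate function as a lookup table; the constraint phi below is purely hypothetical.

```python
# Toy illustration, not the paper's synthesiser: over a finite domain, the
# second-order constraint "exists f. forall x. phi(f, x)" can be decided by
# enumerating every function f as a finite lookup table.
from itertools import product

DOMAIN = range(4)

def phi(f, x):
    # Hypothetical verification condition: f must strictly increase below 3
    # and wrap 3 back to 0.
    return f[x] > x if x < 3 else f[x] == 0

def synthesise():
    """Enumerate all |D|^|D| lookup tables; return one satisfying forall x. phi."""
    for table in product(DOMAIN, repeat=len(DOMAIN)):
        if all(phi(table, x) for x in DOMAIN):
            return table        # a witness for the second-order existential
    return None                 # the formula is unsatisfiable over this domain

print(synthesise())  # prints (1, 2, 3, 0)
```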

    Well-Founded Semantics for Extended Datalog and Ontological Reasoning

    The Datalog± family of expressive extensions of Datalog has recently been introduced as a new paradigm for query answering over ontologies, which captures and extends several common description logics. It extends plain Datalog by features such as existentially quantified rule heads and, at the same time, restricts the rule syntax so as to achieve decidability and tractability. In this paper, we continue the research on Datalog±. More precisely, we generalize the well-founded semantics (WFS), as the standard semantics for nonmonotonic normal programs in the database context, to Datalog± programs with negation under the unique name assumption (UNA). We prove that for guarded Datalog± with negation under the standard WFS, answering normal Boolean conjunctive queries is decidable, and we provide precise complexity results for this problem, namely, completeness for PTIME (resp., 2-EXPTIME) in the data (resp., combined) complexity.
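    For orientation, a guarded rule with an existentially quantified head and a normal rule with default negation (hypothetical examples, written in the style of Datalog± rules) can be sketched as follows; in the first rule the body atom person(X) is the guard, since it contains all universally quantified body variables.

```latex
\forall X \, \big( \mathrm{person}(X) \rightarrow \exists Y \; \mathrm{father}(Y, X) \big)
\qquad
\forall X \, \big( \mathrm{person}(X) \wedge \mathit{not}\ \mathrm{married}(X) \rightarrow \mathrm{single}(X) \big)
```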

    Recursive Program Optimization Through Inductive Synthesis Proof Transformation

    The research described in this paper involved developing transformation techniques which increase the efficiency of the original program (the source) by transforming its synthesis proof into one (the target) that yields a computationally more efficient algorithm. We describe a working proof transformation system which, by exploiting the duality between mathematical induction and recursion, employs the novel strategy of optimizing recursive programs by transforming inductive proofs. We compare and contrast this approach with the more traditional approaches to program transformation, and highlight the benefits of proof transformation with regard to search, correctness, automatability and generality.
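    As an illustration of the kind of optimisation targeted (this is not the system itself), the structurally inductive definition of list reversal below appends once per element, while the accumulator version, corresponding to a transformed induction scheme, is tail recursive and performs a single cons per element (constant-time in the functional setting such techniques work in); a minimal Python sketch:

```python
# Illustration only, not the system described above: the shape of optimisation
# that proof transformation aims at, shown directly on programs.

def reverse_naive(xs):
    if not xs:
        return []
    return reverse_naive(xs[1:]) + [xs[0]]      # one append per recursive level

def reverse_acc(xs, acc=None):
    acc = [] if acc is None else acc
    if not xs:
        return acc
    return reverse_acc(xs[1:], [xs[0]] + acc)   # tail-recursive, accumulator-passing

assert reverse_naive([1, 2, 3]) == reverse_acc([1, 2, 3]) == [3, 2, 1]
```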

    Near-optimal Bootstrapping of Hitting Sets for Algebraic Models

    The classical lemma of Ore-DeMillo-Lipton-Schwartz-Zippel [Ore22, DL78, Zip79, Sch80] states that any nonzero polynomial $f(x_1, \ldots, x_n)$ of degree at most $s$ will evaluate to a nonzero value at some point on a grid $S^n \subseteq \mathbb{F}^n$ with $|S| > s$. Thus, there is an explicit hitting set of size $(s+1)^n$ for all $n$-variate, degree-$s$, size-$s$ algebraic circuits. In this paper, we prove the following results:
    - Let $\epsilon > 0$ be a constant. For a sufficiently large constant $n$ and all $s > n$, if we have an explicit hitting set of size $(s+1)^{n-\epsilon}$ for the class of $n$-variate degree-$s$ polynomials that are computable by algebraic circuits of size $s$, then for all $s$, we have an explicit hitting set of size $s^{\exp \circ \exp(O(\log^\ast s))}$ for $s$-variate circuits of degree $s$ and size $s$. That is, if we can obtain a barely non-trivial exponent compared to the trivial $(s+1)^n$-sized hitting set even for constant-variate circuits, we can get an almost complete derandomization of PIT.
    - The above result holds when "circuits" are replaced by "formulas" or "algebraic branching programs". This extends a recent surprising result of Agrawal, Ghosh and Saxena [AGS18], who proved the same conclusion for the class of algebraic circuits if the hypothesis provided a hitting set of size at most $s^{n^{0.5-\delta}}$ (where $\delta > 0$ is any constant). Hence, our work significantly weakens the hypothesis of Agrawal, Ghosh and Saxena to only require a slightly non-trivial saving over the trivial hitting set, and also presents the first such result for algebraic branching programs and formulas. Comment: The main result has been strengthened significantly, compared to the older version of the paper. Additionally, the stronger theorem now holds even for subclasses of algebraic circuits, such as algebraic formulas and algebraic branching programs.
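    A small sketch of the trivial grid hitting set that the abstract starts from, written over the integers for simplicity (in general one evaluates over a sufficiently large field); the paper's contribution is to bootstrap any slightly smaller explicit hitting set into a near-complete derandomization of polynomial identity testing.

```python
# Sketch of the trivial (s+1)^n grid hitting set: a nonzero n-variate polynomial
# of degree at most s is nonzero somewhere on S^n once |S| > s, so exhaustive
# evaluation on the grid decides identity testing (in (s+1)^n time).
from itertools import product

def is_identically_zero(poly, n, s):
    """poly is a black-box evaluator for an n-variate polynomial of degree <= s."""
    grid = range(s + 1)                          # any S with |S| > s works
    return all(poly(*pt) == 0 for pt in product(grid, repeat=n))

# Hypothetical examples: f is the zero polynomial in disguise, g is not.
f = lambda x, y: (x + y) ** 2 - x ** 2 - 2 * x * y - y ** 2
g = lambda x, y: x * y - 1
print(is_identically_zero(f, n=2, s=2))          # True
print(is_identically_zero(g, n=2, s=2))          # False
```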