
    A Formalization of Polytime Functions

    We present a deep embedding of Bellantoni and Cook's syntactic characterization of polytime functions. We prove formally that it is correct and complete with respect to the original characterization by Cobham, which required a bound to be proved manually. Compared to the paper proof by Bellantoni and Cook, we have been careful to make our proof fully constructive, so that we obtain more precise bounding polynomials and more efficient translations between the two characterizations. Another difference is that we consider functions on bitstrings instead of functions on positive integers. This latter change is motivated by the application of our formalization in the context of formal security proofs in cryptography. Based on our core formalization, we have started developing a library of polytime functions that can be reused to build more complex ones.
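
    For context, the core of Bellantoni and Cook's characterization is predicative recursion on notation, in which arguments are split into normal positions (before the semicolon) and safe positions (after it). A standard statement of the recursion scheme, paraphrased here rather than quoted from the paper, is:

    $$f(0, \bar{x}; \bar{a}) = g(\bar{x}; \bar{a})$$
    $$f(yi, \bar{x}; \bar{a}) = h_i(y, \bar{x}; \bar{a}, f(y, \bar{x}; \bar{a})) \quad (i \in \{0,1\},\; yi \neq 0)$$

    Here $yi$ denotes the bitstring $y$ extended with the bit $i$, and the crucial restriction is that the recursive value $f(y, \bar{x}; \bar{a})$ may only occupy a safe position of $h_i$; this blocks the nested recursions that would escape polynomial time, and it is what removes the explicit bounds required in Cobham's characterization.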

    Analysis in weak systems

    The authors survey and comment on their work on weak analysis. They describe the basic set-up of analysis in a feasible second-order theory and consider the impact of adding to it various forms of weak König's lemma. A brief discussion of the Baire category theorem follows. They then consider a strengthening of feasibility obtained (fundamentally) by the addition of a counting axiom, and show how Riemann integration can be developed in the stronger system. The paper finishes with three questions in weak analysis.
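
    As a point of reference (our formulation, not necessarily the one in the paper), weak König's lemma is the statement that every infinite binary tree has an infinite path:

    $$\forall T\, \big(T \text{ an infinite subtree of } 2^{<\omega} \;\rightarrow\; \exists X\, \forall n\, (X{\upharpoonright}n \in T)\big)$$

    The various forms of the lemma considered over feasible base theories differ in how the tree $T$ is presented and for which classes of trees the principle is asserted.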

    Parameterized Uniform Complexity in Numerics: from Smooth to Analytic, from NP-hard to Polytime

    The synthesis of classical Computational Complexity Theory with Recursive Analysis provides a quantitative foundation for reliable numerics. Here the operators of maximization, integration, and solving ordinary differential equations are known to map (even high-order differentiable) polynomial-time computable functions to instances which are 'hard' for the classical complexity classes NP, #P, and CH; but, restricted to analytic functions, they map polynomial-time computable functions to polynomial-time computable ones -- non-uniformly! We investigate the uniform parameterized complexity of the above operators in the setting of Weihrauch's TTE and its second-order extension due to Kawamura & Cook (2010). That is, we explore which (both continuous and discrete, first- and second-order) information and parameters on some given f suffice to obtain similar data on Max(f) and int(f), and within what running time, in terms of these parameters and the guaranteed output precision 2^(-n). It turns out that Gevrey's hierarchy of functions, climbing from analytic to smooth, corresponds to the computational complexity of maximization growing from polytime to NP-hard. Proof techniques involve mainly the Theory of (discrete) Computation, Hard Analysis, and Information-Based Complexity.
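
    Since the stated correspondence hinges on Gevrey's hierarchy, it may help to recall the standard definition (our paraphrase): a smooth function $f$ on a compact domain belongs to the Gevrey class of level $s \geq 1$ if there are constants $C, R > 0$ with

    $$\sup_x |f^{(k)}(x)| \;\leq\; C \cdot R^k \cdot (k!)^s \qquad \text{for all } k \in \mathbb{N}.$$

    Level $s = 1$ recovers exactly the analytic functions, and increasing $s$ interpolates toward general smooth functions, matching the growth of the complexity of maximization from polytime to NP-hard described above.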

    Techniques in weak analysis for conservation results

    We review and describe the main techniques for setting up systems of weak analysis, i.e. formal systems of second-order arithmetic related to subexponential classes of computational complexity. These involve techniques of proof theory (e.g., Herbrand’s theorem and the cut-elimination theorem) and model-theoretic techniques like forcing. The techniques are illustrated for the particular case of polytime computability. We also include a brief section where we list the known results in weak analysis.
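
    For orientation (a generic schema, not specific to this paper), a conservation result has the following shape: a theory $T'$ extending a theory $T$ is conservative over $T$ for a class $\Gamma$ of sentences when

    $$T' \vdash \varphi \;\Longrightarrow\; T \vdash \varphi \qquad \text{for every } \varphi \in \Gamma.$$

    Proof-theoretic tools such as Herbrand's theorem and cut elimination, and model-theoretic tools such as forcing, are the standard means of establishing statements of this form for weak second-order theories over their computationally motivated cores.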

    Interpretability in Robinson's Q

    In 1986, Edward Nelson published a book defending an extreme formalist view of mathematics, according to which there is an impassable barrier in the totality of exponentiation. On the positive side, Nelson embarks on a program of investigating how much mathematics can be interpreted in Raphael Robinson’s theory of arithmetic Q. In the shadow of this program, some very nice logical investigations and results were produced by a number of people, concerning not only what can be interpreted in Q but also what cannot be so interpreted. We explain some of these results and rely on them to discuss Nelson’s position.
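
    For readers without it at hand, Robinson's theory Q is, in one standard presentation, the induction-free system axiomatized by the following seven axioms:

    $$Sx \neq 0 \qquad Sx = Sy \rightarrow x = y \qquad x \neq 0 \rightarrow \exists y\, (x = Sy)$$
    $$x + 0 = x \qquad x + Sy = S(x + y) \qquad x \cdot 0 = 0 \qquad x \cdot Sy = (x \cdot y) + x$$

    Its complete lack of an induction scheme is what makes the question of how much mathematics can be interpreted in it both delicate and interesting.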

    Compilability of Abduction

    Abduction is one of the most important forms of reasoning; it has been successfully applied to several practical problems such as diagnosis. In this paper we investigate whether the computational complexity of abduction can be reduced by an appropriate use of preprocessing. This is motivated by the fact that part of the data of the problem (namely, the set of all possible assumptions and the theory relating assumptions and manifestations) is often known before the rest of the problem. We show some complexity results about abduction when compilation is allowed.
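
    As background (a standard logic-based formalization, not necessarily the exact one adopted in the paper), a propositional abduction instance consists of a theory $T$, a set $H$ of hypotheses, and a set $M$ of manifestations; a solution is a set $E \subseteq H$ such that

    $$T \cup E \text{ is consistent} \qquad \text{and} \qquad T \cup E \models M.$$

    The compilability question is whether preprocessing the part of the instance that is known in advance, namely $T$ and $H$, into a polynomial-size data structure can lower the complexity of answering queries about later-arriving manifestations $M$.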

    Consistency of circuit lower bounds with bounded theories

    Proving that there are problems in $\mathsf{P}^{\mathsf{NP}}$ that require boolean circuits of super-linear size is a major frontier in complexity theory. While such lower bounds are known for larger complexity classes, existing results only show that the corresponding problems are hard on infinitely many input lengths. For instance, proving almost-everywhere circuit lower bounds is open even for problems in $\mathsf{MAEXP}$. Given the notorious difficulty of proving lower bounds that hold for all large input lengths, we ask the following question: Can we show that a large set of techniques cannot prove that $\mathsf{NP}$ is easy infinitely often? Motivated by this and related questions about the interaction between mathematical proofs and computations, we investigate circuit complexity from the perspective of logic. Among other results, we prove that for any parameter $k \geq 1$ it is consistent with the theory $T$ that $\mathcal{C} \not\subseteq \textit{i.o.}\mathrm{SIZE}(n^k)$, where $(T, \mathcal{C})$ is one of the pairs: $T = \mathsf{T}^1_2$ and $\mathcal{C} = \mathsf{P}^{\mathsf{NP}}$; $T = \mathsf{S}^1_2$ and $\mathcal{C} = \mathsf{NP}$; $T = \mathsf{PV}$ and $\mathcal{C} = \mathsf{P}$. In other words, these theories cannot establish infinitely-often circuit upper bounds for the corresponding problems. This is of interest because the weaker theory $\mathsf{PV}$ already formalizes sophisticated arguments, such as a proof of the PCP Theorem. These consistency statements are unconditional and improve on earlier theorems of [KO17] and [BM18] on the consistency of lower bounds with $\mathsf{PV}$.
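
    To unpack the notation (our gloss): $\mathcal{C} \subseteq \textit{i.o.}\mathrm{SIZE}(n^k)$ asserts that every problem in $\mathcal{C}$ is decided by some family of boolean circuits of size $O(n^k)$ on infinitely many input lengths $n$. The consistency of the negation with a theory $T$ thus says exactly that $T$ cannot prove this infinitely-often circuit upper bound for $\mathcal{C}$.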