Computational reverse mathematics and foundational analysis
Reverse mathematics studies which subsystems of second order arithmetic are
equivalent to key theorems of ordinary, non-set-theoretic mathematics. The main
philosophical application of reverse mathematics proposed thus far is
foundational analysis, which explores the limits of different foundations for
mathematics in a formally precise manner. This paper gives a detailed account
of the motivations and methodology of foundational analysis, which have
heretofore been largely left implicit in the practice. It then shows how this
account can be fruitfully applied in the evaluation of major foundational
approaches by a careful examination of two case studies: a partial realization
of Hilbert's program due to Simpson [1988], and predicativism in the extended
form due to Feferman and Sch\"{u}tte.
Shore [2010, 2013] proposes that equivalences in reverse mathematics be
proved in the same way as inequivalences, namely by considering only
$\omega$-models of the systems in question. Shore refers to this approach as
computational reverse mathematics. This paper shows that despite some
attractive features, computational reverse mathematics is inappropriate for
foundational analysis, for two major reasons. Firstly, the computable
entailment relation employed in computational reverse mathematics does not
preserve justification for the foundational programs above. Secondly,
computable entailment is a $\Pi^1_1$-complete relation, and hence employing it
commits one to theoretical resources which outstrip those available within any
foundational approach that is proof-theoretically weaker than $\Pi^1_1$-$\mathrm{CA}_0$.
Comment: Submitted. 41 pages.
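For readers unfamiliar with the terminology, the computable entailment relation in question is standardly formulated as truth in all $\omega$-models (a reconstruction of the usual definition, not text quoted from the paper):
\[
\Gamma \models_{c} \varphi \quad\Longleftrightarrow\quad
\text{every } \omega\text{-model of } \Gamma \text{ satisfies } \varphi,
\]
where an $\omega$-model is a structure $(\mathbb{N}, \mathcal{S})$ whose first-order part is the standard natural numbers; for models of $\mathrm{RCA}_0$ the second-order part $\mathcal{S} \subseteq \mathcal{P}(\mathbb{N})$ is then a Turing ideal, i.e. closed under Turing reducibility and join.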
Computational Complexity for Physicists
These lecture notes are an informal introduction to the theory of
computational complexity and its links to quantum computing and statistical
mechanics.
Comment: references updated, reprint available from
http://itp.nat.uni-magdeburg.de/~mertens/papers/complexity.shtm
Testing for the ground (co-)reducibility property in term-rewriting systems
Given a term-rewriting system R, a term t is ground-reducible by R if every ground instance tσ of it is R-reducible. A pair (t, s) of terms is ground-co-reducible by R if every ground instance (tσ, sσ) of it for which tσ and sσ are distinct is R-reducible. Ground (co-)reducibility has proved to be the fundamental tool for mechanizing inductive proofs, together with the Knuth-Bendix completion procedure presented by Jouannaud and Kounalis (1986, 1989).

Jouannaud and Kounalis (1986, 1989) also presented an algorithm for testing ground reducibility which is tractable in practical cases but restricted to left-linear term-rewriting systems. The solution of the ground (co-)reducibility problem for the general case turned out to be surprisingly complicated. Decidability of ground reducibility for arbitrary term-rewriting systems was first proved by Plaisted (1985) and independently by Kapur (1987). However, the algorithms of Plaisted and Kapur require intractable computation, even in very simple cases.

We present here a new algorithm for the general case which outperforms the algorithms of Plaisted and Kapur, and even our previous algorithm in the case of left-linear term-rewriting systems. We then show how to adapt it to check for ground co-reducibility.
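To make the property being tested concrete, here is a hypothetical, naive Python sketch (not the algorithm of this paper, nor those of Plaisted or Kapur): it enumerates ground instances only up to a depth bound, so it can refute ground reducibility by exhibiting an irreducible ground instance, but it cannot decide the property in general.

from itertools import product

# Terms: variables are capitalized strings ("X"); other terms are tuples
# (function_symbol, arg1, ..., argn); constants are 1-tuples like ("0",).

def is_var(t):
    return isinstance(t, str)

def subterms(t):
    yield t
    if not is_var(t):
        for arg in t[1:]:
            yield from subterms(arg)

def match(pattern, term, subst=None):
    """Return a substitution making `pattern` equal to `term`, or None."""
    subst = dict(subst or {})
    if is_var(pattern):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        subst[pattern] = term
        return subst
    if is_var(term) or pattern[0] != term[0] or len(pattern) != len(term):
        return None
    for p, s in zip(pattern[1:], term[1:]):
        subst = match(p, s, subst)
        if subst is None:
            return None
    return subst

def reducible(t, lhss):
    """t is R-reducible iff some subterm matches some left-hand side of R."""
    return any(match(l, s) is not None for s in subterms(t) for l in lhss)

def apply_subst(t, subst):
    if is_var(t):
        return subst.get(t, t)
    return (t[0], *(apply_subst(arg, subst) for arg in t[1:]))

def variables(t):
    return {t} if is_var(t) else {v for arg in t[1:] for v in variables(arg)}

def ground_terms(signature, depth):
    """All ground terms over `signature` (symbol -> arity) up to the depth bound."""
    if depth == 0:
        return [(f,) for f, ar in signature.items() if ar == 0]
    smaller = ground_terms(signature, depth - 1)
    terms = list(smaller)
    for f, ar in signature.items():
        if ar > 0:
            terms += [(f, *args) for args in product(smaller, repeat=ar)]
    return terms

def ground_reducible_up_to(t, lhss, signature, depth):
    """Check only the ground instances built from ground terms up to the bound.
    Finding an irreducible instance refutes ground reducibility; finding none
    is merely evidence, since infinitely many instances remain unchecked."""
    vs = sorted(variables(t))
    for values in product(ground_terms(signature, depth), repeat=len(vs)):
        instance = apply_subst(t, dict(zip(vs, values)))
        if not reducible(instance, lhss):
            return False, instance      # counterexample found
    return True, None

# Example: R = { plus(0, Y) -> Y, plus(s(X), Y) -> s(plus(X, Y)) }.
# plus(X, Y) is ground-reducible: every ground instance contains a redex.
lhss = [("plus", ("0",), "Y"), ("plus", ("s", "X"), "Y")]
sig = {"0": 0, "s": 1, "plus": 2}
print(ground_reducible_up_to(("plus", "X", "Y"), lhss, sig, depth=2))

On the toy system above the check reports (True, None) up to the chosen depth, whereas keeping only the second rule would make it return an irreducible counterexample such as plus(0, 0).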
The intuitionistic fragment of computability logic at the propositional level
This paper presents a soundness and completeness proof for propositional
intuitionistic calculus with respect to the semantics of computability logic.
The latter interprets formulas as interactive computational problems,
formalized as games between a machine and its environment. Intuitionistic
implication is understood as algorithmic reduction in the weakest possible --
and hence most natural -- sense, disjunction and conjunction as
deterministic-choice combinations of problems (disjunction = machine's choice,
conjunction = environment's choice), and "absurd" as a computational problem of
universal strength. See http://www.cis.upenn.edu/~giorgi/cl.html for a
comprehensive online source on computability logic.
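As a loose, hypothetical illustration of the choice reading of the connectives described above (it captures only the asymmetry of who makes the choice, not the interactive game semantics of computability logic), consider the following Python sketch:

# Toy sketch only: a "problem" is reduced to a yes/no question of whether the
# machine can complete a task; genuine computability-logic games are
# interactive and far richer than this.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Problem:
    name: str
    machine_can_solve: Callable[[], bool]

def choice_disjunction(a, b):
    # A ⊔ B (machine's choice): the machine picks a component, so it suffices
    # to be able to solve at least one of them.
    return Problem(f"({a.name} | {b.name})",
                   lambda: a.machine_can_solve() or b.machine_can_solve())

def choice_conjunction(a, b):
    # A ⊓ B (environment's choice): the environment picks the component, so
    # the machine must be able to solve whichever one is chosen.
    return Problem(f"({a.name} & {b.name})",
                   lambda: a.machine_can_solve() and b.machine_can_solve())

easy = Problem("easy", lambda: True)    # a solvable task
hard = Problem("hard", lambda: False)   # stand-in for an unsolvable task

print(choice_disjunction(easy, hard).machine_can_solve())  # True
print(choice_conjunction(easy, hard).machine_can_solve())  # False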
The computational complexity of density functional theory
Density functional theory is a successful branch of numerical simulations of
quantum systems. While the foundations are rigorously defined, the universal
functional must be approximated, resulting in a `semi'-ab initio approach. The
search for improved functionals has resulted in hundreds of functionals and
remains an active research area. This chapter is concerned with understanding
fundamental limitations of any algorithmic approach to approximating the
universal functional. The results based on Hamiltonian complexity presented
here are largely based on \cite{Schuch09}. In this chapter, we explain the
computational complexity of DFT and any other approach to solving electronic
structure Hamiltonians. The proof relies on perturbative gadgets widely used in
Hamiltonian complexity and we provide an introduction to these techniques using
the Schrieffer-Wolff method. Since the difficulty of this problem has been well
appreciated before this formalization, practitioners have turned to a host of
approximate Hamiltonians. By extending the results of \cite{Schuch09}, we show
that in DFT, although the introduction of an approximate potential leads to a
non-interacting Hamiltonian, the problem remains, in the worst case,
NP-complete.
Comment: Contributed chapter to "Many-Electron Approaches in Physics,
Chemistry and Mathematics: A Multidisciplinary View".
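For orientation, the second-order effective Hamiltonian on which such perturbative-gadget arguments typically rest can be written as follows (a textbook form of the Schrieffer-Wolff expansion, stated here as background rather than quoted from the chapter): for $H = H_0 + \epsilon V$, with $P$ projecting onto the low-energy subspace of $H_0$ at energy $E_0$ and $Q = \mathbb{1} - P$,
\[
H_{\mathrm{eff}} \;=\; P H_0 P \;+\; \epsilon\, P V P
\;+\; \epsilon^{2}\, P V Q \, (E_0 - Q H_0 Q)^{-1} Q V P \;+\; O(\epsilon^{3}).
\]
Gadget constructions choose $V$ so that the second-order term reproduces a desired target interaction on the low-energy subspace.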