Formalizing Computability Theory via Partial Recursive Functions
We present an extension to the library of the Lean theorem
prover formalizing the foundations of computability theory. We use primitive
recursive functions and partial recursive functions as the main objects of
study, and we use a constructive encoding of partial functions such that they
are executable when the programs in question provably halt. Main theorems
include the construction of a universal partial recursive function and a proof
of the undecidability of the halting problem. Type class inference provides a
transparent way to supply Gödel numberings where needed and encapsulate the
encoding details. Comment: 16 pages, accepted to ITP 201
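The paper's development is in Lean; as an informal illustration of the objects it studies, here is a minimal Python sketch (my example, not the paper's code) of the standard primitive recursive combinators, with addition and multiplication built from them:

```python
# Primitive recursive combinators over the naturals: a hedged sketch,
# not the paper's Lean formalization.

def zero(*args):
    """The constant-zero function."""
    return 0

def succ(x):
    """Successor."""
    return x + 1

def proj(i):
    """Projection onto the i-th argument."""
    return lambda *args: args[i]

def comp(f, gs):
    """Composition: f applied to the results of g_1, ..., g_k."""
    return lambda *args: f(*(g(*args) for g in gs))

def prim_rec(f, g):
    """Primitive recursion: h(0, xs) = f(xs); h(n+1, xs) = g(h(n, xs), n, xs)."""
    def h(n, *xs):
        acc = f(*xs)
        for i in range(n):
            acc = g(acc, i, *xs)
        return acc
    return h

# add(n, m): add(0, m) = m; add(n+1, m) = succ(add(n, m))
add = prim_rec(proj(0), comp(succ, [proj(0)]))

# mul(n, m): mul(0, m) = 0; mul(n+1, m) = add(mul(n, m), m)
mul = prim_rec(zero, comp(add, [proj(0), proj(2)]))

print(add(3, 4))  # 7
print(mul(3, 4))  # 12
```

Partial recursive functions additionally close under unbounded search (minimization), which is where non-termination, and hence the halting problem, enters the picture.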
Computation Environments, An Interactive Semantics for Turing Machines (which P is not equal to NP considering it)
To scrutinize notions of computation and time complexity, we introduce and
formally define an interactive model of computation that we call the
\emph{computation environment}. A computation environment consists of two main
parts: i) a universal processor, and ii) a computist, who uses the
computational power of the universal processor to perform effective
procedures. For the computist, the notion of computation finds its meaning
through his \underline{interaction} with the universal processor.
We are interested in those computation environments which can serve as
alternatives to the real computation environment whose computist is the human
being. Such computation environments must have two properties: 1- being
physically plausible, and 2- being sufficiently powerful.
Based on Copeland's criteria for effective procedures, we define what a
\emph{physically plausible} computation environment is.
We construct two \emph{physically plausible} and \emph{sufficiently powerful}
computation environments: 1- the Turing computation environment, and 2- a
persistently evolutionary computation environment, which persistently evolves
in the course of executing computations.
We prove that the equality of the complexity classes P and NP in the
persistently evolutionary computation environment conflicts with the
\underline{free will} of the computist.
We provide an axiomatic system for Turing computability and prove that if
just one of its axioms is omitted, this result can no longer be derived from
the remaining axioms.
We prove that a computist living inside the environment can never be
confident whether he lives in a static environment or a persistently
evolutionary one. Comment: 33 pages, interactive computation, P vs N
Computability and analysis: the legacy of Alan Turing
We discuss the legacy of Alan Turing and his impact on computability and
analysis. Comment: 49 pages
Computability of probability measures and Martin-Löf randomness over metric spaces
In this paper we investigate algorithmic randomness on more general spaces
than the Cantor space, namely computable metric spaces. To do this, we first
develop a unified framework allowing computations with probability measures. We
show that any computable metric space with a computable probability measure is
isomorphic to the Cantor space in a computable and measure-theoretic sense. We
show that any computable metric space admits a universal uniform randomness
test (without further assumption). Comment: 29 pages
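As a concrete instance of computing with a probability measure, a computable Bernoulli measure on the Cantor space is determined by its rational values on cylinder sets. The following Python sketch works under that assumption (it illustrates the general idea, not the paper's framework):

```python
from fractions import Fraction

def cylinder_measure(w, p=Fraction(1, 2)):
    """Measure of the cylinder [w] = {infinite binary sequences extending w}
    under the Bernoulli(p) measure: each '1' contributes p, each '0' 1 - p."""
    mu = Fraction(1)
    for bit in w:
        mu *= p if bit == "1" else 1 - p
    return mu

# Under the uniform measure, [0110] has measure (1/2)^4 = 1/16.
print(cylinder_measure("0110"))  # 1/16

# Finite additivity on cylinders: mu[w] = mu[w0] + mu[w1].
w = "0110"
assert cylinder_measure(w) == cylinder_measure(w + "0") + cylinder_measure(w + "1")
```

Because the values are exact rationals, the measure of every cylinder is computable to any precision, which is the basic requirement a computable probability measure must satisfy.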
Kolmogorov Complexity in perspective. Part I: Information Theory and Randomness
We survey diverse approaches to the notion of information: from Shannon
entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov
complexity are presented: randomness and classification. The survey is divided
into two parts in the same volume. Part I is dedicated to information theory
and the mathematical formalization of randomness based on Kolmogorov
complexity. This last application goes back to the 1960s and 1970s with the
work of Martin-Löf, Schnorr, Chaitin, and Levin, and has gained new impetus in
recent years. Comment: 40 pages
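Kolmogorov complexity itself is uncomputable, but any lossless compressor gives a computable upper bound on it, which is how the randomness-versus-structure distinction is often illustrated in practice. A sketch using Python's zlib (my example, not from the survey):

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data: a computable upper bound
    (up to an additive constant) on the Kolmogorov complexity of data."""
    return len(zlib.compress(data, 9))

structured = b"ab" * 5000                 # highly regular: low complexity
random.seed(0)                            # deterministic pseudo-random bytes
noisy = bytes(random.getrandbits(8) for _ in range(10000))

print(compressed_size(structured))        # far below 10000
print(compressed_size(noisy))             # close to 10000: incompressible
```

The regular string compresses to a tiny fraction of its length, while the (pseudo-)random string does not shrink at all, mirroring the definition of Martin-Löf randomness via incompressibility.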
Variations on the Theme of Conning in Mathematical Economics
The mathematization of economics is almost exclusively in terms of the mathematics of real analysis which, in turn, is founded on set theory (and the axiom of choice) and orthodox mathematical logic. In this paper I try to point out that this kind of mathematization is replete with economic infelicities. The attempt to extract these infelicities is in terms of three main examples: dynamics, policy, and rational expectations and learning. The focus is on the role of, and reliance on, standard fixed point theorems in orthodox mathematical economics.
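The fixed point theorems at issue (Brouwer's and its kin) are non-constructive: they assert existence without providing a method of computation. By contrast, Banach's contraction mapping theorem is constructive, since simple iteration converges to the fixed point. A small Python illustration of that contrast (mine, not the paper's):

```python
import math

def banach_fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    """Iterate x -> f(x) from x0; for a contraction mapping this
    converges to the unique fixed point (Banach's theorem)."""
    x = x0
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("did not converge")

# cos is a contraction on [0, 1]; its fixed point is the Dottie number.
x = banach_fixed_point(math.cos, 1.0)
print(x)  # ~0.739085
```

Nothing comparable exists for Brouwer's theorem in general: a fixed point is guaranteed, but no iteration scheme is supplied, which is precisely the kind of gap the paper's critique targets.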
Formalizing restriction categories
Restriction categories are an abstract axiomatic framework by Cockett and Lack for reasoning about (generalizations of the idea of) partiality of functions. In a restriction category, every map defines an endomap on its domain, the corresponding partial identity map. Restriction categories cover a number of examples of different flavors and are sound and complete with respect to the more synthetic and concrete partial map categories. A partial map category is based on a given category (of total maps), and a map in it is a map from a subobject of the domain. In this paper, we report on an Agda formalization of the first chapters of the theory of restriction categories, including the challenging completeness result. We explain the mathematics formalized, comment on the design decisions we made for the formalization, and illustrate them at work.
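The intended model is easy to exercise concretely: take maps to be finite partial functions (Python dicts), composition to be defined where both stages are, and the restriction of f to be the partial identity on f's domain. The four Cockett-Lack restriction axioms can then be checked directly (a sketch of the model, not the Agda development):

```python
def compose(g, f):
    """g after f, defined where f is defined and f(x) lies in g's domain."""
    return {x: g[f[x]] for x in f if f[x] in g}

def restrict(f):
    """The restriction of f: the partial identity on f's domain."""
    return {x: x for x in f}

# Two partial functions on a small carrier set {0, 1, 2, 3}.
f = {0: 1, 2: 3}
g = {0: 2, 1: 1}

# R1: f after its own restriction is f.
assert compose(f, restrict(f)) == f
# R2: restrictions of maps with a common domain commute.
assert compose(restrict(f), restrict(g)) == compose(restrict(g), restrict(f))
# R3: restricting (g after f-bar) equals g-bar after f-bar.
assert restrict(compose(g, restrict(f))) == compose(restrict(g), restrict(f))
# R4: g-bar after f equals f after the restriction of (g after f).
assert compose(restrict(g), f) == compose(f, restrict(compose(g, f)))
print("all four restriction axioms hold in this model")
```

This is exactly the "concrete partial map" side of the soundness-and-completeness result the abstract mentions: partial functions satisfy the axioms, and every restriction category embeds in such a model.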
Formalizing Termination Proofs under Polynomial Quasi-interpretations
Usual termination proofs for a functional program require checking all
possible reduction paths. Due to an exponential gap between the height and the
size of such a reduction tree, no naive formalization of termination proofs yields
a connection to the polynomial complexity of the given program. We solve this
problem employing the notion of minimal function graph, a set of pairs of a
term and its normal form, which is defined as the least fixed point of a
monotone operator. We show that termination proofs for programs reducing under
lexicographic path orders (LPOs for short) and polynomially quasi-interpretable
can be optimally performed in a weak fragment of Peano arithmetic. This yields
an alternative proof of the fact that every function computed by an
LPO-terminating, polynomially quasi-interpretable program is computable in
polynomial space. The formalization is indeed optimal since every
polynomial-space computable function can be computed by such a program. The
crucial observation is that inductive definitions of minimal function graphs
under LPO-terminating programs can be approximated with transfinite induction
along LPOs. Comment: In Proceedings FICS 2015, arXiv:1509.0282
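Lexicographic path orders are directly implementable. A compact Python sketch (an illustration of the standard LPO definition, not the paper's formalization), with terms as nested tuples whose head is a function symbol, plain strings as variables, and the precedence given as a rank dictionary:

```python
def occurs(x, t):
    """Does variable x occur in term t?"""
    if t == x:
        return True
    return isinstance(t, tuple) and any(occurs(x, a) for a in t[1:])

def lpo_gt(s, t, prec):
    """s >_lpo t in the lexicographic path order induced by prec."""
    if s == t:
        return False
    if isinstance(t, str):                        # t is a variable
        return isinstance(s, tuple) and occurs(t, s)
    if isinstance(s, str):                        # a variable is never greater
        return False
    f, s_args = s[0], s[1:]
    g, t_args = t[0], t[1:]
    # (1) some argument of s is >= t
    if any(si == t or lpo_gt(si, t, prec) for si in s_args):
        return True
    # (2) head of s has higher precedence, and s dominates every argument of t
    if prec[f] > prec[g]:
        return all(lpo_gt(s, ti, prec) for ti in t_args)
    # (3) equal heads: compare argument lists lexicographically
    if f == g:
        for si, ti in zip(s_args, t_args):
            if si == ti:
                continue
            return lpo_gt(si, ti, prec) and all(lpo_gt(s, tj, prec) for tj in t_args)
    return False

# With add > s in the precedence, the addition rule is LPO-decreasing:
prec = {"add": 2, "s": 1}
lhs = ("add", ("s", "x"), "y")     # add(s(x), y)
rhs = ("s", ("add", "x", "y"))     # s(add(x, y))
print(lpo_gt(lhs, rhs, prec))      # True
print(lpo_gt(rhs, lhs, prec))      # False
```

Since the order is well-founded on terms over a finite signature, every rewrite sequence oriented by it terminates, which is the property the transfinite induction along LPOs exploits.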