A Survey on Continuous Time Computations
We provide an overview of theories of continuous time computation. These
theories allow us to understand both the hardness of questions related to
continuous time dynamical systems and the computational power of continuous
time analog models. We survey the existing models, summarize results, and
point to relevant references in the literature.
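
As a concrete illustration (ours, not the survey's), one classical continuous
time model covered by this literature is Shannon's General Purpose Analog
Computer, which computes functions as solutions of polynomial ODEs. The sketch
below generates sin(t) from the system y' = z, z' = -y, using forward Euler
integration purely to trace the trajectory:

    import math

    def gpac_sine(t_end=math.pi / 2, dt=1e-5):
        # y(0) = 0, z(0) = 1 solve y' = z, z' = -y, so y(t) = sin(t).
        y, z = 0.0, 1.0
        t = 0.0
        while t < t_end:
            y, z = y + dt * z, z - dt * y  # explicit Euler step
            t += dt
        return y

    print(gpac_sine())  # approximately sin(pi/2) = 1.0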
A Swiss Pocket Knife for Computability
This research concerns operational and complexity-oriented aspects of the
classical foundations of computability theory. The approach is to re-examine
some classical theorems and constructions, but with new criteria for success
that are natural from a programming language perspective.
Three cornerstones of computability theory are the S-m-n theorem; Turing's
"universal machine"; and Kleene's second recursion theorem. In today's
programming language parlance these are respectively partial evaluation,
self-interpretation, and reflection. In retrospect it is fascinating that
Kleene's 1938 proof is constructive and, in essence, builds a self-reproducing
program.
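
For intuition, the fixed point delivered by Kleene's second recursion theorem
yields, in modern terms, a program with access to its own text; the simplest
instance is a quine. A minimal Python sketch (our illustration of the standard
template trick, not Kleene's original construction):

    # A program whose output is its own source code: the template s is
    # applied to its own representation, mirroring Kleene's fixed-point idea.
    s = 's = %r\nprint(s %% s)'
    print(s % s)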
Computability theory originated in the 1930s, long before the invention of
computers and programs. Its emphasis was on delimiting the boundaries of
computability. Some milestones include 1936 (Turing), 1938 (Kleene), 1967
(isomorphism of programming languages), 1985 (partial evaluation), 1989 (theory
implementation), 1993 (efficient self-interpretation) and 2006 (term register
machines).
The "Swiss pocket knife" of the title is a programming language that allows
efficient computer implementation of all three computability cornerstones,
emphasising the third: Kleene's second recursion theorem. We describe
experiments with a tree-based computational model aiming for both fast program
generation and fast execution of the generated programs.
Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
Cinnamons: A Computation Model Underlying Control Network Programming
We give the easily recognizable name "cinnamon" and "cinnamon programming" to
a new computation model intended to form a theoretical foundation for Control
Network Programming (CNP). CNP has established itself as a programming paradigm
combining declarative and imperative features, a built-in search engine, and
powerful tools for search control that allow easy, intuitive, visual
development of heuristic, nondeterministic, and randomized solutions. We
rigorously define the syntax and semantics of the new model of computation,
while trying to keep the underlying intuition clear and to include enough
examples. The purposely simplified theoretical model is then compared both to
WHILE-programs (thus demonstrating its Turing-completeness) and to the "real"
CNP. Finally, future research possibilities are mentioned that would eventually
extend cinnamon programming in the directions of nondeterminism, randomness,
and fuzziness.
Comment: 7th Intl Conf. on Computer Science, Engineering & Applications
(ICCSEA 2017), September 23-24, 2017, Copenhagen, Denmark
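
For readers unfamiliar with the WHILE-programs used in the Turing-completeness
comparison, here is a minimal interpreter sketch (the AST encoding is our own
illustration, not the paper's formalism):

    def run(stmt, env):
        # Interpret a WHILE statement over natural-number variables.
        op = stmt[0]
        if op == "zero":            # ("zero", x)     x := 0
            env[stmt[1]] = 0
        elif op == "inc":           # ("inc", x, y)   x := y + 1
            env[stmt[1]] = env.get(stmt[2], 0) + 1
        elif op == "dec":           # ("dec", x, y)   x := y - 1, floored at 0
            env[stmt[1]] = max(env.get(stmt[2], 0) - 1, 0)
        elif op == "seq":           # ("seq", s1, s2) s1; s2
            run(stmt[1], env)
            run(stmt[2], env)
        elif op == "while":         # ("while", x, b) while x != 0 do b
            while env.get(stmt[1], 0) != 0:
                run(stmt[2], env)
        return env

    # Example: z := x + y, by counting x and y down into z.
    add = ("seq", ("zero", "z"),
           ("seq",
            ("while", "x", ("seq", ("inc", "z", "z"), ("dec", "x", "x"))),
            ("while", "y", ("seq", ("inc", "z", "z"), ("dec", "y", "y")))))
    print(run(add, {"x": 2, "y": 3})["z"])  # -> 5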
Algorithmic complexity of quantum capacity
Recently the theory of communication developed by Shannon has been extended
to the quantum realm by exploiting the rules of quantum theory. The latter
rests on complex vector spaces. However, complex (as well as real) numbers are
just idealizations that are not available in practice, where we can only deal
with rational numbers. This fact naturally leads to the question of whether
the developed notions of capacities for quantum channels truly capture their
ability to transmit information. Here we answer this question for the quantum
capacity. To this end we resort to the notion of semi-computability in order
to describe quantum states and quantum channel maps approximately, by rational
numbers. Then we introduce algorithmic entropies (such as algorithmic quantum
coherent information) and derive relevant properties for them. Finally we
define the algorithmic quantum capacity and prove that it equals the standard
one.
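
To make the quantities concrete, the one-shot coherent information underlying
the quantum capacity can be evaluated numerically as I_c(rho, N) = S(N(rho)) -
S(W), where W_ij = Tr(K_i rho K_j^dagger) is the entropy-exchange matrix for
Kraus operators {K_i}. The sketch below (our illustration, not the paper's
semi-computable construction) evaluates it for an amplitude-damping channel:

    import numpy as np

    def entropy(rho):
        # Von Neumann entropy in bits of a Hermitian density-like matrix.
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        return float(-np.sum(evals * np.log2(evals)))

    def coherent_information(rho, kraus):
        out = sum(K @ rho @ K.conj().T for K in kraus)   # N(rho)
        W = np.array([[np.trace(Ki @ rho @ Kj.conj().T)  # entropy exchange
                       for Kj in kraus] for Ki in kraus])
        return entropy(out) - entropy(W)

    gamma = 0.2  # damping probability
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
    rho = np.eye(2, dtype=complex) / 2  # maximally mixed input
    print(coherent_information(rho, [K0, K1]))

Maximizing this quantity over inputs and regularizing over many channel uses
gives the standard quantum capacity that the paper's algorithmic version is
shown to equal.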
Kolmogorov Complexity in perspective. Part II: Classification, Information Processing and Duality
We survey diverse approaches to the notion of information: from Shannon
entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov
complexity are presented: randomness and classification. The survey is divided
into two parts published in the same volume. Part II is dedicated to the
relation between logic and information systems, within the scope of Kolmogorov
algorithmic information theory. We present a recent application of Kolmogorov
complexity: classification using compression, an idea provocatively
implemented by authors such as Bennett, Vitanyi and Cilibrasi. This stresses
how Kolmogorov complexity, besides being a foundation for randomness, is also
related to classification. Another approach to classification is also
considered: the so-called "Google classification". It uses another original
and attractive idea which is conceptually connected to classification using
compression and to Kolmogorov complexity. We present and unify these different
approaches to classification in terms of Bottom-Up versus Top-Down operational
modes, whose fundamental principles and underlying duality we point out. We
look at the way these two dual modes are used in different approaches to
information systems, particularly the relational model for databases
introduced by Codd in the 1970s. This allows us to point out diverse forms of
a fundamental duality. These operational modes are also reinterpreted in the
context of the comprehension schema of axiomatic set theory ZF. This leads us
to develop how Kolmogorov complexity is linked to intensionality, abstraction,
classification and information systems.
Comment: 43 pages
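
As a concrete taste of classification by compression (our sketch of the
Cilibrasi-Vitanyi normalized compression distance, with zlib standing in for
an ideal, uncomputable Kolmogorov compressor):

    import zlib

    def C(data: bytes) -> int:
        # Compressed length: a computable stand-in for Kolmogorov complexity.
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        # Normalized compression distance: near 0 for similar objects.
        cx, cy = C(x), C(y)
        return (C(x + y) - min(cx, cy)) / max(cx, cy)

    a = b"the quick brown fox jumps over the lazy dog " * 20
    b = b"the quick brown fox leaps over the lazy dog " * 20
    c = b"colorless green ideas sleep furiously tonight " * 20
    print(ncd(a, b))  # small: shared structure compresses well jointly
    print(ncd(a, c))  # larger: little shared structure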