Kolmogorov Complexity in perspective. Part II: Classification, Information Processing and Duality
We survey diverse approaches to the notion of information, from Shannon
entropy to Kolmogorov complexity, and present two of the main applications of
Kolmogorov complexity: randomness and classification. The survey is divided
into two parts published in the same volume. Part II is dedicated to the
relation between logic and information systems, within the scope of Kolmogorov
algorithmic information theory. We present a recent application of Kolmogorov
complexity: classification using compression, an idea with provocative
implementations by authors such as Bennett, Vitanyi and Cilibrasi. This
stresses how Kolmogorov complexity, besides being a foundation of randomness,
is also related to classification. Another approach to classification is also
considered: the so-called "Google classification". It uses another original and
attractive idea which is connected, from a conceptual point of view, to
classification using compression and to Kolmogorov complexity. We present and
unify these different approaches to classification in terms of Bottom-Up
versus Top-Down operational modes, whose fundamental principles and underlying
duality we point out. We look at the way these two dual modes are used in
different approaches to information systems, particularly the relational model
for databases introduced by Codd in the 1970s. This allows us to point out
diverse forms of a fundamental duality. These operational modes are also
reinterpreted in the context of the comprehension schema of axiomatic set
theory ZF. This leads us to develop how Kolmogorov complexity is linked to
intensionality, abstraction, classification and information systems. Comment: 43 pages
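The classification-using-compression idea referenced above is usually realised through the normalized compression distance (NCD) of Cilibrasi and Vitanyi, which approximates the uncomputable Kolmogorov complexity C(x) by the length of a real compressor's output. A minimal sketch using Python's standard zlib compressor (the choice of compressor and the sample strings are illustrative assumptions, not from the survey):

```python
import zlib

def C(data: bytes) -> int:
    # Approximate Kolmogorov complexity by compressed length (level 9).
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: small for similar objects,
    # close to 1 for unrelated ones.
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog" * 10
b = b"colorless green ideas sleep furiously" * 10

# An object is far closer to itself than to an unrelated text,
# which is what makes NCD usable for clustering and classification.
print(ncd(a, a) < ncd(a, b))
```

Clustering a corpus by pairwise NCD then yields the Bottom-Up classification discussed in the survey.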
Logic and operator algebras
The most recent wave of applications of logic to operator algebras is a young
and rapidly developing field. This is a snapshot of the current state of the
art. Comment: A minor change
Hyperset Approach to Semi-structured Databases and the Experimental Implementation of the Query Language Delta
This thesis presents practical suggestions towards the implementation of the
hyperset approach to semi-structured databases and the associated query
language Delta. The work can be characterised as part of a top-down approach
to semi-structured databases, from theory to practice. Its main original part
consisted in the implementation of the hyperset query language Delta for
semi-structured databases, including worked example queries; the goal was to
demonstrate the practical details of this approach and language. This required
the development of an extended, practical version of the language based on the
existing theoretical version, together with the corresponding operational
semantics. Here we present a detailed description of the most essential steps
of the implementation. Another crucial problem for this approach was to
demonstrate how to deal in practice with the equality relation between
(hyper)sets, which is computationally realised by the bisimulation relation.
This expensive procedure, especially in the case of distributed
semi-structured data, required some additional theoretical considerations and
practical suggestions for efficient implementation. To this end the
'local/global' strategy for computing the bisimulation relation over
distributed semi-structured data was developed and its efficiency was
experimentally confirmed. Comment: Technical Report (PhD thesis), University of Liverpool, England
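The bisimulation relation that realises hyperset equality can be computed by a greatest-fixpoint iteration over a graph encoding of the sets: start from all pairs of nodes and repeatedly discard pairs whose children cannot be matched on both sides. A minimal sketch under that standard definition (the graph encoding and node names are illustrative assumptions; the thesis's actual 'local/global' strategy for distributed data is far more refined):

```python
def bisimilar_pairs(edges):
    # edges: dict mapping each node to the set of its child nodes,
    # i.e. a graph whose nodes denote (hyper)sets of their children.
    nodes = set(edges)
    rel = {(a, b) for a in nodes for b in nodes}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(rel):
            # Keep (a, b) only if every child of a matches some child
            # of b under rel, and vice versa (the bisimulation condition).
            ok = (all(any((x, y) in rel for y in edges[b]) for x in edges[a])
                  and all(any((x, y) in rel for x in edges[a]) for y in edges[b]))
            if not ok:
                rel.discard((a, b))
                changed = True
    return rel

# The set {∅} encoded two different ways: v1 has one empty child,
# v2 has two distinct empty children.
g = {"v1": {"e1"}, "v2": {"e2", "e3"}, "e1": set(), "e2": set(), "e3": set()}
rel = bisimilar_pairs(g)
print(("v1", "v2") in rel)  # both nodes denote the same hyperset
```

This naive fixpoint is quadratic in the number of pairs per pass, which is exactly why the thesis needs an efficient strategy for the distributed case.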
A proof of strong normalisation using domain theory
Ulrich Berger presented a powerful proof of strong normalisation using
domains; in particular, it significantly simplifies Tait's proof of strong
normalisation of Spector's bar recursion. The main contribution of this paper
is to show that, using ideas from intersection types and Martin-Löf's domain
interpretation of type theory, one can in turn simplify U. Berger's argument
further. We build a domain model for an untyped programming language, whereas
U. Berger has an interpretation only for typed terms or, alternatively, has an
interpretation for untyped terms but needs an extra condition to deduce strong
normalisation. As a main application, we show that Martin-Löf dependent type
theory extended with a program for Spector's double negation shift is strongly
normalising. Comment: 16 pages
Quantum Cognition based on an Ambiguous Representation Derived from a Rough Set Approximation
Over the last years, in a series of papers by Arecchi and others, a model for
the cognitive processes involved in decision making has been proposed and
investigated. The key element of this model is the expression of apprehension
and judgement, the basic cognitive processes of decision making, as an inverse
Bayes inference classifying the information content of neuron spike trains.
For successive plural stimuli, it has been shown that this inference, equipped
with basic non-algorithmic jumps, exhibits quantum-like characteristics. We
show here that such a decision-making process is consistently related to
ambiguous representation by an observer within a universe of discourse. In our
work, the ambiguous representation of an object or a stimulus is defined by a
pair of maps from objects of a set to their representations, where the two
maps are interrelated in a particular structure. The a priori and a posteriori
hypotheses of Bayes inference are replaced by the upper and lower
approximations, respectively, of the initial data sets, each derived with
respect to one of the maps. We show further that, due to the particular
structural relation between the two maps, the logical structure of such
combined approximations can only be expressed as an orthomodular lattice and
can therefore be represented by a quantum rather than a Boolean logic. To our
knowledge, this is the first investigation aiming to reveal the concrete
logical structure of inverse Bayes inference in cognitive processes. Comment: 23 pages, 8 figures, original research paper
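The upper and lower approximations invoked above are the standard rough-set constructions: given a partition of the universe into indiscernibility classes, the lower approximation of a target set collects the classes entirely contained in it, and the upper approximation the classes that merely intersect it. A minimal sketch of just these two operators (the universe, partition, and target below are illustrative assumptions, not the paper's model, which additionally couples two interrelated maps):

```python
def rough_approx(partition, target):
    # partition: iterable of disjoint frozensets covering the universe
    # (the indiscernibility classes); target: the set to approximate.
    # Lower approximation: classes wholly inside the target.
    lower = set().union(*(c for c in partition if c <= target))
    # Upper approximation: classes that intersect the target.
    upper = set().union(*(c for c in partition if c & target))
    return lower, upper

# Universe {1..6} split into three indiscernibility classes.
blocks = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})]
target = {1, 2, 3}
lo, up = rough_approx(blocks, target)
print(sorted(lo), sorted(up))  # [1, 2] [1, 2, 3, 4]
```

The gap between the two approximations (here {3, 4}) is the boundary region, and it is the interplay of two such approximation pairs that the paper shows yields an orthomodular rather than Boolean logic.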