Some Thoughts on Hypercomputation
Hypercomputation is a relatively new branch of computer science that emerged from the idea that the Church--Turing Thesis, which is supposed to describe what is computable and what is noncomputable, cannot possibly be true. Because of its apparent validity, the Church--Turing Thesis has been used to investigate the possible limits of the intelligence of any imaginable life form and, consequently, the limits of information processing, since living beings are, among other things, information processors. However, in the light of hypercomputation, which seems to be feasible in our universe, one cannot impose arbitrary limits on what intelligence can achieve unless specific physical laws prohibit the realization of something. In addition, hypercomputation allows us to ponder aspects of communication between intelligent beings that have not been considered before.
Decision Problems for Partial Specifications: Empirical and Worst-Case Complexities
Partial specifications allow approximate models of systems, such as Kripke structures or labeled transition systems, to be created. Using the abstraction these models make possible, the state-space explosion problem can be avoided, whilst still retaining a structure over which properties can be checked. A single partial specification abstracts a set of systems, whether Kripke structures, labeled transition systems, or systems with both atomic propositions and named transitions. This thesis deals in part with problems arising from a desire to efficiently evaluate sentences of the modal μ-calculus over a partial specification.
Partial specifications also allow a single system to be modeled by a number of partial specifications,
which abstract away different parts of the system. Alternatively, a number of partial
specifications may represent different requirements on a system. The thesis also addresses the
question of whether a set of partial specifications is consistent, that is to say, whether a single
system exists that is abstracted by each member of the set. The effect of nominals, special atomic propositions true on only one state in a system, on the problem of the consistency of many partial specifications is also considered. The thesis also addresses the question of whether
the systems a partial specification abstracts are all abstracted by a second partial specification,
the problem of inclusion.
The thesis demonstrates how commonly used “specification patterns”, useful properties specified in the modal μ-calculus, can be efficiently evaluated over partial specifications, and gives upper and lower complexity bounds on the problems related to sets of partial specifications.
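To make this concrete, here is a minimal OCaml sketch of one way a partial specification can be represented: a three-valued labelling leaves some atomic propositions Unknown, so a single partial model abstracts every complete system that resolves those unknowns. The names used here (tri, pks, eval_all_states) are hypothetical illustrations, not the thesis's actual data structures.

    (* A partial Kripke structure with a three-valued labelling.
       Unknown marks propositions the specification leaves open. *)
    type tri = True | False | Unknown

    type pks = {
      states : int list;               (* state identifiers *)
      trans  : (int * int) list;       (* transition relation *)
      label  : int -> string -> tri;   (* three-valued labelling *)
    }

    (* Kleene conjunction over the three-valued domain. *)
    let tri_and a b = match a, b with
      | False, _ | _, False -> False
      | True, True -> True
      | _ -> Unknown

    (* Evaluate "p holds in every state": True means p holds in all
       abstracted systems, False means it fails in all of them, and
       Unknown means the answer depends on how the unknowns are
       resolved. *)
    let eval_all_states (m : pks) (p : string) : tri =
      List.fold_left (fun acc s -> tri_and acc (m.label s p)) True m.states

    let () =
      let label s p = match s, p with
        | 0, "safe" -> True
        | 1, "safe" -> Unknown
        | _ -> False
      in
      let m = { states = [0; 1]; trans = [(0, 1); (1, 0)]; label = label } in
      match eval_all_states m "safe" with
      | True -> print_endline "safe in every abstracted system"
      | False -> print_endline "unsafe in every abstracted system"
      | Unknown -> print_endline "depends on how the unknowns are resolved"

On this toy model the answer is Unknown: state 1 leaves "safe" open, so some refinements of the partial specification satisfy the property and others do not.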
Kolmogorov Complexity in perspective. Part II: Classification, Information Processing and Duality
We survey diverse approaches to the notion of information: from Shannon entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov complexity are presented: randomness and classification. The survey is divided into two parts, published in the same volume. Part II is dedicated to the relation between logic and information systems, within the scope of Kolmogorov algorithmic information theory. We present a recent application of Kolmogorov complexity: classification using compression, an idea with provocative implementations by authors such as Bennett, Vitanyi and Cilibrasi. This stresses how Kolmogorov complexity, besides being a foundation of randomness, is also related to classification. Another approach to classification is also considered: the so-called "Google classification". It uses another original and attractive idea, which is connected to classification using compression and to Kolmogorov complexity from a conceptual point of view. We present and unify these different approaches to classification in terms of Bottom-Up versus Top-Down operational modes, whose fundamental principles and underlying duality we point out. We look at the way these two dual modes are used in different approaches to information systems, particularly the relational model for databases introduced by Codd in the 1970s. This allows us to point out diverse forms of a fundamental duality. These operational modes are also reinterpreted in the context of the comprehension schema of axiomatic set theory ZF. This leads us to develop how Kolmogorov complexity is linked to intensionality, abstraction, classification and information systems.
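For reference, classification using compression typically instantiates the normalized compression distance of Cilibrasi and Vitanyi, in which the incomputable Kolmogorov complexity C(x) is approximated by the compressed size of x under a real compressor such as gzip or bzip2; the survey's exact definitions may differ in detail:

    \[ \mathrm{NCD}(x, y) \;=\; \frac{C(xy) - \min\{C(x), C(y)\}}{\max\{C(x), C(y)\}} \]

The "Google classification" replaces compressed sizes with page counts: writing f(x) for the number of web pages containing the term x and N for the number of pages indexed, the normalized Google distance is

    \[ \mathrm{NGD}(x, y) \;=\; \frac{\max\{\log f(x), \log f(y)\} - \log f(x, y)}{\log N - \min\{\log f(x), \log f(y)\}} \]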
Canonical Algebraic Generators in Automata Learning
Many methods for the verification of complex computer systems require the existence of a tractable mathematical abstraction of the system, often in the form of an automaton. In reality, however, such a model is hard to come up with, especially manually. Automata learning is a technique that can automatically infer an automaton model from a system by observing its behaviour. The majority of automata learning algorithms are based on the so-called L* algorithm. The acceptor learned by L* has an important property: it is canonical, in the sense that it is, up to isomorphism, the unique deterministic finite automaton of minimal size accepting a given regular language. Establishing a similar result for other classes of acceptors, often with side-effects, is of great practical importance. Non-deterministic finite automata, for instance, can be exponentially more succinct than deterministic ones, allowing verification to scale. Unfortunately, identifying a canonical size-minimal non-deterministic acceptor of a given regular language is in general not possible: it can happen that a regular language is accepted by two non-isomorphic non-deterministic finite automata of minimal size. It is thus unclear which of the automata should be targeted by a learning algorithm. In this thesis, we further explore the issue and identify (sub-)classes of acceptors that admit canonical size-minimal representatives.
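As a rough illustration of why L* converges to the canonical minimal acceptor, consider its observation-table view: prefixes whose rows of membership answers coincide are merged into a single hypothesis state, mirroring the Myhill--Nerode equivalence. The OCaml sketch below shows only the closedness check that drives the table's growth, assuming a membership oracle; the names (member, row, closed) are illustrative, not taken from any particular L* implementation.

    (* The row of a prefix s records its behaviour on the current
       suffixes; prefixes with equal rows become one hypothesis state. *)
    let row (member : string -> bool) (suffixes : string list) (s : string) =
      List.map (fun e -> member (s ^ e)) suffixes

    (* The table is closed if every one-letter extension of a prefix
       already behaves like some existing prefix; if not, L* adds the
       offending extension as a new prefix (hypothesis state). *)
    let closed member prefixes suffixes alphabet =
      let rows = List.map (row member suffixes) prefixes in
      List.for_all
        (fun s ->
           List.for_all
             (fun a -> List.mem (row member suffixes (s ^ a)) rows)
             alphabet)
        prefixes

    let () =
      (* Toy target: strings over {a, b} with an even number of 'a's. *)
      let member w =
        let n = ref 0 in
        String.iter (fun c -> if c = 'a' then incr n) w;
        !n mod 2 = 0
      in
      Printf.printf "table closed: %b\n" (closed member [""; "a"] [""] ["a"; "b"])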
In more detail, the contributions of this thesis are three-fold.
First, we expand the automata (learning) theory of Guarded Kleene Algebra with Tests (GKAT), an efficiently decidable logic expressive enough to model simple imperative programs. In particular, we present GL*, an algorithm that learns the unique size-minimal GKAT automaton for a given deterministic language, and prove that GL* is more efficient than an existing variation of L*. We implement both algorithms in OCaml, and compare them on example programs.
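For orientation, the shape of the automata GL* targets can be sketched directly: in a GKAT automaton, each state reads an atom (a complete truth assignment to the primitive tests) and either accepts, rejects, or performs an action and moves to a successor state. The OCaml below is a hedged sketch of that shape following the published definition of GKAT automata; it is not the thesis's actual code.

    (* An atom fixes a truth value for each primitive test. *)
    type atom = bool list

    (* In each state, an atom either ends the run (accept/reject) or
       triggers an action and a move to a successor state. *)
    type 'state step =
      | Accept
      | Reject
      | Do of string * 'state   (* perform action, go to next state *)

    type 'state gkat_automaton = {
      start : 'state;
      delta : 'state -> atom -> 'state step;
    }

    (* Example: the loop "while t do p" as a one-state automaton over
       a single primitive test t. *)
    let while_t_do_p : int gkat_automaton = {
      start = 0;
      delta = (fun _state atom ->
        match atom with
        | [true] -> Do ("p", 0)   (* t holds: run p, then loop *)
        | _ -> Accept);           (* t fails: exit the loop *)
    }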
Second, we present a category-theoretical framework based on generators, bialgebras, and distributive laws, which identifies, for a wide class of automata with side-effects in a monad, canonical target models for automata learning. Apart from recovering examples from the literature, we discover a new canonical acceptor of regular languages, and present a unifying minimality result.
Finally, we show that the construction underlying our framework is an instance of a more general theory. First, we see that deriving a minimal bialgebra from a minimal coalgebra can be realized by applying a monad on a category of subobjects with respect to an epi-mono factorisation system. Second, we explore the abstract theory of generators and bases for algebras over a monad: we discuss bases for bialgebras, the product of bases, generalise the representation theory of linear maps, and compare our ideas to a coalgebra-based approach.