Towards an Axiomatization of Simple Analog Algorithms
We propose a formalization of analog algorithms, extending the framework of abstract state machines to continuous-time models of computation.
Semigroups with if-then-else and halting programs
The "if-then-else" construction is one of the most elementary programming commands, and its abstract laws have been widely studied, starting with McCarthy. Possibly the most obvious extension of this is to include the operation of composition of programs, which gives a semigroup of functions (total, partial, or possibly general binary relations) that can be recombined using if-then-else. We show that this particular extension admits no finite complete axiomatization and instead focus on the case where composition of functions with predicates is also allowed (and we argue there is good reason to take this approach). In the case of total functions, modeling halting programs, we give a complete axiomatization for the theory in terms of a finite system of equations. We obtain a similar result when an operation of equality test and/or fixed point test is included.
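The two operations the abstract combines can be illustrated concretely. The following is a minimal sketch (not the paper's formalism) of if-then-else and sequential composition acting on total functions, i.e. halting programs; the example functions are invented for illustration.

```python
def if_then_else(p, f, g):
    """Branch: apply f where predicate p holds, g elsewhere."""
    return lambda x: f(x) if p(x) else g(x)

def compose(f, g):
    """Sequential composition: run g first, then f."""
    return lambda x: f(g(x))

# Example: absolute value assembled from if-then-else over total functions.
is_negative = lambda x: x < 0
negate = lambda x: -x
identity = lambda x: x

abs_fn = if_then_else(is_negative, negate, identity)
```

Because the component functions are total, every combination built this way is again total, which is the setting in which the finite equational axiomatization applies.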
The Generic Model of Computation
Over the past two decades, Yuri Gurevich and his colleagues have formulated
axiomatic foundations for the notion of algorithm, be it classical,
interactive, or parallel, and formalized them in the new generic framework of
abstract state machines. This approach has recently been extended to suggest a
formalization of the notion of effective computation over arbitrary countable
domains. The central notions are summarized herein. (In Proceedings DCM 2011, arXiv:1207.682)
FO-Definability of Shrub-Depth
Shrub-depth is a graph invariant often considered as an extension of tree-depth to dense graphs. We show that the model-checking problem of monadic second-order logic on a class of graphs of bounded shrub-depth can be decided by AC^0-circuits after a precomputation on the formula. This generalizes a similar result on graphs of bounded tree-depth [Y. Chen and J. Flum, 2018]. At the core of our proof is the definability in first-order logic of tree-models for graphs of bounded shrub-depth.
Abstract State Machines 1988-1998: Commented ASM Bibliography
An annotated bibliography of papers which deal with or use Abstract State
Machines (ASMs), as of January 1998. (Also maintained as a BibTeX file at http://www.eecs.umich.edu/gasm)
Explication as a Three-Step Procedure: the case of the Church-Turing Thesis
In recent years two different axiomatic characterizations of the intuitive concept of effective calculability have been proposed, one by Sieg and the other by Dershowitz and Gurevich. Analyzing them from the perspective of Carnapian explication, I argue that these two characterizations explicate the intuitive notion of effective calculability in two different ways. I will trace back these two ways to Turing's and Kolmogorov's informal analyses of the intuitive notion of calculability and to their respective outputs: the notion of computorability and the notion of algorithmability. I will then argue that, in order to adequately capture the conceptual differences between these two notions, the classical two-step picture of explication is not enough. I will present a more fine-grained three-step version of Carnapian explication, showing how with its help the difference between these two notions can be better understood and explained.
Constructing and Extending Description Logic Ontologies using Methods of Formal Concept Analysis
Description Logic (abbrv. DL) belongs to the field of knowledge representation and reasoning. DL researchers have developed a large family of logic-based languages, so-called description logics (abbrv. DLs). These logics allow their users to explicitly represent knowledge as ontologies, which are finite sets of (human- and machine-readable) axioms, and provide them with automated inference services to derive implicit knowledge. The landscape of decidability and computational complexity of common reasoning tasks for various description logics has been explored in large parts: there is always a trade-off between expressibility and reasoning costs. It is therefore not surprising that DLs are nowadays applied in a large variety of domains: agriculture, astronomy, biology, defense, education, energy management, geography, geoscience, medicine, oceanography, and oil and gas. Furthermore, the most notable success of DLs is that these constitute the logical underpinning of the Web Ontology Language (abbrv. OWL) in the Semantic Web.
Formal Concept Analysis (abbrv. FCA) is a subfield of lattice theory that allows one to analyze data sets that can be represented as formal contexts. Put simply, such a formal context binds a set of objects to a set of attributes by specifying which objects have which attributes. There are two major techniques that can be applied in various ways for purposes of conceptual clustering, data mining, machine learning, knowledge management, knowledge visualization, etc. On the one hand, it is possible to describe the hierarchical structure of such a data set in form of a formal concept lattice. On the other hand, the theory of implications (dependencies between attributes) valid in a given formal context can be axiomatized in a sound and complete manner by the so-called canonical base, which furthermore contains a minimal number of implications w.r.t. the properties of soundness and completeness.
In spite of the different notions used in FCA and in DLs, there has been a very fruitful interaction between these two research areas. My thesis continues this line of research and, more specifically, I will describe how methods from FCA can be used to support the automatic construction and extension of DL ontologies from data.
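The basic FCA machinery the abstract refers to is small enough to sketch. Below is a toy formal context, its two derivation operators, and a check of whether an attribute implication holds; the concrete objects and attributes are invented for illustration and are not from the thesis.

```python
# A toy formal context: each object is bound to its set of attributes.
context = {
    "duck":    {"flies", "swims", "lays_eggs"},
    "ostrich": {"runs", "lays_eggs"},
    "frog":    {"swims", "lays_eggs"},
}

def common_attributes(objects):
    """Derivation operator: attributes shared by all given objects."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set()

def objects_having(attributes):
    """Derivation operator: objects possessing all given attributes."""
    return {o for o, attrs in context.items() if attributes <= attrs}

def implication_holds(premise, conclusion):
    """An implication A -> B is valid in the context iff every
    object that has all attributes in A also has all those in B."""
    return all(conclusion <= context[o] for o in objects_having(premise))
```

The canonical base mentioned in the abstract is a minimal sound and complete set of exactly such implications; computing it requires more machinery than this sketch shows.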
Picturing classical and quantum Bayesian inference
We introduce a graphical framework for Bayesian inference that is
sufficiently general to accommodate not just the standard case but also recent
proposals for a theory of quantum Bayesian inference wherein one considers
density operators rather than probability distributions as representative of
degrees of belief. The diagrammatic framework is stated in the graphical
language of symmetric monoidal categories and of compact structures and
Frobenius structures therein, in which Bayesian inversion boils down to
transposition with respect to an appropriate compact structure. We characterize
classical Bayesian inference in terms of a graphical property and demonstrate
that our approach eliminates some purely conventional elements that appear in
common representations thereof, such as whether degrees of belief are
represented by probabilities or entropic quantities. We also introduce a
quantum-like calculus wherein the Frobenius structure is noncommutative and
show that it can accommodate Leifer's calculus of `conditional density
operators'. The notion of conditional independence is also generalized to our
graphical setting and we make some preliminary connections to the theory of
Bayesian networks. Finally, we demonstrate how to construct a graphical
Bayesian calculus within any dagger compact category. (38 pages, lots of pictures.)
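In the classical finite-dimensional case, the Bayesian inversion that the diagrammatic framework expresses as transposition reduces to ordinary Bayes' rule. The following minimal sketch (with an invented rain/grass example, not from the paper) computes the inverse conditional distribution from a prior and a likelihood.

```python
prior = {"rain": 0.3, "dry": 0.7}          # P(x)
likelihood = {                              # P(y | x)
    "rain": {"wet_grass": 0.9, "dry_grass": 0.1},
    "dry":  {"wet_grass": 0.2, "dry_grass": 0.8},
}

def invert(prior, likelihood, y):
    """Classical Bayesian inversion: return the posterior P(x | y)."""
    joint = {x: prior[x] * likelihood[x][y] for x in prior}  # P(x, y)
    z = sum(joint.values())                                  # P(y)
    return {x: p / z for x, p in joint.items()}

posterior = invert(prior, likelihood, "wet_grass")
```

The graphical calculus abstracts exactly this operation so that, as the abstract notes, it applies uniformly whether states are probability distributions or density operators.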