14 research outputs found
Towards MKM in the Large: Modular Representation and Scalable Software Architecture
MKM has been defined as the quest for technologies to manage mathematical
knowledge. MKM "in the small" is well-studied, so the real problem is to scale
up to large, highly interconnected corpora: "MKM in the large". We contend that
advances in two areas are needed to reach this goal. We need representation
languages that support incremental processing of all primitive MKM operations,
and we need software architectures and implementations that implement these
operations scalably on large knowledge bases.
We present instances of both in this paper: the MMT framework for modular
theory-graphs that integrates meta-logical foundations, which forms the base of
the next OMDoc version; and TNTBase, a versioned storage system for XML-based
document formats. TNTBase becomes an MMT database by instantiating it with
special MKM operations for MMT.

Comment: To appear in The 9th International Conference on Mathematical Knowledge Management: MKM 2010.
From Topics to Narrative Documents: Management and Personalization of Topic Collections
The paper proposes a document planning approach that structures topic-oriented materials into user-specific, narrative documents. This is achieved by introducing narrative flows into topic collections and by identifying variant relations between topics. Technically, topic collections are modeled as graphs, where nodes correspond to topics and edges denote semantic dependencies, narrative flows, and variant relations between the topics. These graphs are traversed to produce narrative documents. To personalise the traversal, user contexts are taken into account that define the users' structure and content preferences. For illustration purposes, the approach has been applied to a collection of learning resources.
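The graph-based traversal described above can be sketched in a few lines. This is a hypothetical toy model, assuming invented topic names, the edge kinds named in the abstract (narrative flow, variant), and a user context reduced to a simple preference predicate; the paper's actual data model is richer.

```python
from collections import defaultdict

class TopicGraph:
    """Hypothetical topic collection: nodes are topics, edges are typed."""

    def __init__(self):
        # edges[kind][topic] -> successor topics; kinds mirror the paper:
        # "dependency", "narrative" (narrative flow), and "variant"
        self.edges = defaultdict(lambda: defaultdict(list))

    def add_edge(self, kind, src, dst):
        self.edges[kind][src].append(dst)

    def traverse(self, start, prefers):
        """Walk along narrative-flow edges, resolving variant topics
        with the user's preference predicate (the 'user context')."""
        document, seen, stack = [], set(), [start]
        while stack:
            topic = stack.pop()
            if topic in seen:
                continue
            seen.add(topic)
            # choose a preferred variant of this topic, if any
            variants = self.edges["variant"][topic]
            chosen = next((v for v in variants if prefers(v)), topic)
            document.append(chosen)
            # continue along the narrative flow of the underlying topic
            stack.extend(reversed(self.edges["narrative"][topic]))
        return document

g = TopicGraph()
g.add_edge("narrative", "intro", "recursion")
g.add_edge("narrative", "recursion", "trees")
g.add_edge("variant", "recursion", "recursion-for-beginners")
doc = g.traverse("intro", prefers=lambda t: "beginners" in t)
```

Because the variant relation is resolved per topic by the user's preference, two users with different contexts obtain different linearizations of the same collection.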
Kripke Semantics for Martin-L\"of's Extensional Type Theory
It is well-known that simple type theory is complete with respect to
non-standard set-valued models. Completeness for standard models only holds
with respect to certain extended classes of models, e.g., the class of
cartesian closed categories. Similarly, dependent type theory is complete for
locally cartesian closed categories. However, it is usually difficult to
establish the coherence of interpretations of dependent type theory, i.e., to
show that the interpretations of equal expressions are indeed equal. Several
classes of models have been used to remedy this problem. We contribute to this
investigation by giving a semantics that is standard, coherent, and
sufficiently general for completeness while remaining relatively easy to
compute with. Our models interpret types of Martin-L\"of's extensional
dependent type theory as sets indexed over posets or, equivalently, as
fibrations over posets. This semantics can be seen as a generalization to
dependent type theory of the interpretation of intuitionistic first-order logic
in Kripke models. This yields a simple coherent model theory, with respect to
which simple and dependent type theory are sound and complete.
A practical module system for LF
Module systems for proof assistants provide administrative support for large developments when mechanizing the meta-theory of programming languages and logics. In this paper we describe a module system for the logical framework LF. It is based on two main primitives: signatures and signature morphisms, which provide a semantically transparent module level and permit representing logic translations as homomorphisms. Modular LF is a conservative extension over LF, and defines an elaboration of modular into core LF signatures. We have implemented our design in the Twelf system and used it to modularize large parts of the Twelf example library.
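The two primitives named above can be illustrated with a toy model. The monoid and group signatures, the string-based type expressions, and the `check_morphism` helper are all invented for this sketch; Twelf's actual module system works over full LF signatures.

```python
# A signature, modeled naively as a mapping from declared symbols to
# type expressions (strings). Both example signatures are invented.
sig_monoid = {"elem": "type", "unit": "elem", "op": "elem -> elem -> elem"}
sig_group = {"elem": "type", "e": "elem", "mul": "elem -> elem -> elem",
             "inv": "elem -> elem"}

# A signature morphism sends every source symbol to a target symbol
# whose type matches after translating the source type along the map.
monoid_to_group = {"elem": "elem", "unit": "e", "op": "mul"}

def check_morphism(mor, src, tgt):
    """Check the typing condition of a signature morphism."""
    def translate(ty):
        # translate a source type expression symbol by symbol
        return " ".join(mor.get(tok, tok) for tok in ty.split())
    return all(tgt.get(mor[s]) == translate(ty) for s, ty in src.items())

ok = check_morphism(monoid_to_group, sig_monoid, sig_group)
```

The typing condition is what makes translation along a morphism semantically transparent: every well-typed source expression stays well-typed after renaming.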
Representing Model Theory in a Type-Theoretical Logical Framework
We give a comprehensive formal representation of first-order logic using the recently developed module system for the Twelf implementation of the Edinburgh Logical Framework LF. The module system places strong emphasis on signature morphisms as the main primitive concept, which makes it particularly useful for reasoning about structural translations, which occur frequently in proof and model theory. Syntax and proof theory are encoded in the usual way using LF's higher-order abstract syntax and judgments-as-types paradigm, but using the module system to treat all connectives and quantifiers independently. The difficulty is to reason about the model theory, for which the mathematical foundation in which the models are expressed must itself be encoded. We choose a variant of Martin-Löf's type theory as this foundation and use it to axiomatize first-order model-theoretic semantics. Then we can encode the soundness proof as a signature morphism from the proof theory to the model theory. We extend our results to models given in terms of set theory using an encoding of Zermelo-Fraenkel set theory in LF and giving a signature morphism from Martin-Löf type theory into it. These encodings can be checked mechanically by Twelf. Our results demonstrate the feasibility of comprehensively formalizing large-scale representation theorems and thus promise significant future applications.
The role of logical interpretations on program development
Stepwise refinement of algebraic specifications is a well-known formal methodology for program development. However, traditional notions of refinement based on signature morphisms are often too rigid to capture a number of relevant transformations in the context of software design, reuse, and adaptation. This paper proposes a new approach to refinement in which signature morphisms are replaced by logical interpretations as a means to witness refinements. The approach is first presented in the context of equational logic, and later generalised to deductive systems of arbitrary dimension. This allows, for example, refining sentential specifications into equational ones, and the latter into modal ones.

The authors express their gratitude to the anonymous referees, who raised a number of pertinent questions entailing a more precise characterisation of the paper's contributions and a clarification of their scope. This work was funded by HRDF - European Regional Development Fund through the COMPETE Programme (operational programme for competitiveness) and by National Funds through the FCT (Portuguese Foundation for Science and Technology) within project FCOMP-01-0124-FEDER-028923 (Nasoni) and the project PEst-C/MAT/UI4106/2011 with COMPETE number FCOMP-01-0124-FEDER-022690 (CIDMA-UA). The first author also acknowledges financial assistance from the projects GetFun, reference FP7-PEOPLE-2012-IRSES, and NOCIONES IDE COMPLETUD, reference FFI2009-09345 (MICINN - Spain). A. Madeira was supported by the FCT within the project NORTE-01-0124-FEDER-000060.
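The difference between the two kinds of witness can be sketched informally: a signature morphism must map each operation symbol to a single symbol, while a logical interpretation may map it to a whole derived term. The `nand` example and the nested-tuple term representation below are invented for illustration; they are not taken from the paper.

```python
def interpret(term, interp):
    """Homomorphically translate a nested-tuple term, replacing each
    operation symbol that the interpretation covers by a derived term."""
    if isinstance(term, str):          # a variable, left unchanged
        return term
    head, *args = term
    args = [interpret(a, interp) for a in args]
    return interp[head](*args) if head in interp else (head, *args)

# A logical interpretation: the source operation "nand" is witnessed by
# a derived term over the target signature {and, not}. A plain signature
# morphism could not express this, since it may only map "nand" to a
# single target symbol.
nand_to_and_not = {"nand": lambda a, b: ("not", ("and", a, b))}

t = interpret(("nand", "p", ("nand", "q", "q")), nand_to_and_not)
```

This extra flexibility is what lets interpretations witness refinements that signature morphisms cannot, such as the sentential-to-equational refinements mentioned in the abstract.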
Exploring the landscapes of "computing": digital, neuromorphic, unconventional -- and beyond
The acceleration race of digital computing technologies seems to be steering
toward impasses -- technological, economic, and environmental -- a condition
that has spurred research efforts in alternative, "neuromorphic" (brain-like)
computing technologies. Furthermore, for decades the idea of exploiting
nonlinear physical phenomena "directly" for non-digital computing has been
explored under names like "unconventional computing", "natural computing",
"physical computing", or "in-materio computing". This has taken place in
niches that are small compared to other sectors of computer science. In this
paper I stake out the grounds of how a general concept of "computing" can be
developed which comprises digital, neuromorphic, unconventional and possible
future "computing" paradigms. The main contribution of this paper is a
wide-scope survey of existing formal conceptualizations of "computing". The
survey inspects approaches rooted in three different kinds of background
mathematics: discrete-symbolic formalisms, probabilistic modeling, and
dynamical-systems oriented views. It turns out that different choices of
background mathematics lead to decisively different understandings of what
"computing" is. Across all of this diversity, a unifying coordinate system for
theorizing about "computing" can be distilled. Within these coordinates I
locate anchor points for a foundational formal theory of a future
computing-engineering discipline that includes, but will reach beyond, digital
and neuromorphic computing.

Comment: An extended and carefully revised version of this manuscript has now (March 2021) been published as "Toward a generalized theory comprising digital, neuromorphic, and unconventional computing" in the new open-access journal Neuromorphic Computing and Engineering.