117 research outputs found

    CoFI: The Common Framework Initiative for Algebraic Specification and Development

    An open collaborative effort has been initiated to design a common framework for algebraic specification and development of software. The rationale behind this initiative is that the lack of such a common framework greatly hinders the dissemination and application of research results in algebraic specification. In particular, the proliferation of specification languages, some differing in only quite minor ways from each other, is a considerable obstacle for the use of algebraic methods in industrial contexts, making it difficult to exploit standard examples, case studies and training material. A common framework with widespread acceptance throughout the research community is urgently needed. The aim is to base the common framework as much as possible on a critical selection of features that have already been explored in various contexts. The common framework will provide a family of specification languages at different levels: a central, reasonably expressive language, called CASL, for specifying (requirements, design, and architecture of) conventional software; restrictions of CASL to simpler languages, for use primarily in connection with prototyping and verification tools; and extensions of CASL, oriented towards particular programming paradigms, such as reactive systems and object-based systems. It should also be possible to embed many existing algebraic specification languages in members of the CASL family. A tentative design for CASL has already been proposed. Task groups are studying its formal semantics, tool support, methodology, and other aspects, in preparation for the finalization of the design.
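    None of the following code is from the paper; it is only meant to give a feel for what an algebraic specification is. Here the signature is transliterated into a Haskell type class, with the equational axioms, which CASL would state as first-order formulas, left as comments:

    ```haskell
    -- Illustrative sketch only: an algebraic signature rendered as a
    -- Haskell type class. In CASL this would be a `spec` with sorts,
    -- operations and first-order axioms; Haskell can only record the
    -- axioms as comments.
    class Monoid' m where
      unit :: m
      comb :: m -> m -> m
      -- axioms: comb unit x = x
      --         comb x unit = x
      --         comb x (comb y z) = comb (comb x y) z

    -- One model (algebra) satisfying the specification.
    instance Monoid' Int where
      unit = 0
      comb = (+)

    main :: IO ()
    main = print (comb (comb unit 3) 4 :: Int)  -- 7
    ```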

    Towards MKM in the Large: Modular Representation and Scalable Software Architecture

    MKM has been defined as the quest for technologies to manage mathematical knowledge. MKM "in the small" is well-studied, so the real problem is to scale up to large, highly interconnected corpora: "MKM in the large". We contend that advances in two areas are needed to reach this goal. We need representation languages that support incremental processing of all primitive MKM operations, and we need software architectures and implementations that implement these operations scalably on large knowledge bases. We present instances of both in this paper: the MMT framework for modular theory-graphs that integrates meta-logical foundations, which forms the base of the next OMDoc version; and TNTBase, a versioned storage system for XML-based document formats. TNTBase becomes an MMT database by instantiating it with special MKM operations for MMT. Comment: To appear in The 9th International Conference on Mathematical Knowledge Management (MKM 2010).
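    The precise MMT machinery is in the paper; as a rough intuition only (all names below are ours, not MMT's), a modular theory graph stores theories as nodes and named morphisms as edges, so knowledge is reached by translation along edges rather than by duplication:

    ```haskell
    import qualified Data.Map as M

    -- Toy theory graph, only to convey the shape of the data; MMT's
    -- real representation adds URIs, meta-theories, structures, etc.
    newtype Theory = Theory { constants :: [String] }

    data Morphism = Morphism
      { source, target :: String                -- node names
      , assignment     :: M.Map String String   -- constant-to-constant map
      }

    data TheoryGraph = TheoryGraph
      { nodes :: M.Map String Theory
      , edges :: [Morphism]
      }

    -- Translate a constant along an edge instead of copying the theory.
    translate :: Morphism -> String -> String
    translate m c = M.findWithDefault c c (assignment m)

    main :: IO ()
    main = putStrLn
      (translate (Morphism "Monoid" "Nat" (M.fromList [("op", "plus")])) "op")
    ```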

    ASP, amalgamation, and the conceptual blending workflow

    We present a framework for conceptual blending, a concept invention method that is advocated in cognitive science as a fundamental and uniquely human engine for creative thinking. Herein, we employ the search capabilities of ASP (answer set programming) to find commonalities among input concepts as part of the blending process, and we show how our approach fits within a generalised conceptual blending workflow. Specifically, we orchestrate ASP with imperative Python programming, to query external tools for theorem proving and colimit computation. We exemplify our approach with an example of creativity in mathematics. © Springer International Publishing Switzerland 2015. This work is supported by the COINVENT project, funded under the European Commission's 7th Framework Programme for Research (FET-Open grant number 611553). M. Eppe is supported by the German Academic Exchange Service.
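    The paper does this with ASP plus Python glue over full logical theories; purely to illustrate the "find commonalities" step, here is a naive stand-in in Haskell where a concept is just a set of atomic features (a drastic simplification of the paper's setting; the houseboat example is the classic one from the blending literature):

    ```haskell
    import qualified Data.Set as S

    -- Naive skeleton of generalisation in blending: the generic space
    -- is what the two input concepts share; the blend recombines the
    -- rest around that shared core. The paper searches for such
    -- generalisations with ASP over logical theories, not sets.
    type Concept = S.Set String

    genericSpace :: Concept -> Concept -> Concept
    genericSpace = S.intersection

    blend :: Concept -> Concept -> Concept
    blend = S.union

    main :: IO ()
    main = do
      let house = S.fromList ["resident", "door", "location:land"]
          boat  = S.fromList ["passenger", "door", "location:water"]
      print (genericSpace house boat)  -- shared core
      print (blend house boat)         -- naive "houseboat"
    ```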

    Implicit complexity for coinductive data: a characterization of corecurrence

    We propose a framework for reasoning about programs that manipulate coinductive data as well as inductive data. Our approach is based on using equational programs, which support a seamless combination of computation and reasoning, and on using productivity (fairness) as the fundamental assertion, rather than bisimulation; the latter is expressible in terms of the former. As an application of this framework, we give an implicit characterization of corecurrence: a function is definable using corecurrence iff its productivity is provable using coinduction for formulas in which data-predicates do not occur negatively. This is an analog, albeit in weaker form, of a characterization of recurrence (i.e. primitive recursion) in [Leivant, Unipolar induction, TCS 318, 2004]. Comment: In Proceedings DICE 2011, arXiv:1201.034
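    A concrete instance of the distinction, in Haskell rather than the paper's equational programs (the example is ours): streams are coinductive, and the corecurrence below is productive because each unfolding step emits one constructor before recursing, so every finite prefix is computable:

    ```haskell
    -- A coinductive stream type; `natsFrom` is defined by corecurrence.
    data Stream a = Cons a (Stream a)

    natsFrom :: Integer -> Stream Integer
    natsFrom n = Cons n (natsFrom (n + 1))

    -- Observing a finite prefix witnesses productivity.
    takeS :: Int -> Stream a -> [a]
    takeS 0 _           = []
    takeS k (Cons x xs) = x : takeS (k - 1) xs

    main :: IO ()
    main = print (takeS 5 (natsFrom 0))  -- [0,1,2,3,4]
    ```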

    Specifying with syntactic theory functors

    We propose a framework, syntactic theory functors (STFs), for creating syntactic structuring mechanisms for specification languages. Good support for common reuse patterns is important for systematically developing specifications for large systems. Though immaterial to the foundational theory, lack of such support causes lengthy boilerplate or repeated adaptation of specification text from one context to another. We present STFs in the context of the institution theory of Goguen and Burstall. This theory captures the essential structure of ontologies, modelling and formal specifications (OMS). In particular, it provides powerful structuring mechanisms that are independent of the specification formalism, i.e., they are institution-independent. The STF framework presented here is institution-independent as well, and as such encompasses many approaches to software and information systems. STFs subsume the standard institution-independent structuring mechanisms and open up new ways of reusing existing specifications and structuring new ones. In this, STFs subsume and enrich the tool-set of ‘good practices’, which includes separation of concerns, ease of reuse of specification text, and improved theorem-proving support. STFs are aimed at structuring and reuse beyond the classical mechanisms. However, most STFs are institution-specific and support specific reuse patterns in that institution. With such institution-specific STFs it is possible to incrementally grow more complex institutions from simpler ones, which is much needed when developing ontologies or specification languages for a new domain. In this paper, we motivate STFs with examples in Casl, the common standard algebraic specification language, and demonstrate how STFs can ease specification by capturing repeated constructions once and for all as patterns formulated as STFs.
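    A heavily simplified rendering in Haskell (our names, not the paper's machinery) shows the core idea: if a theory presentation is syntax, then an STF is just a function on presentations, and a reuse pattern is written once as such a function:

    ```haskell
    -- Toy rendering: a "theory" is a list of operation declarations plus
    -- axioms, both as strings. Real STFs act on institution-specific
    -- syntax, not strings.
    data Theory = Theory { ops :: [String], axioms :: [String] }
      deriving Show

    type STF = Theory -> Theory

    -- A reuse pattern captured once and for all: extend any theory with
    -- an order relation and its standard axioms (the boilerplate lives
    -- only here, not in every specification that needs an order).
    withOrder :: STF
    withOrder (Theory os axs) =
      Theory (os ++ ["leq : s * s -> Bool"])
             (axs ++ ["reflexive(leq)", "transitive(leq)", "antisymmetric(leq)"])

    main :: IO ()
    main = print (withOrder (Theory ["zero : s", "suc : s -> s"] []))
    ```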

    Architectural Refinement in HETS

    The main objective of this work is to bring a number of improvements to the Heterogeneous Tool Set HETS, from both a theoretical and an implementation point of view. In the first part of the thesis we present a number of recent extensions of the tool, among which are declarative specifications of logics, generalized theoroidal comorphisms, heterogeneous colimits, and the integration of the logic of the term rewriting system Maude. In the second part we concentrate on the CASL architectural refinement language, which we equip with a notion of refinement tree and with calculi for checking correctness and consistency of refinements. Soundness and completeness of these calculi are also investigated. Finally, we present the integration of the VSE refinement method in HETS as an institution comorphism; the proof management component of HETS thus remains unmodified.
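    As a hedged sketch of what a refinement tree might look like as data (our encoding, not the one implemented in HETS):

    ```haskell
    -- Our toy encoding of a refinement tree: a requirement specification
    -- is taken as given, refined into a more concrete spec, or decomposed
    -- by an architectural specification into named components.
    data RefTree
      = Given     String                      -- unit assumed correct
      | Refine    String RefTree              -- one refinement step
      | Decompose String [(String, RefTree)]  -- architectural branching
      deriving Show

    -- A correctness calculus walks the tree, emitting one proof
    -- obligation per refinement step; HETS would hand these to provers.
    obligations :: RefTree -> [String]
    obligations (Given _)        = []
    obligations (Refine s t)     = ("refinement into " ++ s) : obligations t
    obligations (Decompose _ cs) = concatMap (obligations . snd) cs

    main :: IO ()
    main = mapM_ putStrLn
      (obligations (Refine "NatBin" (Decompose "Arch" [("Adder", Given "AdderImpl")])))
    ```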

    Global semantic typing for inductive and coinductive computing

    Inductive and coinductive types are commonly construed as ontological (Church-style) types, denoting canonical data-sets such as natural numbers, lists, and streams. For various purposes, notably the study of programs in the context of global semantics, it is preferable to think of types as semantic properties (Curry-style). Intrinsic theories were introduced in the late 1990s to provide a purely logical framework for reasoning about programs and their semantic types. We extend them here to data given by any combination of inductive and coinductive definitions. This approach is of interest because it fits tightly with the syntactic, semantic, and proof-theoretic fundamentals of formal logic, with potential applications in implicit computational complexity as well as extraction of programs from proofs. We prove a Canonicity Theorem, showing that the global definition of program typing, via the usual (Tarskian) semantics of first-order logic, agrees with the programs' operational semantics in the intended model. Finally, we show that every intrinsic theory is interpretable in a conservative extension of first-order arithmetic. This means that quantification over infinite data objects does not lead, on its own, to proof-theoretic strength beyond that of Peano Arithmetic. Intrinsic theories are perfectly amenable to formulas-as-types Curry-Howard morphisms, and were used to characterize major computational complexity classes. Their extensions described here have similar potential, which has already been applied.
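    For a small concrete instance of data given by combining the two kinds of definitions (ours, in Haskell, not the paper's logical calculus): an inductive type used inside a coinductive one, with one function defined by recurrence and another by corecurrence:

    ```haskell
    data Nat      = Z | S Nat            -- inductive data
    data Stream a = Cons a (Stream a)    -- coinductive data

    -- Defined by recurrence on the first (inductive) argument.
    plus :: Nat -> Nat -> Nat
    plus Z     n = n
    plus (S m) n = S (plus m n)

    -- Defined by corecurrence: productive, one Cons per step.
    zipPlus :: Stream Nat -> Stream Nat -> Stream Nat
    zipPlus (Cons m ms) (Cons n ns) = Cons (plus m n) (zipPlus ms ns)

    toInt :: Nat -> Int
    toInt Z     = 0
    toInt (S n) = 1 + toInt n

    ones :: Stream Nat
    ones = Cons (S Z) ones

    headS :: Stream a -> a
    headS (Cons x _) = x

    main :: IO ()
    main = print (toInt (headS (zipPlus ones ones)))  -- 2
    ```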

    Interfacing concepts: Why declaration style shouldn't matter

    A concept (or signature) describes the interface of a set of abstract types by listing the operations that should be supported for those types. When implementing a generic operation, such as sorting, we may then specify requirements such as “elements must be comparable” by requiring that the element type models the Comparable concept. We may also use axioms to describe behaviour that should be common to all models of a concept. However, the operations specified by the concept are not always the ones that are best suited for the implementation. For example, numbers and matrices may both be addable, but adding two numbers is conveniently done by using a return value, whereas adding a sparse and a dense matrix is probably best achieved by modifying the dense matrix. In both cases, though, we may want to pretend we are using a simple function with a return value, as this most closely matches the notation we know from mathematics. This paper presents two simple mechanisms to break the notational tie between the implementation and use of an operation: functionalisation, which derives a set of canonical pure functions from a procedure; and mutification, which translates calls using the functionalised declarations into calls to the implemented procedure.
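    A loose Haskell impression of both points (the paper's setting is a C++-style concept mechanism; all names here are ours): a concept constraint on a generic sort, and a mutating procedure next to its functionalised, return-value counterpart:

    ```haskell
    import Data.IORef
    import Data.List (sort)

    -- A generic operation constrained by a concept: elements must be
    -- comparable. Haskell's Ord class stands in for `Comparable`.
    sortElems :: Ord a => [a] -> [a]
    sortElems = sort

    -- An operation implemented as a mutating procedure...
    addInto :: Num a => IORef a -> a -> IO ()
    addInto ref x = modifyIORef' ref (+ x)

    -- ...and its "functionalised" counterpart: a derived pure function
    -- with a return value, matching mathematical notation. Mutification
    -- is the reverse translation, from calls of `addFun` to `addInto`.
    addFun :: Num a => a -> a -> a
    addFun x y = x + y

    main :: IO ()
    main = do
      print (sortElems [3, 1, 2 :: Int])
      r <- newIORef (1 :: Int)
      addInto r 2
      readIORef r >>= print        -- 3, same result as addFun 1 2
      print (addFun 1 (2 :: Int))
    ```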

    A Formal and Tool-Equipped Approach for the Integration of State Diagrams and Formal Datatypes

    Separation of concerns, or aspects, is a way to deal with the increasing complexity of systems. The separate design of models for different aspects also promotes a better level of reusability. However, an important issue is then to define means to integrate them into a global model. We present a formal and tool-equipped approach for the integration of dynamic models (behaviours expressed using state diagrams) and static models (formal data types), with the benefit of sharing the advantages of both: graphical, user-friendly models for behaviours, and formal, abstract models for data types. Integration is achieved in a generic way, so that it can deal both with different static specification languages (algebraic specifications, Z, B) and with different dynamic specification semantics.
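    A minimal illustration of the integration idea (our own encoding, not the paper's translation scheme): the dynamic part is an ordinary transition function over named states, while the static datatype sits behind an abstract interface that could equally be specified algebraically, or in Z or B:

    ```haskell
    -- Dynamic model: states and events of a tiny coin-operated machine.
    data State = Idle | Paid deriving (Eq, Show)
    data Event = InsertCoin | PressButton deriving Show

    -- Static model: the machine's memory, kept abstract behind a class
    -- so it could be refined by any static specification language.
    class Till t where
      empty   :: t
      deposit :: t -> t
      credit  :: t -> Int

    newtype Coins = Coins Int
    instance Till Coins where
      empty             = Coins 0
      deposit (Coins n) = Coins (n + 1)
      credit  (Coins n) = n

    -- Integration: transitions update, and could be guarded by, the
    -- abstract data component.
    step :: Till t => (State, t) -> Event -> (State, t)
    step (Idle, t) InsertCoin  = (Paid, deposit t)
    step (Paid, t) PressButton = (Idle, t)
    step s _                   = s

    main :: IO ()
    main = do
      let (s1, c1) = step (Idle, empty :: Coins) InsertCoin
      print (s1, credit c1)  -- (Paid,1)
    ```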