
    Do we need dynamic semantics?

    I suspect the answer to the question in the title of this paper is no. But the scope of my paper will be considerably more limited: I will be concerned with whether certain types of considerations that are commonly cited in favor of dynamic semantics do in fact push us towards a dynamic semantics. Ultimately, I will argue that the evidence points to a dynamics of discourse that is best treated pragmatically, rather than as part of the semantics.

    Hurford Conditionals

    Compare the following conditionals: 'If John is not in Paris, he is in France' versus 'If John is in France, he is not in Paris.' The second sounds entirely natural, whereas the first sounds quite strange. This contrast is puzzling, because these two conditionals have the same structure at a certain level of logical abstraction, namely 'If ¬p+, then p.' We argue that existing theories of informational oddness do not distinguish between these conditionals. We do not have an account of the divergence in judgments about the two, but we think this is a fascinating puzzle, which we pose here in the hope that others will be able to solve it.
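    One way to make the schema concrete (the gloss of p+ as a proposition that asymmetrically entails p is supplied here for illustration, not quoted from the paper):

        p^{+} := \text{John is in Paris}, \qquad p := \text{John is in France}, \qquad p^{+} \models p

        (1)\ \neg p^{+} \rightarrow p \quad \text{(`If John is not in Paris, he is in France')}
        (2)\ p \rightarrow \neg p^{+} \quad \text{(`If John is in France, he is not in Paris')}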

    Information, Processes and Games

    We survey the prospects for an Information Dynamics which can serve as the basis for a fundamental theory of information, incorporating qualitative and structural as well as quantitative aspects. We motivate our discussion with some basic conceptual puzzles: how can information increase in computation, and what is it that we are actually computing in general? We then survey a number of the theories which have been developed within Computer Science as partial exemplifications of the kind of fundamental theory we seek, including Domain Theory, Dynamic Logic, and Process Algebra. We look at recent work showing new ways of combining qualitative and quantitative theories of information, as embodied respectively by Domain Theory and Shannon Information Theory. Then we look at Game Semantics and Geometry of Interaction, as examples of dynamic models of logic and computation in which information flow and interaction are made central and explicit. We conclude by looking briefly at some key issues for future progress. Comment: Appeared in Philosophy of Information, vol. 8 of Handbook of the Philosophy of Science, edited by Dov Gabbay and John Woods. arXiv admin note: substantial text overlap with arXiv:quant-ph/0312044 by other authors.
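    As a toy illustration of the qualitative, domain-theoretic notion of information mentioned above (the Haskell sketch below is ours, not the paper's), partial values can be ordered by how defined they are, and a computation step may only move up this order; this is one sense in which information "increases" during computation:

        -- Illustrative sketch (not from the paper): partial values ordered by
        -- definedness, the basic information order of Domain Theory.
        data Partial a = Bottom | Defined a deriving (Eq, Show)

        -- The information order: Bottom approximates everything; a defined
        -- value approximates only itself.
        approx :: Eq a => Partial a -> Partial a -> Bool
        approx Bottom      _           = True
        approx (Defined x) (Defined y) = x == y
        approx _           _           = False

        -- A computation step may refine Bottom into a defined value but never
        -- retracts information, so successive states form an increasing chain.
        step :: Partial Int -> Partial Int
        step Bottom = Defined 42
        step d      = d

        main :: IO ()
        main = print (approx Bottom (step Bottom), approx (step Bottom) Bottom)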

    Unifying Semantic Foundations for Automated Verification Tools in Isabelle/UTP

    The growing complexity and diversity of models used in the engineering of dependable systems imply that a variety of formal methods, across differing abstractions, paradigms, and presentations, must be integrated. Such an integration relies on unified semantic foundations for the various notations, and on co-ordination of a variety of automated verification tools. The contribution of this paper is Isabelle/UTP, an implementation of Hoare and He's Unifying Theories of Programming, a framework for unification of formal semantics. Isabelle/UTP permits the mechanisation of computational theories for diverse paradigms, and their use in constructing formalised semantic models. These can be further applied in the development of verification tools, harnessing Isabelle's proof automation facilities. Several layers of mathematical foundations are developed, including lenses to model variables and state spaces as algebraic objects, alphabetised predicates and relations to model programs, algebraic and axiomatic semantics, proof tools for Hoare logic and the refinement calculus, and UTP theories to encode computational paradigms. Comment: 40 pages. Accepted for Science of Computer Programming, June 202
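    The lens construction mentioned above can be pictured as a pair of functions between a state space and a view, subject to well-behavedness laws. The following is a minimal Haskell sketch of that idea (names and laws chosen here for illustration; they are not the Isabelle/UTP definitions):

        -- Minimal sketch of a lens: a program variable viewed as a way to read
        -- from and write into a larger state space.
        data Lens s v = Lens { lget :: s -> v, lput :: s -> v -> s }

        -- A concrete state space with two program variables.
        data St = St { x :: Int, y :: Bool } deriving (Eq, Show)

        -- The lens focusing on the variable x.
        xL :: Lens St Int
        xL = Lens x (\s v -> s { x = v })

        -- Two of the usual well-behavedness laws, checked pointwise.
        putGet :: Eq v => Lens s v -> s -> v -> Bool
        putGet l s v = lget l (lput l s v) == v

        getPut :: Eq s => Lens s v -> s -> Bool
        getPut l s = lput l s (lget l s) == s

        main :: IO ()
        main = print (putGet xL (St 0 True) 42, getPut xL (St 0 True))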

    Modality and expressibility

    When embedding data are used to argue against semantic theory A and in favor of semantic theory B, it is important to ask whether A could make sense of those data. It is possible to ask that question on a case-by-case basis. But suppose we could show that A can make sense of all the embedding data which B can possibly make sense of. This would, on the one hand, undermine arguments in favor of B over A on the basis of embedding data. And, provided that the converse does not hold—that is, that A can make sense of strictly more embedding data than B can—it would also show that there is a precise sense in which B is more constrained than A, yielding a pro tanto simplicity-based consideration in favor of B. In this paper I develop tools which allow us to make comparisons of this kind, which I call comparisons of potential expressive power. I motivate the development of these tools by way of exploration of the recent debate about epistemic modals. Prominent theories which have been developed in response to embedding data turn out to be strictly less expressive than the standard relational theory, a fact which necessitates a reorientation in how to think about the choice between these theories.

    Using Methods of Declarative Logic Programming for Intelligent Information Agents

    The search for information on the web is faced with several problems, which arise on the one hand from the vast number of available sources, and on the other hand from their heterogeneity. A promising approach is the use of multi-agent systems of information agents, which cooperatively solve advanced information-retrieval problems. This requires capabilities to address complex tasks, such as search and assessment of sources, query planning, information merging and fusion, dealing with incomplete information, and handling of inconsistency. In this paper, our interest is in the role which some methods from the field of declarative logic programming can play in the realization of reasoning capabilities for information agents. In particular, we are interested in how they can be used and further developed for the specific needs of this application domain. We review some existing systems and current projects, which address information-integration problems. We then focus on declarative knowledge-representation methods, and review and evaluate approaches from logic programming and nonmonotonic reasoning for information agents. We discuss advantages and drawbacks, and point out possible extensions and open issues. Comment: 66 pages, 1 figure, to be published in "Theory and Practice of Logic Programming".
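    As one concrete illustration of the "information merging and fusion" and "handling of inconsistency" tasks listed above (our own toy Haskell sketch, not drawn from the paper), facts gathered from heterogeneous sources can be merged while conflicting reports are set aside and flagged:

        -- Toy sketch: merge facts reported by several sources; a fact asserted
        -- both true and false is flagged as inconsistent rather than merged.
        import qualified Data.Map as M

        type Fact = (String, Bool)

        merge :: [[Fact]] -> (M.Map String Bool, [String])
        merge sources = (M.fromList consistent, inconsistent)
          where
            grouped      = M.fromListWith (++) [ (k, [v]) | src <- sources, (k, v) <- src ]
            consistent   = [ (k, head vs) | (k, vs) <- M.toList grouped, all (== head vs) vs ]
            inconsistent = [ k            | (k, vs) <- M.toList grouped, any (/= head vs) vs ]

        main :: IO ()
        main = print (merge [ [("delayed", True)], [("delayed", False), ("gate_open", True)] ])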

    Modular Action Language ALM

    The paper introduces a new modular action language, ALM, and illustrates the methodology of its use. It is based on the approach of Gelfond and Lifschitz (1993; 1998) in which a high-level action language is used as a front end for a logic programming system description. The resulting logic programming representation is used to perform various computational tasks. The methodology based on existing action languages works well for small and even medium-sized systems, but is not meant to deal with larger systems that require structuring of knowledge. ALM is meant to remedy this problem. Structuring of knowledge in ALM is supported by the concepts of module (a formal description of a specific piece of knowledge packaged as a unit), module hierarchy, and library, and by the division of a system description of ALM into two parts: theory and structure. A theory consists of one or more modules with a common theme, possibly organized into a module hierarchy based on a dependency relation. It contains declarations of sorts, attributes, and properties of the domain together with axioms describing them. Structures are used to describe the domain's objects. These features, together with the means for defining classes of a domain as special cases of previously defined ones, facilitate the stepwise development, testing, and readability of a knowledge base, as well as the creation of knowledge representation libraries. Comment: 65 pages, 7 figures. To appear in Theory and Practice of Logic Programming (TPLP).

    All-Path Reachability Logic

    This paper presents a language-independent proof system for reachability properties of programs written in non-deterministic (e.g., concurrent) languages, referred to as all-path reachability logic. It derives partial-correctness properties with all-path semantics (a state satisfying a given precondition reaches states satisfying a given postcondition on all terminating execution paths). The proof system takes as axioms any unconditional operational semantics, and is sound (partially correct) and (relatively) complete, independent of the object language. The soundness has also been mechanized in Coq. This approach is implemented in a tool for semantics-based verification as part of the K framework (http://kframework.org).
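    Schematically, and with notation assumed here rather than quoted from the paper, an all-path reachability rule \varphi \Rightarrow^{\forall} \varphi' expresses exactly the reading above:

        \varphi \Rightarrow^{\forall} \varphi' \quad \text{iff every terminating execution path from a state satisfying } \varphi \text{ reaches a state satisfying } \varphi'.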

    Generic unified modelling process for developing semantically rich, dynamic and temporal models

    Models play a vital role in supporting a range of activities in numerous domains. We rely on models to support the design, visualisation, analysis and representation of parts of the world around us, and as such significant research effort has been invested into numerous areas of modelling, including support for model semantics, dynamic states and behaviour, and temporal data storage and visualisation. Whilst these efforts have increased our capabilities and allowed us to create increasingly powerful software-based models, the process of developing models, supporting tools and/or data structures remains difficult, expensive and error-prone. In this paper we define, from the literature, the key factors in assessing a model’s quality and usefulness: semantic richness, support for dynamic states and object behaviour, and temporal data storage and visualisation. We also identify a number of shortcomings in both existing modelling standards and model development processes, and propose a unified generic process to guide users through the development of semantically rich, dynamic and temporal models.

    Construction of a Phaffia rhodozyma strain with enhanced astaxanthin synthesis through targeted genetic modification of chemically mutagenized strains

    The aim of this work was, for the first time, to construct a Phaffia strain by combining chemical mutagenesis with targeted genetic modification (here: "metabolic engineering"), such that its astaxanthin synthesis is enhanced beyond what mutagenesis alone achieved. The chemical mutants provided by "DSM Nutritional Products" were analysed and optimised for pigment stability and growth through a selection process, since the strains from cryopreserved stock cultures showed strong pigment instability and delayed growth. In an exploratory phase, carotenoid synthesis was analysed, and it was found that the up-regulation of carotenoid synthesis in the mutants cannot be attributed to individual affected reactions. In the process, limitations were identified and removed by transformation with expression plasmids carrying suitable genes, in order to achieve an even more efficient conversion of astaxanthin precursors into astaxanthin. Overexpression of the phytoene synthase/lycopene cyclase crtYB resulted in an increased carotenoid content with an unchanged astaxanthin proportion. A second transformation, with an expression cassette for the astaxanthin synthase asy, further increased the carotenoid content and additionally removed a limitation in the metabolisation of astaxanthin precursors, so that the transformant could convert nearly all intermediates of astaxanthin synthesis into astaxanthin (Gassel et al. 2013). It could be shown that limitations known from experiments with the wild type could also be identified and compensated for in the mutants.