
    Foundational certification of data-flow analyses

    Data-flow analyses, such as live variables analysis and available expressions analysis, are usefully specifiable as type systems. These are sound and, in the case of distributive analysis frameworks, complete with respect to appropriate natural semantics on abstract properties. Applications include certification of analyses and "optimization" of functional correctness proofs alongside programs. On the example of live variables analysis, we show that analysis type systems are applied versions of more foundational Hoare logics describing either the same abstract property semantics as the type system (liveness states) or a more concrete natural semantics on transition traces of a suitable kind (future defs and uses). The rules of the type system are derivable in the Hoare logic for the abstract property semantics, and those in turn in the Hoare logic for the transition trace semantics. This reduction of the burden of trusting the certification vehicle can be compared to foundational proof-carrying code, where general-purpose program logics are preferred to special-purpose type systems, and universal logic to program logics. We also look at conditional liveness analysis to see that the same foundational development is also possible for conditional data-flow analyses, proceeding from type systems for combined "standard state and abstract property" semantics. Fundação para a Ciência e a Tecnologia (FCT).
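
    As a hedged illustration of the classical data-flow formulation that the type-systematic view builds on, here is a minimal live variables analysis in Haskell, computed as a backward fixpoint over a program graph (the Node/liveIn names are illustrative, not from the paper):

        import Data.Set (Set)
        import qualified Data.Set as Set

        type Var = String

        -- One program point: variables read, variables written, successors.
        data Node = Node { uses :: Set Var, defs :: Set Var, succs :: [Int] }

        -- Backward data-flow equations:
        --   liveOut(n) = union over successors s of liveIn(s)
        --   liveIn(n)  = uses(n) `union` (liveOut(n) minus defs(n))
        liveIn :: [Node] -> [Set Var]
        liveIn nodes = fixpoint (map (const Set.empty) nodes)
          where
            step ins =
              [ uses n `Set.union`
                  (Set.unions [ ins !! s | s <- succs n ]
                     `Set.difference` defs n)
              | n <- nodes ]
            fixpoint ins = let ins' = step ins
                           in if ins' == ins then ins else fixpoint ins'

        -- Example:  0: x := 1;  1: y := x;  2: return y
        demo :: [Set Var]
        demo = liveIn
          [ Node Set.empty            (Set.fromList ["x"]) [1]
          , Node (Set.fromList ["x"]) (Set.fromList ["y"]) [2]
          , Node (Set.fromList ["y"]) Set.empty            []
          ]
        -- => [fromList [],fromList ["x"],fromList ["y"]]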

    What Are Polymorphically-Typed Ambients?

    The Ambient Calculus was developed by Cardelli and Gordon as a formal framework to study issues of mobility and migrant code. We consider an Ambient Calculus where ambients transport and exchange programs rather than just inert data. We propose different senses in which such a calculus can be said to be polymorphically typed, and design accordingly a polymorphic type system for it. Our type system assigns types to embedded programs and what we call behaviors to processes; a denotational semantics of behaviors is then proposed, here called trace semantics, underlying much of the remaining analysis. We state and prove a Subject Reduction property for our polymorphically typed calculus. Based on techniques borrowed from finite automata theory, type-checking of fully type-annotated processes is shown to be decidable; the time complexity of our decision procedure is exponential (a theoretical worst case, arguably not encountered in practice). Our polymorphically-typed calculus is a conservative extension of the typed Ambient Calculus originally proposed by Cardelli and Gordon.
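
    For orientation, a minimal sketch of the (untyped) mobility core of the Ambient Calculus as a Haskell datatype; the constructor names are illustrative, and the communication primitives by which ambients exchange programs are omitted:

        type Name = String

        data Capability
          = In Name     -- enter a sibling ambient named n
          | Out Name    -- exit the enclosing ambient named n
          | Open Name   -- dissolve an ambient named n
          deriving Show

        data Process
          = Nil                     -- inactive process
          | Par Process Process     -- P | Q: parallel composition
          | Amb Name Process        -- n[P]: ambient n with contents P
          | Act Capability Process  -- M.P: exercise capability M, then P
          | Repl Process            -- !P: replication
          deriving Show

        -- Example: a[ in b. 0 ] | b[ 0 ], where ambient a moves into b.
        example :: Process
        example = Par (Amb "a" (Act (In "b") Nil)) (Amb "b" Nil)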

    Intensional and Extensional Semantics of Bounded and Unbounded Nondeterminism

    We give extensional and intensional characterizations of nondeterministic functional programs: as structure-preserving functions between biorders, and as nondeterministic sequential algorithms on ordered concrete data structures which compute them. A fundamental result establishes that the extensional and intensional representations of nondeterministic programs are equivalent, by showing how to construct a unique sequential algorithm which computes a given monotone and stable function, and describing the conditions on sequential algorithms which correspond to continuity with respect to each order. We illustrate by defining may- and must-testing denotational semantics for a sequential functional language with bounded and unbounded choice operators. We prove that these are computationally adequate, despite the non-continuity of the must-testing semantics of unbounded nondeterminism. In the bounded case, we prove that our continuous models are fully abstract with respect to may- and must-testing by identifying a simple universal type, which may also form the basis for models of the untyped lambda-calculus. In the unbounded case we observe that our model contains computable functions which are not denoted by terms, by identifying a further "weak continuity" property of the definable elements, and we use this to establish that it is not fully abstract.
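
    A minimal sketch of may- and must-testing for bounded choice, with a finite computation modelled simply as the list of its possible outcomes (all names are illustrative); the unbounded case, where must-testing loses continuity, would need infinite outcome sets and is not captured here:

        type ND a = [a]  -- possible results of a boundedly nondeterministic program

        choose :: ND a -> ND a -> ND a  -- binary nondeterministic choice
        choose = (++)

        -- A computation may pass a test if some outcome passes,
        -- and must pass it if every outcome does.
        may, must :: (a -> Bool) -> ND a -> Bool
        may  ok = any ok
        must ok = all ok

        -- Example: a program returning 0 or 1, tested for evenness.
        demo :: (Bool, Bool)
        demo = (may even prog, must even prog)  -- (True, False)
          where prog = choose [0] [1 :: Int]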

    A Logical Foundation for Environment Classifiers

    Taha and Nielsen have developed a multi-stage calculus λα with a sound type system using the notion of environment classifiers. These are special identifiers, with which code fragments and variable declarations are annotated, and their scoping mechanism is used to ensure statically that certain code fragments are closed and safely runnable. In this paper, we investigate the Curry-Howard isomorphism for environment classifiers by developing a typed λ-calculus λ▷. It corresponds to a multi-modal logic that allows quantification by transition variables---a counterpart of classifiers---which range over (possibly empty) sequences of labeled transitions between possible worlds. This interpretation reduces the "run" construct---which has a special typing rule in λα---and the embedding of closed code into code fragments of different stages---which would otherwise be realized only by the cross-stage persistence operator in λα---to merely special cases of classifier application. λ▷ enjoys not only basic properties, including subject reduction, confluence, and strong normalization, but also an important property of a multi-stage calculus: time-ordered normalization of full reduction. We then develop a big-step evaluation semantics for an ML-like language based on λ▷ with its type system and prove that the evaluation of a well-typed λ▷ program is properly staged. We also identify a fragment of the language where erasure evaluation is possible. Finally, we show that the proof system augmented with a classical axiom is sound and complete with respect to a Kripke semantics of the logic.
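
    The flavour of staged code that classifiers police can be sketched with GHC's Typed Template Haskell (a hedged analogy: this is MetaOCaml/Template-Haskell-style staging, not λ▷ itself, and the typed-quote API varies across GHC versions):

        {-# LANGUAGE TemplateHaskell #-}
        module Power where

        import Language.Haskell.TH  -- Q and TExp: typed code fragments

        -- Unroll x^n at code-generation time: [|| ... ||] quotes a typed
        -- code fragment, $$( ... ) splices one into a larger fragment.
        power :: Int -> Q (TExp Int) -> Q (TExp Int)
        power 0 _ = [|| 1 ||]
        power n x = [|| $$(x) * $$(power (n - 1) x) ||]

        -- In a separate module, the generator is run by splicing it:
        --   power5 :: Int -> Int
        --   power5 x = $$(power 5 [|| x ||])
        -- which yields code equivalent to x * (x * (x * (x * (x * 1)))).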

    Finding The Lazy Programmer's Bugs

    Traditionally, developers and testers created huge numbers of explicit tests, enumerating interesting cases, perhaps biased by what they believed to be the current boundary conditions of the function being tested. Or at least, they were supposed to. A major step forward was the development of property testing. Property testing requires the user to write a few functional properties that are used to generate tests, and requires an external library or tool to create test data for the tests. As such, many thousands of tests can be created for a single property. For the purely functional programming language Haskell there are several such libraries: for example QuickCheck [CH00], SmallCheck and Lazy SmallCheck [RNL08]. Unfortunately, property testing still requires the user to write explicit tests. Fortunately, we note there are already many implicit tests present in programs. Developers may throw assertion errors, or the compiler may silently insert runtime exceptions for incomplete pattern matches. We attempt to automate the testing process using these implicit tests. Our contributions are in four main areas: (1) we have developed algorithms to automatically infer appropriate constructors and functions needed to generate test data, without requiring additional programmer work or annotations; (2) to combine the constructors and functions into test expressions, we take advantage of Haskell's lazy evaluation semantics by applying the techniques of needed narrowing and lazy instantiation to guide generation; (3) we keep the type of test data at its most general, in order to avoid committing too early to monomorphic types that cause needless wasted tests; (4) we have developed novel ways of creating Haskell case expressions to inspect elements inside returned data structures, in order to discover exceptions that may be hidden by laziness, and to make our test data generation algorithm more expressive. To validate our claims, we have implemented these techniques in Irulan, a fully automatic tool for generating systematic black-box unit tests for Haskell library code. We have designed Irulan to generate high-coverage test suites and detect common programming errors in the process.
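
    A small example of the explicit-versus-implicit contrast drawn above, in the style of QuickCheck [CH00] (the property and function names are illustrative):

        import Test.QuickCheck

        -- An explicit property test: reversing a list twice is the identity.
        prop_reverseTwice :: [Int] -> Bool
        prop_reverseTwice xs = reverse (reverse xs) == xs

        -- An implicit test of the kind Irulan exploits: the incomplete
        -- pattern match raises a runtime exception on [], with no
        -- property written at all.
        headOrBust :: [Int] -> Int
        headOrBust (x:_) = x

        main :: IO ()
        main = quickCheck prop_reverseTwice  -- runs many generated cases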

    A counterfactual analysis of adnominal modifiers

    In this article, I argue for a counterfactual analysis of the semantics of some adnominal modifiers. This analysis formalizes the intuition that adnominal modifiers are always restrictive in some sense. Technically, the proposal is formalized with an operator that applies to two intensional entities and returns a semantic entity of the same intensional type. In terms of how the rule works, it resembles Predicate Modification, since it requires a special rule. However, it does not intersect the two sets in question. Rather, the rule yields a set of entities that are not necessarily a subset of the entities specified by the common noun in the actual world. I call this semantic procedure Restrictive Modification (RM). The essential reasoning is as follows: the modified noun denotes the property of being an x that has the modifier property and such that, in all closest worlds w in which x had a crucial property that all CN entities have, x would have the CN property in w. For example, stone lion denotes the property of being an x made of stone such that, if x were to possess a crucial property that a real lion has (say, the property of being alive with flesh and blood), then x would be a real lion. This reasoning applies to a variety of adjective types. Some problematic examples such as house key and ice water remain, and they are reserved for future study.
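
    A hedged LaTeX reconstruction of the RM operator from the prose above, where M is the modifier intension, N the common-noun (CN) intension, crucial_N(x) the antecedent that x has the crucial property shared by all N-entities, and close_w selects the closest antecedent worlds (the auxiliary symbols are glosses, not the article's notation):

        \[
          [\![\mathrm{RM}]\!](M)(N) \;=\;
            \lambda w.\, \lambda x.\;
              M(w)(x) \;\wedge\;
              \forall w' \in \mathrm{close}_{w}\!\bigl(\mathrm{crucial}_{N}(x)\bigr).\;
                N(w')(x)
        \]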

    Extended RDF: Computability and Complexity Issues

    ERDF stable model semantics is a recently proposed semantics for ERDF ontologies and a faithful extension of RDFS semantics on RDF graphs. In this paper, we elaborate on computability and complexity issues of the ERDF stable model semantics. Since entailment under the ERDF stable model semantics is in general undecidable, decidability can be achieved only by restricting the syntax of ERDF ontologies. We therefore propose a slightly modified semantics for ERDF ontologies, called the ERDF #n-stable model semantics. We show that entailment under this semantics is, in general, decidable and also extends RDFS entailment. Equivalence statements between the two semantics are provided. Additionally, we provide algorithms that compute the ERDF #n-stable models of syntax-restricted and general ERDF ontologies. Further, we provide complexity results for the ERDF #n-stable model semantics on syntax-restricted and general ERDF ontologies. Finally, we provide complexity results for the ERDF stable model semantics on syntax-restricted ERDF ontologies.
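
    The stable-model notion that the ERDF semantics builds on can be sketched, for plain propositional normal programs, via the Gelfond-Lifschitz reduct (a general illustration only: ERDF-specific features, including the #n restriction, are not modelled, and all names are illustrative):

        import Data.List (nub, sort)

        type Atom = String
        -- A rule: head <- positive body atoms, negated body atoms.
        data Rule = Rule { rHead :: Atom, rPos :: [Atom], rNeg :: [Atom] }

        -- Reduct w.r.t. a candidate set s: drop rules whose negative body
        -- intersects s; delete the negative literals of the rest.
        reduct :: [Rule] -> [Atom] -> [Rule]
        reduct prog s =
          [ Rule h pos [] | Rule h pos neg <- prog, all (`notElem` s) neg ]

        -- Least model of a negation-free program, by fixpoint iteration.
        leastModel :: [Rule] -> [Atom]
        leastModel prog = go []
          where
            go m =
              let m' = nub (m ++ [ rHead r | r <- prog
                                           , all (`elem` m) (rPos r) ])
              in if sort m' == sort m then m else go m'

        -- s is a stable model iff it is the least model of its own reduct.
        isStableModel :: [Rule] -> [Atom] -> Bool
        isStableModel prog s =
          sort (leastModel (reduct prog s)) == sort (nub s)

        -- Example: the program { p <- not q }.
        main :: IO ()
        main = do
          let prog = [Rule "p" [] ["q"]]
          print (isStableModel prog ["p"])  -- True
          print (isStableModel prog ["q"])  -- False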

    Two kinds of procedural semantics for privative modification

    In this paper we present two kinds of procedural semantics for privative modification. We do this for three reasons. The first reason is to launch a tough test case to gauge the degree of substantial agreement between a constructivist and a realist interpretation of procedural semantics; the second is to extend Martin-Löf's Constructive Type Theory to privative modification, which is characteristic of natural language; the third is to sketch a positive characterization of privation.

    Reasoning in the OWL 2 Full Ontology Language using First-Order Automated Theorem Proving

    OWL 2 has been standardized by the World Wide Web Consortium (W3C) as a family of ontology languages for the Semantic Web. The most expressive of these languages is OWL 2 Full, but to date no reasoner has been implemented for it. Consistency and entailment checking are known to be undecidable for OWL 2 Full. We have translated a large fragment of the OWL 2 Full semantics into first-order logic, and used automated theorem proving systems to reason over this theory. The results are promising, and indicate that this approach can be applied in practice for effective OWL reasoning, beyond the capabilities of current Semantic Web reasoners. This is an extended version of a paper with the same title published at CADE 2011, LNAI 6803, pp. 446-460; the extended version provides appendices with additional resources that were used in the reported evaluation.
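
    To convey the style of such a translation, a hedged sketch in first-order syntax, following the common iext/icext axiomatization of the RDF semantics (the paper's actual predicate names and axioms may differ): an RDF triple s p o becomes the atom iext(p, s, o), and each semantic condition becomes an axiom, e.g. for rdfs:subClassOf:

        \[
          \forall c,\, d.\;
            \mathit{iext}(\mathit{rdfs{:}subClassOf},\, c,\, d)
            \;\rightarrow\;
            \forall x.\; \bigl(\mathit{icext}(c, x) \rightarrow \mathit{icext}(d, x)\bigr)
        \]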

    The Sigma-Semantics: A Comprehensive Semantics for Functional Programs

    A comprehensive semantics for functional programs is presented, which generalizes the well-known call-by-value and call-by-name semantics. By permitting a separate choice between call-by-value and call-by-name for every argument position of every function, and parameterizing the semantics by this choice, we abstract from the parameter-passing mechanism. Thus common and distinguishing features of all instances of the sigma-semantics, especially the call-by-value and call-by-name semantics, are highlighted. Furthermore, a property can be validated for all instances of the sigma-semantics by a single proof. This is employed for proving the equivalence of the given denotational (fixed-point based) and two operational (reduction based) definitions of the sigma-semantics. We present and apply means for very simple proofs of equivalence with the denotational sigma-semantics for a large class of reduction-based sigma-semantics. Our basis is simple first-order constructor-based functional programs with patterns.
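
    A minimal Haskell sketch of the per-argument-position choice (the Strategy/applyWith names are illustrative, not the paper's): arguments in by-value positions are forced before the call, while by-name arguments are passed as unevaluated thunks:

        data Strategy = ByValue | ByName

        -- Apply f under a per-position choice sigma: force (to WHNF) every
        -- argument marked ByValue before the call; by-name arguments stay lazy.
        applyWith :: [Strategy] -> ([a] -> b) -> [a] -> b
        applyWith sigma f args = foldr seq (f args) forced
          where forced = [ x | (ByValue, x) <- zip sigma args ]

        firstOf :: [Int] -> Int
        firstOf (x:_) = x  -- ignores everything after the head

        demo :: Int
        demo = applyWith [ByValue, ByName] firstOf [1, undefined]
        -- demo == 1: the second argument is by-name and never demanded;
        -- under [ByValue, ByValue] the same call would force undefined.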