
    Model Theory and Entailment Rules for RDF Containers, Collections and Reification

    An RDF graph is, at its core, just a set of statements consisting of subjects, predicates and objects. Nevertheless, since its inception practitioners have asked for richer data structures such as containers (for open lists, sets and bags), collections (for closed lists) and reification (for quoting and provenance). Though this desire is addressed in the RDF primer and the RDF Schema specification, these structures are explicitly ignored in the RDF model theory. In this paper we first formalize the intuitive semantics (as suggested by the RDF primer, RDF Schema and RDF semantics specifications) of these compound data structures by two orthogonal extensions of the RDFS model theory (RDFCC for RDF containers and collections, and RDFR for RDF reification). Second, we give a set of entailment rules that is sound and complete for the RDFCC and RDFR model theories. We show that the complexity of RDFCC and RDFR entailment remains the same as that of simple RDF entailment.
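
    To make the intuitive reading of collections concrete, here is a minimal Haskell sketch, not the paper's formalization: the names Node, Triple and listElems are invented for illustration. It recovers the element sequence denoted by a well-formed rdf:List from a set of triples.

```haskell
-- A sketch of reading off the closed list denoted by an RDF
-- collection.  Node/Triple/listElems are illustrative names, not the
-- paper's formal apparatus; the traversal assumes a well-formed,
-- acyclic rdf:List.
import Data.Maybe (listToMaybe)

data Node = IRI String | Blank String | Lit String
  deriving (Eq, Show)

type Triple = (Node, Node, Node)   -- (subject, predicate, object)

rdfFirst, rdfRest, rdfNil :: Node
rdfFirst = IRI "rdf:first"
rdfRest  = IRI "rdf:rest"
rdfNil   = IRI "rdf:nil"

-- The unique object of (s, p, _) in the graph, if present.
obj :: [Triple] -> Node -> Node -> Maybe Node
obj g s p = listToMaybe [o | (s', p', o) <- g, s' == s, p' == p]

-- Follow rdf:first/rdf:rest links until rdf:nil; a malformed list
-- (a missing link) yields Nothing.
listElems :: [Triple] -> Node -> Maybe [Node]
listElems _ n | n == rdfNil = Just []
listElems g n = do
  x <- obj g n rdfFirst
  r <- obj g n rdfRest
  (x :) <$> listElems g r
```

    On a well-formed list the traversal returns Just its elements; the side conditions it silently assumes (uniqueness of links, acyclicity) are exactly the kind of thing a model theory for collections must pin down.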

    Ontological Dependence, Spatial Location, and Part Structure

    This paper discusses attributively limited concrete objects such as disturbances (holes, folds, scratches, etc.), tropes, and attitudinal objects, which lack the sort of spatial location or part structure expected of them as concrete objects. The paper proposes an account in terms of (quasi-Fregean) abstraction, which has so far been applied only to abstract objects.

    New Equations for Neutral Terms: A Sound and Complete Decision Procedure, Formalized

    The definitional equality of an intensional type theory is its test of type compatibility. Today's systems rely on ordinary evaluation semantics to compare expressions in types, frustrating users with type errors arising when evaluation fails to identify two `obviously' equal terms. If only the machine could decide a richer theory! We propose a way to decide theories which supplement evaluation with `ν-rules', rearranging the neutral parts of normal forms, and report a successful initial experiment. We study a simple λ-calculus with primitive fold, map and append operations on lists and develop in Agda a sound and complete decision procedure for an equational theory enriched with monoid, functor and fusion laws.
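
    The monoid fragment of such a theory already illustrates the idea: flatten neutral append expressions into a canonical spine and compare. Below is a minimal Haskell sketch; the paper works in Agda and also covers the functor and fusion laws, and Exp, flatten and eqMonoid are illustrative names.

```haskell
-- Deciding equality of neutral list expressions modulo the monoid
-- laws (associativity of ++, nil as unit).  This covers only the
-- monoid fragment of the paper's theory.
data Exp = Var String      -- a neutral list-valued variable
         | Nil             -- the empty list
         | App Exp Exp     -- xs ++ ys
  deriving Show

-- Normal form: the flat left-to-right sequence of variables, with
-- units dropped and all reassociation forgotten.
flatten :: Exp -> [String]
flatten (Var x)   = [x]
flatten Nil       = []
flatten (App l r) = flatten l ++ flatten r

-- Sound and complete for the free-monoid theory: two expressions
-- are provably equal iff their normal forms coincide.
eqMonoid :: Exp -> Exp -> Bool
eqMonoid e1 e2 = flatten e1 == flatten e2

-- e.g. eqMonoid (App (Var "xs") (App (Var "ys") Nil))
--               (App (App (Var "xs") (Var "ys")) Nil)  ==  True
```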

    Reify Your Collection Queries for Modularity and Speed!

    Modularity and efficiency are often contradicting requirements, such that programmers have to trade one for the other. We analyze this dilemma in the context of programs operating on collections. Performance-critical code using collections often needs to be hand-optimized, leading to non-modular, brittle, and redundant code. In principle, this dilemma could be avoided by automatic collection-specific optimizations, such as fusion of collection traversals, usage of indexing, or reordering of filters. Unfortunately, it is not obvious how to encode such optimizations in terms of ordinary collection APIs, because the program operating on the collections is not reified and hence cannot be analyzed. We propose SQuOpt, the Scala Query Optimizer, a deep embedding of the Scala collections API that allows such analyses and optimizations to be defined and executed within Scala, without relying on external tools or compiler extensions. SQuOpt provides the same "look and feel" (syntax and static typing guarantees) as the standard collections API. We evaluate SQuOpt by re-implementing several code analyses of the Findbugs tool, show average speedups of 12x with a maximum of 12800x, and hence demonstrate that SQuOpt can reconcile modularity and efficiency in real-world applications.
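
    The core idea, reifying queries as an analyzable AST, can be sketched in a few lines. The following Haskell sketch is not SQuOpt's API (SQuOpt is a Scala library); Query, optimize and run are invented names illustrating how a deep embedding enables rewrites such as map and filter fusion.

```haskell
{-# LANGUAGE GADTs #-}
-- A deep embedding of collection queries: because the query is data,
-- it can be inspected and rewritten before it runs.
data Query a where
  Source :: [a] -> Query a
  MapQ   :: (b -> a) -> Query b -> Query a
  Filter :: (a -> Bool) -> Query a -> Query a

-- Fuse adjacent maps and adjacent filters so each stage traverses
-- the collection once instead of once per operator.
optimize :: Query a -> Query a
optimize (MapQ f (MapQ g q))     = optimize (MapQ (f . g) q)
optimize (Filter p (Filter r q)) = optimize (Filter (\x -> r x && p x) q)
optimize (MapQ f q)              = MapQ f (optimize q)
optimize (Filter p q)            = Filter p (optimize q)
optimize q                       = q

-- Interpret the (optimized) query against ordinary lists.
run :: Query a -> [a]
run (Source xs)  = xs
run (MapQ f q)   = map f (run q)
run (Filter p q) = filter p (run q)

-- e.g. run (optimize (MapQ (+1) (MapQ (*2) (Source [1..10 :: Int]))))
```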

    The Unreality of Realization

    This paper argues against the realization principle, which reifies the realization relation between lower-level and higher-level properties. It begins with a review of some principles of naturalistic metaphysics. Then it criticizes some likely reasons for embracing the realization principle, and finally it argues against the principle directly. The most likely reasons for embracing the principle depend on the dubious assumption that special science theories cannot be true unless special science predicates designate properties. The principle itself turns out to be false because the realization relation fails the naturalistic test for reality: it makes no causal difference to the world.

    Future Orientation on an Event-Relative Semantics for Modals


    Normalization by evaluation for call-by-push-value and polarized lambda calculus

    We observe that normalization by evaluation for simply-typed lambda-calculus with weak coproducts can be carried out in a weak bi-cartesian closed category of presheaves equipped with a monad that allows us to perform case distinction on neutral terms of sum type. The placement of the monad influences the normal forms we obtain: for instance, placing the monad on coproducts gives us eta-long beta-pi normal forms, where pi refers to permutation of case distinctions out of elimination positions. We further observe that placing the monad on every coproduct is rather wasteful, and an optimal placement of the monad can be determined by considering polarized simple types inspired by focalization. Polarization classifies types into positive and negative, and it is sufficient to place the monad at the embedding of positive types into negative ones. We consider two calculi based on polarized types: pure call-by-push-value (CBPV) and polarized lambda-calculus, the natural deduction calculus corresponding to focalized sequent calculus. For these two calculi, we present algorithms for normalization by evaluation. We further discuss different implementations of the monad and their relation to existing normalization proofs for lambda-calculus with sums. Our developments have been partially formalized in the Agda proof assistant.
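
    The eval/reify skeleton that the paper extends with a monad for case distinctions can be sketched for the pure fragment. Here is a minimal Haskell sketch, without sums or the coproduct monad; Tm, Val and nf are illustrative names.

```haskell
-- Normalization by evaluation for the pure simply-typed fragment:
-- evaluate into a semantic domain, then read back beta-normal terms.
-- The paper's contribution, a monad placed at coproducts, is omitted.
data Tm = Ix Int | Lam Tm | App Tm Tm   -- de Bruijn syntax
  deriving Show

data Val = VLam (Val -> Val)            -- functions as functions
         | VNe Ne                       -- stuck (neutral) values
data Ne  = NVar Int | NApp Ne Val       -- neutrals block on variables

eval :: [Val] -> Tm -> Val
eval env (Ix i)    = env !! i
eval env (Lam t)   = VLam (\v -> eval (v : env) t)
eval env (App t u) = apply (eval env t) (eval env u)

apply :: Val -> Val -> Val
apply (VLam f) v = f v
apply (VNe n)  v = VNe (NApp n v)

-- Read back a value as a term; the Int counts binders passed and
-- serves as a fresh-variable supply (de Bruijn levels).
quote :: Int -> Val -> Tm
quote d (VLam f) = Lam (quote (d + 1) (f (VNe (NVar d))))
quote d (VNe n)  = quoteNe d n

quoteNe :: Int -> Ne -> Tm
quoteNe d (NVar k)   = Ix (d - k - 1)   -- level to index
quoteNe d (NApp n v) = App (quoteNe d n) (quote d v)

nf :: Tm -> Tm
nf = quote 0 . eval []
```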

    Survey over Existing Query and Transformation Languages

    A widely acknowledged obstacle for realizing the vision of the Semantic Web is the inability of many current Semantic Web approaches to cope with data available in such diverging representation formalisms as XML, RDF, or Topic Maps. A common query language is the first step to allow transparent access to data in any of these formats. To further the understanding of the requirements and approaches proposed for query languages in the conventional as well as the Semantic Web, this report surveys a large number of query languages for accessing XML, RDF, or Topic Maps. This is the first systematic survey to consider query languages from all these areas. From the detailed survey of these query languages, a common classification scheme is derived that is useful for understanding and differentiating languages within and among all three areas.

    A new approach to the semantics of model diagrams

    Sometimes, a diagram can say more than a thousand lines of code. But, sadly, most of the time, software engineers give up on diagrams after the design phase, and all real work is done in code. The supremacy of code over diagrams would be leveled if diagrams were code. This paper suggests that model and instance diagrams, or, what amounts to the same, class and object diagrams, become first-class entities in a suitably expressive programming language, viz., type theory. The proposed semantics of diagrams is compositional and self-describing, i.e., reflexive, or metacircular. Moreover, it is well suited for metamodelling and model-driven engineering, as it is possible to prove model transformations correct in type theory. The encoding into type theory has the additional benefit of making diagrams immediately useful, given an implementation of type theory.
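
    As a rough approximation of the idea in a mainstream typed language rather than full type theory: a toy class diagram becomes types and an object diagram becomes values, so conformance of instances to the model is checked by the compiler. The Person/Company model below is invented for illustration.

```haskell
-- A toy "class diagram" as types and an "object diagram" as values.
-- The paper works in dependent type theory, where one can also prove
-- model transformations correct; plain Haskell only shows the
-- diagrams-as-code payoff.
newtype Company = Company { companyName :: String } deriving Show

data Person = Person
  { personName :: String
  , worksFor   :: Company   -- an association with multiplicity 1
  } deriving Show

acme :: Company
acme = Company "ACME"

alice :: Person
alice = Person "Alice" acme   -- ill-typed instances are rejected
```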

    Typeful Normalization by Evaluation

    We present the first typeful implementation of Normalization by Evaluation for the simply typed lambda-calculus with sums and control operators: we guarantee type preservation and eta-long (modulo commuting conversions) beta-normal forms using only Generalized Algebraic Data Types in a general-purpose programming language, here OCaml; and we account for sums and control operators with Continuation-Passing Style. First, we implement the standard NbE algorithm for the implicational fragment in a typeful way that is correct by construction. We then derive its call-by-value continuation-passing counterpart, which maps a lambda-term with sums and call/cc into a CPS term in normal form, expressed in a typed dedicated syntax. Beyond showcasing the expressive power of GADTs, we emphasize that type inference gives a smooth way to re-derive the encodings of the syntax and typing of normal forms in Continuation-Passing Style.
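
    The intrinsically typed syntax that makes such an implementation "correct by construction" can be sketched with GADTs. The following Haskell transliteration is illustrative (the paper uses OCaml): terms are indexed by context and type, so any normalizer written against this syntax preserves types by construction.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}
-- Intrinsically typed terms for the implicational fragment: only
-- well-typed terms can be built, so type preservation is free.
import Data.Kind (Type)

data Ty = Base | Ty :-> Ty

-- Typed de Bruijn indices: a variable is a proof that its type
-- occurs in the context.
data Ix :: [Ty] -> Ty -> Type where
  Z :: Ix (a ': ctx) a
  S :: Ix ctx a -> Ix (b ': ctx) a

data Tm :: [Ty] -> Ty -> Type where
  Var :: Ix ctx a -> Tm ctx a
  Lam :: Tm (a ': ctx) b -> Tm ctx (a ':-> b)
  App :: Tm ctx (a ':-> b) -> Tm ctx a -> Tm ctx b

-- The identity function at Base, well-typed by construction.
idBase :: Tm '[] ('Base ':-> 'Base)
idBase = Lam (Var Z)
```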