
    Type-elimination-based reasoning for the description logic SHIQbs using decision diagrams and disjunctive datalog

    We propose a novel, type-elimination-based method for reasoning in the description logic SHIQbs including DL-safe rules. To this end, we first establish a knowledge compilation method converting the terminological part of an ALCIb knowledge base into an ordered binary decision diagram (OBDD) which represents a canonical model. This OBDD can in turn be transformed into disjunctive Datalog and merged with the assertional part of the knowledge base in order to perform combined reasoning. In order to leverage our technique for full SHIQbs, we provide a stepwise reduction from SHIQbs to ALCIb that preserves satisfiability and entailment of positive and negative ground facts. The proposed technique is shown to be worst-case optimal w.r.t. combined and data complexity and easily admits extensions with ground conjunctive queries.
    Comment: 38 pages, 3 figures, camera-ready version of paper accepted for publication in Logical Methods in Computer Science
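
    The pipeline sketched in this abstract starts from classical type elimination: enumerate candidate "types" (sets of concepts an element may satisfy) and repeatedly discard those whose existential constraints no surviving type can witness. The snippet below is a minimal, hypothetical Python rendering of that elimination loop only; the encodings and names are ours, not the paper's, and the paper's actual contribution is performing this step symbolically with OBDDs and compiling the result to disjunctive Datalog.

```python
# Minimal, hypothetical sketch of classical type elimination (our toy
# encoding; none of the names below come from the paper).

def survives(t, types):
    """t survives if every existential constraint ("some", r, C) in t has
    a witness u containing C and each D with ("all", r, D) in t."""
    for c in t:
        if isinstance(c, tuple) and c[0] == "some":
            _, r, target = c
            guards = {x[2] for x in t
                      if isinstance(x, tuple) and x[0] == "all" and x[1] == r}
            if not any(target in u and guards <= u for u in types):
                return False
    return True

def type_elimination(types):
    """Discard types with unwitnessed existentials until a fixpoint;
    a concept is satisfiable iff it occurs in some surviving type."""
    types = set(types)
    while True:
        kept = {t for t in types if survives(t, types)}
        if kept == types:
            return kept
        types = kept

# A needs an r-successor in B, and all its r-successors must be in C:
t1 = frozenset({"A", ("some", "r", "B"), ("all", "r", "C")})
t2 = frozenset({"B"})        # lacks C, so it cannot witness t1
t3 = frozenset({"B", "C"})   # a valid witness
print(type_elimination({t1, t2}))      # t1 is eliminated
print(type_elimination({t1, t2, t3}))  # all three types survive
```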

    Interpolation in local theory extensions

    In this paper we study interpolation in local extensions of a base theory. We identify situations in which interpolants can be obtained hierarchically, using a prover and a procedure for generating interpolants in the base theory as black boxes. We present several examples of theory extensions in which interpolants can be computed this way, and discuss applications in verification, knowledge representation, and modular reasoning in combinations of local theories.
    Comment: 31 pages, 1 figure
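
    The architecture the abstract describes, treating a base-theory prover and interpolation procedure as black boxes and reducing the extension to the base theory by instantiation, can be shown as a short skeleton. This is a rough sketch under our own naming assumptions, not runnable against any particular solver and not the paper's interface.

```python
# Schematic of the hierarchical, black-box interpolation architecture
# described above; all names and the step split are our own rendering.

def hierarchical_interpolant(A, B, instantiate, base_unsat, base_itp):
    """A, B: clause sets over the extended signature, A + B unsatisfiable.
    instantiate: replaces the extension axioms by finitely many ground
      instances (locality guarantees that finitely many suffice).
    base_unsat / base_itp: black-box unsatisfiability check and
      interpolation procedure for the base theory."""
    A0, B0 = instantiate(A), instantiate(B)  # reduce to the base theory
    assert base_unsat(A0 + B0)               # unsatisfiability is preserved
    I = base_itp(A0, B0)                     # interpolant in the base theory
    # The hierarchical method then rewrites I so that it mentions only
    # symbols shared by A and B (elided here); the paper identifies the
    # conditions on the extension under which this step succeeds.
    return I
```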

    Higher-Order Termination: from Kruskal to Computability

    Termination is a major question in both logic and computer science. In logic, termination is at the heart of proof theory, where it is usually called strong normalization (of cut elimination). In computer science, termination has always been an important issue for showing programs correct. In the early days of logic, strong normalization was usually shown by assigning ordinals to expressions in such a way that eliminating a cut yields an expression with a smaller ordinal. In the early days of verification, computer scientists used similar ideas, interpreting the arguments of a program call by a natural number, such as their size. Showing that the size of the arguments decreases at each recursive call gives a termination proof of the program; this technique is, however, rather weak, since it can only yield quite small ordinals. In the sixties, Tait invented a new method for showing cut elimination of natural deduction, based on a predicate over the set of terms such that membership of an expression in the predicate implies the strong normalization property for that expression. Because the predicate is defined by induction on types, or even as a fixpoint, this method can yield much larger ordinals. Later generalized by Girard under the name of reducibility or computability candidates, it proved very effective in proving the strong normalization property of typed lambda-calculi...
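
    The size-measure argument recalled above can be made concrete with a toy example of ours: in the merge sort below, every recursive call is on a strictly shorter list, so the natural-number measure len(xs) proves termination. The abstract's point is that such measures are weak, reaching only small ordinals, which is what computability predicates later overcame.

```python
# Termination by a decreasing natural-number measure (our toy example):
# both recursive calls are on lists strictly shorter than xs, so the
# measure len(xs) decreases and the recursion terminates.

def merge_sort(xs):
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2              # 1 <= mid < len(xs) here
    left = merge_sort(xs[:mid])     # measure: mid < len(xs)
    right = merge_sort(xs[mid:])    # measure: len(xs) - mid < len(xs)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```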

    Tolerancing and Sheet Bending in Small Batch Part Manufacturing

    Tolerances indicate geometrical limits between which a component is expected to perform its function adequately. They are used, for instance, for set-up selection in process planning and for inspection. Tolerances must be accounted for in sequencing and positioning procedures for bending of sheet metal parts. In bending, the shape of a part changes not only locally but globally as well. Therefore, sheet metal part manufacturing presents some specific problems as regards reasoning about tolerances. The paper focuses on the interpretation and conversion of tolerances as part of a sequencing procedure for bending to be used in an integrated CAPP system.
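
    One reason tolerance reasoning in bending is global rather than local can be shown with a toy worst-case stack-up (our example, not the paper's procedure): a dimension measured across bends accumulates the tolerance of every contributing segment, so a bend sequence must distribute the part-level tolerance budget over the individual set-ups.

```python
# Worst-case tolerance stack-up along a chain of segments (toy example;
# the values and function name are hypothetical).

def worst_case_stackup(contributions):
    """contributions: (nominal, plus_minus) pairs along the dimension chain."""
    nominal = sum(n for n, _ in contributions)
    tol = sum(t for _, t in contributions)
    return nominal, tol

# A flange-to-flange dimension spanning two bends, in mm:
segments = [(40.0, 0.10), (25.0, 0.15), (40.0, 0.10)]
nominal, tol = worst_case_stackup(segments)
print(f"{nominal:.1f} +/- {tol:.2f} mm")  # 105.0 +/- 0.35 mm
```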

    A geometrical model for the Monte Carlo simulation of the TrueBeam linac

    Monte Carlo (MC) simulation of linacs depends on an accurate geometrical description of the head. The geometry of the Varian TrueBeam (TB) linac is not available to researchers. Instead, the company distributes phase-space files (PSFs) of the flattening-filter-free (FFF) beams tallied upstream of the jaws. Yet MC simulations based on third-party tallied PSFs are subject to limitations. We present an experimentally based geometry developed for the simulation of the FFF beams of the TB linac. The upper part of the TB linac was modeled by modifying the Clinac 2100 geometry. The most important modification is the replacement of the standard flattening filters by ad hoc thin filters, which were modeled by comparing dose measurements and simulations. The experimental dose profiles for the 6 MV and 10 MV FFF beams were obtained from the Varian Golden Data Set and from in-house measurements for radiation fields ranging from 3×3 to 40×40 cm². Indicators of agreement between the experimental data and the simulation results obtained with the proposed geometrical model were the dose differences, the root-mean-square error, and the gamma index. The same comparisons were done for dose profiles obtained from MC simulations using the second generation of PSFs distributed by Varian for the TB linac. The comparisons show good dose agreement for the ansatz geometry, similar to that obtained for the simulations with the TB PSFs, for all fields considered except the 40×40 cm² field, where the ansatz geometry reproduced the measured dose more accurately. Our approach makes it possible to: (i) adapt the initial beam parameters to match measured dose profiles; (ii) reduce the statistical uncertainty to arbitrarily low values; and (iii) assess systematic uncertainties by employing different MC codes.
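
    Of the three agreement indicators named above, the gamma index is the least standard to code up, so here is a simplified 1D sketch (ours; the paper publishes no code, and a clinical implementation would interpolate and work in 2D or 3D). dD is the dose-difference criterion as a fraction of the maximum dose, and dr the distance-to-agreement in mm.

```python
import numpy as np

def gamma_1d(x, d_meas, d_sim, dD=0.03, dr=2.0):
    """Simplified 1D gamma index: for each measured point, the minimum
    combined dose/distance deviation over all simulated points."""
    d_max = d_meas.max()
    gamma = np.empty_like(d_meas)
    for i, (xi, di) in enumerate(zip(x, d_meas)):
        dist = (x - xi) / dr                 # distance term, in units of dr
        dose = (d_sim - di) / (dD * d_max)   # dose term, in units of dD*d_max
        gamma[i] = np.sqrt(dist**2 + dose**2).min()
    return gamma                             # pass criterion: gamma <= 1

# Hypothetical FFF-like profile with 1% simulated noise:
x = np.linspace(-50.0, 50.0, 201)                        # mm
d_meas = np.exp(-(x / 40.0)**8) * (1 - 0.002 * np.abs(x))
d_sim = d_meas * (1 + 0.01 * np.random.default_rng(0).normal(size=x.size))
print(f"gamma pass rate: {(gamma_1d(x, d_meas, d_sim) <= 1).mean():.3f}")
```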

    Eternity and Vision in Boethius

    Boethius and Augustine of Hippo are two of the fountainheads from which the long tradition of regarding God's existence as timelessly eternal has flowed, a tradition which has influenced not only Christianity but Judaism and Islam, too. But though the two have divine eternality in common, I shall argue that in other respects, in certain crucial respects, they differ significantly over how they articulate that notion.

    The real projective spaces in homotopy type theory

    Homotopy type theory is a version of Martin-Löf type theory taking advantage of its homotopical models. In particular, we can use and construct objects of homotopy theory and reason about them using higher inductive types. In this article, we construct the real projective spaces, key players in homotopy theory, as certain higher inductive types in homotopy type theory. The classical definition of RP(n), as the quotient space identifying antipodal points of the n-sphere, does not translate directly to homotopy type theory. Instead, we define RP(n) by induction on n simultaneously with its tautological bundle of 2-element sets. As the base case, we take RP(-1) to be the empty type. In the inductive step, we take RP(n+1) to be the mapping cone of the projection map of the tautological bundle of RP(n), and we use its universal property and the univalence axiom to define the tautological bundle on RP(n+1). By showing that the total space of the tautological bundle of RP(n) is the n-sphere, we retrieve the classical description of RP(n+1) as RP(n) with an (n+1)-cell attached to it. The infinite-dimensional real projective space, defined as the sequential colimit of the RP(n) with the canonical inclusion maps, is equivalent to the Eilenberg-MacLane space K(Z/2Z,1), which here arises as the subtype of the universe consisting of 2-element types. Indeed, the infinite-dimensional projective space classifies the 0-sphere bundles, which one can think of as synthetic line bundles. These constructions further illustrate the utility of homotopy type theory, including the interplay of type-theoretic and homotopy-theoretic ideas.
    Comment: 8 pages, to appear in proceedings of LICS 2017
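
    For readers who want the recursion at a glance, here is a compact transcription of the construction described above in standard notation (the transcription is ours; see the paper for the precise higher-inductive definitions).

```latex
\begin{align*}
\mathbb{RP}^{-1} &:\equiv \mathbf{0}
  && \text{(base case: the empty type)}\\
\widetilde{\mathbb{RP}^{n}} &:\equiv \textstyle\sum_{x:\mathbb{RP}^{n}} \mathrm{cov}_n(x)
  \;\simeq\; \mathbb{S}^{n}
  && \text{(total space of the tautological bundle)}\\
\mathbb{RP}^{n+1} &:\equiv \mathrm{cone}\bigl(\mathrm{pr}_1 \colon \widetilde{\mathbb{RP}^{n}} \to \mathbb{RP}^{n}\bigr)
  && \text{(mapping cone of the projection)}\\
\mathbb{RP}^{\infty} &:\equiv \mathrm{colim}_{n}\,\mathbb{RP}^{n}
  \;\simeq\; K(\mathbb{Z}/2\mathbb{Z},1)
  && \text{(sequential colimit)}
\end{align*}
% cov_{n+1} is defined on the cone via its universal property together
% with univalence; the sphere equivalence above recovers the classical
% cell structure RP^{n+1} = RP^n \cup e^{n+1}.
```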

    A Computational Interpretation of Context-Free Expressions

    We phrase parsing with context-free expressions as a type inhabitation problem where values are parse trees and types are context-free expressions. We first show how containment among context-free and regular expressions can be reduced to a reachability problem by using a canonical representation of states. The proofs-as-programs principle yields a computational interpretation of the reachability problem in terms of a coercion that transforms the parse tree for a context-free expression into a parse tree for a regular expression. It also yields a partial coercion from regular parse trees to context-free ones. The partial coercion from the trivial language of all words to a context-free expression corresponds to a predictive parser for the expression.
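
    The slogan "values are parse trees, types are context-free expressions" can be illustrated with a tiny self-contained example of ours (the encoding is not the paper's): for E -> 'a' E 'b' | eps, a parse tree is a value inhabiting E, and a predictive parser is exactly a partial coercion from arbitrary words into E.

```python
from dataclasses import dataclass
from typing import Optional, Tuple, Union

@dataclass
class Eps:                     # inhabits the empty-word alternative
    pass

@dataclass
class Wrap:                    # inhabits the 'a' E 'b' alternative
    inner: "Tree"

Tree = Union[Eps, Wrap]        # parse trees for E -> 'a' E 'b' | eps

def parse(w: str, i: int = 0) -> Optional[Tuple[Tree, int]]:
    """Predictive parser for E: a *partial* coercion from words to parse
    trees, returning an inhabitant of E plus the next position, or None."""
    if i < len(w) and w[i] == 'a':           # predict the 'a' E 'b' branch
        sub = parse(w, i + 1)
        if sub is not None:
            t, j = sub
            if j < len(w) and w[j] == 'b':
                return Wrap(t), j + 1
        return None
    return Eps(), i                          # otherwise predict eps

print(parse("aabb"))   # (Wrap(inner=Wrap(inner=Eps())), 4)
print(parse("aab"))    # None: no inhabitant of E parses this word
```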