135 research outputs found

    Where Fail-Safe Default Logics Fail

    Full text link
    Reiter's original definition of default logic allows for the application of a default that contradicts a previously applied one. We call this condition a failure. The possibility of generating failures has in the past been considered a semantic problem, and variants of default logic have been proposed to avoid it. We show that it is instead a computational feature that is needed to encode some domains into default logic.
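    As a worked illustration of such a failure (our own toy theory, not one taken from the paper), written in Reiter's notation: take empty background knowledge W = {} and the two defaults

        d_1 = \frac{\top : p}{q} \qquad d_2 = \frac{q : \top}{\neg p}

    Applying d_1 is allowed because its justification p is consistent with what is known, and it adds q. This makes d_2 applicable, and applying it adds \neg p, contradicting the justification of the already applied d_1: the sequence is exactly a failure in the sense above. Fail-safe variants rule such sequences out, and the paper's claim is that the ability to generate them is needed to encode some domains into default logic.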

    LoLa: a modular ontology of logics, languages and translations

    Get PDF
    The Distributed Ontology Language (DOL), currently being standardised within the OntoIOp (Ontology Integration and Interoperability) activity of ISO/TC 37/SC 3, aims at providing a unified framework for (i) ontologies formalised in heterogeneous logics, (ii) modular ontologies, (iii) links between ontologies, and (iv) annotation of ontologies.

    This paper focuses on the LoLa ontology, which formally describes DOL's vocabulary for logics, ontology languages (and their serialisations), as well as logic translations. Interestingly, to adequately formalise the logical relationships between these notions, LoLa itself needs to be axiomatised heterogeneously, a task for which we choose DOL. Namely, we use the logic RDF for ABox assertions, OWL for the basic axiomatisation of the various modules concerning logics, languages, and translations, FOL for capturing certain closure rules that are not expressible in OWL (for the sake of tool availability, it is still helpful not to map everything to FOL), and circumscription for minimising the extension of concepts describing default translations.
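    To make the division of labour concrete, here is a hypothetical fragment in the spirit of this setup (the predicate names are our own illustration, not LoLa's actual vocabulary). A closure rule stating that translations compose, written over reified translations, needs an existential in its conclusion and therefore sits in the FOL layer rather than in OWL:

        \forall l_1, l_2, l_3, t_1, t_2.\; \mathrm{trans}(t_1, l_1, l_2) \wedge \mathrm{trans}(t_2, l_2, l_3) \rightarrow \exists t_3.\; \mathrm{trans}(t_3, l_1, l_3)

    The circumscription layer can then minimise a predicate such as \mathrm{defaultTrans}, as in \mathrm{CIRC}[T;\, \mathrm{defaultTrans}], so that only the translations explicitly asserted as defaults count as default translations.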

    Modularity in answer set programs

    Get PDF
    Answer set programming (ASP) is an approach to rule-based constraint programming that allows flexible knowledge representation in a variety of application areas. The declarative nature of ASP is reflected in problem solving. First, a programmer writes down a logic program whose answer sets correspond to the solutions of the problem. The answer sets of the program are then computed using a special-purpose search engine, an ASP solver. The development of efficient ASP solvers has enabled the use of answer set programming in application domains such as planning, product configuration, computer-aided verification, and bioinformatics.

    The topic of this thesis is modularity in answer set programming. While modern programming languages typically provide means to exploit modularity in a number of ways to govern the complexity of programs and their development process, relatively little attention has been paid to modularity in ASP. When designing a module architecture for ASP, it is essential to establish full compositionality of the semantics with respect to the module system. A balance is sought between introducing restrictions that guarantee the compositionality of the semantics and enforce a good programming style in ASP, and avoiding restrictions on the module hierarchy for the sake of flexibility of knowledge representation. To justify the replacement of one module with another, that is, to be able to guarantee that changes made at the level of modules do not alter the semantics of the program seen as a whole, a notion of equivalence for modules is provided. In close connection with the development of the compositional module architecture, a transformation from the verification of equivalence to the search for answer sets is developed. This translation-based approach makes it unnecessary to develop a dedicated tool for the equivalence verification task, since existing ASP solvers can be used directly.

    Translations and transformations between different problems, program classes, and formalisms are another central theme of the thesis. To guarantee the efficiency and soundness of the translation-based approach, certain syntactic and semantic properties of transformations are desirable, concerning translation time, the solution correspondence between the original and the transformed problem, and the locality or globality of a particular transformation. In certain cases a more refined notion of minimality than the one inherent in ASP can make program encodings more intuitive. Lifschitz's parallel and prioritized circumscription offer a solution in which certain atoms are allowed to vary or to take fixed values while the others are falsified as far as possible, according to priority classes. In this thesis a linear and faithful transformation embedding parallel and prioritized circumscription into ASP is provided. This enhances the knowledge representation capabilities of answer set programming by allowing existing ASP solvers to be used for computing parallel and prioritized circumscription.
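    As a toy illustration of composing program modules and computing the answer sets of the result (a minimal sketch using the clingo Python API, which is only one possible toolchain; the "modules" here are simply two rule sets joined by union, not the thesis's formal module architecture):

        import clingo

        # Hypothetical modules: module A freely chooses p, module B derives q from p.
        MODULE_A = "{ p }.\n"
        MODULE_B = "q :- p.\n"

        def answer_sets(program):
            """Enumerate all answer sets of an ASP program given as a string."""
            models = []
            ctl = clingo.Control(["0"])          # "0" asks the solver for all models
            ctl.add("base", [], program)
            ctl.ground([("base", [])])
            ctl.solve(on_model=lambda m: models.append(str(m)))
            return models

        # The composed program has two answer sets: the empty one and {p, q}.
        print(answer_sets(MODULE_A + MODULE_B))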

    05171 Abstracts Collection -- Nonmonotonic Reasoning, Answer Set Programming and Constraints

    Get PDF
    From 24.04.05 to 29.04.05, the Dagstuhl Seminar 05171 "Nonmonotonic Reasoning, Answer Set Programming and Constraints" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    Congruent Weak Conformance

    Get PDF
    This research addresses the problem of verifying implementations against specifications through an innovative logical approach. Congruent weak conformance, a formal relationship between agents and specifications, has been developed and proven to be a congruent partial order. This property arises from a set of relations called weak conformations. The largest, called weak conformance, is analogous to Milner's observational equivalence. Weak conformance is not an equivalence, however, but rather an ordering relation among processes. Weak conformance allows behaviors in the implementation that are unreachable in the specification. Furthermore, it exploits output concurrencies and allows interleaving of extraneous output actions in the implementation. Finally, reasonable restrictions in CCS syntax strengthen weak conformance to a congruence, called congruent weak conformance. At present, congruent weak conformance is the best known formal relation for verifying implementations against specifications. This precongruence derives maximal flexibility and embodies all weaknesses in input, output, and no-connect signals while retaining a fully replaceable conformance to the specification. Congruent weak conformance has additional utility in verifying transformations between systems of incompatible semantics. This dissertation describes a hypothetical translator from the informal simulation semantics of VHDL to the bisimulation semantics of CCS. A second translator is described from VHDL to a broadcast-communication version of CCS. By showing that they preserve congruent weak conformance, both translators are verified.
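    A purely illustrative CCS-style example of the flexibility described above (ours, not the dissertation's): take a specification S = in.out.0 and an implementation P = in.(out.0 | dbg.0), where dbg is an extraneous output signal. P is not observationally equivalent to S, since it can interleave dbg before or after out, but an ordering relation of the kind described can still accept P as conforming to S, because the extra behaviour consists only of additional output actions that the environment is free to ignore.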

    Counterfactuals 2.0 Logic, Truth Conditions, and Probability

    Get PDF
    The present thesis focuses on counterfactuals. Specifically, we address new questions and open problems that arise for the standard semantic accounts of counterfactual conditionals. The first four chapters deal with the Lewisian semantic account of counterfactuals. On a technical level, we contribute by providing an equivalent algebraic semantics for Lewis' variably strict conditional logics, something notably absent from the literature. We introduce a new kind of algebra and differentiate between local and global versions of each of Lewis' variably strict conditional logics. We study the algebraic properties of Lewis' logics and the structure theory of our newly introduced algebras. Additionally, we employ a new algebraic construction, based on the framework of Boolean algebras of conditionals, to provide an alternative semantics for Lewisian counterfactual conditionals. This semantic account allows us to establish new truth conditions for Lewisian counterfactuals, implying that Lewisian counterfactuals are definable conditionals, and each counterfactual can be characterized as a modality of a corresponding probabilistic conditional. We further extend these results by demonstrating that each Lewisian counterfactual can also be characterized as a modality of the corresponding Stalnaker conditional. The resulting formal semantic framework is much more expressive than the standard one and, in addition to providing new truth conditions for counterfactuals, it also allows us to define a new class of conditional logics falling into the broader framework of weak logics. On the philosophical side, we argue that our results shed new light on the understanding of Lewisian counterfactuals and prompt a conceptual shift in this field: Lewisian counterfactual dependence can be understood as a modality of probabilistic conditional dependence or Stalnakerian conditional dependence. In other words, whether a counterfactual connection occurs between A and B depends on whether it is "necessary" for a Stalnakerian/probabilistic dependence to occur between A and B. We also propose some ways to interpret the kind of necessity involved in this interpretation. The remaining two chapters deal with the probability of counterfactuals. We provide an answer to the question of how to characterize the probability that a Lewisian counterfactual is true, an open problem in the literature. We show that the probability of a Lewisian counterfactual can be characterized in terms of belief functions from the Dempster-Shafer theory of evidence, which are a super-additive generalization of standard probability. We define an updating procedure for belief functions based on the imaging procedure and show that the probability of a counterfactual A > B amounts to the belief function of B imaged on A. This characterization relies heavily on the logical results proved in the previous chapters. Moreover, we also solve an open problem concerning the procedure for assigning a probability to complex counterfactuals in the framework of causal modelling semantics. A limitation of causal modelling semantics is that it cannot account for the probability of counterfactuals with disjunctive antecedents. Drawing on the same previous works, we define a new procedure for assigning a probability to counterfactuals with disjunctive antecedents in the framework of causal modelling semantics. We also argue that our procedure is satisfactory in that it yields meaningful results and adheres to some conceptually intuitive constraints one may want to impose when computing the probability of counterfactuals.
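    The imaging step at the heart of this characterisation can be sketched as follows (a standard Lewis-style rendering assuming a Stalnaker selection function f; the thesis's generalisation replaces the probability measure P with a belief function). Imaging on A transfers the mass of each world w to f(A, w), the closest A-world to w:

        P^A(w') = \sum_{w \,:\, f(A,w) = w'} P(w), \qquad P(A > B) = \sum_{w' \models B} P^A(w')

    so the probability of the counterfactual A > B is the probability of B under the measure imaged on A.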

    On the Existence of Characterization Logics and Fundamental Properties of Argumentation Semantics

    Get PDF
    Given the large variety of existing logical formalisms, it is of utmost importance to select the most adequate one for a specific purpose, e.g. for representing the knowledge relevant to a particular application or for using the formalism as a modeling tool for problem solving. Awareness of the nature of a logical formalism, in other words of its fundamental intrinsic properties, is indispensable and provides the basis for an informed choice. One such intrinsic property of logic-based knowledge representation languages is the context-dependency of pieces of knowledge. In classical propositional logic, for example, there is no such context-dependence: whenever two sets of formulas are equivalent in the sense of having the same models (ordinary equivalence), they are mutually replaceable in arbitrary contexts (strong equivalence). However, a large number of commonly used formalisms are not like classical logic in this respect, which has led to a series of interesting developments. It turned out that sometimes, to characterize strong equivalence in a formalism L, we can use ordinary equivalence in another formalism L': for example, strong equivalence of normal logic programs under stable models can be characterized by the standard semantics of the logic of here-and-there. Such results about the existence of characterizing logics have rightly been recognized as important for the study of concrete knowledge representation formalisms, and they raise a fundamental question: does every formalism have one? In this thesis, we answer this question with a qualified "yes". More precisely, we show that the important case of considering only finite knowledge bases guarantees the existence of a canonical characterizing formalism. Furthermore, we argue that these characterizing formalisms can be seen as classical, monotonic logics which are uniquely determined (up to isomorphism) by their model theory.

    The other main part of this thesis is devoted to argumentation semantics, which play the flagship role in Dung's abstract argumentation theory. Almost all of them are motivated by an easily understandable intuition of what should be acceptable in the light of conflicts. However, although these intuitions equip us with short and comprehensible formal definitions, it turns out that intrinsic properties such as existence and uniqueness, expressibility, replaceability, and verifiability are not that easily accessible. We review the mentioned properties for almost all semantics available in the literature, along two main axes: first, the distinction between extension-based and labelling-based versions; and second, the distinction between different kinds of argumentation frameworks, such as finite and unrestricted ones.
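    The context-dependence discussed above can be seen in a standard textbook example (our illustration, checked here with the clingo Python API as one possible tool, not anything specific to this thesis): the programs {p.} and {p :- not q.} have the same answer sets, yet adding the context {q.} separates them, so they are ordinarily but not strongly equivalent.

        import clingo

        def answer_sets(program):
            """Enumerate all answer sets of an ASP program given as a string."""
            models = []
            ctl = clingo.Control(["0"])          # "0" asks the solver for all models
            ctl.add("base", [], program)
            ctl.ground([("base", [])])
            ctl.solve(on_model=lambda m: models.append(str(m)))
            return models

        P, Q, CONTEXT = "p.\n", "p :- not q.\n", "q.\n"

        print(answer_sets(P), answer_sets(Q))
        # both ['p']: the programs are ordinarily equivalent
        print(answer_sets(P + CONTEXT), answer_sets(Q + CONTEXT))
        # {p, q} versus {q} (atom order in the output may vary):
        # the context separates them, so they are not strongly equivalent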