103 research outputs found

    Combining open and closed world reasoning for the semantic web

    Dissertation submitted to obtain the degree of Doctor in Informatics. One important problem in the ongoing standardization of knowledge representation languages for the Semantic Web is combining open-world ontology languages, such as the OWL-based ones, with closed-world rule-based languages. The main difficulty of such a combination is that the two formalisms are quite orthogonal w.r.t. expressiveness and how decidability is achieved. Combining non-monotonic rules and ontologies is thus a challenging task that requires careful balancing between the expressiveness of the knowledge representation language and the computational complexity of reasoning. In this thesis, we argue in favor of a combination of ontologies and non-monotonic rules that tightly integrates the two formalisms involved, that has a computational complexity that is as low as possible, and that allows us to query for information instead of computing the whole model. As our starting point we choose the mature approach of hybrid MKNF knowledge bases, which is based on an adaptation of the Stable Model Semantics to knowledge bases consisting of ontology axioms and rules. We extend the two-valued framework of MKNF logics to a three-valued logic, and we propose a well-founded semantics for non-disjunctive hybrid MKNF knowledge bases. This new semantics promises more efficient reasoning, is faithful w.r.t. the original two-valued MKNF semantics, and is compatible with both the OWL-based semantics and the traditional Well-Founded Semantics for logic programs. We provide an operator-based algorithm to compute the unique model, and we extend SLG resolution with tabling to a general framework that allows us to query a combination of non-monotonic rules and any given ontology language. Finally, we investigate concrete instances of that procedure w.r.t. three tractable ontology languages, namely the three description logics underlying the OWL 2 profiles. Fundação para a Ciência e Tecnologia - grant contract SFRH/BD/28745/200
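
    The well-founded model mentioned above is computed by an alternating fixpoint of operators. As a rough illustration of that idea, the minimal Python sketch below computes the well-founded model of a plain normal logic program only, leaving out the ontology component and the MKNF-specific machinery of the thesis; the example program is invented.

    # A minimal sketch, assuming a plain normal logic program (no ontology):
    # the alternating-fixpoint construction underlying the well-founded semantics.

    def least_model(definite_rules):
        # Least model of a definite program given as (head, positive_body) pairs.
        model, changed = set(), True
        while changed:
            changed = False
            for head, body in definite_rules:
                if body <= model and head not in model:
                    model.add(head)
                    changed = True
        return model

    def gamma(program, assumed_true):
        # Gelfond-Lifschitz reduct w.r.t. assumed_true, then its least model.
        reduct = [(h, pos) for h, pos, neg in program if not (neg & assumed_true)]
        return least_model(reduct)

    def well_founded(program, atoms):
        # Alternating fixpoint: returns (true, false, undefined) atom sets.
        true = set()
        while True:
            next_true = gamma(program, gamma(program, true))
            if next_true == true:
                break
            true = next_true
        possibly_true = gamma(program, true)      # complement gives the false atoms
        false = atoms - possibly_true
        return true, false, atoms - true - false

    # Example: p :- not q.   q :- not p.   r.   (r is true; p and q are undefined)
    program = [("p", set(), {"q"}), ("q", set(), {"p"}), ("r", set(), set())]
    print(well_founded(program, {"p", "q", "r"}))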

    Aggregated fuzzy answer set programming

    Fuzzy Answer Set Programming (FASP) is an extension of answer set programming (ASP) based on fuzzy logic. It makes it possible to encode continuous optimization problems as concisely as ASP models combinatorial problems. As a result of its inherent continuity, rules in FASP may be satisfied or violated to certain degrees. Rather than insisting that all rules are fully satisfied, we may only require that they be satisfied partially, to the best extent possible. However, most approaches that feature partial rule satisfaction limit themselves to attaching predefined weights to rules, which is not sufficiently flexible for many real-life applications. In this paper, we develop an alternative based on aggregator functions that specify which (combinations of) rules are most important to satisfy. We extend previous work by allowing aggregator expressions to define partially ordered preferences and by the use of a fixpoint semantics.
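
    As a loose illustration of graded rule satisfaction and aggregation, the sketch below uses standard Lukasiewicz connectives and a simple weighted-minimum aggregator; these choices, and the example rules, interpretation, and weights, are assumptions for illustration and are not taken from the paper.

    # A minimal sketch, assuming standard Lukasiewicz connectives; the rules,
    # interpretation, weights, and aggregator below are invented for illustration.

    def t_norm(*degrees):
        # Lukasiewicz conjunction of the body literals.
        return max(0.0, sum(degrees) - (len(degrees) - 1))

    def implication(body, head):
        # Lukasiewicz implication: degree to which a rule "head <- body" is satisfied.
        return min(1.0, 1.0 - body + head)

    # Interpretation: truth degrees of atoms in [0, 1].
    I = {"a": 0.8, "b": 0.6, "c": 0.3}

    # Rules as (head, [body atoms]); satisfaction degree of each rule under I.
    rules = [("c", ["a", "b"]), ("b", ["a"])]
    degrees = [implication(t_norm(*(I[x] for x in body)), I[head])
               for head, body in rules]

    # An aggregator states how the rule degrees combine into one overall score;
    # here a weighted minimum, where a rule's weight bounds how much it matters.
    weights = [1.0, 0.5]
    score = min(max(d, 1.0 - w) for d, w in zip(degrees, weights))
    print(degrees, score)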

    A Goal-Directed Implementation of Query Answering for Hybrid MKNF Knowledge Bases

    Ontologies and rules are usually loosely coupled in knowledge representation formalisms. In fact, ontologies use open-world reasoning while the leading semantics for rules use non-monotonic, closed-world reasoning. One exception is the tightly coupled framework of Minimal Knowledge and Negation as Failure (MKNF), which allows statements about individuals to be jointly derived via entailment from an ontology and inferences from rules. Nonetheless, the practical usefulness of MKNF has not always been clear, although recent work has formalized a general resolution-based method for querying MKNF when rules are given the well-founded semantics and the ontology is modeled by a general oracle. That work leaves open which algorithms should be used to relate the entailments of the ontology and the inferences of rules. In this paper we provide such algorithms, and describe the implementation of a query-driven system, CDF-Rules, for hybrid knowledge bases combining both (non-monotonic) rules under the well-founded semantics and a (monotonic) ontology, represented by a CDF Type-1 (ALQ) theory. To appear in Theory and Practice of Logic Programming (TPLP).
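
    The sketch below only conveys the general flavour of the query-driven interaction the paper describes: memoized, top-down rule evaluation that falls back on an ontology oracle. It is not CDF-Rules or SLG resolution; the rules, the oracle contents, and the treatment of loops are simplified stand-ins.

    # A simplified stand-in, not CDF-Rules itself: memoized top-down rule
    # evaluation with a fall-back to a (pretend) ontology oracle.

    ONTOLOGY_FACTS = {"person(mary)"}          # pretend the DL reasoner entails this

    def oracle(goal):
        # Stand-in for the ontology oracle: does the DL theory entail the goal?
        return goal in ONTOLOGY_FACTS

    RULES = {
        # goal -> list of alternative bodies (conjunctions of subgoals)
        "registered(mary)": [["person(mary)", "applied(mary)"]],
        "applied(mary)": [[]],                 # a rule fact
    }

    def solve(goal, table=None):
        # Memoized proof of a goal: rules first, ontology oracle as fall-back.
        table = {} if table is None else table
        if goal in table:                      # reuse an already-computed answer
            return table[goal]
        table[goal] = False                    # naive guard against cyclic re-derivation
        for body in RULES.get(goal, []):
            if all(solve(sub, table) for sub in body):
                table[goal] = True
                return True
        table[goal] = oracle(goal)             # fall back to ontology entailment
        return table[goal]

    print(solve("registered(mary)"))           # True: rules and ontology together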

    Heterogeneous reasoning in dynamic environments

    We would like to thank K. Schekotihin and the anonymous reviewers for their comments, which helped improve this paper. G. Brewka, S. Ellmauthaler, and J. Pührer were partially supported by the German Research Foundation (DFG) under grants BR-1817/7-1/2 and FOR 1513. R. Gonçalves, M. Knorr, and J. Leite were partially supported by Fundação para a Ciência e a Tecnologia (FCT) under project NOVA LINCS (UID/CEC/04516/2013). Moreover, R. Gonçalves was partially supported by FCT grant SFRH/BPD/100906/2014 and M. Knorr by FCT grant SFRH/BPD/86970/2012. Managed multi-context systems (mMCSs) allow for the integration of heterogeneous knowledge sources in a modular and very general way. They were, however, mainly designed for static scenarios and are therefore not well suited for dynamic environments in which continuous reasoning over such heterogeneous knowledge with constantly arriving streams of data is necessary. In this paper, we introduce reactive multi-context systems (rMCSs), a framework for reactive reasoning in the presence of heterogeneous knowledge sources and data streams. We show that rMCSs are indeed well suited for this purpose by illustrating how several typical problems arising in the context of stream reasoning can be handled using them, by showing how inconsistencies that may arise when integrating multiple knowledge sources can be handled, and by arguing that the potential non-determinism of rMCSs can be avoided, if needed, by using an alternative, more skeptical well-founded semantics with beneficial computational properties. We also investigate the computational complexity of various reasoning problems related to rMCSs. Finally, we discuss related work and show that rMCSs not only generalize mMCSs to dynamic settings, but also capture and extend relevant approaches w.r.t. dynamics in knowledge representation and stream reasoning.
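
    To convey the reactive flavour informally, the following sketch runs a toy update loop in which stream data is fed into one context and bridge rules propagate information to another; the contexts, stream, and bridge rules are invented, and the framework's equilibrium semantics is not modeled here.

    # A toy reactive loop: stream data enters one context, bridge rules
    # propagate information between contexts until nothing changes, repeat.

    from collections import deque

    contexts = {"sensors": set(), "alerts": set()}

    # Bridge rules: (target context, fact to add, condition over the current state).
    bridge_rules = [
        ("alerts", "raise_alarm",
         lambda state: "temp_high" in state["sensors"] and "smoke" in state["sensors"]),
    ]

    stream = deque([{"temp_high"}, {"smoke"}, set()])   # pretend sensor readings

    while stream:
        reading = stream.popleft()
        contexts["sensors"] |= reading                  # feed stream data into a context
        changed = True
        while changed:                                  # apply bridge rules to a fixpoint
            changed = False
            for target, fact, condition in bridge_rules:
                if condition(contexts) and fact not in contexts[target]:
                    contexts[target].add(fact)
                    changed = True
        print(reading, "->", contexts["alerts"])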

    An encompassing framework for Paraconsistent Logic Programs

    We propose a framework which extends Antitonic Logic Programs [Damásio and Pereira, in: Proc. 6th Int. Conf. on Logic Programming and Nonmonotonic Reasoning, Springer, 2001, p. 748] to an arbitrary complete bilattice of truth-values, where belief and doubt are explicitly represented. Inspired by Ginsberg and Fitting's bilattice approaches, this framework allows a precise definition of important operators found in logic programming, such as explicit and default negation. In particular, it leads to a natural semantic integration of explicit and default negation through the Coherence Principle [Pereira and Alferes, in: European Conference on Artificial Intelligence, 1992, p. 102], according to which explicit negation entails default negation. We then define Coherent Answer Sets and the Paraconsistent Well-Founded Model semantics, generalizing many paraconsistent semantics for logic programs, in particular the Paraconsistent Well-Founded Semantics with eXplicit negation (WFSXp) [Alferes et al., J. Automated Reas. 14 (1) (1995) 93–147; Damásio, PhD thesis, 1996]. The framework is an extension of Antitonic Logic Programs for most cases, and is general enough to capture Probabilistic Deductive Databases, Possibilistic Logic Programming, Hybrid Probabilistic Logic Programs, and Fuzzy Logic Programming. Thus, we have a powerful mathematical formalism for dealing simultaneously with default, paraconsistency, and uncertainty reasoning. Results are provided about how our semantic framework deals with inconsistent information and with its propagation by the rules of the program.
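
    For readers unfamiliar with bilattices, the sketch below spells out the smallest such structure, FOUR, with its separate truth and knowledge orderings and explicit negation as the swap of belief and doubt; the paper's framework works over arbitrary complete bilattices, so this is only the simplest instance.

    # The four-valued bilattice FOUR: values are (belief, doubt) pairs,
    # ordered separately by truth and by knowledge.

    TRUE, FALSE = (1, 0), (0, 1)
    NONE, BOTH = (0, 0), (1, 1)        # no information / contradictory information

    def leq_truth(x, y):
        # Truth ordering: more belief and less doubt means "more true".
        return x[0] <= y[0] and x[1] >= y[1]

    def leq_knowledge(x, y):
        # Knowledge ordering: more belief and more doubt means "more information".
        return x[0] <= y[0] and x[1] <= y[1]

    def explicit_neg(x):
        # Explicit negation swaps belief and doubt; the Coherence Principle then
        # demands that deriving the explicit negation of A also makes "not A" hold.
        return (x[1], x[0])

    print(leq_truth(FALSE, TRUE), leq_knowledge(NONE, BOTH))   # True True
    print(explicit_neg(TRUE) == FALSE)                         # True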

    Dual Forgetting Operators in the Context of Weakest Sufficient and Strongest Necessary Conditions

    Forgetting is an important concept in knowledge representation and automated reasoning with widespread applications across a number of disciplines. A standard forgetting operator, characterized in [Lin and Reiter'94] in terms of model-theoretic semantics and primarily focusing on the propositional case, opened up a new research subarea. In this paper, a new operator called weak forgetting, dual to standard forgetting, is introduced, and together the two are shown to offer a new, more uniform perspective on forgetting operators in general. Both the weak and standard forgetting operators are characterized in terms of entailment and inference, rather than a model-theoretic semantics. This naturally leads to a useful algorithmic perspective based on quantifier elimination and the use of Ackermann's Lemma and its fixpoint generalization. The strong formal relationship between standard forgetting and strongest necessary conditions, and between weak forgetting and weakest sufficient conditions, is also characterized quite naturally through the entailment-based, inferential perspective used. The framework used to characterize the dual forgetting operators is also generalized to the first-order case and includes useful algorithms for computing first-order forgetting operators in special cases. Practical examples are also included to show the importance of both weak and standard forgetting in modeling and representation.
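
    In the propositional case the two operators can be illustrated by substitution: standard forgetting of an atom is the disjunction of its two instantiations (matching the strongest necessary condition), while the dual, weak forgetting, is the conjunction (matching the weakest sufficient condition). The toy formula below is invented, and the encoding of formulas as Python predicates is only for illustration.

    # Formulas encoded as predicates over valuations; forgetting by substitution.

    def forget(phi, p):
        # Standard forgetting: phi[p/True] OR phi[p/False] (strongest necessary condition).
        return lambda v: phi({**v, p: True}) or phi({**v, p: False})

    def weak_forget(phi, p):
        # Weak (dual) forgetting: phi[p/True] AND phi[p/False] (weakest sufficient condition).
        return lambda v: phi({**v, p: True}) and phi({**v, p: False})

    # phi = (p or q) and (not p or r); forgetting p gives q or r, weak forgetting gives q and r.
    phi = lambda v: (v["p"] or v["q"]) and (not v["p"] or v["r"])

    for q in (False, True):
        for r in (False, True):
            v = {"q": q, "r": r}
            print(v, "forget:", forget(phi, "p")(v), "weak:", weak_forget(phi, "p")(v))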

    Proceedings of the 11th Workshop on Nonmonotonic Reasoning

    These are the proceedings of the 11th Nonmonotonic Reasoning Workshop. The aim of this series is to bring together active researchers in the broad area of nonmonotonic reasoning, including belief revision, reasoning about actions, planning, logic programming, argumentation, causality, probabilistic and possibilistic approaches to KR, and other related topics. As part of the program of the 11th workshop, we assessed the status of the field and discussed issues such as: significant recent achievements in the theory and automation of NMR; critical short- and long-term goals for NMR; emerging new research directions in NMR; practical applications of NMR; and the significance of NMR to knowledge representation and AI in general.